On the Importance of Cycle Minimum in Sunspot Cycle Prediction
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-01-01
The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There are two fundamental approaches to multicast routing: minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects the sources and receivers using the minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluate multicast performance in small and large wireless mesh networks. For small networks we find that when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For large networks we find that the MSTs provide the minimum total edge cost and the minimum number of transmissions. We also find one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs do.
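The SPT-versus-MCT tradeoff described above can be seen on a toy graph: Dijkstra's shortest-path tree minimizes per-receiver path cost (delay), while a minimum-cost tree (here, for simplicity, the minimum spanning tree of a three-node example in which every node belongs to the multicast group, so the Steiner tree reduces to the MST) minimizes total edge cost. A minimal sketch, not from the paper; the graph and its weights are invented for illustration:

```python
import heapq

# Hypothetical weighted graph: adjacency as {node: [(neighbor, cost), ...]}.
# Chosen so the shortest-path tree and the minimum-cost tree differ.
GRAPH = {
    's': [('a', 2), ('b', 2)],
    'a': [('s', 2), ('b', 1)],
    'b': [('s', 2), ('a', 1)],
}

def shortest_path_tree(graph, source):
    """Dijkstra; returns (dist, tree_edges), the SPT rooted at source."""
    dist = {source: 0}
    parent = {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    edges = {tuple(sorted((v, p))) for v, p in parent.items()}
    return dist, edges

def minimum_spanning_tree(graph):
    """Kruskal; when all nodes are group members this is the minimum-cost tree."""
    all_edges = sorted(
        {(w, *sorted((u, v))) for u, nbrs in graph.items() for v, w in nbrs}
    )
    parent = {n: n for n in graph}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = set()
    for w, u, v in all_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.add((u, v))
    return tree

def total_cost(graph, edges):
    lookup = {tuple(sorted((u, v))): w
              for u, nbrs in graph.items() for v, w in nbrs}
    return sum(lookup[e] for e in edges)
```

On this graph the SPT `{s-a, s-b}` has total cost 4 but reaches each receiver at distance 2, while the MST `{a-b, s-a}` has total cost 3 at the price of a longer delivery path to `b`, mirroring the delay-versus-transmission-count tradeoff the simulations report.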
Sunspot Activity Near Cycle Minimum and What it Might Suggest for Cycle 24, the Next Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2009-01-01
In late 2008, 12-month moving averages of sunspot number, number of spotless days, number of groups, area of sunspots, and area per group were reflective of sunspot cycle minimum conditions for cycle 24, these values being of or near record value. The first spotless day occurred in January 2004 and the first new-cycle, high-latitude spot was reported in January 2008, although old-cycle, low-latitude spots have continued to be seen through April 2009, yielding an overlap of old and new cycle spots of at least 16 mo. New-cycle spots first became dominant over old-cycle spots in September 2008. The minimum value of the weighted mean latitude of sunspots occurred in May 2007, measuring 6.6 deg, and the minimum value of the highest-latitude spot followed in June 2007, measuring 11.7 deg. A cycle length of at least 150 mo is inferred for cycle 23, making it the longest cycle of the modern era. Based on both the maximum-minimum and amplitude-period relationships, cycle 24 is expected to be only of average to below-average size, peaking probably in late 2012 to early 2013, unless it proves to be a statistical outlier.
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates for the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will be Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
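The precursor approach described above amounts to regressing the following cycle's amplitude on a geomagnetic index and attaching a prediction interval to the fit. A minimal ordinary-least-squares sketch, not the authors' actual code; the input pairs and the Student-t critical value are supplied by the caller and any data shown in usage would be hypothetical:

```python
import math

def ols_prediction(xs, ys, x_new, t_crit):
    """Least-squares fit y = a + b*x, plus a prediction interval at x_new.

    t_crit is the two-sided Student-t critical value for n-2 degrees of
    freedom at the desired confidence level, supplied by the caller.
    Returns (point estimate, lower bound, upper bound).
    """
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    # Residual standard error with n-2 degrees of freedom.
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))
    # Prediction-interval half-width for a single new observation.
    half = t_crit * s * math.sqrt(1 + 1 / n + (x_new - xbar) ** 2 / sxx)
    y_hat = a + b * x_new
    return y_hat, y_hat - half, y_hat + half
```

With historical (precursor, following-cycle Rmax) pairs as `xs, ys` and the observed precursor value for the new cycle as `x_new`, the returned bounds play the role of the 90-percent prediction intervals quoted in the abstract.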
An Examination of Sunspot Number Rates of Growth and Decay in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
On the basis of annual sunspot number averages, sunspot number rates of growth and decay are examined relative to both minimum and maximum amplitudes and the time of their occurrences using cycles 12 through present, the most reliably determined sunspot cycles. Indeed, strong correlations are found for predicting the minimum and maximum amplitudes and the time of their occurrences years in advance. As applied to predicting sunspot minimum for cycle 24, the next cycle, its minimum appears likely to occur in 2006, especially if it is a robust cycle similar in nature to cycles 17-23.
Anticipating Cycle 24 Minimum and its Consequences: An Update
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0+/-6.3 mo (the 90-percent prediction interval), inferring that cycle 24's minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24's 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM >131, and if its 12-mma of highest latitude (H) spot groups exceeds 38 deg, then one expects RM >127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid 2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.
40 CFR 180.960 - Polymers; exemptions from the requirement of a tolerance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 9014-92-026401-47-8 1, 2-Ethanediamine, polymer with methyl oxirane and oxirane, minimum number average...(oxyethylene) content averages 30 moles None α-(p-Nonylphenyl)-ω-hydroxypoly(oxyethylene) sulfate, and its...
Proposal for Support of Miami Inner City Marine Summer Intern Program, Dade County.
1987-12-21
employer NUMBER OF POSITIONS ONE MINIMUM AGE 16 SPECIAL REQUIREMENTS * General Science * Basic knowledge of library procedures, an interest in library science is helpful * Minimum Grade Point Average 3.0 DRESS REQUIREMENTS Discuss with employer JOB DESCRIPTION * Catalogs and files new sets of
40 CFR 180.960 - Polymers; exemptions from the requirement of a tolerance.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-hydroxypoly (oxypropylene) and/or poly (oxyethylene) polymers where the alkyl chain contains a minimum of six... (oxypropylene) poly(oxyethylene) block copolymer; the minimum poly(oxypropylene) content is 27 moles and the... number average molecular weight (in amu), 900,000 62386-95-2 Monophosphate ester of the block copolymer α...
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aaM during the declining portion of cycle 23, RM for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aam for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
Energetic O+ and H+ Ions in the Plasma Sheet: Implications for the Transport of Ionospheric Ions
NASA Technical Reports Server (NTRS)
Ohtani, S.; Nose, M.; Christon, S. P.; Lui, A. T.
2011-01-01
The present study statistically examines the characteristics of energetic ions in the plasma sheet using the Geotail/Energetic Particle and Ion Composition data. An emphasis is placed on the O+ ions, and the characteristics of the H+ ions are used as references. The following is a summary of the results. (1) The average O+ energy is lower during solar maximum and higher during solar minimum. A similar tendency is also found for the average H+ energy, but only for geomagnetically active times; (2) The O+-to-H+ ratios of number and energy densities are several times higher during solar maximum than during solar minimum; (3) The average H+ and O+ energies and the O+-to-H+ ratios of number and energy densities all increase with geomagnetic activity. The differences among different solar phases not only persist but also increase with increasing geomagnetic activity; (4) Whereas the average H+ energy increases toward Earth, the average O+ energy decreases toward Earth. The average energy increases toward dusk for both the H+ and O+ ions; (5) The O+-to-H+ ratios of number and energy densities increase toward Earth during all solar phases, but most clearly during solar maximum. These results suggest that the solar illumination enhances the ionospheric outflow more effectively with increasing geomagnetic activity and that a significant portion of the O+ ions is transported directly from the ionosphere to the near-Earth region rather than through the distant tail.
75 FR 12695 - Tetraethoxysilane, Polymer with Hexamethyldisiloxane; Tolerance Exemption
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-17
...] Tetraethoxysilane, Polymer with Hexamethyldisiloxane; Tolerance Exemption AGENCY: Environmental Protection Agency... tolerance for residues of tetraethoxysilane, polymer with hexamethyldisiloxane, minimum number average... tetraethoxysilane, polymer with hexamethyldisiloxane, on food or feed commodities. DATES: This regulation is...
The Changing Recreational Use of the Boundary Waters Canoe Area
Robert C. Lucas
1967-01-01
Although data on use for 1961 and 1966 are not always comparable, a bare-minimum estimate of the increase in number of visitors between those years is 19 percent. The greatest increase was in the number of canoeists and boaters, which rose on average 9 or 10 percent a year.
Forecast of Frost Days Based on Monthly Temperatures
NASA Astrophysics Data System (ADS)
Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.
2009-04-01
Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on the successive application of two models. The first is a stochastic model, autoregressive integrated moving average (ARIMA), that forecasts monthly minimum absolute temperature (tmin) and monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature during one month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They present the same seasonal behavior (moving average differenced model) and different non-seasonal parts: an autoregressive model (Model 1), a moving average differenced model (Model 2), and an autoregressive and moving average model (Model 3). At the same time, the results point out that minimum daily temperature (tdmin), for the meteorological stations studied, followed a normal distribution each month with a very similar standard deviation through the years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. The application of Model 1 to predict minimum monthly temperatures showed the best FD forecast. This procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the damage that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
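The second model described above relates monthly temperature statistics to the distribution of daily minima: since the abstract reports that tdmin follows a normal distribution each month, the expected number of frost days is the month length times the probability that a normal draw falls below freezing. A minimal sketch of that step (the 30-day month and the 0 °C threshold are assumptions, not values from the paper):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_frost_days(mean_tmin, sd_tmin, days_in_month=30, threshold=0.0):
    """Expected number of frost days in a month, assuming daily minimum
    temperature ~ Normal(mean_tmin, sd_tmin) as the abstract reports."""
    p_frost = norm_cdf((threshold - mean_tmin) / sd_tmin)
    return days_in_month * p_frost
```

Feeding the ARIMA-forecast tminav as `mean_tmin` and the station's monthly standard deviation (the paper's proposed cold-risk index) as `sd_tmin` turns a monthly temperature forecast into an FD forecast.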
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2011-01-01
On the basis of 12-month moving averages (12-mma) of monthly mean sunspot number (R), sunspot cycle 24 had its minimum amplitude (Rm = 1.7) in December 2008. At 12 mo past minimum, R measured 8.3, and at 18 mo past minimum, it measured 16.4. Thus far, the maximum month-to-month rate of rise in 12-mma values of monthly mean sunspot number (ΔR(t)max) has been 1.7, having occurred at elapsed times past minimum amplitude (t) of 14 and 15 mo. Compared to other sunspot cycles of the modern era, cycle 24's Rm and ΔR(t)max (as observed so far) are the smallest on record, suggesting that it likely will be a slow-rising, long-period sunspot cycle of below average maximum amplitude (RM). Supporting this view is the now observed relative strength of cycle 24's geomagnetic minimum amplitude as measured using the 12-mma value of the aa-geomagnetic index (aam = 8.4), which also is the smallest on record, having occurred at t equals 8 and 9 mo. From the method of Ohl (the inferred preferential association between RM and aam), one predicts RM = 55 +/- 17 (the +/-1 se prediction interval) for cycle 24. Furthermore, from the Waldmeier effect (the inferred preferential association between the ascent duration (ASC) and RM) one predicts an ASC longer than 48 mo for cycle 24; hence, maximum amplitude occurrence should be after December 2012. Application of the Hathaway-Wilson-Reichmann shape-fitting function, using an RM = 70 and ASC = 56 mo, is found to adequately fit the early sunspot number growth of cycle 24.
Death from respiratory diseases and temperature in Shiraz, Iran (2006-2011).
Dadbakhsh, Manizhe; Khanjani, Narges; Bahrampour, Abbas; Haghighi, Pegah Shoae
2017-02-01
Some studies have suggested that the number of deaths increases as temperatures drop or rise beyond the human thermal comfort zone. The present study was conducted to evaluate the relation between respiratory-related mortality and temperature in Shiraz, Iran. In this ecological study, data about the number of respiratory-related deaths sorted according to age and gender as well as average, minimum, and maximum ambient air temperatures during 2007-2011 were examined. The relationship between air temperature and respiratory-related deaths was calculated by crude and adjusted negative binomial regression analysis. It was adjusted for humidity, rainfall, wind speed and direction, and air pollutants including CO, NOx, PM10, SO2, O3, and THC. Spearman and Pearson correlations were also calculated between air temperature and respiratory-related deaths. The analysis was done using MINITAB16 and STATA 11. During this period, 2598 respiratory-related deaths occurred in Shiraz. The minimum number of respiratory-related deaths among all subjects happened at an average temperature of 25 °C. There was a significant inverse relationship between average temperature and respiratory-related deaths among all subjects and women. There was also a significant inverse relationship between average temperature and respiratory-related deaths among all subjects, men and women in the next month. The results suggest that cold temperatures can increase the number of respiratory-related deaths, and therefore policies to reduce mortality in cold weather, especially in patients with respiratory diseases, should be implemented.
Potential air pollutant emission from private vehicles based on vehicle route
NASA Astrophysics Data System (ADS)
Huboyo, H. S.; Handayani, W.; Samadikun, B. P.
2017-06-01
Air emissions related to the transportation sector have been identified as the second largest source of ambient air pollution in Indonesia. This is due to the large number of private vehicles commuting within the city as well as inter-city. A questionnaire survey was conducted in Semarang city involving 711 private vehicles consisting of cars and motorcycles. The survey was conducted in random parking lots across the Semarang districts and in vehicle workshops. Based on the parking lot survey, the average distance private cars travelled in kilometers (VKT) was 17,737 km/year, and the number of engine start-ups for cars averaged 5.19 on weekdays and 3.79 on weekends. For motorcycles, the average distance travelled was 27,092 km/year, and the number of engine start-ups averaged 5.84 on weekdays and 3.98 on weekends. The vehicle workshop survey showed the average distance travelled to be 9,510 km/year for motorcycles and 21,347 km/year for private cars. Odometer readings for private cars showed a maximum of 3,046,509 km and a minimum of 700 km; for motorcycles, a maximum of 973,164 km and a minimum of roughly 54.24 km. Air pollutant emissions on East-West routes were generally higher than those on South-North routes. Motorcycles contribute significantly to urban air pollution, more so than cars. In this study, traffic congestion and traffic volume contributed much more to air pollution than the impact of fluctuating terrain.
Anticipating Cycle 24 Minimum and Its Consequences
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2007-01-01
On the basis of the 12-mo moving average of monthly mean sunspot number (R) through November 2006, cycle 23 has persisted for 126 mo, having had a minimum of 8.0 in May 1996, a peak of 120.8 in April 2000, and an ascent duration of 47 mo. In November 2006, the 12-mo moving average of monthly mean sunspot number was 12.7, a value just outside the upper observed envelope of sunspot minimum values for the most recent cycles 16-23 (range 3.4-12.3), but within the 90-percent prediction interval (7.8 +/- 6.7). The first spotless day during the decline of cycle 23 occurred in January 2004, and the first occurrence of 10 or more and 20 or more spotless days was February 2006 and April 2007, respectively, inferring that sunspot minimum for cycle 24 is imminent. Through May 2007, 121 spotless days have accumulated. In terms of the weighted mean latitude (weighted by spot area) (LAT) and the highest observed latitude spot (HLS) in November 2006, 12-mo moving averages of these parameters measured 7.9 and 14.6 deg, respectively, these values being the lowest values yet observed during the decline of cycle 23 and being below corresponding mean values found for cycles 16-23. As yet, no high-latitude new-cycle spots have been seen nor has there been an upturn in LAT and HLS, these conditions having always preceded new cycle minimum by several months for past cycles. Together, these findings suggest that cycle 24's minimum amplitude still lies well beyond November 2006. This implies that cycle 23's period either will lie in the period "gap" (127-134 mo), a first for a sunspot cycle, or it will be longer than 134 mo, thus making cycle 23 a long-period cycle (like cycle 20) and indicating that cycle 24's minimum will occur after July 2007. Should cycle 23 prove to be a cycle of longer period, a consequence might be that the maximum amplitude for cycle 24 may be smaller than previously predicted.
NASA Astrophysics Data System (ADS)
Imber, S. M.; Milan, S. E.; Lester, M.
2012-04-01
We present a long term study, from 1995-2011, of the latitude of the Heppner-Maynard Boundary (HMB) determined using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection. We find that the average latitude of the HMB at midnight is 61° magnetic latitude during the solar maximum of 2003, but it moves significantly poleward during solar minimum, averaging 64° latitude during 1996, and 68° during 2010. This poleward motion is observed despite the increasing number of low latitude radars built in recent years as part of the StormDARN network, and so is not an artefact of data coverage. We believe that the recent extreme solar minimum led to an average HMB location that was further poleward than in previous solar cycles. We also calculated the open-closed field line boundary (OCB) from auroral images during the years 2000-2002 and find that on average the HMB is located equatorward of the OCB by ~6°. We suggest that the HMB may be a useful proxy for the OCB when global auroral images are not available.
The influence of climate variables on dengue in Singapore.
Pinto, Edna; Coelho, Micheline; Oliver, Leuda; Massad, Eduardo
2011-12-01
In this work we correlated dengue cases with climatic variables for the city of Singapore. This was done through a Poisson Regression Model (PRM) that considers dengue cases as the dependent variable and the climatic variables (rainfall, maximum and minimum temperature, and relative humidity) as independent variables. We also used Principal Components Analysis (PCA) to choose the variables that influence the increase in the number of dengue cases in Singapore, where PC₁ (Principal component 1) is represented by temperature and rainfall and PC₂ (Principal component 2) is represented by relative humidity. We calculated the probability of occurrence of new cases of dengue and the relative risk of occurrence of dengue cases influenced by climatic variables. The months from July to September showed the highest probabilities of the occurrence of new cases of the disease throughout the year. This was based on an analysis of time series of maximum and minimum temperature. An interesting result was that for every 2-10°C of variation of the maximum temperature, there was an average increase of 22.2-184.6% in the number of dengue cases. For the minimum temperature, we observed that for the same variation, there was an average increase of 26.1-230.3% in the number of dengue cases from April to August. The precipitation and the relative humidity, after correlation analysis, were discarded from the Poisson Regression Model because they did not present good correlation with the dengue cases. Additionally, the relative risk of the occurrence of cases of the disease under the influence of the variation of temperature was 1.2-2.8 for maximum temperature and 1.3-3.3 for minimum temperature. Therefore, the variable temperature (maximum and minimum) was the best predictor for the increased number of dengue cases in Singapore.
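A Poisson Regression Model of case counts against a climate covariate, as used above, can be sketched in a few lines. This is a generic single-covariate fit by Newton-Raphson on hypothetical data, not the authors' implementation:

```python
import math

def fit_poisson(xs, ys, iters=25):
    """Single-covariate Poisson regression with log link: E[y] = exp(a + b*x).
    Fitted by Newton-Raphson on the log-likelihood score equations."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        mu = [math.exp(a + b * x) for x in xs]
        # Score vector (gradient of the log-likelihood).
        g0 = sum(y - m for y, m in zip(ys, mu))
        g1 = sum((y - m) * x for x, y, m in zip(xs, ys, mu))
        # Fisher information (negative Hessian) entries.
        h00 = sum(mu)
        h01 = sum(m * x for x, m in zip(xs, mu))
        h11 = sum(m * x * x for x, m in zip(xs, mu))
        det = h00 * h11 - h01 * h01
        # Newton step: theta += I^{-1} * g, via the 2x2 inverse.
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b
```

With temperature as `x`, the fitted `exp(b)` is the multiplicative change in expected cases per 1 °C, which is how per-degree percentage increases like those reported above are read off the model.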
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
1999-01-01
Recently, Ahluwalia reviewed the solar and geomagnetic data for the last 6 decades and remarked that these data "indicate the existence of a three-solar-activity-cycle quasiperiodicity in them." Furthermore, on the basis of this inferred quasiperiodicity, he asserted that cycle 23 represents the initial cycle in a new three-cycle string, implying that it "will be more modest (a la cycle 17) with an annual mean sunspot number count of 119.3 +/- 30 at the maximum", a prediction that is considerably below the consensus prediction of 160 +/- 30 by Joselin et al. and similar predictions by others based on a variety of predictive techniques. Several major sticking points of Ahluwalia's presentation, however, must be readdressed, and these issues form the basis of this comment. First, Ahluwalia appears to have based his analysis on a data set of Ap index values that is erroneous. For example, he depicts for the interval of 1932-1997 the variation of the Ap index in terms of annual averages, contrasting them against annual averages of sunspot number (SSN), and he lists for cycles 17-23 the minimum and maximum value of each, as well as the years in which they occur and a quantity which he calls "Amplitude" (defined as the numeric difference between the maximum and minimum values). In particular, he identifies the minimum Ap index (i.e., the minimum value of the Ap index in the vicinity of sunspot cycle minimum, which usually occurs in the year following sunspot minimum and which will be called hereafter, simply, Ap min) and the years in which it occurs for cycles 17-23, respectively.
Impact of cigarette minimum price laws on the retail price of cigarettes in the USA.
Tynan, Michael A; Ribisl, Kurt M; Loomis, Brett R
2013-05-01
Cigarette price increases prevent youth initiation, reduce cigarette consumption and increase the number of smokers who quit. Cigarette minimum price laws (MPLs), which typically require cigarette wholesalers and retailers to charge a minimum percentage mark-up for cigarette sales, have been identified as an intervention that can potentially increase cigarette prices. 24 states and the District of Columbia have cigarette MPLs. Using data extracted from SCANTRACK retail scanner data from the Nielsen company, average cigarette prices were calculated for designated market areas in states with and without MPLs in three retail channels: grocery stores, drug stores and convenience stores. Regression models were estimated using the average cigarette pack price in each designated market area and calendar quarter in 2009 as the outcome variable. The average difference in cigarette pack prices are 46 cents in the grocery channel, 29 cents in the drug channel and 13 cents in the convenience channel, with prices being lower in states with MPLs for all three channels. The findings that MPLs do not raise cigarette prices could be the result of a lack of compliance and enforcement by the state or could be attributed to the minimum state mark-up being lower than the free-market mark-up for cigarettes. Rather than require a minimum mark-up, which can be nullified by promotional incentives and discounts, states and countries could strengthen MPLs by setting a simple 'floor price' that is the true minimum price for all cigarettes or could prohibit discounts to consumers and retailers.
ERIC Educational Resources Information Center
Hilgenkamp, Thessa; Van Wijck, Ruud; Evenhuis, Heleen
2012-01-01
The minimum number of days of pedometer monitoring needed to estimate valid average weekly step counts and reactivity was investigated for older adults with intellectual disability. Participants (N = 268) with borderline to severe intellectual disability ages 50 years and older were instructed to wear a pedometer for 14 days. The outcome measure…
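The abstract does not state how the minimum number of monitoring days was derived, but a common way to go from the reliability of a single day's step count to the number of days needed is the Spearman-Brown prophecy formula. A hypothetical sketch under that assumption (the reliability values in the usage example are invented):

```python
def min_monitoring_days(single_day_reliability, target_reliability):
    """Spearman-Brown prophecy formula: the number of days k whose average
    reaches the target reliability, given the reliability (e.g. ICC) of a
    single day's measurement. Round the result up to whole days."""
    return (target_reliability / (1 - target_reliability)) * \
           ((1 - single_day_reliability) / single_day_reliability)
```

For example, if a single day of pedometer wear had a reliability of 0.4 (a made-up figure), reaching a weekly-average reliability of 0.8 would require averaging over about 6 days.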
Statistical Analysis of Adaptive Beam-Forming Methods
1988-05-01
minimum amount of computing resources? * What are the tradeoffs being made when a system design selects block averaging over exponential averaging? Will... understood by many signal processing practitioners, however, is how system parameters and the number of sensors affect the distribution of the... system performance improve, and if so, by how much? It is well known that the noise sampled at adjacent sensors is not statistically independent
Suicide and meteorological factors in São Paulo, Brazil, 1996-2011: a time series analysis.
Bando, Daniel H; Teng, Chei T; Volpe, Fernando M; Masi, Eduardo de; Pereira, Luiz A; Braga, Alfésio L
2017-01-01
Considering the scarcity of reports from intertropical latitudes and the Southern Hemisphere, we aimed to examine the association between meteorological factors and suicide in São Paulo. Weekly suicide records stratified by sex were gathered. Weekly averages for minimum, mean, and maximum temperature (°C), insolation (hours), irradiation (MJ/m2), relative humidity (%), atmospheric pressure (mmHg), and rainfall (mm) were computed. The time structures of explanatory variables were modeled by polynomial distributed lag applied to the generalized additive model. The model controlled for long-term trends and selected meteorological factors. The total number of suicides was 6,600 (5,073 for men), an average of 6.7 suicides per week (8.7 for men and 2.0 for women). For overall suicides and among men, effects were predominantly acute and statistically significant only at lag 0. Weekly average minimum temperature had the greatest effect on suicide; there was a 2.28% increase (95%CI 0.90-3.69) in total suicides and a 2.37% increase (95%CI 0.82-3.96) among male suicides with each 1 °C increase. This study suggests that an increase in weekly average minimum temperature has a short-term effect on suicide in São Paulo.
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the design minimum and average flame zone temperatures and combustion zone residence time; and shall... establish the design exhaust vent stream organic compound concentration level, adsorption cycle time, number... regeneration cycle, design carbon bed temperature after regeneration, design carbon bed regeneration time, and...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the design minimum and average flame zone temperatures and combustion zone residence time; and shall... establish the design exhaust vent stream organic compound concentration level, adsorption cycle time, number... regeneration cycle, design carbon bed temperature after regeneration, design carbon bed regeneration time, and...
Queueing system analysis of multi server model at XYZ insurance company in Tasikmalaya city
NASA Astrophysics Data System (ADS)
Muhajir, Ahmad; Binatari, Nikenasih
2017-08-01
Queueing theory, or waiting line theory, deals with the queueing process from the moment a customer arrives, queues to be served, is served, and leaves the service facility. Queues occur because of a mismatch between the number of customers to be served and the available number of servers, as is the case at the XYZ insurance company in Tasikmalaya. This research aims to determine the characteristics of the queueing system and then to optimize the number of servers in terms of total cost. The results show that the queue can be represented by the model (M/M/4):(GD/∞/∞), where arrivals are Poisson distributed and service times follow an exponential distribution. The probability that a customer service agent is idle is 2.39% of the working time, the average number of customers in the queue is 3, the average number of customers in the system is 6, the average time a customer spends in the queue is 15.9979 minutes, the average time a customer spends in the system is 34.4141 minutes, and the average number of busy servers is 3. The optimized number of servers is 5, which gives the minimum operational cost of Rp 4,323.
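The (M/M/c) performance measures quoted above follow from the standard Erlang-C formulas. A minimal sketch for four servers; the arrival and service rates below are hypothetical illustrations, not the company's actual figures:

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Steady-state metrics for an M/M/c queue: lam = arrival rate,
    mu = service rate per server, c = number of servers."""
    rho = lam / (c * mu)              # server utilisation, must be < 1
    a = lam / mu                      # offered load in Erlangs
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (factorial(c) * (1 - rho) ** 2)  # mean queue length
    wq = lq / lam                     # mean wait in queue (Little's law)
    w = wq + 1 / mu                   # mean time in system
    l = lam * w                       # mean number in system
    return p0, lq, wq, w, l

# Hypothetical rates: 10 arrivals per hour, 18-minute mean service time
p0, lq, wq, w, l = mmc_metrics(lam=10 / 60, mu=1 / 18, c=4)
```

The same function reproduces any (M/M/c) configuration once the measured arrival and service rates are substituted.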
21 CFR 172.260 - Oxidized polyethylene.
Code of Federal Regulations, 2010 CFR
2010-04-01
... HUMAN CONSUMPTION (CONTINUED) FOOD ADDITIVES PERMITTED FOR DIRECT ADDITION TO FOOD FOR HUMAN CONSUMPTION... polyethylene has a minimum number average molecular weight of 1,200, as determined by high temperature vapor pressure osmometry; contains a maximum of 5 percent by weight of total oxygen; and has an acid value of 9...
MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging
NASA Astrophysics Data System (ADS)
Chen, Lei; Kamel, Mohamed S.
2016-01-01
In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhtiari, M; Schmitt, J; Sarfaraz, M
2015-06-15
Purpose: To establish the minimum number of patients required to obtain statistically accurate Planning Target Volume (PTV) margins for prostate Intensity Modulated Radiation Therapy (IMRT). Methods: A total of 320 prostate patients, comprising 9311 daily setups, were analyzed. These patients had undergone IMRT treatment. Daily localization was done using skin marks, and the proper shifts were determined by CBCT to match the prostate gland. The van Herk formalism was used to obtain the margins from the systematic and random setup variations. The total patient population was divided into different grouping sizes, varying from 1 group of 320 patients to 64 groups of 5 patients. Each grouping was used to determine the average PTV margin and its associated standard deviation. Results: Analyzing all 320 patients led to an average Superior-Inferior margin of 1.15 cm. The grouping with 10 patients per group (32 groups) resulted in average PTV margins between 0.6 and 1.7 cm, with a mean value of 1.09 cm and a standard deviation (STD) of 0.30 cm. As the number of patients per group increases, the mean value of the average margin between groups tends to converge to the true average PTV margin of 1.15 cm, and the STD decreases. For groups of 20, 64, and 160 patients, Superior-Inferior margins of 1.12, 1.14, and 1.16 cm with STDs of 0.22, 0.11, and 0.01 cm were found, respectively. A similar tendency was observed for the Left-Right and Anterior-Posterior margins. Conclusion: The estimation of the required PTV margin strongly depends on the number of patients studied. According to this study, at least ∼60 patients are needed to calculate a statistically acceptable PTV margin for a criterion of STD < 0.1 cm. Numbers greater than ∼60 patients do little to increase the accuracy of the PTV margin estimation.
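The van Herk formalism referenced above is commonly quoted as margin = 2.5Σ + 0.7σ, where Σ is the systematic component (the SD of per-patient mean setup errors) and σ is the random component (the RMS of per-patient SDs). A minimal sketch with toy shift data; the numbers are invented for illustration, not taken from the study:

```python
import statistics as st

def van_herk_margin(shifts_by_patient):
    """PTV margin from daily setup shifts (one list of shifts per patient),
    using the commonly quoted recipe margin = 2.5*Sigma + 0.7*sigma."""
    means = [st.mean(p) for p in shifts_by_patient]   # per-patient mean error
    sds = [st.stdev(p) for p in shifts_by_patient]    # per-patient daily SD
    Sigma = st.stdev(means)                           # systematic component
    sigma = (sum(s * s for s in sds) / len(sds)) ** 0.5  # random component (RMS)
    return 2.5 * Sigma + 0.7 * sigma

# Toy Superior-Inferior shifts in cm for three hypothetical patients
shifts = [[0.1, 0.2, 0.0, 0.15],
          [-0.1, 0.0, -0.2, -0.05],
          [0.3, 0.25, 0.2, 0.35]]
margin = van_herk_margin(shifts)
```

Running the same calculation on progressively larger patient groups is exactly the convergence experiment the abstract describes.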
Minimum size limits for yellow perch (Perca flavescens) in western Lake Erie
Hartman, Wilbur L.; Nepszy, Stephen J.; Scholl, Russell L.
1980-01-01
During the 1960's yellow perch (Perca flavescens) of Lake Erie supported a commercial fishery that produced an average annual catch of 23 million pounds, as well as a modest sport fishery. Since 1969, the resource has seriously deteriorated. Commercial landings amounted to only 6 million pounds in 1976, and included proportionally more immature perch than in the 1960's. Moreover, no strong year classes were produced between 1965 and 1975. An interagency technical committee was appointed in 1975 by the Lake Erie Committee of the Great Lakes Fishery Commission to develop an interim management strategy that would provide for greater protection of perch in western Lake Erie, where declines have been the most severe. The committee first determined the age structure, growth and mortality rates, maturation schedule, and length-fecundity relationship for the population, and then applied Ricker-type equilibrium yield models to determine the effects of various minimum length limits on yield, production, average stock weight, potential egg deposition, and the Abrosov spawning frequency indicator (average number of spawning opportunities per female). The committee recommended increasing the minimum length limit from 5.0 inches to at least 8.5 inches. Theoretically, this change would increase the average stock weight by 36% and potential egg deposition by 44%, without significantly decreasing yield. Abrosov's spawning frequency indicator would rise from the existing 0.6 to about 1.2.
Abraham, Sara A; Kearfott, Kimberlee J; Jawad, Ali H; Boria, Andrew J; Buth, Tobias J; Dawson, Alexander S; Eng, Sheldon C; Frank, Samuel J; Green, Crystal A; Jacobs, Mitchell L; Liu, Kevin; Miklos, Joseph A; Nguyen, Hien; Rafique, Muhammad; Rucinski, Blake D; Smith, Travis; Tan, Yanliang
2017-03-01
Optically-stimulated luminescent dosimeters can be interrogated multiple times post-irradiation, but each interrogation removes a fraction of the signal stored within the dosimeter. This signal loss must be corrected to avoid systematic errors when estimating the average signal from a series of interrogations, and a minimum number of consecutive readings is required to determine an average signal that is within a desired accuracy of the true signal with a desired statistical confidence. This paper establishes a technical basis for determining the required number of readings for a particular application of these dosimeters when using certain OSL dosimetry systems.
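The two ideas above, undoing a per-read depletion before averaging and choosing enough reads for a target accuracy, can be sketched as follows. The depletion fraction and noise figures are hypothetical placeholders, not values from the paper:

```python
import math
import statistics as st

def corrected_mean(readings, depletion):
    """Reading i retains (1 - depletion)**i of the stored signal,
    so divide that factor back out before averaging."""
    return st.mean(r / (1 - depletion) ** i for i, r in enumerate(readings))

def n_reads_needed(cv, rel_err, z=1.96):
    """Consecutive reads needed so the mean lies within rel_err of the
    true signal at ~95% confidence, given a per-read coefficient of variation."""
    return math.ceil((z * cv / rel_err) ** 2)

# Simulated reads of a 100-unit signal depleting 0.05% per read (hypothetical)
reads = [100 * (1 - 0.0005) ** i for i in range(10)]
signal = corrected_mean(reads, depletion=0.0005)
n = n_reads_needed(cv=0.02, rel_err=0.01)  # 2% read noise, 1% target accuracy
```

The sample-size formula is the standard normal-approximation bound; the paper's technical basis for specific OSL systems would substitute measured depletion and noise parameters.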
Geomagnetism during solar cycle 23: Characteristics.
Zerbo, Jean-Louis; Amory-Mazaudier, Christine; Ouattara, Frédéric
2013-05-01
On the basis of more than 48 years of morphological analysis of yearly and monthly values of the sunspot number, the aa index, the solar wind speed, and the interplanetary magnetic field, we point out the particularities of geomagnetic activity during the period 1996-2009. We especially investigate the last cycle 23 and the long minimum that followed it. During this period, the lowest values of the yearly averaged IMF (3 nT) and yearly averaged solar wind speed (364 km/s) were recorded in 1996 and 2009, respectively. The year 2003 stands out, recording the highest value of the yearly averaged solar wind speed (568 km/s), associated with the highest value of the yearly averaged aa index (37 nT). Observations during 2003 appear to be related to several coronal holes, which are known to generate high-speed wind streams. From a long-term (more than one century) study of solar variability, the present period is similar to the beginning of the twentieth century. We especially present the morphological features of solar cycle 23, which was followed by a deep solar minimum.
NASA Technical Reports Server (NTRS)
Cliver, E. W.; Ling, A. G.; Richardson, I. G.
2003-01-01
Using a recent classification of the solar wind at 1 AU into its principal components (slow solar wind, high-speed streams, and coronal mass ejections (CMEs)) for 1972-2000, we show that the monthly-averaged galactic cosmic ray intensity is anti-correlated with the percentage of time that the Earth is embedded in CME flows. We suggest that this correlation results primarily from a CME-related change in the tail of the distribution function of hourly-averaged values of the solar wind magnetic field (B) between solar minimum and solar maximum. The number of high-B (≥ 10 nT) values increases by a factor of approx. 3 from minimum to maximum (from 5% of all hours to 17%), with about two-thirds of this increase due to CMEs. On an hour-to-hour basis, average changes of cosmic ray intensity at Earth become negative for solar wind magnetic field values ≥ 10 nT.
Chylek, Petr; Augustine, John A.; Klett, James D.; ...
2017-09-30
At thousands of stations worldwide, the mean daily surface air temperature is estimated as the mean of the daily maximum (Tmax) and minimum (Tmin) temperatures. In this paper, we use the NOAA Surface Radiation Budget Network (SURFRAD) of seven US stations, with surface air temperature recorded each minute, to assess the accuracy of the mean daily temperature estimate as an average of the daily maximum and minimum temperatures and to investigate how the accuracy of the estimate increases with an increasing number of daily temperature observations. We find the average difference between the estimate based on an average of the maximum and minimum temperatures and the average of 1440 1-min daily observations to be -0.05 ± 1.56 °C, based on analyses of a sample of 238 days of temperature observations. Considering determination of the daily mean temperature based on 3, 4, 6, 12, or 24 daily temperature observations, we find that 2, 4, or 6 daily observations do not significantly reduce the uncertainty of the daily mean temperature. A statistically significant bias reduction (95% confidence level) occurs only with 12 or 24 daily observations. The daily mean temperature determination based on 24 hourly observations reduces the sample daily temperature uncertainty to -0.01 ± 0.20 °C. Finally, estimating the parameters of the population of all SURFRAD observations, the 95% confidence interval based on 24 hourly measurements is from -0.025 to 0.004 °C, compared to a confidence interval from -0.15 to 0.05 °C based on the mean of Tmax and Tmin.
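The comparison of estimators described above is easy to reproduce on synthetic data. A sketch; the diurnal profile below is invented for illustration, not SURFRAD data:

```python
import math

def daily_means(minute_temps):
    """Compare the (Tmax+Tmin)/2 estimate and a 24-sample hourly mean
    against the full-resolution daily mean."""
    true_mean = sum(minute_temps) / len(minute_temps)
    minmax = (max(minute_temps) + min(minute_temps)) / 2
    hourly_samples = minute_temps[::60]            # one reading per hour
    hourly = sum(hourly_samples) / len(hourly_samples)
    return true_mean, minmax, hourly

# Synthetic day: sinusoidal diurnal cycle plus a short afternoon warm spell,
# sampled every minute (1440 points)
temps = [15 + 8 * math.sin(2 * math.pi * m / 1440)
         + 3 * math.exp(-((m - 900) / 120) ** 2) for m in range(1440)]
true_mean, minmax, hourly = daily_means(temps)
```

On this asymmetric profile the hourly mean tracks the true daily mean closely, while the min-max estimate misses the warm spell's contribution, the same qualitative effect the paper quantifies on real observations.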
ERIC Educational Resources Information Center
Temple, Viviene A.; Stanish, Heidi I.
2009-01-01
Pedometers are objective, inexpensive, valid, and reliable measures of physical activity. The minimum number of days of pedometer monitoring needed to estimate average weekly step counts was investigated. Seven days of pedometer data were collected from 154 ambulatory men and women (n = 88 and 66, respectively) with intellectual disability.…
Model of Transition from Laminar to Turbulent Flow
NASA Astrophysics Data System (ADS)
Kanda, Hidesada
2001-11-01
For circular pipe flows, a model of transition from laminar to turbulent flow has already been proposed, and a minimum critical Reynolds number of approximately 2040 was obtained (Kanda, 1999). In order to prove the validity of the model, further verification is required. Thus, for plane Poiseuille flow, results of previous investigations were studied, focusing on experimental data on the critical Reynolds number Rc, the entrance length, and the transition length. Concerning the natural transition, it was confirmed from the experimental data that (i) the transition occurs in the entrance region, (ii) Rc increases as the contraction ratio in the inlet section increases, and (iii) the minimum Rc is obtained when the contraction ratio is smallest, i.e., one, with no bell-shaped entrance but straight parallel plates. Its value lies in the neighborhood of 1300, based on the channel height and the average velocity. Although for Hagen-Poiseuille flow the minimum Rc is approximately 2000, based on the pipe diameter and the average velocity, there seems to be no significant difference in the transition from laminar to turbulent flow between Hagen-Poiseuille flow and plane Poiseuille flow (Kanda, 2001). Rc is determined by the shape of the inlet. Kanda, H., 1999, Proc. of ASME Fluids Engineering Division - 1999, FED-Vol. 250, pp. 197-204. Kanda, H., 2001, Proc. of ASME Fluids Engineering Division - 2001.
Factor Retention in Exploratory Factor Analysis: A Comparison of Alternative Methods.
ERIC Educational Resources Information Center
Mumford, Karen R.; Ferron, John M.; Hines, Constance V.; Hogarty, Kristine Y.; Kromrey, Jeffery D.
This study compared the effectiveness of 10 methods of determining the number of factors to retain in exploratory common factor analysis. The 10 methods included the Kaiser rule and a modified Kaiser criterion, 3 variations of parallel analysis, 4 regression-based variations of the scree procedure, and the minimum average partial procedure. The…
Geomagnetism during solar cycle 23: Characteristics
Zerbo, Jean-Louis; Amory-Mazaudier, Christine; Ouattara, Frédéric
2012-01-01
On the basis of more than 48 years of morphological analysis of yearly and monthly values of the sunspot number, the aa index, the solar wind speed, and the interplanetary magnetic field, we point out the particularities of geomagnetic activity during the period 1996-2009. We especially investigate the last cycle 23 and the long minimum that followed it. During this period, the lowest values of the yearly averaged IMF (3 nT) and yearly averaged solar wind speed (364 km/s) were recorded in 1996 and 2009, respectively. The year 2003 stands out, recording the highest value of the yearly averaged solar wind speed (568 km/s), associated with the highest value of the yearly averaged aa index (37 nT). Observations during 2003 appear to be related to several coronal holes, which are known to generate high-speed wind streams. From a long-term (more than one century) study of solar variability, the present period is similar to the beginning of the twentieth century. We especially present the morphological features of solar cycle 23, which was followed by a deep solar minimum. PMID:25685427
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2013-01-01
Examined are the annual averages, 10-year moving averages, decadal averages, and sunspot cycle (SC) length averages of the mean, maximum, and minimum surface air temperatures and the diurnal temperature range (DTR) for the Armagh Observatory, Northern Ireland, during the interval 1844-2012. Strong upward trends are apparent in the Armagh surface-air temperatures (ASAT), while a strong downward trend is apparent in the DTR, especially when the ASAT data are averaged by decade or over individual SC lengths. The long-term decrease in the decadal- and SC-averaged annual DTR occurs because the annual minimum temperatures have risen more quickly than the annual maximum temperatures. Estimates are given for the Armagh annual mean, maximum, and minimum temperatures and the DTR for the current decade (2010-2019) and SC24.
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest sites (the minimum set problem) or sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the numbers represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km². On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
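Rarity-weighted richness itself fits in a few lines, which is part of its appeal: each species contributes 1/(number of sites it occupies) to the score of every site holding it, and sites are then ranked by score. A toy sketch with hypothetical site and species names:

```python
def rarity_weighted_richness(site_species):
    """Score each site by summing 1/occupancy over the species it holds,
    where occupancy is the number of sites occupied by that species."""
    occupancy = {}
    for species in site_species.values():
        for sp in species:
            occupancy[sp] = occupancy.get(sp, 0) + 1
    return {site: sum(1 / occupancy[sp] for sp in species)
            for site, species in site_species.items()}

# Toy example: species 'c' occurs only at site B, so B outranks A and C
sites = {"A": {"a", "b"}, "B": {"a", "c"}, "C": {"a", "b"}}
scores = rarity_weighted_richness(sites)
```

Selecting sites in descending score order is the prioritization the paper compares against Zonation and integer programming.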
Johnson, Joseph S; Lacki, Michael J
2014-01-01
A growing number of mammal species are recognized as heterothermic, capable of maintaining a high-core body temperature or entering a state of metabolic suppression known as torpor. Small mammals can achieve large energetic savings when torpid, but they are also subject to ecological costs. Studying torpor use in an ecological and physiological context can help elucidate relative costs and benefits of torpor to different groups within a population. We measured skin temperatures of 46 adult Rafinesque's big-eared bats (Corynorhinus rafinesquii) to evaluate thermoregulatory strategies of a heterothermic small mammal during the reproductive season. We compared daily average and minimum skin temperatures as well as the frequency, duration, and depth of torpor bouts of sex and reproductive classes of bats inhabiting day-roosts with different thermal characteristics. We evaluated roosts with microclimates colder (caves) and warmer (buildings) than ambient air temperatures, as well as roosts with intermediate conditions (trees and rock crevices). Using Akaike's information criterion (AIC), we found that different statistical models best predicted various characteristics of torpor bouts. While the type of day-roost best predicted the average number of torpor bouts that bats used each day, current weather variables best predicted daily average and minimum skin temperatures of bats, and reproductive condition best predicted average torpor bout depth and the average amount of time spent torpid each day by bats. Finding that different models best explain varying aspects of heterothermy illustrates the importance of torpor to both reproductive and nonreproductive small mammals and emphasizes the multifaceted nature of heterothermy and the need to collect data on numerous heterothermic response variables within an ecophysiological context. PMID:24558571
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Sizing procedures for sun-tracking PV system with batteries
NASA Astrophysics Data System (ADS)
Nezih Gerek, Ömer; Başaran Filik, Ümmühan; Filik, Tansu
2017-11-01
Deciding the optimum number of PV panels, wind turbines, and batteries (i.e., a complete renewable energy system) for minimum cost and complete energy balance is a challenging and interesting problem. In the literature, rough data models or limited recorded data, together with low-resolution hourly averaged meteorological values, are used to test sizing strategies. In this study, active sun-tracking and fixed PV solar power generation values of ready-to-serve commercial products were recorded throughout 2015-2016. Simultaneously, several outdoor parameters (solar radiation, temperature, humidity, wind speed/direction, pressure) were recorded with high resolution. The hourly energy consumption values of a standard 4-person household, constructed on our campus in Eskisehir, Turkey, were also recorded for the same period. During sizing, novel parametric random-process models for wind speed, temperature, solar radiation, energy demand, and electricity generation curves are derived, and it is observed that these models provide sizing results with lower LLP (loss-of-load probability) through Monte Carlo experiments that consider average and minimum performance cases. Furthermore, another cost optimization strategy is adopted to show that solar-tracking PV panels provide lower costs by enabling a reduced number of installed batteries. Results are verified on real recorded data.
Garner, Alan A; van den Berg, Pieter L
2017-10-16
New South Wales (NSW), Australia has a network of multirole retrieval physician-staffed helicopter emergency medical services (HEMS), with seven bases servicing a jurisdiction whose population is concentrated along the eastern seaboard. The aim of this study was to estimate optimal HEMS base locations within NSW using advanced mathematical modelling techniques. We used high-resolution census population data for NSW from 2011, which divides the state into areas containing 200-800 people. Optimal HEMS base locations were estimated using the maximal covering location problem facility location optimization model and the average response time model, exploring the number of bases needed to cover various fractions of the population within a 45 min response time threshold, or minimizing the overall average response time to all persons, both in green-field scenarios and conditioning on the current base structure. We also developed a hybrid mathematical model in which average response time was optimised subject to minimum population coverage thresholds. Seven bases could cover 98% of the population within 45 min when optimised for coverage, or reach the entire population of the state within an average of 21 min if optimised for response time. Given the existing bases, adding two bases could either increase the 45 min coverage from 91% to 97% or decrease the average response time from 21 min to 19 min. Adding a single specialist prehospital rapid response HEMS to the area of greatest population concentration decreased the average state-wide response time by 4 min. The optimum seven-base hybrid model, which covered 97.75% of the population within 45 min and reached the entire population in an average response time of 18 min, included the rapid response HEMS model. HEMS base locations can be optimised based on either the percentage of the population covered or the average response time to the entire population. We have also demonstrated a hybrid technique that optimizes response time for a given number of bases and a minimum defined threshold of population coverage. The addition of specialized rapid response HEMS services to a system of multirole retrieval HEMS may reduce overall average response times by improving access in large urban areas.
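The maximal covering location problem mentioned above is usually solved exactly with integer programming; a greedy heuristic conveys the idea in a few lines. A sketch with invented census areas and populations; the study itself used exact optimisation models, not this heuristic:

```python
def greedy_max_cover(candidates, population, k):
    """Greedy heuristic for the maximal covering location problem:
    k times, pick the base whose reachable set adds the most
    uncovered population."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max((s for s in candidates if s not in chosen),
                   key=lambda s: sum(population[a] for a in candidates[s] - covered))
        chosen.append(best)
        covered |= candidates[best]
    return chosen, sum(population[a] for a in covered)

# Toy data: three candidate bases, four census areas (hypothetical numbers);
# each base maps to the areas it reaches within the response-time threshold
reach = {"X": {1, 2}, "Y": {2, 3}, "Z": {4}}
pop = {1: 100, 2: 50, 3: 80, 4: 10}
bases, served = greedy_max_cover(reach, pop, k=2)
```

Sweeping k and recording the covered fraction reproduces the kind of coverage-versus-number-of-bases trade-off curve the study reports.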
Computational and experimental studies of LEBUs at high device Reynolds numbers
NASA Technical Reports Server (NTRS)
Bertelrud, Arild; Watson, R. D.
1988-01-01
The present paper summarizes computational and experimental studies for large-eddy breakup devices (LEBUs). LEBU optimization (using a computational approach considering compressibility, Reynolds number, and the unsteadiness of the flow) and experiments with LEBUs at high Reynolds numbers in flight are discussed. The measurements include streamwise as well as spanwise distributions of local skin friction. The unsteady flows around the LEBU devices and far downstream are characterized by strain-gage measurements on the devices and hot-wire readings downstream. Computations are made with available time-averaged and quasi-stationary techniques to find suitable device profiles with minimum drag.
Trends in Middle East climate extreme indices from 1950 to 2003
NASA Astrophysics Data System (ADS)
Zhang, Xuebin; Aguilar, Enric; Sensoy, Serhat; Melkonyan, Hamlet; Tagiyeva, Umayra; Ahmed, Nader; Kutaladze, Nato; Rahimzadeh, Fatemeh; Taghipour, Afsaneh; Hantosh, T. H.; Albert, Pinhas; Semawi, Mohammed; Karam Ali, Mohammad; Said Al-Shabibi, Mansoor Halal; Al-Oulan, Zaid; Zatari, Taha; Al Dean Khelet, Imad; Hamoud, Saleh; Sagir, Ramazan; Demircan, Mesut; Eken, Mehmet; Adiguzel, Mustafa; Alexander, Lisa; Peterson, Thomas C.; Wallis, Trevor
2005-11-01
A climate change workshop for the Middle East brought together scientists and data for the region to produce the first area-wide analysis of climate extremes for the region. This paper reports trends in extreme precipitation and temperature indices that were computed during the workshop and additional indices data that became available after the workshop. Trends in these indices were examined for 1950-2003 at 52 stations covering 15 countries, including Armenia, Azerbaijan, Bahrain, Cyprus, Georgia, Iran, Iraq, Israel, Jordan, Kuwait, Oman, Qatar, Saudi Arabia, Syria, and Turkey. Results indicate that there have been statistically significant, spatially coherent trends in temperature indices that are related to temperature increases in the region. Significant, increasing trends have been found in the annual maximum of daily maximum and minimum temperature, the annual minimum of daily maximum and minimum temperature, the number of summer nights, and the number of days where daily temperature has exceeded its 90th percentile. Significant negative trends have been found in the number of days when daily temperature is below its 10th percentile and daily temperature range. Trends in precipitation indices, including the number of days with precipitation, the average precipitation intensity, and maximum daily precipitation events, are weak in general and do not show spatial coherence. The workshop attendees have generously made the indices data available for the international research community.
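Indices such as "days where daily temperature has exceeded its 90th percentile" reduce to a threshold count against a base period. A simplified sketch; it ignores the calendar-day windows and bootstrap resampling used in the standard ETCCDI definitions:

```python
def days_above_percentile(daily_temps, base_period, pct=90):
    """Count days exceeding the pct-th percentile of a base period,
    in the spirit of percentile indices such as TX90p (simplified)."""
    ordered = sorted(base_period)
    idx = min(len(ordered) - 1, int(len(ordered) * pct / 100))
    threshold = ordered[idx]
    return sum(1 for t in daily_temps if t > threshold)

# Hypothetical values: a 100-day base period and three test days
count = days_above_percentile([85, 91, 95], list(range(100)))
```

A negative trend in the complementary below-10th-percentile count, computed the same way, is the other temperature signal the workshop reported.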
Path Finding on High-Dimensional Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Díaz Leines, Grisell; Ensing, Bernd
2012-07-01
We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.
Mutch, Sarah A.; Gadd, Jennifer C.; Fujimoto, Bryant S.; Kensel-Hammes, Patricia; Schiro, Perry G.; Bajjalieh, Sandra M.; Chiu, Daniel T.
2013-01-01
This protocol describes a method to determine both the average number and variance of proteins in the few to tens of copies in isolated cellular compartments, such as organelles and protein complexes. Other currently available protein quantification techniques either provide an average number but lack information on the variance or are not suitable for reliably counting proteins present in the few to tens of copies. This protocol entails labeling the cellular compartment with fluorescent primary-secondary antibody complexes, TIRF (total internal reflection fluorescence) microscopy imaging of the cellular compartment, digital image analysis, and deconvolution of the fluorescence intensity data. A minimum of 2.5 days is required to complete the labeling, imaging, and analysis of a set of samples. As an illustrative example, we describe in detail the procedure used to determine the copy number of proteins in synaptic vesicles. The same procedure can be applied to other organelles or signaling complexes. PMID:22094731
Herwald, Sanna E; Spies, James B; Yucel, E Kent
2017-02-01
The first participants in the independent interventional radiology (IR) residency match will begin prerequisite diagnostic radiology (DR) residencies before the anticipated launch of the independent IR programs in 2020. The aim of this study was to estimate the competitiveness level of the first independent IR residency matches before these applicants have already committed to DR residencies and possibly early specialization in IR (ESIR) programs. The Society of Chairs of Academic Radiology Departments (SCARD) Task Force on the IR Residency distributed a survey to all active SCARD members using SurveyMonkey. The survey requested the number of planned IR residency and ESIR positions. The average, minimum, and maximum of the range of planned independent IR residency positions were compared with the average, maximum, and minimum, respectively, of the range of planned ESIR positions, to model matches of average, high, and low competitiveness. Seventy-four active SCARD members (56%) answered at least one survey question. The respondents' programs planned to fill, in total, 98 to 102 positions in integrated IR residency programs, 61 to 76 positions in independent IR residency programs, and 50 to 77 positions in ESIR DR residency programs each year. The ranges indicate the uncertainty of some programs regarding the number of positions. The survey suggests that participating programs will fill sufficient independent IR residency positions to accommodate all ESIR applicants in a match year of average or low competitiveness, but not in a match year of high competitiveness. This suggestion does not account for certain difficult-to-predict factors that may affect the independent IR residency match. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan
2017-12-01
Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10% or more when focusing on extremes and values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (5-10% fewer trends detected in comparison with the reference data).
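The bias discussed above can be seen in a minimal sketch with synthetic data: for a deliberately asymmetric diurnal cycle (all values hypothetical), the common (Tmin + Tmax)/2 estimator differs from the mean of all 24 hourly readings:

```python
import math

# Synthetic, asymmetric diurnal cycle (hypothetical station, degrees C):
# a daily sine plus a second harmonic that skews the curve.
hourly = [15 + 8 * math.sin(2 * math.pi * h / 24)
             + 3 * math.cos(4 * math.pi * h / 24) for h in range(24)]

true_mean = sum(hourly) / len(hourly)          # mean of all 24 readings
minmax_mean = (min(hourly) + max(hourly)) / 2  # the common (Tmin+Tmax)/2
bias = minmax_mean - true_mean
print(round(true_mean, 2), round(minmax_mean, 2), round(bias, 2))
```

Both harmonics average to zero over the day, so the hourly mean is exactly 15, while the min/max estimator is pulled toward the deeper nighttime minimum; with these numbers the bias is roughly -2.7 degrees.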
Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin
2015-01-01
Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using the cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
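A minimal sketch of the lag-selection step the abstract describes: the cross-correlation between a climate series and case counts is computed at several monthly lags, and the lag with the strongest correlation is kept. The data here are toy values with a four-month shift built in by construction, not the Thai surveillance data:

```python
# Toy monthly series; the case counts echo temperature four months later.
temps = [24, 26, 28, 30, 29, 27, 25, 24, 26, 28, 30, 29]
cases = temps[-4:] + temps[:-4]  # rotate the series by four months

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def lagged_corr(x, y, lag):
    """Correlate x against y shifted 'lag' steps later."""
    return pearson(x[:len(x) - lag], y[lag:])

best = max(range(1, 6), key=lambda k: lagged_corr(temps, cases, k))
print(best)  # lag with the strongest cross-correlation
```

Because the shift was built in, the correlation at lag 4 is exactly 1 and that lag is selected; with real data the peak is of course noisier.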
Predictions of Solar Cycle 24: How are We Doing?
NASA Technical Reports Server (NTRS)
Pesnell, William D.
2016-01-01
Predictions of solar activity are an essential part of our Space Weather forecast capability. Users require usable predictions of an upcoming solar cycle several years before solar minimum. A set of predictions of the amplitude of Solar Cycle 24 accumulated in 2008 ranged from zero to unprecedented levels of solar activity. The predictions formed an almost normal distribution, centered on the average amplitude of all preceding solar cycles. The average of the current compilation of 105 predictions of the annual-average sunspot number is 106 +/- 31, slightly lower than earlier compilations but still with a wide distribution. Solar Cycle 24 is on track to have a below-average amplitude, peaking at an annual sunspot number of about 80. Our need for solar activity predictions and our desire for those predictions to be made ever earlier in the preceding solar cycle will be discussed. Solar Cycle 24 has been a below-average sunspot cycle. There were peaks in the daily and monthly averaged sunspot number in the Northern Hemisphere in 2011 and in the Southern Hemisphere in 2014. With the rapid increase in solar data and the capability of numerical models of the solar convection zone, we are developing the ability to forecast the level of the next sunspot cycle. But predictions based only on the statistics of the sunspot number are not adequate for predicting the next solar maximum. I will describe how we did in predicting the amplitude of Solar Cycle 24 and describe how solar polar field predictions could be made more accurate in the future.
NASA Astrophysics Data System (ADS)
Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.
2018-04-01
The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all the years, while this is not true for the minimum temperature series, so the two series are modelled separately. The possible SARIMA model has been chosen by observing the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)_12 model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard error of residuals. The adequacy of the selected model is assessed using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India are forecast for the next 3 years with the help of the selected model.
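The seasonal part of the selected model, (0, 1, 1) with period 12, implies a seasonal difference: subtracting the value observed 12 months earlier removes a fixed annual pattern. A toy sketch (hypothetical monthly normals plus a slow linear trend, not the Indian data) shows the effect:

```python
# A fixed monthly pattern (toy "normals") plus a slow linear trend.
seasonal = [5, 7, 12, 18, 24, 29, 31, 30, 26, 19, 12, 7]
series = [seasonal[m % 12] + 0.01 * m for m in range(48)]

# Seasonal differencing at lag 12, as the (0, 1, 1) seasonal part implies:
diff12 = [series[t] - series[t - 12] for t in range(12, len(series))]
print(round(min(diff12), 2), round(max(diff12), 2))
```

After differencing, the annual cycle cancels completely and only the constant 12-month trend increment (0.12 per year here) remains, which is what makes the differenced series amenable to a low-order ARMA fit.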
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendes, Milrian S.; Felinto, Daniel
2011-12-15
We analyze the efficiency and scalability of the Duan-Lukin-Cirac-Zoller (DLCZ) protocol for quantum repeaters, focusing on the behavior of the experimentally accessible measures of entanglement for the system and taking into account crucial imperfections of the stored entangled states. We then calculate the degradation of the final state of the quantum-repeater linear chain for increasing sizes of the chain, and characterize it by a lower bound on its concurrence and the ability to violate the Clauser-Horne-Shimony-Holt inequality. The states are calculated up to an arbitrary number of stored excitations, as this number is not fundamentally bounded for experiments involving large atomic ensembles. The measurement by avalanche photodetectors is modeled by "ON/OFF" positive operator-valued measure operators. As a result, we are able to consistently test the approximation of the real fields by fields with a finite number of excitations, determining the minimum number of excitations required to achieve a desired precision in the prediction of the various measured quantities. This analysis finally determines the minimum purity of the initial state that is required to succeed in the protocol as the size of the chain increases. We also provide a more accurate estimate for the average time required to succeed in each step of the protocol. The minimum purity analysis and the new time estimates are then combined to trace the perspectives for implementation of the DLCZ protocol in present-day laboratory setups.
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample... per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method... appendix A of this part) Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour...
Periodicity of sunspot group number during the Maunder Minimum
NASA Astrophysics Data System (ADS)
Gao, P. X.
2017-12-01
Applying the Hilbert-Huang Transform (HHT) method to the yearly average sunspot group (SG) number reconstructed by Svalgaard & Schatten, we investigate the periodicity of the SG number from 1610 to 2015. Our main findings are summarized below. Periodicities of 3.56 ± 0.24 (quasi-triennial oscillations), 9.22 ± 0.13 (Schwabe Cycle), 16.91 ± 0.99 (Hale Cycle), 49.25 ± 0.96, 118.64 ± 2.52 (Centennial Gleissberg Cycle), and 206.32 ± 4.60 yr are statistically significant in the SG numbers. During the Maunder Minimum (MM), the occurrences of the Schwabe Cycle and the Hale Cycle, extracted from SG numbers, are suspended; both cycles are present before and after the MM. The results of applying Morlet wavelet analysis to the SG number confirm that the occurrence of the Schwabe Cycle is suspended during the MM and that the cycle is present both before and after it. We then investigate the periodicity in the annual 10Be data from 1391 to 1983, given in a supplementary file to McCracken & Beer, using the HHT and the Morlet wavelet transform. We find that, for the 10Be data, the Schwabe Cycle and the Hale Cycle persist throughout the MM. Our results support the suggestion that the Schwabe Cycle during the MM is too weak to be detected in the sunspot data.
A quadratic regression modelling on paddy production in the area of Perlis
NASA Astrophysics Data System (ADS)
Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2017-08-01
Polynomial regression models are useful in situations in which the relationship between a response variable and predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship into a least squares linear regression model by decomposing the predictor variables into a kth order polynomial. The polynomial order determines the number of inflexions of the curvilinear fitted line. A second order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum; a third order polynomial forms a cubic expression with both a relative maximum and a minimum. This study used paddy data from the area of Perlis to model paddy production based on paddy cultivation characteristics and environmental characteristics. The results indicated that a quadratic regression model best fits the data and that paddy production is affected by urea fertilizer application and by the interaction between the amount of average rainfall and the percentage of area affected by pests and disease. Urea fertilizer application has a quadratic effect in the model, indicating that as the number of days of urea fertilizer application increases, paddy production is expected to decrease until it reaches a minimum value and then to increase at higher numbers of days of urea application. The decrease in paddy production with increased rainfall is greater the higher the percentage of area affected by pests and disease.
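The quadratic effect described above implies a turning point: for a fitted term y = a*x^2 + b*x + c with a > 0, production reaches its minimum at x* = -b / (2a) and rises beyond it. The coefficients below are hypothetical, chosen only to show the arithmetic, not the fitted Perlis model:

```python
# Hypothetical fitted coefficients for the urea-days term (a > 0 gives a minimum).
a, b, c = 0.25, -15.0, 300.0

x_star = -b / (2 * a)                    # days of application at the minimum
y_star = a * x_star ** 2 + b * x_star + c  # production at that turning point
print(x_star, y_star)
```

With these toy coefficients production bottoms out at 30 days of application: it falls on the way to x* = 30 and increases for larger values, exactly the U-shaped response the abstract describes.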
The high-energy-density counterpropagating shear experiment and turbulent self-heating
Doss, F. W.; Fincke, J. R.; Loomis, E. N.; ...
2013-12-06
The counterpropagating shear experiment has previously demonstrated the ability to create regions of shock-driven shear, balanced symmetrically in pressure and experiencing minimal net drift. This allows for the creation of a high-Mach-number, high-energy-density shear environment. New data from the counterpropagating shear campaign are presented, and both hydrocode modeling and theoretical analysis in the context of a Reynolds-averaged Navier-Stokes model suggest that turbulent dissipation of energy from the supersonic flow bounding the layer is a significant driver in its expansion. A theoretical minimum shear-flow Mach number threshold is suggested for substantial thermal-turbulence coupling.
Survey of Occupational Noise Exposure in CF Personnel in Selected High-Risk Trades
2003-11-01
Peak, maximum level, minimum level, average sound level, time-weighted average, dose, projected 8-hour dose, and upper limit time were measured for...
Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow
NASA Astrophysics Data System (ADS)
Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke
2017-04-01
Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 10^0 to 10^6 km^2 in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.
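A toy sketch of the regression idea (not the authors' actual model or data): annual flow modelled as a power law of catchment area alone, fitted as a straight line in log-log space with ordinary least squares. All catchments and flows below are hypothetical:

```python
import math

# Hypothetical catchments: area (km^2) and mean annual flow (m^3/s).
areas = [10, 100, 1000, 10000]
flows = [2.0, 18.0, 210.0, 1900.0]

# Fit log10(flow) = intercept + slope * log10(area) by least squares.
xs = [math.log10(a) for a in areas]
ys = [math.log10(q) for q in flows]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(round(slope, 2), round(intercept, 2))
```

The slope is the scaling exponent of flow with area (close to 1 for these near-proportional toy values); the real models add the slope, elevation, precipitation and temperature predictors listed in the abstract.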
Study of percolation behavior depending on molecular structure design
NASA Astrophysics Data System (ADS)
Yu, Ji Woong; Lee, Won Bo
Differently designed anisotropic nanocrystals (ANCs) are studied using Langevin dynamics simulation, and their percolation behaviors are presented. The popular molecular dynamics software LAMMPS was used to design the system and perform the simulation. We calculated the minimum number density at which percolation occurs (i.e., the percolation threshold), the radial distribution function, and the average number of ANCs per cluster. Electrical conductivity improves when the number of electron transfers between ANCs, the so-called "inter-hopping" process, which contributes considerably to resistance, decreases; the number of inter-hopping events is directly related to the concentration of ANCs. Therefore, by investigating the relationship between molecular architecture and percolation behavior, an optimal ANC design can be achieved.
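A minimal one-dimensional sketch of the cluster statistics mentioned above (hypothetical positions and contact cutoff; the actual study is a 3-D Langevin simulation): particles whose spacing is below a contact distance are grouped into clusters, and the average cluster size is reported:

```python
def clusters(positions, cutoff):
    """Group sorted 1-D positions into chains whose gaps are <= cutoff."""
    groups, current = [], [positions[0]]
    for prev, nxt in zip(positions, positions[1:]):
        if nxt - prev <= cutoff:
            current.append(nxt)       # still touching the current cluster
        else:
            groups.append(current)    # gap too wide: close the cluster
            current = [nxt]
    groups.append(current)
    return groups

pos = sorted([0.0, 0.4, 0.9, 3.0, 3.2, 7.0])  # toy particle positions
cs = clusters(pos, 0.5)                        # hypothetical contact cutoff
sizes = [len(c) for c in cs]
print(sizes, sum(sizes) / len(cs))             # cluster sizes and their average
```

In a percolation analysis, the threshold is the density at which one such cluster first spans the whole system; here the three clusters of sizes 3, 2, and 1 give an average cluster size of 2.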
Code of Federal Regulations, 2014 CFR
2014-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... test protocol and the means by which sampling variability and analytical variability were accounted for... also establish the design minimum and average temperature in the combustion zone and the combustion... the design minimum and average temperatures across the catalyst bed inlet and outlet. (C) For a boiler...
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... design minimum and average temperature in the combustion zone and the combustion zone residence time. (B... establish the design minimum and average flame zone temperatures and combustion zone residence time, and... carbon bed temperature after regeneration, design carbon bed regeneration time, and design service life...
N(2)O in small para-hydrogen clusters: Structures and energetics.
Zhu, Hua; Xie, Daiqian
2009-04-30
We present the minimum-energy structures and energetics of clusters of the linear N(2)O molecule with small numbers of para-hydrogen molecules with pairwise additive potentials. Interaction energies of (p-H(2))-N(2)O and (p-H(2))-(p-H(2)) complexes were calculated by averaging the corresponding full-dimensional potentials over the H(2) angular coordinates. The averaged (p-H(2))-N(2)O potential has three minima corresponding to the T-shaped and the linear (p-H(2))-ONN and (p-H(2))-NNO structures. Optimization of the minimum-energy structures was performed using a Genetic Algorithm. It was found that p-H(2) molecules fill three solvation rings around the N(2)O axis, each of them containing up to five p-H(2) molecules, followed by accumulation of two p-H(2) molecules at the oxygen and nitrogen ends. The first solvation shell is completed at N = 17. The calculated chemical potential oscillates with cluster size up to the completed first solvation shell. These results are consistent with the available experimental measurements. (c) 2009 Wiley Periodicals, Inc.
Implementation of GAMMON - An efficient load balancing strategy for a local computer system
NASA Technical Reports Server (NTRS)
Baumgartner, Katherine M.; Kling, Ralph M.; Wah, Benjamin W.
1989-01-01
GAMMON (Global Allocation from Maximum to Minimum in cONstant time), an efficient load-balancing algorithm, is described. GAMMON uses the available broadcast capability of multiaccess networks to implement an efficient search technique for finding hosts with maximal and minimal loads. The search technique has an average overhead which is independent of the number of participating stations. The transition from the theoretical concept to a practical, reliable, and efficient implementation is described.
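The core GAMMON step, pairing the maximally loaded host with the minimally loaded one so that work migrates from maximum to minimum, can be sketched as below. Host names and loads are hypothetical, and the real algorithm finds these extremes through a constant-overhead broadcast search on the network rather than a central table:

```python
# Hypothetical snapshot of per-host load (e.g., queue lengths).
loads = {"hostA": 12, "hostB": 3, "hostC": 8, "hostD": 1}

source = max(loads, key=loads.get)   # most loaded host: sends work
target = min(loads, key=loads.get)   # least loaded host: receives work
print(source, "->", target)
```

The point of the broadcast-based search is that identifying this max/min pair costs the same regardless of how many stations participate, which is what the abstract's constant-time claim refers to.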
Predictions of Sunspot Cycle 24: A Comparison with Observations
NASA Astrophysics Data System (ADS)
Bhatt, N. J.; Jain, R.
2017-12-01
Space weather is largely affected by explosions on the Sun, viz. solar flares and CMEs, which in turn depend upon the magnitude of solar activity, i.e., the number of sunspots and their magnetic configuration. Owing to these space weather effects, predictions of the sunspot cycle are important. Precursor techniques, particularly those employing geomagnetic indices, are often used in the prediction of the maximum amplitude of a sunspot cycle. Based on the average geomagnetic activity index aa (from 1868 onwards) for the year of the sunspot minimum and the preceding four years, Bhatt et al. (2009) made two predictions for sunspot cycle 24, considering 2008 as the year of sunspot minimum: (i) the annual maximum amplitude would be 92.8±19.6 (1-sigma accuracy), indicating a somewhat weaker cycle 24 as compared to cycles 21-23, and (ii) the smoothed monthly mean sunspot number maximum would occur in October 2012±4 months (1-sigma accuracy). However, observations reveal that the sunspot minimum extended into 2009, and the maximum amplitude attained was 79, with a monthly mean sunspot number maximum of 102.3 in February 2014. In view of the observations, and particularly owing to the extended solar minimum in 2009, we re-examined our prediction model and revised the prediction results. We find that (i) the annual maximum amplitude of cycle 24 = 71.2 ± 19.6 and (ii) a smoothed monthly mean sunspot number maximum in January 2014±4 months. We discuss the failures and successes of our approach and present improved predictions for the maximum amplitude as well as for the timing, which are now in good agreement with the observations. We also present the limitations of our forecasting in view of long-term predictions. We show that if the year of sunspot minimum activity and the magnitude of geomagnetic activity during sunspot minimum are taken correctly, then our prediction method appears to be a reliable indicator for forecasting the sunspot amplitude of the following solar cycle.
References: Bhatt, N.J., Jain, R. & Aggarwal, M.: 2009, Sol. Phys. 260, 225.
UNCOVERING THE INTRINSIC VARIABILITY OF GAMMA-RAY BURSTS
NASA Astrophysics Data System (ADS)
Golkhou, V. Zach; Butler, Nathaniel R
2014-08-01
We develop a robust technique to determine the minimum variability timescale for gamma-ray burst (GRB) light curves, utilizing Haar wavelets. Our approach averages over the data for a given GRB, providing an aggregate measure of signal variation while also retaining sensitivity to narrow pulses within complicated time series. In contrast to previous studies using wavelets, which simply define the minimum timescale in reference to the measurement noise floor, our approach identifies the signature of temporally smooth features in the wavelet scaleogram and then additionally identifies a break in the scaleogram on longer timescales as a signature of a true, temporally unsmooth light curve feature or features. We apply our technique to the large sample of Swift GRB gamma-ray light curves and for the first time—due to the presence of a large number of GRBs with measured redshift—determine the distribution of minimum variability timescales in the source frame. We find a median minimum timescale for long-duration GRBs in the source frame of Δtmin = 0.5 s, with the shortest timescale found being on the order of 10 ms. This short timescale suggests a compact central engine (3000 km). We discuss further implications for the GRB fireball model and present a tantalizing correlation between the minimum timescale and redshift, which may in part be due to cosmological time dilation.
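The Haar-difference idea underlying the technique can be sketched as follows (synthetic signal; `haar_coeff` is a hypothetical helper, not the authors' code): at block size s, the mean of one block is compared with the mean of the adjacent block, and a narrow pulse yields its largest coefficient at the shortest scale:

```python
def haar_coeff(x, s):
    """Largest |difference of adjacent block means| at block size s."""
    best = 0.0
    for i in range(0, len(x) - 2 * s + 1, s):
        left = sum(x[i:i + s]) / s         # mean of the first block
        right = sum(x[i + s:i + 2 * s]) / s  # mean of the next block
        best = max(best, abs(right - left))
    return best

signal = [0.0] * 16
signal[6] = 8.0                            # one narrow pulse
print(haar_coeff(signal, 1), haar_coeff(signal, 4))
```

The single-sample pulse produces a coefficient of 8 at scale 1 but only 2 at scale 4, because block averaging dilutes it; scanning scales for where this response falls off is, loosely, how a minimum variability timescale is located.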
NASA Astrophysics Data System (ADS)
Rahardiantoro, S.; Sartono, B.; Kurnia, A.
2017-03-01
In recent years, DNA methylation has been a special focus for revealing the patterns of many human diseases. Huge amounts of data are an inescapable phenomenon in this setting. In addition, some researchers are interested in making predictions based on these huge data sets, especially using regression analysis. The classical approach fails at this task. Model averaging by Ando and Li [1] can be an alternative approach to this problem. This research applied model averaging to obtain the best prediction from high-dimensional data. As a practical case study, the data of Vargas et al [3] on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia were used in the implementation of model averaging. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model obtained by model averaging with 15 predictors in each candidate model.
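The selection criterion described above can be sketched as follows, with hypothetical observed values and candidate predictions: the ensemble whose predictions have the smallest MAPE (ties broken by MAE, then MSE) is retained:

```python
# Hypothetical held-out values and two candidate ensembles' predictions.
actual = [10.0, 20.0, 30.0, 40.0]
candidates = {
    "ensemble_15": [11.0, 19.0, 31.0, 39.0],
    "ensemble_5": [14.0, 25.0, 24.0, 47.0],
}

def mae(y, p):
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def mse(y, p):
    return sum((a - b) ** 2 for a, b in zip(y, p)) / len(y)

def mape(y, p):
    return sum(abs(a - b) / abs(a) for a, b in zip(y, p)) / len(y)

best = min(candidates, key=lambda k: (mape(actual, candidates[k]),
                                      mae(actual, candidates[k]),
                                      mse(actual, candidates[k])))
print(best)
```

Here the tighter candidate wins on all three criteria at once; when the criteria disagree on real data, the tuple ordering above is just one hypothetical way to break the tie.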
NASA Astrophysics Data System (ADS)
Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.
2010-08-01
In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
O'Connor, B P
2000-08-01
Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
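The parallel-analysis decision rule itself is simple, as this sketch with hypothetical eigenvalues shows: components are retained only while the observed eigenvalue exceeds the corresponding mean eigenvalue obtained from random data of the same size:

```python
# Hypothetical eigenvalues from a real correlation matrix, and the mean
# eigenvalues from simulated random data of the same dimensions.
observed = [3.2, 1.9, 1.3, 1.05, 0.4]
random_mean = [1.6, 1.3, 1.2, 1.10, 0.9]

n_components = 0
for obs, rnd in zip(observed, random_mean):
    if obs <= rnd:       # observed no longer beats chance: stop retaining
        break
    n_components += 1
print(n_components)
```

With these numbers parallel analysis retains three components, whereas the flawed eigenvalues-greater-than-one rule criticized in the abstract would retain four (3.2, 1.9, 1.3, and 1.05 all exceed 1).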
Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis
NASA Technical Reports Server (NTRS)
Bush, Lance B.
1992-01-01
An aerobrake concept for a Lunar transfer vehicle was weight optimized through the use of the Taguchi design method, structural finite element analyses and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter to depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum weight aerobrake configuration resulting from the study was approx. half the weight of the average of all twenty seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.
Pielke, R.A.; Stohlgren, T.; Schell, L.; Parton, W.; Doesken, N.; Redmond, K.; Moeny, J.; McKee, T.; Kittel, T.G.F.
2002-01-01
We evaluated long-term trends in average maximum and minimum temperatures, threshold temperatures, and growing season in eastern Colorado, USA, to explore the potential shortcomings of many climate-change studies that either: (1) generalize regional patterns from single stations, single seasons, or a few parameters over short durations, or from averaging dissimilar stations; or (2) generalize an average regional pattern from coarse-scale general circulation models. Based on 11 weather stations, some trends were weakly regionally consistent with previous studies of night-time temperature warming. Long-term (80+ years) mean minimum temperatures increased significantly (P < 0.2) in about half the stations in winter, spring, and autumn, and six stations had significant decreases in the number of days per year with temperatures ≤ -17.8 °C (≤ 0 °F). However, spatial and temporal variation in the direction of change was enormous for all the other weather parameters tested, and, in the majority of tests, few stations showed significant trends (even at P < 0.2). In summer, four stations had significant increases and three stations had significant decreases in minimum temperatures, producing a strongly mixed regional signal. Trends in maximum temperature varied seasonally and geographically, as did trends in days with threshold temperatures ≥ 32.2 °C (≥ 90 °F) or ≥ 37.8 °C (≥ 100 °F). There was evidence of a subregional cooling in autumn's maximum temperatures, with five stations showing significant decreasing trends. There were many geographic anomalies where neighbouring weather stations differed greatly in the magnitude of change or where they had significant and opposite trends. We conclude that sub-regional spatial and seasonal variation cannot be ignored when evaluating the direction and magnitude of climate change.
It is unlikely that one or a few weather stations are representative of regional climate trends, and equally unlikely that regionally projected climate change from coarse-scale general circulation models will accurately portray trends at sub-regional scales. However, the assessment of a group of stations for consistent, more qualitative trends (such as the number of days below -17.8 °C, as we found) provides a reasonably robust procedure for evaluating climate trends and variability. Copyright © 2002 Royal Meteorological Society.
Conklin, Annalijn I; Ponce, Ninez A; Frank, John; Nandi, Arijit; Heymann, Jody
2016-01-01
To describe the relationship between minimum wage and overweight and obesity across countries at different levels of development. A cross-sectional analysis of 27 countries with data on the legislated minimum wage level linked to socio-demographic and anthropometry data of 190,892 non-pregnant adult women (24-49 y) from the Demographic and Health Survey. We used multilevel logistic regression models to condition on country- and individual-level potential confounders, and post-estimation of average marginal effects to calculate the adjusted prevalence difference. We found that the association between minimum wage and overweight/obesity was independent of individual-level SES and confounders, and showed a reversed pattern by country development stage. The adjusted overweight/obesity prevalence difference in low-income countries was an average increase of about 0.1 percentage points (PD 0.075 [0.065, 0.084]), and an average decrease of 0.01 percentage points in middle-income countries (PD -0.014 [-0.019, -0.009]). The adjusted obesity prevalence difference in low-income countries was an average increase of 0.03 percentage points (PD 0.032 [0.021, 0.042]) and an average decrease of 0.03 percentage points in middle-income countries (PD -0.032 [-0.036, -0.027]). This is among the first studies to examine the potential impact of improved wages on an important precursor of non-communicable diseases globally. Among countries with a modest level of economic development, higher minimum wage was associated with lower levels of obesity.
DFT study of gases adsorption on sharp tip nano-catalysts surface for green fertilizer synthesis
NASA Astrophysics Data System (ADS)
Yahya, Noorhana; Irfan, Muhammad; Shafie, Afza; Soleimani, Hassan; Alqasem, Bilal; Rehman, Zia Ur; Qureshi, Saima
2016-11-01
The energy minimization and spin modification of sorbates on sorbents in the magnetic induction method (MIM) play a vital role in fertilizer yield. Hence, this article focuses on the interaction of sorbates/reactants (H2, N2 and CO2), in terms of average total adsorption energies, average isosteric heats of adsorption, magnetic moments, band gap energies and spin modifications, over identical cone-tip nanocatalysts (sorbents) of Fe2O3, Fe3O4 (magnetic), CuO and Al2O3 (non-magnetic) for green nano-fertilizer synthesis. The adsorption energies, band structures and densities of states of the reactants with the sorbents were studied with the ADSORPTION LOCATOR and Cambridge Serial Total Energy Package (CASTEP) modules, following classical and first-principles DFT simulations respectively. Maximum values of the total average energies, total average adsorption energies and average adsorption energies of H2, N2 and CO2 molecules are reported as -14.688 kcal/mol, -13.444 kcal/mol, -3.130 kcal/mol, - kcal/mol and -6.348 kcal/mol over Al2O3 cone tips respectively, with minima over the magnetic cone tips. The maximum and minimum values of the average isosteric heats of adsorption of H2, N2 and CO2 are 3.081 kcal/mol, 4.842 kcal/mol and 6.848 kcal/mol, and 0.988 kcal/mol, 1.554 kcal/mol and 2.236 kcal/mol, over aluminum oxide and Fe3O4 cone tips respectively. In addition, the maximum and minimum values of net spin, number of electrons and number of bands are 82 and zero, 260 and 196, and 206 and 118 for the Fe3O4 and Al2O3 cone structures, respectively. The maximum and minimum band gap energies are 0.188 eV and 0.018 eV, for the Al2O3 and Fe3O4 cone structures respectively.
Ultimately, with the adsorption of the reactants, an identical increment of 14 electrons each in the up and down spins results.
Wang, Ya Liang; Zhang, Yu Ping; Xiang, Jing; Wang, Lei; Chen, Hui Zhe; Zhang, Yi Kai; Zhang, Wen Qian; Zhu, De Feng
2017-11-01
In this study, three rice varieties, the three-line hybrid indica rice varieties Wuyou308 and Tianyouhuazhan and the inbred indica rice Huanghuazhan, were used to investigate the effects of air temperature and solar radiation on rice growth duration and on spikelet differentiation and degeneration. Ten sowing-date treatments were conducted in this field experiment. The results showed that the growth durations of the three indica rice varieties were more sensitive to air temperature than to day-length. With an average temperature increase of 1 ℃, panicle initiation advanced by 1.5 days, but the panicle growth duration had no significant correlation with temperature or day-length. The numbers of spikelets and of differentiated spikelets revealed significant differences among sowing dates. Increases in average temperature, maximum temperature, minimum temperature, effective accumulated temperature, diurnal temperature range and solar radiation benefited dry matter accumulation and spikelet differentiation in all varieties. With increases of effective accumulated temperature, diurnal temperature range and solar radiation by 50 ℃, 1 ℃ and 50 MJ·m-2 during the panicle initiation stage, the number of differentiated spikelets increased by 10.5, 14.3 and 17.1, respectively. The rate of degenerated spikelets had a quadratic correlation with air temperature: both extreme high and low temperatures aggravated spikelet degeneration, and low-temperature stress had a stronger effect than high-temperature stress. The rate of spikelet degeneration rose dramatically when temperatures fell below the critical values; the critical effective accumulated temperature, daily average temperature, daily maximum temperature and daily minimum temperature during panicle initiation were 550-600 ℃, 24.0-26.0 ℃, 32.0-34.0 ℃ and 21.0-23.0 ℃, respectively.
In practice, natural conditions of moderately high temperature, a large diurnal temperature range and strong solar radiation were conducive to spikelet differentiation and hindered spikelet degeneration.
Lanza, Ezio; Banfi, Giuseppe; Serafini, Giovanni; Lacelli, Francesca; Orlandi, Davide; Bandirali, Michele; Sardanelli, Francesco; Sconfienza, Luca Maria
2015-07-01
We performed a systematic review of current evidence regarding ultrasound-guided percutaneous irrigation of calcific tendinopathy (US-PICT) in the shoulder aimed to: assess different published techniques; evaluate clinical outcome in a large combined cohort; and propose suggestions for homogeneous future reporting. Cochrane Collaboration for Systematic Reviews of Interventions Guidelines were followed. We searched MEDLINE/MEDLINE In-Process/EMBASE/Cochrane databases from 1992-2013 using the keywords 'ultrasound, shoulder, needling, calcification, lavage, rotator cuff' combined in appropriate algorithms. References of resulting papers were also screened. Risk of bias was assessed with a modified Newcastle-Ottawa Scale. Of 284 papers found, 15 were included, treating 1,450 shoulders in 1,403 patients (females, n = 838; mean age interval 40-63 years). There was no exclusion due to risk of bias. US-PICT of rotator cuff is a safe and effective procedure, with an estimated average 55% pain improvement at an average of 11 months, with a 10% minor complication rate. No evidence exists in favour of using a specific size/number of needles. Imaging follow-up should not be used routinely. Future studies should aim at structural uniformity, including the use of the Constant Score to assess outcomes and 1-year minimum follow-up. Alternatives to steroid injections should also be explored. • US-PICT of rotator cuff is a safe and effective procedure. • On average 55% pain improvement with 10% minor complication rate. • No evidence exists in favour of using a specific size/number of needles. • Future need to assess outcome using Constant Score with 1-year minimum follow-up.
Performance of Velicer's Minimum Average Partial Factor Retention Method with Categorical Variables
ERIC Educational Resources Information Center
Garrido, Luis E.; Abad, Francisco J.; Ponsoda, Vicente
2011-01-01
Despite strong evidence supporting the use of Velicer's minimum average partial (MAP) method to establish the dimensionality of continuous variables, little is known about its performance with categorical data. Seeking to fill this void, the current study takes an in-depth look at the performance of the MAP procedure in the presence of…
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
Hydrologic and climatic changes in three small watersheds after timber harvest.
W.B. Fowler; J.D. Helvey; E.N. Felix
1987-01-01
No significant increases in annual water yield were shown for three small watersheds in northeastern Oregon after shelterwood cutting (30-percent canopy removal, 50-percent basal area removal) and clearcutting. Average maximum air temperature increased after harvest and average minimum air temperature decreased by up to 2.6 °C. Both maximum and minimum water...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
Park, J-H; Sulyok, M; Lemons, A R; Green, B J; Cox-Ganser, J M
2018-05-04
Recent developments in molecular and chemical methods have enabled the analysis of fungal DNA and of secondary metabolites, often produced during fungal growth, in environmental samples. We compared 3 fungal analytical methods by analysing floor dust samples collected from an office building: viable culture, internal transcribed spacer (ITS) sequencing for fungi, and liquid chromatography-tandem mass spectrometry for secondary metabolites. Of the 32 metabolites identified, 29 had a potential link to fungi, with levels ranging from 0.04 ng/g (minimum, for alternariol monomethylether) to 5700 ng/g (maximum, for neoechinulin A). The number of fungal metabolites quantified per sample ranged from 8 to 16 (average = 13/sample). We identified 216 fungal operational taxonomic units (OTUs), with the number per sample ranging from 6 to 29 (average = 18/sample). We identified 37 fungal species using culture, and the number per sample ranged from 2 to 13 (average = 8/sample). Agreement in identification between ITS sequencing and culturing was weak (kappa = -0.12 to 0.27). The number of cultured fungal species correlated poorly with the number of OTUs, which did not correlate with the number of metabolites. These results suggest that using multiple measurement methods may provide an improved understanding of fungal exposures in indoor environments, and that secondary metabolites may be considered an additional source of exposure. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
... part) Hydrogen chloride 62 parts per million by dry volume 3-run average (1 hour minimum sample time...) Sulfur dioxide 20 parts per million by dry volume 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity 10 percent Three 1-hour blocks consisting of ten 6...
The predatory mite Phytoseiulus persimilis adjusts patch-leaving to own and progeny prey needs.
Vanas, V; Enigl, M; Walzer, A; Schausberger, P
2006-01-01
Integration of optimal foraging and optimal oviposition theories suggests that predator females should adjust patch-leaving to their own and their progeny's prey needs to maximize current and future reproductive success. We tested this hypothesis in the predatory mite Phytoseiulus persimilis and its patchily distributed prey, the two-spotted spider mite Tetranychus urticae. In three separate experiments we assessed (1) the minimum number of prey needed to complete juvenile development, (2) the minimum number of prey needed to produce an egg, and (3) the ratio between eggs laid and spider mites left when a gravid P. persimilis female leaves a patch. Experiments (1) and (2) were the prerequisites for assessing the fitness costs associated with staying in or leaving a prey patch. Immature P. persimilis needed at least 7 and on average 14+/-3.6 (SD) T. urticae eggs to reach adulthood. Gravid females needed at least 5 and on average 8.5+/-3.1 (SD) T. urticae eggs to produce an egg. Most females left the initial patch before spider mite extinction, leaving prey for progeny to develop to adulthood. Females placed in a low-density patch left 5.6+/-6.1 (SD) eggs per egg laid, whereas those placed in a high-density patch left 15.8+/-13.7 (SD) eggs per egg laid. The three experiments in concert suggest that gravid P. persimilis females are able to balance the trade-off between optimal foraging and optimal oviposition and adjust patch-leaving to their own and their progeny's prey needs.
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end-to-end delay of data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is formulated as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS were conducted under different types of traffic source.
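The abstract does not detail the MSTMCF procedures; as an illustration of the spanning-tree half of such a scheme, a minimal Kruskal sketch in Python (hypothetical, not the authors' implementation) builds the cheapest tree connecting all network nodes:

```python
def kruskal_mst(n_nodes, edges):
    """Kruskal's minimum spanning tree via union-find.

    edges: list of (weight, u, v) tuples over nodes 0..n_nodes-1.
    Returns (total_weight, tree_edges).
    """
    parent = list(range(n_nodes))

    def find(a):
        # path-halving union-find lookup
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    total, tree = 0, []
    for w, u, v in sorted(edges):  # consider edges in order of weight
        ru, rv = find(u), find(v)
        if ru != rv:               # accept only edges joining two components
            parent[ru] = rv
            total += w
            tree.append((u, v))
    return total, tree

# hypothetical 4-node access-network topology with link costs
links = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
weight, tree = kruskal_mst(4, links)
```

In a full MSTMCF-style scheme, a minimum cost flow step would then route traffic demands over the network subject to link capacities; that step is omitted here.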
Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.
Baugh, Tiffany P; Franzese, Christine B
2017-06-01
Objectives The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and to understand differences in minimum, mean, and maximum numbers among certain procedures, as a follow-up to a prior study. Study Design Cross-sectional survey using a national database. Setting Academic otolaryngology residency programs. Subjects and Methods Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for the total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had fewer than the required minimum number of cases reported in 2015. Conclusion Despite the ACGME instituting required minimums for key indicator procedures, residents have graduated without meeting them. Furthermore, there continue to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case role, as well as emphasizing frequent logging by residents, will ensure programs have the most accurate data with which to evaluate their case volume.
Relationship between cervical vertebral maturation and mandibular growth.
Ball, Gina; Woodside, Donald; Tompson, Bryan; Hunter, W Stuart; Posluns, James
2011-05-01
The cervical vertebrae have been proposed as a method of determining biologic maturity. The purposes of this study were to establish a pattern of mandibular growth and to relate this pattern to the stages of cervical vertebral maturation. Cephalometric radiographs, taken annually from ages 9 to 18 years, were evaluated for 90 boys from the Burlington Growth Center, Toronto, Ontario, Canada. Mandibular lengths were measured from articulare to gnathion, and incremental growth was determined. Cervical vertebral maturation stages were assessed by using a 6-stage method. Advanced, average, and delayed maturation groups were established. The prepubertal mandibular growth minimum velocity occurred during cervical stages 1 through 4 (P = 0.7327). Peak mandibular growth velocity occurred most frequently during stage 4 in all 3 maturation groups, with a statistical difference in the average and delayed groups (P <0.0001) and the advanced group (P = 0.0143). The average number of years spent in stage 4 was 3.79 (P <0.0001). The average amount of mandibular growth occurring during stage 4 was 9.40 mm (P <0.0001). The average amount of growth in stages 5 and 6 combined was 7.09 mm. Progression from cervical stages 1 through 6 does not occur annually; time spent in each stage varies depending on the stage and the maturation group. Cervical vertebral maturation stages cannot accurately identify the mandibular prepubertal growth minimum and therefore cannot predict the onset of the peak in mandibular growth. The cervical vertebral maturation stages should be used with other methods of biologic maturity assessment when considering both dentofacial orthopedic treatment and orthognathic surgery. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Love, Jeffrey J.; Rigler, J.
2012-01-01
Analysis is made of the geomagnetic-activity aa index covering solar cycle 11 to the beginning of cycle 24, 1868-2011. Autocorrelation shows the 27.0-d recurrent geomagnetic activity that is well known to be prominent during solar-cycle minima; some minima also exhibit a smaller amount of 13.5-d recurrence. Previous work has shown that the recent solar minimum 23-24 exhibited 9.0- and 6.7-d recurrence in geomagnetic and heliospheric data, but those recurrence intervals were not prominently present during the preceding minima 21-22 and 22-23. Using annual averages and solar-cycle averages of autocorrelations of the historical aa data, we put these observations into a long-term perspective: none of the 12 minima preceding 23-24 exhibited prominent 9.0- and 6.7-d geomagnetic activity recurrence. We show that the detection of these recurrence intervals can be traced to an unusual combination of sectorial spherical-harmonic structure in the solar magnetic field and anomalously low sunspot number. We speculate that 9.0- and 6.7-d recurrence is related to transient large-scale, low-latitude organization of the solar dynamo, such as seen in some numerical simulations.
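Recurrence detection of this kind rests on the sample autocorrelation at candidate lags. A minimal sketch (illustrative only, with a synthetic 27-day periodic series standing in for the aa index) is:

```python
import math

def autocorr(x, lag):
    """Sample autocorrelation of series x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# hypothetical daily index with a pure 27-day recurrence
series = [math.sin(2 * math.pi * i / 27.0) for i in range(27 * 20)]
r27 = autocorr(series, 27)   # strong positive peak at the recurrence period
r13 = autocorr(series, 13)   # near the half-period, the correlation is negative
```

A real analysis would scan a range of lags and look for peaks near 27.0, 13.5, 9.0 and 6.7 days rather than testing single lags.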
Analysis of a long drought in Piedmont, Italy - Autumn 2001
NASA Astrophysics Data System (ADS)
Gandini, D.; Marchisio, C.; Paesano, G.; Pelosini, P.
2003-04-01
A long period of drought and cold temperatures characterised Autumn 2001 and Winter 2001-2002 in the regions of the southern Alpine chain. Analysis of precipitation data collected by the Regional Monitoring network of the Piedmont Region (on the south-west side of the Alps) shows values far below the means and very close to the historical minima of the last century. The six-month accumulated precipitation in Turin (the chief town of Piedmont), from June to December 2001, reached the historical minimum value of 206 mm, against a mean value of 540 mm. The drought was also remarkable in the mountain areas, with a lack of snowfall and critical consequences for water reservoirs. At the same time, the number of days with daily averaged temperature below or close to 0°C in December 2001 was the greatest of the last 50 years for the whole Piedmont region, much higher than the 50-year average. This study contains a detailed analysis of observed data to characterise the drought episode, together with a climatological analysis of meteorological parameters in order to detect the typical large-scale pattern of drought periods and their persistence features.
Variability of fractal dimension of solar radio flux
NASA Astrophysics Data System (ADS)
Bhatt, Hitaishi; Sharma, Som Kumar; Trivedi, Rupal; Vats, Hari Om
2018-04-01
In the present communication, the variation of the fractal dimension of the solar radio flux is reported. Daily solar radio flux observations at 410, 1415, 2695, 4995, and 8800 MHz are used in this study. The data were recorded at the Learmonth Solar Observatory, Australia, from 1988 to 2009, covering an epoch of two solar activity cycles (22 yr). The fractal dimension is calculated for the listed frequencies over this period. The fractal dimension, being a measure of randomness, represents the variability of the solar radio flux at shorter time-scales. A contour plot of fractal dimension on a grid of year versus radio frequency suggests a high correlation with solar activity. The fractal dimension increases with frequency, suggesting that randomness increases towards the inner corona. This study also shows that the low frequencies are more affected by solar activity (at low frequency the fractal dimension difference between solar maximum and solar minimum is 0.42), whereas the higher frequencies are less affected (here the difference is 0.07). A good positive correlation is found between the fractal dimension averaged over all frequencies and the yearly averaged sunspot number (Pearson's coefficient is 0.87).
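The abstract does not specify which fractal-dimension estimator was used; one common choice for a daily time series is Higuchi's algorithm, sketched below in pure Python as an illustration only. A straight line has dimension 1, while increasingly random series approach 2.

```python
import math

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D series.

    For each delay k the mean normalized curve length L(k) is computed;
    the fractal dimension is the least-squares slope of log L(k)
    against log(1/k), since L(k) scales as k**(-D).
    """
    n = len(x)
    log_inv_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            n_steps = (n - 1 - m) // k  # steps available at this offset
            if n_steps < 1:
                continue
            dist = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                       for i in range(1, n_steps + 1))
            # Higuchi normalization of the curve length
            lengths.append(dist * (n - 1) / (n_steps * k) / k)
        if lengths:
            log_inv_k.append(math.log(1.0 / k))
            log_l.append(math.log(sum(lengths) / len(lengths)))
    # least-squares slope of log L against log(1/k)
    mx = sum(log_inv_k) / len(log_inv_k)
    my = sum(log_l) / len(log_l)
    num = sum((a - mx) * (b - my) for a, b in zip(log_inv_k, log_l))
    den = sum((a - mx) ** 2 for a in log_inv_k)
    return num / den

# a straight line has fractal dimension 1
fd_line = higuchi_fd([float(i) for i in range(200)])
```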
Reconstruction of solar spectral irradiance since the Maunder minimum
NASA Astrophysics Data System (ADS)
Krivova, N. A.; Vieira, L. E. A.; Solanki, S. K.
2010-12-01
Solar irradiance is the main external driver of the Earth's climate. Whereas the total solar irradiance is the main source of energy input into the climate system, solar UV irradiance exerts control over chemical and physical processes in the Earth's upper atmosphere. The time series of accurate irradiance measurements are, however, relatively short and limit the assessment of the solar contribution to the climate change. Here we reconstruct solar total and spectral irradiance in the range 115-160,000 nm since 1610. The evolution of the solar photospheric magnetic flux, which is a central input to the model, is appraised from the historical record of the sunspot number using a simple but consistent physical model. The model predicts an increase of 1.25 W/m2, or about 0.09%, in the 11-year averaged solar total irradiance since the Maunder minimum. Also, irradiance in individual spectral intervals has generally increased during the past four centuries, the magnitude of the trend being higher toward shorter wavelengths. In particular, the 11-year averaged Ly-α irradiance has increased by almost 50%. An exception is the spectral interval between about 1500 and 2500 nm, where irradiance has slightly decreased (by about 0.02%).
Estimating Controller Intervention Probabilities for Optimized Profile Descent Arrivals
NASA Technical Reports Server (NTRS)
Meyn, Larry A.; Erzberger, Heinz; Huynh, Phu V.
2011-01-01
Simulations of arrival traffic at Dallas/Fort-Worth and Denver airports were conducted to evaluate incorporating scheduling and separation constraints into advisories that define continuous descent approaches. The goal was to reduce the number of controller interventions required to ensure flights maintain minimum separation distances of 5 nmi horizontally and 1000 ft vertically. It was shown that simply incorporating arrival meter fix crossing-time constraints into the advisory generation could eliminate over half of all predicted separation violations and more than 80% of the predicted violations between two arrival flights. Predicted separation violations between arrivals and non-arrivals were 32% of all predicted separation violations at Denver and 41% at Dallas/Fort-Worth. A probabilistic analysis of meter fix crossing-time errors is included, which shows that some controller interventions will still be required even when the predicted crossing times of the advisories are set to add a 1 or 2 nmi buffer above the minimum in-trail separation of 5 nmi. The 2 nmi buffer was shown to increase average flight delays by up to 30 sec compared to the 1 nmi buffer, but it only resulted in a maximum decrease in average arrival throughput of one flight per hour.
The Effect of the Minimum Compensating Cash Balance on School District Investments.
ERIC Educational Resources Information Center
Dembowski, Frederick L.
Banks are usually reimbursed for their checking account services either by a fixed service charge or by requiring a minimum or minimum-average compensating cash balance. This paper demonstrates how to determine the optimal minimum balance for a school district to maintain in its account. It is assumed that both the bank and the school district use…
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at two abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent two years with the actual data for 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and the ensemble average profiles for the weekly periodicity were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of the minimum Akaike information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from the three producers with the highest rates of condemnation due to mycobacteriosis.
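The first modeling step described above, checking for periodicities with a Fourier transform before fitting the ARIMA residual model, can be sketched as follows. This is an illustrative reconstruction on synthetic daily counts, not the study's actual inspection records:

```python
import numpy as np

def dominant_period(counts, sampling_interval_days=1):
    """Return the dominant period (in days) of a daily count series via FFT."""
    x = np.asarray(counts, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=sampling_interval_days)
    k = np.argmax(spectrum[1:]) + 1       # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Synthetic condemnation counts with a weekly cycle plus noise
rng = np.random.default_rng(0)
days = np.arange(8 * 52 * 7)              # eight years of daily records
series = 5 + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 0.5, days.size)
print(round(dominant_period(series)))     # → 7, i.e. the weekly periodicity
```

Once the weekly ensemble average is subtracted, an ARIMA model with minimum AIC would be fitted to the residual, as the abstract describes.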
The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.
ERIC Educational Resources Information Center
Macpherson, David A.; Even, William E.
The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…
Code of Federal Regulations, 2011 CFR
2011-07-01
[Truncated table excerpt: emission limits reported as 3-run averages (1-hour minimum sample time per run), measured by EPA Reference Methods 5 and 10; limits expressed in ppmv and per dscf.]
Selected low-flow frequency statistics for continuous-record streamgages in Georgia, 2013
Gotvald, Anthony J.
2016-04-13
This report presents the annual and monthly minimum 1- and 7-day average streamflows with the 10-year recurrence interval (1Q10 and 7Q10) for 197 continuous-record streamgages in Georgia. Streamgages used in the study included active and discontinued stations having a minimum of 10 complete climatic years of record as of September 30, 2013. The 1Q10 and 7Q10 flow statistics were computed for 85 streamgages on unregulated streams with minimal diversions upstream, 43 streamgages on regulated streams, and 69 streamgages known, or considered, to be affected by varying degrees of diversions upstream. Descriptive information for each of these streamgages, including the U.S. Geological Survey (USGS) station number, station name, latitude, longitude, county, drainage area, and period of record analyzed also is presented.Kendall’s tau nonparametric test was used to determine the statistical significance of trends in annual and monthly minimum 1-day and 7-day average flows for the 197 streamgages. Significant negative trends in the minimum annual 1-day and 7-day average streamflow were indicated for 77 of the 197 streamgages. Many of these significant negative trends are due to the period of record ending during one of the recent droughts in Georgia, particularly those streamgages with record through the 2013 water year. Long-term unregulated streamgages with 70 or more years of record indicate significant negative trends in the annual minimum 7-day average flow for central and southern Georgia. Watersheds for some of these streamgages have experienced minimal human impact, thus indicating that the significant negative trends observed in flows at the long-term streamgages may be influenced by changing climatological conditions. A Kendall-tau trend analysis of the annual air temperature and precipitation totals for Georgia indicated no significant trends. 
A comprehensive analysis of causes of the trends in annual and monthly minimum 1-day and 7-day average flows in central and southern Georgia is outside the scope of this study. Further study is needed to determine some of the causes, including both climatological and human impacts, of the significant negative trends in annual minimum 1-day and 7-day average flows in central and southern Georgia. To assess the changes in the annual 1Q10 and 7Q10 statistics over time for long-term continuous streamgages with significant trends in their records, the annual 1Q10 and 7Q10 statistics were computed on a decadal accumulated basis for 39 streamgages having 40 or more years of record that indicated a significant trend. Records from most of the streamgages showed a decline in 7Q10 statistics for the decades of 1980–89, 1990–99, and 2000–09 because of the recent droughts in Georgia. Twenty-four of the 39 streamgages had complete records from 1980 to 2010, and records from 23 of these gages exhibited a decline in the 7Q10 statistics during this period, ranging from –6.3 to –76.2 percent with a mean of –27.3 percent. No attempts were made during this study to adjust streamflow records or statistical analyses on the basis of trends. The monthly and annual 1Q10 and 7Q10 flow statistics for the entire period of record analyzed in the study are incorporated into the USGS StreamStatsDB, which is a database accessible to users through the recently released USGS StreamStats application for Georgia. StreamStats is a Web-based geographic information system that provides users with access to an assortment of analytical tools that are useful for water-resources planning and management, and for engineering design applications, such as the design of bridges. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and other information for user-selected streamgages.
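The annual minimum 1-day and 7-day average flows that underlie the 1Q10 and 7Q10 statistics can be sketched as a moving-average minimum over a year of daily flows. The flow series below is synthetic and purely illustrative:

```python
import numpy as np

def annual_min_nday(flows, n):
    """Annual minimum n-day average flow from one climatic year of daily flows."""
    window = np.convolve(flows, np.ones(n) / n, mode="valid")  # n-day moving averages
    return window.min()

# Hypothetical year of daily flows (cfs) with a seasonal low-flow recession
rng = np.random.default_rng(1)
flows = 100 + 50 * np.cos(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 2, 365)
m1 = annual_min_nday(flows, 1)   # basis of the 1Q10 statistic
m7 = annual_min_nday(flows, 7)   # basis of the 7Q10 statistic
print(m7 >= m1)                  # → True: averaging smooths the single driest day
```

The 10-year recurrence-interval values (1Q10, 7Q10) would then come from a frequency analysis of these annual minima over a minimum of 10 complete climatic years, as described above.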
NASA Astrophysics Data System (ADS)
Tehsin, Sara; Rehman, Saad; Riaz, Farhan; Saeed, Omer; Hassan, Ali; Khan, Muazzam; Alam, Muhammad S.
2017-05-01
A fully invariant system helps resolve difficulties in object detection when the camera or object orientation and position are unknown. In this paper, the proposed correlation-filter-based mechanism provides the capability to suppress noise, clutter, and occlusion. The Minimum Average Correlation Energy (MACE) filter yields sharp correlation peaks while constraining the correlation peak value. A Difference of Gaussian (DoG) wavelet has been added at the preprocessing stage of the proposed filter design, facilitating target detection in orientation-variant, cluttered environments. A logarithmic transformation is combined with a DoG composite minimum average correlation energy filter (WMACE), capable of producing sharp correlation peaks despite geometric distortions of the target object. The proposed filter shows improved performance over several other correlation-filter variants, as discussed in the results section.
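The DoG preprocessing stage can be sketched as the difference of two separable Gaussian blurs, which acts as a band-pass filter that suppresses slowly varying clutter. The kernel widths below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def dog_filter(image, sigma1=1.0, sigma2=2.0, radius=6):
    """Difference-of-Gaussian (DoG) band-pass preprocessing via separable convolution."""
    def blur(img, sigma):
        k = gaussian_kernel1d(sigma, radius)
        tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)
    return blur(image, sigma1) - blur(image, sigma2)

# An impulse target: the DoG response peaks at the target location,
# while a flat (low-frequency) background produces essentially no response.
impulse = np.zeros((32, 32))
impulse[16, 16] = 1.0
response = dog_filter(impulse)
print(response[16, 16] > 0)        # → True
```

In the paper's pipeline this preprocessed image would then be correlated with the synthesized MACE/WMACE filter; the filter synthesis itself is not reproduced here.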
Wiley, Jeffrey B.
2006-01-01
Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was for the 7-day 10-year low flow computed for 1970-1979, which was 582.5 percent greater than the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologically based low flow (7Q10), respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis. 
The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for the individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970 when minimum flows were greater than the average between 1930 and 2002 and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual station's record periods at stations was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows are nearly equal to the average for 1930-2002 are determined as representative of 1930-2002. Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin
Walker, J.F.; Osen, L.L.; Hughes, P.E.
1987-01-01
A minimum budget of $510,000 is required to operate the program; a budget less than this does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.
1980-12-01
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results, are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
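For a common mean and multivariate normal errors, the maximum likelihood combination of correlated estimates reduces to the minimum-variance (best linear unbiased) weighted average. A minimal sketch, with hypothetical eigenvalue estimates and a hypothetical covariance matrix:

```python
import numpy as np

def blue_combine(estimates, cov):
    """Minimum-variance unbiased combination of correlated estimates of a
    common quantity; returns the combined estimate and its variance."""
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)
    variance = 1.0 / (ones @ w)
    return variance * (w @ estimates), variance

# Two correlated Monte Carlo eigenvalue estimates (illustrative numbers)
x = np.array([1.002, 0.998])
C = np.array([[4e-6, 1e-6],
              [1e-6, 2e-6]])          # sample variances and covariance
est, var = blue_combine(x, C)
print(var <= C.diagonal().min())      # → True: never worse than the best single run
```

This mirrors the abstract's observation: the combined variance is at most that of the minimum-variance individual eigenvalue, and using sample (rather than population) covariances still gives nearly minimum-variance results when the statistics are well estimated.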
78 FR 57585 - Minimum Training Requirements for Entry-Level Commercial Motor Vehicle Operators
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... specific minimum number of training hours. Instead, these commenters support a performance-based approach... support a minimum hours-based approach to training. They stated that FMCSA must specify the minimum number...\\ Additionally, some supporters of an hours-based training approach believed that the Agency's proposal did not...
Prelude to Cycle 23: The Case for a Fast-Rising, Large Amplitude Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-01-01
For the common data-available interval of cycles 12 to 22, we show that annual averages of sunspot number for minimum years (R(min)) and maximum years (R(max)) and of the minimum value of the aa geomagnetic index in the vicinity of sunspot minimum (aa(min)) are consistent with the notion that each has embedded within its respective record a long-term, linear, secular increase. Extrapolating each of these fits to cycle 23, we infer that it will have R(min) = 12.7 +/- 5.7, R(max) = 176.7 +/- 61.8, and aa(min) = 21.0 +/- 5.0 (at the 95-percent level of confidence), suggesting that cycle 23 will have R(min) greater than 7.0, R(max) greater than 114.9, and aa(min) greater than 16.0 (at the 97.5-percent level of confidence). Such values imply that cycle 23 will be larger than average in size and, consequently (by the Waldmeier effect), will be a fast riser. We also infer from the R(max) and aa(min) records the existence of an even-odd cycle effect, one in which the odd-following cycle is numerically larger in value than the even-leading cycle. For cycle 23, the even-odd cycle effect suggests that R(max) greater than 157.6 and aa(min) greater than 19.0, values that were recorded for cycle 22, the even-leading cycle of the current even-odd cycle pair (cycles 22 and 23). For 1995, the annual average of the aa index measured about 22, while for sunspot number, it was about 18. Because aa(min) usually lags R(min) by 1 year (true for 8 of 11 cycles) and 1996 seems destined to be the year of R(min) for cycle 23, it may be that aa(min) will occur in 1997, although it could occur in 1996 in conjunction with R(min) (true for 3 of 11 cycles). Because of this ambiguity in determining aa(min), no formal prediction based on the correlation of R(max) against aa(min), having r = 0.90, or of R(max) against the combined effects of R(min) and aa(min)-the bivariate technique-having r = 0.99, is possible until 1997, at the earliest.
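The secular linear fit and its extrapolation to cycle 23 can be sketched with an ordinary least-squares line over cycle number. The R(min) values below are illustrative stand-ins, not the paper's data, so the extrapolated value is not the paper's prediction:

```python
import numpy as np

# Hypothetical annual-average sunspot numbers at minimum for cycles 12-22,
# chosen only to illustrate the fitting and extrapolation procedure.
cycles = np.arange(12, 23)
r_min = np.array([2.7, 5.0, 2.6, 1.5, 5.6, 3.5, 7.7, 3.4, 9.6, 12.2, 12.3])

slope, intercept = np.polyfit(cycles, r_min, 1)   # linear secular fit
r_min_23 = slope * 23 + intercept                 # extrapolate to cycle 23
print(slope > 0)                                  # → True: upward secular trend
```

Because the fitted line passes through the data centroid, a positive slope guarantees that the extrapolated cycle-23 value exceeds the cycles 12-22 average, which is the sense in which the secular increase drives the larger-than-average prediction.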
NASA Astrophysics Data System (ADS)
Jaiswal, P.; van Westen, C. J.; Jetten, V.
2011-06-01
A quantitative procedure for estimating landslide risk to life and property is presented and applied in a mountainous area in the Nilgiri hills of southern India. Risk is estimated for elements at risk located in both initiation zones and run-out paths of potential landslides. Loss of life is expressed as individual risk and as societal risk using F-N curves, whereas the direct loss of properties is expressed in monetary terms. An inventory of 1084 landslides was prepared from historical records available for the period between 1987 and 2009. A substantially complete inventory was obtained for landslides on cut slopes (1042 landslides), while for natural slopes information on only 42 landslides was available. Most landslides were shallow translational debris slides and debris flowslides triggered by rainfall. On natural slopes most landslides occurred as first-time failures. For landslide hazard assessment the following information was derived: (1) landslides on natural slopes grouped into three landslide magnitude classes, based on landslide volumes, (2) the number of future landslides on natural slopes, obtained by establishing a relationship between the number of landslides on natural slopes and cut slopes for different return periods using a Gumbel distribution model, (3) landslide susceptible zones, obtained using a logistic regression model, and (4) distribution of landslides in the susceptible zones, obtained from the model fitting performance (success rate curve). The run-out distance of landslides was assessed empirically using landslide volumes, and the vulnerability of elements at risk was subjectively assessed based on limited historic incidents. Direct specific risk was estimated individually for tea/coffee and horticulture plantations, transport infrastructures, buildings, and people both in initiation and run-out areas. 
Risks were calculated by considering the minimum, average, and maximum landslide volumes in each magnitude class and the corresponding minimum, average, and maximum run-out distances and vulnerability values, thus obtaining a range of risk values per return period. The results indicate that the total annual minimum, average, and maximum losses are about US$44,000, US$136,000, and US$268,000, respectively. The maximum risk to population varies from 2.1 × 10^-1 yr^-1 for one or more lives lost to 6.0 × 10^-2 yr^-1 for 100 or more lives lost. The obtained results will provide a basis for planning risk reduction strategies in the Nilgiri area.
External morphology and calling song characteristics in Tibicen plebejus (Hemiptera: Cicadidae).
Mehdipour, Maedeh; Sendi, Jalal Jalali; Zamanian, Hossein
2015-02-01
Tibicen plebejus is the largest cicada native to the ecosystem in northern Iran. The male cicada produces a loud calling song for attracting females from a long distance. It is presumed that the female selects a mate based on a combination of passive and active mechanisms, but it is not known if she selects for size, nor if the male's size correlates with any characteristic of the advertisement call. In this study, we report the relationship between calling song features and morphological characters in the male of T. plebejus. Research was conducted in northern Iran during the summer of 2010. Seventeen males were collected and their calling songs were recorded in a natural environment. Two morphological characters were measured: length and weight. Maximum, minimum, and average values of 10 key acoustic variables of the calling song were analyzed: phrase duration, phrase part 1, phrase part 2, number of phrases per minute, echeme duration, echeme period, interecheme interval, number of echemes per second, echeme/interecheme ratio, and dominant frequency. The data were tested for the level of association between morphology and acoustic variables using simple linear regression. In terms of song structure, three significant positive correlations existed between length and (1) mean echeme duration, (2) mean echeme/interecheme ratio, and (3) maximum echeme/interecheme ratio. We also found significant negative correlations of both length and weight with (1) minimum interecheme interval, (2) mean dominant frequency, (3) minimum dominant frequency, and (4) maximum dominant frequency. These results suggest that larger males of T. plebejus produce lower-frequency songs with shorter silent intervals between echemes. Copyright © 2014 Académie des sciences. Published by Elsevier SAS. All rights reserved.
The Advantages of Collimator Optimization for Intensity Modulated Radiation Therapy
NASA Astrophysics Data System (ADS)
Doozan, Brian
The goal of this study was to improve dosimetry for pelvic, lung, head and neck, and other cancer sites with aspherical planning target volumes (PTV) using a new algorithm for collimator optimization for intensity modulated radiation therapy (IMRT) that minimizes the x-jaw gap (CAX) and the area of the jaws (CAA) for each treatment field. A retrospective study of the effects of collimator optimization in 20 patients was performed by comparing metric results for new collimator optimization techniques in Eclipse version 11.0. Keeping all other parameters equal, multiple plans were created using four collimator techniques: CA0, in which all fields have collimators set to 0°; CAE, using the Eclipse collimator optimization; CAA, minimizing the area of the jaws around the PTV; and CAX, minimizing the x-jaw gap. The minimum-area and minimum x-jaw angles were found by evaluating each field's beam's-eye view of the PTV with ImageJ and finding the desired parameters with a custom script. The evaluation of the plans included the monitor units (MU), the maximum dose of the plan, the maximum dose to organs at risk (OAR), the conformity index (CI), and the number of fields that are calculated to split. Compared to the CA0 plans, the monitor units decreased on average by 6% for the CAX method, with a p-value of 0.01 from an ANOVA test. The average maximum dose remained within 1.1% difference between all four methods, with the lowest given by CAX. The maximum dose to the most at-risk organ was best spared by the CAA method, decreasing by 0.62% compared to CA0. Minimizing the x-jaws significantly reduced the number of split fields from 61 to 37. In every metric tested, the CAX optimization produced comparable or superior results relative to the other three techniques. For aspherical PTVs, CAX on average reduced the number of split fields, lowered the maximum dose, minimized the dose to the surrounding OAR, and decreased the monitor units. 
This is achieved while maintaining the same control of the PTV.
Fuzzy α-minimum spanning tree problem: definition and solutions
NASA Astrophysics Data System (ADS)
Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan
2016-04-01
In this paper, the minimum spanning tree problem is investigated on a graph with fuzzy edge weights. The notion of a fuzzy α-minimum spanning tree is presented based on the credibility measure, and then the solutions of the fuzzy α-minimum spanning tree problem are discussed under different assumptions. First, we respectively assume that all the edge weights are triangular fuzzy numbers and trapezoidal fuzzy numbers and prove that the fuzzy α-minimum spanning tree problem can be transformed to a classical problem on a crisp graph in these two cases, which can be solved by classical algorithms such as the Kruskal algorithm and the Prim algorithm in polynomial time. Subsequently, as for the case that the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Some numerical examples are also provided for illustrating the effectiveness of the proposed solutions.
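Once the triangular or trapezoidal fuzzy weights are reduced to crisp values (for example, via the credibility measure at the chosen α), the residual problem is a classical MST, solvable by Kruskal's algorithm as the abstract notes. A minimal Kruskal sketch with union-find, on assumed crisp weights:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm. edges: list of (weight, u, v); returns (cost, tree edges)."""
    parent = list(range(n))

    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    cost, tree = 0, []
    for w, u, v in sorted(edges):     # consider edges in nondecreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding the edge creates no cycle
            parent[ru] = rv
            cost += w
            tree.append((u, v))
    return cost, tree

# Hypothetical crisp weights obtained from the fuzzy reduction step
edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
cost, tree = kruskal_mst(4, edges)
print(cost)                           # → 6
```

The genetic algorithm for general fuzzy weights (Prüfer-encoded trees plus fuzzy simulation) is substantially more involved and is not sketched here.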
Konstantinidis, Charalampos; Kratiras, Zisis; Samarinas, Michael; Skriapas, Konstantinos
2016-01-01
To identify the minimum bladder diary length required to furnish reliable documentation of LUTS in a specific cohort of patients suffering from neurogenic urinary dysfunction secondary to suprapontine pathology. From January 2008 to January 2014, patients suffering from suprapontine pathology and LUTS were requested to prospectively complete a bladder diary form for 7 consecutive days. Micturitions per day, excreta per micturition, urgency and incontinence episodes, and voided volume per day were evaluated from the completed diaries. We compared the averaged records of consecutive days (2-6 days) to the total 7-day records for each patient's diary, seeking the minimum diary length that could provide records comparable to the 7-day average, the reference point in terms of reliability. Of 285 subjects, 94 male and 69 female patients were enrolled in the study. The records of day 1 were significantly different from the average of the 7-day records in every parameter, showing relatively small correlation and providing insufficient documentation. Correlations gradually increased with increasing diary duration. According to our results, a 3-day bladder diary is sufficient and can provide results comparable to a 7-day diary for four of our evaluated parameters. Regarding incontinence episodes, 3 days seems inadequate to furnish comparable results, showing a borderline difference. A 3-day diary can be used, as it is reliable regarding number of micturitions per day, excreta per micturition, episodes of urgency, and voided volume per day. Copyright© by the International Brazilian Journal of Urology.
Sanz-Mengibar, Jose Manuel; Altschuck, Natalie; Sanchez-de-Muniain, Paloma; Bauer, Christian; Santonja-Medina, Fernando
2017-04-01
To understand whether there is a trunk postural control threshold in the sagittal plane for the transition between Gross Motor Function Classification System (GMFCS) levels, measured with 3-dimensional gait analysis. Spine angles according to the Plug-In Gait model (Vicon) from 97 children with spastic bilateral cerebral palsy were plotted relative to their GMFCS level. Only the average and minimum values of the lumbar spine segment correlated with GMFCS levels. Maximal values at loading response correlated independently with age at all functional levels. Average and minimum values were significant when analyzing age in combination with GMFCS level. There are specific postural control patterns in the average and minimum values of the position between trunk and pelvis in the sagittal plane during gait for the transition among GMFCS levels I-III. Higher classifications of gross motor skills correlate with more extended spine angles.
Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.
2013-04-16
A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
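The timer logic described above can be sketched as a single update step: compare the minimum cell voltage against the stack average and a fixed floor, and accumulate time while both conditions hold. The voltage thresholds below are illustrative assumptions, not the patent's calibrated values:

```python
def update_low_cell_timer(cell_voltages, timer, dt,
                          spread_threshold=0.10, min_threshold=0.30):
    """One step of the detection logic: increment the low-performing-cell
    timer when the minimum cell voltage lags the stack average by more than
    spread_threshold and is also below min_threshold (volts)."""
    avg_v = sum(cell_voltages) / len(cell_voltages)
    min_v = min(cell_voltages)
    if (avg_v - min_v) > spread_threshold and min_v < min_threshold:
        timer += dt
    return timer

timer = 0.0
healthy = [0.65, 0.66, 0.64, 0.65]
degraded = [0.65, 0.66, 0.64, 0.25]       # one cell well below the stack average
timer = update_low_cell_timer(healthy, timer, 1.0)   # no increment
timer = update_low_cell_timer(degraded, timer, 1.0)  # increments by dt
print(timer)                                          # → 1.0
```

Per the abstract, the final identification would compare the ratio of this timer to a system run timer; the prerequisite checks (stack running, coolant temperature, low current density) are omitted here.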
NASA Astrophysics Data System (ADS)
Xie, ChengJun; Xu, Lin
2008-03-01
This paper presents an algorithm based on a mixing transform with wave-band grouping to eliminate spectral redundancy. The algorithm adapts to the correlation differences between spectral images and works well even when the band number is not a power of 2. Using a non-boundary-extension CDF(2,2) DWT and a subtraction mixing transform to eliminate spectral redundancy, a CDF(2,2) DWT to eliminate spatial redundancy, and SPIHT+CABAC for compression coding, the experiments show that satisfactory lossless compression can be achieved. Using the hyperspectral image Canal of the American JPL laboratory as the data set for the lossless compression test, when the band number is not a power of 2, the lossless compression results of this algorithm are much better than those of JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, Minimum Spanning Tree, and Near Minimum Spanning Tree; on average, the compression ratio of this algorithm exceeds these algorithms by 41%, 37%, 35%, 29%, 16%, 10%, and 8%, respectively. When the band number is a power of 2, for 128 frames of the image Canal, groupings of 8, 16, and 32 bands were tested; considering factors such as compression storage complexity, the type of wave band, and the compression effect, we suggest using 8 bands per group to achieve a better compression effect. The algorithm offers advantages in operation speed and convenience of hardware implementation.
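The reversible building block behind the lossless coding above, a one-level integer CDF(2,2) (5/3) lifting transform, can be sketched as follows. Boundary handling here is a simple symmetric assumption, not necessarily the paper's non-boundary-extension scheme, and an even-length signal is assumed:

```python
def cdf22_forward(x):
    """One level of the integer CDF(2,2) (5/3) lifting transform on an
    even-length integer signal; returns (smooth, detail) coefficients."""
    s, d = x[0::2], x[1::2]
    n = len(d)
    # Predict step: detail = odd sample minus rounded average of even neighbours
    for i in range(n):
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= (s[i] + right) // 2
    # Update step: smooth = even sample plus rounded quarter of neighbouring details
    for i in range(len(s)):
        left = d[i - 1] if i - 1 >= 0 else d[i]
        here = d[i] if i < n else d[n - 1]
        s[i] += (left + here + 2) // 4
    return s, d

def cdf22_inverse(s, d):
    s, d = s[:], d[:]
    n = len(d)
    for i in range(len(s)):           # undo update
        left = d[i - 1] if i - 1 >= 0 else d[i]
        here = d[i] if i < n else d[n - 1]
        s[i] -= (left + here + 2) // 4
    for i in range(n):                # undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += (s[i] + right) // 2
    x = [0] * (len(s) + n)
    x[0::2], x[1::2] = s, d
    return x

sig = [10, 12, 14, 13, 9, 7, 8, 11]
print(cdf22_inverse(*cdf22_forward(sig)) == sig)   # → True: reversible, hence lossless
```

Because both lifting steps use exact integer arithmetic and are undone in reverse order, reconstruction is bit-exact, which is what makes the spectral and spatial decorrelation stages lossless.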
Resource Constrained Planning of Multiple Projects with Separable Activities
NASA Astrophysics Data System (ADS)
Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya
In this study we consider a resource-constrained planning problem for multiple projects with separable activities. This problem provides a plan for processing the activities while considering resource availability with time windows. We propose a solution algorithm based on the branch-and-bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with the minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.
Distribution of cataract surgical rate and its economic inequality in Iran.
Hashemi, Hassan; Rezvan, Farhad; Fotouhi, Akbar; Khabazkhoob, Mehdi; Gilasi, Hamidreza; Etemad, Koroush; Mahdavi, Alireza; Mehravaran, Shiva; Asgari, Soheila
2015-06-01
To determine the distribution of the cataract surgical rate (CSR; cataract surgeries per million population per year), the CSR in the population older than 50 years (CSR 50+) in the provinces of Iran, and their economic inequality in 2010. As part of the cross-sectional 2011 CSR survey, the provincial CSR and CSR 50+ were calculated as the total number of surgeries in major and minor centers divided by the total population and the population older than 50 years in each province. Economic inequality was determined using the average province income, the average urban and rural household incomes, and the percentage of urban and rural population in each province. Tehran and Ilam provinces had the highest and lowest CSR (12,465 vs. 359), respectively. Fars and Ilam provinces had the highest and lowest CSR 50+ (71,381 vs. 2481), respectively. Low CSR (<3000) was detected in 9 provinces, where a 2.4 to 735.7% increase is needed to reach the minimum required. High CSR (>5000) was observed in 14 provinces (45.2%), where rates were 0.6 to 59.9% higher than the global target. Cataract surgical rate increased at higher economic quintiles. Differences between the first, second, and fifth (poorest) quintiles were statistically significant. The CSR concentration index was 0.1964 (95% confidence interval, 0.0964 to 0.2964). In line with the goals of the Vision 2020 initiative to eliminate cataract blindness, more than 70% of geographic areas in Iran have achieved the minimum CSR of 3000 or more. However, a large gap still exists in less than 30% of areas, mainly attributed to economic status.
Analysis of large system black box verification test data
NASA Technical Reports Server (NTRS)
Clapp, Kenneth C.; Iyer, Ravishankar Krishnan
1993-01-01
Issues regarding black box verification of large systems are explored. The study begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysis methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data is categorized, and average behavior shows a very wide variation in the number of tests run and in pass rates (pass rates ranged from 71 percent to 98 percent). The 'white box' data contained in the integrated database is studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (ratio of number of failed tests to minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error-prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.
Sampling large random knots in a confined space
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
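A minimal sketch of the uniform random polygon model: vertices drawn uniformly in the unit cube, joined consecutively with a closing edge, and the crossings of the xy-projection counted with an orientation test. The helper names are ours, not the paper's, and this counts only proper (transversal) crossings.

```python
import random

def uniform_random_polygon(n, seed=0):
    """n vertices drawn uniformly in the unit cube; edges join consecutive
    vertices, with a closing edge from the last vertex back to the first."""
    rng = random.Random(seed)
    return [(rng.random(), rng.random(), rng.random()) for _ in range(n)]

def _orient(p, q, r):
    """Signed area test: >0 if p,q,r turn counterclockwise."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def crossings_2d(poly):
    """Count proper crossings among non-adjacent edges of the xy-projection."""
    pts = [(x, y) for x, y, _z in poly]
    n = len(pts)
    edges = [(pts[i], pts[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # adjacent via the closing edge, skip
            p1, p2 = edges[i]
            q1, q2 = edges[j]
            if (_orient(p1, p2, q1) * _orient(p1, p2, q2) < 0
                    and _orient(q1, q2, p1) * _orient(q1, q2, p2) < 0):
                count += 1
    return count

# A planar "bowtie" has exactly one self-crossing in projection:
bowtie = [(0, 0, 0), (1, 1, 0), (1, 0, 0), (0, 1, 0)]
```

Averaging `crossings_2d` over many samples for growing n is one way to observe numerically the O(n^2) growth of the expected crossing number.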
Graph Theoretical Analysis Reveals: Women's Brains Are Better Connected than Men's.
Szalkai, Balázs; Varga, Bálint; Grolmusz, Vince
2015-01-01
Deep graph-theoretic ideas in the context of the graph of the World Wide Web led to the definition of Google's PageRank and the subsequent rise of the most popular search engine to date. Brain graphs, or connectomes, are being widely explored today. We believe that non-trivial graph theoretic concepts, similarly to what happened in the case of the World Wide Web, will lead to discoveries illuminating the structural and functional details of animal and human brains. When scientists examine large networks of tens or hundreds of millions of vertices, only fast algorithms can be applied because of the size constraints. In the case of diffusion MRI-based structural human brain imaging, the effective vertex number of the connectomes, or brain graphs, derived from the data is on the scale of several hundred today. That size facilitates applying exact mathematical graph algorithms even for some hard-to-compute (NP-hard) quantities like vertex cover or balanced minimum cut. In the present work we have examined brain graphs computed from the data of the Human Connectome Project, recorded from male and female subjects between ages 22 and 35. Significant differences were found between the male and female structural brain graphs: we show that the average female connectome has more edges, is a better expander graph, has larger minimal bisection width, and has more spanning trees than the average male connectome. Since the average female brain weighs less than the brain of males, these properties show that the female brain has better graph theoretical properties, in a sense, than the brain of males. It is known that the female brain has a smaller gray matter/white matter ratio than males, that is, a larger white matter/gray matter ratio; this observation is in line with our findings concerning the number of edges, since the white matter consists of myelinated axons, which, in turn, roughly correspond to the connections in the brain graph.
We have also found that the minimum bisection width, normalized by the edge number, is significantly larger in both the right and the left hemispheres in females; therefore, the differing bisection widths are independent of the difference in the number of edges.
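The normalized quantity above is the minimum bisection width divided by the edge count. Minimum bisection is NP-hard, but for toy graphs it can be computed by brute force; a stdlib-only sketch (the function name and graph are illustrative, not the study's pipeline):

```python
from itertools import combinations

def normalized_min_bisection(n_nodes, edges):
    """Brute-force minimum bisection width of a graph with an even number
    of nodes, normalized by the total edge count. Exponential in n_nodes,
    so only usable on toy examples."""
    best = None
    for half in combinations(range(n_nodes), n_nodes // 2):
        side = set(half)
        # count edges cut by this balanced partition
        cut = sum(1 for u, v in edges if (u in side) != (v in side))
        best = cut if best is None else min(best, cut)
    return best / len(edges)

# Two triangles joined by a single bridge edge: the optimal bisection
# separates the triangles and cuts only the bridge (1 of 7 edges).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
ratio = normalized_min_bisection(6, edges)
```

Normalizing by the edge number is what lets the comparison disentangle the bisection-width difference from the raw edge-count difference between the sexes.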
Estimating watershed level nonagricultural pesticide use from golf courses using geospatial methods
Fox, G.A.; Thelin, G.P.; Sabbagh, G.J.; Fuchs, J.W.; Kelly, I.D.
2008-01-01
Limited information exists on pesticide use for nonagricultural purposes, making it difficult to estimate pesticide loadings from nonagricultural sources to surface water and to conduct environmental risk assessments. A method was developed to estimate the amount of pesticide use on recreational turf grasses, specifically golf course turf grasses, for watersheds located throughout the conterminous United States (U.S.). The approach estimates pesticide use: (1) based on the area of recreational turf grasses (used as a surrogate for turf associated with golf courses) within the watershed, which was derived from maps of land cover, and (2) from data on the location and average treatable area of golf courses. The area of golf course turf grasses determined from these two methods was used to calculate the percentage of each watershed planted in golf course turf grass (percent crop area, or PCA). Turf-grass PCAs derived from the two methods were used with recommended application rates provided on pesticide labels to estimate total pesticide use on recreational turf within 1,606 watersheds associated with surface-water sources of drinking water. These pesticide use estimates made from label rates and PCAs were compared to use estimates from industry sales data on the amount of each pesticide sold for use within the watershed. The PCAs derived from the land-cover data had an average value of 0.4% of a watershed with minimum of 0.01% and a maximum of 9.8%, whereas the PCA values that are based on the number of golf courses in a watershed had an average of 0.3% of a watershed with a minimum of <0.01% and a maximum of 14.2%. Both the land-cover method and the number of golf courses method produced similar PCA distributions, suggesting that either technique may be used to provide a PCA estimate for recreational turf. The average and maximum PCAs generally correlated to watershed size, with the highest PCAs estimated for small watersheds. 
Using watershed-specific PCAs combined with label rates resulted in a greater than two orders of magnitude over-estimation of pesticide use compared to estimates from sales data. © 2008 American Water Resources Association.
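The two quantities at the core of the method are a PCA (percent crop area) and a label-rate use estimate; a minimal sketch with hypothetical areas and a hypothetical 5 kg/ha label rate (the real study used actual land-cover areas and product labels):

```python
def percent_crop_area(turf_area, watershed_area):
    """PCA: percentage of the watershed planted in golf-course turf grass.
    Both areas must be in the same units."""
    return 100.0 * turf_area / watershed_area

def pesticide_use(pca, watershed_area_ha, label_rate_kg_per_ha):
    """Label-rate use estimate: treated area times the label application rate."""
    treated_ha = watershed_area_ha * pca / 100.0
    return treated_ha * label_rate_kg_per_ha

pca = percent_crop_area(40.0, 10_000.0)  # 0.4%, the reported average PCA
use = pesticide_use(pca, 10_000.0, 5.0)  # assumed 5 kg/ha label rate
```

Because label rates are maxima, this construction over-estimates actual use, consistent with the comparison to sales data above.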
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
Markoulli, Maria; Duong, Tran Bao; Lin, Margaret; Papas, Eric
2018-02-01
To compare non-invasive break-up time (NIBUT) when measured with the Tearscope-Plus™ and the Oculus® Keratograph 5M, and to compare lipid layer thicknesses (LLT) when measured with the Tearscope-Plus™ and the LipiView®. This study also set out to establish the repeatability of these methods. The following measurements were taken from both eyes of 24 participants on two occasions: non-invasive keratograph break-up time using the Oculus® (NIKBUT-1 and NIKBUT-average), NIBUT using the Tearscope-Plus™, and LLT using the LipiView® (minimum, maximum, and average) and Tearscope-Plus™. The Tearscope-Plus™ grades were converted to nanometers. There were no significant differences between eyes (Tearscope-Plus™ NIBUT: p = 0.52; NIKBUT-1: p = 0.052; NIKBUT-average: p = 0.73; Tearscope-Plus™ LLT: p = 0.13; LipiView® average, maximum, or minimum: p = 0.68, 0.39 and 0.50, respectively) or days (Tearscope-Plus™ NIBUT: p = 0.32; NIKBUT-1: p = 0.65; NIKBUT-average: p = 0.54; Tearscope-Plus™ LLT: p = 0.26; LipiView® average, maximum, or minimum: p = 0.20, 0.09, and 0.10, respectively). LLT was significantly greater with the Tearscope-Plus™ (80.4 ± 34.0 nm) compared with the LipiView® average (56.3 ± 16.1 nm, p = 0.007), minimum (50.1 ± 15.8 nm, p < 0.001) but not maximum (67.2 ± 19.6 nm, p = 0.55). NIBUT was significantly greater with the Tearscope-Plus™ (15.9 ± 10.7 seconds) compared to the NIKBUT-1 (8.2 ± 3.5 seconds, p = 0.006) but not NIKBUT-average (10.9 ± 3.9 seconds, p = 0.08). The Tearscope-Plus™ is not interchangeable with either the Oculus® K5M measurement of tear stability (NIKBUT-1) or the LipiView® maximum and minimum lipid thickness.
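The interchangeability comparison rests on paired differences between devices measured on the same eyes; a minimal sketch with illustrative values (not the study's measurements):

```python
import statistics

def paired_mean_difference(a, b):
    """Mean of the paired differences between two instruments'
    measurements on the same subjects."""
    diffs = [x - y for x, y in zip(a, b)]
    return statistics.mean(diffs)

# Hypothetical LLT pairs (nm): Tearscope-Plus vs LipiView average
tearscope = [80.0, 82.0]
lipiview = [56.0, 58.0]
bias = paired_mean_difference(tearscope, lipiview)  # 24.0 nm
```

A consistently non-zero mean difference, as reported above for LLT and NIBUT, is what rules out treating the instruments as interchangeable.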
NASA Astrophysics Data System (ADS)
Sudharsanan, Subramania I.; Mahalanobis, Abhijit; Sundareshan, Malur K.
1990-12-01
Discrete frequency domain design of Minimum Average Correlation Energy filters for optical pattern recognition introduces an implementational limitation of circular correlation. An alternative methodology which uses space domain computations to overcome this problem is presented. The technique is generalized to construct an improved synthetic discriminant function which satisfies the conflicting requirements of reduced noise variance and sharp correlation peaks to facilitate ease of detection. A quantitative evaluation of the performance characteristics of the new filter is conducted and is shown to compare favorably with the well known Minimum Variance Synthetic Discriminant Function and the space domain Minimum Average Correlation Energy filter, which are special cases of the present design.
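For context, a sketch of the classical frequency-domain MACE filter that the paper takes as its starting point: h = D⁻¹X(XᴴD⁻¹X)⁻¹u, where the columns of X are vectorized 2-D FFTs of the training images, D holds the average power spectrum on its diagonal, and u holds the desired correlation-peak values. This is the textbook construction, not the authors' space-domain variant.

```python
import numpy as np

def mace_filter(images, u=None):
    """Frequency-domain MACE filter: satisfies the peak constraints
    X^H h = u while minimizing average correlation-plane energy."""
    # Columns of X: vectorized 2-D FFTs of the training images
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)
    d = np.mean(np.abs(X) ** 2, axis=1)      # diagonal of D (avg power spectrum)
    Xd = X / d[:, None]                       # D^-1 X
    u = np.ones(X.shape[1]) if u is None else np.asarray(u, dtype=float)
    h = Xd @ np.linalg.solve(X.conj().T @ Xd, u)
    return h.reshape(images[0].shape)
```

The circular-correlation limitation mentioned above arises because the FFT implements periodic (circular) rather than linear correlation, which is what motivates the space-domain reformulation.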
Sigma Routing Metric for RPL Protocol.
Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian
2018-04-21
This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the objective functions standardized by the IETF, we find OF0, which is based on the minimum hop count, and MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is selected using the standard deviation of the ETX values along the route, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are done with the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
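A simplified reading of the metric: score each candidate route by the standard deviation of its per-hop ETX values, preferring routes whose link qualities are uniform. This sketch is our interpretation of the idea, not the paper's implementation.

```python
import statistics

def sigma_etx(route_etx):
    """SIGMA-ETX sketch: standard deviation of the per-hop ETX values
    along a route (lower is better)."""
    return statistics.pstdev(route_etx)

# Two candidate routes with the same average ETX (2.0):
uniform = [2.0, 2.0, 2.0]   # steady links, sigma = 0
uneven = [1.0, 1.0, 4.0]    # one weak link, sigma ≈ 1.41
best = min([uniform, uneven], key=sigma_etx)
```

An average-based metric cannot distinguish these two routes, while the deviation-based score penalizes the route containing the potential bottleneck link.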
WEAMR — A Weighted Energy Aware Multipath Reliable Routing Mechanism for Hotline-Based WSNs
Tufail, Ali; Qamar, Arslan; Khan, Adil Mehmood; Baig, Waleed Akram; Kim, Ki-Hyung
2013-01-01
Reliable source-to-sink communication is the most important factor for an efficient routing protocol, especially in domains such as military, healthcare, and disaster recovery applications. We present weighted energy aware multipath reliable routing (WEAMR), a novel energy-aware multipath routing protocol which utilizes hotline-assisted routing to meet such requirements for mission critical applications. The protocol reduces the average number of hops from source to destination and provides unmatched reliability compared to well-known reactive ad hoc protocols, i.e., AODV and AOMDV. Our protocol makes efficient use of network paths based on weighted cost calculation and intelligently selects the best possible paths for data transmissions. The path cost calculation considers the end-to-end number of hops, latency, and the minimum node energy value on the path. In case of path failure, path recalculation is done efficiently with minimum latency and control-packet overhead. Our evaluation shows that our proposal provides better end-to-end delivery with less routing overhead and a higher packet delivery success ratio compared to AODV and AOMDV. The use of multipath also increases the overall lifetime of the WSN by using optimum-energy available paths between sender and receiver. PMID:23669714
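A hypothetical weighted path cost in the spirit of the description above: fewer hops and lower latency reduce the cost, and a higher minimum node energy on the path also reduces it. The functional form and weights are our assumptions for illustration; the paper does not publish this exact formula here.

```python
def weamr_cost(path, w_hops=1.0, w_latency=1.0, w_energy=1.0):
    """Illustrative weighted cost: hops + total latency + penalty for a
    low-energy bottleneck node (higher min energy -> lower penalty)."""
    hops = len(path["latencies"])
    total_latency = sum(path["latencies"])
    min_energy = min(path["energies"])
    return w_hops * hops + w_latency * total_latency + w_energy / min_energy

# path_b is longer and passes through a nearly depleted node (0.1 energy)
path_a = {"latencies": [1.0, 1.0], "energies": [0.9, 0.8]}
path_b = {"latencies": [1.0, 1.0, 1.0], "energies": [0.9, 0.9, 0.1]}
best = min([path_a, path_b], key=weamr_cost)
```

Penalizing the minimum energy on the path (rather than the average) is what steers traffic away from routes that would die early, extending overall network lifetime.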
Izquierdo, M; González-Badillo, J J; Häkkinen, K; Ibáñez, J; Kraemer, W J; Altadill, A; Eslava, J; Gorostiaga, E M
2006-09-01
The purpose of this study was to examine the effect of different loads on repetition speed during single sets of repetitions to failure in the bench press and parallel squat. Thirty-six physically active men performed a 1-repetition maximum test in the bench press (1RM(BP)) and half squat (1RM(HS)), and then performed maximal power-output continuous repetition sets to failure, in random order every 10 days, with submaximal loads (60%, 65%, 70%, and 75% of 1RM) in the bench press and parallel squat. The average velocity of each repetition was recorded by linking a rotary encoder to the end part of the bar. The values of 1RM(BP) and 1RM(HS) were 91 +/- 17 and 200 +/- 20 kg, respectively. The number of repetitions performed for a given percentage of 1RM was significantly higher (p < 0.001) in the half squat than in the bench press. Average repetition velocity decreased at a greater rate in the bench press than in the parallel squat. The significant reductions in average repetition velocity (expressed as a percentage of the average velocity achieved during the initial repetition) were observed at a higher percentage of the total number of repetitions performed in the parallel squat (48-69%) than in the bench press (34-40%). The major finding of this study was that, for a given muscle action (bench press or parallel squat), the pattern of reduction in the relative average velocity achieved during each repetition and the relative number of repetitions performed was the same for all percentages of 1RM tested. However, relative average velocity decreased at a greater rate in the bench press than in the parallel squat. This indicates that in the bench press the significant reductions in average repetition velocity occurred when the number of repetitions exceeded about one third (34%) of the total number of repetitions performed, whereas in the parallel squat it was nearly one half (48%).
Conceptually, this would indicate that for a given exercise (bench press or squat) and percentage of maximal dynamic strength (1RM), the pattern of velocity decrease can be predicted over a set of repetitions, so that a minimum repetition threshold to ensure maximal speed performance is determined.
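The practical rule above can be sketched as a repetition threshold: given the possible number of repetitions at a load, velocity remains near-maximal for roughly the first 34% (bench press) or 48% (parallel squat) of them. The function below is our paraphrase of that rule, using only the fractions reported above.

```python
def max_reps_before_slowdown(total_reps, exercise):
    """Repetitions that can be performed before the reported significant
    velocity drop: ~34% of possible reps in the bench press, ~48% in the
    parallel squat (fractions taken from the study's summary)."""
    fraction = {"bench_press": 0.34, "parallel_squat": 0.48}[exercise]
    return int(total_reps * fraction)

# If 20 repetitions to failure are possible at a given load:
bench_threshold = max_reps_before_slowdown(20, "bench_press")    # 6 reps
squat_threshold = max_reps_before_slowdown(20, "parallel_squat")  # 9 reps
```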
Code of Federal Regulations, 2011 CFR
2011-07-01
... time 1 Method for demonstrating compliance 2 Particulate matter mg/dscm (gr/dscf) 197 (0.086) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method 26A or 29 of appendix A-8 of part 60. Carbon monoxide ppmv 40 3-run average (1-hour minimum...
Code of Federal Regulations, 2011 CFR
2011-07-01
... time 1 Method for demonstrating compliance 2 Particulate matter mg/dscm (gr/dscf) 87 (0.038) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method 26A or 29 of appendix A-8 of part 60. Carbon monoxide ppmv 20 3-run average (1-hour minimum...
Code of Federal Regulations, 2013 CFR
2013-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10..., appendix A-3 or appendix A-8). Sulfur dioxide 11 parts per million dry volume 3-run average (1 hour minimum... Apply to Incinerators on and After [Date to be specified in state plan] a 6 Table 6 to Subpart DDDD of...
Code of Federal Regulations, 2014 CFR
2014-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10..., appendix A-3 or appendix A-8). Sulfur dioxide 11 parts per million dry volume 3-run average (1 hour minimum... Apply to Incinerators on and After [Date to be specified in state plan] a 6 Table 6 to Subpart DDDD of...
Adjusted monthly temperature and precipitation values for Guinea Conakry (1941-2010) using HOMER.
NASA Astrophysics Data System (ADS)
Aguilar, Enric; Aziz Barry, Abdoul; Mestre, Olivier
2013-04-01
Africa is a data-sparse region and there are very few studies presenting homogenized monthly records. In this work, we introduce a dataset consisting of 12 stations spread over Guinea Conakry containing daily values of maximum and minimum temperature and accumulated rainfall for the period 1941-2010. The daily values have been quality controlled using RClimDex routines, plus other interactive quality control applications coded by the authors. After applying the different tests, more than 200 daily values were flagged as doubtful and carefully checked against the statistical distribution of the series and the rest of the dataset. Finally, 40 values were modified or set to missing and the rest were validated. The quality-controlled daily dataset was used to produce monthly means, which were homogenized with HOMER, a new R-package which includes the relative methods that performed best in the experiments conducted in the framework of the COST-HOME action. A total of 38 inhomogeneities were found for temperature. As a total of 788 years of data were analyzed, the average ratio was one break every 20.7 years. The station with the largest number of inhomogeneities was Conakry (5 breaks), and one station, Kissidougou, was identified as homogeneous. The average number of breaks per station was 3.2. The mean value of the monthly factors applied to maximum (minimum) temperature was 0.17 °C (-1.08 °C). For precipitation, because this variable demands a denser network to be homogenized correctly, only two major inhomogeneities, in Conakry (1941-1961, -12%) and Kindia (1941-1976, -10%), were corrected. The adjusted dataset was used to compute regional series for the three variables and trends for the 1941-2010 period. The regional mean has been computed by simply averaging anomalies relative to 1971-2000 of the 12 time series.
Two different versions have been obtained: the first (A) makes use of the missing-value interpolation performed by HOMER (so all annual values in the regional series are an average of 12 anomalies); the second (B) removes the missing values, so each value of the regional series is an average of 5 to 12 anomalies. In the latter case, a variance stabilization factor has been applied. As a last step, a trend analysis was applied to the regional series. This has been done using two different approaches: standard least squares regression (LS) and Zhang's implementation of the Sen slope estimator (SEN), applied using the zyp R-package. The results for the A and B series and the different trend calculations are very similar in terms of slopes and significance. All the identified trends are significant at the 95% confidence level or better. Using the A series and the SEN slope, the annual regional mean of maximum temperatures has increased 0.135 °C/decade (95% confidence interval: 0.087/0.173) and the annual regional mean of minimum temperatures 0.092 °C/decade (0.050/0.135). Maximum temperatures present high values in the 1940s to 1950s and a large increase in the last decades. In contrast, minimum temperatures were relatively cooler in the 1940s and 1950s, and their increase in the last decades is more moderate. Finally, the regional mean of annual accumulated precipitation decreased between 1941 and 2010, with a trend of -2.20 mm (-3.82/-0.64). The precipitation series are dominated by the high values before 1970, followed by a well-known decrease in rainfall. This homogenized monthly series will improve future analyses over this portion of Western Africa.
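The two computational steps of the regional analysis can be sketched with stdlib tools: anomalies relative to a base-period mean, and the Sen slope as the median of all pairwise slopes. This is a simplified stand-in for the zyp implementation, which additionally handles serial correlation.

```python
import statistics
from itertools import combinations

def theil_sen_slope(x, y):
    """Sen slope estimator: median of the slopes between all pairs of points.
    Robust to outliers, unlike least squares."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)]
    return statistics.median(slopes)

def anomalies(series, base_period):
    """Anomalies relative to the mean of a base period (e.g. 1971-2000)."""
    base_mean = statistics.mean(base_period)
    return [v - base_mean for v in series]

# A perfectly linear toy series with slope 0.1 per year:
years = [0, 1, 2, 3, 4]
temps = [10.0, 10.1, 10.2, 10.3, 10.4]
slope = theil_sen_slope(years, temps)
```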
Documentation of a deep percolation model for estimating ground-water recharge
Bauer, H.H.; Vaccaro, J.J.
1987-01-01
A deep percolation model, which operates on a daily basis, was developed to estimate long-term average groundwater recharge from precipitation. It has been designed primarily to simulate recharge in large areas with variable weather, soils, and land uses, but it can also be used at any scale. The physical and mathematical concepts of the deep percolation model, its subroutines and data requirements, and input data sequence and formats are documented. The physical processes simulated are soil moisture accumulation, evaporation from bare soil, plant transpiration, surface water runoff, snow accumulation and melt, and accumulation and evaporation of intercepted precipitation. The minimum data set for the operation of the model consists of daily values of precipitation and maximum and minimum air temperature, soil thickness and available water capacity, soil texture, and land use. Long-term average annual precipitation, actual daily stream discharge, monthly estimates of base flow, Soil Conservation Service surface runoff curve numbers, land surface altitude-slope-aspect, and temperature lapse rates are optional. The program is written in the FORTRAN 77 language with no enhancements and should run on most computer systems without modification. Documentation has been prepared so that program modifications may be made to include additional physical processes or delete ones not considered important. (Author's abstract)
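A toy analogue of the model's daily soil-moisture accounting, reduced to a single bucket: precipitation fills soil storage, evapotranspiration drains it, and water above the available-water capacity becomes deep percolation (recharge). The real model also handles snow, interception, runoff, and plant transpiration separately; this sketch is only the core water-balance idea.

```python
def daily_water_balance(precip, pet, storage, capacity):
    """One day of a simple soil-moisture bucket (all values in mm).
    Returns (new_storage, recharge)."""
    storage = max(0.0, storage + precip - pet)   # add rain, remove ET demand
    recharge = max(0.0, storage - capacity)      # overflow percolates downward
    return storage - recharge, recharge

# 30 mm of rain on a nearly full soil column (80 of 100 mm), 5 mm ET demand:
new_storage, recharge = daily_water_balance(30.0, 5.0, 80.0, 100.0)
```

Iterating this over a daily weather record and summing `recharge` gives a long-term average recharge estimate, which is the model's purpose.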
76 FR 30243 - Minimum Security Devices and Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures.... Title of Proposal: Minimum Security Devices and Procedures. OMB Number: 1550-0062. Form Number: N/A. Description: The requirement that savings associations establish a written security program is necessitated by...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castelluccio, Gustavo M.; McDowell, David L.
2015-05-22
The number of cycles required to form and grow microstructurally small fatigue cracks in metals exhibits substantial variability, particularly for low applied strain amplitudes. This variability is commonly attributed to the heterogeneity of cyclic plastic deformation within the microstructure, and presents a challenge to minimum life design of fatigue resistant components. Our paper analyzes sources of variability that contribute to the driving force of transgranular fatigue cracks within nucleant grains. We also employ crystal plasticity finite element simulations that explicitly render the polycrystalline microstructure and Fatigue Indicator Parameters (FIPs) averaged over different volume sizes and shapes relative to the anticipated fatigue damage process zone. Volume averaging is necessary both to describe a finite fatigue damage process zone and to regularize mesh dependence in simulations. Furthermore, results from constant amplitude remote applied straining are characterized in terms of the extreme value distributions of volume averaged FIPs. Grain averaged FIP values effectively mitigate mesh sensitivity, but they smear out variability within grains. Volume averaging over bands that encompass critical transgranular slip planes appears to be the most attractive approach to mitigate mesh sensitivity while preserving variability within grains.
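The volume averaging described above is, at its core, a volume-weighted mean of per-element FIP values over a chosen region (a grain, or a band around a slip plane); a minimal sketch with illustrative values:

```python
def volume_averaged_fip(fip_values, volumes):
    """Volume-weighted average of a Fatigue Indicator Parameter over a set
    of finite elements (e.g. the elements of one grain or slip band)."""
    total_volume = sum(volumes)
    return sum(f * v for f, v in zip(fip_values, volumes)) / total_volume

# Two elements: a small high-FIP element and a larger low... er, higher one
avg = volume_averaged_fip([0.1, 0.3], [1.0, 3.0])  # ≈ 0.25
```

Choosing the averaging region is the trade-off discussed above: whole-grain averaging suppresses mesh sensitivity but also the within-grain extremes that drive crack formation, whereas band averaging keeps both in balance.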
Low-flow characteristics of streams in Ohio through water year 1997
Straub, David E.
2001-01-01
This report presents selected low-flow and flow-duration characteristics for 386 sites throughout Ohio. These sites include 195 long-term continuous-record stations with streamflow data through water year 1997 (October 1 to September 30) and 191 low-flow partial-record stations with measurements into water year 1999. The characteristics presented for the long-term continuous-record stations are minimum daily streamflow; average daily streamflow; harmonic mean flow; 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 5-, 10-, 20-, and 50-year recurrence intervals; and 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, 50-, 40-, 30-, 20-, and 10-percent daily duration flows. The characteristics presented for the low-flow partial-record stations are minimum observed streamflow; estimated 1-, 7-, 30-, and 90-day minimum average low flow with 2-, 10-, and 20-year recurrence intervals; and estimated 98-, 95-, 90-, 85-, and 80-percent daily duration flows. The low-flow frequency and duration analyses were done for three seasonal periods (warm weather, May 1 to November 30; winter, December 1 to February 28/29; and autumn, September 1 to November 30), plus the annual period based on the climatic year (April 1 to March 31).
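The n-day minimum average low-flow statistics described above reduce to a rolling mean followed by an annual minimum. The Python sketch below illustrates the 7-day case on synthetic daily data, grouping by the climatic year (April 1 to March 31) as in the report; the streamflow values themselves are invented for illustration.

```python
import numpy as np
import pandas as pd

# Synthetic daily streamflow series (cfs) spanning seven climatic years.
days = pd.date_range("1990-04-01", "1997-03-31", freq="D")
np.random.seed(0)
flow = pd.Series(50 + 40 * np.sin(2 * np.pi * days.dayofyear / 365.25)
                 + np.random.gamma(2.0, 5.0, len(days)), index=days)

# 7-day minimum average low flow: the annual minimum of the 7-day rolling
# mean, grouped by climatic year (the year in which April 1 falls).
rolling7 = flow.rolling(7).mean()
climatic_year = days.year - (days.month < 4)
min7 = rolling7.groupby(climatic_year).min()
print(min7)
```

Frequency statistics such as the 7-day, 10-year low flow would then be fitted to the resulting annual series with a suitable low-flow probability distribution.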
Ibáñez, Javier; Vélez, M Dolores; de Andrés, M Teresa; Borrego, Joaquín
2009-11-01
Distinctness, uniformity and stability (DUS) testing of varieties is usually required to apply for Plant Breeders' Rights. This exam is currently carried out using morphological traits, where the establishment of distinctness through a minimum distance is the key issue. In this study, the possibility of using microsatellite markers for establishing the minimum distance in a vegetatively propagated crop (grapevine) has been evaluated. A collection of 991 accessions has been studied with nine microsatellite markers and pair-wise compared, and the highest intra-variety distance and the lowest inter-variety distance were determined. The collection included 489 different genotypes, as well as synonyms and sports. Average values for number of alleles per locus (19), Polymorphic Information Content (0.764) and heterozygosities observed (0.773) and expected (0.785) indicated the high level of polymorphism existing in grapevine. The maximum intra-variety variability found was one allele between two accessions of the same variety, of a total of 3,171 pair-wise comparisons. The minimum inter-variety variability found was two alleles between two pairs of varieties, of a total of 119,316 pair-wise comparisons. On the basis of these results, the minimum distance required to establish distinctness in grapevine with the nine microsatellite markers used could be set at two alleles. General rules for the use of the system as a support for establishing distinctness in vegetatively propagated crops are discussed.
Stockwell, Tim; Zhao, Jinhui; Sherk, Adam; Callaghan, Russell C; Macdonald, Scott; Gatley, Jodi
2017-07-01
Saskatchewan's introduction in April 2010 of minimum prices graded by alcohol strength led to an average minimum price increase of 9.1% per Canadian standard drink (=13.45 g ethanol). This increase was shown to be associated with reduced consumption and switching to lower alcohol content beverages. Police also informally reported marked reductions in night-time alcohol-related crime. This study aims to assess the impacts of changes to Saskatchewan's minimum alcohol-pricing regulations between 2008 and 2012 on selected crime events often related to alcohol use. Data were obtained from Canada's Uniform Crime Reporting Survey. Auto-regressive integrated moving average time series models were used to test immediate and lagged associations between minimum price increases and rates of night-time and police-identified alcohol-related crimes. Controls were included for simultaneous crime rates in the neighbouring province of Alberta, economic variables, linear trend, seasonality and autoregressive and/or moving-average effects. The introduction of increased minimum alcohol prices was associated with an abrupt decrease in night-time alcohol-related traffic offences for men (-8.0%, P < 0.001), but not women. No significant immediate changes were observed for non-alcohol-related driving offences, disorderly conduct or violence. Significant monthly lagged effects were observed for violent offences (-19.7% at month 4 to -18.2% at month 6), which broadly corresponded to lagged effects in on-premise alcohol sales. Increased minimum alcohol prices may contribute to reductions in alcohol-related traffic offences and violent crimes perpetrated by men. Observed lagged effects for violent incidents may be due to a delay in bars passing on increased prices to their customers, perhaps because of inventory stockpiling. [Stockwell T, Zhao J, Sherk A, Callaghan RC, Macdonald S, Gatley J. Assessing the impacts of Saskatchewan's minimum alcohol pricing regulations on alcohol-related crime. 
Drug Alcohol Rev 2017;36:492-501]. © 2016 Australasian Professional Society on Alcohol and other Drugs.
NASA Astrophysics Data System (ADS)
Cao, X.; Du, A.
2014-12-01
We statistically studied the response time of the SYMH index to the solar wind energy input ɛ using the RFA approach. The average response time was 64 minutes. There was no clear trend among these events with respect to the minimum SYMH or storm type. The response time of the magnetosphere to solar wind energy input thus appears to be independent of storm intensity and solar wind conditions. The response function shows a single peak even when the solar wind energy input and the SYMH have multiple peaks. The response time appears to be an intrinsic property of the magnetosphere, representing the typical formation time of the ring current; it may be controlled by magnetospheric temperature, average number density, oxygen abundance, and related factors.
NASA Technical Reports Server (NTRS)
Eckstrom, Clinton V.
1970-01-01
A 40-foot-nominal-diameter (12.2-meter) modified ringsail parachute was flight tested as part of the NASA Supersonic High Altitude Parachute Experiment (SHAPE) program. The 41-pound (18.6-kg) test parachute system was deployed from a 239.5-pound (108.6-kg) instrumented payload by means of a deployment mortar when the payload was at an altitude of 171,400 feet (52.3 km), a Mach number of 2.95, and a free-stream dynamic pressure of 9.2 lb/sq ft (440 N/m(exp 2)). The parachute deployed properly, suspension line stretch occurring 0.54 second after mortar firing with a resulting snatch-force loading of 932 pounds (4146 newtons). The maximum loading due to parachute opening was 5162 pounds (22 962 newtons) at 1.29 seconds after mortar firing. The first near full inflation of the canopy at 1.25 seconds after mortar firing was followed immediately by a partial collapse and subsequent oscillations of frontal area until the system had decelerated to a Mach number of about 1.5. The parachute then attained a shape that provided full drag area. During the supersonic part of the test, the average axial-force coefficient varied from a minimum of about 0.24 at a Mach number of 2.7 to a maximum of 0.54 at a Mach number of 1.1. During descent under subsonic conditions, the average effective drag coefficient was 0.62 and parachute-payload oscillation angles averaged about +/-10 degrees with excursions to +/-20 degrees. The recovered parachute was found to have slight damage in the vent area caused by the attached deployment bag and mortar lid.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Recursive optimal pruning with applications to tree structured vector quantizers
NASA Technical Reports Server (NTRS)
Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen
1992-01-01
A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.
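The leaf-only constraint described above can be illustrated with a small greedy loop: at each step, remove the terminal node whose removal costs the least distortion per bit of rate saved, and record every intermediate codebook. This is a hedged Python sketch with invented (rate, distortion) numbers, not the authors' implementation.

```python
import heapq

# Hypothetical leaf records: (increase in distortion, decrease in rate) if
# that leaf is merged into its parent. Values are illustrative placeholders.
leaves = [(0.8, 0.5), (0.3, 0.5), (1.1, 1.0), (0.5, 0.5), (2.0, 1.0)]

def prune_schedule(leaves):
    """Greedy leaf-only pruning: repeatedly remove the leaf with the
    smallest distortion increase per bit of rate saved, emitting every
    intermediate (rate, distortion) codebook along the way."""
    rate = 10.0          # assumed total rate of the full tree
    dist = 1.0           # assumed distortion of the full tree
    heap = [(dd / dr, dd, dr) for dd, dr in leaves]
    heapq.heapify(heap)
    codebooks = [(rate, dist)]
    while heap:
        _, dd, dr = heapq.heappop(heap)
        rate -= dr
        dist += dd
        codebooks.append((rate, dist))
    return codebooks

for r, d in prune_schedule(leaves):
    print(f"rate={r:.1f}  distortion={d:.2f}")
```

Because every leaf removal yields a codebook, the schedule enumerates many more operating points than hull-only pruning, which is the effect the abstract describes.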
SU-E-T-614: Plan Averaging for Multi-Criteria Navigation of Step-And-Shoot IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, M; Gao, H; Craft, D
2015-06-15
Purpose: Step-and-shoot IMRT is fundamentally discrete in nature, while multi-criteria optimization (MCO) is fundamentally continuous: MCO planning consists of continuous sliding across the Pareto surface (the set of plans which represent the tradeoffs between organ-at-risk doses and target doses). In order to achieve close to real-time dose display during this sliding, it is desired that averaged plans share many of the same apertures as the pre-computed plans, since dose computation for apertures generated on-the-fly would be expensive. We propose a method to ensure that neighboring plans on a Pareto surface share many apertures. Methods: Our baseline step-and-shoot sequencing method is that of K. Engel (a method which minimizes the number of segments while guaranteeing the minimum number of monitor units), which we customize to sequence a set of Pareto optimal plans simultaneously. We also add an error tolerance to study the relationship between the number of shared apertures, the total number of apertures needed, and the quality of the fluence map re-creation. Results: We run tests for a 2D Pareto surface trading off rectum and bladder dose versus target coverage for a clinical prostate case. We find that if we enforce exact fluence map recreation, we are not able to achieve much sharing of apertures across plans. The total number of apertures for all seven beams and 4 plans without sharing is 217. With sharing and a 2% error tolerance, this number is reduced to 158 (73%). Conclusion: With the proposed method, the total number of apertures can be decreased by 42% (averaging) with no increase in total MU, when an error tolerance of 5% is allowed. With this large amount of sharing, dose computations for averaged plans which occur during Pareto navigation will be much faster, leading to a real-time what-you-see-is-what-you-get Pareto navigation experience.
Minghao Guo and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)
SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range-Finding Camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platt, M; Platt, M; Lamba, M
2016-06-15
Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
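The improvement of shift precision with the number of averaged frames follows from simple noise averaging. The Python sketch below reproduces the qualitative trend with a synthetic flat-surface model; the per-pixel noise level, distance, and ROI size are assumed values, not the study's camera parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
true_depth = 3000.0   # mm, surface at 3 m (assumed)
sigma = 5.0           # per-pixel range noise in mm (assumed)

def measured_shift(n_images, shift_mm=1.0, roi=(9, 9)):
    """Estimate a known shift by averaging the central ROI of n range
    images before and after moving the surface; returns the error in mm."""
    def depth_estimate(depth):
        frames = depth + rng.normal(0, sigma, (n_images, *roi))
        return frames.mean()
    return depth_estimate(true_depth + shift_mm) - depth_estimate(true_depth) - shift_mm

for n in (1, 10, 100, 1000):
    errs = [abs(measured_shift(n)) for _ in range(50)]
    print(f"{n:5d} images: mean |error| = {np.mean(errs):.3f} mm")
```

The error shrinks roughly as one over the square root of the number of averaged pixels times frames, consistent with the monotonic improvement reported in the Results.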
Particle Fluxes Over a Ponderosa Pine Plantation
NASA Astrophysics Data System (ADS)
Baker, B.; Goldstein, A.
2006-12-01
Atmospheric aerosols can affect visibility, climate, and health. Particle fluxes were measured continuously over a 15 year-old ponderosa pine plantation in the foothills of the Sierra Nevada from mid July to the end of September in the year 2005. Air at this field site is affected by both biogenic emissions from the dense forests of the surrounding area and by urban pollution transported from the Sacramento valley. It is believed that fluxes of very reactive hydrocarbons from plants to the atmosphere have an impact on the production and growth of atmospheric particles at this site. Two condensation particle counters (CPCs) were located near the top of a 12 m measurement tower, several meters above the top of the tree canopy. Particle count data was collected at 10 Hz and particle fluxes were determined using the eddy covariance method. A set of diffusion screens was added to the inlet of one of the CPCs such that the lower particle size limit for detection was increased to a diameter of approximately 40 nm. The other CPC counted particles with minimum diameters of 3 nm. Particle concentrations showed a distinct diurnal pattern with minimum daily average concentrations of 2000 particles cm-3 occurring at dawn, and average daily maximum concentrations of 5700 particles cm-3 occurring at dusk. The evening increase of particle number corresponded to the arrival of polluted air from the Sacramento region. During the day, deposition of particles to the forest canopy (daytime average of 5.8x106 particles m-2 s-1) was generally observed. Concentrations and fluxes of particles under 40 nm could be examined by subtracting the data of one CPC from the other. On average, the fraction of particles under 40 nm increased from less than 20% at dawn to more than 50% at dusk; indicating that air coming from the Sacramento region was enriched in smaller, newly formed aerosol. Daily average deposition fluxes of particles under 40 nm were 1.0x107 particles m-2 s-1. 
Much of this flux was due to large deposition fluxes during the final three weeks of the experiment. Deposition of particles above 40 nm averaged 1.0x106 particles m-2 s-1. Deposition velocities for the particles under 40 nm were typically between 1 and 10 mm s-1. Particle deposition was correlated most strongly with temperature, and also showed some correlation with relative humidity, particle number concentration, and ozone.
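The eddy covariance method mentioned above computes the flux as the covariance of the fluctuations of vertical wind speed and particle concentration about their block means. A minimal Python sketch on synthetic 10 Hz data follows; the anticorrelation magnitude is invented, chosen only to mimic daytime deposition.

```python
import numpy as np

# Synthetic synchronized 10 Hz series: vertical wind speed w (m/s) and
# particle concentration c (particles/cm^3) over a 30-minute block.
rng = np.random.default_rng(0)
n = 10 * 1800
w = rng.normal(0.0, 0.3, n)
c = 3000 + 50 * rng.normal(size=n) - 200 * w   # c anticorrelated with w

# Reynolds averaging: flux = mean of the product of the fluctuations.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux_cm3 = np.mean(w_prime * c_prime)          # particles cm^-3 m s^-1
flux = flux_cm3 * 1e6                          # particles m^-2 s^-1 (1 m^3 = 1e6 cm^3)
print(f"flux = {flux:.2e} particles m^-2 s^-1 (negative = deposition)")
```

A negative covariance (downward-moving air carrying higher concentrations) yields a negative flux, i.e. deposition to the canopy, matching the sign convention implied in the abstract.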
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...
13 CFR 120.829 - Job Opportunity average a CDC must maintain.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Job Opportunity average a CDC must... LOANS Development Company Loan Program (504) Requirements for Cdc Certification and Operation § 120.829 Job Opportunity average a CDC must maintain. (a) A CDC's portfolio must maintain a minimum average of...
14 CFR 23.1443 - Minimum mass flow of supplemental oxygen.
Code of Federal Regulations, 2010 CFR
2010-01-01
... discretion. (c) If first-aid oxygen equipment is installed, the minimum mass flow of oxygen to each user may... upon an average flow rate of 3 liters per minute per person for whom first-aid oxygen is required. (d...
Vectorization of a penalty function algorithm for well scheduling
NASA Technical Reports Server (NTRS)
Absar, I.
1984-01-01
In petroleum engineering, the oil production profiles of a reservoir can be simulated by using a finite gridded model. This profile is affected by the number and choice of wells, which in turn is a result of various production limits and constraints including, for example, the economic minimum well spacing, the number of drilling rigs available, and the time required to drill and complete a well. After a well is available it may be shut in because of excessive water or gas production. In order to optimize the field performance a penalty function algorithm was developed for scheduling wells. For an example with some 343 wells and 15 different constraints, the scheduling routine vectorized for the CYBER 205 averaged 560 times faster performance than the scalar version.
Weight optimization of an aerobrake structural concept for a lunar transfer vehicle
NASA Technical Reports Server (NTRS)
Bush, Lance B.; Unal, Resit; Rowell, Lawrence F.; Rehder, John J.
1992-01-01
An aerobrake structural concept for a lunar transfer vehicle was weight optimized through the use of the Taguchi design method, finite element analyses, and element sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter-depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The aerobrake structural configuration with the minimum weight was 44 percent less than the average weight of all the remaining satisfactory experimental configurations. In addition, the results of this study have served to bolster the advocacy of the Taguchi method for aerospace vehicle design. Both reduced analysis time and an optimized design demonstrated the applicability of the Taguchi method to aerospace vehicle design.
Wieczorek, Michael; LaMotte, Andrew E.
2010-01-01
This tabular data set represents the catchment-average for the 30-year (1971-2000) average daily minimum temperature in Celsius multiplied by 100 compiled for every MRB_E2RF1 catchment of selected Major River Basins (MRBs, Crawford and others, 2006). The source data were the United States Average Monthly or Annual Minimum Temperature, 1971 - 2000 raster data set produced by the PRISM Group at Oregon State University. The MRB_E2RF1 catchments are based on a modified version of the Environmental Protection Agency's (USEPA) ERF1_2 and include enhancements to support national and regional-scale surface-water quality modeling (Nolan and others, 2002; Brakebill and others, 2011). Data were compiled for every MRB_E2RF1 catchment for the conterminous United States covering New England and Mid-Atlantic (MRB1), South Atlantic-Gulf and Tennessee (MRB2), the Great Lakes, Ohio, Upper Mississippi, and Souris-Red-Rainy (MRB3), the Missouri (MRB4), the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf (MRB5), the Rio Grande, Colorado, and the Great basin (MRB6), the Pacific Northwest (MRB7) river basins, and California (MRB8).
41 CFR 302-4.704 - Must we require a minimum driving distance per day?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Federal Travel Regulation System RELOCATION ALLOWANCES PERMANENT CHANGE OF STATION (PCS) ALLOWANCES FOR... driving distance not less than an average of 300 miles per day. However, an exception to the daily minimum... reasons acceptable to you. ...
Directivity and trends of noise generated by a propeller in a wake
NASA Technical Reports Server (NTRS)
Block, P. J. W.; Gentry, C. L., Jr.
1986-01-01
An experimental study of the effects on far-field propeller noise of a pylon wake interaction was conducted with a scale model of a single-rotation propeller in a low-speed anechoic wind tunnel. A detailed mapping of the noise directivity was obtained at 10 test conditions covering a wide range of propeller power loadings at several subsonic tip speeds. Two types of noise penalties were investigated: pusher and spacing. The pusher noise penalty is the difference in the average overall sound pressure level, OASPL, for pusher and tractor installations. (In a pusher installation, the propeller disk is downstream of a pylon or another aerodynamic surface.) The spacing noise penalty is the difference in the average OASPL for different distances between the pylon trailing edge and the propeller. The variations of these noise penalties with axial, or flyover, angle theta and circumferential angle phi are presented, and the trends in these noise penalties with tip Mach number and power loading are given for selected values of theta and phi. The circumferential directivity of the noise from a pusher installation showed that the additional noise due to the interaction of the pylon wake with the propeller had a broad peak over a wide range of circumferential angles approximately perpendicular to the pylon, with a sharp minimum 90 deg. to the pylon for the majority of cases tested. The variation of the pusher noise penalty with theta had a minimum occurring near the propeller plane and maximum values of as much as 20 dB occurring toward the propeller axis. The magnitude of the pusher noise penalty generally decreased as propeller tip Mach number or power loading was increased.
Entropy considerations applied to shock unsteadiness in hypersonic inlets
NASA Astrophysics Data System (ADS)
Bussey, Gillian Mary Harding
The stability of curved or rectangular shocks in hypersonic inlets in response to flow perturbations can be determined analytically from the principle of minimum entropy. Unsteady shock wave motion can have a significant effect on the flow in a hypersonic inlet or combustor. According to the principle of minimum entropy, a stable thermodynamic state is one with the lowest entropy gain. A model based on piston theory and its limits has been developed for applying the principle of minimum entropy to quasi-steady flow. Relations are derived for analyzing the time-averaged entropy gain flux across a shock for quasi-steady perturbations in atmospheric conditions and angle as a perturbation in entropy gain flux from the steady state. Initial results from sweeping a wedge at Mach 10 through several degrees in AEDC's Tunnel 9 indicate that the bow shock becomes unsteady near the predicted normal Mach number. Several curved shocks of varying curvature are compared to a straight shock with the same mean normal Mach number, pressure ratio, or temperature ratio. The present work provides analysis and guidelines for designing an inlet robust to off-design flight or perturbations in flow conditions an inlet is likely to face. It also suggests that inlets with curved shocks are less robust to off-design flight than those with straight shocks such as rectangular inlets. Relations for evaluating entropy perturbations for highly unsteady flow across a shock and limits on their use were also developed. The normal Mach number at which a shock could be stable to high frequency upstream perturbations increases as the speed of the shock motion increases and slightly decreases as the perturbation size increases. The present work advances the principle of minimum entropy theory by providing additional validity for using the theory for time-varying flows and applying it to shocks, specifically those in inlets. 
While this analytic tool is applied in the present work for evaluating the stability of shocks in hypersonic inlets, it can be used for an arbitrary application with a shock.
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used this as well as weighted and unweighted versions of the k-means clustering algorithm to group the targets to be treated with a single isocenter, and to position each isocenter. The algorithm results were evaluated using within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm^2 and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. 
While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
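A minimal version of the weighted k-means grouping described above can be sketched as follows. The target coordinates, the use of radii as weights, and the cluster count are hypothetical illustrations; the paper's coverage metrics are not reproduced.

```python
import numpy as np

def weighted_kmeans(points, weights, k, iters=100, seed=0):
    """Weighted k-means: each target centroid pulls its cluster's isocenter
    in proportion to its weight (e.g. target size). Returns cluster labels
    and isocenter positions."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # assign to nearest center
        for j in range(k):
            m = labels == j
            if m.any():                                # weighted centroid update
                centers[j] = np.average(points[m], weights=weights[m], axis=0)
    return labels, centers

# Hypothetical metastasis centroids (cm) and radii used as weights.
targets = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0],
                    [6.0, 5.0, 2.0], [6.5, 5.2, 2.1]])
radii = np.array([0.5, 0.3, 0.8, 0.4])
labels, isocenters = weighted_kmeans(targets, radii, k=2)
print(labels, isocenters)
```

Running the same routine with uniform weights gives the unweighted variant the abstract compares against; the two differ only in the centroid-update step.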
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... Proposed Information Collection to OMB Minimum Property Standards for Multifamily and Care-Type Occupancy... Lists the Following Information Title of Proposal: Minimum Property Standards for Multifamily and Care-Type Occupancy Housing. OMB Approval Number: 2502-0321. Form Numbers: None. Description of the Need for...
Moores, J C; Magazin, M; Ditta, G S; Leong, J
1984-01-01
A gene bank of DNA from plant growth-promoting Pseudomonas sp. strain B10 was constructed using the broad host-range conjugative cosmid pLAFR1. The recombinant cosmids contained insert DNA averaging 21.5 kilobase pairs in length. Nonfluorescent mutants of Pseudomonas sp. strain B10 were obtained by mutagenesis with N-methyl-N'-nitro-N-nitrosoguanidine, ethyl methanesulfonate, or UV light and were defective in the biosynthesis of its yellow-green, fluorescent siderophore (microbial iron transport agent) pseudobactin. No yellow-green, fluorescent mutants defective in the production of pseudobactin were identified. Nonfluorescent mutants were individually complemented by mating the gene bank en masse and identifying fluorescent transconjugants. Eight recombinant cosmids were sufficient to complement 154 nonfluorescent mutants. The pattern of complementation suggests that a minimum of 12 genes arranged in four gene clusters is required for the biosynthesis of pseudobactin. This minimum number of genes seems reasonable considering the structural complexity of pseudobactin. PMID:6690426
Meteorological variables and bacillary dysentery cases in Changsha City, China.
Gao, Lu; Zhang, Ying; Ding, Guoyong; Liu, Qiyong; Zhou, Maigeng; Li, Xiujun; Jiang, Baofa
2014-04-01
This study aimed to investigate the association between meteorological-related risk factors and bacillary dysentery in a subtropical inland Chinese area: Changsha City. The cross-correlation analysis and the Autoregressive Integrated Moving Average with Exogenous Variables (ARIMAX) model were used to quantify the relationship between meteorological factors and the incidence of bacillary dysentery. Monthly mean temperature, mean relative humidity, mean air pressure, mean maximum temperature, and mean minimum temperature were significantly correlated with the number of bacillary dysentery cases with a 1-month lagged effect. The ARIMAX models suggested that a 1°C rise in mean temperature, mean maximum temperature, and mean minimum temperature might lead to 14.8%, 12.9%, and 15.5% increases in the incidence of bacillary dysentery disease, respectively. Temperature could be used as a forecast factor for the increase of bacillary dysentery in Changsha. More public health actions should be taken to prevent the increase of bacillary dysentery disease with consideration of local climate conditions, especially temperature.
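The one-month lagged temperature effect reported above is the kind of relationship an ARIMAX exogenous term captures. As a self-contained stand-in (fitting a full ARIMAX model would normally use a statistics package), the Python sketch below recovers an invented lag-1 temperature coefficient from synthetic monthly data.

```python
import numpy as np

# Synthetic monthly series: incidence responds to mean temperature with a
# one-month lag. The coefficient 5.0 is illustrative, not the paper's.
rng = np.random.default_rng(2)
months = 120
temp = 17 + 10 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 1, months)
cases = 100 + 5.0 * np.roll(temp, 1) + rng.normal(0, 3, months)
cases[0] = cases[1]  # discard the wrapped-around first value

# Regress cases on last month's temperature (a simple stand-in for the
# ARIMAX exogenous term); the slope recovers the lagged effect.
X = np.column_stack([np.ones(months - 1), temp[:-1]])
y = cases[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated lag-1 temperature effect: {beta[1]:.2f} cases per degree C")
```

A full ARIMAX fit would additionally model the autoregressive and moving-average error structure, which is what separates the paper's approach from this plain lagged regression.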
Trellis coding with multidimensional QAM signal sets
NASA Technical Reports Server (NTRS)
Pietrobon, Steven S.; Costello, Daniel J.
1993-01-01
Trellis coding using multidimensional QAM signal sets is investigated. Finite-size 2D signal sets are presented that have minimum average energy, are 90-deg rotationally symmetric, and have from 16 to 1024 points. The best trellis codes using the finite 16-QAM signal set with two, four, six, and eight dimensions are found by computer search (the multidimensional signal set is constructed from the 2D signal set). The best moderate complexity trellis codes for infinite lattices with two, four, six, and eight dimensions are also found. The minimum free squared Euclidean distance and number of nearest neighbors for these codes were used as the selection criteria. Many of the multidimensional codes are fully rotationally invariant and give asymptotic coding gains up to 6.0 dB. From the infinite lattice codes, the best codes for transmitting J, J + 1/4, J + 1/3, J + 1/2, J + 2/3, and J + 3/4 bit/sym (J an integer) are presented.
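The two selection criteria mentioned (minimum free squared Euclidean distance and number of nearest neighbors) can be made concrete on the familiar square 16-QAM grid. The sketch below computes the average energy and minimum squared distance of that conventional grid; the paper's minimum-average-energy, 90-deg rotationally symmetric sets are different constellations, so this is only a baseline for comparison:

```python
import itertools

# Conventional square 16-QAM grid: I/Q levels at +/-1 and +/-3. The
# paper's minimum-average-energy, 90-deg symmetric sets differ; this
# grid is just a familiar baseline.
levels = [-3, -1, 1, 3]
points = [(i, q) for i in levels for q in levels]

avg_energy = sum(i * i + q * q for i, q in points) / len(points)
d2_min = min((a - c) ** 2 + (b - d) ** 2
             for (a, b), (c, d) in itertools.combinations(points, 2))

print(avg_energy, d2_min)  # 10.0 4
```
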
NASA Astrophysics Data System (ADS)
Mahmoudinezhad, S.; Rezania, A.; Yousefi, T.; Shadloo, M. S.; Rosendahl, L. A.
2018-02-01
A steady-state, two-dimensional laminar free convection heat transfer in a partitioned cavity with horizontal adiabatic and isothermal side walls is investigated using both experimental and numerical approaches. The experiments and numerical simulations are carried out using a Mach-Zehnder interferometer and a finite volume code, respectively. An adiabatic partition, inclined at an angle θ, is adjusted such that it separates the cavity into two identical parts. Effects of this angle as well as of the Rayleigh number on the heat transfer from the side-heated walls are investigated in this study. Results are presented for Rayleigh numbers, based on the cavity side length, ranging from 1.5 × 10^5 to 4.5 × 10^5 and for partition angles ranging from 0° to 90°. The experimental observations of the natural convective flow are verified numerically using the FLUENT software. For a given adiabatic partition angle, the results show that the average Nusselt number, and consequently the heat transfer, increases as the Rayleigh number increases. However, for a given Rayleigh number the maximum and the minimum heat transfer occur at θ = 45° and θ = 90°, respectively. Two mechanisms responsible for this behavior, namely blockage ratio and partition orientation, are identified. These effects are explained by numerical velocity vectors and experimental temperature contours. Based on the experimental data, a new correlation is proposed that fairly represents the average Nusselt number of the heated walls as a function of the Rayleigh number and the angle θ for the aforementioned ranges of data.
Effect of microstructure on the thermo-oxidation of solid isotactic polypropylene-based polyolefins
Hoyos, Mario; Tiemblo, Pilar; Gómez-Elvira, José Manuel
2008-01-01
In the present work we aim to clarify the role of the microstructure and the crystalline distribution in the thermo-oxidation of solid isotactic PP (iPP) and ethylene-propylene (EP) copolymers. The effects of the content and quality of the isotacticity interruptions, together with the associated average isotactic length, on the induction time (ti) as well as on the activation energy (Eact) of the thermo-oxidation are analysed. Both parameters have been found to change markedly at an average isotactic length (n1) of 30 propylene units. While ti reaches a minimum when n1 is approximately 30 units, Eact increases quasi-exponentially as the number of units decreases from 30. This variation can be explained in terms of changes induced in the crystalline interphase, i.e. local molecular dynamics, which are closely linked to the initiation of the thermo-oxidation of isotactic PP-based polyolefins. PMID:27877971
NASA Technical Reports Server (NTRS)
Mason, G. M.; Gloeckler, G.; Fisk, L. A.; Hovestadt, D.
1980-01-01
The abundances of the major elements over the range H-Fe in solar flare energetic particles near 1 MeV/nucleon were surveyed for a large number of flares during the period 1973-1977; observations were carried out by the IMP 8 spacecraft in interplanetary space. The survey considered two types of solar flare events: (1) large events, from which the average abundances were deduced, and (2) events which have significant abundance differences from average. In addition, two He-3-rich events with abundance features that are different from previous examples are reported: one case with no enhancements of heavy ions, and a second case in which, compared to O, the heavy-ion enhancements are confined to the charge range Si-Fe rather than the usual case in which all elements Ne-Fe are enriched.
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
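The minimum-variance combination rule that such models assume weights each cue by its inverse variance. A minimal sketch with hypothetical visual (texture) and haptic slant estimates:

```python
def optimal_weights(variances):
    """Minimum-variance cue weights: w_i proportional to 1/sigma_i^2,
    normalized to sum to 1."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    return [x / total for x in inv]

def combined_estimate(estimates, variances):
    """Statistically optimal (minimum-variance unbiased) weighted average."""
    return sum(w * e for w, e in zip(optimal_weights(variances), estimates))

# Hypothetical slant estimates (degrees) from texture and haptic cues:
# the lower-variance (more reliable) cue dominates the combined percept.
print(combined_estimate([30.0, 40.0], [4.0, 16.0]))  # 32.0
```
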
Sousa, F A; da Silva, J A
2000-04-01
The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed: 1--the relationship between the estimation of magnitudes and the estimation of the number of minimum salaries attributed to the professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; 2--the orders of degrees of prestige of the professions resulting from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (estimation of magnitudes of minimum salaries).
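A power function with exponent below 1.0 can be recovered from such data by a least-squares fit in log-log coordinates. A sketch on synthetic data with an assumed exponent of 0.8 (hypothetical; the study reports only that the exponent is below 1.0):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = k * x**b in log-log coordinates."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    k = math.exp(my - b * mx)
    return k, b

# Synthetic data generated with a hypothetical exponent of 0.8:
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [2.0 * v ** 0.8 for v in xs]
k, b = fit_power_law(xs, ys)
print(round(k, 3), round(b, 3))  # 2.0 0.8
```
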
Link, Karl-Heinrich; Coy, Peter; Roitman, Mark; Link, Carola; Kornmann, Marko; Staib, Ludger
2017-01-01
Background To answer the question whether minimum caseloads need to be stipulated in the German S3 (or any other) guidelines for colorectal cancer, we analyzed the current representative literature. The question is important regarding medical quality as well as health economics and policy. Methods A literature search was conducted in PubMed for papers concerning ‘colon cancer’ (CC), ‘rectal cancer’ (RC), and ‘colorectal cancer’ (CRC), with ‘results', ‘quality’, and ‘mortality’ between the years 2000 and 2016 being relevant factors. We graded the recommendations as ‘pro’, ‘maybe’, or ‘contra’ in terms of a significant correlation between hospital volume (HV) or surgeon volume (SV) and treatment quality. We also listed the recommended numbers suggested for HV or SV as minimum caseloads and calculated and discussed the socio-economic impact of setting minimum caseloads for CRC. Results The correlations of caseloads of hospitals or surgeons turned out to be highly controversial concerning the influence of HV or SV on short- and long-term surgical treatment quality of CRC. Specialized statisticians made the point that the reports in the literature might not use the optimal biometrical analytical/reporting methods. A Dutch analysis showed that if a decision towards minimum caseloads, e.g. >50 for CRC resections, would be made, this would exclude many hospitals with proven good treatment quality and include hospitals with a treatment quality below average. Our economic analysis envisioned that a yearly loss of EUR <830,000 might ensue for hospitals with volumes <50 per year. Conclusions Caseload (HV, SV) definitely is an inconsistent surrogate parameter for treatment quality in the surgery of CC, RC, or CRC. If used at all, the lowest tolerable numbers but the highest demands for structural, process and result quality in the surgical/interdisciplinary treatment of CC and RC must be imposed and independently controlled.
Hospitals fulfilling these demands should be medically and socio-economically preferred concerning the treatment of CC and RC patients. PMID:28560230
NASA Astrophysics Data System (ADS)
Lester, M.; Imber, S. M.; Milan, S. E.
2012-12-01
The Super Dual Auroral Radar Network (SuperDARN) provides a long-term data series which enables investigations of the coupled magnetosphere-ionosphere system. The network has been in existence essentially since 1995, when 6 radars were operational in the northern hemisphere and 4 in the southern hemisphere. We have been involved in an analysis of the data over the lifetime of the project and present results here from two key studies. In the first study we calculated the amount of ionospheric scatter which is observed by the radars and see clear annual and solar cycle variations in both hemispheres. The recent extended solar minimum also produces a significant effect in the scatter occurrence. In the second study, we have determined the latitude of the Heppner-Maynard Boundary (HMB) using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection for the interval 1996 - 2011. We find that the average latitude of the HMB at midnight is 61° magnetic latitude during the solar maximum of 2003, but it moves significantly poleward during solar minimum, averaging 64° latitude during 1996, and 68° during 2010. This poleward motion is observed despite the increasing number of low latitude radars built in recent years as part of the StormDARN network, and so is not an artefact of data coverage. We believe that the recent extreme solar minimum led to an average HMB location that was further poleward than the previous solar cycle. We have also calculated the Open-Closed field line Boundary (OCB) from auroral images during a subset of the interval (2000 - 2002) and find that on average the HMB is located equatorward of the OCB by ~7°. We suggest that the HMB may be a useful proxy for the OCB when global images are not available.
The work presented in this paper has been undertaken as part of the European Cluster Assimilation Technology (ECLAT) project which is funded through the EU FP7 programme and involves groups at Leicester, Helsinki, Uppsala, FMI, Graz and St. Petersburg. The aim of the project is to provide additional data sets, primarily ground based data, to the Cluster Active Archive, and its successor the Cluster Final Archive, in order to enhance the scientific productivity of the archives.
Human equivalent power: towards an optimum energy level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafner, E.
1979-01-01
How much energy would be needed to support the average individual in an efficient technological culture? Present knowledge provides information about minimum dietary power needs; but so far we have not been able to find ways of analyzing other human needs which, in a civilized society, rise far above the power of metabolism. Thus we understand the level at its minimum but not at its optimum. This paper attempts to quantify an optimum power level for civilized society. The author describes a method he uses in seminars to quantify how many servants, in units of human equivalent power (HEP), are needed to supply a person with an upper-middle-class lifestyle. Typical seminar participants determine that a per-capita power budget of 15 HEPs (perfect servants) would be required. Each human being on earth today is, according to the author, the master of forty slaves; in the U.S., he says, the number is close to 200. He concludes that a highly civilized standard of living may be closely associated with an optimum per capita power budget of 1500 watts; and since the average individual in the U.S. participates in energy turnover at almost ten times the rate he knows intuitively to be reasonable, reformation of American power habits will require reconstruction that shakes the house from top to bottom.
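The arithmetic behind the proposed optimum is straightforward if one HEP unit is taken as roughly the 100 W metabolic power of one person, an assumption consistent with the 15 HEP and 1500 W figures in the abstract:

```python
# Sketch of the article's bookkeeping, assuming one "human equivalent
# power" (HEP) unit equals the ~100 W metabolic power of one person
# (an assumption consistent with the 15 HEP and 1500 W figures).
HEP_WATTS = 100.0

per_capita_budget_heps = 15  # seminar participants' consensus
optimum_watts = per_capita_budget_heps * HEP_WATTS

us_multiple = 10  # U.S. turnover is ~10x the intuitively reasonable rate
print(optimum_watts, optimum_watts * us_multiple)  # 1500.0 15000.0
```
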
Nieder, C
2012-10-01
Tight budgets and increasing competition for research funding pose challenges for highly specialized medical disciplines such as radiation oncology. Therefore, a systematic review was performed of successfully completed research that had a high impact on clinical practice. These data might be helpful when preparing new projects. Different measures of impact, visibility, and quality of published research are available, each with its own pros and cons. For this study, the article citation rate was chosen (minimum 15 citations per year on average). Highly cited German contributions to the fields of radiation oncology, biology, and physics (published between 1990 and 2010) were identified from the Scopus database. Between 1990 and 2010, 106 articles published in 44 scientific journals met the citation requirement. The median average of yearly citations was 21 (maximum 167, minimum 15). All articles with ≥ 40 citations per year were published between 2003 and 2009, consistent with the assumption that the citation rate gradually increases for up to 2 years after publication. Most citations per year were recorded for meta-analyses and randomized phase III trials, which typically were performed by collaborative groups. A large variety of clinical radiotherapy, biology, and physics topics achieved high numbers of citations. However, areas such as quality of life and side effects, palliative radiotherapy, and radiotherapy for nonmalignant disorders were underrepresented. Efforts to increase their visibility might be warranted.
Spatial bandwidth considerations for optical communication through a free space propagation link.
Tyler, Glenn A
2011-12-01
This Letter concentrates on the transverse limitations imposed by a finite aperture optical propagation link that supports free space optical communication. Here it is assumed that a series of states, which are the spatial component of the message, are sent through the communication channel. The spatial bandwidth of the propagation link expressed as bits per transmitted photon is computed as the product of the average link efficiency times the entropy of the link. To facilitate the evaluation, it is assumed that the transmitted states are minimum energy loss orbital angular momentum states expressed in the form of f(nm)(r)exp(imθ), where the radial function is controlled to ensure that, for each quantum number denoted by the values of n and m, the minimum energy loss is obtained. The results illustrate that the bandwidth in units of bits per transmitted photon is very nearly equal to log2(Nf^2), where log2(·) denotes the logarithm in base 2 and Nf is the Fresnel number, Nf = (π/4)D1D2/(λz), where D1 is the diameter of the transmitting aperture, D2 is the diameter of the receiving aperture, λ is the wavelength of the light used, and z is the propagation distance. © 2011 Optical Society of America
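The closed-form pieces of the result are easy to evaluate numerically. A sketch with hypothetical link parameters (0.5 m apertures, 1.55 μm wavelength, 10 km path; none of these values come from the Letter):

```python
import math

def fresnel_number(d1, d2, wavelength, z):
    """Nf = (pi/4) * D1 * D2 / (lambda * z), as defined in the abstract."""
    return (math.pi / 4.0) * d1 * d2 / (wavelength * z)

def bits_per_photon(nf):
    """Approximate spatial bandwidth: ~log2(Nf^2) bits per transmitted photon."""
    return math.log2(nf ** 2)

# Hypothetical link: 0.5 m apertures, 1.55 um wavelength, 10 km path.
nf = fresnel_number(0.5, 0.5, 1.55e-6, 1.0e4)
print(round(nf, 2), round(bits_per_photon(nf), 2))  # 12.67 7.33
```
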
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2012 CFR
2012-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2011 CFR
2011-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2010 CFR
2010-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2013 CFR
2013-04-01
...′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15-18 moles)], average molecular weight 3,000. Propylene glycol. α,α′,α″-[Propylidynetris (methylene)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3-Tetramethylbutyl) - phenyl]-omega...
HOT WATER DRILL FOR TEMPERATE ICE.
Taylor, Philip L.
1984-01-01
The development of a high-pressure hot-water drill is described, which has been used reliably in temperate ice to depths of 400 meters with an average drill rate of about 1.5 meters per minute. One arrangement of the equipment weighs about 500 kilograms, and can be contained on two sleds, each about 3 meters long. Simplified performance equations are given, and experiments with nozzle design suggest a characteristic number describing the efficiency of each design, and a minimum bore-hole diameter very close to 6 centimeters for a hot water drill. Also discussed is field experience with cold weather, water supply, and contact with englacial cavities and the glacier bed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-12-15
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would have by using the conventional method and a very large number of frames and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
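The standard gain-correction step that the study modifies can be sketched with a toy simulation; the pixel gains, frame count, and photon level below are illustrative, and the paper's modified algorithm is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated detector: fixed pixel gains (fixed pattern noise) applied to
# Poisson-distributed photon counts. All numbers here are illustrative.
gain = rng.uniform(0.9, 1.1, size=(64, 64))
mean_photons = 1000.0

def acquire():
    """One raw flat frame: per-pixel gain times Poisson photon noise."""
    return gain * rng.poisson(mean_photons, size=gain.shape)

# Standard ("gold standard") correction: normalize by the average of N
# reference flat frames.
n_flats = 16
avg_flat = np.mean([acquire() for _ in range(n_flats)], axis=0)
raw = acquire()
corrected = raw / avg_flat * avg_flat.mean()

# The gain pattern cancels, so relative pixel-to-pixel scatter drops.
print(np.std(raw / raw.mean()) > np.std(corrected / corrected.mean()))  # True
```
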
Code of Federal Regulations, 2011 CFR
2011-07-01
... upon which your application for a modification is based: —BOD5 ___ mg/L —Suspended solids ___ mg/L —pH... dry weather —average wet weather —maximum —annual average BOD5 (mg/L) for the following plant flows: —minimum —average dry weather —average wet weather —maximum —annual average Suspended solids (mg/L) for the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... upon which your application for a modification is based: —BOD5 ___ mg/L —Suspended solids ___ mg/L —pH... dry weather —average wet weather —maximum —annual average BOD5 (mg/L) for the following plant flows: —minimum —average dry weather —average wet weather —maximum —annual average Suspended solids (mg/L) for the...
21 CFR 177.1680 - Polyurethane resins.
Code of Federal Regulations, 2014 CFR
2014-04-01
...′-(Isopropylidenedi-p-phenylene)bis[omega-hydroxypoly (oxy-pro-pylene)(3-4 moles)], average molecular weight 675... propylene oxide). Polypropylene glycol. α,α′,α″-1,2,3-Propanetriyltris [omega-hydroxypoly (oxypropylene) (15...)] tris [omega-hydroxypoly (oxypropylene) (minimum 1.5 moles)], minimum molecular weight 400. α-[ρ(1,1,3,3...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... flows and/or tracer gas concentrations for transient and ramped modal cycles to validate the minimum... mode-average values instead of continuous measurements for discrete mode steady-state duty cycles... molar flow data. This involves determination of at least two of the following three quantities: Raw...
Implications of Liebig’s law of the minimum for tree-ring reconstructions of climate
NASA Astrophysics Data System (ADS)
Stine, A. R.; Huybers, P.
2017-11-01
A basic principle of ecology, known as Liebig’s Law of the Minimum, is that plant growth reflects the strongest limiting environmental factor. This principle implies that a limiting environmental factor can be inferred from historical growth and, in dendrochronology, such reconstruction is generally achieved by averaging collections of standardized tree-ring records. Averaging is optimal if growth reflects a single limiting factor and noise, but not if growth also reflects locally variable stresses that intermittently limit growth. In this study a collection of Arctic tree ring records is shown to follow scaling relationships that are inconsistent with the signal-plus-noise model of tree growth but consistent with Liebig’s Law acting at the local level. Also consistent with law-of-the-minimum behavior is that reconstructions based on the least-stressed trees in a given year better follow variations in temperature than typical approaches where all tree-ring records are averaged. Improvements in reconstruction skill occur across all frequencies, with the greatest increase at the lowest frequencies. More comprehensive statistical-ecological models of tree growth may offer further improvement in reconstruction skill.
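A toy simulation can illustrate the law-of-the-minimum argument: when growth is the minimum of a shared climate signal and tree-specific local stresses, a "least-stressed" chronology (each year's maximum growth) tracks the climate signal better than the conventional mean chronology. Everything below is an illustrative model, not the study's data:

```python
import random
random.seed(1)

years, trees = 200, 50
temperature = [random.gauss(0.0, 1.0) for _ in range(years)]
local_stress = [[random.gauss(1.0, 1.0) for _ in range(years)]
                for _ in range(trees)]
# Liebig's law: growth reflects the strongest limiting factor.
growth = [[min(temperature[y], local_stress[t][y]) for y in range(years)]
          for t in range(trees)]

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Conventional chronology: average all trees each year.
mean_chron = [sum(g[y] for g in growth) / trees for y in range(years)]
# "Least-stressed" chronology: each year's maximum growth.
max_chron = [max(g[y] for g in growth) for y in range(years)]

print(corr(max_chron, temperature) > corr(mean_chron, temperature))  # True
```
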
van Rossum, Huub H; Kemperman, Hans
2017-02-01
To date, no practical tools are available to obtain optimal settings for moving average (MA) as a continuous analytical quality control instrument. Also, there is no knowledge of the true bias detection properties of applied MA. We describe the use of bias detection curves for MA optimization and MA validation charts for validation of MA. MA optimization was performed on a data set of previously obtained consecutive assay results. Bias introduction and MA bias detection were simulated for multiple MA procedures (combination of truncation limits, calculation algorithms and control limits) and performed for various biases. Bias detection curves were generated by plotting the median number of test results needed for bias detection against the simulated introduced bias. In MA validation charts the minimum, median, and maximum numbers of assay results required for MA bias detection are shown for various bias. Their use was demonstrated for sodium, potassium, and albumin. Bias detection curves allowed optimization of MA settings by graphical comparison of bias detection properties of multiple MA. The optimal MA was selected based on the bias detection characteristics obtained. MA validation charts were generated for selected optimal MA and provided insight into the range of results required for MA bias detection. Bias detection curves and MA validation charts are useful tools for optimization and validation of MA procedures.
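The notion of a bias detection curve can be illustrated with a toy simulation: for each introduced bias, count assay results until the moving average crosses a control limit, and take the median over runs. All settings below (target value, SD, truncation limits, window, control limit) are illustrative inventions, not the paper's procedures:

```python
import random
random.seed(42)

def results_to_detect(bias, window, control_limit, target=100.0, sd=5.0,
                      trunc=20.0, n_sim=200):
    """Median number of results until a simple moving-average QC procedure
    flags an introduced bias (illustrative settings, not the paper's)."""
    counts = []
    for _ in range(n_sim):
        buf, n = [], 0
        while n < 10000:
            n += 1
            x = random.gauss(target + bias, sd)
            # Truncation limits: clip extreme results before averaging.
            x = min(max(x, target - trunc), target + trunc)
            buf.append(x)
            if len(buf) > window:
                buf.pop(0)
            if len(buf) == window and abs(sum(buf) / window - target) > control_limit:
                break
        counts.append(n)
    return sorted(counts)[n_sim // 2]

# One point on a bias detection curve per bias: larger biases should be
# detected with fewer results.
small = results_to_detect(bias=2.0, window=20, control_limit=2.5)
large = results_to_detect(bias=8.0, window=20, control_limit=2.5)
print(small, large)
```
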
Gauging the Nearness and Size of Cycle Minimum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1997-01-01
By definition, the conventional onset for the start of a sunspot cycle is the time when smoothed sunspot number (i.e., the 12-month moving average) has decreased to its minimum value (called minimum amplitude) prior to the rise to its maximum value (called maximum amplitude) for the given sunspot cycle. On the basis (if the modern era sunspot cycles 10-22 and on the presumption that cycle 22 is a short-period cycle having a cycle length of 120 to 126 months (the observed range of short-period modern era cycles), conventional onset for cycle 23 should not occur until sometime between September 1996 and March 1997, certainly between June 1996 and June 1997, based on the 95-percent confidence level deduced from the mean and standard deviation of period for the sample of six short-pei-iod modern era cycles. Also, because the first occurrence of a new cycle, high-latitude (greater than or equal to 25 degrees) spot has always preceded conventional onset of the new cycle by at least 3 months (for the data-available interval of cycles 12-22), conventional onset for cycle 23 is not expected until about August 1996 or later, based on the first occurrence of a new cycle 23, high-latitude spot during the decline of old cycle 22 in May 1996. Although much excitement for an earlier-occurring minimum (about March 1996) for cycle 23 was voiced earlier this year, the present study shows that this exuberance is unfounded. The decline of cycle 22 continues to favor cycle 23 minimum sometime during the latter portion of 1996 to the early portion of 1997.
An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.
Ranganayaki, V; Deepa, S N
2016-01-01
Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model based wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. The random selection of the number of hidden neurons in an artificial neural network results in an overfitting or underfitting problem. This paper aims to avoid the occurrence of overfitting and underfitting problems. The selection of the number of hidden neurons is done in this paper employing 102 criteria; these evolved criteria are verified by the various computed error values. The proposed criteria for fixing hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied for wind speed prediction considering real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances the accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with that of the earlier models available in the literature.
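The ensemble step itself is a plain average of the member forecasts. A minimal sketch; the per-model wind speed predictions (m/s) below are hypothetical stand-ins for MLP, Madaline, BPN, and PNN outputs:

```python
# Minimal sketch of the ensemble step: average the forecasts of the four
# member networks. All numbers below are hypothetical.
model_predictions = {"MLP": 7.9, "Madaline": 8.4, "BPN": 7.6, "PNN": 8.1}
observed = 8.0  # hypothetical observed wind speed (m/s)

ensemble = sum(model_predictions.values()) / len(model_predictions)
mean_individual_error = (sum(abs(p - observed) for p in model_predictions.values())
                         / len(model_predictions))

# The ensemble error is no worse than the average individual error here.
print(round(ensemble, 2), abs(ensemble - observed) <= mean_individual_error)
```
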
Atkins, Charisma Y.; Thomas, Timothy K.; Lenaker, Dane; Day, Gretchen M.; Hennessy, Thomas W.; Meltzer, Martin I.
2016-01-01
Objective We conducted a cost-effectiveness analysis of five specific dental interventions to help guide resource allocation. Methods We developed a spreadsheet-based tool, from the healthcare payer perspective, to evaluate the cost effectiveness of specific dental interventions that are currently used among Alaska Native children (6-60 months). Interventions included: water fluoridation, dental sealants, fluoride varnish, tooth brushing with fluoride toothpaste, and conducting initial dental exams on children <18 months of age. We calculated the cost-effectiveness ratio of implementing the proposed interventions to reduce the number of carious teeth and full mouth dental reconstructions (FMDRs) over 10 years. Results A total of 322 children received caries treatments completed by a dental provider in the dental chair, while 161 children received FMDRs completed by a dental surgeon in an operating room. The average cost of treating dental caries in the dental chair was $1,467 (~$258,000 per year); the cost of treating FMDRs was $9,349 (~$1.5 million per year). All interventions were shown to prevent caries and FMDRs; however, tooth brushing prevented the greatest number of caries at minimum and maximum effectiveness, with 1,433 and 1,910, respectively. Tooth brushing also prevented the greatest number of FMDRs (159 and 211) at minimum and maximum effectiveness. Conclusions All of the dental interventions evaluated were shown to produce cost savings. However, the level of that cost saving is dependent on the intervention chosen. PMID:26990678
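A back-of-envelope savings sketch can be built from the abstract's figures. Pricing every prevented carious tooth and FMDR at the per-treatment average cost is a simplifying assumption made here, not the paper's spreadsheet model:

```python
# Back-of-envelope savings sketch using the abstract's figures. Pricing
# every prevented case at the per-treatment average cost is a
# simplifying assumption, not the paper's model.
cost_per_caries_treatment = 1467  # USD, treatment in the dental chair
cost_per_fmdr = 9349              # USD, full mouth dental reconstruction

prevented_caries = (1433, 1910)   # tooth brushing, min/max effectiveness
prevented_fmdrs = (159, 211)

savings = [c * cost_per_caries_treatment + f * cost_per_fmdr
           for c, f in zip(prevented_caries, prevented_fmdrs)]
print(savings)  # [3588702, 4774609]
```
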
An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems
Ranganayaki, V.; Deepa, S. N.
2016-01-01
Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems. This paper aims to avoid the occurrence of both. The selection of the number of hidden neurons is done employing 102 criteria, and the evolved criteria are verified against the various computed error values. The proposed criteria for fixing the number of hidden neurons are validated employing the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error value to a minimum and enhances accuracy. The computed results demonstrate the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with earlier models available in the literature. PMID:27034973
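The ensemble step described in the abstract is a plain average of the individual model forecasts. A minimal sketch (model names taken from the abstract; the forecast values are invented for illustration):

```python
# Unweighted ensemble of wind-speed forecasts, as described in the abstract.
# The numeric values below are hypothetical, not from the paper.
def ensemble_forecast(model_predictions):
    """Average the wind-speed forecasts (m/s) from several models."""
    values = list(model_predictions.values())
    return sum(values) / len(values)

preds = {"MLP": 7.2, "Madaline": 6.8, "BPN": 7.5, "PNN": 6.9}
ensemble = ensemble_forecast(preds)  # simple unweighted mean
```

Per-model weighting would be a natural extension, but the abstract describes straight averaging.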
On the Relation Between Spotless Days and the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2005-01-01
Spotless days are examined as a predictor for the size and timing of a sunspot cycle. For cycles 16-23, the first spotless day for a new cycle, which occurs during the decline of the old cycle, is found to precede minimum amplitude for the new cycle by approximately 34 mo, with a range of 25-40 mo. Reports indicate that the first spotless day for cycle 24 occurred in January 2004, suggesting that minimum amplitude for cycle 24 should be expected before April 2007, probably sometime during the latter half of 2006. If true, then cycle 23 will be classified as a cycle of shorter period, further implying that cycle 24 likely will be a cycle of larger than average minimum and maximum amplitudes and faster than average rise, peaking sometime in 2010.
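The timing rule above is simple month arithmetic; a quick check of the quoted 25-40 month lead (nominal 34 mo) against the January 2004 first spotless day, stdlib only:

```python
# Month arithmetic for the spotless-day lead time quoted in the abstract.
def add_months(year, month, n):
    """Return (year, month) shifted forward by n months."""
    m = (month - 1) + n
    return year + m // 12, m % 12 + 1

first_spotless = (2004, 1)                      # January 2004
earliest = add_months(*first_spotless, 25)      # (2006, 2)
nominal = add_months(*first_spotless, 34)       # (2006, 11) - latter half of 2006
latest = add_months(*first_spotless, 40)        # (2007, 5)
```

The 34-month nominal lead lands in November 2006, consistent with the abstract's "latter half of 2006" expectation.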
Flow convergence caused by a salinity minimum in a tidal channel
Warner, John C.; Schoellhamer, David H.; Burau, Jon R.; Schladow, S. Geoffrey
2006-01-01
Residence times of dissolved substances and sedimentation rates in tidal channels are affected by residual (tidally averaged) circulation patterns. One influence on these circulation patterns is the longitudinal density gradient. In most estuaries the longitudinal density gradient typically maintains a constant direction. However, a junction of tidal channels can create a local reversal (change in sign) of the density gradient. This can occur due to a difference in the phase of tidal currents in each channel. In San Francisco Bay, the phasing of the currents at the junction of Mare Island Strait and Carquinez Strait produces a local salinity minimum in Mare Island Strait. At the location of a local salinity minimum the longitudinal density gradient reverses direction. This paper presents four numerical models that were used to investigate the circulation caused by the salinity minimum: (1) A simple one-dimensional (1D) finite difference model demonstrates that a local salinity minimum is advected into Mare Island Strait from the junction with Carquinez Strait during flood tide. (2) A three-dimensional (3D) hydrodynamic finite element model is used to compute the tidally averaged circulation in a channel that contains a salinity minimum (a change in the sign of the longitudinal density gradient) and compares that to a channel that contains a longitudinal density gradient in a constant direction. The tidally averaged circulation produced by the salinity minimum is characterized by converging flow at the bed and diverging flow at the surface, whereas the circulation produced by the constant direction gradient is characterized by converging flow at the bed and downstream surface currents. These velocity fields are used to drive both a particle tracking and a sediment transport model. 
(3) A particle tracking model demonstrates a 30 percent increase in the residence time of neutrally buoyant particles transported through the salinity minimum, as compared to transport through a constant direction density gradient. (4) A sediment transport model demonstrates increased deposition at the near-bed null point of the salinity minimum, as compared to the constant direction gradient null point. These results are corroborated by historically noted large sedimentation rates and a local maximum of selenium accumulation in clams at the null point in Mare Island Strait.
40 CFR 62.14455 - What if my HMIWI goes outside of a parameter limit?
Code of Federal Regulations, 2010 CFR
2010-07-01
... temperature (3-hour rolling average) simultaneously The PM, CO, and dioxin/furan emission limits. (c) Except..., daily average for batch HMIWI), and below the minimum dioxin/furan sorbent flow rate (3-hour rolling average) simultaneously The dioxin/furan emission limit. (3) Operates above the maximum charge rate (3...
40 CFR Table 2 to Subpart Dddd of... - Operating Requirements
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimum temperature established during the performance test Maintain the 3-hour block average THC... representative sample of the catalyst at least every 12 months Maintain the 3-hour block average THC... established according to § 63.2262(m) Maintain the 24-hour block average THC concentration a in the biofilter...
40 CFR Table 2 to Subpart Dddd of... - Operating Requirements
Code of Federal Regulations, 2010 CFR
2010-07-01
... minimum temperature established during the performance test Maintain the 3-hour block average THC... representative sample of the catalyst at least every 12 months Maintain the 3-hour block average THC... established according to § 63.2262(m) Maintain the 24-hour block average THC concentration a in the biofilter...
46 CFR 11.705 - Route familiarization requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... limitations specified in this section, the number of round trips required to qualify an applicant for a... endorsement as first-class pilot shall furnish evidence of having completed a minimum number of round trips... sought. Evidence of having completed a minimum number of round trips while serving as an observer...
46 CFR 11.705 - Route familiarization requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... limitations specified in this section, the number of round trips required to qualify an applicant for a... endorsement as first-class pilot shall furnish evidence of having completed a minimum number of round trips... sought. Evidence of having completed a minimum number of round trips while serving as an observer...
46 CFR 11.705 - Route familiarization requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... limitations specified in this section, the number of round trips required to qualify an applicant for a... endorsement as first-class pilot shall furnish evidence of having completed a minimum number of round trips... sought. Evidence of having completed a minimum number of round trips while serving as an observer...
Accuracy of measurement in electrically evoked compound action potentials.
Hey, Matthias; Müller-Deile, Joachim
2015-01-15
Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
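The 1/√N behaviour of the averaged-noise error quoted above can be illustrated with a small simulation. This assumes i.i.d. unit-variance Gaussian sweep noise, which is an illustrative assumption, not a claim about the paper's data:

```python
# Illustration of the 1/sqrt(N) error law for averaged sweeps
# (assumption: independent Gaussian noise per sweep).
import random
import statistics

random.seed(0)  # reproducible illustration

def error_of_average(n_sweeps, n_trials=1000):
    """Empirical std. dev. of the mean of n_sweeps unit-variance noise samples."""
    means = [statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n_sweeps))
             for _ in range(n_trials)]
    return statistics.stdev(means)

# Quadrupling the number of averaged sweeps should roughly halve the error.
e25, e100 = error_of_average(25), error_of_average(100)
```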
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Kim, Hyokyung
2016-01-01
For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, sigma 0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the global precipitation measurement satellite, the nominal table consists of the statistics of the rain-free sigma 0 over a 0.5 deg x 0.5 deg latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25 deg x 0.25 deg grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
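The stepwise cell-expansion idea can be sketched as a greedy region-growing loop. The grid, neighbour relation, and sample values below are invented for illustration; the real procedure operates on 0.25 deg cells of sigma 0 data:

```python
# Greedy variable-averaging sketch: absorb the neighbouring cell whose data
# keep the pooled variance lowest, until the minimum sample count is met.
import statistics

def grow_region(samples, neighbors, start, min_samples):
    """samples: cell -> list of values; neighbors: cell -> adjacent cells."""
    region = {start}
    pooled = list(samples[start])
    while len(pooled) < min_samples:
        frontier = {n for c in region for n in neighbors[c]} - region
        if not frontier:
            break  # nothing left to absorb
        # choose the candidate cell minimizing the variance of the merged data
        best = min(frontier,
                   key=lambda c: statistics.pvariance(pooled + samples[c]))
        region.add(best)
        pooled += samples[best]
    return region, pooled

# Toy 3-cell "grid": cell C behaves like a different surface type (noisy).
samples = {"A": [1.0, 1.0], "B": [1.1, 0.9], "C": [5.0, 6.0]}
neighbors = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
region, pooled = grow_region(samples, neighbors, "A", min_samples=4)
```

The loop absorbs the low-variance neighbour B and stops once the sample requirement is satisfied, leaving the dissimilar cell C out.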
Droplet squeezing through a narrow constriction: Minimum impulse and critical velocity
NASA Astrophysics Data System (ADS)
Zhang, Zhifeng; Drapaca, Corina; Chen, Xiaolin; Xu, Jie
2017-07-01
Models of a droplet passing through narrow constrictions have wide applications in science and engineering. In this paper, we report our findings on the minimum impulse (momentum change) of pushing a droplet through a narrow circular constriction. The existence of this minimum impulse is mathematically derived and numerically verified. The minimum impulse happens at a critical velocity when the time-averaged Young-Laplace pressure balances the total minor pressure loss in the constriction. Finally, numerical simulations are conducted to verify these concepts. These results could be relevant to problems of energy optimization and studies of chemical and biomedical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baxter, V. D.; Rice, K.; Murphy, R.
Between October 2008 and May 2013 ORNL and ClimateMaster, Inc. (CM) engaged in a Cooperative Research and Development Agreement (CRADA) to develop a groundsource integrated heat pump (GS-IHP) system for the US residential market. An initial prototype was designed and fabricated, lab-tested, and modeled in TRNSYS (SOLAR Energy Laboratory, et al, 2010) to predict annual performance relative to 1) a baseline suite of equipment meeting minimum efficiency standards in effect in 2006 (combination of air-source heat pump (ASHP) and resistance water heater) and 2) a state-of-the-art (SOA) two-capacity ground-source heat pump with desuperheater water heater (WH) option (GSHPwDS). Predicted total annual energy savings, while providing space conditioning and water heating for a 2600 ft² (242 m²) house at 5 U.S. locations, ranged from 52 to 59%, averaging 55%, relative to the minimum efficiency suite. Predicted energy use for water heating was reduced 68 to 78% relative to resistance WH. Predicted total annual savings for the GSHPwDS relative to the same baseline averaged 22.6% with water heating energy use reduced by 10 to 30% from desuperheater contributions. The 1st generation (or alpha) prototype design for the GS-IHP was finalized in 2010 and field test samples were fabricated for testing by CM and by ORNL. Two of the alpha units were installed in 3700 ft² (345 m²) houses at the ZEBRAlliance site in Oak Ridge and field tested during 2011. Based on the steady-state performance demonstrated by the GS-IHPs it was projected that it would achieve >52% energy savings relative to the minimum efficiency suite at this specific site. A number of operational issues with the alpha units were identified indicating design changes needed to the system before market introduction could be accomplished. These were communicated to CM throughout the field test period.
Based on the alpha unit test results and the diagnostic information coming from the field test experience, CM developed a 2nd generation (or beta) prototype in 2012. Field test verification units were fabricated and installed at the ZEBRAlliance site in Oak Ridge in May 2012 and at several sites near CM headquarters in Oklahoma. Field testing of the units continued through February 2013. Annual performance analyses of the beta unit (prototype 2) with vertical well ground heat exchangers (GHX) in 5 U.S. locations predict annual energy savings of 57% to 61%, averaging 59%, relative to the minimum efficiency suite and 38% to 56%, averaging 46%, relative to the SOA GSHPwDS. Based on the steady-state performance demonstrated by the test units it was projected that the 2nd generation units would achieve ~58% energy savings relative to the minimum efficiency suite at the ZEBRAlliance site with horizontal GHX. A new product based on the beta unit design was announced by CM in 2012 – the Trilogy 40® Q-mode™ (http://cmdealernet.com/trilogy_40.html). The unit was formally introduced in a March 2012 press release (see Appendix A) and was available for order beginning in December 2012.
Tracking of white-tailed deer migration by Global Positioning System
Nelson, M.E.; Mech, L.D.; Frame, P.F.
2004-01-01
Based on global positioning system (GPS) radiocollars in northeastern Minnesota, deer migrated 23-45 km in spring during 31-356 h, deviating a maximum 1.6-4.0 km perpendicular from a straight line of travel between their seasonal ranges. They migrated a minimum of 2.1-18.6 km/day over 11-56 h during 2-14 periods of travel. Minimum travel during 1-h intervals averaged 1.5 km/h. Deer paused 1-12 times, averaging 24 h/pause. Deer migrated similar distances in autumn with comparable rates and patterns of travel.
Minimum Number of Observation Points for LEO Satellite Orbit Estimation by OWL Network
NASA Astrophysics Data System (ADS)
Park, Maru; Jo, Jung Hyun; Cho, Sungki; Choi, Jin; Kim, Chun-Hwey; Park, Jang-Hyun; Yim, Hong-Suh; Choi, Young-Jun; Moon, Hong-Kyu; Bae, Young-Ho; Park, Sun-Youp; Kim, Ji-Hye; Roh, Dong-Goo; Jang, Hyun-Jung; Park, Young-Sik; Jeong, Min-Ji
2015-12-01
By using the Optical Wide-field Patrol (OWL) network developed by the Korea Astronomy and Space Science Institute (KASI) we generated the right ascension and declination angle data from optical observation of Low Earth Orbit (LEO) satellites. We performed an analysis to verify the optimum number of observations needed per arc for successful estimation of orbit. The currently functioning OWL observatories are located in Daejeon (South Korea), Songino (Mongolia), and Oukaïmeden (Morocco). The Daejeon Observatory is functioning as a test bed. In this study, the observed targets were Gravity Probe B, COSMOS 1455, COSMOS 1726, COSMOS 2428, SEASAT 1, ATV-5, and CryoSat-2 (all in LEO). These satellites were observed from the test bed and the Songino Observatory of the OWL network during 21 nights in 2014 and 2015. After we estimated the orbit from systematically selected sets of observation points (20, 50, 100, and 150) for each pass, we compared the difference between the orbit estimates for each case, and the Two Line Element set (TLE) from the Joint Space Operation Center (JSpOC). Then, we determined the average of the difference and selected the optimal observation points by comparing the average values.
Jia, Huanguang; Pei, Qinglin; Sullivan, Charles T; Cowper Ripley, Diane C; Wu, Samuel S; Bates, Barbara E; Vogel, W Bruce; Bidelspach, Douglas E; Wang, Xinping; Hoffman, Nannette
2016-03-01
Effective poststroke rehabilitation care can speed patient recovery and minimize patient functional disabilities. Veterans Affairs (VA) community living centers (CLCs) and VA-contracted community nursing homes (CNHs) are the 2 major sources of institutional long-term care for Veterans with stroke receiving care under VA auspices. This study compares rehabilitation therapy and restorative nursing care among Veterans residing in VA CLCs versus those Veterans in VA-contracted CNHs. Retrospective observational. All Veterans diagnosed with stroke, newly admitted to the CLCs or CNHs during the study period who completed at least 2 Minimum Data Set assessments postadmission. The outcomes were numbers of days for rehabilitation therapy and restorative nursing care received by the Veterans during their stays in CLCs or CNHs as documented in the Minimum Data Set databases. For rehabilitation therapy, the CLC Veterans had lower user rates (75.2% vs. 76.4%, P=0.078) and fewer observed therapy days (4.9 vs. 6.4, P<0.001) than CNH Veterans. However, the CLC Veterans had higher adjusted odds for therapy (odds ratio=1.16, P=0.033), although they had fewer average therapy days (coefficient=-1.53±0.11, P<0.001). For restorative nursing care, CLC Veterans had higher user rates (33.5% vs. 30.6%, P<0.001), more observed average care days (9.4 vs. 5.9, P<0.001), higher adjusted odds (odds ratio=2.28, P<0.001), and more adjusted days for restorative nursing care (coefficient=5.48±0.37, P<0.001). Compared with their counterparts at VA-contracted CNHs, Veterans at VA CLCs had fewer average rehabilitation therapy days (both unadjusted and adjusted), but they were significantly more likely to receive restorative nursing care both before and after risk adjustment.
Shin, Hye-Young; Park, Hae-Young Lopilly; Jung, Kyoung-In; Choi, Jin-A; Park, Chan Kee
2014-01-01
To determine whether the ganglion cell-inner plexiform layer (GCIPL) or circumpapillary retinal nerve fiber layer (cpRNFL) is better at distinguishing eyes with early glaucoma from normal eyes on the basis of the initial location of the visual field (VF) damage. Retrospective, observational study. Eighty-four patients with early glaucoma and 43 normal subjects were enrolled. The patients with glaucoma were subdivided into 2 groups according to the location of VF damage: (1) an isolated parafoveal scotoma (PFS, N = 42) within 12 points of a central 10 degrees in 1 hemifield or (2) an isolated peripheral nasal step (PNS, N = 42) within the nasal periphery outside 10 degrees of fixation in 1 hemifield. All patients underwent macular and optic disc scanning using Cirrus high-definition optical coherence tomography (Carl Zeiss Meditec, Dublin, CA). The GCIPL and cpRNFL thicknesses were compared between groups. Areas under the receiver operating characteristic curves (AUCs) were calculated. Comparison of diagnostic ability using AUCs. The average and minimum GCIPL of the PFS group were significantly thinner than those of the PNS group, whereas there was no significant difference in the average retinal nerve fiber layer (RNFL) thickness between the 2 groups. The AUCs of the average (0.962) and minimum GCIPL (0.973) thicknesses did not differ from that of the average RNFL thickness (0.972) for discriminating glaucomatous changes between normal and all glaucoma eyes (P = 0.566 and 0.974, respectively). In the PFS group, the AUCs of the average (0.988) and minimum GCIPL (0.999) thicknesses were greater than that of the average RNFL thickness (0.961, P = 0.307 and 0.125, respectively). However, the AUCs of the average (0.936) and minimum GCIPL (0.947) thicknesses were lower than that of the average RNFL thickness (0.984) in the PNS group (P = 0.032 and 0.069, respectively).
The GCIPL parameters were more valuable than the cpRNFL parameters for detecting glaucoma in eyes with parafoveal VF loss, and the cpRNFL parameters were better than the GCIPL parameters for detecting glaucoma in eyes with peripheral VF loss. Clinicians should know that the diagnostic capability of macular GCIPL parameters depends largely on the location of the VF loss. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Bogucki, Artur J
2014-01-01
The knee joint is a bicondylar hinge two-level joint with six degrees of freedom. The location of the functional axis of flexion-extension motion is still a subject of research and discussions. During the swing phase, the femoral condyles do not have direct contact with the tibial articular surfaces and the intra-articular space narrows with increasing weight bearing. The geometry of knee movements is determined by the shape of articular surfaces. A digital recording of the gait of a healthy volunteer was analysed. In the first experimental variant, the subject was wearing a knee orthosis controlling flexion and extension with a hinge-type single-axis joint. In the second variant, the examination involved a hinge-type double-axis orthosis. Statistical analysis involved mathematically calculated values of displacement P. Scatter graphs with a fourth-order polynomial trend line with a confidence interval of 0.95 due to noise were prepared for each experimental variant. In Variant 1, the average displacement was 15.1 mm, the number of tests was 43, standard deviation was 8.761, and the confidence interval was 2.2. The maximum value of displacement was 30.9 mm and the minimum value was 0.7 mm. In Variant 2, the average displacement was 13.4 mm, the number of tests was 44, standard deviation was 7.275, and the confidence interval was 1.8. The maximum value of displacement was 30.2 mm and the minimum value was 3.4 mm. An analysis of moving averages for both experimental variants revealed that displacement trends for both types of orthosis were compatible from the mid-stance to the mid-swing phase. 1. The method employed in the experiment allows for determining the alignment between the axis of the knee joint and that of shin and thigh orthoses. 2. Migration of the single and double-axis orthoses during the gait cycle exceeded 3 cm. 3. During weight bearing, the double-axis orthosis was positioned more correctly. 4. The study results may be helpful in designing new hinge-type knee joints.
Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume
2012-01-01
We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants go most frequently to get services (called the MFM). For each set corresponding to a range of time distance to the MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant, which we interpret as an estimate of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar sizes, when the distance to the MFM increases, the number of jobs of proximity services per inhabitant increases.
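The minimum-requirement step can be approximated with a plain low empirical quantile per distance class. This is a crude stand-in for the quantile regression the abstract describes, and the ratios below are invented for illustration:

```python
# Minimum-requirement sketch: the "floor" of service jobs per inhabitant in
# one distance class, taken as a low empirical quantile (not a regression).
def minimum_requirement(ratios, q=0.05):
    """Low empirical quantile of service-job ratios within one class."""
    s = sorted(ratios)
    return s[int(q * (len(s) - 1))]

# Hypothetical distance class: jobs-per-inhabitant ratios of 10 municipalities.
ratios = [0.02, 0.03, 0.05, 0.04, 0.10, 0.06, 0.03, 0.08, 0.02, 0.07]
floor = minimum_requirement(ratios, q=0.1)
```

A proper replication would regress the low quantile on municipality size within each class, as in the paper.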
Scaup migration patterns in North Dakota relative to temperatures and water conditions
Austin, J.E.; Granfors, D.A.; Johnson, M.A.; Kohn, S.C.
2002-01-01
Greater (Aythya marila) and lesser scaup (A. affinis) have protracted spring migrations. Migrants may still be present on southern breeding areas when the annual Waterfowl Breeding Population and Habitat Surveys (WBPHS) are being conducted. Understanding factors affecting the chronology and rate of spring migration is important for the interpretation of data from annual population surveys. We describe the general temporal pattern of scaup numbers in south-central North Dakota in spring, examine the relationships between scaup numbers and measures of local water conditions and spring temperatures, and assess timing of the WBPHS relative to numbers of scaup occurring in the study area in late May. Scaup were counted weekly on a 95-km, 400-m-wide transect from late March through May, 1957-1999. Average numbers of scaup per count were positively associated with numbers of seasonal, semipermanent, and total ponds. Average minimum daily ambient temperatures showed a trend of increasing temperatures over the 43 years, and dates of peak scaup counts became progressively earlier. Weeks of early migration usually had higher temperatures than weeks of delayed migration. The relationship between temperature and timing of migration was strongest during the second and third weeks of April, which is approximately 1 week before numbers peak (median date = 19 Apr). Trends in sex and pair ratios were not consistent among years. Counts in late May-early June indicated considerable annual variability in the magnitude of late migrants. Scaup numbers during this period seemed to stabilize in only 5 of the 19 years when 2 or more surveys were conducted after the WBPHS. These findings corroborate concerns regarding the accuracy of the WBPHS for representing breeding populations of scaup and the possibility of double-counting scaup in some years.
7 CFR 51.2113 - Size requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of range in count of whole almond kernels per ounce or in terms of minimum, or minimum and maximum diameter. When a range in count is specified, the whole kernels shall be fairly uniform in size, and the average count per ounce shall be within the range specified. Doubles and broken kernels shall not be used...
Intrinsic coincident full-Stokes polarimeter using stacked organic photovoltaics.
Yang, Ruonan; Sen, Pratik; O'Connor, B T; Kudenov, M W
2017-02-20
An intrinsic coincident full-Stokes polarimeter is demonstrated by using strain-aligned polymer-based organic photovoltaics (OPVs) that can preferentially absorb certain polarized states of incident light. The photovoltaic-based polarimeter is capable of measuring four Stokes parameters by cascading four semitransparent OPVs in series along the same optical axis. This in-line polarimeter concept potentially ensures high temporal and spatial resolution with higher radiometric efficiency as compared to existing polarimeter architectures. Two wave plates were incorporated into the system to modulate the S3 Stokes parameter so as to reduce the condition number of the measurement matrix and maximize the measured signal-to-noise ratio. Radiometric calibration was carried out to determine the measurement matrix. The polarimeter presented in this paper demonstrated an average RMS error of 0.84% for reconstructed Stokes vectors after normalization to S0. A theoretical analysis of the minimum condition number of the four-cell OPV design showed that for individually optimized OPV cells, a condition number of 2.4 is possible.
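The condition number that the wave plates are tuned to reduce is the ratio of the largest to smallest singular value of the measurement matrix. For a 2x2 matrix it has a closed form; the toy matrix below is a stand-in, not the paper's 4x4 OPV measurement matrix:

```python
# Condition number (sigma_max / sigma_min) of a 2x2 matrix [[a, b], [c, d]],
# via the closed-form eigenvalues of A^T A. Illustrative only.
import math

def cond_2x2(a, b, c, d):
    """Return the 2-norm condition number of [[a, b], [c, d]]."""
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d  # entries of A^T A
    mean, half = (p + r) / 2, math.hypot((p - r) / 2, q)
    smax, smin = math.sqrt(mean + half), math.sqrt(mean - half)
    return smax / smin

# The identity is perfectly conditioned; a stretched matrix is not.
c_id = cond_2x2(1.0, 0.0, 0.0, 1.0)     # 1.0
c_stretch = cond_2x2(2.0, 0.0, 0.0, 1.0)
```

A lower condition number means noise in the four detector signals is amplified less when the Stokes vector is reconstructed by inverting the measurement matrix.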
Code of Federal Regulations, 2010 CFR
2010-07-01
... scrubber, maintain the daily average pressure drop across the venturi within the operating range value... . . . You must . . . 1. Scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH...
Code of Federal Regulations, 2011 CFR
2011-07-01
... . . . You must . . . 1. Scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH... scrubber, maintain the daily average pressure drop across the venturi within the operating range value...
On the critical flame radius and minimum ignition energy for spherical flame initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zheng; Burke, M. P.; Ju, Yiguang
2011-01-01
Spherical flame initiation from an ignition kernel is studied theoretically and numerically using different fuel/oxygen/helium/argon mixtures (fuel: hydrogen, methane, and propane). The emphasis is placed on investigating the critical flame radius controlling spherical flame initiation and its correlation with the minimum ignition energy. It is found that the critical flame radius is different from the flame thickness and the flame ball radius and that their relationship depends strongly on the Lewis number. Three different flame regimes in terms of the Lewis number are observed and a new criterion for the critical flame radius is introduced. For mixtures with Lewis number larger than a critical Lewis number above unity, the critical flame radius is smaller than the flame ball radius but larger than the flame thickness. As a result, the minimum ignition energy can be substantially over-predicted (under-predicted) based on the flame ball radius (the flame thickness). The results also show that the minimum ignition energy for successful spherical flame initiation is proportional to the cube of the critical flame radius. Furthermore, preferential diffusion of heat and mass (i.e. the Lewis number effect) is found to play an important role in both spherical flame initiation and flame kernel evolution after ignition. It is shown that the critical flame radius and the minimum ignition energy increase significantly with the Lewis number. Therefore, for transportation fuels with large Lewis numbers, blending of small molecule fuels or thermal and catalytic cracking will significantly reduce the minimum ignition energy.
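The reported cubic dependence of minimum ignition energy on critical flame radius means relative changes scale as the cube of the radius ratio. A trivial arithmetic check (the radii are arbitrary; only the scaling law comes from the abstract):

```python
# E_min proportional to r_c**3, so the energy ratio is the cube of the radius ratio.
def mie_ratio(r_new, r_old):
    """Ratio of minimum ignition energies for two critical flame radii."""
    return (r_new / r_old) ** 3

doubled = mie_ratio(2.0, 1.0)  # doubling the critical radius -> 8x the energy
```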
[Projection of prisoner numbers].
Metz, Rainer; Sohn, Werner
2015-01-01
The past and future development of occupancy rates in prisons is of crucial importance for the judicial administration of every country. Basic factors for planning the required penal facilities are seasonal fluctuations, minimum, maximum and average occupancy as well as the present situation and potential development of certain imprisonment categories. As the prisoner number of a country is determined by a complex set of interdependent conditions, it has turned out to be difficult to provide any theoretical explanations. The idea accepted in criminology for a long time that prisoner numbers are interdependent with criminal policy must be regarded as having failed. Statistical and time series analyses may help, however, to identify the factors having influenced the development of prisoner numbers in the past. The analyses presented here, first describe such influencing factors from a criminological perspective and then deal with their statistical identification and modelling. Using the development of prisoner numbers in Hesse as an example, it has been found that modelling methods in which the independent variables predict the dependent variable with a time lag are particularly helpful. A potential complication is, however, that for predicting the number of prisoners the different dynamics in German and foreign prisoners require the development of further models.
A comparative look at sunspot cycles
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1984-01-01
On the basis of cycles 8 through 20, spanning about 143 years, observations of sunspot number, smoothed sunspot number, and their temporal properties were used to compute means, standard deviations, ranges, and frequency-of-occurrence histograms for a number of sunspot cycle parameters. The resultant schematic sunspot cycle was contrasted with the mean sunspot cycle, obtained by averaging smoothed sunspot number as a function of time, tying all cycles (8 through 20) to their minimum occurrence date. A relatively good approximation of the time variation of smoothed sunspot number for a given cycle is possible if sunspot cycles are regarded as either HIGH- or LOW-R(MAX) cycles or LONG- or SHORT-PERIOD cycles, especially the latter. Linear regression analyses were performed comparing late cycle parameters with early cycle parameters and solar cycle number. The early occurring cycle parameters can be used to estimate later occurring cycle parameters with relatively good success, based on cycle 21 as an example. The sunspot cycle record clearly shows that the trend for both R(MIN) and R(MAX) was toward decreasing values between cycles 8 through 14 and toward increasing values between cycles 14 through 20. Linear regression equations were also obtained for several measures of solar activity.
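The regression approach described above (estimating a later-occurring cycle parameter from an early-occurring one) can be sketched with ordinary least squares. The numbers below are made up purely for illustration of the Waldmeier-style pairing of shorter rises with larger maxima; they are not the paper's data.

```python
# Illustrative sketch: simple linear regression of a late cycle parameter
# (R_MAX) on an early one (ascent duration). The data points are synthetic.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical (ascent duration in months, R_MAX) pairs: shorter rises
# paired with larger maxima, mimicking the amplitude-period tendency.
ascent = [35, 40, 45, 50, 55]
rmax = [180, 150, 120, 90, 60]
a, b = linear_fit(ascent, rmax)
print(round(a), round(b))  # 390 -6, since the toy data lie on R_MAX = 390 - 6*t
```

On exactly linear data the fit recovers the line; real cycle data would of course scatter about it.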
Quantification and assessment of heat and cold waves in Novi Sad, Northern Serbia
NASA Astrophysics Data System (ADS)
Basarin, Biljana; Lukić, Tin; Matzarakis, Andreas
2016-01-01
Physiologically equivalent temperature (PET) has been applied to the analysis of heat and cold waves and human thermal conditions in Novi Sad, Serbia. A series of daily minimum and maximum air temperature, relative humidity, wind, and cloud cover was used to calculate PET for the investigated period 1949-2012. The heat and cold wave analysis was carried out on days with PET values exceeding defined thresholds. Additionally, the acclimatization approach was introduced to evaluate human adaptation to interannual thermal perception. Trend analysis revealed an increasing trend in summer PET anomalies, the number of days above the defined threshold, the number of heat waves, and the average duration of heat waves per year since 1981. Moreover, the winter PET anomaly, as well as the number of days below a certain threshold and the number of cold waves per year, was decreasing until 1980, but the decrease was not statistically significant. The highest number of heat waves during summer was registered in the last two decades, but also in the first decade of the investigated period. On the other hand, the number of cold waves during the six decades is quite similar and the differences are very small.
Number of minimum-weight code words in a product code
NASA Technical Reports Server (NTRS)
Miller, R. L.
1978-01-01
Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
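The classical result for product codes (minimum distance d1·d2, with the minimum-weight code words being exactly the tensor products of component minimum-weight words) can be checked by brute force on a tiny example. The sketch below uses the [3,2,2] binary even-weight code for both components; it is a demonstration of the counting result, not the paper's proof.

```python
# Brute-force check on GF(2): the minimum weight of a product code is d1*d2,
# and the number of minimum-weight code words is the product of the counts
# in the two component codes.
from itertools import product

def codewords(G):
    """All codewords generated by the rows of G over GF(2)."""
    k, n = len(G), len(G[0])
    return [tuple(sum(m * g[j] for m, g in zip(msg, G)) % 2 for j in range(n))
            for msg in product([0, 1], repeat=k)]

# [3,2,2] even-weight code as both components.
G = [[1, 1, 0], [1, 0, 1]]
comp = [w for w in codewords(G) if any(w)]
d = min(sum(w) for w in comp)              # component minimum weight: 2
a = sum(1 for w in comp if sum(w) == d)    # number of minimum-weight words: 3

# Product code: 3x3 arrays whose rows and columns are codewords, generated
# by the outer products of the two generator matrices.
basis = [[[gi * gj for gj in g2] for gi in g1] for g1 in G for g2 in G]
nonzero = []
for msg in product([0, 1], repeat=4):
    X = [[sum(m * B[i][j] for m, B in zip(msg, basis)) % 2
          for j in range(3)] for i in range(3)]
    if any(any(row) for row in X):
        nonzero.append(X)
weights = [sum(sum(row) for row in X) for X in nonzero]
print(min(weights), weights.count(min(weights)), d * d, a * a)  # 4 9 4 9
```

The brute-force count (9 code words of weight 4) agrees with the product of the component counts (3 × 3).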
Teleportation of a two-mode entangled coherent state encoded with two-qubit information
NASA Astrophysics Data System (ADS)
Mishra, Manoj K.; Prakash, Hari
2010-09-01
We propose a scheme to teleport a two-mode entangled coherent state encoded with two-qubit information, which is better than the two schemes recently proposed by Liao and Kuang (2007 J. Phys. B: At. Mol. Opt. Phys. 40 1183) and by Phien and Nguyen (2008 Phys. Lett. A 372 2825) in that our scheme gives a higher value of minimum assured fidelity and minimum average fidelity without using any nonlinear interactions. For the involved coherent states |±α⟩, the minimum average fidelity in our case is >=0.99 for |α| >= 1.6 (i.e. |α|² >= 2.6), while the previously proposed schemes referred to above report the same for |α| >= 5 (i.e. |α|² >= 25). Since it is very challenging to produce superposed coherent states of high coherent amplitude (|α|), our teleportation scheme is within the reach of modern technology.
Mijaljica, Goran
2014-03-01
Ethics has an established place within the medical curriculum. However, notable differences exist in the programme characteristics of different schools of medicine. This paper addresses the main differences in the curricula of medical schools in South East Europe regarding education in medical ethics and bioethics, with a special emphasis on research ethics, and proposes a model curriculum which incorporates significant topics in all three fields. Teaching curricula of medical schools in Bulgaria, Bosnia and Herzegovina, Croatia, Serbia, Macedonia and Montenegro were acquired and a total of 14 were analyzed. Teaching hours for medical ethics and/or bioethics and the year of study in which the course is taught were also analyzed. The average number of teaching hours in medical ethics and bioethics is 27.1 h per year. The highest national average number of teaching hours was in Croatia (47.5 h per year), and the lowest was in Serbia (14.8). In the countries of the European Union the mean number of hours given to ethics teaching throughout the complete curriculum was 44. In South East Europe, the maximum number of teaching hours is 60, while the minimum is 10. Research ethics topics also show considerable variance within the regional medical schools. Approaches to teaching research ethics vary, even within the same country. The proposed model for education in this area is based on the United Nations Educational, Scientific and Cultural Organization Bioethics Core Curriculum. The model curriculum consists of topics in medical ethics, bioethics and research ethics, as a single course, over 30 teaching hours.
Sources of Geomagnetic Activity during Nearly Three Solar Cycles (1972-2000)
NASA Technical Reports Server (NTRS)
Richardson, I. G.; Cane, H. V.; Cliver, E. W.; White, Nicholas E. (Technical Monitor)
2002-01-01
We examine the contributions of the principal solar wind components (corotating highspeed streams, slow solar wind, and transient structures, i.e., interplanetary coronal mass ejections (CMEs), shocks, and postshock flows) to averages of the aa geomagnetic index and the interplanetary magnetic field (IMF) strength in 1972-2000 during nearly three solar cycles. A prime motivation is to understand the influence of solar cycle variations in solar wind structure on long-term (e.g., approximately annual) averages of these parameters. We show that high-speed streams account for approximately two-thirds of long-term aa averages at solar minimum, while at solar maximum, structures associated with transients make the largest contribution (approx. 50%), though contributions from streams and slow solar wind continue to be present. Similarly, high-speed streams are the principal contributor (approx. 55%) to solar minimum averages of the IMF, while transient-related structures are the leading contributor (approx. 40%) at solar maximum. These differences between solar maximum and minimum reflect the changing structure of the near-ecliptic solar wind during the solar cycle. For minimum periods, the Earth is embedded in high-speed streams approx. 55% of the time versus approx. 35% for slow solar wind and approx. 10% for CME-associated structures, while at solar maximum, typical percentages are as follows: high-speed streams approx. 35%, slow solar wind approx. 30%, and CME-associated approx. 35%. These compositions show little cycle-to-cycle variation, at least for the interval considered in this paper. Despite the change in the occurrences of different types of solar wind over the solar cycle (and less significant changes from cycle to cycle), overall, variations in the averages of the aa index and IMF closely follow those in corotating streams. Considering solar cycle averages, we show that high-speed streams account for approx. 44%, approx. 48%, and approx. 
40% of the solar wind composition, aa, and the IMF strength, respectively, with corresponding figures of approx. 22%, approx. 32%, and approx. 25% for CME-related structures, and approx. 33%, approx. 19%, and approx. 33% for slow solar wind.
On Using a Space Telescope to Detect Weak-lensing Shear
NASA Astrophysics Data System (ADS)
Tung, Nathan; Wright, Edward
2017-11-01
Ignoring redshift dependence, the statistical performance of a weak-lensing survey is set by two numbers: the effective shape noise of the sources, which includes the intrinsic ellipticity dispersion and the measurement noise, and the density of sources that are useful for weak-lensing measurements. In this paper, we provide some general guidance for weak-lensing shear measurements from a “generic” space telescope by looking for the optimum wavelength bands to maximize the galaxy flux signal-to-noise ratio (S/N) and minimize ellipticity measurement error. We also calculate an effective galaxy number per square degree across different wavelength bands, taking into account the density of sources that are useful for weak-lensing measurements and the effective shape noise of sources. Galaxy data collected from the ultra-deep UltraVISTA Ks-selected and R-selected photometric catalogs (Muzzin et al. 2013) are fitted to radially symmetric Sérsic galaxy light profiles. The Sérsic galaxy profiles are then stretched to impose an artificial weak-lensing shear, and then convolved with a pure Airy Disk PSF to simulate imaging of weak gravitationally lensed galaxies from a hypothetical diffraction-limited space telescope. For our model calculations and sets of galaxies, our results show that the peak in the average galaxy flux S/N, the minimum average ellipticity measurement error, and the highest effective galaxy number counts all lie around the K-band near 2.2 μm.
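The galaxy light profiles fitted above follow the radially symmetric Sérsic form. The sketch below shows that form with the standard asymptotic approximation b_n ≈ 2n − 1/3 for the normalization constant; the approximation and the parameter values are textbook conventions, not numbers taken from the paper.

```python
# Minimal sketch of the radially symmetric Sersic surface-brightness profile:
#   I(r) = I_e * exp(-b_n * ((r / R_e)**(1/n) - 1))
# where I_e is the intensity at the effective (half-light) radius R_e and
# b_n ~ 2n - 1/3 is a common approximation, reasonable for n >~ 0.5.
import math

def sersic(r, I_e, R_e, n):
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * math.exp(-b_n * ((r / R_e) ** (1.0 / n) - 1.0))

# By construction the profile passes through I_e at r = R_e, for any n:
print(sersic(1.0, I_e=5.0, R_e=1.0, n=4.0))  # 5.0
```

A de Vaucouleurs bulge corresponds to n = 4 and an exponential disk to n = 1; shearing and PSF convolution, as in the paper, would operate on a 2-D image built from this radial profile.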
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
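The ANOVA route to a repeatability coefficient, and the step from r to a minimum number of measurements for a target accuracy, can be sketched as below. The formulas are the standard one-way variance-components expressions; the toy data are illustrative only, not the Brazil nut measurements.

```python
# Sketch: repeatability r from one-way ANOVA mean squares, and the minimum
# number of measurements m0 needed to reach a target accuracy R:
#   r  = (MSg - MSe) / (MSg + (m - 1) * MSe)
#   m0 = R * (1 - r) / ((1 - R) * r)
import math

def repeatability(data):
    """data: one measurement list per genotype, all of equal length m."""
    g, m = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (g * m)
    means = [sum(row) / m for row in data]
    ms_g = m * sum((mu - grand) ** 2 for mu in means) / (g - 1)
    ms_e = sum((x - mu) ** 2 for row, mu in zip(data, means)
               for x in row) / (g * (m - 1))
    return (ms_g - ms_e) / (ms_g + (m - 1) * ms_e)

def min_measurements(r, R=0.85):
    """Measurements needed to predict the true value with accuracy R."""
    return math.ceil(R * (1 - r) / ((1 - R) * r))

r = repeatability([[10, 12, 14], [20, 22, 24]])  # toy two-genotype data
print(round(r, 4), min_measurements(0.5))  # 0.9241 6
```

With a high r (genotypes well separated relative to measurement noise) one measurement can suffice at 85% accuracy, while a moderate r = 0.5 would require six, which is the kind of trade-off the study quantifies.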
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2011 CFR
2011-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2012 CFR
2012-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2010 CFR
2010-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
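Standards of the kind excerpted above call for batteries of statistical tests on the generator's outputs. The sketch below demonstrates two such checks, a chi-square frequency test and a lag-1 serial correlation test, run on Python's Mersenne Twister for a 75-object draw as in the bingo example; it is an illustration, not a certified compliance test.

```python
# Illustrative RNG checks: frequency (chi-square) and lag-1 serial
# correlation, applied to draws from a 75-object set. Demonstration only.
import random

def chi_square_uniform(draws, bins):
    """Chi-square statistic for uniformity of integer draws in [0, bins)."""
    counts = [0] * bins
    for d in draws:
        counts[d] += 1
    expected = len(draws) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

def lag1_correlation(xs):
    """Sample correlation between consecutive outcomes."""
    n = len(xs) - 1
    mx = sum(xs) / len(xs)
    num = sum((xs[i] - mx) * (xs[i + 1] - mx) for i in range(n))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

rng = random.Random(0)
draws = [rng.randrange(75) for _ in range(15000)]
chi2 = chi_square_uniform(draws, 75)
corr = lag1_correlation(draws)
# For a healthy generator, chi2 stays near its 74 degrees of freedom and
# the serial correlation stays near zero.
```

A failing generator would show a chi-square far above the degrees of freedom, or a serial correlation distinguishable from zero, signalling dependence between successive games.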
NASA Astrophysics Data System (ADS)
Moon, Ga-Hee
2011-06-01
It is generally believed that the occurrence of a magnetic storm depends upon the solar wind conditions, particularly the southward interplanetary magnetic field (IMF) component. To understand the relationship between solar wind parameters and magnetic storms, variations in magnetic field polarity and solar wind parameters during magnetic storms are examined. A total of 156 storms during the period 1997~2003 are used. According to the interplanetary driver, magnetic storms are divided into three types: coronal mass ejection (CME)-driven storms, co-rotating interaction region (CIR)-driven storms, and complicated-type storms. Complicated types were not included in this study. For this purpose, the manner in which the direction change of the IMF By and Bz components (in geocentric solar magnetospheric coordinates) during the main phase is related to the development of the storm is examined. The time-integrated solar wind parameters are compared with the time-integrated disturbance storm time (Dst) index during the main phase of each magnetic storm. The time lag with the storm size is also investigated. Some results are worth noting: CME-driven storms, under steady conditions of Bz < 0, represent more than half of the storms in number. That is, the average number of storms with a negative IMF Bz (T1~T4) is high, at 56.4%, 53.0%, and 63.7% in each storm category, respectively. However, for the CIR-driven storms, the percentage of moderate storms is only 29.2%, while the number of intense storms is more than half (60.0%) under the Bz < 0 condition. It is found that the correlation is highest between the time-integrated IMF Bz and the time-integrated Dst index for the CME-driven storms.
On the other hand, for the CIR-driven storms, a high correlation is found, with the correlation coefficient being 0.93, between the time-integrated Dst index and the time-integrated solar wind speed, while a low correlation, 0.51, is found between the time-integrated Bz and the time-integrated Dst index. The relationship between storm size and time lag, in terms of hours from the Bz minimum to the Dst minimum, is investigated. For the CME-driven storms, the time lag of 26% of moderate storms is one hour, whereas the time lag of 33% of moderate storms is two hours for the CIR-driven storms. The average values of solar wind parameters for the CME- and CIR-driven storms are also examined. The average values of |Dstmin| and |Bzmin| for the CME-driven storms are higher than those of the CIR-driven storms, while the average value of temperature is lower.
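The comparison described above, integrating a solar wind parameter and the Dst index over each storm's main phase and then correlating the integrals across storms, can be sketched as follows. The per-storm series are synthetic stand-ins, not the 156-storm data set.

```python
# Sketch: time-integrate two quantities over each storm's main phase and
# compute the Pearson correlation of the integrals across storms.
# The series below are hypothetical hourly values, for illustration only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    dx = sum((x - mx) ** 2 for x in xs) ** 0.5
    dy = sum((y - my) ** 2 for y in ys) ** 0.5
    return num / (dx * dy)

def time_integral(series, dt_hours=1.0):
    return sum(series) * dt_hours

bz_storms = [[-2, -4, -3], [-6, -8, -7], [-1, -2, -1]]      # nT, synthetic
dst_storms = [[-20, -40, -30], [-60, -80, -70], [-10, -20, -10]]
x = [time_integral(s) for s in bz_storms]
y = [time_integral(s) for s in dst_storms]
print(round(pearson(x, y), 6))  # 1.0 here, since the toy Dst is exactly 10 * Bz
```

Real storm data would give correlations below unity, and the paper's point is precisely that the coefficient differs between CME-driven and CIR-driven storms.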
Code of Federal Regulations, 2010 CFR
2010-04-01
... INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM Administrative Procedures, Student Counts, and Verifications § 39.214 What is the minimum number of instructional...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... Information Collection: Comment Request; Minimum Property Standards for Multifamily and Care-Type Facilities...: Minimum Property Standards for Multifamily and Care-type facilities. OMB Control Number, if applicable... Housing and Urban Development (HUD) developed the Minimum Property Standards (MPS) program in order to...
NASA Astrophysics Data System (ADS)
Manoj, B.; Kunjomana, A. G.
2015-02-01
The structural investigation of three Indian coals showed that structural parameters such as fa and Lc increased, whereas the interlayer spacing d002 decreased, with increasing carbon content, aromaticity, and coal rank. These structural parameters changed in the opposite sense with increasing volatile matter content. Assuming the 'turbostratic' structure for coals, the minimum separation between aromatic lamellae was found to vary between 3.34 and 3.61 Å for these coals. As the aromaticity increased, the interlayer spacing decreased, an indication of greater graphitization of the sample. Volatile matter and carbon content had a strong influence on the aromaticity, interlayer spacing, and stacking height of the sample. The average number of carbon atoms per aromatic lamella and the number of layers in a lamella were found to be 16-21 and 7-8, respectively, for all the samples.
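Interlayer spacings such as d002 in XRD studies of this kind follow Bragg's law, d = λ / (2 sin θ). The wavelength and peak position below are illustrative (Cu Kα radiation and a typical carbon (002) peak), not values reported in the paper.

```python
# Bragg's law sketch for an interlayer spacing: d = lambda / (2 * sin(theta)),
# with theta half the diffractometer's 2-theta angle. Illustrative numbers.
import math

def bragg_d(wavelength_angstrom, two_theta_deg):
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))

# A carbon (002) peak near 2-theta ~ 25.5 deg with Cu K-alpha (1.5406 A)
# gives a spacing in the ~3.4-3.6 A range quoted above.
d002 = bragg_d(1.5406, 25.5)
print(round(d002, 2))
```

As the (002) peak shifts to higher angle, the computed spacing drops toward the 3.35 Å of graphite, which is the graphitization trend the abstract describes.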
Mean-field theory for multipole ordering in f-electron systems on the basis of a j-j coupling scheme
NASA Astrophysics Data System (ADS)
Yamamura, Ryosuke; Hotta, Takashi
2018-05-01
We develop a microscopic theory for multipole ordering, applicable to systems with plural numbers of f electrons per ion, from an itinerant picture on the basis of a j-j coupling scheme. For this purpose, introducing the Γ8 Hubbard Hamiltonian as the minimum model for discussing multipole ordering in f-electron systems, we describe the mean-field approximation in terms of the multipole operators. For the case of n = 2, where n denotes the average f-electron number per ion, we analyze the model on a simple cubic lattice to obtain the multipole phase diagram. In particular, we find the order of non-Kramers Γ3 quadrupoles, O20 and O22, with different ordering vectors. We attempt to explain the phase diagram through a discussion of the interaction energy.
Design verification test matrix development for the STME thrust chamber assembly
NASA Technical Reports Server (NTRS)
Dexter, Carol E.; Elam, Sandra K.; Sparks, David L.
1993-01-01
This report presents the results of the test matrix development for design verification at the component level for the National Launch System (NLS) space transportation main engine (STME) thrust chamber assembly (TCA) components including the following: injector, combustion chamber, and nozzle. A systematic approach was used in the development of the minimum recommended TCA matrix resulting in a minimum number of hardware units and a minimum number of hot fire tests.
Quiet-Time Suprathermal ( 0.1-1.5 keV) Electrons in the Solar Wind
NASA Astrophysics Data System (ADS)
Wang, L.; Tao, J.; Zong, Q.; Li, G.; Salem, C. S.; Wimmer-Schweingruber, R. F.; He, J.; Tu, C.; Bale, S. D.
2016-12-01
We present a statistical survey of the energy spectrum of solar wind suprathermal (˜0.1-1.5 keV) electrons measured by the WIND/3DP instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. After separating (beaming) strahl electrons from (isotropic) halo electrons according to their different behaviors in the angular distribution, we fit the observed energy spectrum of both strahl and halo electrons at ˜0.1-1.5 keV to a Kappa distribution function with an index κ and effective temperature Teff. We also calculate the number density n and average energy Eavg of strahl and halo electrons by integrating the electron measurements between ˜0.1 and 1.5 keV. We find a strong positive correlation between κ and Teff for both strahl and halo electrons, and a strong positive correlation between the strahl n and halo n, likely reflecting the nature of the generation of these suprathermal electrons. In both solar cycles, κ is larger at solar minimum than at solar maximum for both strahl and halo electrons. The halo κ is generally smaller than the strahl κ (except during the solar minimum of cycle 23). The strahl n is larger at solar maximum, but the halo n shows no difference between solar minimum and maximum. Both the strahl n and halo n have no clear association with the solar wind core population, but the density ratio between the strahl and halo roughly anti-correlates (correlates) with the solar wind density (velocity).
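The Kappa distribution used in such spectral fits can be sketched as below. The functional form, f(E) ∝ (1 + E/(κ·T_eff))^−(κ+1), is a common convention (definitions vary slightly in the literature), and the parameter values are illustrative; its defining property is that for large κ it approaches the Maxwellian exp(−E/T_eff), while small κ gives a power-law suprathermal tail.

```python
# Sketch of a Kappa spectral form, f(E) ~ (1 + E/(kappa*T_eff))**-(kappa+1).
# Form and parameters are illustrative conventions, not the paper's fit code.
import math

def kappa_spectrum(E, kappa, T_eff):
    return (1.0 + E / (kappa * T_eff)) ** (-(kappa + 1.0))

E, T = 0.5, 0.1   # energy and effective temperature in keV, illustrative
maxwellian = math.exp(-E / T)
approx = kappa_spectrum(E, kappa=1e6, T_eff=T)
print(abs(approx - maxwellian) < 1e-3)  # True: the large-kappa limit
```

Smaller fitted κ therefore means a harder suprathermal tail, which is why the paper's solar-cycle trend in κ (larger at minimum than at maximum) is physically meaningful.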
40 CFR Table 3 to Subpart Jjjjjj... - Operating Limits for Boilers With Emission Limits
Code of Federal Regulations, 2013 CFR
2013-07-01
... as defined in § 63.11237. 4. Dry sorbent or activated carbon injection control Maintain the 30-day rolling average sorbent or activated carbon injection rate at or above the minimum sorbent injection rate or minimum activated carbon injection rate as defined in § 63.11237. When your boiler operates at...
Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds
NASA Technical Reports Server (NTRS)
Jardin, Matthew R.
2004-01-01
A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real time. By traveling minimum-time routes instead of great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts.
For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.
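The per-segment heading adjustment such routing performs is, at its core, the wind-triangle computation sketched below: crab into the crosswind so the ground track stays on course, at the cost of reduced along-track ground speed. This is a minimal sketch of the geometry, not the NOWR algorithm itself, and the speeds are illustrative.

```python
# Wind-triangle sketch: heading correction and ground speed for a given
# airspeed and wind. Illustrative geometry only, not the NOWR algorithm.
import math

def crab_and_groundspeed(airspeed, crosswind, tailwind=0.0):
    """Heading correction (rad) and along-track ground speed (same units)."""
    crab = math.asin(crosswind / airspeed)   # cancel the crosswind component
    return crab, airspeed * math.cos(crab) + tailwind

# Illustrative numbers: 250 m/s airspeed in a 50 m/s pure crosswind.
crab, gs = crab_and_groundspeed(250.0, 50.0)
print(round(math.degrees(crab), 1), round(gs, 1))  # 11.5 244.9
```

Repeating this at each wind measurement point along a candidate route, and trading crosswind penalty against tailwind gain, is what separates a minimum-time route from the great-circle route.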
Impact of socioeconomic status on municipal solid waste generation rate.
Khan, D; Kumar, A; Samadder, S R
2016-03-01
The solid waste generation rate was expected to vary among different socioeconomic groups due to many environmental and social factors. This paper reports the assessment of solid waste generation based on different socioeconomic parameters such as education, occupation, family income, and number of family members. A questionnaire survey was conducted in the study area to identify the different socioeconomic groups that may affect the solid waste generation rate and composition. The average waste generated in the municipality is 0.41 kg/capita/day, with the maximum waste generated by the lower middle socioeconomic group (LMSEG), at an average of 0.46 kg/capita/day. Waste characterization indicated that there was not much difference in the composition of wastes among the socioeconomic groups except for ash residue and plastic. Ash residue was found to increase toward the lower socioeconomic groups, with a maximum (31%) in the lower socioeconomic group (LSEG). The study area is a coal-based city, and the use of coal and wood as cooking fuel in the lower socioeconomic groups is the reason for the high ash content. Plastic waste is maximum (15%) in the higher socioeconomic group (HSEG) and minimum (1%) in the LSEG. Food waste is a major component of the generated waste in almost every socioeconomic group, with a maximum (38%) in the HSEG and a minimum (28%) in the LSEG. This study provides new insights on the role of various socioeconomic parameters in the generation of household wastes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Van Dyke, Miriam E; Komro, Kelli A; Shah, Monica P; Livingston, Melvin D; Kramer, Michael R
2018-07-01
Despite substantial declines since the 1960s, heart disease remains the leading cause of death in the United States (US), and geographic disparities in heart disease mortality have grown. State-level socioeconomic factors might be important contributors to geographic differences in heart disease mortality. This study examined the association between state-level minimum wage increases above the federal minimum wage and heart disease death rates from 1980 to 2015 among 'working age' individuals aged 35-64 years in the US. Annual, inflation-adjusted state and federal minimum wage data were extracted from legal databases, and annual state-level heart disease death rates were obtained from CDC WONDER. Although most minimum wage and health studies to date use conventional regression models, we employed marginal structural models to account for possible time-varying confounding. Quasi-experimental, marginal structural models accounting for state, year, and state × year fixed effects estimated the association between increases in the state-level minimum wage above the federal minimum wage and heart disease death rates. In models of 'working age' adults (35-64 years old), a $1 increase in the state-level minimum wage above the federal minimum wage was on average associated with ~6 fewer heart disease deaths per 100,000 (95% CI: -10.4, -1.99), or a state-level heart disease death rate that was 3.5% lower per year. In contrast, for older adults (65+ years old) a $1 increase was on average associated with a 1.1% lower state-level heart disease death rate per year (b = -28.9 per 100,000, 95% CI: -71.1, 13.3). State-level economic policies are important targets for population health research. Copyright © 2018 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, R.S.
1989-06-01
For a vehicle operating across arbitrarily-contoured terrain, finding the most fuel-efficient route between two points can be viewed as a high-level global path-planning problem with traversal costs and stability dependent on the direction of travel (anisotropic). The problem assumes a two-dimensional polygonal map of homogeneous cost regions for terrain representation constructed from elevation information. The anisotropic energy cost of vehicle motion has a non-braking component dependent on horizontal distance, a braking component dependent on vertical distance, and a constant path-independent component. The behavior of minimum-energy paths is then proved to be restricted to a small but optimal set of traversal types. An optimal-path-planning algorithm, using a heuristic search technique, reduces the infinite number of paths between the start and goal points to a finite number by generating sequences of goal-feasible window lists from analyzing the polygonal map and applying pruning criteria. The pruning criteria consist of visibility analysis, heading analysis, and region-boundary constraints. Each goal-feasible window list specifies an associated convex optimization problem, and the best of all locally-optimal paths through the goal-feasible window lists is the globally-optimal path. These ideas have been implemented in a computer program, with results showing considerably better performance than the exponential average-case behavior predicted.
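The anisotropic energy-cost model described above (a non-braking term on horizontal distance, a braking term on descended height, and a constant per-traversal term) can be sketched as a direction-dependent edge cost. The coefficients, and the explicit uphill-work term, are hypothetical choices for illustration, not the thesis's calibrated model.

```python
# Sketch of an anisotropic traversal cost: the same segment costs a
# different amount depending on the direction of travel. Coefficients
# are hypothetical, for illustration only.

def traversal_cost(dx, dz, k_roll=1.0, k_brake=0.5, k_fixed=2.0, k_climb=3.0):
    """Energy to cover horizontal distance dx with elevation change dz."""
    cost = k_fixed + k_roll * abs(dx)     # constant + horizontal-distance terms
    if dz > 0:
        cost += k_climb * dz              # work against gravity going uphill
    else:
        cost += k_brake * (-dz)           # braking loss going downhill
    return cost

# Anisotropy: traversing the same segment uphill costs more than downhill.
up = traversal_cost(100.0, 10.0)
down = traversal_cost(100.0, -10.0)
print(up, down)  # 132.0 107.0
```

It is this asymmetry (cost(A→B) ≠ cost(B→A)) that rules out standard isotropic shortest-path shortcuts and motivates the window-list search described in the abstract.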
Matthews, Leanna P; Parks, Susan E; Fournet, Michelle E H; Gabriele, Christine M; Womble, Jamie N; Klinck, Holger
2017-03-01
Source levels of harbor seal breeding vocalizations were estimated using a three-element planar hydrophone array near the Beardslee Islands in Glacier Bay National Park and Preserve, Alaska. The average source level for these calls was 144 dB RMS re 1 μPa at 1 m in the 40-500 Hz frequency band. Source level estimates ranged from 129 to 149 dB RMS re 1 μPa. Four call parameters, including minimum frequency, peak frequency, total duration, and pulse duration, were also measured. These measurements indicated that breeding vocalizations of harbor seals near the Beardslee Islands of Glacier Bay National Park are similar in duration (average total duration: 4.8 s, average pulse duration: 3.0 s) to previously reported values from other populations, but are 170-220 Hz lower in average minimum frequency (78 Hz).
Frank, Rachel M.; Kim, Jae; O’Donnell, Patrick Joseph; O’Brien, Michael; Newgren, Jonathan; Verma, Nikhil N.; Nicholson, Gregory P.; Cole, Brian J.; Romeo, Anthony A.; Provencher, Matthew T.
2017-01-01
Objectives: Recently, the use of fresh distal tibia allograft (DTA) for glenoid reconstruction in anterior shoulder instability has been described, with encouraging short-term outcomes; however, there are few comparative data versus the Latarjet procedure, long considered the gold standard for bone loss treatment. Thus, the purpose of this study was to determine the clinical outcomes for patients undergoing DTA compared to a matched cohort of patients undergoing Latarjet. Methods: A review of prospectively collected data of patients with a minimum 15% anterior glenoid bone loss who underwent shoulder stabilization with either DTA or Latarjet with a minimum follow-up of 2 years was conducted. Consecutive patients undergoing DTA were matched by age, body mass index, and number of previous ipsilateral shoulder surgeries to patients undergoing Latarjet in a 1-to-1 format. Patients were evaluated preoperatively and at a minimum 2 years postoperatively with American Shoulder and Elbow Surgeons (ASES), Single Assessment Numeric Evaluation (SANE), and Western Ontario Shoulder Instability Index (WOSI) outcomes assessments. Complications, reoperations, and episodes of recurrent instability were also analyzed. Statistical analysis was performed with Student t-tests, with P<0.05 considered significant. Results: A total of 60 patients (30 Latarjet, 30 DTA) with an average age of 26.5±7.8 years were analyzed at an average 46±17 months (range, 24-87) following surgery. Twenty-two patients (73%) in each group underwent ipsilateral shoulder surgery (range, 1 to 3 surgeries) prior to Latarjet or DTA. There were no statistical differences in age, BMI, or number of prior surgeries between the groups. There were no differences between the groups with regard to recurrent instability events, subluxation, or apprehension on final examination (P>0.8). Patients in both groups experienced significant improvements in all outcomes scores following surgery (P<0.05 for all). 
When comparing final outcomes of Latarjet versus DTA, no significant differences were found in postoperative ASES, WOSI or SANE scores between the groups (P>0.05 for all). In the Latarjet group, 1 patient underwent reoperation (3.3%) with arthroscopic debridement with subacromial decompression for persistent anterolateral shoulder pain. In the DTA group, 1 patient (3.3%) underwent reoperation with DTA revision for asymptomatic hardware failure. There were no cases of neurovascular injuries or other complications in either cohort. Conclusion: At an average follow-up of nearly 4 years, fresh DTA reconstruction for recurrent anterior shoulder instability results in a clinically stable joint with similar clinical outcomes and recurrence rates compared to Latarjet. Longer-term studies are needed to determine if these results are maintained over time.
10 CFR 905.16 - What are the requirements for the minimum investment report alternative?
Code of Federal Regulations, 2010 CFR
2010-01-01
... number, email and Website if applicable, and contact person; (2) Authority or requirement to undertake a..., in writing, a minimum investment report every 5 years. (h) Maintaining minimum investment reports. (1...
Short-term Lost Productivity per Victim: Intimate Partner Violence, Sexual Violence, or Stalking.
Peterson, Cora; Liu, Yang; Kresnow, Marcie-Jo; Florence, Curtis; Merrick, Melissa T; DeGue, Sarah; Lokey, Colby N
2018-07-01
The purpose of this study is to estimate victims' lifetime short-term lost productivity because of intimate partner violence, sexual violence, or stalking. U.S. nationally representative data from the 2012 National Intimate Partner and Sexual Violence Survey were used to estimate a regression-adjusted average per victim (female and male) and total population number of cumulative short-term lost work and school days (or lost productivity) because of victimizations over victims' lifetimes. Victims' lost productivity was valued using a U.S. daily production estimate. Analysis was conducted in 2017. Non-institutionalized adults with some lifetime exposure to intimate partner violence, sexual violence, or stalking (n=6,718 respondents; survey-weighted n=130,795,789) reported nearly 741 million lost productive days because of victimizations by an average of 2.5 perpetrators per victim. The adjusted per victim average was 4.9 (95% CI=3.9, 5.9) days, controlling for victim, perpetrator, and violence type factors. The estimated societal cost of this short-term lost productivity was $730 per victim, or $110 billion across the lifetimes of all victims (2016 USD). Factors associated with victims having a higher number of lost days included a higher number of perpetrators and being female, as well as sexual violence, physical violence, or stalking victimization by an intimate partner perpetrator, stalking victimization by an acquaintance perpetrator, and sexual violence or stalking victimization by a family member perpetrator. Short-term lost productivity represents a minimum economic valuation of the immediate negative effects of intimate partner violence, sexual violence, and stalking. Victims' lost productivity affects family members, colleagues, and employers. Published by Elsevier Inc.
Pauling, Linus
1989-01-01
Consideration of the relation between bond length and bond number and the average atomic volume for different ways of packing atoms leads to the conclusion that the average ligancy of atoms in a metal should increase when a phase change occurs on increasing the pressure. Minimum volume for each value of the ligancy results from triangular coordination polyhedra (with triangular faces), such as the icosahedron and the Friauf polyhedron. Electron transfer may permit atoms of an element to assume different ligancies. Application of these principles to Cs(IV) and Cs(V), which were previously assigned structures with ligancy 8 and 6, respectively, has led to the assignment to Cs(IV) of a primitive cubic unit cell with a = 16.11 Å and with about 122 atoms in the cube and to Cs(V) of a primitive cubic unit cell resembling that of Mg32(Al,Zn)49, with a = 16.97 Å and with 162 atoms in the cube. PMID:16578839
The stretch to stray on time: Resonant length of random walks in a transient
NASA Astrophysics Data System (ADS)
Falcke, Martin; Friedhoff, Victor Nicolai
2018-05-01
First-passage times in random walks have a vast number of diverse applications in physics, chemistry, biology, and finance. In general, environmental conditions for a stochastic process are not constant on the time scale of the average first-passage time, or control might be applied to reduce noise. We investigate moments of the first-passage time distribution under an exponential transient describing relaxation of environmental conditions. We solve the Laplace-transformed (generalized) master equation analytically using a novel method that is applicable to general state schemes. The first-passage time from one end to the other of a linear chain of states is our application for the solutions. The dependence of its average on the relaxation rate obeys a power law for slow transients. The exponent ν depends on the chain length N like ν = -N/(N + 1) to leading order. Slow transients substantially reduce the noise of first-passage times expressed as the coefficient of variation (CV), even if the average first-passage time is much longer than the transient. The CV has a pronounced minimum for some lengths, which we call resonant lengths. These results also suggest a simple and efficient noise control strategy and are closely related to the timing of repetitive excitations, coherence resonance, and information transmission by noisy excitable systems. A resonant number of steps from the inhibited state to the excitation threshold and slow recovery from negative feedback provide optimal timing noise reduction and information transmission.
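The quantities studied above (mean first-passage time and its CV on a linear chain of states) can be estimated by simple Monte Carlo simulation. The sketch below uses a constant-environment biased walk as a baseline; the exponential transient in the environmental conditions, central to the paper, is deliberately not modeled, and the bias parameter is an assumption for illustration.

```python
import random
import statistics

def first_passage_steps(n_states, p_forward=0.6, rng=random):
    """Steps for a biased random walk on a linear chain of states
    0..n_states-1 to reach the last state from the first (reflecting
    boundary at state 0)."""
    pos, steps = 0, 0
    while pos < n_states - 1:
        if pos == 0 or rng.random() < p_forward:
            pos += 1
        else:
            pos -= 1
        steps += 1
    return steps

def fpt_mean_cv(n_states, trials=2000, seed=1):
    """Monte Carlo estimate of the mean first-passage time and its
    coefficient of variation (CV = std / mean)."""
    rng = random.Random(seed)
    samples = [first_passage_steps(n_states, rng=rng) for _ in range(trials)]
    mean = statistics.fmean(samples)
    return mean, statistics.pstdev(samples) / mean
```

Scanning `fpt_mean_cv` over chain lengths N is the kind of sweep in which the paper's resonant lengths (local minima of the CV under a slow transient) would show up.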
Mapping Atmospheric Moisture Climatologies across the Conterminous United States
Daly, Christopher; Smith, Joseph I.; Olson, Keith V.
2015-01-01
Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. 
All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026
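The local-regression core of the CAI step described above can be sketched as a weighted least-squares fit of the interpolated climate variable against the predictor-grid value at nearby stations. The weights here stand in for PRISM's physiographic-similarity weighting (distance, elevation, coastal proximity, vertical layer, topographic position); the function name and signature are illustrative, not PRISM's.

```python
def cai_local_regression(predictor_vals, station_vals, weights):
    """Weighted least-squares fit, for one grid cell, of station values of
    the climate variable (e.g., dew point) against the co-located values of
    the predictor grid (e.g., 1981-2010 mean temperature). Returns the
    (slope, intercept) of the local regression function."""
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, predictor_vals)) / sw
    my = sum(w * y for w, y in zip(weights, station_vals)) / sw
    sxx = sum(w * (x - mx) ** 2 for w, x in zip(weights, predictor_vals))
    sxy = sum(w * (x - mx) * (y - my)
              for w, x, y in zip(weights, predictor_vals, station_vals))
    slope = sxy / sxx
    return slope, my - slope * mx
```

Evaluating the fitted function at the grid cell's own predictor value would then give the interpolated estimate for that cell.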
Efficacy of function specific 3D-motifs in enzyme classification according to their EC-numbers.
Rahimi, Amir; Madadkar-Sobhani, Armin; Touserkani, Rouzbeh; Goliaei, Bahram
2013-11-07
Due to the increasing number of protein structures with unknown function originated from structural genomics projects, protein function prediction has become an important subject in bioinformatics. Among diverse function prediction methods, exploring known 3D-motifs, which are associated with functional elements in unknown protein structures is one of the most biologically meaningful methods. Homologous enzymes inherit such motifs in their active sites from common ancestors. However, slight differences in the properties of these motifs, results in variation in the reactions and substrates of the enzymes. In this study, we examined the possibility of discriminating highly related active site patterns according to their EC-numbers by 3D-motifs. For each EC-number, the spatial arrangement of an active site, which has minimum average distance to other active sites with the same function, was selected as a representative 3D-motif. In order to characterize the motifs, various points in active site elements were tested. The results demonstrated the possibility of predicting full EC-number of enzymes by 3D-motifs. However, the discriminating power of 3D-motifs varies among different enzyme families and depends on selecting the appropriate points and features. © 2013 Elsevier Ltd. All rights reserved.
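The representative-selection rule stated above (for each EC-number, pick the active site with minimum average distance to the others) is a medoid computation, sketched below. The distance function and site encoding are assumptions; any symmetric dissimilarity between active-site spatial arrangements would slot in.

```python
def representative_motif(distance, sites):
    """Pick the representative 3D-motif for one EC-number: the active site
    whose average distance to all other sites with the same function is
    minimal (the medoid). `distance(a, b)` is any symmetric dissimilarity;
    `sites` is a list of active-site descriptors (hypothetical encoding)."""
    def avg_dist(s):
        others = [t for t in sites if t is not s]
        return sum(distance(s, t) for t in others) / len(others)
    return min(sites, key=avg_dist)
```

With scalar stand-ins for sites and absolute difference as the dissimilarity, the medoid of {0, 1, 2, 10} is 1.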
NASA Technical Reports Server (NTRS)
Emery, Barbara A.; Richardson, Ian G.; Evans, David S.; Rich, Frederick J.; Wilson, Gordon R.
2011-01-01
The behavior of a number of solar wind, radiation belt, auroral and geomagnetic parameters is examined during the recent extended solar minimum and previous solar cycles, covering the period from January 1972 to July 2010. This period includes most of the solar minimum between Cycles 23 and 24, which was more extended than recent solar minima, with historically low values of most of these parameters in 2009. Solar rotational periodicities from 5 to 27 days were found from daily averages over 81 days for the parameters. There were very strong 9-day periodicities in many variables in 2005-2008, triggered by recurring corotating high-speed streams (HSS). All rotational amplitudes were relatively large in the descending and early minimum phases of the solar cycle, when HSS are the predominant solar wind structures. There were minima in the amplitudes of all solar rotational periodicities near the end of each solar minimum, as well as at the start of the reversal of the solar magnetic field polarity at solar maximum (approximately 1980, 1990, and 2001) when the occurrence frequency of HSS is relatively low. Semiannual equinoctial periodicities, which were relatively strong in the 1995-1997 solar minimum, were found to be primarily the result of the changing amplitudes of the 13.5- and 27-day periodicities, where 13.5-day amplitudes were better correlated with heliospheric daily observations and 27-day amplitudes correlated better with Earth-based daily observations. The equinoctial rotational amplitudes of the Earth-based parameters were probably enhanced by a combination of the Russell-McPherron effect and a reduction in the solar wind-magnetosphere coupling efficiency during solstices. The rotational amplitudes were cross-correlated with each other, where the 27-day amplitudes showed some of the weakest cross-correlations. 
The rotational amplitudes of the >2 MeV radiation belt electron number fluxes were progressively weaker from 27- to 5-day periods, showing that processes in the magnetosphere act as a low-pass filter between the solar wind and the radiation belt. The Ap/Kp magnetic currents observed at subauroral latitudes are sensitive to proton auroral precipitation, especially for 9-day and shorter periods, while the Ap/Kp currents are governed by electron auroral precipitation for 13.5- and 27-day periodicities.
Kim, Jonghyeon; Dally, Leonard G; Ederer, Fred; Gaasterland, Douglas E; VanVeldhuisen, Paul C; Blackwell, Beth; Sullivan, E Kenneth; Prum, Bruce; Shafranov, George; Beck, Allen; Spaeth, George L
2004-11-01
To determine the least worsening of a visual field (VF) and the least number of confirming tests needed to identify progression of glaucomatous VF defects. Cohort study of participants in a clinical trial. Seven hundred fifty-two eyes of 565 patients with advanced glaucoma. Visual field tests were quantified with the Advanced Glaucoma Intervention Study (AGIS) VF defect score and the Humphrey Field Analyzer mean deviation (MD). Follow-up was 8 to 13 years. Two measures based on the AGIS VF defect score: (1) sustained decrease of VF (SDVF), a worsening from baseline by 2 (alternatively, 3 or 4) or more units and sustained for 2 (alternatively, 3) consecutive 6-month visits and (2) after the occurrence of SDVF, the average percent of eyes with worsening by 2 (alternatively, 3 or 4) or more units from baseline. Two similar measures based on MD. Based on the original AGIS criteria for SDVF (a worsening of 4 units in the AGIS score sustained during 3 consecutive 6-month visits), 31% of eyes had an SDVF. The percent of eyes with a sustained event increases by approximately 10% when either the minimum number of units of field loss or the minimum number of 6-month visits during which the loss is sustained decreases by 1. During 3 years of follow-up after a sustained event, a worsening of at least 2 units was found in 72% of eyes that had a 2-visit sustained event. The same worsening was found in 84% of eyes that had a 3-visit sustained event. Through the next 10 years after a sustained event, based on worsening of 2, 3, or 4 units at 2 or 3 consecutive tests, the loss reoccurred, on average, in ≥75% of study eyes. Results for MD are similar. In patients with advanced glaucoma, a single confirmatory test 6 months after a VF worsening indicates with at least 72% probability a persistent defect when the worsening is defined by at least 2 units of AGIS score or by at least 2 decibels of MD. 
When the number of confirmatory tests is increased from 1 to 2, the percentage of eyes that show a persistent defect increases from 72% to 84%.
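The SDVF criterion defined above (a worsening from baseline by a minimum number of units, sustained over a minimum number of consecutive 6-month visits) can be sketched as a simple scan over the visit series. The function is an illustrative sketch; only the default thresholds (4 units, 3 visits) come from the original AGIS criteria.

```python
def sustained_decrease(scores, baseline, min_loss=4, min_visits=3):
    """Detect a sustained decrease of visual field (SDVF): a worsening from
    `baseline` by `min_loss` or more units of the AGIS defect score,
    sustained over `min_visits` consecutive 6-month visits. Returns the
    index of the first visit of the sustained run, or None."""
    run = 0
    for i, score in enumerate(scores):
        if score - baseline >= min_loss:
            run += 1
            if run >= min_visits:
                return i - min_visits + 1
        else:
            run = 0  # the worsening was not sustained; restart the run
    return None
```

Loosening `min_loss` or `min_visits` by 1 corresponds to the roughly 10% increase in detected sustained events reported above.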
Melt density and the average composition of basalt
NASA Technical Reports Server (NTRS)
Stolper, E.; Walker, D.
1980-01-01
Densities of residual liquids produced by low pressure fractionation of olivine-rich melts pass through a minimum when pyroxene and plagioclase join the crystallization sequence. The observation that erupted basalt compositions cluster around the degree of fractionation from picritic liquids corresponding to the density minimum in the liquid line of descent may thus suggest that the earth's crust imposes a density filter on the liquids that pass through it, favoring the eruption of the light liquids at the density minimum over the eruption of denser more fractionated and less fractionated liquids.
Worldwide Report, Environmental Quality, No. 388, China Addresses Environmental Issues -- IV
1983-03-04
resources and collaborate in a joint effort, the large helping the small, and the strong leading the weak. The Hanxiang Plant which produces fermented bean...1981 the rain falling on Chongqing had an average pH of 4.64 and a minimum value of 3. A pH of 3 is similar to that of vinegar. This minimum value is
NASA Astrophysics Data System (ADS)
Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.
2016-11-01
Extreme rainfall events are the main triggering causes for hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data related to past events and causal factors, and the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley in the north-eastern Italian Alps, which is prone to debris flows and floods. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability. These result in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.
1992-05-01
Plate Figure H-1. Temperature Coefficient Test Circuit. The forward voltage was measured at 3 different temperatures. The average TC was calculated to be...AT, rather than the average figure given by the large area isolation diffusion. The peak temperature, rather than the average temperature, is the...components would cause the temperatures of the components to be nearer the average, particularly those near the minimum and maximum. The largest
Noise pollution levels in the pediatric intensive care unit.
Kramer, Bree; Joshi, Prashant; Heard, Christopher
2016-12-01
Patients and staff may experience adverse effects from exposure to noise. This study assessed noise levels in the pediatric intensive care unit and evaluated family and staff opinion of noise. Noise levels were recorded using a NoisePro DLX. The microphone was 1 m from the patient's head. The noise level was averaged each minute, and levels above 70 and 80 dBA were recorded. The maximum, minimum, and average decibel levels were calculated, and peak noise levels greater than 100 dBA were also recorded. A parent questionnaire concerning their evaluation of the noisiness of the bedside was completed. The bedside nurse also completed a questionnaire. The average maximum level for all patients was 82.2 dBA. The average minimum level was 50.9 dBA. The average daily bedside noise level was 62.9 dBA. The average percentage of time during which the noise level was higher than 70 dBA was 2.2%. The average percentage of time during which the noise level was higher than 80 dBA was 0.1%. Patients experienced an average of 115 min/d during which peak noise was greater than 100 dBA. The parents and staff identified the monitors as the major contribution to noise. Patients experienced levels of noise greater than 80 dBA. Patients experience peak noise levels in excess of 100 dBA during their pediatric intensive care unit stay. Copyright © 2016 Elsevier Inc. All rights reserved.
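The per-day metrics reported above (maximum, minimum, mean level, and percentage of time above 70 and 80 dBA) can be computed from per-minute readings as sketched below. Note the hedge in the comment: a true equivalent continuous level (Leq) would energy-average the readings; the plain arithmetic mean here is a simplification.

```python
def noise_summary(minute_levels_dba):
    """Summarize one day of per-minute A-weighted noise levels: max, min,
    arithmetic mean, and percent of minutes above 70 and 80 dBA. (A true
    Leq would average 10**(L/10) terms; the plain mean is a sketch.)"""
    n = len(minute_levels_dba)
    return {
        "max_dba": max(minute_levels_dba),
        "min_dba": min(minute_levels_dba),
        "mean_dba": sum(minute_levels_dba) / n,
        "pct_above_70": 100.0 * sum(l > 70 for l in minute_levels_dba) / n,
        "pct_above_80": 100.0 * sum(l > 80 for l in minute_levels_dba) / n,
    }
```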
Gadoury, R.A.; Smath, J.A.; Fontaine, R.A.
1985-01-01
The report documents the results of a study of the cost-effectiveness of the U.S. Geological Survey's continuous-record stream-gaging programs in Massachusetts and Rhode Island. Data uses and funding sources were identified for 91 gaging stations. Some stations in Massachusetts are being operated to provide data for two special-purpose hydrologic studies, and they are planned to be discontinued at the conclusion of the studies. Cost-effectiveness analyses were performed on 63 continuous-record gaging stations in Massachusetts and 15 stations in Rhode Island, at budgets of $353,000 and $60,500, respectively. Current operations policies result in average standard errors per station of 12.3% in Massachusetts and 9.7% in Rhode Island. Minimum possible budgets to maintain the present numbers of gaging stations in the two States are estimated to be $340,000 and $59,000, with average errors per station of 12.8% and 10.0%, respectively. If the present budget levels were doubled, average standard errors per station would decrease to 8.1% and 4.2%, respectively. Further budget increases would not improve the standard errors significantly. (USGS)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-16
... on 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex... (GPS) Y RWY 20, Amdt 1B Cambridge, MN, Cambridge Muni, Takeoff Minimums and Obstacle DP, Orig Pipestone, MN, Pipestone Muni, NDB RWY 36, Amdt 7, CANCELLED Rushford, MN, Rushford Muni, Takeoff Minimums and...
40 CFR 86.1370-2007 - Not-To-Exceed test procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... that include discrete regeneration events and that send a recordable electronic signal indicating the start and end of the regeneration event, determine the minimum averaging period for each NTE event that... averaging period is used to determine whether the individual NTE event is a valid NTE event. For engines...
40 CFR 69.41 - New exemptions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... operating specifications. At a minimum, the wind direction data will be monitored, collected and reported as 1-hour averages, starting on the hour. If the average wind direction for a given hour is from within the designated sector, the wind will be deemed to have flowed from within the sector for that hour...
40 CFR 69.41 - New exemptions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operating specifications. At a minimum, the wind direction data will be monitored, collected and reported as 1-hour averages, starting on the hour. If the average wind direction for a given hour is from within the designated sector, the wind will be deemed to have flowed from within the sector for that hour...
NASA Astrophysics Data System (ADS)
Hammer, Patrick R.
It is well established that natural flyers flap their wings to sustain flight due to poor performance of steady wing aerodynamics at low Reynolds number. Natural flyers also benefit from the propulsive force generated by flapping. Unsteady airfoils allow for simplified study of flapping wing aerodynamics. Limited previous work has suggested that both the Reynolds number and motion trajectory asymmetry play a non-negligible role in the resulting forces and wake structure of an oscillating airfoil. In this work, computations are performed on this topic for a NACA 0012 airfoil purely pitching about its quarter-chord point. Two-dimensional computations are undertaken using the high-order, extensively validated FDL3DI Navier-Stokes solver developed at Wright-Patterson Air Force Base. The Reynolds number range of this study is 2,000-22,000, reduced frequencies as high as 16 are considered, and the pitching amplitude varies from 2° to 10°. In order to simulate the incompressible limit with the current compressible solver, freestream Mach numbers as low as 0.005 are used. The wake structure is accurately resolved using an overset grid approach. The results show that the streamwise force depends on Reynolds number such that the drag-to-thrust crossover reduced frequency decreases with increasing Reynolds number at a given amplitude. As the amplitude increases, the crossover reduced frequency decreases at a given Reynolds number. The crossover frequency data show good collapse for all pitching amplitudes considered when expressed as the Strouhal number based on trailing-edge amplitude for different Reynolds numbers. Appropriate scaling causes the thrust data to become nearly independent of Reynolds number and amplitude. An increase in propulsive efficiency is observed as the Reynolds number increases while less dependence is seen in the peak-to-peak lift and drag amplitudes. Reynolds number dependence is also seen for the wake structure. 
The crossover reduced frequency to produce a switch in the wake vortex configuration from von Karman (drag) to reverse von Karman (thrust) patterns decreases as the Reynolds number increases. As the pitching amplitude increases, more complex structures form in the wake, particularly at the higher Reynolds numbers considered. Although both the transverse and streamwise spacing depend on amplitude, the vortex array aspect ratio is nearly amplitude independent for each Reynolds number. Motion trajectory asymmetry produces a non-zero average lift and a decrease in average drag. Decomposition of the lift demonstrates that the majority of the average lift is a result of the component from average vortex (circulatory) lift. The average lift is positive at low reduced frequency, but as the reduced frequency increases at a given motion asymmetry, an increasing amount of negative lift occurs over a greater portion of the oscillation cycle, and eventually causes a switch in the sign of the lift. The maximum value, minimum value, and peak-to-peak amplitude of the lift and drag increase with increasing reduced frequency and asymmetry. The wake structure becomes complex with an asymmetric motion trajectory. A faster pitch-up produces a single positive vortex and one or more negative vortices, the number of which depends on the reduced frequency and asymmetry. When the airfoil motion trajectory is asymmetric, the vortex trajectories and properties in the wake exhibit asymmetric behavior.
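The trailing-edge-amplitude Strouhal number under which the crossover data are said to collapse can be sketched as below. The small-angle geometric approximation for the trailing-edge excursion and the parameter names are assumptions for illustration, not the dissertation's exact definitions.

```python
import math

def trailing_edge_strouhal(freq_hz, chord, pitch_amp_deg, pivot_frac, u_inf):
    """Strouhal number based on trailing-edge amplitude for a pitching
    airfoil: St = f * A_te / U_inf. The peak-to-peak trailing-edge
    excursion A_te is approximated from the pitch amplitude about a pivot
    located at `pivot_frac` of the chord (0.25 for quarter-chord)."""
    a_te = 2.0 * (1.0 - pivot_frac) * chord * math.sin(math.radians(pitch_amp_deg))
    return freq_hz * a_te / u_inf
```

Plotting the drag-to-thrust crossover condition against this Strouhal number, rather than against reduced frequency alone, is what produces the reported collapse across pitching amplitudes.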
An Earth longwave radiation climate model
NASA Technical Reports Server (NTRS)
Yang, S. K.
1984-01-01
An Earth outgoing longwave radiation (OLWR) climate model was constructed for radiation budget study. Required information is provided by an empirical 100-mb water vapor mixing ratio equation and a mixing ratio interpolation scheme. Cloud top temperature is adjusted so that the calculation agrees with NOAA scanning radiometer measurements. Both clear sky and cloudy sky cases are calculated and discussed for global average, zonal average, and world-wide distributed cases. The results agree well with the satellite observations. The clear sky case shows that the OLWR field is highly modulated by water vapor, especially in the tropics. The strongest longitudinal variation occurs in the tropics. This variation can be mostly explained by the strong water vapor gradient. Although in the zonal average case the tropics have a minimum in OLWR, the minimum is essentially contributed by a few very low flux regions, such as the Amazon, Indonesia, and the Congo.
Time-of-day effects on voice range profile performance in young, vocally untrained adult females.
van Mersbergen, M R; Verdolini, K; Titze, I R
1999-12-01
Time-of-day effects on voice range profile performance were investigated in 20 vocally healthy untrained women between the ages of 18 and 35 years. Each subject produced two complete voice range profiles: one in the morning and one in the evening, about 36 hours apart. The order of morning and evening trials was counterbalanced across subjects. Dependent variables were (1) average minimum and average maximum intensity, (2) voice range profile area, and (3) center of gravity (median semitone pitch and median intensity). In this study, the results failed to reveal any clear evidence of time-of-day effects on voice range profile performance, for any of the dependent variables. However, a reliable interaction of time-of-day and trial order was obtained for average minimum intensity. Investigation of other subject populations, in particular trained vocalists or those with laryngeal lesions, is required for any generalization of the results.
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.
Parameter Estimation as a Problem in Statistical Thermodynamics.
Earle, Keith A; Schneider, David J
2011-03-14
In this work, we explore the connections between parameter fitting and statistical thermodynamics using the maxent principle of Jaynes as a starting point. In particular, we show how signal averaging may be described by a suitable one particle partition function, modified for the case of a variable number of particles. These modifications lead to an entropy that is extensive in the number of measurements in the average. Systematic error may be interpreted as a departure from ideal gas behavior. In addition, we show how to combine measurements from different experiments in an unbiased way in order to maximize the entropy of simultaneous parameter fitting. We suggest that fit parameters may be interpreted as generalized coordinates and the forces conjugate to them may be derived from the system partition function. From this perspective, the parameter fitting problem may be interpreted as a process where the system (spectrum) does work against internal stresses (non-optimum model parameters) to achieve a state of minimum free energy/maximum entropy. Finally, we show how the distribution function allows us to define a geometry on parameter space, building on previous work [1, 2]. This geometry has implications for error estimation and we outline a program for incorporating these geometrical insights into an automated parameter fitting algorithm.
Enhancing listener strategies using a payoff matrix in speech-on-speech masking experiments.
Thompson, Eric R; Iyer, Nandini; Simpson, Brian D; Wakefield, Gregory H; Kieras, David E; Brungart, Douglas S
2015-09-01
Speech recognition was measured as a function of the target-to-masker ratio (TMR) with syntactically similar speech maskers. In the first experiment, listeners were instructed to report keywords from the target sentence. Data averaged across listeners showed a plateau in performance below 0 dB TMR when masker and target sentences were from the same talker. In this experiment, some listeners tended to report the target words at all TMRs in accordance with the instructions, while others reported keywords from the louder of the sentences, contrary to the instructions. In the second experiment, stimuli were the same as in the first experiment, but listeners were also instructed to avoid reporting the masker keywords, and a payoff matrix penalizing masker keywords and rewarding target keywords was used. In this experiment, listeners reduced the number of reported masker keywords, and increased the number of reported target keywords overall, and the average data showed a local minimum at 0 dB TMR with same-talker maskers. The best overall performance with a same-talker masker was obtained with a level difference of 9 dB, where listeners achieved near perfect performance when the target was louder, and at least 80% correct performance when the target was the quieter of the two sentences.
García-Martínez, Pedro; Lozano-Vidal, Ruth; Herraiz-Ortiz, María Del Carmen; Collado-Boira, Eladio
To evaluate the acquisition of research and public health skills by specialists in family and community nursing. Descriptive and analytical study of a population of specialist nurses belonging to the Valencian Primary Nurse Society. Data were collected with an anonymous self-administered questionnaire on the activities implemented and the time devoted to them during the training period. The questionnaire was constructed and reviewed on the basis of the training programme of the specialty. Sixteen of the 41 specialists responded. The four year-groups of nurses who had finished their training were represented, as well as seven national teaching units. The results show high heterogeneity in the activities developed during training. The average rotation in public health is 7.07 weeks, with a range of 0 to 16 weeks. The mean number of educational sessions is 2.69 over the two years. The average number of research projects is 1.19. The results show a specialisation process with training gaps in research and public health skills that could be remedied. Some practitioners report that they finish their specialisation without undertaking research activities or completing the minimum proposed rotations. There is no process of improvement across the four year-groups studied. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Hopkins, Carl
2011-05-01
In architectural acoustics, noise control and environmental noise, there are often steady-state signals for which it is necessary to measure the spatial-average sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that a circle, helix, or cylindrical-type path satisfies the benchmark requirement, with the latter two paths being highly efficient at generating a large number of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations.
Airborne observations of cloud condensation nuclei spectra and aerosols over East Inner Mongolia
NASA Astrophysics Data System (ADS)
Yang, Jiefan; Lei, Hengchi; Lü, Yuhuan
2017-08-01
A set of vertical profiles of aerosol number concentrations, size distributions and cloud condensation nuclei (CCN) spectra was observed using a passive cavity aerosol spectrometer probe (PCASP) and a cloud condensation nuclei counter over the Tongliao area, East Inner Mongolia, China. The results showed that the average aerosol number concentration in this region was much lower than that in heavily polluted areas. Monthly average aerosol number concentrations within the boundary layer reached a maximum in May and a minimum in September, and the variations in CCN number concentrations at different supersaturations showed the same trend. The parameters c and k of the empirical function N = cS^k were 539 and 1.477 under clean conditions, and their counterparts under polluted conditions were 1615 and 1.42. Measurements from the airborne probe mounted on a Yun-12 (Y12) aircraft, together with Hybrid Single-Particle Lagrangian Integrated Trajectory model backward trajectories, indicated that the air mass from the south of Tongliao contained a high concentration of aerosol particles (1000-2500 cm^-3) in the middle and lower parts of the troposphere. Moreover, a detailed intercomparison of data obtained on two days in 2010 indicated that the activation efficiency, in terms of the ratio of N_CCN to N_a (aerosols measured from the PCASP), was 0.74 (at 0.4% supersaturation) when the air mass mainly came from south of Tongliao, and this value increased to 0.83 on the relatively cleaner day. Thus, long-range transport of anthropogenic pollutants from heavily polluted megacities, such as Beijing and Tianjin, may result in slightly decreased activation efficiencies.
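The empirical activation spectrum N = cS^k can be evaluated directly with the fit parameters reported above; a minimal sketch, with the 0.4% supersaturation value chosen for illustration:

```python
def ccn_concentration(S, c, k):
    """Empirical CCN activation spectrum N = c * S**k,
    with S the supersaturation (%) and N in cm^-3."""
    return c * S**k

# Parameters reported for the Tongliao observations
clean = ccn_concentration(0.4, c=539, k=1.477)      # clean conditions
polluted = ccn_concentration(0.4, c=1615, k=1.42)   # polluted conditions
```

At any fixed supersaturation below 1%, the polluted-condition curve predicts roughly three times as many activated nuclei as the clean one.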
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harikrishnan, R.; Hareland, G.; Warpinski, N.R.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction, which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size correlation are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
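One common way to go from an angle of internal friction to a coefficient of earth at rest is Jaky's approximation K0 = 1 - sin(phi). This is a sketch under that assumption only; the paper's two models are more detailed, and the stress and pressure values below are hypothetical:

```python
import math

def k0_jaky(phi_deg):
    """Coefficient of earth at rest from the angle of internal friction,
    using Jaky's approximation K0 = 1 - sin(phi)."""
    return 1.0 - math.sin(math.radians(phi_deg))

def min_horizontal_stress(phi_deg, vertical_stress, pore_pressure):
    """Effective-stress estimate of the minimum principal in situ stress:
    sigma_h = K0 * (sigma_v - p) + p. A simplification of the models
    compared in the paper."""
    k0 = k0_jaky(phi_deg)
    return k0 * (vertical_stress - pore_pressure) + pore_pressure

# Hypothetical values: 30 deg friction angle, 60 MPa overburden,
# 25 MPa pore pressure
sigma_h = min_horizontal_stress(30.0, 60.0, 25.0)
```

A larger friction angle lowers K0 and hence the predicted minimum stress, which is why the two failure-envelope fits yield different stress profiles from the same cores.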
Practical implementation of channelized hotelling observers: effect of ROI size
NASA Astrophysics Data System (ADS)
Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-03-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
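A minimal end-to-end sketch of a CHO study on a synthetic ROI: Gabor channels reduce each image to a few channel outputs, and the Hotelling template in channel space yields the detectability index d'. The channel parameterization, phantom, and noise level here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def gabor_channel(size, freq, theta):
    """One cosine-phase Gabor channel; illustrative of the channel sets
    discussed in the paper, not their exact parameterization."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * (size / 6.0) ** 2))
    return (envelope * np.cos(2.0 * np.pi * freq * xr)).ravel()

def cho_dprime(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability index d', computed
    from channelized signal-present and signal-absent samples."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ channels.T
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ channels.T
    ds = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, ds)  # Hotelling template in channel space
    return float(np.sqrt(ds @ w))

# Tiny synthetic ROI study: a faint disk in white Gaussian noise
rng = np.random.default_rng(1)
roi = 32
yy, xx = np.mgrid[:roi, :roi]
disk = ((xx - roi // 2) ** 2 + (yy - roi // 2) ** 2 < 16).astype(float)
noise_imgs = rng.normal(0.0, 1.0, (200, roi, roi))
signal_imgs = rng.normal(0.0, 1.0, (200, roi, roi)) + 0.5 * disk
channels = np.stack([gabor_channel(roi, f, t)
                     for f in (0.05, 0.1, 0.2)
                     for t in (0.0, np.pi / 4, np.pi / 2)])
dprime = cho_dprime(signal_imgs, noise_imgs, channels)
```

Shrinking `roi` truncates the low-frequency channels, which is exactly the mechanism the study identifies as the source of inaccurate figures of merit below the minimum ROI size.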
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Sean Donovan; Beall, Patrick S; Miller, Mark L
2014-08-01
Through the SNL New Mexico Small Business Assistance (NMSBA) program, several Sandia engineers worked with the Environmental Restoration Group (ERG) Inc. to verify and validate a novel algorithm used to determine the scanning Critical Level (Lc) and Minimum Detectable Concentration (MDC) (or Minimum Detectable Areal Activity) for the 102F scanning system. Through the use of Monte Carlo statistical simulations, the algorithm mathematically demonstrates accuracy in determining the Lc and MDC when a nearest-neighbor averaging (NNA) technique is used. To empirically validate this approach, SNL prepared several spiked sources and ran a test with the ERG 102F instrument on a bare concrete floor known to have no radiological contamination other than background naturally occurring radioactive material (NORM). The tests conclude that the NNA technique increases the sensitivity (decreases the Lc and MDC) for high-density data maps that are obtained by scanning radiological survey instruments.
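The nearest-neighbor averaging idea can be sketched as a 3x3 smoothing of a count map: averaging neighboring cells shrinks the background fluctuation from which the critical level and MDC are derived. A minimal illustration under that interpretation, not ERG's actual algorithm:

```python
import numpy as np

def nearest_neighbor_average(counts):
    """Average each grid cell with its 8 nearest neighbors (3x3 mean),
    mirroring the map-smoothing idea behind the NNA technique."""
    padded = np.pad(counts, 1, mode="edge")
    out = np.zeros_like(counts, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + counts.shape[0], dj:dj + counts.shape[1]]
    return out / 9.0

# Hypothetical background count map (Poisson noise, no contamination)
rng = np.random.default_rng(0)
background = rng.poisson(100, size=(50, 50)).astype(float)
smoothed = nearest_neighbor_average(background)
# Smoothing shrinks the background standard deviation, which lowers
# the critical level Lc (and hence the MDC) derived from it.
```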
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and..., Takeoff Minimums and Obstacle DP, Amdt 2 Perham, MN, Perham Muni, RNAV (GPS) RWY 13, Orig Perham, MN, Perham Muni, RNAV (GPS) RWY 31, Amdt 1 Perham, MN, Perham Muni, Takeoff Minimums and Obstacle DP, Amdt 1...
Radiographic Findings in Revision Anterior Cruciate Ligament Reconstructions from the MARS Cohort
2013-01-01
The Multicenter ACL (anterior cruciate ligament) Revision Study (MARS) group was developed to investigate revision ACL reconstruction outcomes. An important part of this is obtaining and reviewing radiographic studies. The goal for this radiographic analysis is to establish radiographic findings for a large revision ACL cohort to allow comparison with future studies. The study was designed as a cohort study. Various established radiographic parameters were measured by three readers. These included sagittal and coronal femoral and tibial tunnel position, joint space narrowing, and leg alignment. Inter- and intraobserver comparisons were performed. Femoral sagittal position demonstrated that 42% were more than 40% anterior to the posterior cortex. On the sagittal tibia tunnel position, 49% demonstrated some impingement on full-extension lateral radiographs. Limb alignment averaged 43% medial to the medial edge of the tibial plateau. On the Rosenberg view (45-degree flexion view), the minimum joint space in the medial compartment averaged 106% of the opposite knee, but it ranged down to a minimum of 4.6%. Lateral compartment narrowing at its minimum on the Rosenberg view averaged 91.2% of the opposite knee, but it ranged down to a minimum of 0.0%. On the coronal view, verticality, as measured by the angle from the center of the tibial tunnel aperture to the center of the femoral tunnel aperture, measured 15.8 ± 6.9 degrees from vertical. This study represents the radiographic findings in the largest revision ACL reconstruction series ever assembled. Findings were generally consistent with those previously demonstrated in the literature. PMID:23404491
Does Change in the Arctic Sea Ice Indicate Climate Change? A Lesson Using Geospatial Technology
ERIC Educational Resources Information Center
Bock, Judith K.
2011-01-01
The Arctic sea ice has not since melted to the 2007 extent, but annual summer melt extents do continue to be less than the decadal average. Climate fluctuations are well documented by geologic records. Averages are usually based on a minimum of 10 years of averaged data. It is typical for fluctuations to occur from year to year and season to…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-13
... settled (minimum 4 MW) of metered load settled using WACM hourly pricing with no using WACM hourly pricing... than 7.5% (minimum pricing in no-penalty band. Customer 10 MW) of metered load settled using imbalance... or equal to 0.5 percent of its hourly average load, no Regulation Service charges will be assessed by...
J. N. Kochenderfer; G. W. Wendel; H. Clay Smith
1984-01-01
A "minimum-standard" forest truck road that provides efficient and environmentally acceptable access for several forest activities is described. Cost data are presented for eight of these roads constructed in the central Appalachians. The average cost per mile excluding gravel was $8,119. The range was $5,048 to $14,424. Soil loss was measured from several...
EnviroAtlas - Minimum Temperature 1950 - 2099 for the Conterminous United States
The EnviroAtlas Climate Scenarios were generated from NASA Earth Exchange (NEX) Downscaled Climate Projections (NEX-DCP30) ensemble averages (the average of over 30 available climate models) for each of the four representative concentration pathways (RCP) for the contiguous U.S. at 30 arc-second (approx. 800 m) spatial resolution. NEX-DCP30 mean monthly minimum temperature for the 4 RCPs (2.6, 4.5, 6.0, 8.5) was organized by season (Winter, Spring, Summer, and Fall) and annually for the years 2006-2099. Additionally, mean monthly minimum temperature for the ensemble average of all historic runs is organized similarly for the years 1950-2005. This dataset was produced by the US EPA to support research and online mapping activities related to EnviroAtlas. EnviroAtlas (https://www.epa.gov/enviroatlas) allows the user to interact with a web-based, easy-to-use, mapping application to view and analyze multiple ecosystem services for the contiguous United States. The dataset is available as downloadable data (https://edg.epa.gov/data/Public/ORD/EnviroAtlas) or as an EnviroAtlas map service. Additional descriptive information about each attribute in this dataset can be found in its associated EnviroAtlas Fact Sheet (https://www.epa.gov/enviroatlas/enviroatlas-fact-sheets).
Archer, Roger J.
1978-01-01
Minimum average 7-day, 10-year flows at 67 gaging stations and 173 partial-record stations in the Hudson River basin are given in tabular form. Variation of the 7-day, 10-year low flow from point to point in selected reaches, and the corresponding times of travel, are shown graphically for Wawayanda Creek, Wallkill River, Woodbury-Moodna Creek, and the Fishkill Creek basins. The 7-day, 10-year low flow for the Saw Kill basin, and estimates of the 7-day, 10-year low flow of the Roeliff Jansen Kill at Ancram and of Birch Creek at Pine Hill, are given. Summaries of discharge from Rondout and Ashokan Reservoirs, in Ulster County, are also included. Minimum average 7-day, 10-year flows for gaging stations with 10 years or more of record were determined by log-Pearson Type III computation; those for partial-record stations were developed by correlating discharge measurements made at the partial-record stations with discharge data from appropriate long-term gaging stations. The variation in low flows from point to point within the selected subbasins was estimated from available data and regional regression formulas. Time of travel at these flows in the four subbasins was estimated from available data and Boning's equations.
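The annual minimum 7-day average flow underlying the 7-day, 10-year statistic can be sketched as a moving-average minimum; the log-Pearson Type III fit over many annual minima is omitted, and the flow record below is hypothetical:

```python
import numpy as np

def min_7day_average(daily_flows):
    """Minimum 7-day average flow: the smallest mean discharge over any
    7 consecutive days of the record. Fitting these annual minima with a
    log-Pearson Type III distribution (as in the report) yields the
    7-day, 10-year low flow."""
    flows = np.asarray(daily_flows, dtype=float)
    window_means = np.convolve(flows, np.ones(7) / 7.0, mode="valid")
    return window_means.min()

# Hypothetical record: steady 10 cfs with a 7-day drought at 2 cfs
flows = [10.0] * 100 + [2.0] * 7 + [10.0] * 100
low = min_7day_average(flows)
```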
QUIET-TIME SUPRATHERMAL (∼0.1–1.5 keV) ELECTRONS IN THE SOLAR WIND
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Jiawei; Wang, Linghua; Zong, Qiugang
2016-03-20
We present a statistical survey of the energy spectrum of solar wind suprathermal (∼0.1–1.5 keV) electrons measured by the WIND 3DP instrument at 1 AU during quiet times at the minimum and maximum of solar cycles 23 and 24. After separating (beaming) strahl electrons from (isotropic) halo electrons according to their different behaviors in the angular distribution, we fit the observed energy spectrum of both strahl and halo electrons at ∼0.1–1.5 keV to a Kappa distribution function with an index κ and effective temperature T_eff. We also calculate the number density n and average energy E_avg of strahl and halo electrons by integrating the electron measurements between ∼0.1 and 1.5 keV. We find a strong positive correlation between κ and T_eff for both strahl and halo electrons, and a strong positive correlation between the strahl n and halo n, likely reflecting the nature of the generation of these suprathermal electrons. In both solar cycles, κ is larger at solar minimum than at solar maximum for both strahl and halo electrons. The halo κ is generally smaller than the strahl κ (except during the solar minimum of cycle 23). The strahl n is larger at solar maximum, but the halo n shows no difference between solar minimum and maximum. Both the strahl n and halo n have no clear association with the solar wind core population, but the density ratio between the strahl and halo roughly anti-correlates (correlates) with the solar wind density (velocity).
Alarcón, J A; Immink, M D; Méndez, L F
1989-12-01
The present study was conducted as part of an evaluation of the economic and nutritional effects of a crop diversification program for small-scale farmers in the Western highlands of Guatemala. Linear programming models are employed in order to obtain optimal combinations of traditional and non-traditional food crops under different ecological conditions that: a) provide minimum-cost diets for auto-consumption, and b) maximize net income and market availability of dietary energy. Data used were generated by means of an agroeconomic survey conducted in 1983 among 726 farming households. Food prices were obtained from the Institute of Agrarian Marketing; data on production costs, from the National Bank of Agricultural Development in Guatemala. The gestation periods for each crop were obtained from three different sources, and then averaged. The results indicated that the optimal cropping pattern for the minimum-cost diets for auto-consumption includes traditional foods (corn, beans, broad bean, wheat, potato), non-traditional foods (carrots, broccoli, beets) and foods of animal origin (milk, eggs). A significant number of farmers included in the sample did not have sufficient land availability to produce all foods included in the minimum-cost diet. Cropping patterns which maximize net incomes include only non-traditional foods: onions, carrots, broccoli and beets for farmers in the low highland areas, and radish, broccoli, cauliflower and carrots for farmers in the higher parts. Optimal cropping patterns which maximize market availability of dietary energy include traditional and non-traditional foods; for farmers in the lower areas: wheat, corn, beets, carrots and onions; for farmers in the higher areas: potato, wheat, radish, carrots and cabbage.
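A minimum-cost diet is a classic linear program: minimize total price subject to nutrient constraints. The following is a toy two-crop sketch with hypothetical prices and nutrient contents, solved by brute-force vertex enumeration rather than the study's full LP models:

```python
from itertools import combinations

def solve_min_cost_diet(cost, A, b):
    """Tiny 2-variable linear program solved by vertex enumeration:
    minimize cost . x subject to A x >= b and x >= 0.
    Illustrative only; real diet problems use a proper LP solver."""
    # Candidate boundary lines: the nutrient constraints as equalities,
    # plus the axes x1 = 0 and x2 = 0.
    lines = [(A[i][0], A[i][1], b[i]) for i in range(len(A))]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best = None
    for (a1, b1_, c1), (a2, b2_, c2) in combinations(lines, 2):
        det = a1 * b2_ - a2 * b1_
        if abs(det) < 1e-12:
            continue  # parallel lines, no vertex
        x = (c1 * b2_ - c2 * b1_) / det
        y = (a1 * c2 - a2 * c1) / det
        if x < -1e-9 or y < -1e-9:
            continue  # violates non-negativity
        if all(A[i][0] * x + A[i][1] * y >= b[i] - 1e-9 for i in range(len(A))):
            val = cost[0] * x + cost[1] * y
            if best is None or val < best[0]:
                best = (val, x, y)
    return best

# Hypothetical prices and nutrient contents (per kg) for corn and beans
cost = [0.5, 1.2]                       # price per kg
A = [[3600.0, 3400.0], [90.0, 220.0]]   # kcal/kg, protein g/kg
b = [2200.0, 60.0]                      # daily energy and protein minima
total_cost, corn_kg, beans_kg = solve_min_cost_diet(cost, A, b)
```

The optimum sits at a vertex of the feasible region, here a mix of both crops; changing relative prices moves the optimum between vertices, which is how the models in the study switch crops in and out of the diet.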
Exospheric hydrogen above St-Santin /France/
NASA Technical Reports Server (NTRS)
Derieux, A.; Lejeune, G.; Bauer, P.
1975-01-01
The temperature and hydrogen concentration of the exosphere were determined using incoherent scatter measurements performed above St. Santin from 1969 to 1972. The hydrogen concentration was deduced from measurements of the number density of positive hydrogen and oxygen ions. A statistical analysis is given of the hydrogen concentration as a function of the exospheric temperature, and the diurnal variation of the hydrogen concentration is investigated for a few selected days of good-quality observation. The data averaged with respect to the exospheric temperature, without consideration of the local time, exhibit a distribution consistent with a constant effective Jeans escape flux of about 9 x 10^7 cm^-2 s^-1. The local time variation exhibits a maximum-to-minimum concentration ratio of at least 3.5.
Optical security system for the protection of personal identification information.
Doh, Yang-Hoi; Yoon, Jong-Soo; Choi, Kyung-Hyun; Alam, Mohammad S
2005-02-10
A new optical security system for the protection of personal identification information is proposed. First, authentication of the encrypted personal information is carried out by primary recognition of a personal identification number (PIN) with the proposed multiplexed minimum average correlation energy phase-encrypted (MMACE_p) filter. The MMACE_p filter, synthesized with phase-encrypted training images, can increase the discrimination capability and prevent the leak of personal identification information. After the PIN is recognized, speedy authentication of personal information can be achieved through one-to-one optical correlation by means of the optical wavelet filter. The possibility of information counterfeiting can be significantly decreased with the double-identification process. Simulation results demonstrate the effectiveness of the proposed technique.
75 FR 39500 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-09
... with ``Badge and vehicle control records that at a minimum include; name, Social Security Number (SSN... system: Badge and vehicle control records that at a minimum include; name, Social Security Number (SSN... maintenance of the system: 10 U.S.C. 8013, Secretary of the Air Force, Powers and Duties; Department of...
2015-12-15
Atmospheric behavior from the ground to space under solar minimum and solar maximum conditions (Contract No. N00173-12-1-G010, NRL; BAA 76-11-01). Project summary: Dynamical response to solar radiative forcing is a crucial and poorly understood mechanism. We propose to study the impacts of large dynamical events.
NASA Technical Reports Server (NTRS)
Perovich, D.; Gerland, S.; Hendricks, S.; Meier, Walter N.; Nicolaus, M.; Richter-Menge, J.; Tschudi, M.
2013-01-01
During 2013, Arctic sea ice extent remained well below normal, but the September 2013 minimum extent was substantially higher than the record-breaking minimum in 2012. Nonetheless, the minimum was still much lower than normal, and the long-term trend in Arctic September extent is -13.7% per decade relative to the 1981-2010 average. The less extreme conditions this year compared to 2012 were due to cooler temperatures and wind patterns that favored retention of ice through the summer. Sea ice thickness and volume remained near record-low levels, though indications are of slightly thicker ice compared to the record low of 2012.
Upscaling species richness and abundances in tropical forests
Tovo, Anna; Suweis, Samir; Formentin, Marco; Favretti, Marco; Volkov, Igor; Banavar, Jayanth R.; Azaele, Sandro; Maritan, Amos
2017-01-01
The quantification of tropical tree biodiversity worldwide remains an open and challenging problem. More than two-fifths of the world's trees can be found either in tropical or in subtropical forests, but only ≈0.000067% of species identities are known. We introduce an analytical framework that provides robust and accurate estimates of species richness and abundances in biodiversity-rich ecosystems, as confirmed by tests performed on both in silico–generated and real forests. Our analysis shows that the approach outperforms other methods. In particular, we find that upscaling methods based on the log-series species distribution systematically overestimate the number of species and the abundances of the rare species. We finally apply our new framework to 15 empirical tropical forest plots and quantify the minimum percentage cover that should be sampled to achieve a given average confidence interval in the upscaled estimate of biodiversity. Our theoretical framework confirms that the forests studied comprise a large number of rare or hyper-rare species. This is a signature of critical-like behavior of species-rich ecosystems and can provide a buffer against extinction. PMID:29057324
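Upscaling from a sampled plot with Fisher's log-series, S = alpha * ln(1 + N/alpha), illustrates the family of methods the study finds to overestimate richness; the alpha value below is hypothetical:

```python
import math

def logseries_richness(alpha, n_individuals):
    """Expected species richness under Fisher's log-series,
    S = alpha * ln(1 + N / alpha). Upscaling methods built on this
    distribution are the ones the study reports to overestimate the
    number and abundances of rare species."""
    return alpha * math.log(1.0 + n_individuals / alpha)

# Fit alpha on a sampled plot, then extrapolate to the whole forest
alpha = 35.0                                   # hypothetical Fisher's alpha
sampled = logseries_richness(alpha, 10_000)    # richness in the plot
upscaled = logseries_richness(alpha, 1_000_000)  # extrapolated richness
```

Because S grows without bound as N increases, the log-series keeps adding ever-rarer species at scale, which is the mechanism behind the systematic overestimation noted in the abstract.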
YBYRÁ facilitates comparison of large phylogenetic trees.
Machado, Denis Jacob
2015-07-01
The number and size of tree topologies that are being compared by phylogenetic systematists is increasing due to technological advancements in high-throughput DNA sequencing. However, we still lack tools to facilitate comparison among phylogenetic trees with a large number of terminals. The "YBYRÁ" project integrates software solutions for data analysis in phylogenetics. It comprises tools for (1) topological distance calculation based on the number of shared splits or clades, (2) sensitivity analysis and automatic generation of sensitivity plots and (3) clade diagnoses based on different categories of synapomorphies. YBYRÁ also provides (4) an original framework to facilitate the search for potential rogue taxa based on how much they affect average matching split distances (using MSdist). YBYRÁ facilitates comparison of large phylogenetic trees and outperforms competing software in terms of usability and time efficiency, especially for large data sets. The programs that comprise this toolkit are written in Python; hence they do not require installation and have minimal dependencies. The entire project is available under an open-source licence at http://www.ib.usp.br/grant/anfibios/researchSoftware.html.
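Topological distance based on shared splits can be sketched as the symmetric difference of the two trees' bipartition sets (the Robinson-Foulds idea); an illustrative re-implementation, not YBYRÁ's code:

```python
def split_distance(splits_a, splits_b):
    """Topological distance as the number of splits (bipartitions) NOT
    shared between two trees on the same taxa -- the Robinson-Foulds
    idea underlying shared-split comparisons."""
    a = {frozenset(s) for s in splits_a}
    b = {frozenset(s) for s in splits_b}
    return len(a ^ b)  # size of the symmetric difference

# Splits written as one side of each bipartition of taxa {A,B,C,D,E}
tree1 = [{"A", "B"}, {"D", "E"}]
tree2 = [{"A", "B"}, {"C", "D"}]
d = split_distance(tree1, tree2)
```

Averaging such distances over many trees, and measuring how much a taxon's removal changes the average, is the kind of computation behind the rogue-taxon search described above.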
The volume-outcome relationship and minimum volume standards--empirical evidence for Germany.
Hentschker, Corinna; Mennicken, Roman
2015-06-01
For decades there has been an ongoing discussion about the quality of hospital care, leading, among other things, to the introduction of minimum volume standards in various countries. In this paper, we analyze the volume-outcome relationship for patients with intact abdominal aortic aneurysm and hip fracture. We define hypothetical minimum volume standards for both conditions and assess the consequences for access to hospital services in Germany. The results show clearly that patients treated in hospitals with a higher case volume have on average a significantly lower probability of death for both conditions. Furthermore, we show that the hypothetical minimum volume standards do not compromise overall access, measured by changes in travel times. Copyright © 2014 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2010 CFR
2010-07-01
... performance test. v. If you use a venturi scrubber, maintaining the daily average pressure drop across the.... Each new or reconstructed flame lamination affected source using a scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b...
43 CFR 418.18 - Diversions at Derby Dam.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Dam must be managed to maintain minimum terminal flow to Lahontan Reservoir or the Carson River except... achieve an average terminal flow of 20 cfs or less during times when diversions to Lahontan Reservoir are not allowed (the flows must be averaged over the total time diversions are not allowed in that...
Code of Federal Regulations, 2011 CFR
2011-07-01
.... Each new or reconstructed flame lamination affected source using a scrubber a. Maintain the daily average scrubber inlet liquid flow rate above the minimum value established during the performance test. b. Maintain the daily average scrubber effluent pH within the operating range established during the...
Hazratian, Teimour; Rassi, Yavar; Oshaghi, Mohammad Ali; Yaghoobi-Ershadi, Mohammad Reza; Fallah, Esmael; Shirzadi, Mohammad Reza; Rafizadeh, Sina
2011-08-01
To investigate species composition, density, accumulated degree-days and diversity of sand flies during April to October 2010 in Azarshahr district, a new focus of visceral leishmaniasis in northwestern Iran. Sand flies were collected using sticky traps biweekly and were stored in 96% ethanol. All specimens were mounted in Puri's medium for species identification using valid keys of sand flies. The density was calculated by the formula: number of specimens/m² of sticky traps and number of specimens/number of traps. The daily degree-day was calculated as (maximum temperature + minimum temperature)/2 − minimum threshold. Diversity indices of the collected sand flies within different villages were estimated by the Shannon-Weaver formula H' = −∑_{i=1}^{S} P_i ln P_i. In total, 5 557 specimens comprising 16 species (14 Phlebotomus and 2 Sergentomyia) were identified. The activity of the species extended from April to October. Common sand flies in resting places were Phlebotomus papatasi, Phlebotomus sergenti and Phlebotomus mongolensis. The monthly average density was 37.6, 41.1, 40.23, 30.38 and 30.67 for Almalodash, Jaragil, Segaiesh, Amirdizaj and Germezgol villages, respectively. Accumulated degree-days from early January to late May were approximately 289. The minimum threshold temperature for calculating accumulated degree-days was 17.32°. According to the Shannon-Weaver index (H'), the diversity of sand flies within the study area was estimated as 0.917, 1.867, 1.339, 1.673, and 1.562 in Almalodash, Jaragil, Segaiesh, Amirdizaj and Germezgol villages, respectively. This study is the first detailed research on species composition, density, accumulated degree-days and diversity of sand flies in an endemic focus of visceral leishmaniasis in Azarshahr district. The population dynamics of sand flies in Azarshahr district were greatly affected by climatic factors.
According to this study, the highest activity of the collected sand fly species occurs in the third week of August. This could help health authorities predict the period of maximum risk of visceral leishmaniasis transmission and implement control programs. Copyright © 2011 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
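The two quantities defined in this abstract can be sketched in Python. Note this is an illustrative reading, not the authors' code: the Shannon-Weaver index is written in its standard natural-log form with the leading minus sign, and the degree-day expression follows the (Tmax + Tmin)/2 - threshold rule quoted above, floored at zero for days below the threshold.

```python
import math

def degree_day(t_max, t_min, threshold):
    """Daily degree-days: mean of the daily max and min temperature
    minus the minimum development threshold, floored at 0."""
    return max((t_max + t_min) / 2.0 - threshold, 0.0)

def shannon_diversity(counts):
    """Shannon-Weaver index H' = -sum(p_i * ln p_i), where p_i is the
    proportion of species i among all collected specimens."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

For example, two equally abundant species give H' = ln 2 ≈ 0.693, and a day with a 30/20 °C range against the abstract's 17.32 °C threshold contributes 7.68 degree-days.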
On Making a Distinguished Vertex Minimum Degree by Vertex Deletion
NASA Astrophysics Data System (ADS)
Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes
For directed and undirected graphs, we study the problem of making a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". In contrast, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding non-existence results when replacing vertex cover number by treewidth or feedback vertex set number.
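As a toy illustration of the undirected problem (this is not the paper's algorithm — the paper proves hardness and kernelization results, not a search procedure), a brute-force sketch that tries deletion sets in increasing size until the distinguished vertex is the unique minimum-degree vertex; `adj` is a hypothetical adjacency map from each vertex to its set of neighbors:

```python
from itertools import combinations

def min_deletions_for_unique_min_degree(adj, v):
    """Smallest number of vertex deletions (never deleting v itself) after
    which v is the unique minimum-degree vertex. Exponential brute force,
    for intuition only; the problem is NP-hard in general."""
    others = [u for u in adj if u != v]
    for k in range(len(others) + 1):
        for deleted in combinations(others, k):
            kept = set(adj) - set(deleted)
            # degrees in the induced subgraph on the kept vertices
            deg = {u: len(adj[u] & kept) for u in kept}
            if all(deg[v] < d for u, d in deg.items() if u != v):
                return k
    return None
```

On the path v-a-b-c, deleting the single vertex a isolates v (degree 0) while b and c keep degree 1, so one deletion suffices.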
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cilla, Savino, E-mail: savinocilla@gmail.com; Deodato, Francesco; Macchia, Gabriella
We reported our initial experience in using Elekta volumetric modulated arc therapy (VMAT) and an anatomy-based treatment planning system (TPS) for single high-dose radiosurgery (SRS-VMAT) of liver metastases. This study included a cohort of 12 patients treated with a 26-Gy single fraction. Single-arc VMAT plans were generated with the Ergo++ TPS. The prescription isodose surface (IDS) was selected to fulfill the following two criteria: 95% of the planning target volume (PTV) reached 100% of the prescription dose, and 99% of the PTV reached a minimum of 90% of the prescription dose. A 1-mm multileaf collimator (MLC) block margin was added around the PTV. For a comparison of dose distributions with literature data, several conformity indexes (conformity index [CI], conformation number [CN], and gradient index [GI]) were calculated. Treatment efficiency and pretreatment dosimetric verification were assessed. Early clinical data were also reported. Our results showed that target and organ-at-risk objectives were met for all patients. Mean and maximum doses to PTVs were on average 112.9% and 121.5% of the prescribed dose, respectively. A very high degree of dose conformity was obtained, with CI, CN, and GI average values equal to 1.29, 0.80, and 3.63, respectively. The beam-on time was on average 9.3 minutes, i.e., 0.36 min/Gy. The mean number of monitor units was 3162, i.e., 121.6 MU/Gy. Pretreatment verification (3%-3 mm) showed an optimal agreement with calculated values; the mean γ value was 0.27, and 98.2% of measured points had γ < 1. With a median follow-up of 16 months, complete response was observed in 12/14 (86%) lesions; partial response was observed in 2/14 (14%) lesions. No radiation-induced liver disease (RILD), duodenal ulceration, esophagitis, or gastric hemorrhage was observed in any patient.
In conclusion, this analysis demonstrated the feasibility and the appropriateness of high-dose single-fraction SRS-VMAT in liver metastases performed with Elekta VMAT and the Ergo++ TPS. Preliminary clinical outcomes showed a high rate of local control and minimal incidence of acute toxicity.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sampling procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
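The two sampling designs compared above follow textbook survey-sampling formulas; a minimal sketch of both rules (these are the classical results, not the paper's field-specific numbers; the 95% z-value and the stratum inputs are illustrative):

```python
import math

def simple_random_sample_size(std_dev, margin, z=1.96):
    """Classical sample size for estimating a mean to within +/- margin,
    at the confidence implied by z (1.96 ~ 95%), from an observed SD."""
    return math.ceil((z * std_dev / margin) ** 2)

def neyman_allocation(n_total, stratum_sizes, stratum_stds):
    """Optimal (Neyman) allocation for stratified sampling: samples per
    stratum proportional to stratum size times stratum standard deviation."""
    weights = [N * s for N, s in zip(stratum_sizes, stratum_stds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]
```

The stratified rule concentrates samples in large or highly variable strata (e.g., shallow, variable soil layers), which is why it can achieve a target precision with fewer total samples than simple random sampling.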
Semi-annual Sq-variation in solar activity cycle
NASA Astrophysics Data System (ADS)
Pogrebnoy, V.; Malosiev, T.
The peculiarities of the semi-annual variation over the solar activity cycle have been studied. Data from observatories with long observational series, located in different latitude zones, were used. The following observatories were selected: Huancayo (magnetic equator), from 1922 to 1959; Apia (low latitudes), from 1912 to 1961; Moscow (middle latitudes), from 1947 to 1965. Based on the hourly values of the H-component, the average monthly diurnal amplitudes (the difference between midday and midnight values), according to the five international quiet days, were computed. The results were compared with R (relative sunspot numbers) in the ranges 0-30R, 40-100R, and 140-190R. It was shown that the amplitude of the semi-annual variation increases with R, from minimum to maximum values, on average by 45%. At the equatorial Huancayo observatory, the semi-annual Sq(H) variation appears especially clearly: its maxima occur around the equinoxes (March-April, September-October) and its minima around the solstices (June-July, December-January). At low (Apia) and middle (Moscow) latitudes, the character of the semi-annual variation is somewhat different: it appears during the equinoxes, but considerably more weakly than at the equator. Moreover, as R grows, the semi-annual variation appears against a background of the annual variation, in the form of secondary peaks (maximum in June). At observatories located at low and middle latitudes, these secondary peaks become more appreciable with increasing R (March-April and September-October). During periods of low solar activity, they are insignificant. This work was carried out with support from the International Scientific and Technology Center (Project #KR-214).
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well-known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
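A minimal sketch of the Tikhonov-regularized minimum-norm estimate for an underdetermined linear system Lx = b, of the kind described above (the dual-form expression below is the standard one; the matrix L, data b, and regularization weight lam here are illustrative toy values, not MEG data):

```python
import numpy as np

def tikhonov_min_norm(L, b, lam):
    """Regularized minimum-norm solution x = L^T (L L^T + lam I)^{-1} b
    for an underdetermined system L x = b; lam > 0 trades data fit for
    stability against measurement noise (lam = 0 recovers the pure
    minimum-norm solution)."""
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), b)
```

With two measurements and three unknowns, the unmeasured component is set to zero (the minimum-norm choice) and increasing lam shrinks the measured components toward zero.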
High Tensile Strength Amalgams for In-Space Fabrication and Repair
NASA Technical Reports Server (NTRS)
Grugel, Richard N.
2006-01-01
Amalgams are well known for their use in dental practice as a tooth filling material. They have a number of useful attributes that include room temperature fabrication, corrosion resistance, dimensional stability, and very good compressive strength. These properties well serve dental needs but, unfortunately, amalgams have extremely poor tensile strength, a feature that severely limits other potential applications. Improved material properties (strength and temperature) of amalgams may have application to the freeform fabrication of repairs or parts that might be necessary during an extended space mission. Advantages would include, but are not limited to: the ability to produce complex parts, a minimum number of processing steps, minimum crew interaction, high yield - minimum wasted material, reduced gravity compatibility, minimum final finishing, safety, and minimum power consumption. The work presented here shows how the properties of amalgams can be improved by changing particle geometries in conjunction with novel engineering metals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, L.; Landi, E.; Gibson, S. E., E-mail: lzh@umich.edu
2013-08-20
Since the unusually prolonged and weak solar minimum between solar cycles 23 and 24 (2008-2010), the sunspot number is smaller and the overall morphology of the Sun's magnetic field is more complicated (i.e., less of a dipole component and more of a tilted current sheet) compared with the same minimum and ascending phases of the previous cycle. Nearly 13 yr after the last solar maximum (~2000), the monthly sunspot number is currently only at half the highest value of the past cycle's maximum, whereas the polar magnetic field of the Sun is reversing (north pole first). These circumstances make it timely to consider alternatives to the sunspot number for tracking the Sun's magnetic cycle and measuring its complexity. In this study, we introduce two novel parameters, the standard deviation (SD) of the latitude of the heliospheric current sheet (HCS) and the integrated slope (SL) of the HCS, to evaluate the complexity of the Sun's magnetic field and track the solar cycle. SD and SL are obtained from the magnetic synoptic maps calculated by a potential field source surface model. We find that SD and SL are sensitive to the complexity of the HCS: (1) they have low values when the HCS is flat at solar minimum, and high values when the HCS is highly tilted at solar maximum; (2) they respond to the topology of the HCS differently, as a higher SD value indicates that a larger part of the HCS extends to higher latitude, while a higher SL value implies that the HCS is wavier; (3) they are good indicators of magnetically anomalous cycles. Based on the comparison between SD and SL with the normalized sunspot number in the most recent four solar cycles, we find that in 2011 the solar magnetic field had attained a similar complexity as compared to the previous maxima.
In addition, in the ascending phase of cycle 24, SD and SL in the northern hemisphere were on average much greater than in the southern hemisphere, indicating a more tilted and wavier HCS in the north than in the south, associated with the early reversal of the polar magnetic field in the north relative to the south.
Prinz, P; Ronacher, B
2002-08-01
The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation with the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.
Fagan, William F; Lutscher, Frithjof
2006-04-01
Spatially explicit models for populations are often difficult to tackle mathematically and, in addition, require detailed data on individual movement behavior that are not easily obtained. An approximation known as the "average dispersal success" provides a tool for converting complex models, which may include stage structure and a mechanistic description of dispersal, into a simple matrix model. This simpler matrix model has two key advantages. First, it is easier to parameterize from the types of empirical data typically available to conservation biologists, such as survivorship, fecundity, and the fraction of juveniles produced in a study area that also recruit within the study area. Second, it is more amenable to theoretical investigation. Here, we use the average dispersal success approximation to develop estimates of the critical reserve size for systems comprising single patches or simple metapopulations. The quantitative approach can be used for both plants and animals; however, to provide a concrete example of the technique's utility, we focus on a special case pertinent to animals. Specifically, for territorial animals, we can characterize such an estimate of minimum viable habitat area in terms of the number of home ranges that the reserve contains. Consequently, the average dispersal success approximation provides a framework through which home range size, natal dispersal distances, and metapopulation dynamics can be linked to reserve design. We briefly illustrate the approach using empirical data for the swift fox (Vulpes velox).
Kumari, Kavita; Lal, Madan; Saxena, Sangeeta
2017-10-01
An efficient, simple and commercially applicable protocol for rapid micropropagation of sugarcane has been designed using variety Co 05011. Pretreatment of shoot tip explants with thidiazuron (TDZ) induced high-frequency regeneration of shoot cultures with an improved multiplication ratio. The highest frequency (80%) of shoot initiation was obtained in explants pretreated with 10 mg/l of TDZ. A maximum of 65% of shoot cultures could be established from explants pretreated with TDZ, compared with a minimum of 40% establishment in explants without pretreatment. The explants pretreated with 10 mg/l of TDZ required a minimum of 40 days for the establishment of shoot cultures, compared with 60 days for untreated explants. The highest average number of shoots per culture (19.1) was obtained from explants pretreated with 10 mg/l of TDZ, indicating the highest multiplication ratio (1:6). The highest rooting (over 94%) was obtained in shoots regenerated from pretreated explants on ½-strength MS medium containing 5.0 mg/l of NAA and 50 g/l of sucrose within 15 days. A higher number of tillers per clump (15.3) was counted in plants regenerated from pretreated explants than in untreated ones (10.9 tillers/clump) under field conditions, three months after transplantation. Molecular analysis using RAPD and DAMD markers suggested that the pretreatment of explants with TDZ did not adversely affect the genetic stability of regenerated plants and maintained high clonal purity.
Effects of tidal current phase at the junction of two straits
Warner, J.; Schoellhamer, D.; Burau, J.; Schladow, G.
2002-01-01
Estuaries typically have a monotonic increase in salinity from freshwater at the head of the estuary to ocean water at the mouth, creating a consistent direction for the longitudinal baroclinic pressure gradient. However, Mare Island Strait in San Francisco Bay has a local salinity minimum created by the phasing of the currents at the junction of Mare Island and Carquinez Straits. The salinity minimum creates converging baroclinic pressure gradients in Mare Island Strait. Equipment was deployed at four stations in the straits for 6 months from September 1997 to March 1998 to measure tidal variability of velocity, conductivity, temperature, depth, and suspended sediment concentration. Analysis of the measured time series shows that on a tidal time scale in Mare Island Strait, the landward and seaward baroclinic pressure gradients in the local salinity minimum interact with the barotropic gradient, creating regions of enhanced shear in the water column during the flood and reduced shear during the ebb. On a tidally averaged time scale, baroclinic pressure gradients converge on the tidally averaged salinity minimum and drive a converging near-bed and diverging surface current circulation pattern, forming a "baroclinic convergence zone" in Mare Island Strait. Historically large sedimentation rates in this area are attributed to the convergence zone.
Practical implementation of Channelized Hotelling Observers: Effect of ROI size
Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-01-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO’s performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO’s performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies. PMID:28943699
Atkins, Charisma Y; Thomas, Timothy K; Lenaker, Dane; Day, Gretchen M; Hennessy, Thomas W; Meltzer, Martin I
2016-06-01
We conducted a cost-effectiveness analysis of five specific dental interventions to help guide resource allocation. We developed a spreadsheet-based tool, from the healthcare payer perspective, to evaluate the cost effectiveness of specific dental interventions that are currently used among Alaska Native children (6-60 months). Interventions included: water fluoridation, dental sealants, fluoride varnish, tooth brushing with fluoride toothpaste, and conducting initial dental exams on children <18 months of age. We calculated the cost-effectiveness ratio of implementing the proposed interventions to reduce the number of carious teeth and full mouth dental reconstructions (FMDRs) over 10 years. A total of 322 children received caries treatments completed by a dental provider in the dental chair, while 161 children received FMDRs completed by a dental surgeon in an operating room. The average cost of treating dental caries in the dental chair was $1,467 (∼$258,000 per year), while the cost of treating FMDRs was $9,349 (∼$1.5 million per year). All interventions were shown to prevent caries and FMDRs; however, tooth brushing prevented the greatest number of caries at minimum and maximum effectiveness, with 1,433 and 1,910, respectively. Tooth brushing also prevented the greatest number of FMDRs (159 and 211) at minimum and maximum effectiveness. All of the dental interventions evaluated were shown to produce cost savings. However, the level of that cost saving is dependent on the intervention chosen. © 2016 The Authors. Journal of Public Health Dentistry published by Wiley Periodicals, Inc. on behalf of American Association of Public Health Dentistry.
46 CFR 169.549 - Ring lifebuoys and water lights.
Code of Federal Regulations, 2014 CFR
2014-10-01
... chapter and be international orange in color. (2) Each water light must be approved under subpart 161.010... 46 Shipping 7 2014-10-01 2014-10-01 false Ring lifebuoys and water lights. 169.549 Section 169.549... lights. (a)(1) The minimum number of life buoys and the minimum number to which water lights must be...
46 CFR 169.549 - Ring lifebuoys and water lights.
Code of Federal Regulations, 2012 CFR
2012-10-01
... chapter and be international orange in color. (2) Each water light must be approved under subpart 161.010... 46 Shipping 7 2012-10-01 2012-10-01 false Ring lifebuoys and water lights. 169.549 Section 169.549... lights. (a)(1) The minimum number of life buoys and the minimum number to which water lights must be...
A soil water based index as a suitable agricultural drought indicator
NASA Astrophysics Data System (ADS)
Martínez-Fernández, J.; González-Zamora, A.; Sánchez, N.; Gumuzzio, A.
2015-03-01
Currently, the availability of soil water databases is increasing worldwide. The presence of a growing number of long-term soil moisture networks around the world and the impressive progress of remote sensing in recent years have allowed the scientific community and, in the near future, a diverse group of users to obtain precise and frequent soil water measurements. Therefore, it is reasonable to consider soil water observations as a potential approach for monitoring agricultural drought. In the present work, a new approach to define the soil water deficit index (SWDI) is analyzed to use a soil water series for drought monitoring. In addition, simple and accurate methods using a soil moisture series solely to obtain the soil water parameters (field capacity and wilting point) needed for calculating the index are evaluated. The application of the SWDI in an agricultural area of Spain presented good results at both daily and weekly time scales when compared to two climatic water deficit indicators (average correlation coefficient, R, 0.6) and to agricultural production. The long-term minimum, the growing season minimum and the 5th percentile of the soil moisture series are good estimators (coefficient of determination, R2, 0.81) for the wilting point. The minimum of the maximum values of the growing season is the best estimator (R2, 0.91) for field capacity. The use of these types of tools for drought monitoring can aid the better management of agricultural lands and water resources, mainly under the current scenario of climate uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filwett, R. J.; Desai, M. I.; Dayeh, M. A.
2017-03-20
We have analyzed the ∼20–320 keV nucleon⁻¹ suprathermal (ST) heavy ion abundances in 41 corotating interaction regions (CIRs) observed by the Wind spacecraft from 1995 January to 2008 December. Our results are: (1) the CIR Fe/CNO and NeS/CNO ratios vary with the sunspot number, with values being closer to average solar energetic particle event values during solar maxima and lower than nominal solar wind values during solar minima. The physical mechanism responsible for the depleted abundances during solar minimum remains an open question. (2) The Fe/CNO increases with energy in the 6 events that occurred during solar maximum, while no such trends are observed for the 35 events during solar minimum. (3) The Fe/CNO shows no correlation with the average solar wind speed. (4) The Fe/CNO is well correlated with the corresponding upstream ∼20–320 keV nucleon⁻¹ Fe/CNO and not with the solar wind Fe/O measured by ACE in 31 events. Using the correlations between the upstream ∼20–40 keV nucleon⁻¹ Fe/CNO and the ∼20–320 keV nucleon⁻¹ Fe/CNO in CIRs, we estimate that, on average, the ST particles traveled ∼2 au along the nominal Parker spiral field line, which corresponds to upper limits for the radial distance of the source or acceleration location of ∼1 au beyond Earth orbit. Our results are consistent with those obtained from recent surveys, and confirm that CIR ST heavy ions are accelerated more locally, and are at odds with the traditional viewpoint that CIR ions seen at 1 au are bulk solar wind ions accelerated between 3 and 5 au.
Duration of the Arctic sea ice melt season: Regional and interannual variability, 1979-2001
Belchansky, G.I.; Douglas, David C.; Platonov, Nikita G.
2004-01-01
Melt onset dates, freeze onset dates, and melt season duration were estimated over Arctic sea ice, 1979–2001, using passive microwave satellite imagery and surface air temperature data. Sea ice melt duration for the entire Northern Hemisphere varied from a 104-day minimum in 1983 and 1996 to a 124-day maximum in 1989. Ranges in melt duration were highest in peripheral seas, numbering 32, 42, 44, and 51 days in the Laptev, Barents-Kara, East Siberian, and Chukchi Seas, respectively. In the Arctic Ocean, average melt duration varied from a 75-day minimum in 1987 to a 103-day maximum in 1989. On average, melt onset in annual ice began 10.6 days earlier than perennial ice, and freeze onset in perennial ice commenced 18.4 days earlier than annual ice. Average annual melt dates, freeze dates, and melt durations in annual ice were significantly correlated with seasonal strength of the Arctic Oscillation (AO). Following high-index AO winters (January–March), spring melt tended to be earlier and autumn freeze later, leading to longer melt season durations. The largest increases in melt duration were observed in the eastern Siberian Arctic, coincident with cyclonic low pressure and ice motion anomalies associated with high-index AO phases. Following a positive AO shift in 1989, mean annual melt duration increased 2–3 weeks in the northern East Siberian and Chukchi Seas. Decreasing correlations between consecutive-year maps of melt onset in annual ice during 1979–2001 indicated increasing spatial variability and unpredictability in melt distributions from one year to the next. Despite recent declines in the winter AO index, recent melt distributions did not show evidence of reestablishing spatial patterns similar to those observed during the 1979–88 low-index AO period. Recent freeze distributions have become increasingly similar to those observed during 1979–88, suggesting a recurrent spatial pattern of freeze chronology under low-index AO conditions.
Prediction of obliteration after gamma knife surgery for cerebral arteriovenous malformations.
Karlsson, B; Lindquist, C; Steiner, L
1997-03-01
To define the factors of importance for the obliteration of cerebral arteriovenous malformations (AVMs), thus making a prediction of the probability of obliteration possible. In 945 AVMs of a series of 1319 patients treated with the gamma knife during 1970 to 1990, the relationship between patient, AVM, and treatment parameters on the one hand and the obliteration of the nidus on the other was analyzed. The obliteration rate increased with both the minimum (lowest periphery) and the average dose and decreased with increased AVM volume. The minimum dose to the AVM was the decisive dose factor for the treatment result. The higher the minimum dose, the higher the chance for total obliteration. The curve illustrating this relation increased logarithmically to a value of 87%. A higher average dose shortened the latency to AVM obliteration. For the obliterated cases, the larger the malformation, the lower the minimum dose used. This prompted us to relate the obliteration rate to the product minimum dose × (AVM volume)^(1/3) (the K index). The obliteration rate increased linearly with the K index up to a value of approximately 27, and for higher K values, the obliteration rate had a constant value of approximately 80%. For the group of 273 cases treated with a minimum dose of at least 25 Gy, the obliteration rate at the study end point (defined as 2-yr latency) was 80% (95% confidence interval = 75-85%). If obliterations that occurred beyond the end point are included, the obliteration rate increased to 85% (81-89%). The probability of obliteration of AVMs after gamma knife surgery is related both to the lowest dose to the AVMs and the AVM volume, and it can be predicted using the K index.
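The K index described above is simple enough to state in code; the units below (Gy for dose, cm³ for volume) are an assumption for illustration, since the abstract does not specify them:

```python
def k_index(min_dose, volume):
    """K index = minimum peripheral dose times the cube root of AVM volume.
    Per the abstract, the obliteration rate rose roughly linearly up to
    K ~ 27 and plateaued near 80% for higher values."""
    return min_dose * volume ** (1.0 / 3.0)
```

For example, a 25-Gy minimum dose to an 8-cm³ nidus gives K = 25 × 2 = 50, well into the plateau region of the reported dose-response curve.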
Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.
Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E
2013-12-01
Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.
Systematic investigation of NLTE phenomena in the limit of small departures from LTE
NASA Astrophysics Data System (ADS)
Libby, S. B.; Graziani, F. R.; More, R. M.; Kato, T.
1997-04-01
In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near-equilibrium (LTE), high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures from LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near-equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchhoff's law. We envisage applying our strategy to a number of test problems which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models.
Simultaneous multislice refocusing via time optimal control.
Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf
2018-02-09
We present the joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging sequences or the echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.
High-resolution observations of the polar magnetic fields of the sun
NASA Technical Reports Server (NTRS)
Lin, H.; Varsik, J.; Zirin, H.
1994-01-01
High-resolution magnetograms of the solar polar region were used for the study of the polar magnetic field. In contrast to low-resolution magnetograph observations, which measure the polar magnetic field averaged over a large area, we focused our efforts on the properties of the small magnetic elements in the polar region. Evolution of the filling factor (the ratio of the area occupied by the magnetic elements to the total area) of these magnetic elements, as well as the average magnetic field strength, was studied during the maximum and declining phases of solar cycle 22, from early 1991 to mid-1993. We found that during the sunspot maximum period, the polar regions were occupied by about equal numbers of positive and negative magnetic elements, with equal average field strength. As the solar cycle progressed toward sunspot minimum, the magnetic field elements in the polar region became predominantly of one polarity. The average magnetic field of the dominant-polarity elements also increased with the filling factor. Meanwhile, both the filling factor and the average field strength of the non-dominant-polarity elements decreased. The combined effects of the changing filling factors and average field strengths produce the observed evolution of the integrated polar flux over the solar cycle. We compared the evolutionary histories of both filling factor and average field strength for regions of high (70-80 deg) and low (60-70 deg) latitudes. For the south pole, we found no significant evidence of a difference in the time of reversal. However, the low-latitude region of the north pole did reverse polarity much earlier than the high-latitude region, and it later showed an oscillatory behavior. We suggest this may be caused by the poleward migration of flux from a large active region in 1989 with highly imbalanced flux.
Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J
2017-09-01
Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when minimum duration was <0.5 s, and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
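The interaction between an intensity threshold and a minimum-effort duration can be sketched as a simple run-length check over a velocity trace. This is an illustrative sketch only, not the study's processing pipeline; the function name, the 10-Hz sampling assumption, and the synthetic trace are hypothetical:

```python
def count_efforts(velocity, threshold, min_duration_s, hz=10):
    """Count runs of consecutive samples at/above `threshold` that last
    at least `min_duration_s` seconds, given samples at `hz` Hz."""
    min_samples = int(round(min_duration_s * hz))
    efforts, run = 0, 0
    for v in velocity:
        if v >= threshold:
            run += 1
        else:
            if run >= min_samples:
                efforts += 1
            run = 0
    if run >= min_samples:  # close a run that reaches the end of the trace
        efforts += 1
    return efforts

# Synthetic 10-Hz trace: a 0.4-s burst and a 1.0-s burst above 7.00 m/s.
trace = [5.0] * 10 + [7.5] * 4 + [5.0] * 10 + [7.2] * 10 + [5.0] * 5
```

With a 0.5-s minimum duration only the 1.0-s burst counts as a sprint effort; with a 0.1-s minimum both do, illustrating how the chosen duration changes the reported totals.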
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
Wagenaar, Alexander C; Maldonado-Molina, Mildred M; Erickson, Darin J; Ma, Linan; Tobler, Amy L; Komro, Kelli A
2007-09-01
We examined effects of state statutory changes in DUI fine or jail penalties for first-time offenders from 1976 to 2002. A quasi-experimental time-series design was used (n = 324 monthly observations). Four outcome measures of drivers involved in alcohol-related fatal crashes are: single-vehicle nighttime, low BAC (0.01-0.07 g/dl), medium BAC (0.08-0.14 g/dl), and high BAC (≥0.15 g/dl). All analyses of BAC outcomes included multiple imputation procedures for cases with missing data. Comparison series of non-alcohol-related crashes were included to efficiently control for effects of other factors. Statistical models include state-specific Box-Jenkins ARIMA models and pooled general linear mixed models. Twenty-six states implemented mandatory minimum fine policies and 18 states implemented mandatory minimum jail penalties. Estimated effects varied widely from state to state. Using variance-weighted meta-analysis methods to aggregate results across states, mandatory fine policies are associated with an average reduction in fatal crash involvement by drivers with BAC ≥0.08 g/dl of 8% (averaging 13 per state per year). Mandatory minimum jail policies are associated with a decline in single-vehicle nighttime fatal crash involvement of 6% (averaging 5 per state per year), and a decline in low-BAC cases of 9% (averaging 3 per state per year). No significant effects were observed for the other outcome measures. The overall pattern of results suggests a possible effect of mandatory fine policies in some states, but little effect of mandatory jail policies.
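The variance-weighted aggregation across states can be sketched as standard inverse-variance (fixed-effect) pooling. This is an illustrative sketch with hypothetical numbers, not the study's estimates; the function name is an assumption:

```python
def inverse_variance_pool(estimates, variances):
    """Fixed-effect pooled estimate and its variance, weighting each
    state-level effect estimate by the inverse of its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)
```

More precise state estimates (smaller variances) dominate the pooled effect, which is how widely varying state-level results are combined into a single percentage change.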
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (N_e) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust N_e to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
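The sex-ratio adjustment and the 1%-per-generation inbreeding criterion follow Wright's classical formulas, N_e = 4·N_m·N_f / (N_m + N_f) and ΔF = 1 / (2·N_e). A minimal sketch of both (the function names are hypothetical):

```python
def effective_size_sex_ratio(n_males, n_females):
    """Wright's effective population size under an unequal sex ratio:
    Ne = 4 * Nm * Nf / (Nm + Nf)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

def inbreeding_rate(ne):
    """Expected rate of inbreeding per generation: dF = 1 / (2 * Ne)."""
    return 1.0 / (2.0 * ne)
```

An equal sex ratio of 25 + 25 breeders gives N_e = 50 and ΔF = 1%, matching the short-term criterion; a skewed 10 + 40 ratio drops N_e to 32, which is why the census number must be adjusted upward when sex ratios are unequal.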
Intelligent Hybrid Vehicle Power Control - Part 1: Machine Learning of Optimal Vehicle Power
2012-06-30
time window [t − W_DT, t): v_ave, v_max, v_min, a_c, v_st and v_end, where the first four parameters are, respectively, the average speed, maximum speed, minimum speed and average acceleration during the time period [t − W_DT, t); v_st is the vehicle speed at (t − W_DT), and v_end is the vehicle speed at t
Code of Federal Regulations, 2013 CFR
2013-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2012 CFR
2012-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2014 CFR
2014-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2011 CFR
2011-07-01
... record the desorption gas inlet temperature at least once every 15 minutes during each of the three runs... and record the average desorption gas inlet temperature. The minimum operating limit for the concentrator is 8 degrees Celsius (15 degrees Fahrenheit) below the average desorption gas inlet temperature...
Code of Federal Regulations, 2010 CFR
2010-07-01
... monitoring data I must collect with my continuous emission monitoring systems and is the data collection... monitoring systems and is the data collection requirement enforceable? (a) Where continuous emission monitoring systems are required, obtain 1-hour arithmetic averages. Make sure the averages for sulfur dioxide...
Code of Federal Regulations, 2011 CFR
2011-07-01
... monitoring data I must collect with my continuous emission monitoring systems and is the data collection... monitoring systems and is the data collection requirement enforceable? (a) Where continuous emission monitoring systems are required, obtain 1-hour arithmetic averages. Make sure the averages for sulfur dioxide...
How many surgery appointments should be offered to avoid undesirable numbers of 'extras'?
Kendrick, T; Kerry, S
1999-04-01
Patients seen as 'extras' (or 'fit-ins') are usually given less time for their problems than those in pre-booked appointments. Consequently, long queues of 'extras' should be avoided. This study aimed to determine whether a predictable relationship exists between the number of available appointments at the start of the day and the number of extra patients who must be fitted in, which might be used to help plan a practice appointment system. Numbers of available appointments at the start of the day and numbers of 'extras' seen were recorded prospectively in 1995 and 1997 in one group general practice. Minimum numbers of available appointments at the start of the day, below which undesirably large numbers of extra patients could be predicted, were determined using logistic regression applied to the 1995 data. Predictive values of the minimum numbers calculated for 1995, in terms of predicting undesirable numbers of 'extras', were then determined when applied to the 1997 data. Numbers of extra patients seen correlated negatively with available appointments at the start of the day for all days of the week, with coefficients ranging from -0.66 to -0.80. Minimum numbers of available appointments below which undesirably large numbers of extras could be predicted were 26 for Mondays and four for the other weekdays. When applied to 1997 data, these minimum numbers gave positive and negative predictive values of 76% and 82%, respectively, similar to their values for 1995, despite increases in patient attendance and changes in the day-to-day pattern of surgery provision between the two years. A predictable relationship exists between the number of available appointments at the start of the day and the number of extras who must be fitted in, which may be used to help plan the appointment system for some years ahead, at least in this relatively stable suburban practice.
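The positive and negative predictive values quoted above come from a standard 2×2 contingency calculation. A minimal sketch; the cell counts are hypothetical, chosen only to reproduce PPV = 76% and NPV = 82%:

```python
def predictive_values(tp, fp, fn, tn):
    """PPV = TP/(TP+FP) and NPV = TN/(TN+FN) from a 2x2 table of
    days predicted vs. observed to have undesirable numbers of extras."""
    return tp / (tp + fp), tn / (tn + fn)
```

For example, 76 true positives out of 100 positive predictions and 82 true negatives out of 100 negative predictions give (0.76, 0.82).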
Vocal Parameters of Elderly Female Choir Singers
Aquino, Fernanda Salvatico de; Ferreira, Léslie Piccolotto
2015-01-01
Introduction Due to increased life expectancy among the population, studying the vocal parameters of the elderly is key to promoting vocal health in old age. Objective This study aims to analyze the speech range profile of elderly female choristers, according to age group. Method The study included 25 elderly female choristers from the Choir of the Messianic Church of São Paulo, with ages varying between 63 and 82 years and an average of 71 years (standard deviation of 5.22). The elders were divided into two groups: G1 aged 63 to 71 years and G2 aged 72 to 82. We asked each participant to count from 20 to 30 in weak, medium, strong, and very strong intensities. Their speech was registered by the software Vocalgrama, which allows the evaluation of the speech range profile. We then submitted the parameters of frequency and intensity, at both minimum and maximum levels, and the range of the spoken voice to descriptive analysis. Results The average minimum and maximum frequencies were, respectively, 134.82-349.96 Hz for G1 and 137.28-348.59 Hz for G2; the average minimum and maximum intensities were, respectively, 40.28-95.50 dB for G1 and 40.63-94.35 dB for G2; the vocal range used in speech was 215.14 Hz for G1 and 211.30 Hz for G2. Conclusion The minimum and maximum frequencies, maximum intensity, and vocal range presented differences in favor of the younger elder group. PMID:26722341
Low-flow characteristics of streams in South Carolina
Feaster, Toby D.; Guimaraes, Wladmir B.
2017-09-22
An ongoing understanding of streamflow characteristics of the rivers and streams in South Carolina is important for the protection and preservation of the State’s water resources. Information concerning the low-flow characteristics of streams is especially important during critical flow periods, such as during the historic droughts that South Carolina has experienced in the past few decades. Between 2008 and 2016, the U.S. Geological Survey, in cooperation with the South Carolina Department of Health and Environmental Control, updated low-flow statistics at 106 continuous-record streamgages operated by the U.S. Geological Survey for the eight major river basins in South Carolina. The low-flow frequency statistics included the annual minimum 1-, 3-, 7-, 14-, 30-, 60-, and 90-day mean flows with recurrence intervals of 2, 5, 10, 20, 30, and 50 years, depending on the length of record available at the streamflow-gaging station. Computations of daily mean flow durations for the 5-, 10-, 25-, 50-, 75-, 90-, and 95-percent probability of exceedance also were included. This report summarizes the findings from publications generated during the 2008 to 2016 investigations. Trend analyses for the annual minimum 7-day average flows are provided as well as trend assessments of long-term annual precipitation data. Statewide variability in the annual minimum 7-day average flow is assessed at eight long-term (record lengths from 55 to 78 years) streamgages. If previous low-flow statistics were available, comparisons with the updated annual minimum 7-day average flow, having a 10-year recurrence interval, were made. In addition, methods for estimating low-flow statistics at ungaged locations near a gaged location are described.
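The annual minimum 7-day average flow (the basis of statistics such as the 7-day, 10-year low flow) is simply the smallest 7-day moving average of the daily-mean flows in a year. A minimal sketch, assuming a plain list of daily values; the function name is hypothetical:

```python
def annual_min_7day(daily_flows):
    """Smallest 7-day moving-average flow in one year of daily means."""
    assert len(daily_flows) >= 7
    best = None
    for i in range(len(daily_flows) - 6):
        window_avg = sum(daily_flows[i:i + 7]) / 7.0
        if best is None or window_avg < best:
            best = window_avg
    return best
```

Computed for each year of record, these minima form the annual series whose frequency analysis yields the recurrence-interval statistics reported above.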
21 CFR 178.3860 - Release agents.
Code of Federal Regulations, 2011 CFR
2011-04-01
...-octadecylcarbamate) (CAS Reg. No. 70892-21-6) produced by the reaction between stoichiometrically equivalent amounts of octadecyl isocyanate and vinyl alcohol/vinyl acetate copolymer; minimum average molecular weight...
The impact of minimum wages on population health: evidence from 24 OECD countries.
Lenhart, Otto
2017-11-01
This study examines the relationship between minimum wages and several measures of population health by analyzing data from 24 OECD countries for a time period of 31 years. Specifically, I test for health effects as a result of within-country variations in the generosity of minimum wages, which are measured by the Kaitz index. The paper finds that higher levels of minimum wages are associated with significant reductions of overall mortality rates as well as in the number of deaths due to outcomes that have been shown to be more prevalent among individuals with low socioeconomic status (e.g., diabetes, disease of the circulatory system, stroke). A 10-percentage-point increase of the Kaitz index is associated with significant declines in death rates and an increase in life expectancy of 0.44 years. Furthermore, I provide evidence for potential channels through which minimum wages impact population health by showing that more generous minimum wages impact outcomes such as poverty, the share of the population with unmet medical needs, the number of doctor consultations, tobacco consumption, calorie intake, and the likelihood of people being overweight.
Deng, Wei; Long, Long; Tang, Xian-Yan; Huang, Tian-Ren; Li, Ji-Lin; Rong, Min-Hua; Li, Ke-Zhi; Liu, Hai-Zhou
2015-01-01
Geographic information system (GIS) technology has useful applications for epidemiology, enabling the detection of spatial patterns of disease dispersion and locating geographic areas at increased risk. In this study, we applied GIS technology to characterize the spatial pattern of mortality due to liver cancer in the autonomous region of Guangxi Zhuang in southwest China. A database with liver cancer mortality data for 1971-1973, 1990-1992, and 2004-2005, including geographic locations and climate conditions, was constructed, and the appropriate associations were investigated. It was found that the regions with the highest mortality rates were central Guangxi with Guigang City at the center, and southwest Guangxi centered in Fusui County. Regions with the lowest mortality rates were eastern Guangxi with Pingnan County at the center, and northern Guangxi centered in Sanjiang and Rongshui counties. Regarding climate conditions, in the 1990s the mortality rate of liver cancer positively correlated with average temperature and average minimum temperature, and negatively correlated with average precipitation. In 2004 through 2005, mortality due to liver cancer positively correlated with the average minimum temperature. Regions of high mortality had lower average humidity and higher average barometric pressure than did regions of low mortality. Our results provide information to benefit development of a regional liver cancer prevention program in Guangxi, and provide important information and a reference for exploring causes of liver cancer.
Systemic and Local Vaccination against Breast Cancer with Minimum Autoimmune Sequelae
2012-10-01
Award Number: W81XWH-10-1-0466. Title: Systemic and Local Vaccination against Breast Cancer with Minimum Autoimmune Sequelae. The goal is to eliminate the tumor by vaccination and local ablation to render long-term immune protection without excessive autoimmune sequelae.
Systemic And Local Vaccination Against Breast Cancer With Minimum Autoimmune Sequelae
2011-10-01
Award Number: W81XWH-10-1-0466. Title: Systemic and Local Vaccination against Breast Cancer with Minimum Autoimmune Sequelae. The goal is to eliminate the tumor by vaccination and local ablation to render long-term immune protection without excessive autoimmune sequelae.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Mothership ... opt-out allocation (2,220). Column G: Number of Chinook salmon deducted from the annual threshold amount of ...
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. It cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction in improving classifier reliability. PMID:23861920
Canadian crop calendars in support of the early warning project
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Hodges, T. (Principal Investigator)
1980-01-01
The Canadian crop calendars for LACIE are presented. Long term monthly averages of daily maximum and daily minimum temperatures for subregions of provinces were used to simulate normal daily maximum and minimum temperatures. The Robertson (1968) spring wheat and Williams (1974) spring barley phenology models were run using the simulated daily temperatures and daylengths for appropriate latitudes. Simulated daily temperatures and phenology model outputs for spring wheat and spring barley are given.
Validation of the Kp Geomagnetic Index Forecast at CCMC
NASA Astrophysics Data System (ADS)
Frechette, B. P.; Mays, M. L.
2017-12-01
The Community Coordinated Modeling Center (CCMC) Space Weather Research Center (SWRC) sub-team provides space weather services to NASA robotic mission operators and science campaigns and prototypes new models, forecasting techniques, and procedures. The Kp index is a measure of geomagnetic disturbances in the magnetosphere, such as geomagnetic storms and substorms. In this study, we validated the Newell et al. (2007) Kp prediction equation from December 2010 to July 2017. The purpose of this research is to understand Kp forecast performance, because it is critical for NASA missions to have confidence in the space weather forecast. We computed the Kp error for each forecast (average, minimum, maximum) and each synoptic period. To quantify forecast performance, we then computed the mean error, mean absolute error, root mean square error, multiplicative bias, and correlation coefficient. A contingency table was constructed for each forecast, and skill scores were computed and compared to the perfect score and the reference-forecast skill score. In conclusion, the skill score and error results show that the minimum of the predicted Kp over each synoptic period from the Newell et al. (2007) Kp prediction equation performed better than the maximum or average of the prediction. However, persistence (the reference forecast) outperformed all of the Kp forecasts (minimum, maximum, and average). Overall, the Newell Kp prediction still predicts within a range of 1, even though persistence beats it.
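The error statistics named above are standard point-forecast measures. A minimal sketch of their computation for a predicted/observed Kp series; the function name and sample values are hypothetical:

```python
import math

def forecast_errors(predicted, observed):
    """Mean error (bias), mean absolute error, and RMSE of a forecast."""
    n = len(predicted)
    diffs = [p - o for p, o in zip(predicted, observed)]
    mean_error = sum(diffs) / n
    mae = sum(abs(d) for d in diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    return mean_error, mae, rmse
```

An MAE near or below 1 roughly corresponds to the "predicts within a range of 1" statement; the persistence reference forecast is evaluated by feeding the previous synoptic period's observed Kp in as the prediction.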
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdős-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥ 3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α − 1), where the replica symmetry is broken.
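For intuition: the IP asks for the fewest vertices touching every edge, while the LP relaxation allows fractional values (on a triangle, the LP assigns 1/2 to each vertex for a value of 1.5, below the integer optimum of 2). A brute-force sketch of the exact (IP) minimum vertex cover for small graphs; the function name is hypothetical, and the search is exponential in the graph size, so this is for illustration only:

```python
from itertools import combinations

def min_vertex_cover_size(n, edges):
    """Exact minimum vertex cover size by exhaustive search over vertex
    subsets in increasing size (feasible only for small n)."""
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            chosen = set(subset)
            if all(u in chosen or v in chosen for u, v in edges):
                return k
    return n
```

On a triangle this returns 2, while the LP relaxation's optimum is 1.5, illustrating the integrality gap the abstract analyzes in the random-graph setting.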
A conditional stochastic weather generator for seasonal to multi-decadal simulations
NASA Astrophysics Data System (ADS)
Verdin, Andrew; Rajagopalan, Balaji; Kleiber, William; Podestá, Guillermo; Bert, Federico
2018-01-01
We present the application of a parametric stochastic weather generator within a nonstationary context, enabling simulations of weather sequences conditioned on interannual and multi-decadal trends. The generalized linear model framework of the weather generator allows any number of covariates to be included, such as large-scale climate indices, local climate information, seasonal precipitation and temperature, among others. Here we focus on the Salado A basin of the Argentine Pampas as a case study, but the methodology is portable to any region. We include domain-averaged (e.g., areal) seasonal total precipitation and mean maximum and minimum temperatures as covariates for conditional simulation. Areal covariates are motivated by a principal component analysis that indicates the seasonal spatial average is the dominant mode of variability across the domain. We find this modification to be effective in capturing the nonstationarity prevalent in interseasonal precipitation and temperature data. We further illustrate the ability of this weather generator to act as a spatiotemporal downscaler of seasonal forecasts and multidecadal projections, both of which are generally of coarse resolution.
NASA Astrophysics Data System (ADS)
Naveen, A.; Krishnamurthy, L.; Shridhar, T. N.
2018-04-01
Tungsten (W) and alumina (Al2O3) thin films have been developed by a co-sputtering technique on SS304, copper (Cu), and glass-slide substrates, using direct current (DC) magnetron sputtering and radio frequency (RF) magnetron sputtering, respectively. A Central Composite Design (CCD) approach was adopted to determine the experimental plan for deposition, with DC power, RF power, and argon gas flow rate as input parameters, each at 5 levels, for development of the thin films. In this research paper, a study has been carried out to determine the optimized deposition parameters for the thickness and surface roughness of the thin films. Thickness and average surface roughness, in nanometers (nm), have been characterized by a thickness profilometer and atomic force microscopy, respectively. The maximum and minimum average thicknesses were observed to be 445 nm and 130 nm, respectively. The optimum deposition condition for W/Al2O3 thin-film growth was determined to be 1000 watts of DC power, 800 watts of RF power, 20 minutes of deposition time, and almost 300 standard cubic centimeters per minute (SCCM) of argon gas flow. The average roughness difference was found to be less than approximately one nanometer on the SS substrate and one nanometer on copper.
Structure of a swirling jet with vortex breakdown and combustion
NASA Astrophysics Data System (ADS)
Sharaborin, D. K.; Dulin, V. M.; Markovich, D. M.
2018-03-01
An experimental investigation is performed in order to compare the time-averaged spatial structure of low- and high-swirl turbulent premixed lean flames by using the particle image velocimetry and spontaneous Raman scattering techniques. Distributions of the time-average velocity, density, and concentration of the main components of the gas mixture are measured for turbulent premixed swirling propane/air flames at atmospheric pressure for the equivalence ratio Φ = 0.7 and Reynolds number Re = 5000 for low- and high-swirl reacting jets. For the low-swirl jet (S = 0.41), a local minimum of the axial mean velocity is observed at the jet center. The positive value of the mean axial velocity indicates the absence of a permanent recirculation zone, and no clear vortex breakdown could be determined from the average velocity field. For the high-swirl jet (S = 1.0), a pronounced vortex breakdown takes place, with a bubble-type central recirculation zone. In both cases, the flames are stabilized in the inner mixing layer of the jet around the central wake, which contains hot combustion products. The O2 concentration in the wake of the low-swirl jet is found to be approximately two times smaller, and the CO2 concentration approximately two times greater, than in the recirculation zone of the high-swirl jet.
Botai, Joel O.; Rautenbach, Hannes; Ncongwane, Katlego P.; Botai, Christina M.
2017-01-01
The north-eastern parts of South Africa, comprising the Limpopo Province, have recorded a sudden rise in the rate of malaria morbidity and mortality in the 2017 malaria season. The epidemiological profiles of malaria, as well as other vector-borne diseases, are strongly associated with climate and environmental conditions. A retrospective understanding of the relationship between climate and the occurrence of malaria may provide insight into the dynamics of the disease’s transmission and its persistence in the north-eastern region. In this paper, the association between climatic variables and the occurrence of malaria was studied in the Mutale local municipality in South Africa over a 19-year period. Time series analysis was conducted on monthly climatic variables and monthly malaria cases in the Mutale municipality for the period of 1998–2017. Spearman correlation analysis was performed and a Seasonal Autoregressive Integrated Moving Average (SARIMA) model was developed. Microsoft Excel was used for data cleaning, and the statistical software R was used to analyse the data and develop the model. Results show that both the climatic variables’ and malaria cases’ time series exhibited seasonal patterns, showing a number of peaks and fluctuations. Spearman correlation analysis indicated that monthly total rainfall, mean minimum temperature, mean maximum temperature, mean average temperature, and mean relative humidity were significantly and positively correlated with monthly malaria cases in the study area. Regression analysis showed that monthly total rainfall and monthly mean minimum temperature (R2 = 0.65), at a two-month lagged effect, are the most significant climatic predictors of malaria transmission in Mutale local municipality.
A SARIMA (2,1,2) (1,1,1) model fitted with only malaria cases has a prediction performance of about 51%, and the SARIMAX (2,1,2) (1,1,1) model with climatic variables as exogenous factors has a prediction performance of about 72% for malaria cases. The model gives a close comparison between the predicted and observed numbers of malaria cases, indicating that it provides an acceptable fit for predicting the number of malaria cases in the municipality. To sum up, the association between the climatic variables and malaria cases provides clues to better understand the dynamics of malaria transmission. The lagged effect detected in this study can help in adequate planning for malaria interventions. PMID:29117114
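The two-month lagged Spearman analysis described above can be sketched with a small pure-Python implementation — the monthly series below are hypothetical, not the study's Mutale data, and serve only to show how a climate series at month t is correlated with cases at month t + lag.

```python
def rank(xs):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def lagged_spearman(climate, cases, lag):
    """Correlate climate at month t with cases at month t + lag."""
    return spearman(climate[:len(climate) - lag], cases[lag:])

# Hypothetical monthly series: rainfall drives cases two months later.
rainfall = [10, 80, 120, 60, 20, 5, 0, 0, 15, 40, 90, 110]
cases    = [ 4,  3,   6, 30, 45, 25, 9, 3, 1,  2,  7,  18]
r2 = lagged_spearman(rainfall, cases, lag=2)
```

In practice one would scan lags 0–4 months and keep the lag with the strongest correlation, as the study's two-month result suggests.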
Electronic transport in Thue-Morse gapped graphene superlattice under applied bias
NASA Astrophysics Data System (ADS)
Wang, Mingjing; Zhang, Hongmei; Liu, De
2018-04-01
We theoretically investigate the electronic transport properties of a Thue-Morse gapped graphene superlattice under an applied electric field. The results indicate that the combined effect of the band gap and the applied bias breaks the angular symmetry of the transmission coefficient. The zero-averaged wave-number gap can be greatly modulated by the band gap and the applied bias, but its position is robust against changes in the band gap. Moreover, the conductance and the Fano factor depend strongly not only on the Fermi energy but also on the band gap and the applied bias. In the vicinity of the new Dirac point, the minimum value of the conductance obviously decreases and the Fano factor gradually forms a Poissonian value plateau with increasing band gap.
A new approach to importance sampling for the simulation of false alarms. [in radar systems]
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of importance sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For the Gaussian distribution, the estimator in this approach is uniformly better than that of the previously known importance sampling approach. For a cell averaging system, by combining this technique with group sampling, the reduction in Monte Carlo runs for a reference cell of 20 and a false alarm rate of 1E-6 is on the order of 170 as compared to the previously known importance sampling approach.
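The variance-reduction idea can be illustrated with a standard importance sampling estimator for an exponential tail probability — the paper's modified estimator is not reproduced here; this is a generic sketch using a heavier-tailed exponential proposal, with a false-alarm-rate-sized target of 1E-6.

```python
import math
import random

def is_tail_prob(threshold, n, rate_proposal, seed=0):
    """Estimate p = P(X > threshold) for X ~ Exp(1) by importance
    sampling: draw from a heavier-tailed Exp(rate_proposal) with
    rate_proposal < 1 and reweight by the likelihood ratio
    f(x)/g(x) = exp(-x) / (rate_proposal * exp(-rate_proposal * x))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate_proposal)
        if x > threshold:
            total += math.exp(-x) / (rate_proposal * math.exp(-rate_proposal * x))
    return total / n

# Target: p = exp(-t) = 1e-6, far too rare for naive Monte Carlo
# at this sample size.
t = -math.log(1e-6)
p_hat = is_tail_prob(t, n=20000, rate_proposal=1.0 / t)
```

Naive Monte Carlo would need on the order of 1e8 runs to see even a handful of exceedances at this rate; the reweighted proposal concentrates samples in the tail, which is the effect the abstract quantifies.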
Code of Federal Regulations, 2010 CFR
2010-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2014 CFR
2014-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2013 CFR
2013-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2011 CFR
2011-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Code of Federal Regulations, 2012 CFR
2012-10-01
... OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.2 Purpose. The purpose of this part is to increase the fuel economy of passenger automobiles by establishing minimum levels of...
Population size, survival, and movements of white-cheeked pintails in Eastern Puerto Rico
Collazo, J.A.; Bonilla-Martinez, G.
2001-01-01
We estimated numbers and survival of White-cheeked Pintails (Anas bahamensis) in eastern Puerto Rico during 1996-1999. We also quantified their movements between Culebra Island and the Humacao Wildlife Refuge, Puerto Rico. Mark-resight population size estimates averaged 1020 pintails during nine 3-month sampling periods from January 1997 to June 1999. On average, minimum regional counts were 38% lower than mark-resight estimates (mean = 631). Adult survival was 0.51 +/- 0.09 (SE). This estimate is similar to those for other anatids of similar size but broader geographic distribution. The probability of pintails surviving and staying in Humacao was higher (67%) than for counterparts on Culebra (31%). The probability of surviving and moving from Culebra to Humacao (41%) was higher than from Humacao to Culebra (20%). These findings, and available information on reproduction, indicate that the Humacao Wildlife Refuge has an important role in the regional demography of pintails. Our findings on population numbers and regional survival are encouraging, given concerns about the species' status due to habitat loss and hunting. However, our outlook for the species is tempered by the remaining gaps in the population dynamics of pintails; for example, survival estimates of broods and fledglings (age 0-1) are needed for a comprehensive status assessment. Until additional data are obtained, White-cheeked Pintails should continue to be protected from hunting in Puerto Rico.
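A mark-resight population estimate of the kind quoted above can be illustrated with the classic Chapman-corrected Lincoln-Petersen estimator — the study's actual estimator is not specified in the abstract, and the banding and resight counts below are hypothetical.

```python
def lincoln_petersen(marked, resighted_total, resighted_marked):
    """Chapman's bias-corrected Lincoln-Petersen estimator:
    N_hat = (M+1)(C+1)/(R+1) - 1, where M birds are marked and a
    later survey counts C birds of which R carry marks."""
    return (marked + 1) * (resighted_total + 1) / (resighted_marked + 1) - 1

# Hypothetical numbers: 120 pintails banded; a resight survey of 250
# birds finds 29 banded ones.
n_hat = lincoln_petersen(120, 250, 29)
```

The estimator assumes a closed population between marking and resighting and equal sightability of marked and unmarked birds; violations of either assumption bias the estimate.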
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
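The check-splitting step described above can be sketched on a toy protograph base matrix — a minimal illustration of splitting one check node into two checks tied together by a new degree-2 variable node; the base matrix and the split point are invented for illustration, not the paper's protographs.

```python
def split_check(base, row, split):
    """Split check node `row` of a protograph base matrix into two
    checks connected by a new degree-2 variable node: the row's edges
    are divided between the two new rows at column index `split`, and
    a new column with a single 1 in each of the two rows ties them."""
    top = [v if j < split else 0 for j, v in enumerate(base[row])]
    bottom = [0 if j < split else v for j, v in enumerate(base[row])]
    out = [r[:] for r in base]
    out[row] = top
    out.append(bottom)           # the second half of the split check
    for r in out:
        r.append(0)              # new degree-2 variable node column
    out[row][-1] = 1
    out[-1][-1] = 1
    return out

# A rate-lowering step on a toy base matrix whose variable nodes all
# have degree 3 (the construction starts from degrees of at least 3).
base = [[1, 1, 1],
        [1, 1, 1],
        [1, 1, 1]]
lowered = split_check(base, row=0, split=2)
new_col_degree = sum(r[-1] for r in lowered)
```

The original variable-node degrees are preserved while one extra check and one degree-2 node are added, which is the mechanism the abstract credits with keeping minimum distance linear in block size (subject to the stated bound on the number of degree-2 nodes).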
Metastable Features of Economic Networks and Responses to Exogenous Shocks
Hosseiny, Ali; Bahrami, Mohammad; Palestrini, Antonio; Gallegati, Mauro
2016-01-01
It is well known that network structure plays an important role in collective behavior. In this paper we study a network of firms and corporations to address metastable features in an Ising-based model. In our model we observe that if, in a recession, the government imposes a demand shock to stimulate the network, metastable features shape its response. We find that there exists a minimum bound such that any demand shock below it is unable to trigger the market out of recession. We then investigate the impact of network characteristics on this minimum bound. Surprisingly, we observe that in a Watts-Strogatz network, although the minimum bound depends on the average degree, when translated into the language of economics such a bound is independent of the average degree. This bound is about 0.44ΔGDP, where ΔGDP is the gap in GDP between recession and expansion. We examine our suggestions for the cases of the United States and the European Union in the recent recession, and compare them with the imposed stimulations. While the stimulation in the US has been above our threshold, in the EU it has been far below our threshold. Besides providing a minimum bound for a successful stimulation, our study of the metastable features suggests that in a time of crisis there is a “golden time passage” in which the minimum bound for successful stimulation can be much lower. Hence, our study strongly suggests that stimulations arise within this time passage. PMID:27706166
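The metastability mechanism can be sketched with Glauber dynamics for an Ising model on a ring lattice (a Watts-Strogatz graph before rewiring) — all parameters and shock sizes below are illustrative, not calibrated to the paper's 0.44ΔGDP bound: a small field fails to tip the all-down "recession" state, while a large one does.

```python
import math
import random

def glauber_magnetization(n, h, beta=2.0, sweeps=300, seed=1):
    """Glauber dynamics for a ferromagnetic Ising ring (J = 1, each
    spin coupled to its two nearest neighbors on either side),
    starting from the all-down 'recession' state under an external
    field h (the demand shock). Returns the final magnetization."""
    rng = random.Random(seed)
    s = [-1] * n
    for _ in range(sweeps):
        for _ in range(n):
            i = rng.randrange(n)
            local = s[i - 1] + s[(i + 1) % n] + s[i - 2] + s[(i + 2) % n]
            d_e = 2.0 * s[i] * (local + h)  # energy change if s[i] flips
            if rng.random() < 1.0 / (1.0 + math.exp(beta * d_e)):
                s[i] = -s[i]
    return sum(s) / n

# A shock below the metastability barrier leaves the market 'down';
# a sufficiently large shock tips it into the 'up' state.
m_small = glauber_magnetization(200, h=0.5)
m_large = glauber_magnetization(200, h=3.0)
```

The barrier arises because flipping a single spin against four aligned neighbors costs energy even with the field applied; only a shock large enough to make domain nucleation and growth favorable escapes the metastable state, which is the analogue of the minimum stimulation bound.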
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-15
.... The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and the need... DP, Amdt 2 Alexandria, MN, Chandler Field, RNAV (GPS) RWY 22, Orig Bemidji, MN, Bemidji Rgnl, RNAV (GPS) RWY 25, Orig Granite Falls, MN, Granite Falls Muni/Lenzen-Roe Meml Fld, Takeoff Minimums and...
Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.
ERIC Educational Resources Information Center
Mincy, Ronald B.
Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…
Code of Federal Regulations, 2014 CFR
2014-07-01
... averages for sulfur dioxide, nitrogen oxides (Class I municipal waste combustion units only), and carbon monoxide are in parts per million by dry volume at 7 percent oxygen (or the equivalent carbon dioxide level). Use the 1-hour averages of oxygen (or carbon dioxide) data from your continuous emission monitoring...
Code of Federal Regulations, 2013 CFR
2013-07-01
... averages for sulfur dioxide, nitrogen oxides (Class I municipal waste combustion units only), and carbon monoxide are in parts per million by dry volume at 7 percent oxygen (or the equivalent carbon dioxide level). Use the 1-hour averages of oxygen (or carbon dioxide) data from your continuous emission monitoring...
Code of Federal Regulations, 2012 CFR
2012-07-01
... averages for sulfur dioxide, nitrogen oxides (Class I municipal waste combustion units only), and carbon monoxide are in parts per million by dry volume at 7 percent oxygen (or the equivalent carbon dioxide level). Use the 1-hour averages of oxygen (or carbon dioxide) data from your continuous emission monitoring...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 8003(b) and (e)? 222.36 Section 222.36 Education Regulations of the Offices of the Department of... for Federally Connected Children Under Section 8003(b) and (e) of the Act § 222.36 What minimum number... of those children under section 8003(b) and (e)? (a) Except as provided in paragraph (d) of this...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 8003(b) and (e)? 222.36 Section 222.36 Education Regulations of the Offices of the Department of... for Federally Connected Children Under Section 8003(b) and (e) of the Act § 222.36 What minimum number... of those children under section 8003(b) and (e)? (a) Except as provided in paragraph (d) of this...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S.
2011-05-17
The process of recovering the waste in storage tanks at the Savannah River Site (SRS) typically requires mixing the contents of the tank to ensure uniformity of the discharge stream. Mixing is accomplished with one to four dual-nozzle slurry pumps located within the tank liquid. For this work, a Tank 48 simulation model with a maximum of four slurry pumps in operation has been developed to estimate flow patterns for efficient solid mixing. The modeling calculations were performed using two approaches. One is a single-phase Computational Fluid Dynamics (CFD) model to evaluate the flow patterns and qualitative mixing behaviors for a range of modeling conditions, since this model was previously benchmarked against test results. The other is a two-phase CFD model to estimate solid concentrations quantitatively by solving the Eulerian governing equations for the continuous fluid and discrete solid phases over the entire fluid domain of Tank 48. The two-phase results should be considered preliminary scoping calculations, since that model has not yet been validated against test results. A series of sensitivity calculations for different numbers of pumps and operating conditions was performed to provide operational guidance for solids suspension and mixing in the tank. In the analysis, the pump was assumed to be stationary. Major solid obstructions, including the pump housing, the pump columns, and the 82 inch central support column, were included. Steady-state, three-dimensional analyses with a two-equation turbulence model were performed with FLUENT{trademark} for the single-phase approach and CFX for the two-phase approach. Recommended operational guidance was developed assuming that local fluid velocity can be used as a measure of sludge suspension and spatial mixing in the single-phase tank model.
For quantitative analysis, a two-phase fluid-solid model was developed for the same modeling conditions as the single-phase model. The modeling results show that the flow patterns driven by four-pump operation satisfy the solid suspension requirement, and the average solid concentration at the plane of the transfer pump inlet is about 12% higher than the tank average concentration for the 70 inch tank level and about the same as the tank average value for the 29 inch liquid level. When one of the four pumps is not operated, the flow patterns still satisfy the minimum suspension velocity criterion; however, the solid concentration near the tank bottom increases by about 30%, although the average solid concentrations near the transfer pump inlet remain about the same as the four-pump baseline results. The flow pattern results show that although the two-pump case satisfies the minimum velocity requirement to suspend the sludge particles, it provides only marginal mixing for the heavier or larger insoluble materials such as MST and KTPB particles. The results demonstrated that when more than one jet is aimed at the same position in the mixing tank domain, inefficient flow patterns result from highly localized momentum dissipation, producing an inactive suspension zone. Thus, after completion of the indexed solids suspension, pump rotations are recommended to avoid producing nonuniform flow patterns. It is noted that when the tank liquid level is reduced from the highest level of 70 inches to the minimum level of 29 inches for a given number of operating pumps, the solid mixing efficiency improves, since the ratio of pump power to mixing volume becomes larger. These results are consistent with the literature.
Kirton, Laurence G.; Yusoff, Norma-Rashid
2017-01-01
The Rajah Brooke's Birdwing, Trogonoptera brookiana, is a large, iconic butterfly that is facing heavy commercial exploitation and habitat loss. Males of some subspecies exhibit puddling behavior. A method of conservation monitoring was developed for subspecies albescens in Ulu Geroh, Peninsular Malaysia, where the males consistently puddle in single-species aggregations at stable geothermal springs, reaching well over 300 individuals when the population is at its highest. Digital photography was used to conduct counts of numbers of males puddling. The numbers of birdwings puddling were significantly correlated with counts of birdwings in flight, but were much higher. The numbers puddling during the peak hour were correlated with numbers puddling throughout the day and could be predicted using the numbers puddling at an alternative hour, enabling flexibility in the time of counts. Average counts for three images taken at each puddle at three peak hours between 1400–1600 hours over 2–3 days were used as a monthly population index. The numbers puddling were positively associated with higher relative humidity and brightness during monitoring hours. Monthly counts of birdwings from monitoring of puddles over a period of two years are presented. The minimum effort required for a monitoring program using counts of puddling males is discussed, as well as the potential of using the method to monitor other species of puddling butterflies. PMID:29232405
NASA Astrophysics Data System (ADS)
Nielsen, R. L.; Ghiorso, M. S.; Trischman, T.
2015-12-01
The database traceDs is designed to provide a transparent and accessible resource of experimental partitioning data. It now includes ~90% of all the experimental trace element partitioning data (~4000 experiments) produced over the past 45 years, and is accessible through a web-based interface (using the portal lepr.ofm-research.org). We set a minimum standard for inclusion, with the threshold criteria being the inclusion of (1) experimental conditions (temperature, pressure, device, container, time, etc.), (2) the major element compositions of the phases, and (3) trace element analyses of the phases. Data sources that did not report these minimum components were not included. The rationale for not including such data is that the degree of equilibration is unknown and, more important, that no rigorous approach to modeling the behavior of trace elements is possible without knowledge of the compositions of the phases and the temperature and pressure of formation/equilibration. The data are stored using a schema derived from that of the Library of Experimental Phase Relations (LEPR), modified to account for additional metadata, and restructured to permit multiple analytical entries for various element/technique/standard combinations. In the process of populating the database, we have learned a number of things about the existing published experimental partitioning data. Most important are: (1) ~20% of the papers do not satisfy one or more of the threshold criteria; (2) the standard format for presenting data is the average, a convention developed when publication space was constrained, despite the fact that all the information can now be published as electronic supplements; and (3) the uncertainties published with the compositional data are often not adequately explained (e.g., 1 or 2 sigma, standard deviation of the average, etc.).
We propose a new set of publication standards for experimental data that include the minimum criteria described above, the publication of all analyses with error based on peak count rates and background, plus information on the structural state of the mineral (e.g. orthopyroxene vs. pigeonite).
NASA Technical Reports Server (NTRS)
Soebiyanto, Radina P.; Bonilla, Luis; Jara, Jorge; McCracken, John; Azziz-Baumgartner, Eduardo; Widdowson, Marc-Alain; Kiang, Richard
2012-01-01
Worldwide, seasonal influenza causes about 500,000 deaths and 5 million severe illnesses per year. The environmental drivers of influenza transmission are poorly understood, especially in the tropics. We aimed to identify meteorological factors for influenza transmission in tropical Central America. We gathered laboratory-confirmed influenza case-counts by week from Guatemala City, San Salvador Department (El Salvador) and Panama Province from 2006 to 2010. The average total cases per year were: 390 (Guatemala), 99 (San Salvador) and 129 (Panama). Meteorological factors including daily air temperature, rainfall, and relative and absolute humidity (RH, AH) were obtained from ground stations, NASA satellites and land models. For these factors, we computed weekly averages and their deviation from the 5-yr means. We assessed the relationship between the number of influenza case-counts and the meteorological factors, including effects lagged by 1 to 4 weeks, using Poisson regression for each site. Our results showed influenza in San Salvador would increase by 1 case within a week of every 1 day with RH>75% (Relative Risk (RR)=1.32, p=.001) and every 1 °C increase in minimum temperature (RR=1.29, p=.007), but would decrease by 1 case for every 1 mm of above-mean weekly rainfall (RR=0.93, p<.001) (model pseudo-R2=0.55). Within 2 weeks, influenza in Panama increased by 1 case for every 1% increase in RH (RR=1.04, p=.003), and by 2 cases for every 1 °C increase in minimum temperature (RR=2.01, p<.001) (model pseudo-R2=0.4). Influenza counts in Guatemala increased by 1 case for every 1 °C increase in minimum temperature in the previous week (RR=1.21, p<.001), and for every 1 mm/day above-normal increase in rainfall rate (RR=1.03, p=.03) (model pseudo-R2=0.54). Our findings that cases increase with temperature and humidity differ from some temperate-zone studies.
However, they indicate that climate parameters such as humidity and temperature could be predictive of influenza activity and should be incorporated into country-specific influenza transmission models.
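The Poisson regressions reported above can be sketched with a minimal log-link Poisson GLM fitted by Newton's method — the data below are synthetic (a hypothetical true rate ratio of exp(0.2) ≈ 1.22 per degree of minimum temperature), not the surveillance counts from the study.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method for Poisson-distributed counts (small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def fit_poisson(x, y, iters=30):
    """Fit log E[y] = b0 + b1*x by Newton's method: a Poisson GLM
    with log link, the model family used in the regressions above."""
    b0 = math.log(sum(y) / len(y) + 1e-12)  # start at the mean rate
    b1 = 0.0
    for _ in range(iters):
        lam = [math.exp(b0 + b1 * xi) for xi in x]
        g0 = sum(yi - li for yi, li in zip(y, lam))          # score
        g1 = sum((yi - li) * xi for xi, yi, li in zip(x, y, lam))
        h00 = sum(lam)                                       # information
        h01 = sum(li * xi for li, xi in zip(lam, x))
        h11 = sum(li * xi * xi for li, xi in zip(lam, x))
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical weekly data: minimum temperature anomaly (deg C) and
# case counts generated with true coefficients b0 = 1.5, b1 = 0.2.
rng = random.Random(7)
temp = [rng.uniform(-3, 3) for _ in range(300)]
counts = [poisson_sample(math.exp(1.5 + 0.2 * t), rng) for t in temp]
b0_hat, b1_hat = fit_poisson(temp, counts)
rate_ratio = math.exp(b1_hat)
```

The fitted rate ratio exp(b1) is read the same way as the RR values quoted in the abstract: the multiplicative change in expected weekly cases per unit increase of the predictor.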
20 CFR 229.4 - Applying for the overall minimum.
Code of Federal Regulations, 2010 CFR
2010-04-01
... from employment and self-employment in order to determine whether the claimant or annuitant qualifies for the overall minimum. (Approved by the Office of Management and Budget under control number 3220...
20 CFR 229.4 - Applying for the overall minimum.
Code of Federal Regulations, 2011 CFR
2011-04-01
... from employment and self-employment in order to determine whether the claimant or annuitant qualifies for the overall minimum. (Approved by the Office of Management and Budget under control number 3220...
Monitoring gray wolf populations using multiple survey methods
Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.
2013-01-01
The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any 1 survey method. 
Occupancy modeling may be useful for standardizing estimates across large landscapes, even if survey methods differ across regions, allowing for inferences about broad-scale population dynamics of wolves.
Banan, Zoya; Gernand, Jeremy M
2018-04-18
Shale gas has become an important strategic energy source with considerable potential economic benefits and the potential to reduce greenhouse gas emissions insofar as it displaces coal use. However, there still exist environmental health risks caused by emissions from exploration and production activities. In the United States, states and localities have set different minimum setback policies to reduce the health risks corresponding to the emissions from these locations, but it is unclear whether these policies are sufficient. This study uses a Gaussian plume model to evaluate the probability of exceeding EPA exposure concentration limits for PM2.5 at various locations around a generic wellsite in the Marcellus shale region. A set of meteorological data monitored at ten different stations across the Marcellus shale gas region in Pennsylvania during 2015 serves as input to this model. Results indicate that even though the current setback distance policy in Pennsylvania (500 ft or 152.4 m) might be effective in some cases, exposure limit exceedance occurs frequently at this distance with higher than average emission rates and/or a greater number of wells per wellpad. Setback distances should be 736 m to ensure compliance with the daily average concentration limit for PM2.5, and a function of the number of wells to comply with the annual average PM2.5 exposure standard. Marcellus Shale gas development is known as a significant source of criteria pollutants, and studies show that the current setback distance in Pennsylvania is not adequate to protect residents from exceeding the established limits. Even an effective setback distance to meet the annual exposure limit may not be adequate to meet the daily limit. The probability of exceeding the annual limit increases with the number of wells per site. We use a probabilistic dispersion model to introduce a technical basis for selecting appropriate setback distances.
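The Gaussian plume calculation underlying the setback analysis can be sketched as follows — the standard ground-level point-source formula with reflection, using illustrative Briggs-style dispersion coefficients; the source strength, stack height, wind speed, and coefficients are assumptions, not the study's calibrated values.

```python
import math

def plume_ground_conc(q, u, x, y, h_stack,
                      a=0.08, b=0.0001, c=0.06, d=0.0015):
    """Ground-level concentration from a continuous point source:
    C = Q / (pi * u * sy * sz) * exp(-y^2 / (2 sy^2))
                               * exp(-H^2 / (2 sz^2)),
    where sy, sz follow Briggs-style power laws in downwind distance x.
    Units: q in g/s, u in m/s, distances in m, C in g/m^3."""
    sy = a * x / math.sqrt(1 + b * x)
    sz = c * x / math.sqrt(1 + d * x)
    return (q / (math.pi * u * sy * sz)
            * math.exp(-y * y / (2 * sy * sy))
            * math.exp(-h_stack * h_stack / (2 * sz * sz)))

# Centerline ground-level concentration vs downwind distance for a
# hypothetical 1 g/s source at 5 m height in a 3 m/s wind, evaluated
# at the Pennsylvania setback (152.4 m) and the suggested 736 m.
distances = [100, 152.4, 300, 736, 1500]
conc = [plume_ground_conc(1.0, 3.0, x, 0.0, 5.0) for x in distances]
```

A setback analysis would repeat this over a year of met-station wind records and emission-rate draws, counting how often the concentration at each candidate distance exceeds the PM2.5 limit.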
Laines Canepa, José Ramón; Zequeira Larios, Carolina; Valadez Treviño, Maria Elena Macías; Garduza Sánchez, Diana Ivett
2012-03-01
State parks are highly sensitive areas of great natural importance and tourism value. Herein we present a case study involving a basic survey of solid waste carried out in 2006 in Agua Blanca State Park, Macuspana, Tabasco, Mexico, with two sampling periods representing the high and low tourist seasons. The survey had five objectives: to find out the number of visitors in the different seasons; to estimate the daily generation of solid waste from tourist activities; to determine bulk density; to select and quantify sub-products; and to suggest a possible treatment. A daily average of 368 people visited the park: 18,862 people in 14 days during the high season holiday (in just one day, Easter Sunday, up to 4425 visitors) and 2092 visitors in 43 days during the low season. The average weight of the generated solid waste was 61.267 kg day(-1) and the average generated per person was 0.155 kg person(-1) day(-1). During the high season, the average increased to 0.188 kg person(-1) day(-1), and during the low season it decreased to 0.144 kg person(-1) day(-1). The bulk density average was 75.014 kg m(-3); the maximum value was 92.472 kg m(-3) and the minimum 68.274 kg m(-3). The sub-products comprised 54.52% inorganic matter, 32.03% organic matter, 10.60% non-recyclable material and 2.85% others. Based on these results, waste management strategies such as reuse/recycling, aerobic and anaerobic digestion, the construction of a manual landfill and the employment of a specialist firm were suggested.
A Traffic-Dependent Acoustical Grinding Criterion
NASA Astrophysics Data System (ADS)
Dings, P.; Verheijen, E.; Kootwijk-Damman, C.
2000-03-01
On most lines of the Dutch railway network, where a substantial number of block-braked trains have rough wheels, the average wheel roughness dominates over the rail roughness. Therefore, reducing wheel roughness is the top priority in the Netherlands. However, in situations where rail roughness exceeds wheel roughness, this roughness can be lowered at acceptable cost. The high rail roughness is often due to rail corrugation, which can be removed by grinding. A method has been developed to assess the rail roughness periodically on each railway line of the network, to compare it with the average wheel roughness for that line, and to determine whether a noise reduction can be achieved by grinding the rail. Roughness measurements can be carried out with an instrumented coach. The two axle-boxes of a measurement wheelset are equipped with accelerometers. Together with the train speed and the right frequency filter, the accelerometer signal is used to produce a wavelength spectrum of the rail roughness. To determine the average wheel roughness on a given line, the so-called Acoustical Timetable can be used. This database comprises train types, train intensities and train speeds for each track section in the Netherlands. An average wheel roughness spectrum is known for each type of braking system. The number of trains of each type passing by on a certain track section determines the average roughness. Analysis of the data shows on which track sections the rail roughness exceeds the wheel roughness by a specified level difference. If such a track section lies in a residential area, the decision can be made to grind this piece of track to reduce the noise production locally. Using this methodology, the noise production can be kept to a minimum, determined by the local average wheel roughness.
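The frequency-to-wavelength conversion underlying that spectrum follows directly from the train speed: a roughness wavelength lambda is excited at frequency f = v / lambda. A minimal sketch (the speed and frequency values are illustrative, not from the measurement coach):

```python
def roughness_wavelength(speed_kmh, freq_hz):
    """Rail-roughness wavelength (m) that excites frequency freq_hz
    when traversed at speed_kmh: lambda = v / f."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v / freq_hz

# a 144 km/h (40 m/s) passage: a 500 Hz accelerometer component
# maps to an 8 cm corrugation wavelength
wl = roughness_wavelength(144.0, 500.0)
```

This is why the measured accelerometer spectrum must be combined with the recorded train speed before it can be compared, wavelength by wavelength, with the average wheel roughness spectrum.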
Temporal dynamics of CO2 fluxes and profiles over a Central European city
NASA Astrophysics Data System (ADS)
Vogt, R.; Christen, A.; Rotach, M. W.; Roth, M.; Satyanarayana, A. N. V.
2006-02-01
In summer 2002, eddy covariance flux measurements of CO2 were performed over a dense urban surface. The month-long measurements were carried out in the framework of the Basel Urban Boundary Layer Experiment (BUBBLE). Two Li7500 open-path analysers were installed at z/zH = 1.0 and 2.2 above a street canyon, with zH the average building height of 14.6 m and z the height above street level. Additionally, profiles of CO2 concentration were sampled at 10 heights from street level up to 2zH. The minimum and maximum of the average diurnal course of CO2 concentration at 2zH were 362 and 423 ppmv, in late afternoon and early morning, respectively. Daytime CO2 concentrations were not correlated with local sources; e.g., the minimum occurred together with the maximum in traffic load. During night-time, CO2 generally accumulated, except when inversion development was suppressed by frontal passages. CO2 concentrations always decreased with height and, correspondingly, the fluxes were on average always directed upward. At z/zH = 2.2, low values of about 3 µmol m-2 s-1 were measured during the second half of the night. During daytime, average values reached up to 14 µmol m-2 s-1. The CO2 fluxes are well correlated with the traffic load, with their maxima occurring together in late afternoon. Daytime minimum CO2 concentrations fell below regional background values. Besides vertical mixing and entrainment, it is suggested that this is also due to advection of rural air with reduced CO2 concentration. Comparison with other urban observations shows a large range of differences among urban sites in terms of both CO2 fluxes and concentrations.
Global Precipitation Measurement (GPM) Validation Network
NASA Technical Reports Server (NTRS)
Schwaller, Mathew; Morris, K. Robert
2010-01-01
The method averages the minimum TRMM PR and Ground Radar (GR) sample volumes needed to match up spatially and temporally coincident PR and GR data types. PR and GR averages are calculated at the geometric intersection of the PR rays with the individual GR sweeps. Along-ray PR data are averaged only in the vertical; GR data are averaged only in the horizontal. Differences between PR and GR reflectivity are small high in the atmosphere, with relatively larger differences lower down. Version 6 TRMM PR underestimates rainfall in the case of convective rain in the lower part of the atmosphere by 30 to 40 percent.
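Schematically, the geometry-matching step reduces to two orthogonal one-dimensional averages at each PR-ray/GR-sweep intersection. A toy sketch with made-up reflectivity values, not Validation Network code:

```python
def volume_match(pr_gates_dbz, gr_bins_dbz):
    """At the geometric intersection of a PR ray with a GR sweep, average
    the PR range gates in the vertical and the GR range bins in the
    horizontal, yielding one spatially matched reflectivity pair (dBZ)."""
    pr_avg = sum(pr_gates_dbz) / len(pr_gates_dbz)  # along-ray (vertical) mean
    gr_avg = sum(gr_bins_dbz) / len(gr_bins_dbz)    # along-sweep (horizontal) mean
    return pr_avg, gr_avg

# illustrative gate/bin values straddling one intersection
pr_avg, gr_avg = volume_match([30.0, 32.0, 34.0], [31.5, 32.5])
```

A real implementation would also account for beam geometry and would typically average reflectivity in linear units rather than dBZ; the sketch only shows the two averaging directions.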
The impact of environmental factors on marine turtle stranding rates
Flint, Mark; Limpus, Colin J.; Mills, Paul C.
2017-01-01
Globally, tropical and subtropical regions have experienced an increased frequency and intensity of extreme weather events, ranging from severe drought to protracted rain depressions and cyclones; these events have coincided with an increased number of marine turtles subsequently reported stranded. This study investigated the relationship between environmental variables and marine turtle stranding. The environmental variables examined in this study, in descending order of importance, were freshwater discharge, monthly mean maximum and minimum air temperatures, monthly average daily diurnal air temperature difference, and rainfall for the latitudinal hotspots (-27°, -25°, -23°, -19°) along the Queensland coast as well as for major embayments within these blocks. This study found that marine turtle strandings can be linked to these environmental variables at different lag times (3–12 months), and that cumulative (months added together for maximum lag) and non-cumulative (single month only) effects cause different responses. Different latitudes also showed different responses of marine turtle strandings, both in response direction and timing. Cumulative effects of freshwater discharge at all latitudes resulted in increased strandings 10–12 months later. For latitudes -27°, -25° and -23°, non-cumulative effects of discharge resulted in increased strandings 7–12 months later. Latitude -19° showed different results in the non-cumulative analysis, with strandings reported earlier (3–6 months). Monthly mean maximum and minimum air temperatures, monthly average daily diurnal air temperature difference and rainfall had varying results for each examined latitude. This study will allow first responders and resource managers to be better equipped to deal with increased marine turtle stranding rates following extreme weather events. PMID:28771635
NASA Astrophysics Data System (ADS)
López-González, M. J.; Rodríguez, E.; García-Comas, M.; López-Puertas, M.; Olivares, I.; Ruiz-Bueno, J. A.; Shepherd, M. G.; Shepherd, G. G.; Sargoytchev, S.
2017-11-01
In this paper, we investigate the tidal activity in the mesosphere and lower thermosphere region at 37°N using OH Meinel and O2 atmospheric airglow observations from 1998 to 2015. The observations were taken with a Spectral Airglow Temperature Imager (SATI) installed at Sierra Nevada Observatory (SNO) (37.06°N, 3.38°W) at 2900 m height. From these observations a seasonal dependence of the amplitudes of the semidiurnal tide is inferred. The maximum tidal amplitude occurs in winter and the minimum in summer. The vertically averaged rotational temperatures and vertically integrated volume emission rates (hereinafter rotational temperatures and intensities) from the O2 atmospheric band measurements, and the rotational temperature derived from the OH Meinel band measurements, reach their maximum amplitude about 1-4 h after midnight during almost all the year, except in August-September when the maximum is found 2-4 h earlier. The amplitude of the tide in the OH intensity reaches its minimum near midnight in midwinter; it is then progressively delayed until 4:00 LT in August-September, and from there on it moves forward again towards midnight. The mean Krassovsky numbers for the OH and O2 emissions are 5.9 ±1.8 and 5.6 ±1.0, respectively, with negative Krassovsky phases for almost all the year, indicating an upward energy transport. The mean vertical wavelengths for the vertical tidal propagation derived from the OH and O2 emissions are 35 ±20 km and 33 ±18 km, respectively. The vertical wavelengths, together with the phase shift in the temperature derived from both airglow emissions, indicate that these airglow emission layers are separated by 7 ±3 km on average.
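The Krassovsky number quoted above is the ratio of the relative intensity amplitude to the relative temperature amplitude of the airglow oscillation. A minimal sketch of its magnitude; the amplitudes below are illustrative assumptions, not measured SATI values:

```python
def krassovsky_magnitude(dI, I_mean, dT, T_mean):
    """|eta| = (dI / I_mean) / (dT / T_mean): how strongly the airglow
    intensity oscillates relative to the rotational temperature."""
    return (dI / I_mean) / (dT / T_mean)

# illustrative: a 2 K tidal oscillation about a 190 K mean temperature
# paired with a 6.2% relative intensity oscillation gives |eta| near 5.9
eta = krassovsky_magnitude(0.062, 1.0, 2.0, 190.0)
```

The full Krassovsky number is complex; its phase (negative here for most of the year) carries the information about the direction of energy transport.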
Chuang, Ting-Wu; Ionides, Edward L; Knepper, Randall G; Stanuszek, William W; Walker, Edward D; Wilson, Mark L
2012-07-01
Weather is an important determinant of mosquito abundance that, in turn, influences vector-borne disease dynamics. In temperate regions, transmission generally is seasonal, as mosquito abundance and behavior vary with temperature, precipitation, and other meteorological factors. We investigated how such factors affected species-specific mosquito abundance patterns in Saginaw County, MI, during a 17-yr period. Systematic sampling was undertaken at 22 trapping sites from May to September, 1989-2005, for 19,228 trap-nights and 300,770 mosquitoes in total. Aedes vexans (Meigen), Culex pipiens L., and Culex restuans Theobald, the most abundant species, were analyzed. Weather data included local daily maximum temperature, minimum temperature, total precipitation, and average relative humidity. In addition to standard statistical methods, cross-correlation mapping was used to evaluate temporal associations, with various lag periods, between weather variables and species-specific mosquito abundances. Overall, the average number of mosquitoes per trap-night was 4.90 for Ae. vexans, 2.12 for Cx. pipiens, and 1.23 for Cx. restuans. Statistical analysis of the considerable temporal variability in species-specific abundances indicated that precipitation and relative humidity 1 wk prior were significantly positively associated with Ae. vexans abundance, whereas elevated maximum temperature had a negative effect during summer. Cx. pipiens abundance was positively influenced by the preceding minimum temperature in the early season but negatively associated with precipitation during summer and with maximum temperature in July and August. Cx. restuans showed the least weather association, with only relative humidity 2-24 d prior being positively linked during late spring-early summer. The recently developed analytical method applied in this study could enhance our understanding of the influences of weather variability on mosquito population dynamics.
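The core of cross-correlation mapping is a Pearson correlation computed over a grid of time lags between the weather series and the abundance series. A minimal sketch; the series below are invented to embed a 2-step lag, and no real trap counts are used:

```python
def lagged_correlation(x, y, lag):
    """Pearson correlation between weather series x, leading by `lag`
    time steps, and abundance series y."""
    xs, ys = x[:len(x) - lag], y[lag:]  # align x(t - lag) with y(t)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# synthetic example: y follows x with a 2-step delay
x = [1, 3, 2, 5, 4, 7, 6, 9]
y = [0, 0, 1, 3, 2, 5, 4, 7]
best_lag = max(range(4), key=lambda k: lagged_correlation(x, y, k))
```

The full cross-correlation mapping method in the study scans two dimensions (lag and averaging-window width), producing a map of correlation values rather than the single best lag shown here.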
A finite-state, finite-memory minimum principle, part 2
NASA Technical Reports Server (NTRS)
Sandell, N. R., Jr.; Athans, M.
1975-01-01
In part 1 of this paper, a minimum principle was found for the finite-state, finite-memory (FSFM) stochastic control problem. In part 2, conditions for the sufficiency of the minimum principle are stated in terms of the informational properties of the problem. This is accomplished by introducing the notion of a signaling strategy. Then a min-H algorithm based on the FSFM minimum principle is presented. This algorithm converges, after a finite number of steps, to a person-by-person extremal solution.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and... Three Rivers, MI, Three Rivers Muni Dr. Haines, Takeoff Minimums and Obstacle DP, Orig Brainerd, MN, Brainerd Lakes Rgnl, ILS OR LOC/DME RWY 34, Amdt 1 Park Rapids, MN, Park Rapids Muni-Konshok Field, NDB RWY...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2013 CFR
2013-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2014 CFR
2014-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2012 CFR
2012-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Catcher... Allocation and Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to... threshold amount of 13,516 Column H Percent used to calculate IPA minimum participation Vessel name USCG...
High charge state carbon and oxygen ions in Earth's equatorial quasi-trapping region
NASA Technical Reports Server (NTRS)
Christon, S. P.; Hamilton, D. C.; Gloeckler, G.; Eastman, T. E.
1994-01-01
Observations of energetic (1.5-300 keV/e) medium-to-high charge state (+3 ≤ Q ≤ +7) solar wind origin C and O ions made in the quasi-trapping region (QTR) of Earth's magnetosphere are compared to ion trajectories calculated in model equatorial magnetospheric magnetic and electric fields. These comparisons indicate that solar wind ions entering the QTR on the nightside as an energetic component of the plasma sheet exit the region on the dayside, experiencing little or no charge exchange on the way. Measurements made by the CHarge Energy Mass (CHEM) ion spectrometer on board the Active Magnetospheric Particle Tracer Explorer/Charge Composition Explorer (AMPTE/CCE) spacecraft at 7 < L < 9 from September 1984 to January 1989 are the source of the new results contained herein: quantitative long-term determination of number densities, average energies, energy spectra, local time distributions, and their variation with geomagnetic disturbance level as indexed by Kp. Solar wind primaries (ions with charge states unchanged) and their secondaries (ions with generally lower charge states produced from primaries in the magnetosphere via charge exchange) are observed throughout the QTR and have distinctly different local time variations that persist over the entire 4-year analysis interval. During Kp ≥ 3° intervals, primary ion (e.g., O(+6)) densities exhibit a pronounced predawn maximum with an average energy minimum, and a broad near-local-noon density minimum with an average energy maximum. Secondary ion (e.g., O(+5)) densities do not have an identifiable predawn peak; rather, they have a broad dayside maximum peaked in local morning and a nightside minimum. During Kp ≤ 2(-) intervals, primary ion density peaks are less intense, broader in local time extent, and centered near midnight, while secondary ion density local time variations diminish.
The long-time-interval baseline helps to refine and extend previous observations; for example, we show that the ionospheric contribution to O(+3) is negligible. Through comparison with model ion trajectories, we interpret the lack of pronounced secondary ion density peaks colocated with the primary density peaks to indicate that: (1) negligible charge exchange occurs at L > 7, that is, solar wind secondaries are produced at L < 7, and (2) solar wind secondaries do not form a significant portion of the plasma sheet population injected into the QTR. We conclude that little of the energetic solar wind secondary ion population is recirculated through the magnetosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM), specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper, the derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Out of Pocket Payment for Obstetrical Complications: A Cost Analysis Study in Iran
Yavangi, Mahnaz; Sohrabi, Mohammad Reza; Riazi, Sahand
2013-01-01
Background: This study was conducted to determine the total expenditure and out of pocket payment on pregnancy complications in Tehran, the capital of Iran. Methods: A cross-sectional study was conducted on 1172 patients admitted to two general teaching referral hospitals in Tehran. In this study, we calculated total and out of pocket inpatient costs for seven pregnancy complications: preeclampsia, intrauterine growth restriction (IUGR), abortion, ante-partum hemorrhage, preterm delivery, premature rupture of membranes and post-dated pregnancy. We used descriptive analysis and analysis of variance to compare these pregnancy complications. Results: The average duration of hospitalization was 3.28 days and the number of physician visits per patient was 9.79 on average. The average total cost for these pregnancy complications was 735.22 United States dollars (USD) (standard deviation [SD] = 650.53). The average out of pocket share was 277.08 USD (SD = 350.74), which was 37.69% of total expenditure. IUGR, with a payment of 398.76 USD (SD = 418.54) (52.06% of total expenditure), had the greatest out of pocket expenditure of all complications, while abortion had the minimum out of pocket amount, 148.77 USD (SD = 244.05). Conclusions: Obstetric complications had no catastrophic effect on families, but the cost of IUGR was about 30% of monthly household non-food costs in Tehran, so more financial protection plans and insurance are recommended for these patients. PMID:24404365
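The reported out-of-pocket share follows directly from the averages in the abstract; a quick check of the arithmetic (all figures are taken from the abstract, and the implied IUGR total is a back-calculation, not a reported value):

```python
total_cost = 735.22      # average total inpatient cost (USD)
out_of_pocket = 277.08   # average out-of-pocket payment (USD)

# out-of-pocket share of total expenditure, reported as 37.69%
share_pct = out_of_pocket / total_cost * 100

# IUGR, the costliest complication out of pocket: 398.76 USD at 52.06%
# of its total expenditure implies an average total IUGR cost near 766 USD
iugr_total = 398.76 / 0.5206
```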
7 CFR 1710.205 - Minimum approval requirements for all load forecasts.
Code of Federal Regulations, 2014 CFR
2014-01-01
... computer software applications. RUS will evaluate borrower load forecasts for readability, understanding..., distribution costs, other systems costs, average revenue per kWh, and inflation. Also, a borrower's engineering...
A reaction-diffusion within-host HIV model with cell-to-cell transmission.
Ren, Xinzhi; Tian, Yanni; Liu, Lili; Liu, Xianning
2018-06-01
In this paper, a reaction-diffusion within-host HIV model is proposed. It incorporates cell mobility, spatial heterogeneity and cell-to-cell transmission, which depends on the diffusion ability of the infected cells. In the case of a bounded domain, the basic reproduction number R0 is established and shown to be a threshold: the virus-free steady state is globally asymptotically stable if R0 < 1 and the virus is uniformly persistent if R0 > 1. The explicit formula for R0 and the global asymptotic stability of the constant positive steady state are obtained for the case of homogeneous space. In the case of an unbounded domain and R0 > 1, the existence of traveling wave solutions is proved and the minimum wave speed is obtained, provided the mobility of infected cells does not exceed that of the virus. These results are obtained by using the Schauder fixed point theorem, a limiting argument, LaSalle's invariance principle and the one-sided Laplace transform. It is found via numerical simulations that the asymptotic spreading speed may be larger than the minimum wave speed. However, our simulations show that it is possible either to underestimate or overestimate the spread risk R0 if the spatially averaged system is used rather than one that is spatially explicit. The spread risk may also be overestimated if we ignore the mobility of the cells. It turns out that the minimum wave speed could be either underestimated or overestimated as long as the mobility of infected cells is ignored.
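The threshold behavior of the basic reproduction number can be illustrated with a spatially homogeneous (well-mixed) caricature of such a model, reduced to target and infected cells with cell-to-cell transmission only. All parameter values below are illustrative assumptions, not values from the paper:

```python
def simulate_infected(beta, lam=10.0, d=0.1, delta=0.5, dt=0.01, steps=20000):
    """Forward-Euler integration of a minimal target-cell/infected-cell system:
        T' = lam - d*T - beta*T*I
        I' = beta*T*I - delta*I
    For the infection-free state T0 = lam/d, the reproduction number is
    R0 = beta * T0 / delta. Returns the infected density at the final time."""
    T, I = lam / d, 1e-3  # infection-free state plus a tiny seed
    for _ in range(steps):
        T, I = (T + dt * (lam - d * T - beta * T * I),
                I + dt * (beta * T - delta) * I)
    return I

T0 = 10.0 / 0.1                                 # infection-free cell density
I_persist = simulate_infected(2.0 * 0.5 / T0)   # beta chosen so R0 = 2
I_extinct = simulate_infected(0.5 * 0.5 / T0)   # beta chosen so R0 = 0.5
```

With R0 above one the infection settles at a positive equilibrium; below one it decays to zero, mirroring the stability/persistence dichotomy proved in the paper (which additionally handles diffusion and spatial heterogeneity).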
Investigation of Sunspot Area Varying with Sunspot Number
NASA Astrophysics Data System (ADS)
Li, K. J.; Li, F. Y.; Zhang, J.; Feng, W.
2016-11-01
The statistical relationship between sunspot area (SA) and sunspot number (SN) is investigated through analysis of their daily observation records from May 1874 to April 2015. For a total of 1607 days, representing 3 % of the total interval considered, either SA or SN had a value of zero while the other parameter did not. These occurrences most likely reflect the report of short-lived spots by a single observatory and subsequent averaging of zero values over multiple stations. The main results obtained are as follows: i) The number of spotless days around the minimum of a solar cycle is statistically negatively correlated with the maximum strength of solar activity of that cycle. ii) The probability distribution of SA generally decreases monotonically with SA, but the distribution of SN generally increases first, then it decreases as a whole. The different probability distribution of SA and SN should strengthen their non-linear relation, and the correction factor [k] in the definition of SN may be one of the factors that cause the non-linearity. iii) The non-linear relation of SA and SN indeed exists statistically, and it is clearer during the maximum epoch of a solar cycle.
Influence of age and selected environmental factors on reproductive performance of canvasbacks
Serie, Jerome R.; Trauger, David L.; Austin, Jane E.
1992-01-01
Age, productivity, and other factors affecting breeding performance of canvasbacks (Aythya valisineria) are poorly understood. Consequently, we tested whether reproductive performance of female canvasbacks varied with age and selected environmental factors in southwestern Manitoba from 1974 to 1980. Neither clutch size, nest parasitism, nest success, nor the number of ducklings/brood varied with age. Return rates, nest initiation dates, renesting, and hen success were age-related. Return rates averaged 21% for second-year (SY) and 69% for after-second-year (ASY) females (58% for third-year and 79% for after-third-year females). Additionally, water conditions and spring temperatures influenced chronology of arrival, timing of nesting, and reproductive success. Nest initiation by birds of all ages was affected by minimum April temperatures. Clutch size was higher in nests initiated earlier. Interspecific nest parasitism did not affect clutch size, nest success, hen success, or hatching success. Nest success was lower in dry years (17%) than in moderately wet (54%) or wet (60%) years. Nests per female were highest during wet years. No nests of SY females were found in dry years. In years of moderate to good wetland conditions, females of all ages nested. Predation was the primary factor influencing nest success. Hen success averaged 58% over all years. The number of ducklings surviving 20 days averaged 4.7/brood. Because SY females have lower return rates and hen success than ASY females, especially during drier years, management to increase canvasback populations might best be directed to increasing first year recruitment (no. of females returning to breed) and to increasing overall breeding success by reducing predation and enhancing local habitat conditions during nesting.
Jumrani, Kanchan; Bhatia, Virender Singh; Pandey, Govind Prakash
2017-03-01
High-temperature stress is a major environmental stress, and there are limited studies elucidating its impact on soybean (Glycine max (L.) Merrill). The objectives of the present study were to quantify the effect of high temperature on changes in leaf thickness, number of stomata on adaxial and abaxial leaf surfaces, gas exchange, chlorophyll fluorescence parameters and seed yield in soybean. Twelve soybean genotypes were grown at day/night temperatures of 30/22, 34/24, 38/26 and 42/28 °C, with average temperatures of 26, 29, 32 and 35 °C, respectively, under greenhouse conditions. One set was also grown under ambient temperature conditions, where crop-season average maximum, minimum and mean temperatures were 28.0, 22.4 and 25.2 °C, respectively. A significant negative effect of temperature was observed on specific leaf weight (SLW) and leaf thickness. The rate of photosynthesis, stomatal conductance and water use efficiency declined as growing temperatures increased, whereas intercellular CO2 and transpiration rate increased. With the increase in temperature, chlorophyll fluorescence parameters such as Fv/Fm, qP and PhiPSII declined, while qN increased. The number of stomata on both the abaxial and adaxial leaf surfaces increased significantly with increasing temperature. The rate of photosynthesis, PhiPSII, qP and SPAD values were positively associated with leaf thickness and SLW. This indicates that the reduction in photosynthesis and associated parameters appears to be due to the structural changes observed at higher temperatures. The average seed yield was maximum (13.2 g/plant) in plants grown under ambient temperature conditions and declined by 8, 14, 51 and 65% as the temperature was increased to 30/22, 34/24, 38/26 and 42/28 °C, respectively.
Kang, Jung-Ha; Kim, Yi-Kyung; Park, Jung-Youn; An, Chel-Min; Jun, Je-Chun
2012-08-01
Of the more than 300 octopus species, Octopus minor is one of the most popular and economically important species in Eastern Asia, including Korea, along with O. vulgaris, O. ocellatus, and O. aegina. We developed 19 microsatellite markers from Octopus minor, and eight polymorphic markers were used to analyze the genetic diversity and relationships among four octopus populations from Korea and three from China. The number of alleles per locus varied from 10 to 49, and allelic richness per locus ranged from 2 to 16.4 across all populations. The average allele number among the populations was 11.1, with a minimum of 8.3 and a maximum of 13.6. The mean allelic richness was 8.7 across all populations. The Hardy-Weinberg equilibrium (HWE) test revealed significant deviation in 19 of the 56 single-locus sites, and null alleles were presumed in five of eight loci. The pairwise FST values between populations from Korea and China differed significantly in all pairwise comparisons. The genetic distances between the China and Korea samples ranged from 0.161 to 0.454. The genetic distances among the populations from Korea ranged from 0.033 to 0.090, with an average of 0.062; those among the populations from China ranged from 0.191 to 0.316, with an average of 0.254. The populations from Korea and China separated clearly into clusters in an unweighted pair group method with arithmetic mean dendrogram. Furthermore, a population from muddy flats on the western coast of the Korean Peninsula and one from a rocky area on Jeju Island formed clearly separated subclusters. An assignment test based on the allele distribution discriminated between the Korean and Chinese origins with 96.9% accuracy.
Models for short term malaria prediction in Sri Lanka
Briët, Olivier JT; Vounatsou, Penelope; Gunawardena, Dissanayake M; Galappaththy, Gawrie NL; Amerasinghe, Priyanie H
2008-01-01
Background: Malaria in Sri Lanka is unstable and fluctuates in intensity both spatially and temporally. Although the case counts are dwindling at present, given the past history of resurgence of outbreaks despite effective control measures, the control programmes have to stay prepared. The availability of long time series of monitored/diagnosed malaria cases allows for the study of forecasting models, with an aim to developing a forecasting system which could assist in the efficient allocation of resources for malaria control. Methods: Exponentially weighted moving average models, autoregressive integrated moving average (ARIMA) models with seasonal components, and seasonal multiplicative autoregressive integrated moving average (SARIMA) models were compared on monthly time series of district malaria cases for their ability to predict the number of malaria cases one to four months ahead. The addition of covariates such as the number of malaria cases in neighbouring districts or rainfall was assessed for its ability to improve prediction of selected (seasonal) ARIMA models. Results: The best model for forecasting and the forecasting error varied strongly among the districts. The addition of rainfall as a covariate improved prediction of selected (seasonal) ARIMA models modestly in some districts but worsened prediction in other districts. Improvement by adding rainfall was more frequent at larger forecasting horizons. Conclusion: Heterogeneity of patterns of malaria in Sri Lanka requires regionally specific prediction models. Prediction error was large at a minimum of 22% (for one of the districts) for one month ahead predictions. The modest improvement made in short term prediction by adding rainfall as a covariate to these prediction models may not be sufficient to merit investing in a forecasting system for which rainfall data are routinely processed. PMID:18460204
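The simplest model family compared, an exponentially weighted moving average, produces a one-month-ahead forecast by recursively updating a level estimate. A minimal sketch; the smoothing constant and the monthly case counts below are invented for illustration:

```python
def ewma_forecast(series, alpha=0.3):
    """One-step-ahead forecast from an exponentially weighted moving
    average: level <- alpha * observation + (1 - alpha) * level.
    More recent months receive geometrically larger weights."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# invented monthly district case counts; the last level is next month's forecast
cases = [120, 95, 80, 130, 160, 140, 110, 90]
forecast = ewma_forecast(cases)
```

The (S)ARIMA models in the study generalize this by adding autoregressive, differencing and seasonal terms, which is why the best-performing family varied from district to district.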
Mapping from multiple-control Toffoli circuits to linear nearest neighbor quantum circuits
NASA Astrophysics Data System (ADS)
Cheng, Xueyun; Guan, Zhijin; Ding, Weiping
2018-07-01
In recent years, quantum computing research has attracted increasing attention, but few studies have examined in depth the limited interaction distance between quantum bits (qubits). This paper presents a mapping method for transforming multiple-control Toffoli (MCT) circuits into linear nearest neighbor (LNN) quantum circuits instead of using traditional decomposition-based methods. In order to reduce the number of inserted SWAP gates, a novel type of gate with an optimal LNN quantum realization was constructed, namely the NNTS gate. An MCT gate with multiple control bits can be better cascaded by NNTS gates, in which the arrangement of the input lines is an LNN arrangement of the MCT gate. Then, a communication overhead measurement model on the inserted SWAP gate count from the original arrangement to the new arrangement was put forward, and one of the LNN arrangements with the minimum SWAP gate count was selected. Moreover, an LNN arrangement-based mapping algorithm was given; it deals with the MCT gates in turn and maps each MCT gate into its LNN form by inserting the minimum number of SWAP gates. Finally, some simplification rules were used, which can further reduce the final quantum cost of the LNN quantum circuit. Experiments on some benchmark MCT circuits indicate that the direct mapping algorithm results in fewer additional SWAP gates in about 50% of cases, while the average improvement in quantum cost is 16.95% compared to the decomposition-based method. In addition, it has been verified that the proposed method has greater superiority for reversible circuits cascaded by MCT gates with more control bits.
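A simplified illustration of the SWAP-insertion cost that LNN mapping tries to minimize: on a line of qubits, a two-qubit gate acting on positions c and t needs |c - t| - 1 SWAPs to bring the pair adjacent. This is a naive per-gate routing sketch under that counting assumption, not the NNTS-based algorithm of the paper.

```python
def swaps_to_adjacent(c: int, t: int) -> int:
    """SWAP gates needed to make qubits at line positions c and t neighbors."""
    return max(abs(c - t) - 1, 0)

def route_circuit(gates):
    """Total SWAPs for a list of (control, target) gates, moving the control
    next to the target and back (hence the factor of 2) so that later gates
    see the original qubit arrangement."""
    return sum(2 * swaps_to_adjacent(c, t) for c, t in gates)

print(swaps_to_adjacent(0, 3))          # 2 SWAPs to make qubits 0 and 3 adjacent
print(route_circuit([(0, 3), (1, 2)]))  # 4 total: 2*2 for the first gate, 0 for the second
```

The paper's contribution is precisely to avoid this naive back-and-forth routing by choosing an initial LNN arrangement that minimizes the total inserted SWAP count.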
Results and evaluation of a survey to estimate Pacific walrus population size, 2006
Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley
2011-01-01
In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km2 and the area of surveyed blocks was 318,204 km2. The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.
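A minimal sketch of the design-based extrapolation behind an abundance estimate of this kind: animals counted in the surveyed blocks are scaled by the ratio of available habitat to surveyed area. The area figures come from the abstract; the on-ice count is hypothetical, and this simple ratio estimator ignores the thermal-detection, group-size, and haul-out corrections the actual analysis applied.

```python
# Naive density extrapolation from surveyed blocks to total habitat.
surveyed_area_km2 = 318_204   # from the abstract
habitat_area_km2 = 668_000    # from the abstract
count_in_surveyed = 61_450    # hypothetical on-ice count, for illustration only

density = count_in_surveyed / surveyed_area_km2   # walruses per km^2
naive_total = density * habitat_area_km2
print(round(naive_total))
```

The wide 95% confidence interval reported (55,000 to 507,000) reflects the large uncertainty the real corrections introduce on top of this simple scaling.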
75 FR 71368 - Erik Erb; Notice of Receipt of Petition for Rulemaking
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-23
... the minimum days off (MDO) requirement for security officers working 12-hour shifts from an average of 3 days per week to 2.5 or 2 days per week. The NRC is also requesting public comments on the PRM...
Rise in the frequency of cloud cover in LANDSAT data for the period 1973 to 1981. [Brazil
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Mendonca, F. J.; Neto, G. C.
1983-01-01
Percentages of cloud cover in LANDSAT imagery were used to calculate the cloud cover monthly average statistic for each LANDSAT scene in Brazil, during the period of 1973 to 1981. The average monthly cloud cover and the monthly minimum cloud cover were also calculated for the regions of north, northeast, central west, southeast and south, separately.
Leandro-Muñoz, Mariela E; Tixier, Philippe; Germon, Amandine; Rakotobe, Veromanitra; Phillips-Mora, Wilbert; Maximova, Siela; Avelino, Jacques
2017-01-01
Moniliophthora Pod Rot (MPR) caused by the fungus Moniliophthora roreri (Cif.) Evans et al., is one of the main limiting factors of cocoa production in Latin America. Currently insufficient information on the biology and epidemiology of the pathogen limits the development of efficient management options to control MPR. This research aims to elucidate MPR development through the following daily microclimatic variables: minimum and maximum temperatures, wetness frequency, average temperature and relative humidity in the highly susceptible cacao clone Pound-7 (incidence = 86% 2008–2013 average). A total of 55 cohorts totaling 2,268 pods of 3–10 cm length, one to two months of age, were tagged weekly. Pods were assessed throughout their lifetime, every one or two weeks, and classified in 3 different categories: healthy, diseased with no sporulation, diseased with sporulating lesions. As a first step, we used Generalized Linear Mixed Models (GLMM) to determine with no a priori the period (when and for how long) each climatic variable was better related with the appearance of symptoms and sporulation. Then the significance of the candidate variables was tested in a complete GLMM. Daily average wetness frequency from day 14 to day 1, before tagging, and daily average maximum temperature from day 4 to day 21, after tagging, were the most explanatory variables of the symptoms appearance. The former was positively linked with the symptoms appearance when the latter exhibited a maximum at 30°C. The most important variables influencing sporulation were daily average minimum temperature from day 35 to day 58 and daily average maximum temperature from day 37 to day 48, both after tagging. Minimum temperature was negatively linked with the sporulation while maximum temperature was positively linked. Results indicated that the fungal microclimatic requirements vary from the early to the late cycle stages, possibly due to the pathogen’s long latent period. 
This information is valuable for development of new conceptual models for MPR and improvement of control methods. PMID:28972981
Match-play activity profile in elite women's rugby union players.
Suarez-Arrones, Luis; Portillo, Javier; Pareja-Blanco, Fernando; Sáez de Villareal, Eduardo; Sánchez-Medina, Luis; Munguía-Izquierdo, Diego
2014-02-01
The aim of this study was to provide an objective description of the locomotive activities and exercise intensity undergone during the course of an international-level match of female rugby union. Eight players were analyzed using global positioning system tracking technology. The total distance covered by the players during the whole match was 5,820 ± 512 m. The backs covered significantly more distance than the forwards (6,356 ± 144 vs. 5,498 ± 412 m, respectively). Over this distance, 42.7% (2,487 ± 391 m) was spent standing or walking, 35% jogging (2,037 ± 315 m), 9.7% running at low intensity (566 ± 115 m), 9.5% at medium intensity (553 ± 190 m), 1.8% at high intensity (105 ± 74 m), and 1.2% sprinting (73 ± 107 m). There were significant differences in the distance covered by forwards and backs in certain speed zones. Analysis of the relative distance traveled over successive 10-minute periods of match play revealed that the greatest distances were covered during the first (725 ± 53 m) and the last (702 ± 79 m) 10-minute periods of the match. The average number of sprints, the average maximum distance of sprinting, the average minimum distance of sprinting, and the average sprint distance during the game were 4.7 ± 3.9 sprints, 20.6 ± 10.5 m, 5.8 ± 0.9 m, and 12.0 ± 3.8 m, respectively. There were substantial differences between forwards and backs: backs covered greater total distance and distance in certain speed zones, and showed greater sprinting performance. The players spent 46.9 ± 28.9% of match time between 91 and 100% of maximum heart rate and experienced a large number of impacts (accelerometer data, expressed as g forces) during the game. These findings offer important information to design better training strategies and physical fitness testing adapted to the specific demands of female rugby union.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1998-01-01
Samuel Heinrich Schwabe, the discoverer of the sunspot cycle, observed the Sun routinely from Dessau, Germany during the interval of 1826-1869, averaging about 290 observing days per year. His yearly counts of 'clusters of spots' (or, more correctly, the yearly number of newly appearing sunspot groups) provided a simple means for describing the overt features of the sunspot cycle (i.e., the timing and relative strengths of cycle minimum and maximum). In 1848, Rudolf Wolf, a Swiss astronomer, having become aware of Schwabe's discovery, introduced his now familiar 'relative sunspot number' and established an international cadre of observers for monitoring the future behavior of the sunspot cycle and for reconstructing its past behavior (backwards in time to 1818, based on daily sunspot number estimates). While Wolf's reconstruction is complete (without gaps) only from 1849 (hence, the beginning of the modern era), the immediately preceding interval of 1818-1848 is incomplete, being based on an average of 260 observing days per year. In this investigation, Wolf's reconstructed record of annual sunspot number is compared against Schwabe's actual observing record of yearly counts of clusters of spots. The comparison suggests that Wolf may have misplaced (by about 1-2 yr) and underestimated (by about 16 units of sunspot number) the maximum amplitude for cycle 7. If true, then cycle 7's ascent and descent durations should measure about 5 years each instead of 7 and 3 years, respectively (the extremes of the distributions), and its maximum amplitude should measure about 96 instead of 70. This study also indicates that cycle 9's maximum amplitude is more reliably determined than cycle 8's and that both appear to be of comparable size (about 130 units of sunspot number) rather than being significantly different.
Therefore, caution is urged against the indiscriminate use of the pre-modern era sunspot numbers in long-term studies of the sunspot cycle, since such use may lead to specious results.
21 CFR 177.2415 - Poly(aryletherketone) resins.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., and have a minimum weight-average molecular weight of 12,000, as determined by gel permeation...: Distilled water, 50 percent (by volume) ethanol in distilled water, 3 percent acetic acid in distilled water...
A proof of the theorem regarding the distribution of lift over the span for minimum induced drag
NASA Technical Reports Server (NTRS)
Durand, W F
1931-01-01
The proof of the theorem that the elliptical distribution of lift over the span is that which gives rise to the minimum induced drag has been given in a variety of ways, generally speaking too difficult to be readily followed by the graduate of the average good technical school of the present day. In the form of proof presented in this report, an effort is made to bring the matter more readily within the grasp of this class of readers.
ERIC Educational Resources Information Center
Currents, 2000
2000-01-01
A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)
Monthly and Seasonal Cloud Cover Patterns at the Manila Observatory (14.64°N, 121.08°E)
NASA Astrophysics Data System (ADS)
Antioquia, C. T.; Lagrosas, N.; Caballa, K.
2014-12-01
A ground-based sky imaging system was developed at the Manila Observatory in 2012 to measure cloud occurrence and to analyse the seasonal variation of cloud cover over Metro Manila. Ground-based cloud occurrence measurements provide more reliable results compared to satellite observations, and cloud occurrence data aid in the analysis of the radiation budget in the atmosphere. In this study, a GoPro Hero 2 with an almost 180° field of view is employed to take pictures of the atmosphere. These pictures are taken continuously, with a temporal resolution of 1 min. Atmospheric images from April 2012 to June 2013 (excluding the months of September, October, and November 2012) were processed to determine cloud cover. Cloud cover in an image is measured as the ratio of the number of pixels with clouds present in them to the total number of pixels. The cloud cover values were then averaged over each month to determine their monthly and seasonal variation. In Metro Manila, the dry season occurs in the months of November to May of the next year, while the wet season occurs in the months of June to October of the same year. Fig. 1 shows the measured monthly variation of cloud cover. No data were collected during the months of September 2012 (when the camera was used for the 7SEAS field campaign), October, and November 2012 (due to maintenance and repairs). Results show that there is high cloud cover during the wet season months (80% on average) and low cloud cover during the dry season months (62% on average). The lowest average cloud cover for a wet season month occurred in June 2012 (73%), while the highest occurred in June 2013 (86%).
The variation in average cloud cover within the wet season is relatively small compared to that of the dry season, in which the lowest monthly average cloud cover occurred in April 2012 (38%) and the highest in January 2013 (77%), the minimum and maximum averages being 39 percentage points apart. During the wet season, cloud occurrence is mainly due to tropical storms, the Southwest Monsoon, and local convection processes. In the dry season, fewer clouds form because of cold dry air from the Northeast Monsoon (December to February) and generally dry and hot weather (March to May). Regular data collection has been implemented for further long-term data analysis.
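The cloud-cover metric described above, the fraction of image pixels classified as cloud, can be sketched as follows. Real sky-imager classification uses color ratios; here a simple brightness threshold on a synthetic grayscale image stands in for it, as an assumption for illustration.

```python
import numpy as np

def cloud_cover(image: np.ndarray, threshold: float) -> float:
    """Ratio of cloudy pixels (brightness above threshold) to total pixels."""
    return float((image > threshold).sum()) / image.size

img = np.zeros((10, 10))
img[:8, :] = 1.0               # 80 of 100 pixels "cloudy"
print(cloud_cover(img, 0.5))   # 0.8
```

Averaging this per-image ratio over all images in a month gives the monthly cloud cover values quoted in the abstract.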
Zhang, Xue-Guang; Zhang, Li; Liu, Shu-Wen; Cao, Xue-Ying; Wang, Yuan-Da; Wei, Ri-Bao; Cai, Guang-Yan; Chen, Xiang-Mei
2012-01-01
Background Internal medicine includes several subspecialties. This study aimed to describe the trends in impact factors in different subspecialties of internal medicine during the past 12 years, the developmental differences among the subspecialties, and the possible influencing factors behind these changes and differences. Methods Nine subspecialties of internal medicine were chosen for comparison. All data were collected from the Science Citation Index Expanded and Journal Citation Reports databases. Results (1) Journal numbers in the nine subspecialties increased significantly from 1998 to 2010, with an average increment of 80.23%; cardiac and cardiovascular system diseases increased the most (131.2%) and hematology the least (45%). (2) Impact factors in the subspecialties of infectious disease, cardiac and cardiovascular system diseases, gastroenterology and hepatology, hematology, and endocrinology and metabolism increased significantly (p<0.05), with gastroenterology and hepatology showing the largest increase (65.4%). (3) Journals with impact factors of 0–2 made up the largest proportion in all subspecialties. Among the journals with high impact factors (IF>6), hematology had the maximum proportion (10%); nephrology and respiratory system disease had the minimum (4%). Among the journals with low impact factors (IF<2), nephrology and allergy had the most (60%), while endocrinology and metabolism had the least (40%). There were differences in median IF among the different subspecialties (p<0.05), with endocrinology and metabolism the highest and nephrology the lowest. (4) The highest IF correlated with the journal number and total paper number in each field. Conclusion The IF of internal medicine journals showed an increasingly positive trend, with gastroenterology and hepatology increasing the most. Hematology had more high-IF journals. Endocrinology and metabolism had a higher average IF.
Nephrology remained the lowest position. Numbers of journals and total papers were associated with the highest IF. PMID:23118973
Minimum-Time Consensus-Based Approach for Power System Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Sun, Yannan
2016-02-01
This paper presents minimum-time consensus-based distributed algorithms for power system applications, such as load shedding and economic dispatch. The proposed algorithms are capable of solving these problems in a minimum number of time steps, instead of asymptotically as in most existing studies. Moreover, these algorithms are applicable to both undirected and directed communication networks. Simulation results are used to validate the proposed algorithms.
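For contrast with the minimum-time algorithms above, this is the standard asymptotic average-consensus iteration x ← Wx on an undirected path graph with Metropolis weights: every node converges to the network average of the local measurements (e.g., local load readings). The minimum-time variant instead recovers the exact average from finitely many local iterates, which is not shown here; the graph and measurements below are illustrative assumptions.

```python
import numpy as np

# Metropolis weight matrix (W = I - L/3) for a 4-node path graph 1-2-3-4;
# it is doubly stochastic, so the iteration preserves the average.
W = np.array([
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
])

x = np.array([10.0, 20.0, 30.0, 40.0])   # local measurements (hypothetical)
for _ in range(200):
    x = W @ x                             # each node averages with its neighbors

print(x)   # every entry approaches the average, 25.0
```

The slow geometric convergence visible here (governed by the second-largest eigenvalue of W) is exactly the cost the minimum-time approach avoids.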
Uncertainties in observations and climate projections for the North East India
NASA Astrophysics Data System (ADS)
Soraisam, Bidyabati; Karumuri, Ashok; D. S., Pai
2018-01-01
Northeast India has undergone many changes in climate- and vegetation-related issues in the last few decades due to increased human activities. However, the lack of observations makes it difficult to ascertain the climate change. The study involves the mean, seasonal cycle, trend and extreme-month analysis for summer-monsoon and winter seasons of observed climate data from the Indian Meteorological Department (1° × 1°) and Aphrodite & CRU reanalysis (both 0.5° × 0.5°), and five regional-climate-model simulations (LMDZ, MPI, GFDL, CNRM and ACCESS) from AR5/CORDEX-South-Asia (0.5° × 0.5°). Long-term (1970-2005) observed minimum and maximum monthly temperature and precipitation, and the corresponding CORDEX-South-Asia data for the historical period (1970-2005) and future projections under RCP4.5 (2011-2060), have been analyzed for long-term trends. A large spread is found across the models in the spatial distributions of various mean maximum/minimum climate statistics, though the models qualitatively capture a similar trend in the corresponding area-averaged seasonal cycles. Our observational analysis broadly suggests that there is no significant trend in rainfall; significant trends are observed in the area-averaged minimum temperature during winter. All the CORDEX-South-Asia simulations project an insignificant decreasing trend in future seasonal precipitation but an increasing trend in both seasonal maximum and minimum temperature over northeast India. The frequency of extreme monthly maximum and minimum temperatures is projected to increase. It is not clear from the future projections how the extreme rainfall months during JJAS may change. The results show that uncertainty persists in the CORDEX-South-Asia model projections over the region despite their relatively high resolution.
NASA Astrophysics Data System (ADS)
Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.
2017-07-01
This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas and negative bias in coastal areas, and its average root mean square error over all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between estimated vapor pressure deficit (the difference between saturated vapor pressure at minimum and average temperature) and measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation of model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
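A sketch of the baseline dew point method referenced above (Allen et al. 1998, the FAO-56 guidelines): saturation vapor pressure from the FAO-56 formula, with actual vapor pressure approximated as the saturation pressure at the daily minimum temperature, i.e., assuming Tmin approaches the dew point. The bias correction proposed in the paper is not reproduced here.

```python
import math

def sat_vapor_pressure(temp_c: float) -> float:
    """FAO-56 saturation vapor pressure (kPa) at air temperature temp_c (deg C)."""
    return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def vapor_pressure_dewpoint_method(t_min_c: float) -> float:
    """Dew point method: actual vapor pressure e_a = e_s(Tmin)."""
    return sat_vapor_pressure(t_min_c)

print(round(sat_vapor_pressure(20.0), 3))   # ~2.338 kPa at 20 deg C
```

The paper's improvement replaces this single-temperature assumption with a bi-linear relation between estimated and measured vapor pressure deficit, calibrated from historical relative humidity.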
Optimizing conceptual aircraft designs for minimum life cycle cost
NASA Technical Reports Server (NTRS)
Johnson, Vicki S.
1989-01-01
A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. The results also show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when the number of engines was considered.
Ahrens, Philipp; Martetschläger, Frank; Siebenlist, Sebastian; Attenberger, Johann; Crönlein, Moritz; Biberthaler, Peter; Stöckle, Ulrich; Sandmann, Gunther H
2017-04-26
Humeral head fractures requiring surgical intervention are severe injuries, which may affect the return to sports and daily activities. We hypothesized that athletic patients would be constrained in their sporting activities after surgically treated humeral head fractures, and that despite a long rehabilitation program physical activities would change, with a noticeable avoidance of overhead activities. A case series of 65 patients with a minimum follow-up of 24 months participated in this study. All patients were treated using locking plate fixation. Their sporting activity was recorded at the time of the injury and re-investigated after an average of 3.83 years. The questionnaire covered shoulder function, sporting activities, intensity, sport level, and frequency. Level of evidence: IV. At the time of injury, 61 patients (94%) were engaged in recreational sporting activities. The number of sporting activities declined from 26 to 23 at the follow-up examination. There was also a decline in sports frequency and duration of sports activities. The majority of patients remained active in their recreational sporting activity at a comparable duration and frequency both pre- and postoperatively. Nevertheless, shoulder-centered sport activities including golf, water skiing and martial arts declined or were given up.
The Maximums and Minimums of a Polynomial or Maximizing Profits and Minimizing Aircraft Losses.
ERIC Educational Resources Information Center
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
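The procedure described, locating extrema of a polynomial by finding the real roots of its derivative and evaluating the polynomial there, can be sketched as follows. Complex roots of the derivative are discarded, since they produce no turning point on the real line; the example polynomial is an illustrative choice.

```python
import numpy as np

def real_critical_points(coeffs):
    """Real roots of p'(x) for p given by highest-degree-first coefficients."""
    deriv = np.polyder(np.poly1d(coeffs))
    return sorted(r.real for r in deriv.roots if abs(r.imag) < 1e-9)

# p(x) = x^3 - 3x has p'(x) = 3x^2 - 3, so the critical points are x = -1
# (local maximum, p = 2) and x = 1 (local minimum, p = -2).
p = [1, 0, -3, 0]
xs = real_critical_points(p)
ys = [np.polyval(p, x) for x in xs]
print(xs, ys)
```

Together with the real zeros of p itself, these points give the scaffold for plotting the polynomial over the reals, as the article describes.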
NASA Technical Reports Server (NTRS)
Kasper, J. C.; Stevens, M. L.; Lazarus, A. J.; Steinberg, J. T.; Ogilvie, Keith W.
2006-01-01
We present a study of the variation of the relative abundance of helium to hydrogen in the solar wind as a function of solar wind speed and heliographic latitude over the previous solar cycle. The average values of A(sub He), the ratio of helium to hydrogen number densities, are calculated in 25 speed intervals over 27-day Carrington rotations using Faraday Cup observations from the Wind spacecraft between 1995 and 2005. The higher speed and time resolution of this study compared to an earlier work with the Wind observations has led to the discovery of three new aspects of A(sub He) modulation during solar minimum from mid-1995 to mid-1997. First, we find that for solar wind speeds between 350 and 415 km/s, A(sub He) varies with a clear six-month periodicity, with a minimum value at the heliographic equatorial plane and a typical gradient of 0.01 per degree in latitude. For the slow wind this is a 30% effect. We suggest that the latitudinal gradient may be due to an additional dependence of coronal proton flux on coronal field strength or the stability of coronal loops. Second, once the gradient is subtracted, we find that A(sub He) is a remarkably linear function of solar wind speed. Finally, we identify a vanishing speed of 259 km/s at which A(sub He) is zero, and note that this speed corresponds to the minimum solar wind speed observed at one AU. The vanishing speed may be related to previous theoretical work in which enhancements of coronal helium lead to stagnation of the escaping proton flux. During solar maximum the A(sub He) dependences on speed and latitude disappear, and we interpret this as evidence of two source regions for slow solar wind in the ecliptic plane, one being the solar minimum streamer belt and the other likely being active regions.
Uncertainty relations for angular momentum eigenstates in two and three spatial dimensions
NASA Astrophysics Data System (ADS)
Bracher, Christian
2011-03-01
I reexamine Heisenberg's uncertainty relation for two- and three-dimensional wave packets with fixed angular momentum quantum numbers m or ℓ. A simple proof shows that the product of the average extents Δr and Δp of a two-dimensional wave packet in position and momentum space is bounded from below by ΔrΔp ≥ ℏ(|m| + 1). The minimum uncertainty is attained by modified Gaussian wave packets that are special eigenstates of the two-dimensional isotropic harmonic oscillator, which include the ground states of electrons in a uniform magnetic field. Similarly, the inequality ΔrΔp ≥ ℏ(ℓ + 3/2) holds for three-dimensional wave packets with fixed total angular momentum ℓ, and the equality holds for a Gaussian radial profile. I also discuss some applications of these uncertainty relations.
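The two bounds stated above can be written compactly; the explicit form of the two-dimensional minimizing state is a sketch under the abstract's description (a modified Gaussian with angular momentum m, normalization omitted), not a formula quoted from the paper.

```latex
\Delta r\,\Delta p \;\ge\; \hbar\,\bigl(|m| + 1\bigr)
\quad\text{(2D, fixed } m\text{)},
\qquad
\Delta r\,\Delta p \;\ge\; \hbar\,\bigl(\ell + \tfrac{3}{2}\bigr)
\quad\text{(3D, fixed } \ell\text{)},
```

```latex
% Assumed sketch of the 2D minimizer: a modified Gaussian eigenstate of the
% isotropic harmonic oscillator with angular momentum quantum number m.
\psi_m(r,\phi) \;\propto\; r^{|m|}\, e^{-r^2/2\sigma^2}\, e^{i m \phi}.
```

For m = 0 the first bound reduces to a ΔrΔp ≥ ℏ form of the familiar uncertainty relation, and the bound tightens as the angular momentum grows.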
Current husbandry of red pandas (Ailurus fulgens) in zoos.
Eriksson, P; Zidar, J; White, D; Westander, J; Andersson, M
2010-01-01
The endangered red panda (Ailurus fulgens) is held in zoos worldwide. The aim of this study was to examine how red pandas are kept and managed in captivity and to compare it with the management guidelines. Sixty-nine zoos, mainly from Europe but also from North America and Australia/New Zealand, responded to our survey. The results revealed that in general zoos follow the management guidelines for most of the investigated issues. The average enclosure is almost four times larger than the minimum size recommended by the management guidelines, although seven zoos have smaller enclosures. About half the zoos do not follow the guidelines concerning visitor access and number of nest boxes. Other issues that may compromise animal welfare include proximity of neighboring carnivore species and placement of nest boxes. © 2010 Wiley-Liss, Inc.
State Tobacco Control Spending and Youth Smoking
Tauras, John A.; Chaloupka, Frank J.; Farrelly, Matthew C.; Giovino, Gary A.; Wakefield, Melanie; Johnston, Lloyd D.; O’Malley, Patrick M.; Kloska, Deborah D.; Pechacek, Terry F.
2005-01-01
Objective. We examined the relationship between state-level tobacco control expenditures and youth smoking prevalence and cigarette consumption. Methods. We estimated a 2-part model of cigarette demand using data from the 1991 through 2000 nationally representative surveys of 8th-, 10th-, and 12th-grade students as part of the Monitoring the Future project. Results. We found that real per capita expenditures on tobacco control had a negative and significant impact on youth smoking prevalence and on the average number of cigarettes smoked by smokers. Conclusions. Had states represented by the Monitoring the Future sample and the District of Columbia spent the minimum amount of money recommended by the Centers for Disease Control and Prevention, the prevalence of smoking among youths would have been between 3.3% and 13.5% lower than the rate we observed over this period. PMID:15671473
Textural changes of FER-A peridotite in time series piston-cylinder experiments at 1.0 GPa, 1300°C
NASA Astrophysics Data System (ADS)
Schwab, B. E.; Mercer, C. N.; Johnston, A.
2012-12-01
A series of eight 1.0 GPa, 1300°C partial melting experiments were performed using FER-A peridotite starting material to investigate potential textural changes in the residual crystalline phases over time. Powdered peridotite with a layer of vitreous carbon spheres as a melt sink was sealed in graphite-lined Pt capsules and run in CaF2 furnace assemblies in a 1.27 cm piston-cylinder apparatus at the University of Oregon. Run durations ranged from 4 to 128 hours. Experimental charges were mounted in epoxy, cut, and polished for analysis. In a first attempt to quantify the mineral textures, individual 500x BSE images were collected from selected, representative locations on each of the experimental charges using the FEI Quanta 250 ESEM at Humboldt State University. A Noran System Seven (NSS) EDS system was used to collect x-ray maps (spectral images) to aid in identification of phases. A combination of image analysis techniques within NSS and ImageJ software is being used to process the images and quantify the mineral textures observed. The goals are to quantify the size, shape, and abundance of residual olivine (ol), orthopyroxene (opx), clinopyroxene (cpx), and spinel crystals within the selected sample areas of the run products. Additional work will be done to compare the results of the selected areas with larger (lower magnification) images acquired using the same techniques. Preliminary results indicate that measurements of average grain area, minimum grain area, and average, maximum, and minimum grain perimeter show the greatest change (generally decreasing) for ol, opx, and cpx between the shortest-duration, 4-hour, experiment and the subsequent, 8-hour, experiment. The largest relative change in nearly all of these measurements appears to be for cpx. After the initial decrease, preliminary measurements remain relatively constant for ol, opx, and cpx in experiments from 8 to 128 hours in duration.
In contrast, measured parameters of spinel grains increase from the 4-hour to 8-hour experiment and continue to fluctuate over the time interval investigated. Spinel also represents the smallest number of individual grains (average n = 25) in any experiment. Average aspect ratios for all minerals remain relatively constant (~1.5-2) throughout the time series. Additional measurements and refinements are underway.
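The texture-quantification step described above amounts to reducing per-grain measurements to summary statistics per phase. A minimal sketch of that reduction, using hypothetical grain measurements (real values would come from the NSS/ImageJ particle analysis of the BSE images):

```python
# Sketch: summarizing per-grain measurements for one mineral phase.
# The grain values below are hypothetical placeholders, not measured data.

def summarize(grains):
    """Return count, min/average area, min/max/average perimeter,
    and mean aspect ratio (major/minor axis) for a list of grains."""
    areas = [g["area"] for g in grains]
    perims = [g["perimeter"] for g in grains]
    aspects = [g["major"] / g["minor"] for g in grains]
    return {
        "n": len(grains),
        "area_min": min(areas),
        "area_avg": sum(areas) / len(areas),
        "perim_min": min(perims),
        "perim_max": max(perims),
        "perim_avg": sum(perims) / len(perims),
        "aspect_avg": sum(aspects) / len(aspects),
    }

# Hypothetical olivine grains (areas in um^2, perimeters in um):
olivine = [
    {"area": 120.0, "perimeter": 44.0, "major": 14.0, "minor": 9.0},
    {"area": 95.0, "perimeter": 38.0, "major": 12.0, "minor": 8.0},
    {"area": 150.0, "perimeter": 50.0, "major": 16.0, "minor": 10.0},
]
stats = summarize(olivine)
```

The same reduction, run per phase and per run duration, yields the time-series comparisons (4-hour vs. 8-hour vs. longer runs) discussed above.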
NASA Technical Reports Server (NTRS)
Martin, Norman J.
1959-01-01
Exploratory tests of a circular internal-contraction inlet were made at Mach numbers of 2.00 and 2.35 to determine the effect of a cowl-type boundary-layer control located downstream of the inlet throat. The inlet was designed for a Mach number of 2.5. Tests were also made of the inlet modified to correspond to design Mach numbers of 2.35 and 2.25. Surveys near the minimum area section of the inlet without boundary-layer control indicated maximum averaged pressure recoveries between 0.90 and 0.92 at a free-stream Mach number, M(sub infinity), of 2.35 for the inlets. Farther downstream, after partial subsonic diffusion, a maximum pressure recovery of 0.842 was obtained with the inlet at M(sub infinity) = 2.35. The pressure recovery of the inlet was increased by 0.03 at a Mach number of 2.35 and decreased by 0.02 at a Mach number of 2.00 by the application of cowl-type boundary-layer control. Further investigation with the inlet without bleed demonstrated that an increase of angle of attack from 0 deg to 3 deg reduced the pressure recovery 0.04. The effect of Reynolds number was to increase pressure recovery 0.07 (from 0.785 to 0.855) with an increase in Reynolds number (based on inlet diameter) from 0.79 x 10(exp 6) to 3.19 x 10(exp 6).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gettings, M.B.
A blower-door-directed infiltration retrofit procedure was field tested on 18 homes in south central Wisconsin. The procedure, developed by the Wisconsin Energy Conservation Corporation, includes recommended retrofit techniques as well as criteria for estimating the amount of cost-effective work to be performed on a house. A recommended expenditure level and target air leakage reduction, in air changes per hour at 50 Pascals (ACH50), are determined from the initial leakage rate measured. The procedure produced an average 16% reduction in air leakage rate. For the 7 houses recommended for retrofit, 89% of the targeted reductions were accomplished with 76% of the recommended expenditures. The average cost of retrofits per house was reduced by a factor of four compared with previous programs. The average payback period for recommended retrofits was 4.4 years, based on predicted energy savings computed from achieved air leakage reductions. Although exceptions occurred, the procedure's 8 ACH50 minimum initial leakage rate for advising retrofits to be performed appeared a good choice, based on cost-effective air leakage reduction. Houses with initial rates of 7 ACH50 or below consistently required substantially higher costs to achieve significant air leakage reductions. No statistically significant average annual energy savings was detected as a result of the infiltration retrofits. Average measured savings were -27 therms per year, indicating an increase in energy use, with a 90% confidence interval of 36 therms. Measured savings for individual houses varied widely in both positive and negative directions, indicating that factors not considered affected the results. Large individual confidence intervals indicate a need to increase the accuracy of such measurements as well as to understand the factors that may cause such disparity.
Recommendations for the procedure include more extensive training of retrofit crews, checks for minimum air exchange rates to ensure air quality, and addition of the basic cost of determining the initial leakage rate to the recommended expenditure level. Recommendations for the field test of the procedure include increasing the number of houses in the sample, more timely examination of metered data to detect anomalies, and the monitoring of indoor air temperature. Though not appropriate within a field test of a procedure, further investigation into the effects of air leakage rate reductions on heating loads needs to be performed.
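The screening criterion lends itself to a simple decision function. The 8 ACH50 advising threshold below is from the study; the default target-reduction fraction is an illustrative assumption (it matches the average reduction achieved, not the procedure's actual expenditure/target lookup):

```python
ADVISE_THRESHOLD_ACH50 = 8.0  # minimum initial leakage rate for advising retrofit work

def advise_retrofit(initial_ach50, target_fraction=0.16):
    """Return (do_retrofit, target_ach50) for a house.

    The 8 ACH50 threshold follows the procedure described above. The 16%
    default target reduction is illustrative only: it equals the average
    reduction the field test achieved, not the procedure's real lookup.
    """
    if initial_ach50 < ADVISE_THRESHOLD_ACH50:
        # Tight house: retrofits were consistently not cost-effective.
        return False, initial_ach50
    return True, initial_ach50 * (1.0 - target_fraction)
```

For example, a house measured at 10 ACH50 would be advised for retrofit with a target of 8.4 ACH50, while a 7 ACH50 house would be left as-is.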
Assessment for apraxia in mild cognitive impairment and Alzheimer's disease
Ward, Mirela; Cecato, Juliana F.; Aprahamian, Ivan; Martinelli, José Eduardo
2015-01-01
Objective To evaluate apraxia in healthy elderly and in patients diagnosed with Alzheimer's disease (AD) and mild cognitive impairment (MCI). Methods We evaluated 136 subjects with an average age of 75.74 years (minimum 60 years old, maximum 92 years old) and average schooling of 9 years (minimum of 7 and a maximum of 12 years), using the Mini-Mental State Examination (MMSE), Cambridge Cognitive Examination (CAMCOG), and the Clock Drawing Test. For the analysis of the presence of apraxia, eight subitems from the CAMCOG were selected: the drawings of the pentagon, spiral, house, and clock; and the tasks of putting a piece of paper in an envelope; making the correct one-handed "goodbye" waving movement; paper cutting using scissors; and brushing teeth. Results Elderly controls had an average score of 11.51, compared with MCI (11.13) and AD patients, whose average apraxia test scores were the lowest (10.23). Apraxia scores proved able to differentiate the three groups studied (p=0.001). In addition, a negative correlation was observed between apraxia and MMSE scores. Conclusion We conclude that testing for the presence of apraxia is important in the evaluation of patients with cognitive impairments and may help to differentiate elderly controls, MCI, and AD patients. PMID:29213944
Mwanza, Jean-Claude; Budenz, Donald L; Godfrey, David G; Neelakantan, Arvind; Sayyad, Fouad E; Chang, Robert T; Lee, Richard K
2014-04-01
To evaluate the glaucoma diagnostic performance of ganglion cell inner-plexiform layer (GCIPL) parameters used individually and in combination with retinal nerve fiber layer (RNFL) or optic nerve head (ONH) parameters measured with Cirrus HD-OCT (Carl Zeiss Meditec, Inc, Dublin, CA). Prospective cross-sectional study. Fifty patients with early perimetric glaucoma and 49 age-matched healthy subjects. Three peripapillary RNFL and 3 macular GCIPL scans were obtained in 1 eye of each participant. A patient was considered glaucomatous if at least 2 of the 3 RNFL or GCIPL scans had the average or at least 1 sector measurement flagged at 1% to 5% or less than 1%. The diagnostic performance was determined for each GCIPL, RNFL, and ONH parameter as well as for binary or-logic and and-logic combinations of GCIPL with RNFL or ONH parameters. Sensitivity, specificity, positive likelihood ratio (PLR), and negative likelihood ratio (NLR). Among GCIPL parameters, the minimum had the best diagnostic performance (sensitivity, 82.0%; specificity, 87.8%; PLR, 6.69; and NLR, 0.21). Inferior quadrant was the best RNFL parameter (sensitivity, 74%; specificity, 95.9%; PLR, 18.13; and NLR, 0.27), as was rim area (sensitivity, 68%; specificity, 98%; PLR, 33.3; and NLR, 0.33) among ONH parameters. The or-logic combination of minimum GCIPL and average RNFL provided the overall best diagnostic performance (sensitivity, 94%; specificity, 85.7%; PLR, 6.58; and NLR, 0.07) as compared with the best RNFL, best ONH, and best and-logic combination (minimum GCIPL and inferior quadrant RNFL; sensitivity, 64%; specificity, 100%; PLR, infinity; and NLR, 0.36). The binary or-logic combination of minimum GCIPL and average RNFL or rim area provides better diagnostic performance than and-logic combinations or the best single GCIPL, RNFL, or ONH parameters. This finding may be clinically valuable for the diagnosis of early glaucoma. Copyright © 2014 American Academy of Ophthalmology.
Published by Elsevier Inc. All rights reserved.
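The reported sensitivity, specificity, PLR, and NLR follow from a standard 2x2 confusion matrix, and the or-/and-logic combinations are simple Boolean rules. A sketch, with counts back-calculated from the minimum-GCIPL percentages reported above (50 patients, 49 controls):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1.0 - spec) if spec < 1.0 else float("inf")
    nlr = (1.0 - sens) / spec
    return sens, spec, plr, nlr

def or_logic(flag_a, flag_b):
    # "or-logic": abnormal if either parameter is flagged (boosts sensitivity)
    return flag_a or flag_b

def and_logic(flag_a, flag_b):
    # "and-logic": abnormal only if both parameters are flagged (boosts specificity)
    return flag_a and flag_b

# Counts back-calculated from the reported minimum-GCIPL performance:
# sensitivity 82.0% of 50 patients -> 41 true positives (9 false negatives);
# specificity 87.8% of 49 controls -> 43 true negatives (6 false positives).
sens, spec, plr, nlr = diagnostic_metrics(tp=41, fn=9, tn=43, fp=6)
```

With these counts the function reproduces the abstract's figures (PLR about 6.69, NLR about 0.21), which is a useful sanity check on the metric definitions.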
Assessment of corneal epithelial thickness in dry eye patients.
Cui, Xinhan; Hong, Jiaxu; Wang, Fei; Deng, Sophie X; Yang, Yujing; Zhu, Xiaoyu; Wu, Dan; Zhao, Yujin; Xu, Jianjiang
2014-12-01
To investigate the features of corneal epithelial thickness topography with Fourier-domain optical coherence tomography (OCT) in dry eye patients. In this cross-sectional study, 100 symptomatic dry eye patients and 35 normal subjects were enrolled. All participants answered the ocular surface disease index questionnaire and were subjected to OCT, corneal fluorescein staining, tear breakup time, Schirmer 1 test without anesthetic (S1t), and meibomian morphology. Several epithelial thickness statistics for each eye, including central, superior, inferior, minimum, maximum, minimum - maximum, and map standard deviation, were averaged. Correlations of epithelial thickness with the symptoms of dry eye were calculated. The mean (±SD) central, superior, and inferior corneal epithelial thickness was 53.57 (±3.31) μm, 52.00 (±3.39) μm, and 53.03 (±3.67) μm in normal eyes and 52.71 (±2.83) μm, 50.58 (±3.44) μm, and 52.53 (±3.36) μm in dry eyes, respectively. The superior corneal epithelium was thinner in dry eye patients compared with normal subjects (p = 0.037), whereas central and inferior epithelium were not statistically different. In the dry eye group, patients with higher severity grades had thinner superior (p = 0.017) and minimum (p < 0.001) epithelial thickness, a wider range (p = 0.032), and greater deviation (p = 0.003). The average central epithelial thickness had no correlation with tear breakup time, S1t, or the severity of meibomian glands, whereas average superior epithelial thickness positively correlated with S1t (r = 0.238, p = 0.017). Fourier-domain OCT demonstrated that the corneal epithelium of dry eyes was thinner than that of normal eyes in the superior region. In patients with more severe dry eye disease, the superior and minimum epithelium was much thinner, with a wider range and greater map standard deviation.
Irlenbusch, Ulrich; Berth, Alexander; Blatter, Georges; Zenz, Peter
2012-03-01
Most anthropometric data on the proximal humerus has been obtained from deceased healthy individuals with no deformities. Endoprostheses are implanted for primary and secondary osteoarthritis, rheumatoid arthritis, humeral-head necrosis, fracture sequelae and other humeral-head deformities. This indicates that pathologicoanatomical variability may be greater than previously assumed. We therefore investigated a group of patients with typical shoulder replacement diagnoses, including posttraumatic and rheumatic deformities. One hundred and twenty-two patients with a double eccentrically adjustable shaft endoprosthesis served as a specific dimension gauge to determine in vivo the individual humeral-head rotation centres from the position of the adjustable prosthesis taper and the eccentric head. All prosthesis heads were positioned eccentrically. The entire adjustment range of the prosthesis of 12 mm medial/lateral and 6 mm dorsal/ventral was required. Mean values for effective offset were 5.84 mm mediolaterally [standard deviation (SD) 1.95, minimum +2, maximum +11] and 1.71 mm anteroposteriorly (SD 1.71, minimum −3, maximum 3 mm), averaging 5.16 mm (SD 1.76, minimum +2, maximum +10). The posterior offset averaged 1.85 mm (SD 1.85, minimum −1, maximum +6 mm). In summary, variability of the combined medial and dorsal offset of the humeral-head rotational centre determined in patients with typical underlying diagnoses in shoulder replacement was not greater than that recorded in the literature for healthy deceased patients. The range of deviation is substantial and shows the need for an adjustable prosthetic system.
On a thermonuclear origin for the 1980-81 deep light minimum of the symbiotic nova PU Vul
NASA Technical Reports Server (NTRS)
Sion, Edward M.
1993-01-01
The puzzling 1980-81 deep light minimum of the symbiotic nova PU Vul is discussed in terms of a sequence of quasi-static evolutionary models of a hot, 0.5 solar mass white dwarf accreting H-rich matter at a rate 1 x 10 exp -8 solar mass/yr. On the basis of the morphological behavior of the models, it is suggested that the deep light minimum of PU Vul could have been the result of two successive, closely spaced, hydrogen shell flashes on an accreting white dwarf whose core thermal structure and accreted H-rich envelope was not in a long-term thermal 'cycle-averaged' steady state with the rate of accretion.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-25
..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...
Azim, Syed; Juergens, Craig; Hines, John; McLaws, Mary-Louise
2016-07-01
Human auditing and collating hand hygiene compliance data take hundreds of hours. We report on 24/7 overt observations to establish adjusted average daily hand hygiene opportunities (HHOs) used as the denominator in an automated surveillance that reports daily compliance rates. Overt 24/7 automated surveillance collected HHOs in medical and surgical wards. Accredited auditors observed health care workers' interaction between patient and patient zones to collect the total number of HHOs, indications, and compliance and noncompliance. Automated surveillance captured compliance (ie, events) via low power radio connected to alcohol-based handrub (ABHR) dispensers. Events were divided by HHOs, adjusted for daily patient-to-nurse ratio, to establish daily rates. Human auditors collected 21,450 HHOs during 24/7 with 1,532 average unadjusted HHOs per day. This was 4.4 times larger than the minimum ward sample required for accreditation. The average adjusted HHOs for ABHR alone on the medical ward was 63 HHOs per patient day and 40 HHOs per patient day on the surgical ward. From July 1, 2014-July 31, 2015 the automated surveillance system collected 889,968 events. Automated surveillance collects 4 times the amount of data on each ward per day than a human auditor usually collects for a quarterly compliance report. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
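The daily compliance rate described above divides dispenser events by an HHO denominator adjusted for the daily patient-to-nurse ratio. A sketch under the assumption of linear scaling by that ratio (the study's exact adjustment formula is not given here, so the scaling form and all numbers are illustrative):

```python
def daily_compliance(events, baseline_hhos, patient_to_nurse_ratio, baseline_ratio=4.0):
    """Daily hand hygiene compliance rate: dispenser events / adjusted HHOs.

    baseline_hhos is the average daily opportunity count from the 24/7
    human audit (1,532 unadjusted HHOs/day in the study). Scaling the
    denominator linearly with the day's patient-to-nurse ratio, and the
    baseline ratio of 4.0, are assumptions for illustration only.
    """
    adjusted_hhos = baseline_hhos * (patient_to_nurse_ratio / baseline_ratio)
    return events / adjusted_hhos

# Hypothetical day: 900 dispenser events against the audited baseline.
rate = daily_compliance(events=900, baseline_hhos=1532, patient_to_nurse_ratio=4.0)
```

The point of the adjustment is that the denominator tracks day-to-day staffing and occupancy, so the automated numerator (dispenser events) and the audited denominator stay comparable.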
Weather forecasting based on hybrid neural model
NASA Astrophysics Data System (ADS)
Saba, Tanzila; Rehman, Amjad; AlGhamdi, Jarallah S.
2017-11-01
Making deductions and predictions about the weather has been a challenge throughout human history, and accurate meteorological guidance helps to anticipate and manage problems in good time. Reported forecasting systems have investigated a variety of machine learning strategies. The current research treats weather forecasting as a major challenge for machine learning and inference. Accordingly, this paper presents a hybrid neural model (MLP and RBF) to enhance the accuracy of weather forecasting. The proposed hybrid model yields more precise forecasts given the specialized nature of weather forecasting systems. The study concentrates on data representing weather in Saudi Arabia. The main input features employed to train the individual and hybrid neural networks include average dew point, minimum temperature, maximum temperature, mean temperature, average relative humidity, precipitation, normal wind speed, high wind speed, and average cloudiness. The output layer is composed of two neurons representing rainy and dry weather. Moreover, a trial-and-error approach is adopted to select an appropriate number of inputs to the hybrid neural network. The correlation coefficient, RMSE, and scatter index are the standard yardsticks adopted to measure forecast accuracy. Individually, the MLP's forecasting results are better than the RBF's; however, the proposed simplified hybrid neural model achieves better forecasting accuracy than both individual networks. Additionally, the results are better than those reported in the state of the art, using a simple neural structure that reduces training time and complexity.
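The hybrid idea, combining the outputs of two base regressors, can be sketched generically. The following uses toy data, a closed-form RBF network, and a linear least-squares model as a stand-in for the MLP branch; it illustrates only the combination scheme, not the paper's actual architecture, features, or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data standing in for the weather features/targets.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

def rbf_features(Xq, centers, width):
    """Gaussian RBF activations of query points against fixed centers."""
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# RBF branch: fixed centers sampled from the data, ridge-regularized readout.
centers = X[rng.choice(len(X), size=20, replace=False)]
Phi = rbf_features(X, centers, 0.5)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(20), Phi.T @ y)

def rbf_predict(Xq):
    return rbf_features(Xq, centers, 0.5) @ w

# Stand-in for the MLP branch: a plain linear least-squares fit
# (a trained MLP would be substituted here in a real system).
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def linear_predict(Xq):
    return np.c_[Xq, np.ones(len(Xq))] @ coef

def hybrid_predict(Xq, alpha=0.5):
    # Hybrid model: weighted average of the two branches' outputs.
    return alpha * rbf_predict(Xq) + (1.0 - alpha) * linear_predict(Xq)

Xq = np.linspace(-3, 3, 50)[:, None]
rmse = np.sqrt(np.mean((hybrid_predict(Xq) - np.sin(Xq[:, 0])) ** 2))
```

The blending weight `alpha` here is fixed at 0.5; in practice it (like the number of inputs in the paper) would be chosen by validation.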
1982-08-01
DATA (NUMBER OF POINTS: 1988)
CHANNEL        MINIMUM    MAXIMUM
1  PHMG        -130.13     130.00
2  PS3         -218.12     294.77
3  T3          -341.54     738.15
4  T5          -464.78     623.47
5  PT51          12.317...

(Continued) CRUISE AND TAKE-OFF MODE DATA (NUMBER OF POINTS: 4137)
CHANNEL        MINIMUM    MAXIMUM
1  PHMG        -130.13     130.00
2  P53         -218.12     376.60
3  T3          -482.72
Shrot, Yoav; Frydman, Lucio
2011-04-01
A topic of active investigation in 2D NMR relates to the minimum number of scans required for acquiring this kind of spectra, particularly when these are dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling used to monitor the indirect domain, and relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way to a minimum number of one, by spatially encoding the indirect domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains - often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensed reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of the gradient strengths demanded in this type of experiments by up to 80%. Copyright © 2011 Elsevier Inc. All rights reserved.
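The iterative-reconstruction side of this approach can be illustrated with generic iterative soft-thresholding (ISTA) sparse recovery from undersampled linear measurements. This is a minimal sketch of the algorithm family, not the compressed-sensing code used in the study, and the toy measurement matrix is a Gaussian stand-in for the actual sampling operator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy compressed-sensing setup: recover a sparse "spectrum" x_true from
# m < n linear measurements y = A @ x_true.
n, m, k = 64, 32, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def ist(A, y, lam=0.02, n_iter=500):
    """Iterative soft-thresholding for min 0.5||y - Ax||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L gradient step (L = ||A||^2)
    for _ in range(n_iter):
        r = x + step * (A.T @ (y - A @ x))   # gradient step on the data term
        x = np.sign(r) * np.maximum(np.abs(r) - lam * step, 0.0)  # shrinkage
    return x

x_hat = ist(A, y)
```

In the NMR setting the measurements come from the spatially encoded acquisition rather than a random matrix, and the sparsifying basis is the 2D spectrum; the iteration structure is the same.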
Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
This document provides the information necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. Because the program is a simulation, the method of attack is to disassemble the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time, typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers, resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.
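The Monte Carlo scheme described above, repeated runs with different random numbers reduced to minimum/average/maximum statistics, can be sketched as follows. The failure model and the statistic counted are illustrative stand-ins, not LOVES's own event logic:

```python
import random

def one_run(rng, n_modules=20, years=15, annual_failure_prob=0.1):
    """One replication: count module-failure events over the mission span.

    Each module is assumed to fail independently each year with a fixed
    probability -- a stand-in for LOVES's actual failure/servicing model.
    """
    return sum(
        1
        for _ in range(n_modules)
        for _ in range(years)
        if rng.random() < annual_failure_prob
    )

# Vary the event sequence by re-running with different random seeds,
# then reduce across runs to min / average / max, as LOVES does.
runs = [one_run(random.Random(seed)) for seed in range(30)]
lo, avg, hi = min(runs), sum(runs) / len(runs), max(runs)
```

With 30 replications the spread between `lo` and `hi` gives a rough picture of run-to-run variability; more replications tighten the average.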
Love, M.S.; Schroeder, D.M.; Lenarz, W.; MacCall, A.; Bull, A.S.; Thorsteinson, L.
2006-01-01
Although bocaccio (Sebastes paucispinis) was an economically important rockfish species along the west coast of North America, overfishing has reduced the stock to about 7.4% of its former unfished population. In 2003, using a manned research submersible, we conducted fish surveys around eight oil and gas platforms off southern California as part of an assessment of the potential value of these structures as fish habitat. From these surveys, we estimated that there was a minimum of 430,000 juvenile bocaccio at these eight structures. We determined this number to be about 20% of the average number of juvenile bocaccio that survive annually for the geographic range of the species. When these juveniles become adults, they will contribute about one percent (0.8%) of the additional amount of fish needed to rebuild the Pacific Coast population. By comparison, juvenile bocaccio recruitment to nearshore natural nursery grounds, as determined through regional scuba surveys, was low in the same year. This research demonstrates that a relatively small amount of artificial nursery habitat may be quite valuable in rebuilding an overfished species.
NASA Astrophysics Data System (ADS)
Olson, L.; Pogue, K. R.; Bader, N.
2012-12-01
The Columbia Basin of Washington and Oregon is one of the most productive grape-growing areas in the United States. Wines produced in this region are influenced by their terroir - the amalgamation of physical and cultural elements that influence grapes grown at a particular vineyard site. Of the physical factors, climate, and in particular air temperature, has been recognized as a primary influence on viticulture. Air temperature directly affects ripening in the grapes. Proper fruit ripening, which requires precise and balanced levels of acid and sugar, and the accumulation of pigment in the grape skin, directly correlates with the quality of wine produced. Many features control air temperature within a particular vineyard. Elevation, latitude, slope, and aspect all converge to form complex relationships with air temperatures; however, the relative degree to which these attributes affect temperatures varies between regions and is not well understood. This study examines the influence of geography and geomorphology on air temperatures within the American Viticultural Areas (AVAs) of the Columbia Basin in eastern Washington and Oregon. The premier vineyards within each AVA, which have been recognized for producing high-quality wine, were equipped with air temperature monitoring stations that collected hourly temperature measurements. A variety of temperature statistics were calculated, including daily average, maximum, and minimum temperatures. From these values, average diurnal variation and growing degree-days (10°C) were calculated. A variety of other statistics were computed, including date of first and last frost and time spent below a minimum temperature threshold. These parameters were compared to the vineyard's elevation, latitude, slope, aspect, and local topography using GPS, ArcCatalog, and GIS in an attempt to determine their relative influences on air temperatures. 
From these statistics, it was possible to delineate two trends of temperature variation controlled by elevation. In some AVAs, such as Walla Walla Valley and Red Mountain, average air temperatures increased with elevation because of the effect of cold air pooling on valley floors. In other AVAs, such as Horse Heaven Hills, Lake Chelan and Columbia Gorge, average temperatures decreased with elevation due to the moderating influences of the Columbia River and Lake Chelan. Other temperature statistics, including average diurnal range and maximum and minimum temperature, were influenced by relative topography, including local topography and slope. Vineyards with flat slopes that had low elevations relative to their surroundings had larger diurnal variations and lower maximum and minimum temperatures than vineyards with steeper slopes that were high relative to their surroundings.
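The growing degree-day and diurnal-variation statistics used above follow standard definitions; a minimal sketch computing both from daily (Tmin, Tmax) pairs with the study's 10°C base (the sample temperatures are hypothetical):

```python
BASE_C = 10.0  # base temperature for growing degree-days, as in the study

def growing_degree_days(daily_min_max):
    """Accumulate GDD (base 10C) from (Tmin, Tmax) pairs using the
    standard averaging method: max(0, (Tmin + Tmax)/2 - base) per day."""
    return sum(max(0.0, (tmin + tmax) / 2.0 - BASE_C) for tmin, tmax in daily_min_max)

def diurnal_variation(daily_min_max):
    """Average daily temperature range (Tmax - Tmin)."""
    return sum(tmax - tmin for tmin, tmax in daily_min_max) / len(daily_min_max)

# Hypothetical three-day record in degrees C:
days = [(8.0, 24.0), (12.0, 30.0), (6.0, 18.0)]
gdd = growing_degree_days(days)   # 6.0 + 11.0 + 2.0
dv = diurnal_variation(days)      # (16 + 18 + 12) / 3
```

Run over a full growing season per vineyard station, these two reductions yield the GDD and diurnal-range values compared against elevation, slope, and aspect in the study.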
Dissociation of end systole from end ejection in patients with long-term mitral regurgitation.
Brickner, M E; Starling, M R
1990-04-01
To determine whether left ventricular (LV) end systole and end ejection uncouple in patients with long-term mitral regurgitation, 59 patients (22 control patients with atypical chest pain, 21 patients with aortic regurgitation, and 16 patients with mitral regurgitation) were studied with micromanometer LV catheters and radionuclide angiograms. End systole was defined as the time of occurrence (Tmax) of the maximum time-varying elastance (Emax), and end ejection was defined as the time of occurrence of minimum ventricular volume (minV) and zero systolic flow as approximated by the aortic dicrotic notch (Aodi). The temporal relation between end systole and end ejection in the control patients was Tmax (331 +/- 42 [SD] msec), minV (336 +/- 36 msec), and then, zero systolic flow (355 +/- 23 msec). This temporal relation was maintained in the patients with aortic regurgitation. In contrast, in the patients with mitral regurgitation, the temporal relation was Tmax (266 +/- 49 msec), zero systolic flow (310 +/- 37 msec, p less than 0.01 vs. Tmax), and then, minV (355 +/- 37 msec, p less than 0.001 vs. Tmax and p less than 0.01 vs. Aodi). Additionally, the average Tmax occurred earlier in the patients with mitral regurgitation than in the control patients and patients with aortic regurgitation (p less than 0.01, for both), whereas the average time to minimum ventricular volume was similar in all three patient groups. Moreover, the average time to zero systolic flow also occurred earlier in the patients with mitral regurgitation than in the control patients (p less than 0.01) and patients with aortic regurgitation (p less than 0.05). Because of the dissociation of end systole from minimum ventricular volume in the patients with mitral regurgitation, the end-ejection pressure-volume relations calculated at minimum ventricular volume did not correlate (r = -0.09), whereas those calculated at zero systolic flow did correlate (r = 0.88) with the Emax slope values. 
We conclude that end ejection, defined as minimum ventricular volume, dissociates from end systole in patients with mitral regurgitation because of the shortened time to LV end systole in association with preservation of the time to LV end ejection due to the low impedance to ejection presented by the left atrium. Therefore, pressure-volume relations calculated at minimum ventricular volume might not be useful for assessing LV chamber performance in some patients with mitral regurgitation.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-14
... tunnel construction completed; traffic data, including posted speed, design speed, current average daily... vertical clearance; minimum cross- sectional width; lane width(s); shoulder width(s); and pavement type. (3...
40 CFR 180.1022 - Iodine-detergent complex; exemption from the requirement of a tolerance.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the surfactants (a) polyoxypropylene-polyoxyethylene glycol nonionic block polymers (minimum average... molecular weight of 748 and in which the nonyl group is a propylene trimer isomer, is exempted from the...
Ellis, William L.
1983-01-01
Hydraulic-fracturing measurements are used to infer the magnitude of the least principal stress in the vicinity of the Spent Fuel Test-Climax, located in the Climax stock at the Nevada Test Site. The measurements, made at various depths in two exploratory boreholes, suggest that the local stress field is not uniform. Estimates of the least principal stress magnitude vary over distances of a few tens of meters, with the smaller values averaging 2.9 MPa and the larger values averaging 5.5 MPa. The smaller values are in agreement with the minimum-stress magnitude of 2.8 MPa determined in a nearby drift in 1979, using an overcoring technique. Jointing in the granitic rock mass and (or) the influence of nearby faults may account for the apparent variation in minimum-stress magnitude indicated by the hydrofracture data.
NASA Astrophysics Data System (ADS)
Weisbrod, Chad R.; Kaiser, Nathan K.; Syka, John E. P.; Early, Lee; Mullen, Christopher; Dunyach, Jean-Jacques; English, A. Michelle; Anderson, Lissa C.; Blakney, Greg T.; Shabanowitz, Jeffrey; Hendrickson, Christopher L.; Marshall, Alan G.; Hunt, Donald F.
2017-09-01
High resolution mass spectrometry is a key technology for in-depth protein characterization. High-field Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) enables high-level interrogation of intact proteins in the most detail to date. However, an appropriate complement of fragmentation technologies must be paired with FTMS to provide comprehensive sequence coverage, as well as characterization of sequence variants, and post-translational modifications. Here we describe the integration of front-end electron transfer dissociation (FETD) with a custom-built 21 tesla FT-ICR mass spectrometer, which yields unprecedented sequence coverage for proteins ranging from 2.8 to 29 kDa, without the need for extensive spectral averaging (e.g., 60% sequence coverage for apo-myoglobin with four averaged acquisitions). The system is equipped with a multipole storage device separate from the ETD reaction device, which allows accumulation of multiple ETD fragment ion fills. Consequently, an optimally large product ion population is accumulated prior to transfer to the ICR cell for mass analysis, which improves mass spectral signal-to-noise ratio, dynamic range, and scan rate. We find a linear relationship between protein molecular weight and minimum number of ETD reaction fills to achieve optimum sequence coverage, thereby enabling more efficient use of instrument data acquisition time. Finally, real-time scaling of the number of ETD reaction fills during method-based acquisition is shown, and the implications for LC-MS/MS top-down analysis are discussed.
Multi-decadal and seasonal variability of dust observations in West Greenland.
NASA Astrophysics Data System (ADS)
Bullard, Joanna E.; Mockford, Tom
2017-04-01
Since the early 1900s, expedition records from west Greenland have reported local dust storms. The Kangerlussuaq region, near the inland ice, is dry (mean annual precipitation <160 mm) with, on average, 150 snow-free days per year. The main local dust sources are active proglacial outwash plains, although reworking of loess deposits may also be important. This paper presents an analysis of 70 years of dust storm observations (1945-2015) based on WMO weather codes 6 (dust haze), 7 (raised dust or sand) and 9 (distant or past dust storm) and associated wind data. The 70-year average number of dust observation days is 5 per year, but this is variable, ranging from 0 observations in some years to 23 observations in 1985. Over the past 7 decades the number of dust days has increased from <30 in 1945-54 to >75 in 1995-2004 and 2005-2015. The seasonality of dust observations has remained consistent throughout most of the period. Dust days occur all year round but are most frequent in May-June and September-October and are associated with minimum snow cover and glacial meltwater-driven sediment supply to the outwash plains during spring and fall flood events. The wind regime is bimodal, dominated by katabatic winds from the northeast, which are strongest and most frequent during the winter months (Nov-Jan), with less frequent southwesterly winds generated by Atlantic storms mostly confined to spring (May, June). The southwesterly winds are those most likely to transport dust onto the Greenland ice sheet.
Cigarette Prices and Community Price Comparisons in US Military Retail Stores
Poston, Walker S.C.; Haddock, Christopher K.; Jahnke, Sara A.; Smith, Elizabeth; Malone, Ruth E.; Jitnarin, Nattinee
2016-01-01
BACKGROUND Tobacco pricing impacts use, yet military retailers sell discounted cigarettes. No systematic research has examined how military retail stores use internal community comparisons to set prices. We analyzed data obtained through a Freedom of Information Act request on community price comparisons used by military retail to set cigarette prices. METHODS Data on cigarette prices were obtained directly from military retailers (exchanges) from January 2013–March 2014. Complete pricing data was provided from exchanges on 114 military installations. RESULTS The average price for a pack of Marlboro cigarettes in military exchanges was $5.51, which was similar to the average lowest community price ($5.45; Mean Difference=−0.06; p=0.104) and almost a $1.00 lower than the average highest price ($6.44). Military retail prices were 2.1%, 6.2%, and 13.7% higher than the lowest, average, and highest community comparisons and 18.2% of exchange prices violated pricing instructions. There was a negative correlation (r = −.21, p = 0.02) between the number of community stores surveyed and exchange cigarette prices. CONCLUSIONS There was no significant difference between prices for cigarettes on military installations and the lowest average community comparison, and in some locations the prices violated DoD policy. US Marine Corps exchanges had the lowest prices, which is of concern given that the Marines also have the highest rates of tobacco use in the DoD. Given the relationship between tobacco product prices and demand, a common minimum (or floor) shelf price for tobacco products should be set for all exchanges and discount coupon redemptions should be prohibited. PMID:27553357
NASA Astrophysics Data System (ADS)
Verkhoglyadova, O. P.; Tsurutani, B. T.; Mannucci, A. J.; Mlynczak, M. G.; Hunt, L. A.; Runge, T.
2013-02-01
We study solar wind-ionosphere coupling through the late declining phase/solar minimum and geomagnetic minimum phases during the last solar cycle (SC23) - 2008 and 2009. This interval was characterized by sequences of high-speed solar wind streams (HSSs). The concomitant geomagnetic response consisted of moderate geomagnetic storms and high-intensity, long-duration continuous auroral activity (HILDCAA) events. The JPL Global Ionospheric Map (GIM) software and the GPS total electron content (TEC) database were used to calculate the vertical TEC (VTEC) and estimate daily averaged values in separate latitude and local time ranges. Our results show distinct low- and mid-latitude VTEC responses to HSSs during this interval, with the low-latitude daytime daily averaged values increasing by up to 33 TECU (annual average of ~20 TECU) near local noon (12:00 to 14:00 LT) in 2008. In 2009, during the minimum geomagnetic activity (MGA) interval, the response to HSSs was a maximum of ~30 TECU increases, with a slightly lower average value than in 2008. There was a weak nighttime ionospheric response to the HSSs. A well-studied solar cycle declining phase interval, 10-22 October 2003, was analyzed for comparative purposes, with daytime low-latitude VTEC peak values of up to ~58 TECU (event average of ~55 TECU). The ionospheric VTEC changes during 2008-2009 were similar but ~60% less intense on average. There is evidence of correlations between the filtered daily averaged VTEC data and both the Ap index and solar wind speed. We use the infrared NO and CO2 emission data obtained with SABER on TIMED as a proxy for the radiation balance of the thermosphere. It is shown that infrared emissions increase during HSS events, possibly due to increased energy input into the auroral region associated with HILDCAAs.
The 2008-2009 HSS intervals were ~85% less intense than the 2003 early declining phase event, with annual averages of daily infrared NO emission power of ~3.3 × 10^10 W and ~2.7 × 10^10 W in 2008 and 2009, respectively. The roles of disturbance dynamos caused by high-latitude winds (due to particle precipitation and Joule heating in the auroral zones) and of prompt penetrating electric fields (PPEFs) in the solar wind-ionosphere coupling during these intervals are discussed. A correlation between geoeffective interplanetary electric field components and HSS intervals is shown. Both PPEF and disturbance dynamo mechanisms could play important roles in solar wind-ionosphere coupling during prolonged (up to days) external driving within HILDCAA intervals.
Coverage-guaranteed sensor node deployment strategies for wireless sensor networks.
Fan, Gaojuan; Wang, Ruchuan; Huang, Haiping; Sun, Lijuan; Sha, Chao
2010-01-01
Deployment quality and cost are two conflicting aspects in wireless sensor networks. Random deployment, in which the monitored field is covered by randomly and uniformly deployed sensor nodes, is an appropriate approach for large-scale network applications. However, its successful application depends considerably on deployment quality, that is, on achieving the desired coverage with the minimum number of sensors. Currently, the number of sensors required to meet the desired coverage is derived from asymptotic analysis, which overestimates coverage in real applications and therefore fails to meet the desired deployment quality. In this paper, we first investigate this coverage overestimation and address the challenge of designing coverage-guaranteed deployment strategies. To overcome the problem, we propose two deployment strategies, namely, Expected-area Coverage Deployment (ECD) and BOundary Assistant Deployment (BOAD). The deployment quality of the two strategies is analyzed mathematically, and from this analysis a lower bound on the number of deployed sensor nodes that satisfies the desired deployment quality is given. We justify the correctness of our analysis through rigorous proof and validate the effectiveness of the two strategies through extensive simulation experiments. The simulation results show that both strategies alleviate coverage overestimation significantly. In addition, we evaluate the two proposed strategies in the context of a target detection application. The comparison results demonstrate that if the target appears at the boundary of the monitored region in a given random deployment, the average intrusion distance of BOAD is considerably shorter than that of ECD with the same desired deployment quality. In contrast, ECD performs better in terms of average intrusion distance when the intruder enters from the interior of the monitored region.
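As a rough illustration of the estimate the paper identifies as the source of overestimation, the standard boundary-free coverage formula can be inverted for a node count. This sketch is not the ECD/BOAD analysis; the function name and the example targets are illustrative only:

```python
import math

def min_nodes_naive(field_area, sensing_radius, target_coverage):
    # Boundary-free estimate: the fraction of the field covered by n
    # uniform random sensors is f(n) = 1 - (1 - pi*r^2/A)^n.  Solving
    # f(n) >= target for the smallest integer n ignores edge effects,
    # which is why it overestimates real coverage: sensing disks of
    # nodes placed near the boundary partly fall outside the field.
    p = math.pi * sensing_radius ** 2 / field_area
    return math.ceil(math.log(1.0 - target_coverage) / math.log(1.0 - p))

# e.g. a 10 x 10 field, unit sensing radius, 90% target coverage
n_required = min_nodes_naive(100.0, 1.0, 0.9)
```

A boundary-aware analysis such as ECD would replace pi*r^2 with the expected covered area of a node, which is smaller near the field edge, and hence return a larger (guaranteed) node count.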
The effect of future reduction in aerosol emissions on climate extremes in China
NASA Astrophysics Data System (ADS)
Wang, Zhili; Lin, Lei; Yang, Meilin; Xu, Yangyang
2016-11-01
This study investigates the effect of reduced aerosol emissions on projected temperature and precipitation extremes in China during 2031-2050 and 2081-2100, relative to present-day conditions, using daily output from Community Earth System Model ensemble simulations under the Representative Concentration Pathway (RCP) 8.5 scenario and under RCP8.5 with aerosol emissions fixed at 2005 levels (RCP8.5_FixA). The reduced aerosol emissions of RCP8.5 magnify the warming effect due to greenhouse gases (GHG) and lead to significant increases in temperature extremes, such as the maximum of daily maximum temperature (TXx), minimum of daily minimum temperature (TNn), and tropical nights (TR), and in precipitation extremes, such as the maximum 5-day precipitation amount, number of heavy precipitation days, and annual total precipitation from days >95th percentile, in China. The projected TXx, TNn, and TR averaged over China increase by 1.2 ± 0.2 °C (4.4 ± 0.2 °C), 1.3 ± 0.2 °C (4.8 ± 0.2 °C), and 8.2 ± 1.2 (30.9 ± 1.4) days, respectively, during 2031-2050 (2081-2100) under the RCP8.5_FixA scenario, whereas the corresponding values are 1.6 ± 0.1 °C (5.3 ± 0.2 °C), 1.8 ± 0.2 °C (5.6 ± 0.2 °C), and 11.9 ± 0.9 (38.4 ± 1.0) days under the RCP8.5 scenario. Nationally averaged increases in all of the extreme precipitation indices above due to the aerosol reduction account for more than 30% of the extreme precipitation increases under the RCP8.5 scenario. Moreover, the aerosol reduction leads to decreases in frost days and consecutive dry days averaged over China. There are large regional differences in the changes of climate extremes caused by the aerosol reduction. When normalized by global mean surface temperature change, aerosols have larger effects on temperature and precipitation extremes over China than GHG.
Uncertainty in LiDAR derived Canopy Height Models in three unique forest ecosystems
NASA Astrophysics Data System (ADS)
Goulden, T.; Leisso, N.; Scholl, V.; Hass, B.
2016-12-01
The National Ecological Observatory Network (NEON) is a continental-scale ecological observation platform designed to collect and disseminate data that contribute to understanding and forecasting the impacts of climate change, land use change, and invasive species on ecology. NEON will collect in-situ and airborne data over 81 sites across the US, including Alaska, Hawaii, and Puerto Rico. The Airborne Observation Platform (AOP) group within the NEON project operates a payload suite that includes a waveform/discrete LiDAR, an imaging spectrometer (NIS), and a high-resolution RGB camera. One of the products derived from the discrete LiDAR is a canopy height model (CHM) raster developed at 1 m spatial resolution. Currently, it is hypothesized that differencing annually acquired CHM products allows identification of tree growth at in-situ distributed plots throughout the NEON sites. To test this hypothesis, the precision of the CHM product was determined through a specialized flight plan that independently repeated up to 20 observations of the same area with varying view geometries. The flight plan was acquired at three NEON sites, each with a unique forest type: 1) San Joaquin Experimental Range (SJER; open woodland dominated by oaks), 2) Soaproot Saddle (SOAP; mixed conifer-deciduous forest), and 3) Oak Ridge National Laboratory (ORNL; oak-hickory and pine forest). A CHM was developed for each flight line at each site, and the overlap area was used to empirically estimate a site-specific precision of the CHM. The average cell-by-cell CHM precision at SJER, SOAP, and ORNL was 1.34 m, 4.24 m, and 0.72 m, respectively. Given the average growth rate of the dominant species at each site and the average CHM uncertainty, the minimum time interval required between LiDAR acquisitions to confidently conclude that growth had occurred at the plot scale was estimated to be between one and four years.
The minimum time interval was shown to depend primarily on the CHM uncertainty and on the number of cells within a plot that contain vegetation. This indicates that users of NEON data should not expect that changes in canopy height can be confidently identified between annual AOP acquisitions for all areas of NEON sites.
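One plausible way such a minimum interval could be estimated from the reported cell-level precisions is sketched below, assuming independent cell errors (so plot precision improves as 1/sqrt of the vegetated cell count) and a z-score detection criterion; the function, its parameters, and the example growth rate are illustrative, not NEON's actual method:

```python
import math

def min_detection_interval(cell_sigma_m, n_veg_cells, growth_m_per_yr, z=1.96):
    # Plot-scale CHM precision, assuming independent cell-level errors,
    # improves as 1/sqrt(number of vegetated cells averaged over).
    plot_sigma = cell_sigma_m / math.sqrt(n_veg_cells)
    # Years until expected growth exceeds the 95% confidence half-width.
    return z * plot_sigma / growth_m_per_yr

# e.g. SOAP-like cell precision (4.24 m) with 400 vegetated cells per
# plot and an assumed 0.3 m/yr growth rate (both are made-up numbers)
years_needed = min_detection_interval(4.24, 400, 0.3)
```

Under this model, sites with low CHM uncertainty (such as ORNL at 0.72 m) can resolve plot-scale growth within a year, while noisier sites require multi-year intervals, consistent with the one-to-four-year range reported above.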
Noncontact thermophysical property measurement by levitation of a thin liquid disk.
Lee, Sungho; Ohsaka, Kenichi; Rednikov, Alexei; Sadhal, Satwindar Singh
2006-09-01
The purpose of the current research program is to develop techniques for noncontact measurement of the thermophysical properties of highly viscous liquids. The application would be to undercooled liquids, which remain liquid even below the freezing point when suspended without a container. The approach used here consists of carrying out thermocapillary flow and temperature measurements in a horizontally levitated, laser-heated thin glycerin disk. In a levitated state, the disk is flattened by an intense acoustic field. Such a disk has the advantage of a relatively low gravitational potential over its thickness, thus mitigating buoyancy effects and helping isolate the thermocapillary-driven flows. For the purpose of predicting thermal properties from these measurements, it is necessary to develop a theoretical model of the thermal processes. Such a model has been developed; on the basis of the observed shape, the thickness is taken to be a minimum at the center, with a gentle parabolic profile on both the top and bottom surfaces. This minimum thickness is much smaller than the disk radius, so the thickness-to-radius ratio is much less than unity. The disk is heated by a laser beam directed normal to its edge. A general three-dimensional momentum equation is transformed into a two-variable vorticity equation. For this highly viscous liquid, a few millimeters in size, Stokes equations adequately describe the flow. Additional approximations are made by considering average flow properties over the disk thickness, in a manner similar to lubrication theory. In the same way, the three-dimensional energy equation is averaged over the disk thickness. With a convection boundary condition at the surfaces, integrating the general three-dimensional energy equation yields an averaged two-dimensional energy equation with convection terms, conduction terms, and additional source terms corresponding to a Biot number.
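A schematic form of such a thickness-averaged energy balance is sketched below; the symbols and exact terms are assumptions for illustration, not taken from the paper (T̄ is the depth-averaged temperature, H the local thickness, h the surface heat-transfer coefficient, and q_laser the absorbed laser flux):

```latex
\bar{u}\,\frac{\partial \bar{T}}{\partial r}
+ \frac{\bar{v}}{r}\,\frac{\partial \bar{T}}{\partial \theta}
= \alpha \left[ \frac{1}{r}\frac{\partial}{\partial r}\!\left( r\,\frac{\partial \bar{T}}{\partial r} \right)
+ \frac{1}{r^{2}}\frac{\partial^{2} \bar{T}}{\partial \theta^{2}} \right]
- \frac{2h}{\rho c_{p} H}\left( \bar{T} - T_{\infty} \right)
+ \frac{q_{\mathrm{laser}}}{\rho c_{p} H}
```

After nondimensionalization, the surface-loss term is the one that carries the Biot number Bi = hH/k mentioned in the text.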
A finite-difference numerical approach is used to solve these steady-state governing equations in the cylindrical coordinate system. The calculations yield the temperature distribution and the thermally driven flow field. These results have been used to formulate a model that, in conjunction with experiments, has enabled the development of a method for the noncontact thermophysical property measurement of liquids.
Song, Yilin; Yang, Huixia
2014-08-01
To compare the clinical use of a continuous glucose monitoring system (CGMS) and self-monitoring of blood glucose (SMBG) for monitoring the blood glucose levels of patients with gestational diabetes mellitus (GDM) or type 2 diabetes mellitus (DM) complicated with pregnancy. A total of 99 patients with GDM (n = 70) or type 2 DM complicated with pregnancy (n = 29), either hospitalized or attending the clinic of Peking University First Hospital, were recruited from August 2012 to April 2013. The CGMS was used to monitor their blood glucose levels over a 72-hour period, while SMBG was also performed seven times daily. The correlations between these blood glucose levels and glycosylated hemoglobin (HbA1c) levels were analyzed by comparing the average, maximum, and minimum blood glucose values and the times at which these extreme values appeared under the two monitoring methods; the amount of insulin used was recorded as well. (1) The maximum, minimum, and average blood glucose values in the GDM group were (8.7 ± 1.2), (4.5 ± 0.6), and (6.3 ± 0.6) mmol/L by SMBG vs. (10.1 ± 1.7), (3.1 ± 0.7), and (6.0 ± 0.6) mmol/L by CGMS. The corresponding values in the DM group were (10.1 ± 2.2), (4.5 ± 1.0), and (6.9 ± 1.1) mmol/L by SMBG vs. (12.2 ± 2.6), (2.8 ± 0.8), and (6.6 ± 1.1) mmol/L by CGMS. Between the two methods, the maximum and average values of the two groups showed significant differences (P < 0.01), while the minimum values showed no significant differences (P > 0.05). (2) In the GDM group, the average blood glucose values of CGMS and SMBG were significantly correlated (r = 0.864, P < 0.01), as were the maximum values (r = 0.734, P < 0.01); no correlation was found between the minimum values of CGMS and SMBG (r = 0.138, P > 0.05). In the DM group, the average values of the two methods were significantly correlated (r = 0.962, P < 0.01), as were the maximum values (r = 0.831, P < 0.01) and the minimum values (r = 0.460, P < 0.05).
(3) There was significant correlation between the average value of CGMS and the HbA1c level (r = 0.400, P < 0.01), and the average value of SMBG was also correlated with the HbA1c level (r = 0.031, P < 0.05) in the GDM group; the average values of CGMS (r = 0.695, P < 0.01) and SMBG (r = 0.673, P < 0.01) were both significantly correlated with the HbA1c level in the DM group. (4) In the GDM group, 37% (26/70) of the minimum SMBG values appeared 30 minutes before breakfast, while 34% (24/70) appeared 30 minutes before lunch; 86% (60/70) of the maximum SMBG values were evenly distributed 2 hours after each of the three meals. In the DM group, 41% (12/29) of the minimum SMBG values appeared 30 minutes before lunch, while 21% (6/29) and 14% (4/29) appeared 30 minutes before breakfast and dinner, respectively; about 30% of the maximum SMBG values appeared 2 hours after each of the three meals. (5) In the GDM group, 23% (16/70) of the minimum CGMS values occurred between 0:00 and 2:59 am, and most of the other minimum values were evenly distributed over the rest of the day, except for 3% (2/70) found during 18:00-20:59 pm. 43% (30/70) of the maximum CGMS values appeared during 6:00-8:59 am, only 1% (1/70) and 3% (2/70) appeared during 0:00-2:59 am and 21:00-23:59 pm, and the rest were evenly distributed over the other times of the day. In the DM group, 34% (10/29) of the minimum CGMS values were found during 0:00-2:59 am, and 14% (4/29) appeared during 9:00-11:59 am and during 15:00-17:59 pm; 45% (13/29) of the maximum CGMS values appeared during 6:00-8:59 am, none were found during 21:00-23:59 pm, 0:00-2:59 am, or 3:00-5:59 am, and the rest were evenly distributed over the other times of the day. (6) 64% (45/70) of the patients in the GDM group did not require insulin treatment, while 36% (25/70) did.
Of the patients who received insulin treatment, 64% (16/25) adjusted the insulin dosage according to their blood glucose levels after CGMS. In the DM group, 14% (4/29) did not receive insulin treatment; of the others who did (86%, 25/29), 60% (15/25) adjusted the insulin dosage according to their blood glucose levels after CGMS. Both CGMS and SMBG could correctly reflect patients' blood glucose levels. It was more difficult to control blood glucose in patients with type 2 DM complicated with pregnancy than in GDM patients. Compared with SMBG, CGMS could detect postprandial hyperglycemia and nocturnal hypoglycemia more effectively.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage, yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the demand elasticity estimated using teens: employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
Danish A.I. field data with sexed semen.
Borchersen, S; Peacock, M
2009-01-01
The objective of this study was to compare conception rates, non-return rates and sex ratios of sexed and conventional semen from the same sires in commercial dairy herds in Denmark. The semen was produced from three bulls from each of the three major dairy breeds in Denmark: Holstein, Jersey and Danish Red Dairy Breed (nine bulls in total), in order to answer questions about breed differences in field results. AI was performed by trained technicians using a minimum of 150 doses of sorted sperm and 50 control doses from each bull. During the trial, a total of 2087 doses were used in 63 herds. The trial showed that the conception rate using sorted semen was 5 percentage points lower than with conventional doses for Danish Reds, 7 percentage points lower for Jerseys, and 12 percentage points lower for Holsteins. Translating this into non-return rate revealed differences of 10-20 percentage points among bulls. These differences are thought to be a good indicator of what to expect from commercial use of sexed semen. The sex ratios varied from 89% to 93% female calves among breeds, which on average is consistent with the theoretical sex ratio of 93% females, considering the low number of inseminations.
VehiHealth: An Emergency Routing Protocol for Vehicular Ad Hoc Network to Support Healthcare System.
Bhoi, S K; Khilar, P M
2016-03-01
Survival of a patient depends on effective data communication in the healthcare system. In this paper, an emergency routing protocol for Vehicular Ad hoc Networks (VANETs) is proposed to quickly forward current patient status information from the ambulance to the hospital to support pre-medical treatment. Because the ambulance takes time to reach the hospital, the ambulance doctor can provide immediate treatment to the patient in an emergency by sending patient status information to the hospital through the vehicles using vehicular communication. The experienced doctors at the hospital then respond by quickly sending treatment information back to the ambulance. In this protocol, data are forwarded along the path with the least link breakage between vehicles. This is done by calculating an intersection value I_value for the neighboring intersections using current traffic information; data are then forwarded through the intersection with the minimum I_value. Simulation results show that VehiHealth performs better than the P-GEDIR, GyTAR, A-STAR and GSR routing protocols in terms of average end-to-end delay, number of link breakages, path length, and average response time.
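The greedy next-intersection choice described above can be sketched as follows; the abstract does not specify how I_value is computed from traffic information, so the values and intersection names here are placeholders:

```python
def next_intersection(candidate_ivalues):
    # candidate_ivalues: mapping of neighboring intersection id -> I_value,
    # where a lower I_value indicates a path with less expected link
    # breakage given current traffic.  Forward toward the minimum.
    return min(candidate_ivalues, key=candidate_ivalues.get)

# Placeholder I_values for three neighboring intersections
choice = next_intersection({"I1": 0.7, "I2": 0.3, "I3": 0.9})
```

Repeating this choice hop by hop yields the route along which the patient status packets are forwarded toward the hospital.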
Self-avoiding walks on scale-free networks
NASA Astrophysics Data System (ADS)
Herrero, Carlos P.
2005-01-01
Several kinds of walks on complex networks are currently used to analyze search and navigation in different systems. Many analytical and computational results are known for random walks on such networks. Self-avoiding walks (SAWs) are expected to be more suitable than unrestricted random walks for exploring various kinds of real-life networks. Here we study long-range properties of random SAWs on scale-free networks, characterized by a degree distribution P(k) ∼ k^(−γ). In the limit of large networks (system size N → ∞), the average number s_n of SAWs starting from a generic site increases as μ^n, with μ = ⟨k²⟩/⟨k⟩ − 1.
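For small graphs, s_n can be obtained by direct enumeration rather than the paper's analytical approach; a minimal sketch:

```python
def count_saws(adj, start, n):
    # Count self-avoiding walks of n steps from `start` by depth-first
    # enumeration; cost grows exponentially, so this suits small graphs.
    def extend(node, visited, steps):
        if steps == 0:
            return 1
        return sum(extend(nb, visited | {nb}, steps - 1)
                   for nb in adj[node] if nb not in visited)
    return extend(start, {start}, n)

# Triangle graph: from any site there are two 1-step and two 2-step
# SAWs, and none of 3 steps (the walk cannot revisit its start).
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

Averaging such counts over start sites of a large random graph is what the growth law s_n ∼ μ^n summarizes.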
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N−1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime, N_tot remains of the order of N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability of a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation, by finding both the rate function and the leading pre-exponential factor.
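A toy N = 2 instance of this cost function can be minimized by brute force over the circle; this only illustrates the problem setup (quadratic plus linear form on a sphere), not the replica calculation, and all parameter values are illustrative:

```python
import math

def circle_minimum(a, b, h1, h2, steps=100000):
    # Minimize E(x) = (a*x1^2 + b*x2^2)/2 - h1*x1 - h2*x2 on the unit
    # circle, parametrized as x = (cos t, sin t) and scanned uniformly.
    def energy(t):
        c, s = math.cos(t), math.sin(t)
        return 0.5 * (a * c * c + b * s * s) - h1 * c - h2 * s
    return min(energy(2 * math.pi * i / steps) for i in range(steps))

# Zero field: the minimum sits along the softer quadratic direction,
# giving E = a/2 when a < b
e0 = circle_minimum(1.0, 2.0, 0.0, 0.0)
```

With a random symmetric matrix in place of the diagonal (a, b) and random (h1, h2), this is exactly the ground-state problem whose large-N statistics the paper analyzes.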
Seven-Day Low Streamflows in the United States, 1940-2014
This map shows percentage changes in the minimum annual rate of water carried by rivers and streams across the country, based on the long-term rate of change from 1940 to 2014. Minimum streamflow is based on the consecutive seven-day period with the lowest average flow during a given year. Blue triangles represent an increase in low streamflow volumes, and brown triangles represent a decrease. Streamflow data were collected by the U.S. Geological Survey. For more information: www.epa.gov/climatechange/science/indicators
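The seven-day low-flow statistic described here can be computed from a year of daily flows with a simple sliding window (a sketch, assuming a plain list of daily values in consistent units):

```python
def seven_day_low_flow(daily_flows):
    # Lowest average flow over any 7 consecutive days in the record
    # (requires at least 7 daily values), via a sliding-window sum.
    window = sum(daily_flows[:7])
    best = window
    for i in range(7, len(daily_flows)):
        window += daily_flows[i] - daily_flows[i - 7]
        best = min(best, window)
    return best / 7.0

# A wet year with a single week-long dry spell at 3 units/day
flows = [10.0] * 150 + [3.0] * 7 + [10.0] * 208
```

The long-term trend mapped here would then come from fitting the annual values of this statistic against year.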
A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts
2015-04-30
fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC , as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. 
Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. 
However, because collecting depth-integrated samples over more transits at each vertical is generally easier and faster than collecting them at more verticals, adding a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
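The error model described above can be sketched numerically. The 1/sqrt(n) form for the transit-averaging term is an assumption, though it is consistent with the reported ~30 percent error reduction per doubling of transits (1/sqrt(2) ≈ 0.71); the quadrature sum is as stated in the text:

```python
import math

def transit_averaged_error(single_transit_error, n_transits):
    # Assuming independent samples, the time-averaging error falls as
    # 1/sqrt(n); doubling the transits then reduces it by a factor of
    # 1/sqrt(2) ~ 0.71, matching the reported ~30 percent reduction.
    return single_transit_error / math.sqrt(n_transits)

def total_uncertainty(spatial_error, temporal_error):
    # Independent spatial and temporal errors summed in quadrature give
    # the total uncertainty of an EDI/EWI concentration measurement.
    return math.sqrt(spatial_error ** 2 + temporal_error ** 2)
```

Because adding a vertical shrinks both terms while adding a transit shrinks only the temporal one, this simple model reproduces the text's conclusion that new verticals are slightly more effective per sample collected.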
Ho, Kai-Yu; Keyak, Joyce H; Powers, Christopher M
2014-01-03
Elevated bone principal strain (an indicator of potential bone injury) resulting from reduced cartilage thickness has been suggested to contribute to patellofemoral symptoms. However, research linking patella bone strain, articular cartilage thickness, and patellofemoral pain (PFP) remains limited. The primary purpose was to determine whether females with PFP exhibit elevated patella bone strain when compared to pain-free controls. A secondary objective was to determine the influence of patella cartilage thickness on patella bone strain. Ten females with PFP and 10 gender-, age-, and activity-matched pain-free controls participated. Patella bone strain fields were quantified utilizing subject-specific finite element (FE) models of the patellofemoral joint (PFJ). Input parameters for the FE model included (1) PFJ geometry, (2) elastic moduli of the patella bone, (3) weight-bearing PFJ kinematics, and (4) quadriceps muscle forces. Using quasi-static simulations, peak and average minimum principal strains as well as peak and average maximum principal strains were quantified. Cartilage thickness was quantified by computing the perpendicular distance between opposing voxels defining the cartilage edges on axial plane magnetic resonance images. Compared to the pain-free controls, individuals with PFP exhibited increased peak and average minimum and maximum principal strain magnitudes in the patella. Additionally, patella cartilage thickness was negatively associated with peak minimum principal patella strain and peak maximum principal patella strain. The elevated bone strain magnitudes resulting from reduced cartilage thickness may contribute to patellofemoral symptoms and bone injury in persons with PFP. © 2013 Published by Elsevier Ltd.
Predictive model for disinfection by-product in Alexandria drinking water, northern west of Egypt.
Abdullah, Ali M; Hussona, Salah El-dien
2013-10-01
Chlorine has been used as a disinfectant since the early stages of water treatment. Disinfection of drinking water reduces the risk of pathogenic infection but may pose a chemical threat to human health due to disinfection residues and their by-products (DBPs) when organic and inorganic precursors are present in the water. In the last two decades, many modeling attempts have been made to predict the occurrence of DBPs in drinking water; models have been developed based on data generated in laboratory-scale and field-scale investigations. The objective of this paper is to develop a predictive model for DBP formation in the Alexandria governorate, in the northern west of Egypt, based on field-scale investigations as well as laboratory-controlled experimentation. The present study showed that the correlation coefficient between predicted and measured trihalomethanes (THMs) was R² = 0.88, with a minimum deviation between predicted and measured THMs of 0.8%, a maximum deviation of 89.3%, and an average deviation of 17.8%. The correlation coefficient between predicted and measured dichloroacetic acid (DCAA) was R² = 0.98, with a minimum deviation of 1.3%, a maximum deviation of 47.2%, and an average deviation of 16.6%. In addition, the correlation coefficient between predicted and measured trichloroacetic acid (TCAA) was R² = 0.98, with a minimum deviation of 4.9%, a maximum deviation of 43.0%, and an average deviation of 16.0%.
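The deviation percentages reported above can be computed from paired predicted/measured values as follows; this is a sketch that assumes "deviation" means absolute percent error, and the example values are hypothetical, not the study's data:

```python
def deviation_stats(predicted, measured):
    # Absolute percent deviation of each prediction from its measurement,
    # reduced to the minimum, maximum, and average reported in the text.
    devs = [abs(p - m) / m * 100.0 for p, m in zip(predicted, measured)]
    return min(devs), max(devs), sum(devs) / len(devs)

# Hypothetical paired values for illustration only
lo, hi, avg = deviation_stats([110.0, 95.0], [100.0, 100.0])
```

Applied to the model's THM, DCAA, and TCAA predictions against the field measurements, this yields the minimum/maximum/average deviation triplets quoted in the abstract.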
Code of Federal Regulations, 2010 CFR
2010-01-01
... Fragments (Pitted Style Only) Not more than 1.3% average by count Major Stems Not more than 3 HEVM Not more... per 300 grams Major Stems Not more than 3 HEVM Not more than 2 units per sample Broken Pieces and End... Fragments Average of not more than 1 by count per 300 grams Major Stems Not more than 3 HEVM Not more than 2...
This EnviroAtlas dataset contains data on the mean biological nitrogen fixation in natural/semi-natural ecosystems per 12-digit Hydrologic Unit (HUC) in 2006. Biological N fixation (BNF) in natural/semi-natural ecosystems was estimated using a correlation with actual evapotranspiration (AET). This correlation is based on a global meta-analysis of BNF in natural/semi-natural ecosystems (Cleveland et al. 1999). AET estimates for 2006 were calculated using a regression equation describing the correlation of AET with climate (average annual daily temperature, average annual minimum daily temperature, average annual maximum daily temperature, and annual precipitation) and land use/land cover variables in the conterminous US (Sanford and Selnick 2013). Data describing annual average minimum and maximum daily temperatures and total precipitation for 2006 were acquired from the PRISM climate dataset (http://prism.oregonstate.edu). Average annual climate data were then calculated for individual 12-digit USGS Hydrologic Unit Codes (HUC12s; http://water.usgs.gov/GIS/huc.html; 22 March 2011 release) using the Zonal Statistics tool in ArcMap 10.0. AET for individual HUC12s was estimated using equations described in Sanford and Selnick (2013). BNF in natural/semi-natural ecosystems within individual HUC12s was modeled with an equation describing the statistical relationship between BNF (kg N ha-1 yr-1) and actual evapotranspiration (AET; cm yr-1) and scaled to the proportion
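The two-step HUC12 pipeline described above (AET estimated from climate, then BNF estimated from AET and scaled to the natural/semi-natural fraction of the watershed) can be sketched as follows. The power-law form and the coefficients below are illustrative placeholders only, not the fitted values from Cleveland et al. (1999) or Sanford and Selnick (2013):

```python
def bnf_from_aet(aet_cm_yr, a=0.1, b=1.5):
    """BNF (kg N ha-1 yr-1) from AET (cm yr-1) via a power law
    BNF = a * AET**b. Coefficients a and b are hypothetical."""
    return a * aet_cm_yr ** b

def huc12_bnf(aet_cm_yr, natural_fraction):
    """Scale ecosystem BNF to the natural/semi-natural share of a HUC12."""
    return bnf_from_aet(aet_cm_yr) * natural_fraction
```

In the actual dataset, AET itself comes from a climate/land-cover regression evaluated per HUC12 using zonal statistics on PRISM grids; the sketch starts from AET for brevity.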
Climate variables as predictors for seasonal forecast of dengue occurrence in Chennai, Tamil Nadu
NASA Astrophysics Data System (ADS)
Subash Kumar, D. D.; Andimuthu, R.
2013-12-01
Background: Dengue is a recently emerging vector-borne disease in Chennai; as per a 2011 WHO report, dengue is one of eight climate-sensitive diseases of this century. Objective: An attempt has therefore been made to explore the influence of climate parameters on dengue occurrence and to use them for forecasting. Methodology: Time series analysis was applied to predict the number of dengue cases in Chennai, the metropolitan capital of Tamil Nadu, India. Cross correlation of the climate variables with dengue cases revealed that the most influential parameters were monthly relative humidity, minimum temperature at a 4-month lag, and rainfall at a 1-month lag (Table 1). However, because relative humidity and rainfall were highly intercorrelated, only rainfall at a 1-month lag was used for model development. Autoregressive Integrated Moving Average (ARIMA) models were applied to forecast the occurrence of dengue. Results and Discussion: The best-fit model was ARIMA(1,0,1). Monthly minimum temperature at a 4-month lag (β = 3.612, p = 0.02) and rainfall at a 1-month lag (β = 0.032, p = 0.017) were significantly associated with dengue occurrence. Mean relative humidity had a significant positive correlation at the 99% confidence level, but its lagged effect was not prominent. The model-predicted dengue cases showed a significantly high correlation of 0.814 (Figure 1) with the observed cases; the RMSE of the model was 18.564 and the MAE was 12.114. The model is limited by the scarcity of the dataset, and socioeconomic conditions and a population offset still need to be incorporated for more effective results. Conclusion: Changes in climatic parameters clearly influence the number of dengue occurrences in Chennai.
Climate variables, in particular rises in minimum temperature and rainfall, can therefore be used for seasonal forecasting of dengue at the city level. Table 1. Cross correlation of climate variables with dengue cases in Chennai. ** p < 0.01, * p < 0.05
Quantifying the Impact of Additional Laboratory Tests on the Quality of a Geomechanical Model
NASA Astrophysics Data System (ADS)
Fillion, Marie-Hélène; Hadjigeorgiou, John
2017-05-01
In an open-pit mine operation, the design of safe and economically viable slopes can be significantly influenced by the quality and quantity of the collected geomechanical data. In several mining jurisdictions, codes and standards are available for reporting exploration data, but similar codes or guidelines are not formally available or enforced for geotechnical design. Current recommendations suggest a target level of confidence in the rock mass properties used for slope design. As these guidelines are qualitative and somewhat subjective, questions arise regarding the minimum number of tests to perform in order to reach the proposed level of confidence. This paper investigates the impact of defining a priori the required number of laboratory tests to conduct on rock core samples, based on the geomechanical database of an operating open-pit mine in South Africa. To illustrate the process, the focus is on uniaxial compressive strength properties. Available strength data for two project stages were analysed using small-sampling theory and the confidence interval approach. The results showed that the number of specimens was too low to obtain a reliable strength value for some geotechnical domains, even when more specimens than the minimum proposed by the ISRM suggested methods were tested. Furthermore, the testing sequence used has an impact on the minimum number of specimens required. Current best practice cannot capture all possibilities regarding the geomechanical property distributions, and there is a demonstrated need for a method that determines the minimum number of specimens required while minimising the influence of the testing sequence.
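One way to operationalize the confidence-interval approach mentioned above is to ask for the smallest n at which the t-based interval half-width falls within a target fraction of the mean, using a pilot sample's spread. A sketch under those assumptions (not the paper's exact procedure):

```python
import numpy as np
from scipy import stats

def min_specimens(sample, rel_precision=0.10, confidence=0.95, n_max=500):
    """Smallest n whose t-based confidence-interval half-width,
    t * s / sqrt(n), falls within rel_precision of the pilot-sample
    mean. Illustrative sketch of a confidence-interval criterion."""
    mean, sd = np.mean(sample), np.std(sample, ddof=1)
    target = rel_precision * mean
    for n in range(2, n_max + 1):
        t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        if t_crit * sd / np.sqrt(n) <= target:
            return n
    return None
```

Because the answer depends on the pilot sample's standard deviation, a different testing sequence (and hence a different pilot spread) yields a different minimum n, which is exactly the sensitivity the paper highlights.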
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
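A hedged sketch of one way a "minimum number of cost drivers" search could look: greedy forward selection over candidate drivers using a least-squares fit. This is an illustration only, not the task's actual machine-learning method or the COCOMO variable set:

```python
import numpy as np

def forward_select(X, y, tol=0.01):
    """Greedy forward selection: repeatedly add the cost driver (column
    of X) that most reduces mean-squared error of a least-squares fit;
    stop when the relative improvement drops below tol."""
    n, p = X.shape
    chosen, best_mse = [], float(np.var(y))
    while len(chosen) < p and best_mse > 1e-12:
        trial = []
        for j in range(p):
            if j in chosen:
                continue
            cols = chosen + [j]
            beta = np.linalg.lstsq(X[:, cols], y, rcond=None)[0]
            mse = float(np.mean((y - X[:, cols] @ beta) ** 2))
            trial.append((mse, j))
        mse, j = min(trial)
        if best_mse - mse < tol * best_mse:
            break
        chosen.append(j)
        best_mse = mse
    return chosen
```

The number of selected columns is a data-driven estimate of how many drivers a domain-specific cost model actually needs; estimation uncertainty could then be reported from the residual spread of the reduced model.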
Percolation flux and Transport velocity in the unsaturated zone, Yucca Mountain, Nevada
Yang, I.C.
2002-01-01
The percolation flux for borehole USW UZ-14 was calculated from 14C residence times of pore water and the water content of cores measured in the laboratory. Transport velocity is calculated from the depth interval between two points divided by the difference in 14C residence times. Two methods were used to calculate the flux and velocity. The first method uses the 14C data and cumulative water content data directly in the incremental intervals in the Paintbrush nonwelded unit and the Topopah Spring welded unit. The second method uses the regression relation for 14C data and cumulative water content data for the entire Paintbrush nonwelded unit and the Topopah Spring Tuff/Topopah Spring welded unit. Using the first method, for the Paintbrush nonwelded unit in borehole USW UZ-14, percolation flux ranges from 2.3 to 41.0 mm/a and transport velocity ranges from 1.2 to 40.6 cm/a. For the Topopah Spring welded unit, percolation flux ranges from 0.9 to 5.8 mm/a and transport velocity from 1.4 to 7.3 cm/a in the 8 incremental intervals calculated. Using the second method, average percolation flux in the Paintbrush nonwelded unit for 6 boreholes ranges from 0.9 to 4.0 mm/a at the 95% confidence level, and average transport velocity ranges from 0.6 to 2.6 cm/a. For the Topopah Spring welded unit and Topopah Spring Tuff, average percolation flux in 5 boreholes ranges from 1.3 to 3.2 mm/a, and average transport velocity ranges from 1.6 to 4.0 cm/a. Both the average percolation flux and average transport velocity in the PTn are smaller than in the TS/TSw. However, the average minimum and average maximum values for the percolation flux in the TS/TSw are within the PTn average range. Therefore, differences in the percolation flux in the two units are not significant. 
On the other hand, average, average minimum, and average maximum transport velocities in the TS/TSw unit are all larger than the PTn values, implying a larger transport velocity for the TS/TSw although there is a small overlap.
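The two quantities can be sketched directly from their definitions above: transport velocity as the depth interval divided by the difference in 14C residence times, and, in a simplified form of the first method, percolation flux as velocity times volumetric water content. Helper names and units are illustrative:

```python
def transport_velocity(depth_upper_m, depth_lower_m, age_upper_a, age_lower_a):
    """Transport velocity (cm/a) between two sampling points: depth
    interval divided by the difference in 14C residence times (a)."""
    return (depth_lower_m - depth_upper_m) * 100.0 / (age_lower_a - age_upper_a)

def percolation_flux(velocity_cm_a, water_content):
    """Percolation flux (mm/a) as transport velocity times volumetric
    water content, a simplified sketch of the first method above."""
    return velocity_cm_a * 10.0 * water_content
```

For example, two points 10 m apart whose pore waters differ by 1000 years in 14C residence time give a transport velocity of 1 cm/a; at a volumetric water content of 0.15, this corresponds to a percolation flux of 1.5 mm/a, within the ranges reported above.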
On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
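The identity described above, that E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum, can be checked numerically; a Monte Carlo sketch for n iid Geometric(p) variables (support 1, 2, ...) against exponentials with the same mean 1/p:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, trials = 5, 0.3, 200_000
q = 1.0 - p

# Count how many of the n geometric draws tie the row minimum.
samples = rng.geometric(p, size=(trials, n))
ties = (samples == samples.min(axis=1, keepdims=True)).sum(axis=1)
mean_ties = ties.mean()

# Closed forms: the minimum of n Geometric(p) is Geometric(1 - q^n),
# so E(X_(1)) = 1/(1 - q^n); the minimum of n Exponential(rate p)
# variables has E(Y_(1)) = 1/(n p).
ratio = (1.0 / (1.0 - q ** n)) / (1.0 / (n * p))
```

For n = 5 and p = 0.3 the closed-form ratio is np/(1 - q^n) ≈ 1.803, and the simulated mean number of ties converges to the same value.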
Comprehensive Assessment of Marine Coatings in the Laboratory and Field
2015-03-27
basal plate morphology and minimum size requirements. Only barnacles occurring at least 5 mm from the edges of the coupon were tested. [Figure 7 residue: average fouling coverage by category, including incipient fouling, calcareous polychaetes, sedimentary polychaetes, solitary tunicates, sponges, and scuzz.]
40 CFR 63.9890 - What emission limitations must I meet?
Code of Federal Regulations, 2011 CFR
2011-07-01
... each emission limit in Table 1 to this subpart that applies to you. (b) For each wet scrubber applied... average pressure drop and scrubber liquid flow rate at or above the minimum level established during the...
On the effective turbulence driving mode of molecular clouds formed in disc galaxies
NASA Astrophysics Data System (ADS)
Jin, Keitaro; Salim, Diane M.; Federrath, Christoph; Tasker, Elizabeth J.; Habe, Asao; Kainulainen, Jouni T.
2017-07-01
We determine the physical properties and turbulence driving mode of molecular clouds formed in numerical simulations of a Milky Way-type disc galaxy with parsec-scale resolution. The clouds form through gravitational fragmentation of the gas, leading to average values for mass, radii and velocity dispersion in good agreement with observations of Milky Way clouds. The driving parameter (b) for the turbulence within each cloud is characterized by the ratio of the density contrast (σ_{ρ/ρ0}) to the average Mach number (M) within the cloud, b = σ_{ρ/ρ0}/M. As shown in previous works, b ≈ 1/3 indicates solenoidal (divergence-free) driving and b ≈ 1 indicates compressive (curl-free) driving. We find that the average b value of all the clouds formed in the simulations has a lower limit of b > 0.2. Importantly, we find that b has a broad distribution, covering values from purely solenoidal to purely compressive driving. Tracking the evolution of individual clouds reveals that the b value for each cloud does not vary significantly over their lifetime. Finally, we perform a resolution study with minimum cell sizes of 8, 4, 2 and 1 pc and find that the average b value increases with increasing resolution. Therefore, we conclude that our measured b values are strictly lower limits and that a resolution better than 1 pc is required for convergence. However, regardless of the resolution, we find that b varies by factors of a few in all cases, which means that the effective driving mode alters significantly from cloud to cloud.
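The measurement b = σ_{ρ/ρ0}/M can be sketched for a single cloud from gridded density and velocity data. This is a minimal illustration on uniform-grid arrays; the simulations' actual per-cloud measurement is more involved:

```python
import numpy as np

def driving_parameter(density, velocity, sound_speed):
    """Turbulence driving parameter b = sigma_{rho/rho0} / M for one
    cloud: standard deviation of the density contrast divided by the
    rms Mach number of the velocity field."""
    rho0 = density.mean()
    sigma = np.std(density / rho0)                          # density contrast
    mach = np.sqrt(np.mean((velocity / sound_speed) ** 2))  # rms Mach number
    return sigma / mach
```

Values near 1/3 would indicate solenoidal driving and values near 1 compressive driving, as in the classification used above.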
On pressure measurement and seasonal pressure variations during the Phoenix mission
NASA Astrophysics Data System (ADS)
Taylor, Peter A.; Kahanpää, Henrik; Weng, Wensong; Akingunola, Ayodeji; Cook, Clive; Daly, Mike; Dickinson, Cameron; Harri, Ari-Matti; Hill, Darren; Hipkin, Victoria; Polkko, Jouni; Whiteway, Jim
2010-03-01
In situ surface pressures measured at 2 s intervals during the 150 sol Phoenix mission are presented and seasonal variations discussed. The lightweight Barocap®/Thermocap® pressure sensor system performed moderately well. However, the original data processing routine had problems because the thermal environment of the sensor was subject to more rapid variations than had been expected. Hence, the data processing routine was updated after Phoenix landed. Further evaluation and the development of a correction are needed since the temperature dependences of the Barocap sensor heads drifted after the calibration of the sensor. The resulting inaccuracy appears when the temperature of the unit rises above 0°C; this frequently affects data in the afternoons and precludes a full study of diurnal pressure variations at this time. Short-term fluctuations, on time scales of order 20 s, are unaffected and are reported in a separate paper in this issue. Seasonal variations are not significantly affected by this problem and show general agreement with previous measurements from Mars. During the 151 sol mission the surface pressure dropped from around 860 Pa to a minimum (daily average) of 724 Pa on sol 140 (Ls 143). This local minimum occurred several sols earlier than expected based on GCM studies and Viking data. Since battery power was lost on sol 151, we are not sure whether the timing of the minimum that we saw could have been advanced by a low-pressure meteorological event. On sol 95 (Ls 122), we also saw a relatively low-pressure feature. This was accompanied by a large number of vertical vortex events, characterized by short, localized (in time), low-pressure perturbations.
Power limits for microbial life.
LaRowe, Douglas E; Amend, Jan P
2015-01-01
To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well-characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm(-3)) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ∼10(-12) and 10(-16) W cm(-3). The rates of POC degradation are calculated using a continuum model, while Gibbs energies are computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells cm(-3) can be well captured using a maintenance power of 190 zW cell(-1), two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell(-1), with most values under ∼300 zW cell(-1). Furthermore, our analysis suggests that the absolute minimum power requirement for a single cell to remain viable is on the order of 1 zW cell(-1).
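The direct link between power supply and biomass described above amounts to dividing the volumetric power supply by a per-cell maintenance power; a one-line illustration using the 190 zW cell(-1) value quoted above (the chosen power supply is an arbitrary point inside the reported 10(-12)-10(-16) W cm(-3) range):

```python
def max_cell_density(power_supply_w_cm3, maintenance_power_w_cell):
    """Cell abundance (cells cm-3) sustainable by a given volumetric
    power supply, assuming each cell draws the stated maintenance
    power. Illustrative of the biomass-power link described above."""
    return power_supply_w_cm3 / maintenance_power_w_cell

# A 1e-14 W cm-3 supply at 190 zW (190e-21 W) per cell:
cells = max_cell_density(1e-14, 190e-21)
```

This yields on the order of 5 × 10(4) cells cm(-3), illustrating how an ultralow maintenance power is required for the observed power supplies to support measurable biomass.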
Liang, Shih-Hsiung; Walther, Bruno Andreas; Shieh, Bao-Sen
2017-01-01
Biological invasions have become a major threat to biodiversity, and identifying the determinants underlying success at different stages of the invasion process is essential both for prevention management and for testing ecological theories. Investigating the variables associated with different stages of the invasion process in a local region such as Taiwan poses problems for traditional parametric analyses: too many variables of different data types (nominal, ordinal, and interval) and a relatively small data set with many missing values. We therefore used five decision tree models instead and compared their performance. Our dataset contains 283 exotic bird species which were transported to Taiwan; of these 283 species, 95 successfully escaped to the field (introduction success), and of these 95 introduced species, 36 successfully reproduced in the field in Taiwan (establishment success). For each species, we collected 22 variables associated with human selectivity and species traits which may determine success during the introduction and establishment stages. For each decision tree model, we performed three variable treatments: (I) including all 22 variables, (II) excluding nominal variables, and (III) excluding nominal variables and replacing ordinal values with binary ones. Five performance measures were used to compare models: area under the receiver operating characteristic curve (AUROC), specificity, precision, recall, and accuracy. The gradient boosting models performed best overall among the five decision tree models for both introduction and establishment success and across variable treatments. The most important variables for predicting introduction success were the bird family, the number of invaded countries, and variables associated with environmental adaptation, whereas the most important variables for predicting establishment success were the number of invaded countries and variables associated with reproduction. 
Our final optimal models achieved relatively high performance values, and we discuss differences in performance with regard to sample size and variable treatments. Our results showed that the number of invaded countries was the most important determinant in the establishment model and the second most important in the introduction model. We therefore suggest that future success in the introduction and establishment of exotic birds may be gauged by simply looking at previous success in invading other countries. Finally, we found that species traits related to reproduction were more important in establishment models than in introduction models; importantly, these determinants were not averaged values but the minimum or maximum values of species traits. We therefore suggest that, in addition to averaged values, reproductive potential represented by the minimum and maximum values of species traits should be considered in invasion studies.
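A hedged sketch of the gradient-boosting-with-AUROC workflow described above, on synthetic stand-in data (trait names, sample generation, and effect sizes are invented for illustration; this is not the study's dataset or tuned model):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 283-species table: a few numeric traits
# and a binary introduction-success label.
rng = np.random.default_rng(7)
n = 283
invaded_countries = rng.poisson(3, n)       # previous invasion successes
min_clutch_size = rng.uniform(1, 6, n)      # a minimum reproductive trait
body_mass = rng.lognormal(3, 1, n)          # an uninformative trait
X = np.column_stack([invaded_countries, min_clutch_size, body_mass])

# Success probability driven mainly by previous invasions (as found above).
logit = 0.8 * invaded_countries + 0.5 * min_clutch_size - 4
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

Gradient boosting handles mixed variable importances and nonlinearities without the distributional assumptions of parametric models, which is one reason decision-tree ensembles suit small, heterogeneous trait tables like the one described above; `model.feature_importances_` then provides the variable rankings the study reports.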