On the minimum of independent geometrically distributed random variables
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
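As a quick check of the ratio-equals-ties identity, here is a small simulation sketch (our own illustration with assumed parameters n = 5 and mean 10, not code from the paper):

```python
import math
import random

# Illustrative simulation (assumed parameters): n i.i.d. geometric and
# exponential variables matched to the same mean. The ratio of the two
# expected minima should match the expected number of ties at the
# geometric minimum.
random.seed(1)
n, mean, trials = 5, 10.0, 20000
p = 1.0 / mean    # geometric on {1, 2, ...} with mean 1/p

geo_min_sum = exp_min_sum = ties_sum = 0.0
for _ in range(trials):
    # Inverse-transform sampling of the geometric distribution.
    geo = [math.floor(math.log(1.0 - random.random()) / math.log(1.0 - p)) + 1
           for _ in range(n)]
    exps = [random.expovariate(1.0 / mean) for _ in range(n)]
    m = min(geo)
    geo_min_sum += m
    exp_min_sum += min(exps)
    ties_sum += geo.count(m)   # a continuous minimum is unique almost surely

ratio = geo_min_sum / exp_min_sum
print(ratio, ties_sum / trials)   # the two values should nearly agree
```

With these parameters the exact values are E(min geo) = 1/(1-0.9^5) ≈ 2.442 and E(min exp) = 10/5 = 2, so both printed numbers should be near 1.221.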
Anticipating Cycle 24 Minimum and its Consequences: An Update
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
This Technical Publication updates estimates for cycle 24 minimum and discusses consequences associated with cycle 23 being a longer than average period cycle and cycle 24 having parametric minimum values smaller (or larger for the case of spotless days) than long term medians. Through December 2007, cycle 23 has persisted 140 mo from its 12-mo moving average (12-mma) minimum monthly mean sunspot number occurrence date (May 1996). Longer than average period cycles of the modern era (since cycle 12) have minimum-to-minimum periods of about 139.0+/-6.3 mo (the 90-percent prediction interval), implying that cycle 24's minimum monthly mean sunspot number should be expected before July 2008. The major consequence of this is that, unless cycle 24 is a statistical outlier (like cycle 21), its maximum amplitude (RM) likely will be smaller than previously forecast. If, however, in the course of its rise cycle 24's 12-mma of the weighted mean latitude (L) of spot groups exceeds 24 deg, then one expects RM >131, and if its 12-mma of highest latitude (H) spot groups exceeds 38 deg, then one expects RM >127. High-latitude new cycle spot groups, while first reported in January 2008, have not, as yet, become the dominant form of spot groups. Minimum values in L and H were observed in mid-2007 and values are now slowly increasing, a precondition for the imminent onset of the new sunspot cycle.
ERIC Educational Resources Information Center
Varga-Atkins, Tünde
2016-01-01
Recent years have seen a focus on responding to student expectations in higher education. As a result, a number of technology-enhanced learning (TEL) policies have stipulated a requirement for a minimum virtual learning environment (VLE) standard to provide a consistent student experience. This paper offers insight into an under-researched area of…
Trends in record-breaking temperatures for the conterminous United States
NASA Astrophysics Data System (ADS)
Rowe, Clinton M.; Derry, Logan E.
2012-08-01
In an unchanging climate, record-breaking temperatures are expected to decrease in frequency over time, as established records become increasingly more difficult to surpass. This inherent trend in the number of record-breaking events confounds the interpretation of actual trends in the presence of any underlying climate change. Here, a simple technique to remove the inherent trend is introduced so that any remaining trend can be examined separately for evidence of a climate change. As this technique does not use the standard definition of a broken record, our records* are differentiated by an asterisk. Results for the period 1961-2010 indicate that the number of record* low daily minimum temperatures has been significantly and steadily decreasing nearly everywhere across the United States while the number of record* high daily minimum temperatures has been predominantly increasing. Trends in record* low and record* high daily maximum temperatures are generally weaker and more spatially mixed in sign. These results are consistent with other studies examining changes expected in a warming climate.
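The inherent decline can be made concrete: for an i.i.d. (trend-free) series, year n sets a record with probability 1/n, so the expected record count over N years is the harmonic number H_N. A small stdlib-Python sketch (our illustration, not the authors' detrending technique):

```python
import random

# Illustration (assumes an i.i.d., stationary "climate"): the chance that
# year n sets a new record is 1/n, so the expected number of records in
# N years is the harmonic number H_N -- records become rarer over time
# even with no underlying trend.
random.seed(0)
N, trials = 50, 5000
counts = 0
for _ in range(trials):
    best = float("-inf")
    for _ in range(N):
        x = random.gauss(0.0, 1.0)
        if x > best:        # a new record* year
            best = x
            counts += 1
harmonic = sum(1.0 / n for n in range(1, N + 1))
print(counts / trials, harmonic)   # simulated mean vs. H_50, about 4.5
```

This 1/n falloff is exactly the inherent trend the authors must remove before any residual trend can be attributed to climate change.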
The impact of minimum wages on population health: evidence from 24 OECD countries.
Lenhart, Otto
2017-11-01
This study examines the relationship between minimum wages and several measures of population health by analyzing data from 24 OECD countries over a period of 31 years. Specifically, I test for health effects as a result of within-country variations in the generosity of minimum wages, which are measured by the Kaitz index. The paper finds that higher levels of minimum wages are associated with significant reductions in overall mortality rates as well as in the number of deaths due to outcomes that have been shown to be more prevalent among individuals with low socioeconomic status (e.g., diabetes, diseases of the circulatory system, stroke). A 10-percentage-point increase in the Kaitz index is associated with significant declines in death rates and an increase in life expectancy of 0.44 years. Furthermore, I provide evidence for potential channels through which minimum wages impact population health by showing that more generous minimum wages affect outcomes such as poverty, the share of the population with unmet medical needs, the number of doctor consultations, tobacco consumption, calorie intake, and the likelihood of people being overweight.
A quadratic regression modelling on paddy production in the area of Perlis
NASA Astrophysics Data System (ADS)
Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2017-08-01
Polynomial regression models are useful in situations in which the relationship between a response variable and predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship into a least squares linear regression model by decomposing the predictor variables into a kth order polynomial. The polynomial order determines the number of inflexions on the curvilinear fitted line. A second order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum, while a third order polynomial forms a cubic expression with both a relative maximum and a relative minimum. This study used paddy data in the area of Perlis to model paddy production based on paddy cultivation characteristics and environmental characteristics. The results indicated that a quadratic regression model best fits the data and that paddy production is affected by urea fertilizer application and by the interaction between amount of average rainfall and percentage of area affected by pests and disease. Urea fertilizer application has a quadratic effect in the model, indicating that as the number of days of urea fertilizer application increases, paddy production is expected to decrease until it reaches a minimum value and then to increase at higher numbers of days of urea application. The decrease in paddy production with an increase in rainfall is greater the higher the percentage of area affected by pests and disease.
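As an illustration of the quadratic-with-a-minimum idea, here is a least-squares sketch on synthetic data (not the Perlis dataset; all numbers below are invented):

```python
# Sketch (synthetic data, not the Perlis paddy data): fit a quadratic
# y = b0 + b1*x + b2*x^2 by least squares via the normal equations.
# With b2 > 0 the fitted parabola has its minimum at x* = -b1 / (2*b2).

def quad_fit(xs, ys):
    # Normal equations X'X b = X'y for the design matrix [1, x, x^2].
    s = [sum(x**k for x in xs) for k in range(5)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):   # back substitution
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, 3))) / A[i][i]
    return coef

xs = [1, 2, 3, 4, 5, 6, 7]                      # e.g. days of urea application
ys = [13.1, 7.9, 5.2, 4.0, 5.1, 8.0, 12.9]      # noisy (x-4)^2 + 4
b0, b1, b2 = quad_fit(xs, ys)
print(-b1 / (2 * b2))   # fitted minimum, near x = 4
```

The sign of b2 distinguishes the two cases the abstract describes: b2 > 0 gives a minimum (production falls, then rises), b2 < 0 a maximum.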
Sunspot Activity Near Cycle Minimum and What it Might Suggest for Cycle 24, the Next Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2009-01-01
In late 2008, 12-month moving averages of sunspot number, number of spotless days, number of groups, area of sunspots, and area per group were reflective of sunspot cycle minimum conditions for cycle 24, these values being at or near record levels. The first spotless day occurred in January 2004 and the first new-cycle, high-latitude spot was reported in January 2008, although old-cycle, low-latitude spots have continued to be seen through April 2009, yielding an overlap of old and new cycle spots of at least 16 mo. New-cycle spots first became dominant over old-cycle spots in September 2008. The minimum value of the weighted mean latitude of sunspots occurred in May 2007, measuring 6.6 deg, and the minimum value of the highest-latitude spot followed in June 2007, measuring 11.7 deg. A cycle length of at least 150 mo is inferred for cycle 23, making it the longest cycle of the modern era. Based on both the maximum-minimum and amplitude-period relationships, cycle 24 is expected to be only of average to below-average size, peaking probably in late 2012 to early 2013, unless it proves to be a statistical outlier.
Li, Su-Ting T; Tancredi, Daniel J; Schwartz, Alan; Guillot, Ann; Burke, Ann E; Trimm, R Franklin; Guralnick, Susan; Mahan, John D; Gifford, Kimberly
2018-04-25
The Accreditation Council for Graduate Medical Education requires semiannual Milestone reporting on all residents. Milestone expectations of performance are unknown. Our objective was to determine pediatric program director (PD) minimum Milestone expectations for residents before they are ready to supervise and before they are ready to graduate. Mixed methods survey of pediatric PDs on their programs' Milestone expectations before residents are ready to supervise and before they are ready to graduate, and in what ways PDs use Milestones to make supervision and graduation decisions. If programs had no established Milestone expectations, PDs indicated expectations they considered for use in their program. Mean minimum Milestone level expectations adjusted for program size, region, and clustering of Milestone expectations by program were calculated for readiness to supervise and readiness to graduate. Free-text questions were analyzed using thematic analysis. The response rate was 56.8% (113/199). Most programs had no required minimum Milestone level before residents are ready to supervise (80%; 76/95) or ready to graduate (84%; 80/95). For readiness to supervise, minimum Milestone expectations PDs considered establishing for their program were highest for humanism (2.46, 95% CI: 2.21-2.71) and professionalization (2.37, 2.15-2.60). Minimum Milestone expectations for graduates were highest for help-seeking (3.14, 2.83-3.46). Main themes included the use of Milestones in combination with other information to assess learner performance and that Milestones are not equally weighted when making advancement decisions. Most PDs have not established program minimum Milestones, but would vary such expectations by competency. Copyright © 2018. Published by Elsevier Inc.
Assessing the impact of heart failure specialist services on patient populations.
Lyratzopoulos, Georgios; Cook, Gary A; McElduff, Patrick; Havely, Daniel; Edwards, Richard; Heller, Richard F
2004-05-24
The assessment of the impact of healthcare interventions may help commissioners of healthcare services to make optimal decisions. This can be particularly the case if the impact assessment relates to specific patient populations and uses timely local data. We examined the potential impact on readmissions and mortality of specialist heart failure services capable of delivering treatments such as beta-blockers and Nurse-Led Educational Intervention (N-LEI). Statistical modelling of prevented or postponed events among previously hospitalised patients, using estimates of: treatment uptake and contraindications (based on local audit data); treatment effectiveness and intolerance (based on literature); and annual number of hospitalizations per patient and annual risk of death (based on routine data). Optimal treatment uptake among eligible but untreated patients would over one year prevent or postpone 11% of all expected readmissions and 18% of all expected deaths for spironolactone, 13% of all expected readmissions and 22% of all expected deaths for beta-blockers (carvedilol) and 20% of all expected readmissions and an uncertain number of deaths for N-LEI. Optimal combined treatment uptake for all three interventions during one year among all eligible but untreated patients would prevent or postpone 37% of all expected readmissions and a minimum of 36% of all expected deaths. In a population of previously hospitalised patients with low previous uptake of beta-blockers and no uptake of N-LEI, optimal combined uptake of interventions through specialist heart failure services can potentially help prevent or postpone approximately four times as many readmissions and a minimum of twice as many deaths compared with simply optimising uptake of spironolactone (not necessarily requiring specialist services). Examination of the impact of different heart failure interventions can inform rational planning of relevant healthcare services.
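The modelling logic reduces to multiplying a baseline event count by the eligible-but-untreated fraction, a tolerance-adjusted uptake, and the relative risk reduction. A sketch with invented inputs (none of the study's audit figures are used):

```python
# Sketch of the prevented/postponed-events logic with made-up inputs
# (illustrative only; not the paper's audit data or effectiveness figures):
# prevented = baseline events * eligible-untreated fraction
#             * tolerated-uptake fraction * relative risk reduction.
def prevented(baseline_events, eligible_untreated, tolerated_uptake, rrr):
    return baseline_events * eligible_untreated * tolerated_uptake * rrr

baseline_readmissions = 1000   # expected readmissions in the cohort (assumed)
saved = prevented(baseline_readmissions, 0.5, 0.9, 0.3)
print(saved)   # about 135 readmissions prevented or postponed
```

Running the same arithmetic per intervention and summing (with overlap corrections) is what yields the combined-uptake percentages reported in the abstract.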
45 CFR 2400.51 - Summer Institute accreditation.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... The Institute is accredited for six graduate semester credits by the university at which it is held. It is expected that the universities at which Fellows are pursuing their graduate study will, upon... upon transfer from the university at which the Institute is held in fulfillment of the minimum number...
User Need Satisfaction as a Basis for Tactical Airlift Scheduling.
1984-03-01
MS Thesis, March 1984, 220 pages. Excerpt: ...SUBROUTINE ROUTE. IT DOES THIS BY TESTING EACH MISSION FOR SCHEDULING CONFLICTS. IF THE NUMBER OF EXPECTED CONFLICTS EXCEEDS A MINIMUM VALUE, THE...
Ponicki, William R; Gruenewald, Paul J; LaScala, Elizabeth A
2007-05-01
There is a considerable body of prior research indicating that a number of public policies that limit alcohol availability affect youth traffic fatalities. These limitations can be economic (e.g., beverage taxation), physical (e.g., numbers or operating hours of alcohol outlets), or demographic (e.g., minimum legal drinking age). The estimated impacts of these policies differ widely across studies. A full-price theoretical approach suggests that people weigh the benefits of drinking against the sum of all the associated costs, including the price of the beverages themselves plus the difficulty of obtaining them and any additional risks of injury or punishment related to their use. This study tested one prediction of this model, namely that the impact from changing one availability-related cost depends on the level of other components of full cost. The current analyses concentrate on 2 forms of limitations on availability that have been shown to affect youth traffic fatalities: minimum legal drinking age (MLDA) laws and beer taxes. The interdependence between the impacts of MLDA and taxes is investigated using a panel of 48 US states over the period 1975 to 2001. All age-group-specific models control for numerous other variables previously shown to affect vehicle fatalities, as well as fixed effects to account for unexplained cross-sectional and time-series variation. The analyses showed that raising either MLDA or beer taxes in isolation led to fewer youth traffic fatalities. As expected, a given change in MLDA causes a larger proportional change in fatalities when beer taxes are low than when they are high. These findings suggest that a community's expected benefit from a proposed limitation on alcohol availability depends on its current regulatory environment.
Specifically, communities with relatively strong existing policies might expect smaller impacts than suggested by prior research, while places with weak current regulations might expect larger benefits from the same policy initiative.
On the Importance of Cycle Minimum in Sunspot Cycle Prediction
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-01-01
The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, implying that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.
Baker, Stuart G
2018-02-01
When using risk prediction models, an important consideration is weighing performance against the cost (monetary and harms) of ascertaining predictors. The minimum test tradeoff (MTT) for ruling out a model is the minimum number of all-predictor ascertainments per correct prediction to yield a positive overall expected utility. The MTT for ruling out an added predictor is the minimum number of added-predictor ascertainments per correct prediction to yield a positive overall expected utility. An approximation to the MTT for ruling out a model is 1/[P · H(AUC_model)], where H(AUC) = AUC - {(1/2)(1 - AUC)}^(1/2), AUC is the area under the receiver operating characteristic (ROC) curve, and P is the probability of the predicted event in the target population. An approximation to the MTT for ruling out an added predictor is 1/[P · {H(AUC_Model2) - H(AUC_Model1)}], where Model 2 includes an added predictor relative to Model 1. The latter approximation requires the Tangent Condition that the true positive rate at the point on the ROC curve with a slope of 1 is larger for Model 2 than Model 1. These approximations are suitable for back-of-the-envelope calculations. For example, in a study predicting the risk of invasive breast cancer, Model 2 adds to the predictors in Model 1 a set of 7 single nucleotide polymorphisms (SNPs). Based on the AUCs and the Tangent Condition, an MTT of 7200 was computed, which indicates that 7200 sets of SNPs are needed for every correct prediction of breast cancer to yield a positive overall expected utility. If ascertaining the SNPs costs $500, this MTT suggests that SNP ascertainment is not likely worthwhile for this risk prediction.
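The two approximations are simple to evaluate; here is a back-of-the-envelope sketch using the formulas as stated, with illustrative values of P and AUC (not the breast-cancer study's figures):

```python
import math

# Sketch of the abstract's approximations (illustrative inputs, not the
# breast-cancer study's data): H(AUC) = AUC - sqrt(0.5 * (1 - AUC)),
# MTT_model ~= 1 / (P * H(AUC)), and for an added predictor
# MTT_added ~= 1 / (P * (H(AUC2) - H(AUC1))).
def H(auc):
    return auc - math.sqrt(0.5 * (1.0 - auc))

def mtt_model(p_event, auc):
    return 1.0 / (p_event * H(auc))

def mtt_added(p_event, auc1, auc2):
    # Assumes the Tangent Condition holds for Model 2 vs. Model 1.
    return 1.0 / (p_event * (H(auc2) - H(auc1)))

print(mtt_model(0.05, 0.80))          # ascertainments per correct prediction
print(mtt_added(0.05, 0.60, 0.65))    # for a small AUC gain, MTT balloons
```

Note how a modest AUC improvement (0.60 to 0.65) at a 5% event rate already demands hundreds of added-predictor ascertainments per correct prediction, which is the intuition behind the study's 7200 figure.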
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
Urban-rural migration: uncertainty and the effect of a change in the minimum wage.
Ingene, C A; Yu, E S
1989-01-01
"This paper extends the neoclassical, Harris-Todaro model of urban-rural migration to the case of production uncertainty in the agricultural sector. A unique feature of the Harris-Todaro model is an exogenously determined minimum wage in the urban sector that exceeds the rural wage. Migration occurs until the rural wage equals the expected urban wage ('expected' due to employment uncertainty). The effects of a change in the minimum wage upon regional outputs, resource allocation, factor rewards, expected profits, and expected national income are explored, and the influence of production uncertainty upon the obtained results are delineated." The geographical focus is on developing countries. excerpt
Quasiglobal reaction model for ethylene combustion
NASA Technical Reports Server (NTRS)
Singh, D. J.; Jachimowski, Casimir J.
1994-01-01
The objective of this study is to develop a reduced mechanism for ethylene oxidation. The authors are interested in a model with a minimum number of species and reactions that still models the chemistry with reasonable accuracy for the expected combustor conditions. The model will be validated by comparing the results to those calculated with a detailed kinetic model that has been validated against the experimental data.
Neural Network Solves "Traveling-Salesman" Problem
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P.; Moopenn, Alexander W.
1990-01-01
Experimental electronic neural network solves "traveling-salesman" problem. Plans round trip of minimum distance among N cities, visiting every city once and only once (without backtracking). This problem is paradigm of many problems of global optimization (e.g., routing or allocation of resources) occurring in industry, business, and government. Applied to large number of cities (or resources), circuits of this kind expected to solve problem faster and more cheaply.
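For scale, the exact problem the circuit approximates can be solved by brute force only for tiny N; a sketch (our illustration with made-up city coordinates, not the neural-network approach):

```python
import itertools
import math

# Not the paper's neural-network circuit -- a brute-force baseline showing
# the problem it solves: the shortest closed tour visiting each city once.
# City coordinates are invented for illustration.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]

def tour_length(order):
    # Sum of leg lengths, closing the loop back to the starting city.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Fix city 0 as the start and try all (N-1)! orders. This exhaustive search
# scales factorially, which is why heuristic hardware is attractive.
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
best_tour = (0,) + best
print(best_tour, tour_length(best_tour))
```

Even at N = 20 the search space is 19!/2 ≈ 6 × 10^16 tours, which motivates the analog neural circuit's fast approximate minimization.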
Mathematics of gravitational lensing: multiple imaging and magnification
NASA Astrophysics Data System (ADS)
Petters, A. O.; Werner, M. C.
2010-09-01
The mathematical theory of gravitational lensing has revealed many generic and global properties. Beginning with multiple imaging, we review Morse-theoretic image counting formulas and lower bound results, and complex-algebraic upper bounds in the case of single and multiple lens planes. We discuss recent advances in the mathematics of stochastic lensing, discussing a general formula for the global expected number of minimum lensed images as well as asymptotic formulas for the probability densities of the microlensing random time delay functions, random lensing maps, and random shear, and an asymptotic expression for the global expected number of micro-minima. Multiple imaging in optical geometry and a spacetime setting are treated. We review global magnification relation results for model-dependent scenarios and cover recent developments on universal local magnification relations for higher order caustics.
Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U
2010-05-01
Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures/y at 30/center. The number of procedures performed in a single center over a large period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases/y were performed in 1998 (n = 86) followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using R (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall incremental trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Holt-Winters exponential smoothing applied to the period 1983 to 2007 predicted 58 procedures for 2008, while 52 were actually performed in that year. The time series approach may be helpful to establish a minimum volume/y at a single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.
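The forecasting step can be sketched with Holt's linear exponential smoothing, the non-seasonal core of the Holt-Winters method the authors used (invented annual counts below, not the center's actual series):

```python
# Sketch (invented annual counts, not the center's 1983-2007 series):
# Holt's linear exponential smoothing -- the trend component of
# Holt-Winters -- used to project next year's procedure count.
def holt_forecast(series, alpha=0.5, beta=0.3):
    # Initialize level and trend from the first two observations.
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)   # smooth the level
        trend = beta * (level - last_level) + (1 - beta) * trend  # smooth the slope
    return level + trend   # one-step-ahead forecast

annual = [40, 45, 50, 58, 62, 75, 86, 70, 73, 82, 60, 55]  # assumed counts/y
print(holt_forecast(annual))
```

Comparing such a forecast against the legally mandated 30 procedures/y is what lets a center anticipate falling below the minimum-volume threshold.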
An Examination of Selected Geomagnetic Indices in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
Previous studies have shown geomagnetic indices to be useful for providing early estimates for the size of the following sunspot cycle several years in advance. Examined in this study are various precursor methods for predicting the minimum and maximum amplitude of the following sunspot cycle, these precursors based on the aa and Ap geomagnetic indices and the number of disturbed days (NDD), days when the daily Ap index equaled or exceeded 25. Also examined are the yearly peak of the daily Ap index (Apmax), the number of days when Ap greater than or equal to 100, cyclic averages of sunspot number R, aa, Ap, NDD, and the number of sudden storm commencements (NSSC), as well as the cyclic sums of NDD and NSSC. The analysis yields 90-percent prediction intervals for both the minimum and maximum amplitudes for cycle 24, the next sunspot cycle. In terms of yearly averages, the best regressions give Rmin = 9.8+/-2.9 and Rmax = 153.8+/-24.7, equivalent to Rm = 8.8+/-2.8 and RM = 159+/-5.5, based on the 12-mo moving average (or smoothed monthly mean sunspot number). Hence, cycle 24 is expected to be above average in size, similar to cycles 21 and 22, producing more than 300 sudden storm commencements and more than 560 disturbed days, of which about 25 will be Ap greater than or equal to 100. On the basis of annual averages, the sunspot minimum year for cycle 24 will be either 2006 or 2007.
2016-11-30
This week the sun was hitting its lowest level of solar activity since 2011 (Nov. 14-18, 2016) as it gradually marches toward solar minimum. This activity is usually measured by sunspot count, and over the past several days the sun has been almost spotless. The sun has a pendulum-like pattern of activity that repeats over a solar cycle of about 11 years. The last peak of activity was in early 2014. At this point in time, the sunspot numbers seem to be sliding downwards faster than expected, though solar minimum should not occur until 2021. No doubt more and larger sunspots will inevitably appear, but we'll just have to wait and see. Movies are available at http://photojournal.jpl.nasa.gov/catalog/PIA21207
Final Report on Minimum Work Expectations of Recent [Nursing] Graduates.
ERIC Educational Resources Information Center
Scott, Robert E.
To determine the importance of job tasks and/or activities for the nurse aide, the licensed practical nurse (LPN), and the associate degree nurse (ADN), nursing instructors, LPNs and employers were surveyed in Kansas in 1978 using a minimum work behavior expectation instrument. Respondents were asked to rate approximately 200 discrete job tasks…
Procedures for One-Pass Vehicle Cone Index (VCI1) Determination for Acquisition Support
2013-08-01
the VCI of tracked vehicles can be directly compared to that of wheeled vehicles; Priddy and Willoughby, 2006). Measurement of the minimum soil...of the wheel, or number of revolutions per unit time divided by 2π for a track; v = forward velocity of vehicle or wheel axle. 12. Trafficability...be tested at the expected gross vehicle weight (GVW) and, for wheeled vehicles, at an appropriate soft-soil tire pressure. For wheeled vehicles
Topside Equatorial Ionospheric Density and Composition During and After Extreme Solar Minimum
NASA Technical Reports Server (NTRS)
Klenzing, J.; Simoes, F.; Ivanov, S.; Heelis, R. A.; Bilitza, D.; Pfaff, R.; Rowland, D.
2011-01-01
During the recent solar minimum, solar activity reached the lowest levels observed during the space age. This extremely low solar activity has accompanied a number of unexpected observations in the Earth's ionosphere-thermosphere system when compared to previous solar minima. Among these are the fact that the ionosphere is significantly contracted beyond expectations based on empirical models. Altitude profiles of ion density and composition measurements near the magnetic dip equator are constructed from the Communication/Navigation Outage Forecast System (C/NOFS) satellite to characterize the shape of the topside ionosphere during the recent solar minimum and into the new solar cycle. The variation of the profiles with respect to local time, season, and solar activity are compared to the IRI-2007 model. Building on initial results reported by Heelis et al. (2009), here we describe the extent of the contracted ionosphere, which is found to persist throughout 2009. The shape of the ionosphere during 2010 is found to be consistent with observations from previous solar minima.
NASA Astrophysics Data System (ADS)
Martin, J.
1982-04-01
It is shown that the fulfillment of very high speed integrated circuit (VHSIC) device development goals entails restructuring military electronics acquisition policy; standardization that produces the maximum number of systems and subsystems from the minimum number of flexible, broad-purpose, high-power semiconductors; and, especially, the standardization of bus structures incorporating a prioritization system. It is expected that the Design Specification Handbook currently under preparation by the VHSIC program office of the DoD will make the design of such systems a task whose complexity is comparable to that of present integrated circuit electronics.
Examination of Solar Cycle Statistical Model and New Prediction of Solar Cycle 23
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.
2000-01-01
Sunspot numbers in the current solar cycle 23 were estimated using a statistical model, based on the odd-even behavior of historical sunspot cycles 1 through 22, applied to the accumulating cycle sunspot data. Now that cycle 23 has progressed and the date of solar minimum has been accurately defined, the statistical model is validated by comparing the previous prediction with the newly measured sunspot numbers, and an improved short-range sunspot projection is made accordingly. The current cycle is expected to have a moderate level of activity. Errors of this model are shown to be self-correcting as cycle observations become available.
A Catalog of Visual Double and Multiple Stars With Eclipsing Components
2009-08-01
astrometric data were analyzed, resulting in new orbits for eight systems and new times of minimum light for a number of the eclipsing binaries. Some ... analyses; one especially productive source is the study of the long-time behavior of the period of an EB. As might be expected, the longer the time span of conjunction time measurements, or times of minimum light, the greater the chance of detecting a long-period orbit due to an additional
Anesthesiologists' perceptions of minimum acceptable work habits of nurse anesthetists.
Logvinov, Ilana I; Dexter, Franklin; Hindman, Bradley J; Brull, Sorin J
2017-05-01
Work habits are non-technical skills that are an important part of job performance. Although non-technical skills are usually evaluated on a relative basis (i.e., "grading on a curve"), validity of evaluation on an absolute basis (i.e., "minimum passing score") needs to be determined. Survey and observational study. None. None. The theme of "work habits" was assessed using a modification of Dannefer et al.'s 6-item scale, with scores ranging from 1 (lowest performance) to 5 (highest performance). E-mail invitations were sent to all consultant and fellow anesthesiologists at Mayo Clinic in Florida, Arizona, and Minnesota. Because work habits expectations can be generational, the survey was designed for adjustment based on all invited (responding or non-responding) anesthesiologists' year of graduation from residency. The overall mean±standard deviation of the score for anesthesiologists' minimum expectations of nurse anesthetists' work habits was 3.64±0.66 (N=48). Minimum acceptable scores were correlated with the year of graduation from anesthesia residency (linear regression P=0.004). Adjusting for survey non-response using all N=207 anesthesiologists, the mean of the minimum acceptable work habits adjusted for year of graduation was 3.69 (standard error 0.02). The minimum expectations for nurse anesthetists' work habits were compared with observational data obtained from the University of Iowa. Among 8940 individual nurse anesthetist work habits scores, only 2.6% were <3.69. All N=65 of the Iowa nurse anesthetists' mean work habits scores were significantly greater than the Mayo estimate (3.69) for the minimum expectations; all P<0.00024. Our results suggest that routinely evaluated work habits of nurse anesthetists within departments should not be compared with an appropriate minimum score (i.e., 3.69). Instead, work habits scores should be analyzed based on relative reporting among anesthetists. Copyright © 2017 Elsevier Inc. All rights reserved.
Uncertainty, imprecision, and the precautionary principle in climate change assessment.
Borsuk, M E; Tomassini, L
2005-01-01
Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
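The decision criterion described above can be illustrated with a toy computation. In this sketch, the action names, cost figures, and the finite class of probability measures are all hypothetical; they merely show how an upper expected cost is taken over a set of measures and then minimized over actions.

```python
# cost[action][scenario]: hypothetical damage-plus-abatement costs for
# three emissions policies under three climate-sensitivity scenarios.
costs = {
    "high_emissions": [1.0, 4.0, 9.0],   # cheap if climate is insensitive, costly otherwise
    "mid_emissions":  [2.0, 3.0, 5.0],
    "low_emissions":  [3.5, 3.5, 3.5],   # abatement cost dominates, little climate risk
}

# A (finite) class of admissible probability measures over the scenarios,
# representing imprecision in the climate-sensitivity distribution.
probability_class = [
    [0.6, 0.3, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
]

def upper_expected_cost(cost_row, measures):
    """Upper bound of expected cost over the set of probability measures."""
    return max(sum(p * c for p, c in zip(m, cost_row)) for m in measures)

def minimum_upper_expected_cost(costs, measures):
    """Choose the action whose upper expected cost is smallest."""
    return min(costs, key=lambda a: upper_expected_cost(costs[a], measures))

best = minimum_upper_expected_cost(costs, probability_class)
print(best, upper_expected_cost(costs[best], probability_class))
```

With these (invented) numbers the precautionary criterion selects the low-emissions policy, because its expected cost is insensitive to which admissible measure turns out to be right.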
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis therefore would be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2 years between 2011 and 2012. For the modeling, at first, periodicities were checked using Fast Fourier Transformation, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.
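The thresholding scheme above (a weekly ensemble average plus a band around the residual) can be sketched on synthetic data. The counts below are invented, and a plain normal 95% band stands in for the paper's ARIMA model of the residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily counts of condemned carcasses with a weekly pattern
# (an illustrative stand-in for the abattoir inspection series).
weekly_profile = np.array([5.0, 6.0, 7.0, 6.0, 5.0, 1.0, 0.5])
n_days = 7 * 104  # two years of daily data
counts = np.tile(weekly_profile, n_days // 7) + rng.normal(0.0, 0.8, n_days)

# 1) Weekly ensemble average: mean count for each day of the week.
ensemble = counts.reshape(-1, 7).mean(axis=0)

# 2) Residual after removing the weekly component. (The paper fits an
#    ARIMA model to this residual; a normal band is a simplification.)
residual = counts - np.tile(ensemble, n_days // 7)
upper95 = np.tile(ensemble, n_days // 7) + 1.96 * residual.std()

# 3) Flag days where the observed count exceeds the upper 95% band,
#    i.e. candidate outbreak days.
exceedances = np.flatnonzero(counts > upper95)
print(len(exceedances))
```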
Application of statistical process control to qualitative molecular diagnostic assays.
O'Brien, Cathal P; Finn, Stephen P
2014-01-01
Modern pathology laboratories, and in particular high-throughput laboratories such as clinical chemistry, have developed a reliable system for statistical process control (SPC). Such a system is absent from the majority of molecular laboratories and, where present, is confined to quantitative assays. As the inability to apply SPC to an assay is an obvious disadvantage, this study aimed to solve this problem by using a frequency estimate coupled with a confidence interval calculation to detect deviations from an expected mutation frequency. The results of this study demonstrate the strengths and weaknesses of this approach and highlight minimum sample number requirements. Notably, assays with low mutation frequencies and detection of small deviations from an expected value require greater sample numbers to mitigate a protracted time to detection. Modeled laboratory data were also used to highlight how this approach might be applied in a routine molecular laboratory. This article is the first to describe the application of SPC to qualitative laboratory data.
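A minimal version of this approach, a frequency estimate with a confidence interval used to flag deviations from an expected mutation frequency, might look as follows. The Wald interval and the sample-size formula are textbook approximations, not necessarily the exact calculations used in the study, and the 15% expected frequency is a hypothetical figure:

```python
import math

def proportion_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for an observed frequency k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

def out_of_control(k, n, expected_freq):
    """Flag a run whose confidence interval excludes the expected frequency."""
    lo, hi = proportion_ci(k, n)
    return not (lo <= expected_freq <= hi)

def min_samples(p, delta, z=1.96):
    """Rough minimum sample number to resolve a deviation of size delta."""
    return math.ceil(z**2 * p * (1 - p) / delta**2)

# Example: expected mutation frequency of 15% (hypothetical figure),
# with only 5 mutations observed among 100 samples.
print(out_of_control(k=5, n=100, expected_freq=0.15))  # flagged
print(min_samples(p=0.15, delta=0.05))
```

Consistent with the abstract, the sample-size formula shows that detecting a small deviation (delta) around a low frequency requires markedly more samples as delta shrinks.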
Groundwater-level trends in the U.S. glacial aquifer system, 1964-2013
Hodgkins, Glenn A.; Dudley, Robert W.; Nielsen, Martha G.; Renard, Benjamin; Qi, Sharon L.
2017-01-01
The glacial aquifer system in the United States is a major source of water supply, but previous work on historical groundwater trends across the system is lacking. Trends in annual minimum, mean, and maximum groundwater levels for 205 monitoring wells were analyzed across three regions of the system (East, Central, West Central) for four time periods: 1964-2013, 1974-2013, 1984-2013, and 1994-2013. Trends were computed separately for wells in the glacial aquifer system with low potential for human influence on groundwater levels and for wells with high potential influence from activities such as groundwater pumping. Generally, there were more wells with significantly increasing groundwater levels (levels closer to ground surface) than wells with significantly decreasing levels. The highest numbers of significant increases for all four time periods were in annual minimum and/or mean levels. There were many more wells with significant increases from 1964 to 2013 than over more recent periods, consistent with low precipitation in the 1960s. Overall there were low numbers of wells with significantly decreasing trends regardless of time period considered; the highest number of these was generally for annual minimum groundwater levels at wells with likely human influence. There were substantial differences in the number of wells with significant groundwater-level trends over time, depending on whether the historical time series are assumed to be independent, have short-term persistence, or have long-term persistence. Mean annual groundwater levels have significant lag-one-year autocorrelation at 26.0% of wells in the East region, 65.4% of wells in the Central region, and 100% of wells in the West Central region. Annual precipitation across the glacial aquifer system, on the other hand, has significant autocorrelation at only 5.5% of stations, about the percentage expected due to chance.
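The lag-one autocorrelation test mentioned above can be sketched as follows; the 1.96/sqrt(n) significance bound is the standard large-sample approximation, used here as an assumption rather than the exact method of the study:

```python
import numpy as np

def lag1_autocorrelation(series):
    """Sample lag-1 autocorrelation coefficient r1 of an annual series."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

def significant_lag1(series, z=1.96):
    """Rough two-sided 5% test: |r1| > z / sqrt(n)."""
    n = len(series)
    return abs(lag1_autocorrelation(series)) > z / np.sqrt(n)

rng = np.random.default_rng(1)
white = rng.normal(size=50)                   # independent year-to-year values
persistent = np.cumsum(rng.normal(size=50))   # strong short-term persistence
print(significant_lag1(white), significant_lag1(persistent))
```

A series with persistence (like the West Central groundwater levels) clears the bound easily, while an independent series usually does not, which is why the trend counts depend so strongly on the persistence assumption.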
Minimum-dissipation scalar transport model for large-eddy simulation of turbulent flows
NASA Astrophysics Data System (ADS)
Abkar, Mahdi; Bae, Hyun J.; Moin, Parviz
2016-08-01
Minimum-dissipation models are a simple alternative to the Smagorinsky-type approaches to parametrize the subfilter turbulent fluxes in large-eddy simulation. A recently derived model of this type for subfilter stress tensor is the anisotropic minimum-dissipation (AMD) model [Rozema et al., Phys. Fluids 27, 085107 (2015), 10.1063/1.4928700], which has many desirable properties. It is more cost effective than the dynamic Smagorinsky model, it appropriately switches off in laminar and transitional flows, and it is consistent with the exact subfilter stress tensor on both isotropic and anisotropic grids. In this study, an extension of this approach to modeling the subfilter scalar flux is proposed. The performance of the AMD model is tested in the simulation of a high-Reynolds-number rough-wall boundary-layer flow with a constant and uniform surface scalar flux. The simulation results obtained from the AMD model show good agreement with well-established empirical correlations and theoretical predictions of the resolved flow statistics. In particular, the AMD model is capable of accurately predicting the expected surface-layer similarity profiles and power spectra for both velocity and scalar concentration.
NASA Astrophysics Data System (ADS)
2011-04-01
Metallic asteroid 216 Kleopatra is shaped like a dog's bone and has two tiny moons - which came from the asteroid itself - according to a team of astronomers from France and the US, who also measured its surprisingly low density and concluded that it is a collection of rubble. The recent solar minimum was longer and lower than expected, with a low polar field and an unusually large number of days with no sunspots visible. Models of the magnetic field and plasma flow within the Sun suggest that fast, then slow meridional flow could account for this pattern. Variable stars are a significant scientific target for amateur astronomers. The American Association of Variable Star Observers runs the world's largest database of variable star observations, from volunteers, and reached 20 million observations in February.
Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.
Kusenko, Alexander; Pearce, Lauren; Yang, Louis
2015-02-13
The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.
NASA Technical Reports Server (NTRS)
Shambayati, Shervin; Davarian, Faramaz; Morabito, David
2004-01-01
NASA is planning an engineering telemetry demonstration with the Mars Reconnaissance Orbiter (MRO). The capabilities of Ka-band (32 GHz) for use with deep space missions are demonstrated using link optimization algorithms and weather forecasting. Furthermore, based on the performance of previous deep space missions with Ka-band downlink capabilities, experiment plans are developed for telemetry operations during superior solar conjunction. A general overview of the demonstration is given, followed by a description of the mission planning during cruise, the primary science mission, and superior conjunction. As part of the primary science mission planning, the expected data return for various data optimization methods is calculated. These results indicate that, given MRO's data rates, a link optimized to use at most two data rates, subject to a minimum availability of 90%, performs almost as well as a link with no limit on the number of data rates subject to the same minimum availability.
Minimum expected delay-based routing protocol (MEDR) for Delay Tolerant Mobile Sensor Networks.
Feng, Yong; Liu, Ming; Wang, Xiaomin; Gong, Haigang
2010-01-01
It is challenging to develop efficient routing protocols for Delay Tolerant Mobile Sensor Networks (DTMSNs), which have several unique characteristics such as sensor mobility, intermittent connectivity, limited energy, and delay tolerability. In this paper, we propose a new routing protocol called Minimum Expected Delay-based Routing (MEDR) tailored for DTMSNs. MEDR achieves good routing performance by finding and using the connected paths formed dynamically by mobile sensors. In MEDR, each sensor maintains two important parameters: Minimum Expected Delay (MED) and its expiration time. According to MED, messages are delivered to the sensor that has at least one connected path to their hosting nodes and the shortest expected delay to communicate directly with the sink node. Because of the changing network topology, such paths are fragile and volatile, so we use the expiration time of MED to indicate the valid time of the path and to avoid wrong transmissions. Simulation results show that the proposed MEDR achieves a higher message delivery ratio with lower transmission overhead and data delivery delay than other DTMSN routing approaches.
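A minimal sketch of the MEDR forwarding rule might look like this; the field names and the behavior when no valid path exists are our own assumptions:

```python
# Each node keeps, for every neighbor, a Minimum Expected Delay (MED)
# toward the sink together with an expiration time after which the
# underlying path is considered stale.

def choose_next_hop(neighbors, now):
    """Pick the neighbor with the smallest unexpired MED, or None."""
    valid = [n for n in neighbors if n["expires_at"] > now]
    if not valid:
        return None  # no usable path: keep carrying the message
    return min(valid, key=lambda n: n["med"])

neighbors = [
    {"id": "s1", "med": 12.0, "expires_at": 50.0},
    {"id": "s2", "med": 7.5,  "expires_at": 80.0},
    {"id": "s3", "med": 3.0,  "expires_at": 10.0},  # stale at now=20
]
print(choose_next_hop(neighbors, now=20.0)["id"])  # s3 expired -> s2
```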
2013-09-30
the Study of the Environmental Arctic Change (SEARCH) Sea Ice Outlook (SIO) effort. The SIO is an international effort to provide a community-wide summary of the expected September Arctic sea ice minimum. Monthly reports released throughout the summer synthesize community estimates of the current state and expected minimum of sea ice. Along with the backbone components of this system (NAVGEM/HYCOM/CICE), other data models have been used to
Extremes in Otolaryngology Resident Surgical Case Numbers: An Update.
Baugh, Tiffany P; Franzese, Christine B
2017-06-01
Objectives: The purpose of this study is to examine the effect of minimum case numbers on otolaryngology resident case log data and understand differences in minimum, mean, and maximum among certain procedures as a follow-up to a prior study. Study Design: Cross-sectional survey using a national database. Setting: Academic otolaryngology residency programs. Subjects and Methods: Review of otolaryngology resident national data reports from the Accreditation Council for Graduate Medical Education (ACGME) resident case log system performed from 2004 to 2015. Minimum, mean, standard deviation, and maximum values for the total number of supervisor and resident surgeon cases and for specific surgical procedures were compared. Results: The mean total number of resident surgeon cases for residents graduating from 2011 to 2015 ranged from 1833.3 ± 484 in 2011 to 2072.3 ± 548 in 2014. The minimum total number of cases ranged from 826 in 2014 to 1004 in 2015. The maximum total number of cases increased from 3545 in 2011 to 4580 in 2015. Multiple key indicator procedures had less than the required minimum reported in 2015. Conclusion: Despite the ACGME instituting required minimums for key indicator procedures, residents have graduated without meeting them. Furthermore, there continue to be large variations in the minimum, mean, and maximum numbers for many procedures. Variation among resident case numbers is likely multifactorial. Ensuring proper instruction on coding and case roles, as well as emphasizing frequent logging by residents, will ensure programs have the most accurate data to evaluate their case volume.
Optimal Chunking of Large Multidimensional Arrays for Data Warehousing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otoo, Ekow J; Otoo, Ekow J.; Rotem, Doron
2008-02-15
Very large multidimensional arrays are commonly used in data-intensive scientific computations as well as in on-line analytical processing applications, referred to as MOLAP. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" The problem of optimal chunking was first introduced by Sarawagi and Stonebraker, who gave an approximate solution. In this paper we develop exact mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic and real-life workloads, show that our solutions are consistently within 2.0 percent of the true number of chunks retrieved for any number of dimensions. In contrast, the approximate solution of Sarawagi and Stonebraker can deviate considerably from the true result with increasing number of dimensions and may also lead to suboptimal chunk shapes.
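As a rough illustration of the optimization problem (not the exact models solved in the paper), the widely used approximation that a query of shape (q1, ..., qd) touches about prod_i (q_i/c_i + 1) chunks of shape (c1, ..., cd) can be brute-forced over 2-D chunk shapes of a fixed size:

```python
def expected_chunks(query, chunk):
    """Approximate expected number of chunks overlapped by a query box
    with random placement: prod_i (q_i / c_i + 1)."""
    e = 1.0
    for q, c in zip(query, chunk):
        e *= q / c + 1.0
    return e

def best_chunk_shape(query, chunk_volume, max_side=4096):
    """Brute-force the 2-D chunk shape with a fixed number of cells that
    minimizes the expected number of chunks for the given query shape."""
    best = None
    for c0 in range(1, max_side + 1):
        if chunk_volume % c0:
            continue  # only exact factorizations of the chunk volume
        c1 = chunk_volume // c0
        if c1 > max_side:
            continue
        e = expected_chunks(query, (c0, c1))
        if best is None or e < best[1]:
            best = ((c0, c1), e)
    return best

# Square queries favor square-ish chunks; skewed queries favor skewed chunks.
print(best_chunk_shape(query=(100, 100), chunk_volume=4096))
print(best_chunk_shape(query=(1000, 10), chunk_volume=4096))
```

The brute force makes the paper's point concrete: the optimal chunk shape tracks the query workload's aspect ratio, so a one-size-fits-all square chunk can be noticeably suboptimal.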
Inflight fuel tank temperature survey data
NASA Technical Reports Server (NTRS)
Pasion, A. J.
1979-01-01
Statistical summaries of the fuel and air temperature data for twelve different routes and for different aircraft models (B747, B707, DC-10 and DC-8), are given. The minimum fuel, total air and static air temperature expected for a 0.3% probability were summarized in table form. Minimum fuel temperature extremes agreed with calculated predictions and the minimum fuel temperature did not necessarily equal the minimum total air temperature even for extreme weather, long range flights.
Li, Xiangyong; Rafaliya, N; Baki, M Fazle; Chaouch, Ben A
2017-03-01
Scheduling of surgeries in operating rooms under limited competing resources such as surgical and nursing staff, anesthesiologists, medical equipment, and recovery beds in surgical wards is a complicated process. A well-designed schedule should be concerned with the welfare of the entire system by allocating the available resources in an efficient and effective manner. In this paper, we develop an integer linear programming model, in a manner useful for multiple goals, for optimally scheduling elective surgeries based on the availability of surgeons and operating rooms over a time horizon. In particular, the model is concerned with the minimization of the following important goals: (1) the anticipated number of patients waiting for service; (2) the underutilization of operating room time; (3) the maximum expected number of patients in the recovery unit; and (4) the expected range (the difference between the maximum and minimum expected number) of patients in the recovery unit. We develop two goal programming (GP) models: a lexicographic GP model and a weighted GP model. The lexicographic GP model schedules operating rooms when various preemptive priority levels are given to these four goals. A numerical study is conducted to illustrate the optimal master-surgery schedule obtained from the models. The numerical results demonstrate that when the available number of surgeons and operating rooms is known without error over the planning horizon, the proposed models can produce good schedules, and that the priority levels and preference weights of the four goals affect the resulting schedules. The results quantify the tradeoffs that must take place as the preemptive weights of the four goals are changed.
Sigma Routing Metric for RPL Protocol.
Sanmartin, Paul; Rojas, Aldo; Fernandez, Luis; Avila, Karen; Jabba, Daladier; Valle, Sebastian
2018-04-21
This paper presents the adaptation of a specific metric for the RPL protocol in the objective function MRHOF. Among the functions standardized by IETF, we find OF0, which is based on the minimum hop count, as well as MRHOF, which is based on the Expected Transmission Count (ETX). However, when the network becomes denser or the number of nodes increases, both OF0 and MRHOF introduce long hops, which can generate a bottleneck that restricts the network. The adaptation is proposed to optimize both OFs through a new routing metric. To solve the above problem, the metrics of the minimum number of hops and the ETX are combined by designing a new routing metric called SIGMA-ETX, in which the best route is calculated using the standard deviation of ETX values between each node, as opposed to working with the ETX average along the route. This method ensures better routing performance in dense sensor networks. The simulations are done through the Cooja simulator, based on the Contiki operating system. The simulations showed that the proposed optimization outperforms both OF0 and MRHOF by a wide margin in terms of network latency, packet delivery ratio, lifetime, and power consumption.
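The core idea, scoring routes by the spread of their per-link ETX values rather than by their average, can be sketched as follows; the route data are hypothetical and the RPL/MRHOF integration is omitted:

```python
import statistics

def sigma_etx(link_etx_values):
    """Population standard deviation of the ETX values along a route."""
    return statistics.pstdev(link_etx_values)

def best_route(routes):
    """Choose the candidate route with the smallest SIGMA-ETX."""
    return min(routes, key=lambda r: sigma_etx(routes[r]))

# Routes made of uniformly good links beat routes mixing very short
# and very long hops, even when the mean ETX is similar.
routes = {
    "uniform_hops": [1.2, 1.3, 1.2, 1.3],   # several consistent links
    "one_long_hop": [1.0, 1.0, 4.5],        # shorter but with a weak link
}
print(best_route(routes))  # -> uniform_hops
```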
Population demographics and genetic diversity in remnant and translocated populations of sea otters
Bodkin, James L.; Ballachey, Brenda E.; Cronin, M.A.; Scribner, K.T.
1999-01-01
The effects of small population size on genetic diversity and subsequent population recovery are theoretically predicted, but few empirical data are available to describe those relations. We use data from four remnant and three translocated sea otter (Enhydra lutris) populations to examine relations among the magnitude and duration of minimum population size, population growth rates, and genetic variation. Mitochondrial (mt)DNA haplotype diversity was correlated with the number of years at minimum population size (r = -0.741, p = 0.038) and minimum population size (r = 0.709, p = 0.054). We found no relation between population growth and haplotype diversity, although growth was significantly greater in translocated than in remnant populations. Haplotype diversity in populations established from two sources was higher than in a population established from a single source and was higher than in the respective source populations. Haplotype frequencies in translocated populations with founding sizes of 4 and 28 differed from expected, indicating genetic drift and differential reproduction between source populations, whereas haplotype frequencies in a translocated population with a founding size of 150 did not. Relations between population demographics and genetic characteristics suggest that genetic sampling of source and translocated populations can provide valuable inferences about translocations.
78 FR 57585 - Minimum Training Requirements for Entry-Level Commercial Motor Vehicle Operators
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... specific minimum number of training hours. Instead, these commenters support a performance-based approach ... support a minimum hours-based approach to training. They stated that FMCSA must specify the minimum number ... Additionally, some supporters of an hours-based training approach believed that the Agency's proposal did not ...
Ibáñez, Javier; Vélez, M Dolores; de Andrés, M Teresa; Borrego, Joaquín
2009-11-01
Distinctness, uniformity and stability (DUS) testing of varieties is usually required when applying for Plant Breeders' Rights. This exam is currently carried out using morphological traits, where the establishment of distinctness through a minimum distance is the key issue. In this study, the possibility of using microsatellite markers for establishing the minimum distance in a vegetatively propagated crop (grapevine) has been evaluated. A collection of 991 accessions was studied with nine microsatellite markers and pair-wise compared, and the highest intra-variety distance and the lowest inter-variety distance were determined. The collection included 489 different genotypes, as well as synonyms and sports. Average values for the number of alleles per locus (19), Polymorphic Information Content (0.764), and observed (0.773) and expected (0.785) heterozygosities indicated the high level of polymorphism existing in grapevine. The maximum intra-variety variability found was one allele between two accessions of the same variety, out of a total of 3,171 pair-wise comparisons. The minimum inter-variety variability found was two alleles between two pairs of varieties, out of a total of 119,316 pair-wise comparisons. Based on these results, the minimum distance required to establish distinctness in grapevine with the nine microsatellite markers used could be set at two alleles. General rules for the use of the system as a support for establishing distinctness in vegetatively propagated crops are discussed.
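The pair-wise distance underlying these comparisons can be sketched as an allele-difference count per locus; the allele sizes below are hypothetical:

```python
from collections import Counter

def allele_distance(genotype_a, genotype_b):
    """Total number of allele differences across all loci. Each locus is
    an unordered pair of allele sizes; Counter subtraction counts the
    alleles of a that are not matched in b."""
    d = 0
    for locus_a, locus_b in zip(genotype_a, genotype_b):
        ca, cb = Counter(locus_a), Counter(locus_b)
        d += sum((ca - cb).values())
    return d

def distinct(genotype_a, genotype_b, minimum_distance=2):
    """Treat two accessions as distinct varieties at >= minimum_distance."""
    return allele_distance(genotype_a, genotype_b) >= minimum_distance

# Two hypothetical grapevine genotypes at three SSR loci (allele sizes in bp).
g1 = [(136, 146), (231, 239), (181, 181)]
g2 = [(136, 146), (231, 243), (181, 187)]
print(allele_distance(g1, g2))  # 2 alleles differ
print(distinct(g1, g2))
```

Under the study's proposed rule, a distance of one allele still falls within intra-variety variability (e.g., a somatic mutation), while two or more alleles mark distinct varieties.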
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
For 1996-2006 (cycle 23), 12-month moving averages of the aa geomagnetic index strongly correlate (r = 0.92) with 12-month moving averages of solar wind speed, and 12-month moving averages of the number of coronal mass ejections (CMEs) (halo and partial halo events) strongly correlate (r = 0.87) with 12-month moving averages of sunspot number. In particular, the minimum (15.8, September/October 1997) and maximum (38.0, August 2003) values of the aa geomagnetic index occur simultaneously with the minimum (376 km/s) and maximum (547 km/s) solar wind speeds, both being strongly correlated with the following recurrent component (due to high-speed streams). The large peak of aa geomagnetic activity in cycle 23, the largest on record, spans the interval late 2002 to mid 2004 and is associated with a decreased number of halo and partial halo CMEs, whereas the smaller secondary peak of early 2005 seems to be associated with a slight rebound in the number of halo and partial halo CMEs. Based on the observed aaM during the declining portion of cycle 23, RM for cycle 24 is predicted to be larger than average, being about 168+/-60 (the 90% prediction interval), whereas based on the expected aam for cycle 24 (greater than or equal to 14.6), RM for cycle 24 should measure greater than or equal to 118+/-30, yielding an overlap of about 128+/-20.
NASA Astrophysics Data System (ADS)
Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa
2018-03-01
Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product design in order to make products simpler, easier, and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. These three tasks should ideally be performed without interactive processing or user intervention, so that product design evaluation can be done automatically. Input for the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.
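The reorientation-minimizing selection step can be sketched as follows; encoding each assembly operation by its insertion direction is our own simplification:

```python
def reorientations(directions):
    """Number of times the insertion direction changes along a sequence."""
    return sum(1 for a, b in zip(directions, directions[1:]) if a != b)

def best_sequence(candidates):
    """candidates: {name: list of insertion directions, e.g. '+z', '-x'}.
    Pick the feasible sequence with the fewest reorientations."""
    return min(candidates, key=lambda name: reorientations(candidates[name]))

candidates = {
    "seq_a": ["+z", "+z", "-x", "+z"],  # 2 reorientations
    "seq_b": ["+z", "+z", "+z", "-x"],  # 1 reorientation
}
print(best_sequence(candidates))  # -> seq_b
```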
NASA Astrophysics Data System (ADS)
Gaidash, A. A.; Egorov, V. I.; Gleim, A. V.
2016-08-01
Quantum cryptography allows distributing secure keys between two users so that any eavesdropping attempt would be immediately discovered. In practice, however, an eavesdropper can obtain key information from multi-photon states when attenuated laser radiation is used as a source of quantum states. To counter such attacks, it is generally suggested to implement special cryptographic protocols, like decoy states or SARG04. In this paper, we describe an alternative method based on monitoring photon-number statistics after detection. We provide a useful rule of thumb for estimating the approximate order of the difference between the expected distribution and the distribution observed in case of an attack. A formula for calculating the minimum number of total pulses or time gaps needed to resolve an attack is given. Formulas for the actual fraction of the raw key known to Eve are also derived. This method can therefore be used with any system, even in combination with the special protocols mentioned above.
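The monitoring idea can be illustrated with a simplified computation: compare the Poisson photon-number distribution expected from an attenuated laser against an observed one, and estimate how many pulses are needed before a deviation of that size stands out from sampling noise. The formulas here are stand-ins, not the ones derived in the paper:

```python
import math

def poisson_pmf(mu, kmax):
    """Photon-number distribution of an attenuated laser with mean mu."""
    return [math.exp(-mu) * mu**k / math.factorial(k) for k in range(kmax + 1)]

def total_variation(p, q):
    """Total variation distance between two photon-number distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def min_pulses(distance, z=3.0):
    """Rough pulse count so that noise ~ z/(2*sqrt(n)) < distance."""
    return math.ceil((z / (2.0 * distance)) ** 2)

expected = poisson_pmf(mu=0.2, kmax=5)
observed = poisson_pmf(mu=0.1, kmax=5)  # e.g. eavesdropper suppressing pulses
d = total_variation(expected, observed)
print(round(d, 4), min_pulses(d))
```

The qualitative point matches the abstract: the smaller the statistical signature of the attack, the more pulses (or time gaps) must be accumulated before it can be resolved.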
Fuzzy α-minimum spanning tree problem: definition and solutions
NASA Astrophysics Data System (ADS)
Zhou, Jian; Chen, Lu; Wang, Ke; Yang, Fan
2016-04-01
In this paper, the minimum spanning tree problem is investigated on graphs with fuzzy edge weights. The notion of the fuzzy α-minimum spanning tree is presented based on the credibility measure, and solutions of the fuzzy α-minimum spanning tree problem are discussed under different assumptions. First, we assume that all the edge weights are triangular fuzzy numbers or trapezoidal fuzzy numbers, and prove that in these two cases the fuzzy α-minimum spanning tree problem can be transformed into a classical problem on a crisp graph, which can be solved in polynomial time by classical algorithms such as Kruskal's and Prim's. Subsequently, for the case in which the edge weights are general fuzzy numbers, a fuzzy simulation-based genetic algorithm using Prüfer number representation is designed for solving the fuzzy α-minimum spanning tree problem. Numerical examples are provided to illustrate the effectiveness of the proposed solutions.
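To make the crisp-graph reduction concrete, here is a minimal sketch that replaces each triangular fuzzy weight with a crisp value and runs Kruskal's algorithm. The stand-in crisp weight used here is the credibility-based expected value (a + 2b + c)/4 of a triangular fuzzy number (a, b, c); the paper's actual transformation depends on the confidence level α, so `triangular_expected` is an illustrative placeholder.

```python
def triangular_expected(a, b, c):
    # Credibility-based expected value of the triangular fuzzy number (a, b, c),
    # used as a stand-in crisp weight (the paper's exact transformation depends
    # on the chosen confidence level alpha).
    return (a + 2 * b + c) / 4.0

def kruskal(n, edges):
    """edges: list of (weight, u, v); returns total weight of an MST on n nodes,
    using a union-find structure with path halving."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0.0
    for w, u, v in sorted(edges):          # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # accept edge if it joins two components
            parent[ru] = rv
            total += w
    return total

# Triangle graph with triangular fuzzy edge weights
fuzzy_edges = [((1, 2, 3), 0, 1), ((2, 4, 6), 1, 2), ((5, 7, 9), 0, 2)]
crisp = [(triangular_expected(*t), u, v) for t, u, v in fuzzy_edges]
print(kruskal(3, crisp))  # → 6.0 (edges 0-1 and 1-2, crisp weights 2.0 + 4.0)
```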
Observation of entanglement witnesses for orbital angular momentum states
NASA Astrophysics Data System (ADS)
Agnew, M.; Leach, J.; Boyd, R. W.
2012-06-01
Entanglement witnesses provide an efficient means of determining the level of entanglement of a system using the minimum number of measurements. Here we demonstrate the observation of two-dimensional entanglement witnesses in the high-dimensional basis of orbital angular momentum (OAM). In this case, the number of potentially entangled subspaces scales as d(d - 1)/2, where d is the dimension of the space. The choice of OAM as a basis is relevant as each subspace is not necessarily maximally entangled, thus providing the necessary state for certain tests of nonlocality. The expectation value of the witness gives an estimate of the state of each two-dimensional subspace belonging to the d-dimensional Hilbert space. These measurements demonstrate the degree of entanglement and therefore the suitability of the resulting subspaces for quantum information applications.
On pressure measurement and seasonal pressure variations during the Phoenix mission
NASA Astrophysics Data System (ADS)
Taylor, Peter A.; Kahanpää, Henrik; Weng, Wensong; Akingunola, Ayodeji; Cook, Clive; Daly, Mike; Dickinson, Cameron; Harri, Ari-Matti; Hill, Darren; Hipkin, Victoria; Polkko, Jouni; Whiteway, Jim
2010-03-01
In situ surface pressures measured at 2 s intervals during the 150 sol Phoenix mission are presented and seasonal variations are discussed. The lightweight Barocap®/Thermocap® pressure sensor system performed moderately well. However, the original data processing routine had problems because the thermal environment of the sensor was subject to more rapid variations than had been expected; hence, the data processing routine was updated after Phoenix landed. Further evaluation and the development of a correction are needed, since the temperature dependences of the Barocap sensor heads have drifted since the calibration of the sensor. The inaccuracy this causes appears when the temperature of the unit rises above 0°C; it frequently affects data in the afternoons and precludes a full study of diurnal pressure variations at that time. Short-term fluctuations, on time scales of order 20 s, are unaffected and are reported in a separate paper in this issue. Seasonal variations are not significantly affected by this problem and show general agreement with previous measurements from Mars. During the 151 sol mission the surface pressure dropped from around 860 Pa to a minimum (daily average) of 724 Pa on sol 140 (Ls 143). This local minimum occurred several sols earlier than expected based on GCM studies and Viking data. Since battery power was lost on sol 151, we are not sure whether the timing of the minimum we saw could have been advanced by a low-pressure meteorological event. On sol 95 (Ls 122), we also saw a relatively low-pressure feature, accompanied by a large number of vertical vortex events characterized by short, localized (in time), low-pressure perturbations.
MISFITS: evaluating the goodness of fit between a phylogenetic model and an alignment.
Nguyen, Minh Anh Thi; Klaere, Steffen; von Haeseler, Arndt
2011-01-01
As models of sequence evolution become more and more complicated, many criteria for model selection have been proposed, and tools are available to select the best model for an alignment under a particular criterion. However, in many instances the selected model fails to explain the data adequately as reflected by large deviations between observed pattern frequencies and the corresponding expectation. We present MISFITS, an approach to evaluate the goodness of fit (http://www.cibiv.at/software/misfits). MISFITS introduces a minimum number of "extra substitutions" on the inferred tree to provide a biologically motivated explanation why the alignment may deviate from expectation. These extra substitutions plus the evolutionary model then fully explain the alignment. We illustrate the method on several examples and then give a survey about the goodness of fit of the selected models to the alignments in the PANDIT database.
Gorban, Alexander N; Pokidysheva, Lyudmila I; Smirnova, Elena V; Tyukina, Tatiana A
2011-09-01
The "Law of the Minimum" states that growth is controlled by the scarcest resource (limiting factor). This concept was originally applied to plant and crop growth (Justus von Liebig, 1840; Salisbury, Plant Physiology, 4th edn., Wadsworth, Belmont, 1992) and has been quantitatively supported by many experiments. Some generalizations based on more complicated "dose-response" curves have been proposed, and violations of this law in natural and experimental ecosystems have also been reported. We study models of adaptation in ensembles of similar organisms under a load of environmental factors and prove that violation of Liebig's law follows from adaptation effects. If the fitness of an organism in a fixed environment satisfies the Law of the Minimum, then adaptation equalizes the pressure of essential factors and therefore acts against Liebig's law. This is the Law of the Minimum paradox: if the Law of the Minimum typically holds for a randomly chosen "organism-environment" pair, then in a well-adapted system we have to expect violations of this law. For the opposite interaction of factors (a synergistic system of factors that amplify each other), adaptation leads from factor equivalence to limitation by a smaller number of factors. For the analysis of adaptation, we develop a system of models based on Selye's idea of a universal adaptation resource (adaptation energy). These models predict that under the load of an environmental factor a population separates into two groups (phases): a less correlated, well-adapted group and a highly correlated group with a larger variance of attributes, which experiences problems with adaptation. Some empirical data are presented and evidence of interdisciplinary applications to econometrics is discussed. © Society for Mathematical Biology 2010
NASA Astrophysics Data System (ADS)
Wyss, B. M.; Wyss, M.
2007-12-01
We estimate that the city of Rangoon and adjacent provinces (Rangoon, Rakhine, Ayeryarwady, Bago) represent an earthquake risk similar in severity to that of Istanbul and the Marmara Sea region. After the M9.3 Sumatra earthquake of December 2004, which ruptured to a point north of the Andaman Islands, the likelihood of additional ruptures in the direction of Myanmar and within Myanmar is increased. This assumption is especially plausible since M8.2 and M7.9 earthquakes in September 2007 extended the 2005 ruptures to the south. Given the dense population of the aforementioned provinces, and the fact that earthquakes of the M7.5 class have historically occurred there (in 1858, 1895, and three in 1930), it would not be surprising if similar-sized earthquakes occurred in the coming decades. Considering that we predicted the extent of human losses in the M7.6 Kashmir earthquake of October 2005 approximately correctly six months before it occurred, it seems reasonable to attempt to estimate losses in future large to great earthquakes in central Myanmar and along its coast of the Bay of Bengal. We have calculated the expected number of fatalities for two classes of events: (1) M8 ruptures offshore, between the Andaman Islands and the Myanmar coast, and along Myanmar's coast of the Bay of Bengal; and (2) M7.5 repeats of the historic earthquakes that occurred in the aforementioned years. These calculations are only order-of-magnitude estimates because all necessary input parameters are poorly known. The population numbers, the condition of the building stock, the regional attenuation law, the local site amplification, and of course the parameters of future earthquakes can only be estimated within wide ranges. For this reason, we give minimum and maximum estimates, both within approximate error limits. 
We conclude that the M8 earthquakes located offshore are expected to be less harmful than the M7.5 events on land: for M8 events offshore, the minimum number of fatalities is estimated as 700 ± 200 and the maximum as 13,000 ± 6,000. For repeats of the historic M7.5 or similar earthquakes, the minimum is 4,000 ± 2,000 and the maximum is 63,000 ± 27,000. An exception is a repeat of the M7.5 earthquake of 1895 beneath the capital Rangoon, which is estimated to have a population of about 4.7 million. In the case of a repeat of the 1895 event, a minimum of 100,000 and a maximum of 1 × 10⁶ fatalities would have to be expected. The number of injured can in all cases be assumed to be about double the number of fatalities. Although it is not very likely that the 1895 event would be repeated in the same location, it is clear that any medium to large earthquake in the vicinity of Rangoon (at a distance similar to that of the M7.2 earthquake of May 1930) could cause a major disaster with more than 10,000 fatalities. In spite of the uncertainties in these estimates, it is clear that the capital of Myanmar, and the provinces surrounding it, will likely experience major earthquake disasters in the future, and the probability that these could occur during the next decades is increased. We conclude that major efforts at mitigation, using earthquake engineering techniques, and preparation of seismological early-warning capabilities should be undertaken in and near Rangoon, as well as in other cities with more than 100,000 inhabitants (e.g., Phatein, Bago and Henzada).
NASA Technical Reports Server (NTRS)
Hall, R. M.
1976-01-01
The minimum operating temperature which avoids adverse low temperature effects, such as condensation, has been determined at a free stream Mach number of 0.85 for flow over a 0.137 meter airfoil mounted at zero incidence in the Langley 1/3 meter transonic cryogenic tunnel. The onset of low temperature effects is established by comparing the pressure coefficient measured at a given orifice for a particular temperature with those measured at temperatures sufficiently above where low temperature effects might be expected to occur. The pressure distributions over the airfoil are presented in tabular form. In addition, the comparisons of the pressure coefficient as a function of total temperature are presented graphically for chord locations of 0, 25, 50, and 75 percent. Over the 1.2 to 4.5 atmosphere total pressure range investigated, low temperature effects are not detected until total temperatures are 2 K, or more, below free stream saturation temperatures.
76 FR 30243 - Minimum Security Devices and Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-24
... DEPARTMENT OF THE TREASURY Office of Thrift Supervision Minimum Security Devices and Procedures.... Title of Proposal: Minimum Security Devices and Procedures. OMB Number: 1550-0062. Form Number: N/A. Description: The requirement that savings associations establish a written security program is necessitated by...
Optimal Alignment of Structures for Finite and Periodic Systems.
Griffiths, Matthew; Niblett, Samuel P; Wales, David J
2017-10-10
Finding the optimal alignment between two structures is important for identifying the minimum root-mean-square distance (RMSD) between them and as a starting point for calculating pathways. Most current algorithms for aligning structures are stochastic, scale exponentially with the size of the structure, and can perform unreliably. We present two complementary methods for aligning structures corresponding to isolated clusters of atoms and to condensed matter described by a periodic cubic supercell. The first method (Go-PERMDIST), a branch-and-bound algorithm, locates the global minimum RMSD deterministically in polynomial time, although the run time increases for larger RMSDs. The second method (FASTOVERLAP) is a heuristic algorithm that aligns structures by finding the global maximum kernel correlation between them using fast Fourier transforms (FFTs) and fast SO(3) transforms (SOFTs). For periodic systems, FASTOVERLAP scales with the square of the number of identical atoms in the system, reliably finds the best alignment between structures that are not too distant, and shows significantly better performance than existing algorithms; the expected run time for Go-PERMDIST is longer. For finite clusters, the FASTOVERLAP algorithm is competitive with existing algorithms. The expected run time for Go-PERMDIST to find the global RMSD between two structures deterministically is generally longer than for existing stochastic algorithms; however, with an earlier exit condition, Go-PERMDIST exhibits similar or better performance.
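For background, the purely rotational part of such an alignment (ignoring the permutational part, which is what Go-PERMDIST and FASTOVERLAP actually address) can be solved in closed form by the standard Kabsch algorithm; a minimal sketch:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Minimum RMSD between point sets P and Q (n x 3, rows in fixed
    correspondence) after removing centroids and applying the optimal
    rotation (standard Kabsch algorithm via SVD)."""
    P = P - P.mean(axis=0)                  # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                             # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum() / len(P))

# A rotated-and-translated copy should align to zero RMSD:
P = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
G = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
print(round(kabsch_rmsd(P, P @ G.T + np.array([1., 2., 3.])), 6))  # → 0.0
```

The hard combinatorial step the paper tackles is finding the best atom permutation; Kabsch is only the inner rotational solve.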
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Margaret; Spurlock, C. Anna; Yang, Hung-Chia
The dual purpose of this project was to contribute to basic knowledge about the interaction between regulation and innovation and to inform the cost and benefit expectations related to technical change which are embedded in the rulemaking process of an important area of national regulation. The area of regulation focused on here is minimum efficiency performance standards (MEPS) for appliances and other energy-using products. Relevant both to U.S. climate policy and energy policy for buildings, MEPS remove certain product models from the market that do not meet specified efficiency thresholds.
The evolution of altruism in spatial threshold public goods games via an insurance mechanism
NASA Astrophysics Data System (ADS)
Zhang, Jianlei; Zhang, Chunyan
2015-05-01
The persistence of cooperation in public goods situations has become an important puzzle for researchers. This paper considers threshold public goods games in which players are offered the option of insurance, from the standpoint of diversification of risk, envisaging the possibility of multiple strategies in such scenarios. In this setting, the provision point is defined as the minimum number of contributors in one threshold public goods game, below which the game fails. In the presence of risk and insurance, more contributions are motivated if (1) only cooperators can opt to be insured, so that their contribution loss in aborted games can be (partly or fully) covered by the insurance; and (2) insured cooperators obtain larger compensation at lower values of the threshold point (the required minimum number of contributors). Moreover, the results suggest the dominance of insured defectors, who are promoted by the more profitable benefits from insurance. We provide results of extensive computer simulations in the realm of spatial games (here, random regular networks and scale-free networks) and support the study with analytical results for well-mixed populations. Our study is expected to establish a causal link between widespread altruistic behaviors and the existing insurance system.
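The provision-point mechanism with insurance can be sketched as a per-round payoff function. All parameter names and the simple refund rule below are illustrative assumptions, not taken from the paper:

```python
def payoff(contributors, group_size, threshold, cost, r, insured, premium, coverage):
    """Per-round payoff of a cooperator in a threshold public goods game with
    optional insurance (a simplified sketch). The game succeeds only if at
    least `threshold` members contribute; on failure, insured cooperators
    recover a fraction `coverage` of their lost contribution."""
    if contributors >= threshold:
        benefit = r * contributors * cost / group_size   # enhanced pot, shared
        return benefit - cost - (premium if insured else 0.0)
    refund = coverage * cost if insured else 0.0          # game aborted
    return refund - cost - (premium if insured else 0.0)

# Failed game (2 contributors < threshold 3): insurance softens the loss.
print(round(payoff(2, 5, 3, cost=1.0, r=3.0, insured=True, premium=0.1, coverage=0.8), 2))  # → -0.3
```

An uninsured cooperator in the same failed round would simply lose the full contribution, which is the comparison driving the mechanism.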
Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.
Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu
2017-01-01
The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases. The Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have proved to be performance guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there is no theoretical work on the performance of EAs for STP. In this article, we reveal that the (1+1) EA achieves 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime [Formula: see text], where [Formula: see text], [Formula: see text], and [Formula: see text] are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1+1) EA is better than two other heuristics on two GSTP instances, and the (1+1) EA may be inefficient on a constructed GSTP instance.
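As background, the (1+1) EA analyzed above keeps a single candidate solution and, in each generation, flips each bit independently with probability 1/n, accepting the offspring if it is at least as fit. A generic sketch on a toy OneMax fitness (standing in for a Steiner-node selection objective; the encoding here is illustrative, not the paper's):

```python
import random

def one_plus_one_ea(fitness, n, max_iters=10000, seed=0):
    """Generic (1+1) EA: single parent bitstring, per-bit mutation rate 1/n,
    offspring replaces parent when its fitness is at least as good."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # flip each bit w.p. 1/n
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return x, fx

# Toy maximization problem: count of ones (OneMax)
best, value = one_plus_one_ea(sum, 20)
print(value)
```

On OneMax the expected runtime is Θ(n log n), so 10,000 iterations comfortably reach the optimum for n = 20; the paper's contribution is proving comparable guarantees for the much harder Steiner objective.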
The Solar System large planets' influence on a new Maunder Minimum
NASA Astrophysics Data System (ADS)
Yndestad, Harald; Solheim, Jan-Erik
2016-04-01
In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity had stopped during a period of 70 years, from 1645 to 1715. Later, reconstructions of solar activity confirmed the grand minima of Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three Maunder-type grand periods and two Dalton-type periods over a thousand years indicates that sooner or later a new Maunder- or Dalton-type period will bring a colder climate on Earth. The causes of these minimum periods are not well understood. If the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum, whereas a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause; if this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods, and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus, and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. 
From this deterministic model we may expect a new Maunder type sunspot minimum period from about 2018 to 2055. The deterministic model of a TSI ACRIM data series from 1700 computes a new Maunder type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton to Maunder type minimum irradiation period from 2047 to 2068.
Tchebichef moment transform on image dithering for mobile applications
NASA Astrophysics Data System (ADS)
Ernawan, Ferda; Abu, Nur Azman; Rahmalan, Hidayah
2012-04-01
Currently, mobile image applications spend a lot of computing resources to display images. A true color raw image contains billions of colors and consumes high computational power in most mobile image applications, while mobile devices are only expected to be equipped with modest processors and minimum storage space. Image dithering is a popular technique to reduce the number of bits per pixel at the expense of lower-quality image displays. This paper proposes a novel approach to image dithering using the 2x2 Tchebichef moment transform (TMT). TMT provides a simple mathematical framework based on matrices, and its coefficients consist of real rational numbers. Image dithering based on TMT has the potential to provide better efficiency and simplicity. A preliminary experiment shows promising results in terms of error reconstruction and image visual texture.
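A minimal sketch of a 2x2 block moment transform of the kind described, using one common orthonormal normalization of the discrete Tchebichef kernel (the paper's exact scaling may differ; the block values are invented):

```python
import numpy as np

# Orthonormal 2x2 discrete Tchebichef kernel (one common normalization):
s = 1.0 / np.sqrt(2.0)
T = np.array([[ s, s],
              [-s, s]])

def tmt_forward(block):
    """Forward 2x2 moment transform of an image block: M = T B T^T."""
    return T @ block @ T.T

def tmt_inverse(moments):
    """Inverse transform; T is orthonormal, so its inverse is its transpose."""
    return T.T @ moments @ T

block = np.array([[100.0, 120.0],
                  [110.0, 130.0]])
moments = tmt_forward(block)
print(np.allclose(tmt_inverse(moments), block))  # → True (exact reconstruction)
```

In a dithering pipeline, quantization would be applied to `moments` before the inverse step; the round trip above only verifies that the kernel itself is lossless.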
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no critical investigation of these issues has been proposed in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined with a Coulter Counter. These results were then fit to a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques when the sample size increased from 1 to 15 samples, both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.
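To illustrate the two sampling schemes, a simplified sketch on a hypothetical bimodal population of "adipocyte diameters" (the data, the two size classes, and the sample size of 15 are all invented for illustration; the study's acceptance-ratio evaluation is not reproduced here):

```python
import random
import statistics

def random_sample(pop, k, seed=0):
    """Simple random sample of k values without replacement."""
    return random.Random(seed).sample(pop, k)

def systematic_sample(pop, k):
    """Systematic sample: sort the population, then take k evenly spaced
    values, guaranteeing coverage of the full size range."""
    ordered = sorted(pop)
    step = len(ordered) / k
    return [ordered[int(i * step)] for i in range(k)]

# Hypothetical bimodal diameter population: two size classes around 30-49 and 80-99
pop = [30 + (i % 20) for i in range(300)] + [80 + (i % 20) for i in range(300)]
sys_mean = statistics.mean(systematic_sample(pop, 15))
rnd_mean = statistics.mean(random_sample(pop, 15, seed=1))
print(round(sys_mean, 2), round(rnd_mean, 2))
```

Because the systematic draw always spans both size classes, its mean stays close to the population mean regardless of the draw, whereas a small random sample can by chance over-represent one class.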
Code of Federal Regulations, 2011 CFR
2011-07-01
...— (1)(i) For a program offered in credit hours, a minimum of 30 weeks of instructional time; or (ii) For a program offered in clock hours, a minimum of 26 weeks of instructional time; and (2) For an undergraduate educational program, an amount of instructional time whereby a full-time student is expected to...
ERIC Educational Resources Information Center
California Community Colleges, Sacramento. Office of the Chancellor.
This document is the fifth edition of Minimum Qualifications for Faculty and Administrators in California Community Colleges, and it updates information presented in the previous edition. The document is divided into the following sections: disciplines requiring a Master's degree, disciplines in which a Master's degree is not generally expected for…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-09
... Proposed Information Collection to OMB Minimum Property Standards for Multifamily and Care-Type Occupancy... Lists the Following Information Title of Proposal: Minimum Property Standards for Multifamily and Care-Type Occupancy Housing. OMB Approval Number: 2502-0321. Form Numbers: None. Description of the Need for...
Gauging the Nearness and Size of Cycle Minimum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1997-01-01
By definition, the conventional onset for the start of a sunspot cycle is the time when the smoothed sunspot number (i.e., the 12-month moving average) has decreased to its minimum value (called the minimum amplitude) prior to the rise to its maximum value (called the maximum amplitude) for the given sunspot cycle. On the basis of the modern era sunspot cycles 10-22, and on the presumption that cycle 22 is a short-period cycle having a cycle length of 120 to 126 months (the observed range of short-period modern era cycles), conventional onset for cycle 23 should not occur until sometime between September 1996 and March 1997, and certainly between June 1996 and June 1997, based on the 95-percent confidence level deduced from the mean and standard deviation of period for the sample of six short-period modern era cycles. Also, because the first occurrence of a new cycle, high-latitude (greater than or equal to 25 degrees) spot has always preceded conventional onset of the new cycle by at least 3 months (for the data-available interval of cycles 12-22), conventional onset for cycle 23 is not expected until about August 1996 or later, based on the first occurrence of a new cycle 23, high-latitude spot during the decline of old cycle 22 in May 1996. Although much excitement for an earlier-occurring minimum (about March 1996) for cycle 23 was voiced earlier this year, the present study shows that this exuberance is unfounded. The decline of cycle 22 continues to favor cycle 23 minimum sometime during the latter portion of 1996 to the early portion of 1997.
An Examination of Sunspot Number Rates of Growth and Decay in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
On the basis of annual sunspot number averages, sunspot number rates of growth and decay are examined relative to both minimum and maximum amplitudes and the time of their occurrences using cycles 12 through present, the most reliably determined sunspot cycles. Indeed, strong correlations are found for predicting the minimum and maximum amplitudes and the time of their occurrences years in advance. As applied to predicting sunspot minimum for cycle 24, the next cycle, its minimum appears likely to occur in 2006, especially if it is a robust cycle similar in nature to cycles 17-23.
12 CFR 618.8020 - Feasibility requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) An analysis of how the program relates to or promotes the institution's business plan and strategic... plan; (2) An analysis of the expected financial returns of the program which, at a minimum, must include an evaluation of market, pricing, competition issues, and expected profitability. This analysis...
Using the Inflection Points and Rates of Growth and Decay to Predict Levels of Solar Activity
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2008-01-01
The ascending and descending inflection points and rates of growth and decay at specific times during the sunspot cycle are examined as predictors of future activity. On average, the ascending inflection point occurs about 1-2 yr after sunspot minimum amplitude (Rm) and the descending inflection point occurs about 6-7 yr after Rm. The ascending inflection point and the inferred slope there, i.e., the 12-mo moving average (12-mma) of ΔR (the month-to-month change in the smoothed monthly mean sunspot number R) at the ascending inflection point, provide strong indications of the expected size of the ongoing cycle's sunspot maximum amplitude (RM), while the descending inflection point appears to provide an indication of the expected length of the ongoing cycle. The value of the 12-mma of ΔR at elapsed time T = 27 mo past the epoch of RM (E(RM)) seems to provide a strong indication of the expected size of Rm for the following cycle. The expected Rm for cycle 24 is 7.6 +/- 4.4 (the 90-percent prediction interval), occurring before September 2008. Evidence is also presented for secular rises in selected cycle-related parameters and for preferential grouping of sunspot cycles by amplitude and/or period.
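The smoothing used throughout these sunspot analyses can be sketched as follows, assuming the conventional 13-point tapered window (half weight on the two end months, full weight on the middle 11, divided by 12), which is one common definition of the "12-mma"; the authors' exact convention may differ:

```python
def smoothed_ssn(r, m):
    """13-point smoothed monthly mean sunspot number centered on month m:
    half weight on months m-6 and m+6, full weight on m-5..m+5, divided by 12."""
    if m < 6 or m > len(r) - 7:
        raise ValueError("need 6 months of data on each side of m")
    return (0.5 * r[m - 6] + sum(r[m - 5:m + 6]) + 0.5 * r[m + 6]) / 12.0

def delta_r(r, m):
    """Month-to-month change of the smoothed number (Delta R)."""
    return smoothed_ssn(r, m) - smoothed_ssn(r, m - 1)

# Sanity check on a constant series: smoothing reproduces the constant,
# and Delta R is zero.
flat = [100.0] * 24
print(smoothed_ssn(flat, 8), delta_r(flat, 8))  # → 100.0 0.0
```

A cycle minimum is then the month at which `smoothed_ssn` reaches its lowest value between two maxima, which is the "conventional onset" definition used above.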
Sousa, F A; da Silva, J A
2000-04-01
The purpose of this study was to verify the relationship between professional prestige scaled through magnitude estimation and professional prestige scaled through estimation of the number of minimum salaries attributed to professions as a function of their prestige in society. Results showed: (1) the relationship between magnitude estimates and estimates of the number of minimum salaries attributed to the professions as a function of their prestige is characterized by a power function with an exponent lower than 1.0; (2) the orderings of the professions by degree of prestige obtained from different experiments involving different samples of subjects are highly concordant (W = 0.85; p < 0.001), considering the modality used as a number (magnitude estimation of minimum salaries).
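A power (Stevens-type) relation with exponent below 1.0, as reported above, can be recovered from paired estimates by least squares in log-log space. A sketch on synthetic data (the exponent 0.7 and prefactor 3.0 are illustrative, not the study's values):

```python
import math

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b performed in log-log space;
    returns (a, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))     # slope in log-log space
    a = math.exp(my - b * mx)                  # intercept back-transformed
    return a, b

# Synthetic magnitude-estimation data with a compressive exponent (< 1.0):
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [3.0 * v ** 0.7 for v in xs]
a, b = fit_power_law(xs, ys)
print(round(a, 3), round(b, 3))  # → 3.0 0.7
```

An exponent below 1.0 means the response grows more slowly than the stimulus, i.e., judged salary compresses relative to judged prestige.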
Long-term performance of minimum-input oak restoration plantings
Elizabeth Bernhardt; Tedmund J. Swiecki
2015-01-01
Starting in 1989, we used minimum-input methods to restore native oaks to parts of their former ranges in Vacaville, California. Each restoration site was analyzed, and only those inputs deemed necessary to overcome expected limiting factors for oak establishment were used. We avoided unnecessary inputs that added to cost and could have unintended negative consequences...
Defining worthwhile and desired responses to treatment of chronic low back pain.
Yelland, Michael J; Schluter, Philip J
2006-01-01
To describe patients' perceptions of minimum worthwhile and desired reductions in pain and disability upon commencing treatment for chronic low back pain, a descriptive study was nested within a community-based randomized controlled trial of prolotherapy injections and exercises, with a total of 110 participants with chronic low back pain. Prior to treatment, participants were asked what minimum percentage reductions in pain and disability would make treatment worthwhile and what percentage reductions in pain and disability they desired with treatment. Median (inter-quartile range) minimum worthwhile reductions were 25% (20%, 50%) for pain and 35% (20%, 50%) for disability, compared with desired reductions of 80% (60%, 100%) for pain and 80% (50%, 100%) for disability. The internal consistency between pain and disability responses was high (Spearman's coefficients of association of 0.81 and 0.87, respectively). A significant association existed between minimum worthwhile reductions and desired reductions, but no association was found between these two factors and patient age, gender, pain severity or duration, disability, anxiety, depression, response to treatment, or treatment satisfaction. Inquiring directly about patients' expectations of reductions in pain and disability is important in establishing realistic treatment goals and setting benchmarks for success: there is a wide disparity between the reductions patients regard as minimum worthwhile and the reductions they hope to achieve, but a high internal consistency between the pain and disability reductions they expect.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.
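The test-planning step described above can be sketched with the standard normal-approximation sample-size formula. This is a minimal illustration, not the Division's actual procedure; the effect size and standard deviation below are assumed values.

```python
import math
from statistics import NormalDist

def min_subjects(sigma, delta, alpha=0.05, power=0.80):
    """Normal-approximation sample size for detecting a mean shift of
    `delta` in a response with standard deviation `sigma`,
    at two-sided significance `alpha` and the given power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# Hypothetical values: sd of 0.15 in proportion-correct scores,
# smallest difference worth detecting 0.10
print(min_subjects(0.15, 0.10))   # 18
```

Raising the target power or shrinking the detectable difference increases the required number of subjects, which is why fixing these targets during planning, rather than after the fact, matters.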
Securing Safety - Spaceflight Standards for the Mass Market
NASA Astrophysics Data System (ADS)
Goh, G.
The projected total revenue of the space tourism industry is expected to exceed USD 1 billion by 2021. The vast economic potential of space tourism has fuelled ambitious plans for commercial orbital and suborbital flights, in addition to longer-duration spaceflights on board the International Space Station (ISS) and other planned orbiting habitats. International and national legal frameworks are challenged to provide regulations to ensure minimum standards of spaceflight safety for a high-risk activity that aims to enter the mainstream tourism market. Thrown into the mix are various considerations of the number of spaceflight participants per flight, the economic viability of stringent safety standards, the plethora of possible flight vehicles and the compensation mechanism in case of violations of safety regulations. This paper surveys the legal challenges in the regulation of safety in commercial manned spaceflight, including issues of jurisdiction, authorization, licensing and liability. Drawing on analogous developments in other fields of law related to international carriage, a safety regulation framework with minimum international standards is proposed. This proposed framework considers both accident avoidance and emergency response in light of international legal, policy and economic perspectives.
30 CFR 250.616 - Blowout prevention equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... pressure rating of the BOP system and system components shall exceed the expected surface pressure to which they may be subjected. If the expected surface pressure exceeds the rated working pressure of the... pressure limitations that will be applied during each mode of pressure control. (b) The minimum BOP system...
Beliefs and social behavior in a multi-period ultimatum game
Azar, Ofer H.; Lahav, Yaron; Voslinsky, Alisa
2015-01-01
We conduct a multi-period ultimatum game in which we elicit players' beliefs. Responders do not predict accurately the amount that will be offered to them, and do not get better in their predictions over time. At the individual level we see some effect of the mistake in expectations in the previous period on the responder's expectation about the offer in the current period, but this effect is relatively small. The proposers' beliefs about the minimum amount that responders will accept are significantly higher than the minimum amount responders believe will be accepted by other responders. The proposer's belief about the minimal acceptable offer does not change following a rejection. Nevertheless, the proposer's offer in the next period does increase following a rejection. The probability of rejection increases when the responder has higher expectations about the amount that will be offered to them or higher beliefs about the minimal amount that other responders will accept. PMID:25762909
Numerical solution of open string field theory in Schnabl gauge
NASA Astrophysics Data System (ADS)
Arroyo, E. Aldo; Fernandes-Silva, A.; Szitas, R.
2018-01-01
Using traditional Virasoro L_0 level-truncation computations, we evaluate the open bosonic string field theory action up to level (10, 30). Extremizing this level-truncated potential, we construct a numerical solution for tachyon condensation in Schnabl gauge. We find that the energy associated with the numerical solution overshoots the expected value -1 at level L = 6. Extrapolating the level-truncation data for L ≤ 10 to estimate the vacuum energies for L > 10, we predict that the energy reaches a minimum value at L ≈ 12 and then turns back to approach -1 asymptotically as L → ∞. Furthermore, we analyze the tachyon vacuum expectation value (vev): extrapolating its level-truncation data, we predict that the tachyon vev reaches a minimum value at L ≈ 26 and then turns back to approach the expected analytical result as L → ∞.
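The extrapolation logic can be illustrated with a toy fit. The data below are synthetic, generated from an assumed model E(L) = -1 - a/L + b/L² (hypothetical coefficients, not the paper's actual level-truncation values); the point is only to show how fitting a polynomial in 1/L locates the minimum analytically.

```python
import numpy as np

# Assumed toy model: E(L) = -1 - a/L + b/L^2, which dips below -1
# and then returns toward -1 as L -> infinity.
a_true, b_true = 1.2, 7.2                      # hypothetical coefficients
L = np.array([2.0, 4.0, 6.0, 8.0, 10.0])       # truncation levels
E = -1 - a_true / L + b_true / L**2            # synthetic "data"

# Fit E as a quadratic in x = 1/L:  E = c2*x^2 + c1*x + c0.
c2, c1, c0 = np.polyfit(1 / L, E, 2)
# Stationary point: dE/dL = -c1/L^2 - 2*c2/L^3 = 0  =>  L* = -2*c2/c1
L_star = -2 * c2 / c1
print(round(L_star, 3))   # 12.0 for these toy coefficients
```

With the chosen coefficients the fitted minimum sits at L* = 2b/a = 12, mirroring the kind of turning point the abstract reports near L ≈ 12.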
46 CFR 11.705 - Route familiarization requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... limitations specified in this section, the number of round trips required to qualify an applicant for a... endorsement as first-class pilot shall furnish evidence of having completed a minimum number of round trips... sought. Evidence of having completed a minimum number of round trips while serving as an observer...
46 CFR 11.705 - Route familiarization requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... limitations specified in this section, the number of round trips required to qualify an applicant for a... endorsement as first-class pilot shall furnish evidence of having completed a minimum number of round trips... sought. Evidence of having completed a minimum number of round trips while serving as an observer...
46 CFR 11.705 - Route familiarization requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... limitations specified in this section, the number of round trips required to qualify an applicant for a... endorsement as first-class pilot shall furnish evidence of having completed a minimum number of round trips... sought. Evidence of having completed a minimum number of round trips while serving as an observer...
Novel Driving Control of Power Assisted Wheelchair Based on Minimum Jerk Trajectory
NASA Astrophysics Data System (ADS)
Seki, Hirokazu; Sugimoto, Takeaki; Tadakuma, Susumu
This paper describes a novel trajectory control scheme for power assisted wheelchair. Human input torque patterns are always intermittent in power assisted wheelchairs, therefore, the suitable trajectories must be generated also after the human decreases his/her input torque. This paper tries to solve this significant problem based on minimum jerk model minimizing the changing rate of acceleration. The proposed control system based on minimum jerk trajectory is expected to improve the ride quality, stability and safety. Some experiments show the effectiveness of the proposed method.
Regulating the medical loss ratio: implications for the individual market.
Abraham, Jean M; Karaca-Mandic, Pinar
2011-03-01
To provide state-level estimates of the size and structure of the US individual market for health insurance and to investigate the potential impact of new medical loss ratio (MLR) regulation in 2011, as indicated by the Patient Protection and Affordable Care Act (PPACA). Using data from the National Association of Insurance Commissioners, we provided state-level estimates of the size and structure of the US individual market from 2002 to 2009. We estimated the number of insurers expected to have MLRs below the legislated minimum and their corresponding enrollment. In the case of noncompliant insurers exiting the market, we estimated the number of enrollees that may be vulnerable to major coverage disruption given poor health status. In 2009, using a PPACA-adjusted MLR definition, we estimated that 29% of insurer-state observations in the individual market would have MLRs below the 80% minimum, corresponding to 32% of total enrollment. Nine states would have at least one-half of their health insurers below the threshold. If insurers below the MLR threshold exit the market, major coverage disruption could occur for those in poor health; we estimated the range to be between 104,624 and 158,736 member-years. The introduction of MLR regulation as part of the PPACA has the potential to significantly affect the functioning of the individual market for health insurance.
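The compliance arithmetic behind the MLR threshold can be sketched simply. This is a deliberate simplification: the actual PPACA calculation also credits quality-improvement spending, adjusts for taxes and fees, and uses multi-year averaging; the dollar figures below are hypothetical.

```python
def mlr_rebate(premiums, claims, quality_spend=0.0, minimum=0.80):
    """Simplified PPACA-style rebate: an insurer below the minimum MLR
    rebates the shortfall times premium revenue."""
    mlr = (claims + quality_spend) / premiums
    return max(0.0, (minimum - mlr) * premiums), mlr

# Hypothetical insurer: $10M premiums, $7.2M claims -> MLR 0.72
rebate, mlr = mlr_rebate(premiums=10_000_000, claims=7_200_000)
print(round(mlr, 2), round(rebate))   # 0.72, rebate of 800000
```

An insurer 8 points below the 80% individual-market minimum would owe roughly 8% of premiums back to enrollees, which is the economic pressure behind the exit scenarios the study estimates.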
NASA Astrophysics Data System (ADS)
Taherkhani, Mohammand Amin; Navi, Keivan; Van Meter, Rodney
2018-01-01
Quantum-aided Byzantine agreement is an important distributed quantum algorithm with unique features in comparison to classical deterministic and randomized algorithms, requiring only a constant expected number of rounds in addition to giving a higher level of security. In this paper, we analyze details of the high-level multi-party algorithm, and propose elements of the design for the quantum architecture and circuits required at each node to run the algorithm on a quantum repeater network (QRN). Our optimization techniques have reduced the quantum circuit depth by 44% and the number of qubits in each node by 20% for a minimum five-node setup compared to the design based on the standard arithmetic circuits. These improvements lead to a quantum system architecture with 160 qubits per node, space-time product (an estimate of the required fidelity) KQ ≈ 1.3×10^5 per node, and an error threshold of 1.1×10^-6 for the total nodes in the network. The evaluation of the designed architecture shows that to execute the algorithm once on the minimum setup, we need to successfully distribute a total of 648 Bell pairs across the network, spread evenly between all pairs of nodes. This framework can be considered a starting point for establishing a road-map for a light-weight demonstration of a distributed quantum application on QRNs.
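A quick check of the resource figures quoted above: 648 Bell pairs spread evenly over the links of a fully connected five-node setup gives the per-link budget directly.

```python
from math import comb

nodes = 5
total_bell_pairs = 648                # total quoted in the abstract
links = comb(nodes, 2)                # distinct node pairs in a full mesh
per_link = total_bell_pairs / links
print(links, per_link)                # 10 links, 64.8 Bell pairs each
```

So each of the 10 node pairs must successfully share about 65 Bell pairs per execution, which sets the entanglement-distribution rate a QRN demonstration would need.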
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, S R; Walter, W R
The behavior of aftershock sequences around the Nevada Test Site in the southern Great Basin is characterized as a potential discriminant between explosions and earthquakes. The aftershock model designed by Reasenberg and Jones (1989, 1994) allows for a probabilistic statement of earthquake-like aftershock behavior at any time after the mainshock. We use this model to define two types of aftershock discriminants. The first defines M{sub X}, the minimum magnitude of an aftershock expected within a given duration after the mainshock with probability X. Of the 67 earthquakes with M > 4 in the study region, 63 produce an aftershock greater than M{sub 99} within the first seven days after the mainshock. This is contrasted with only six of 93 explosions with M > 4 that produce an aftershock greater than M{sub 99} for the same period. If the aftershock magnitude threshold is lowered and the M{sub 90} criterion is used, then no explosions produce an aftershock greater than M{sub 90} for durations that end more than 17 days after the mainshock. The other discriminant defines N{sub X}, the minimum cumulative number of aftershocks expected for a given time after the mainshock with probability X. Similar to the aftershock magnitude discriminant, five earthquakes do not produce more aftershocks than N{sub 99} within 7 days after the mainshock. However, within the same period all but one explosion produce fewer aftershocks than N{sub 99}; one more explosion is added if the duration is shortened to two days after the mainshock. The cumulative-number aftershock discriminant is more reliable, especially at short durations, but requires a low magnitude of completeness for the given earthquake catalog. These results at NTS are quite promising and should be evaluated at other nuclear test sites to understand the effects of differences in geologic setting and nuclear testing practices on its performance.
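The Reasenberg–Jones model underlying both discriminants combines a Gutenberg–Richter magnitude term with a modified Omori time decay. The sketch below uses generic illustrative parameter values (a, b, c, p are assumptions, not the NTS calibration) to compute the expected aftershock count in a time window and the Poisson probability of seeing at least one event.

```python
import math

def expected_aftershocks(mag_main, mag_min, t1, t2,
                         a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= mag_min in the
    window [t1, t2] days after a mainshock of magnitude mag_main, using
    the Reasenberg-Jones rate
        lambda(t, M) = 10**(a + b*(mag_main - M)) / (t + c)**p.
    Parameter values here are illustrative generic ones."""
    rate_amp = 10 ** (a + b * (mag_main - mag_min))
    # closed-form integral of (t + c)**(-p) from t1 to t2 (p != 1)
    integral = ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return rate_amp * integral

n = expected_aftershocks(mag_main=5.0, mag_min=3.0, t1=0.0, t2=7.0)
p_at_least_one = 1 - math.exp(-n)   # Poisson chance of >= 1 aftershock
print(n, p_at_least_one)
```

Thresholds such as M{sub 99} or N{sub 99} then fall out of the same rate: they are the magnitude or count at which this Poisson probability reaches 0.99 for the chosen window.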
Lenormand, Maxime; Huet, Sylvie; Deffuant, Guillaume
2012-01-01
We use a minimum requirement approach to derive the number of jobs in proximity services per inhabitant in French rural municipalities. We first classify the municipalities according to their time distance in minutes by car to the municipality where the inhabitants most frequently go for services (the MFM). For each set corresponding to a range of time distance to the MFM, we perform a quantile regression estimating the minimum number of service jobs per inhabitant, which we interpret as an estimate of the number of proximity jobs per inhabitant. We observe that the minimum number of service jobs per inhabitant is smaller in small municipalities. Moreover, for municipalities of similar sizes, the number of proximity-service jobs per inhabitant increases with distance to the MFM.
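The "minimum requirement" idea can be sketched with a low-quantile estimate per size class: the bottom of the jobs-per-inhabitant distribution within a class is read as the floor needed to serve purely local demand. The data below are synthetic (assumed gamma-distributed ratios for two hypothetical size classes), standing in for the census records the study uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic jobs-per-inhabitant ratios for two hypothetical size classes.
small = rng.gamma(shape=2.0, scale=0.01, size=500)
large = 0.01 + rng.gamma(shape=2.0, scale=0.01, size=500)  # shifted up

# Minimum requirement ~ a low quantile of the within-class distribution
# (the paper uses quantile regression; a per-class quantile is the
# simplest analogue).
min_small = np.quantile(small, 0.05)
min_large = np.quantile(large, 0.05)
print(min_small < min_large)   # larger places support more proximity jobs
```

A quantile regression of jobs-per-inhabitant on municipality size generalizes this: instead of binning, it fits the 5th-percentile line directly as a function of size.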
Younis, Mustafa Z; Jabr, Samer; Smith, Pamela C; Al-Hajeri, Maha; Hartmann, Michael
2011-01-01
Academic research investigating health care costs in the Palestinian region is limited. Therefore, this study examines the costs of the cardiac catheterization unit of one of the largest hospitals in Palestine. We focus on costs of a cardiac catheterization unit and the increasing number of deaths over the past decade in the region due to cardiovascular diseases (CVDs). We employ cost-volume-profit (CVP) analysis to determine the unit's break-even point (BEP), and investigate expected benefits (EBs) of Palestinian government subsidies to the unit. Findings indicate variable costs represent 56 percent of the hospital's total costs. Based on the three functions of the cardiac catheterization unit, results also indicate that the number of patients receiving services exceed the break-even point in each function, despite the unit receiving a government subsidy. Our findings, although based on one hospital, will permit hospital management to realize the importance of unit costs in order to make informed financial decisions. The use of break-even analysis will allow area managers to plan minimum production capacity for the organization. The economic benefits for patients and the government from the unit may encourage government officials to focus efforts on increasing future subsidies to the hospital.
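The CVP break-even calculation used in the study reduces to dividing fixed costs by the contribution margin per case; a lump-sum subsidy effectively offsets fixed costs. The figures below are hypothetical, not the Palestinian hospital's actual numbers.

```python
def break_even_units(fixed_costs, price, variable_cost, subsidy=0.0):
    """Cases per period needed for revenue to cover costs.
    A lump-sum government subsidy offsets fixed costs."""
    margin = price - variable_cost        # contribution margin per case
    return (fixed_costs - subsidy) / margin

# Hypothetical catheterization-unit figures (per year, arbitrary currency)
print(break_even_units(fixed_costs=120_000, price=400, variable_cost=250))
# 800 cases/year without subsidy
print(break_even_units(120_000, 400, 250, subsidy=30_000))
# 600 cases/year with a 30,000 subsidy
```

This is the sense in which the study's finding, patient volumes above the break-even point in every function, implies the unit covers its costs even before the subsidy is counted.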
Muskellunge growth potential in northern Wisconsin: implications for trophy management
Faust, Matthew D.; Isermann, Daniel A.; Luehring, Mark A.; Hansen, Michael J.
2015-01-01
The growth potential of Muskellunge Esox masquinongy was evaluated by back-calculating growth histories from cleithra removed from 305 fish collected during 1995–2011 to determine whether it was consistent with trophy management goals in northern Wisconsin. Female Muskellunge had a larger mean asymptotic length (49.8 in) than did males (43.4 in). Minimum ultimate size of female Muskellunge (45.0 in) equaled the 45.0-in minimum length limit, but was less than the 50.0-in minimum length limit used on Wisconsin's trophy waters, while the minimum ultimate size of male Muskellunge (34.0 in) was less than the statewide minimum length limit. Minimum reproductive sizes for both sexes were less than Wisconsin's trophy minimum length limits. Mean growth potential of female Muskellunge in northern Wisconsin appears to be sufficient for meeting trophy management objectives and angler expectations. Muskellunge in northern Wisconsin had similar growth potential to those in Ontario populations, but lower growth potential than Minnesota's populations, perhaps because of genetic and environmental differences.
ERIC Educational Resources Information Center
Bayerlein, Leopold; Timpson, Mel
2017-01-01
Purpose: The purpose of this paper is to assess the overall alignment of undergraduate accounting degree programmes from all Certified Practicing Accountants Australia and Chartered Accountants Australia and New Zealand accredited higher education providers in Australia with the profession's minimum educational expectations (MEEs).…
On the critical flame radius and minimum ignition energy for spherical flame initiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zheng; Burke, M. P.; Ju, Yiguang
2011-01-01
Spherical flame initiation from an ignition kernel is studied theoretically and numerically using different fuel/oxygen/helium/argon mixtures (fuel: hydrogen, methane, and propane). The emphasis is placed on investigating the critical flame radius controlling spherical flame initiation and its correlation with the minimum ignition energy. It is found that the critical flame radius is different from the flame thickness and the flame ball radius and that their relationship depends strongly on the Lewis number. Three different flame regimes in terms of the Lewis number are observed and a new criterion for the critical flame radius is introduced. For mixtures with a Lewis number larger than a critical Lewis number above unity, the critical flame radius is smaller than the flame ball radius but larger than the flame thickness. As a result, the minimum ignition energy can be substantially over-predicted (under-predicted) based on the flame ball radius (the flame thickness). The results also show that the minimum ignition energy for successful spherical flame initiation is proportional to the cube of the critical flame radius. Furthermore, preferential diffusion of heat and mass (i.e. the Lewis number effect) is found to play an important role in both spherical flame initiation and flame kernel evolution after ignition. It is shown that the critical flame radius and the minimum ignition energy increase significantly with the Lewis number. Therefore, for transportation fuels with large Lewis numbers, blending of small molecule fuels or thermal and catalytic cracking will significantly reduce the minimum ignition energy.
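The cubic scaling reported above follows from modeling the ignition energy as the enthalpy needed to heat a sphere of the critical radius. The sketch below uses illustrative property values (density, heat capacity, temperature rise are assumptions) purely to exhibit the r³ dependence.

```python
import math

def ignition_energy(r_c, rho=1.2, cp=1005.0, delta_t=1800.0):
    """Minimum ignition energy modeled as the enthalpy to heat a sphere
    of critical radius r_c (m) by delta_t (K); rho in kg/m^3,
    cp in J/(kg K). Property values are illustrative for air-like gas."""
    return (4.0 / 3.0) * math.pi * r_c**3 * rho * cp * delta_t

# The cubic dependence: doubling the critical radius costs 8x the energy
ratio = ignition_energy(2e-3) / ignition_energy(1e-3)
print(round(ratio))   # 8
```

This is why the Lewis-number sensitivity matters so much in practice: a modest increase in critical radius is amplified eightfold per doubling in the energy an igniter must deliver.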
Number of minimum-weight code words in a product code
NASA Technical Reports Server (NTRS)
Miller, R. L.
1978-01-01
Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
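For binary codes the counting result can be checked by brute force on a small example: take two copies of the [3,2] even-weight code (minimum distance 2, with 3 minimum-weight words each). The product code then has minimum distance 2·2 = 4, and its minimum-weight words are the products of minimum-weight words, so 3·3 = 9 of them are expected.

```python
import itertools
import numpy as np

G = np.array([[1, 1, 0],
              [0, 1, 1]])              # generator of the even-weight code
Gp = np.kron(G, G) % 2                 # generator of the product code

# Enumerate all nonzero codewords of the [9,4] product code
weights = []
for m in itertools.product([0, 1], repeat=Gp.shape[0]):
    cw = (np.array(m) @ Gp) % 2
    if cw.any():
        weights.append(int(cw.sum()))

print(min(weights), weights.count(min(weights)))   # 4 9
```

The enumeration confirms the tensor-product structure: every weight-4 codeword has a 2×2 rectangular support, i.e. it is the outer product of a weight-2 word from each factor code.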
Minimum number of measurements for evaluating Bertholletia excelsa.
Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E
2017-09-27
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
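The step from a repeatability coefficient to a minimum number of measurements uses the standard formula m = R²(1-r) / ((1-R²)r), where r is the repeatability and R² the desired accuracy of prediction. The r value below is hypothetical, chosen only to reproduce the abstract's three-measurements-at-85% pattern.

```python
import math

def measurements_needed(r, r2_target=0.85):
    """Repeated measurements m needed so the mean of m measurements
    predicts a genotype's true value with coefficient of determination
    r2_target, given repeatability r: m = R2*(1-r) / ((1-R2)*r)."""
    m = r2_target * (1 - r) / ((1 - r2_target) * r)
    return math.ceil(m)

print(measurements_needed(r=0.66))   # 3 measurements at 85% accuracy
print(measurements_needed(r=0.50))   # 6 measurements for a noisier trait
```

The comparison shows why the estimation method matters: the higher r estimated by the CPCOV procedure translates directly into fewer years of fruit-yield measurements needed to rank genotypes reliably.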
NASA Astrophysics Data System (ADS)
Kitagawa, M.; Yamamoto, Y.
1987-11-01
An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.
25 CFR 547.14 - What are the minimum technical standards for electronic random number generation?
Code of Federal Regulations, 2011 CFR
2011-04-01
... CLASS II GAMES § 547.14 What are the minimum technical standards for electronic random number generation... rules of the game. For example, if a bingo game with 75 objects with numbers or other designations has a... serial correlation (outcomes shall be independent from the previous game); and (x) Test on subsequences...
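The serial-correlation requirement quoted above (outcomes independent of the previous game) can be illustrated with a basic lag-1 autocorrelation check. This is a minimal sketch of one such test, not the full battery a certification lab would run; the 0.05 pass threshold is an assumption for illustration.

```python
import random

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation of a sequence: a basic
    serial-correlation check on successive RNG outcomes."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

rng = random.Random(12345)                  # seeded for reproducibility
draws = [rng.randrange(75) for _ in range(10_000)]   # e.g. 75 bingo objects
r1 = lag1_autocorrelation(draws)
print(abs(r1) < 0.05)   # a sound generator shows near-zero correlation
```

For n independent draws the statistic is approximately normal with standard deviation 1/√n (about 0.01 here), so a correlation anywhere near 0.05 would flag a generator whose outcomes depend on the previous game.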
Code of Federal Regulations, 2010 CFR
2010-04-01
... INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR EDUCATION THE INDIAN SCHOOL EQUALIZATION PROGRAM Administrative Procedures, Student Counts, and Verifications § 39.214 What is the minimum number of instructional...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... Information Collection: Comment Request; Minimum Property Standards for Multifamily and Care-Type Facilities...: Minimum Property Standards for Multifamily and Care-type facilities. OMB Control Number, if applicable... Housing and Urban Development (HUD) developed the Minimum Property Standards (MPS) program in order to...
Cummings, Mark
2017-07-01
In 2014, the American Osteopathic Association (AOA) and the American Association of Colleges of Osteopathic Medicine signed a memorandum of understanding (MOU) with the Accreditation Council for Graduate Medical Education (ACGME) to create a unified accreditation system for graduate medical education (GME) under the ACGME. The AOA will cease to accredit GME programs on June 30, 2020. By then, AOA-accredited programs need to apply for and achieve ACGME initial accreditation. The terms of the MOU also made it advantageous for some formerly nonteaching hospitals to establish AOA programs, chiefly in primary care, as a step toward future ACGME accreditation.In transitioning AOA programs to the ACGME system, hospitals with osteopathic GME can expect to encounter challenges related to major differences between AOA and ACGME standards. The minimum numbers of residents for ACGME programs in most specialties are greater than those for AOA programs, which will require hospitals that may already be at their federal caps to add additional residency positions. ACGME standards are also more faculty- and staff-intensive and require additional infrastructure, necessitating additional financial investments. In addition, greater curricular specificity in ACGME standards will generate new educational and financial challenges.To address these challenges, hospitals may need to reallocate resources and positions among their current AOA programs, reducing the number of programs (and specialties) they sponsor. It is expected that a number of established and new AOA programs will choose not to pursue ACGME accreditation or will fail to qualify for ACGME initial accreditation.
Design verification test matrix development for the STME thrust chamber assembly
NASA Technical Reports Server (NTRS)
Dexter, Carol E.; Elam, Sandra K.; Sparks, David L.
1993-01-01
This report presents the results of the test matrix development for design verification at the component level for the National Launch System (NLS) space transportation main engine (STME) thrust chamber assembly (TCA) components: injector, combustion chamber, and nozzle. A systematic approach was used to develop the minimum recommended TCA matrix, resulting in a minimum number of hardware units and hot-fire tests.
Studies on laminar boundary-layer receptivity to freestream turbulence near a leading edge
NASA Technical Reports Server (NTRS)
Kendall, James M.
1991-01-01
An experimental study of the generation of Tollmien-Schlichting waves and wave packets in a flat-plate boundary-layer by weak freestream turbulence has been conducted with the intent of clarifying receptivity mechanisms. Emphasis was placed upon the properties of such waves at stations as far forward as the minimum critical Reynolds number. It was found that alteration of the flow about the leading edge, due either to an asymmetry associated with lift, or due to a change of the fineness ratio of the leading edge, altered the T-S wave amplitude at early stations. The subsequent growth of the waves proceeded faster than expected according to certain stability theory results. Speculation regarding receptivity mechanisms is made.
Start of Eta Car's X-ray Minimum
NASA Technical Reports Server (NTRS)
Corcoran, Michael F.; Liburd, Jamar; Hamaguchi, Kenji; Gull, Theodore; Madura, Thomas; Teodoro, Mairan; Moffat, Anthony; Richardson, Noel; Russell, Chris; Pollock, Andrew;
2014-01-01
Analysis of Eta Car's X-ray spectrum in the 2-10 keV band, using quicklook data from the X-Ray Telescope on Swift, shows that the flux on July 30, 2014 was (4.9 ± 2.0) × 10^-12 erg s^-1 cm^-2. This flux is nearly equal to the X-ray minimum flux seen by RXTE in 2009, 2003.5, and 1998, and indicates that Eta Car has reached its X-ray minimum, as expected based on the 2024-day period derived from previous 2-10 keV observations with RXTE.
Revisioning a clinical nurse specialist curriculum in 3 specialty tracks.
Arslanian-Engoren, Cynthia; Sullivan, Barbara-Jean; Struble, Laura
2011-01-01
The objective of the present study was to revise 3 clinical nurse specialist (CNS) educational tracks with current National Association of Clinical Nurse Specialist core competencies and educational expectations. National curricula recommendations include core competencies by the 3 spheres of influence. Advanced practice registered nurses consensus model educational requirements include a minimum of 500 faculty-supervised clinical hours; separate graduate courses in pharmacology, pathophysiology, and advanced physical assessment; and content in differential diagnosis disease management, decision making, and role preparation. This educational initiative was designed to (1) align with core competencies and advanced practice registered nurse consensus model recommendations, (2) create an innovative learning environment, (3) meet the needs of diverse student populations, (4) align with emerging doctor of nursing practice programs, (5) create a high-efficiency and high-quality environment to manage human and fiscal resources, and (6) reduce duplication of efforts. Courses were revised that did not meet current CNS educational preparation expectations. A total of 11 didactic and clinical sequences courses were developed for the 3 tracks to (1) ensure minimum numbers of clinical hours; (2) expand content on health promotion and risk reduction, advanced practice nurse role, and the healthcare delivery system; (3) consolidate clinical courses; and (4) resequence foundational content before beginning clinical courses. Revisioning a CNS curriculum in 3 specialty tracks is challenging but doable using innovative and creative approaches. The innovative process used to revise our CNS curriculum will assist nurse educators faced with similar program delivery challenges to meet future directions for educating CNS students in advanced nursing practice. Copyright © 2011 Lippincott Williams & Wilkins.
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations.
© 2012 American Association of Physicists in Medicine.
Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E
2017-09-01
- A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. - To establish new benchmark data on IHC laboratory practices. - A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. - The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. - Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.
10 CFR 905.16 - What are the requirements for the minimum investment report alternative?
Code of Federal Regulations, 2010 CFR
2010-01-01
... number, email and Website if applicable, and contact person; (2) Authority or requirement to undertake a..., in writing, a minimum investment report every 5 years. (h) Maintaining minimum investment reports. (1...
12 CFR 615.5336 - Compliance and reporting.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) The conditions or circumstances leading to the institution's falling below minimum levels, the... expected to generate additional earnings; (vi) The effect of the business changes required to increase...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-16
... on 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex... (GPS) Y RWY 20, Amdt 1B Cambridge, MN, Cambridge Muni, Takeoff Minimums and Obstacle DP, Orig Pipestone, MN, Pipestone Muni, NDB RWY 36, Amdt 7, CANCELLED Rushford, MN, Rushford Muni, Takeoff Minimums and...
Lee, Dominic J O'
2015-04-15
Dual mechanical braiding experiments provide a useful tool with which to investigate the nature of interactions between rod-like molecules, for instance actin and DNA. In conditions close to molecular condensation, one would expect the appearance of a local minimum in the interaction potential between the two molecules. We investigate this situation by introducing an attractive component into the interaction potential, using a model developed for describing such experiments. We consider both attractive interactions that do not depend on molecular structure and those that depend on a DNA-like helix structure. In braiding experiments, an attractive term may lead to several effects. A local minimum may cause molecules to collapse from a loosely braided configuration into a tight one, occurring at a critical value of the moment applied about the axis of the braid. For a fixed number of braid pitches, this may lead to coexistence between the two braiding states, tight and loose. Coexistence implies that certain proportions of the braid are in each state, their relative sizes depending on the number of braid pitches; this manifests itself as a linear dependence of numerically calculated quantities on the number of braid pitches. Also, in the collapsed state, the braid radius stays roughly constant. Furthermore, if the attractive interaction is helix dependent, the symmetry between left- and right-handed braids is broken. For a DNA-like charge distribution, using the Kornyshev-Leikin interaction model, our results suggest that significant braid collapse and coexistence occur only for left-handed braids. Regardless of the interaction model, the study highlights the possible qualitative physics of braid collapse and coexistence, and the role that helix-specific forces might play. The model could be used to connect other microscopic theories of interaction with braiding experiments.
Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network
Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan
2014-01-01
Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which use a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented separately for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems. For the multisensor multitarget problem, which is NP-complete, candidate orientations are selected so that a sensor's rotation covers every target falling in its FoV disk, and a genetic algorithm then yields an approximate minimum subset of sensors covering all targets in the network. Simulation results show the algorithms' performance and the effect of the number of targets on the resulting subset. PMID:25136667
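As a rough illustration of the expected coverage value described above, the sketch below scores a directional sensor against a target using the deflection angle from the sensor's orientation and the distance to the target. The linear decay, the FoV half-angle test, and the function and parameter names are all assumptions made for illustration, not the paper's actual formulation.

```python
import math

def expected_coverage(sensor_pos, sensor_dir, target_pos,
                      fov=math.pi / 3, radius=10.0):
    """Score in [0, 1] for how well a directional sensor covers a target.

    Illustrative model only: the score decays linearly with the
    deflection angle and with distance; 0 if outside the FoV or range.
    """
    dx = target_pos[0] - sensor_pos[0]
    dy = target_pos[1] - sensor_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return 0.0
    bearing = math.atan2(dy, dx)
    # smallest absolute difference between orientation and bearing
    deflection = abs((bearing - sensor_dir + math.pi) % (2 * math.pi) - math.pi)
    if deflection > fov / 2:
        return 0.0
    return (1 - deflection / (fov / 2)) * (1 - dist / radius)
```

A target dead ahead at half the sensing range scores 0.5 under this toy model, while a target outside the FoV scores 0.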
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There exist two fundamental approaches to multicast routing: minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects receivers and sources using a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluate multicast performance in small and large wireless mesh networks. In small networks, when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases, and the SPTs have the lowest end-to-end delay and average jitter in almost all cases. In large networks, the MSTs provide the minimum total edge cost and minimum number of transmissions. One drawback of SPTs is that when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
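The shortest path trees compared above are conventionally built with Dijkstra's algorithm. The generic sketch below (the adjacency-list representation and function name are our own, not tied to the paper's simulator) returns the SPT as a parent map plus the distances:

```python
import heapq

def shortest_path_tree(graph, source):
    """Dijkstra's algorithm returning the SPT rooted at `source`.

    graph: {node: [(neighbor, cost), ...]} with non-negative costs.
    Returns (parent, dist): parent pointers defining the tree, and
    the shortest-path cost to each reachable node.
    """
    dist = {source: 0}
    parent = {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent, dist
```

Note that an SPT minimizes each receiver's path cost individually, whereas an MCT/Steiner tree minimizes total edge cost, which is why the two can differ markedly under load.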
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, we examine the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size.
Helicopter rotor trailing edge noise. [noise prediction
NASA Technical Reports Server (NTRS)
Schlinker, R. H.; Amiet, R. K.
1981-01-01
A two dimensional section of a helicopter main rotor blade was tested in an acoustic wind tunnel at close to full-scale Reynolds numbers to obtain boundary layer data and acoustic data for use in developing an acoustic scaling law and testing a first principles trailing edge noise theory. Results were extended to the rotating frame coordinate system to develop a helicopter rotor trailing edge noise prediction. Comparisons of the calculated noise levels with helicopter flyover spectra demonstrate that trailing edge noise contributes significantly to the total helicopter noise spectrum at high frequencies. This noise mechanism is expected to control the minimum rotor noise. In the case of noise radiation from a local blade segment, the acoustic directivity pattern is predicted by the first principles trailing edge noise theory. Acoustic spectra are predicted by a scaling law which includes Mach number, boundary layer thickness and observer position. Spectrum shape and sound pressure level are also predicted by the first principles theory but the analysis does not predict the Strouhal value identifying the spectrum peak.
2017-01-01
Population demography is central to fundamental ecology and for predicting range shifts, decline of threatened species, and spread of invasive organisms. There is a mismatch between most demographic work, carried out on few populations and at local scales, and the need to predict dynamics at landscape and regional scales. Inspired by concepts from landscape ecology and Markowitz’s portfolio theory, we develop a landscape portfolio platform to quantify and predict the behavior of multiple populations, scaling up the expectation and variance of the dynamics of an ensemble of populations. We illustrate this framework using a 35-y time series on gypsy moth populations. We demonstrate the demography accumulation curve in which the collective growth of the ensemble depends on the number of local populations included, highlighting a minimum but adequate number of populations for both regional-scale persistence and cross-scale inference. The attainable set of landscape portfolios further suggests tools for regional population management for both threatened and invasive species. PMID:29109261
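The portfolio-style scaling of ensemble expectation and variance can be illustrated with a small sketch. This is a generic illustration of the idea, not the authors' landscape portfolio platform; `ensemble_stats` and `accumulation_curve` are hypothetical names.

```python
def ensemble_stats(series_list):
    """Mean and sample variance of the summed (ensemble) abundance.

    series_list: one abundance time series per local population, all
    of equal length. Portfolio effect: asynchronous populations
    reduce the variance of the ensemble total.
    """
    totals = [sum(vals) for vals in zip(*series_list)]
    n = len(totals)
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / (n - 1)
    return mean, var

def accumulation_curve(series_list):
    """Ensemble variance as populations are added one at a time,
    mimicking a demography accumulation curve."""
    return [ensemble_stats(series_list[: k + 1])[1]
            for k in range(len(series_list))]
```

Two perfectly anticorrelated populations, for example, give an ensemble total with zero variance, the extreme case of the buffering the accumulation curve is meant to reveal.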
NASA Astrophysics Data System (ADS)
Lopresto, James C.; Mathews, John; Manross, Kevin
1995-12-01
Calcium K plage, H alpha plage and sunspot area have been monitored daily on the Internet since November of 1992. The plage and sunspot areas have been measured by image processing. The purpose of the project is to investigate the degree of correlation between plage area and solar irradiance. The plage variation shows the expected variation produced by solar rotation and the longer secular changes produced by the solar cycle. The H alpha plage and sunspot areas reached a minimum in about late 1994 or early 1995. This is in agreement with the K2 spectral index obtained daily from Sacramento Peak Observatory. The Calcium K plage area minimum seems delayed with respect to the others mentioned above; the minimum of the K line plage area is projected to come within the last few months of 1995.
2016-12-01
At YOS 12, you expect to be an O-4. In 2016 dollars, you expect a monthly base pay of $7,081.50, meaning the minimum continuation pay for committing to active duty for four more years... [Master's thesis, December 2016; title: RELATIONSHIP BETWEEN TIMING OF MULTIPLE...]
Practical implementation of channelized Hotelling observers: effect of ROI size
NASA Astrophysics Data System (ADS)
Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-03-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
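For readers unfamiliar with the figure of merit involved, the CHO detectability index can be computed from channelized image samples roughly as follows. This is a textbook-style sketch assuming Gaussian statistics and pre-computed channel outputs; it is not the authors' implementation and omits the Gabor channelization step itself.

```python
import numpy as np

def cho_dprime(v_sig, v_bkg):
    """Channelized Hotelling observer detectability index d'.

    v_sig, v_bkg: (n_samples, n_channels) arrays of channel outputs
    for signal-present and signal-absent images, respectively.
    """
    ds = v_sig.mean(0) - v_bkg.mean(0)             # mean channel difference
    S = 0.5 * (np.cov(v_sig.T) + np.cov(v_bkg.T))  # pooled channel covariance
    w = np.linalg.solve(S, ds)                     # Hotelling template
    return float(np.sqrt(ds @ w))                  # d' = sqrt(ds' S^-1 ds)
```

With four independent unit-variance channels each shifted by one unit between classes, d' comes out near 2, which is the kind of FOM whose stability degrades when the ROI is made too small.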
32 CFR 32.44 - Procurement procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... acceptable characteristics or minimum acceptable standards. (iv) The specific features of “brand name or... expected to exceed the simplified acquisition threshold, specifies a “brand name” product. (4) The proposed...
... food restrictions with a minimum of stress. Reduce Holiday Stress by Educating Others ... But things can be especially difficult during the holidays, when people's expectations of one another are high ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-12
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and..., Takeoff Minimums and Obstacle DP, Amdt 2 Perham, MN, Perham Muni, RNAV (GPS) RWY 13, Orig Perham, MN, Perham Muni, RNAV (GPS) RWY 31, Amdt 1 Perham, MN, Perham Muni, Takeoff Minimums and Obstacle DP, Amdt 1...
Alarcón, Diego; Cavieres, Lohengrin A
2015-01-01
In order to assess the effects of climate change in temperate rainforest plants in southern South America in terms of habitat size, representation in protected areas, considering also if the expected impacts are similar for dominant trees and understory plant species, we used niche modeling constrained by species migration on 118 plant species, considering two groups of dominant trees and two groups of understory ferns. Representation in protected areas included Chilean national protected areas, private protected areas, and priority areas planned for future reserves, with two thresholds for minimum representation at the country level: 10% and 17%. With a 10% representation threshold, national protected areas currently represent only 50% of the assessed species. Private reserves are important since they increase up to 66% the species representation level. Besides, 97% of the evaluated species may achieve the minimum representation target only if the proposed priority areas were included. With the climate change scenario representation levels slightly increase to 53%, 69%, and 99%, respectively, to the categories previously mentioned. Thus, the current location of all the representation categories is useful for overcoming climate change by 2050. Climate change impacts on habitat size and representation of dominant trees in protected areas are not applicable to understory plants, highlighting the importance of assessing these effects with a larger number of species. Although climate change will modify the habitat size of plant species in South American temperate rainforests, it will have no significant impact in terms of the number of species adequately represented in Chile, where the implementation of the proposed reserves is vital to accomplish the present and future minimum representation. Our results also show the importance of using migration dispersal constraints to develop more realistic future habitat maps from climate change predictions.
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness subject to a predefined maximum risk tolerance and a minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables, and their vagueness is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
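The downside-risk and asymmetry measures named in the model can be illustrated on a crisp (non-fuzzy) return series. The sketch below is a simplification: it computes only the sample mean, downside semivariance, and skewness, leaving out the fuzzy random machinery of the paper; the function name is our own.

```python
def mean_semivariance_skewness(returns):
    """Sample mean, downside semivariance, and skewness of returns.

    Semivariance averages squared deviations below the mean only,
    so it penalizes downside risk without counting upside dispersion.
    """
    n = len(returns)
    mu = sum(returns) / n
    semivar = sum(min(r - mu, 0.0) ** 2 for r in returns) / n
    var = sum((r - mu) ** 2 for r in returns) / n
    skew = (sum((r - mu) ** 3 for r in returns) / n) / var ** 1.5 if var else 0.0
    return mu, semivar, skew
```

In the model above, skewness is the objective while semivariance (risk) and the mean (return) enter as constraints.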
75 FR 39500 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-09
... with ``Badge and vehicle control records that at a minimum include; name, Social Security Number (SSN... system: Badge and vehicle control records that at a minimum include; name, Social Security Number (SSN... maintenance of the system: 10 U.S.C. 8013, Secretary of the Air Force, Powers and Duties; Department of...
2015-12-15
Atmospheric behavior from the ground to space under solar minimum and solar maximum conditions (Contract No. BAA-76-11-01; Grant No. N00173-12-1-G010, NRL). Project summary: Dynamical response to solar radiative forcing is a crucial and poorly understood mechanism. We propose to study the impacts of large dynamical events...
On Making a Distinguished Vertex Minimum Degree by Vertex Deletion
NASA Astrophysics Data System (ADS)
Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes
For directed and undirected graphs, we study the problem of making a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". In contrast, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding non-existence results when replacing vertex cover number by treewidth or feedback vertex set number.
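The underlying combinatorial problem can be stated concretely with a brute-force checker. The code below is exponential-time and purely illustrative of the problem definition for the undirected case; it does not implement the fixed-parameter algorithms of the paper, and the function name is our own.

```python
from itertools import combinations

def min_deletions_unique_min_degree(n, edges, v):
    """Fewest vertex deletions (never deleting v itself) that make v
    the unique minimum-degree vertex of an undirected graph.

    n: number of vertices 0..n-1; edges: list of (a, b) pairs.
    Brute force over all deletion sets, smallest first.
    """
    others = [u for u in range(n) if u != v]
    for k in range(len(others) + 1):
        for removed in combinations(others, k):
            kept = set(range(n)) - set(removed)
            deg = {u: 0 for u in kept}
            for a, b in edges:
                if a in kept and b in kept:
                    deg[a] += 1
                    deg[b] += 1
            # v must have strictly smaller degree than every other kept vertex
            if all(deg[v] < deg[u] for u in kept if u != v):
                return k
    return None
```

On a triangle, for instance, no single deletion breaks the degree tie, so two deletions are needed, already hinting at why the optimization version is hard in general.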
NASA Astrophysics Data System (ADS)
Lee, H.; Sheen, D.; Kim, S.
2013-12-01
The b-value of the Gutenberg-Richter relation is an important parameter widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using a Monte Carlo approach. One is the least-squares method (LSM), which minimizes the observation error. The others are based on the maximum likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population of one million events with magnitudes from 2.0 to 7.0 at intervals of 0.1 was generated for the Monte Carlo simulation. Samples, whose number was increased from 25 to 1000, were drawn randomly from the parent population, and the resampling procedure was repeated 1000 times with different random seeds. The mean and standard deviation of the b-value were estimated for each group of samples of the same size. As expected, the more samples were used, the more stable the estimated b-value. However, for a small number of events, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. Utsu's (1965) method was found to give the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude can be critical for estimating the correct b-value with Utsu's (1965) and Page's (1968) methods when magnitudes are binned into intervals. We therefore applied Utsu's (1965) method to estimate the b-value from two instrumental earthquake catalogs containing events that occurred around the southern part of the Korean Peninsula from 1978 to 2011.
By a careful choice of the minimum magnitude, the b-values of the earthquake catalogs of the Korea Meteorological Administration and Kim (2012) are estimated to be 0.72 and 0.74, respectively.
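Utsu's (1965) maximum likelihood estimator, favored by the study above, has a closed form: b = log10(e) / (mean(M) - Mmin), with Mmin shifted down by half the bin width when magnitudes are binned. A minimal sketch (the function name and interface are our own):

```python
import math

def b_value_utsu(mags, m_min, dm=0.0):
    """Maximum-likelihood b-value (Utsu, 1965).

    mags: catalog magnitudes; only events with M >= m_min are used.
    dm: magnitude bin width (0 for continuous magnitudes); the
    m_min - dm/2 correction accounts for binning.
    """
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2))
```

The sensitivity of this estimate to m_min is exactly why the study stresses a careful choice of the minimum magnitude.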
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
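For the underdetermined case described above (far fewer sensors than source voxels), the Tikhonov-regularized minimum norm solution has a standard closed form that can be evaluated in the small sensor space. The sketch below is a generic illustration, not the authors' implementation; the lead-field matrix `L`, data vector `y`, and function name are assumptions.

```python
import numpy as np

def min_norm_tikhonov(L, y, lam=1e-2):
    """Tikhonov-regularized minimum norm estimate.

    Solves argmin_x ||y - L x||^2 + lam ||x||^2 for an
    underdetermined lead-field L (m sensors x n sources, m << n),
    via the sensor-space form x = L' (L L' + lam I)^-1 y.
    """
    m = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(m), y)
```

As lam -> 0 this reduces to the classical minimum norm (pseudoinverse) solution; a positive lam trades some data fit for stability against measurement noise.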
High Tensile Strength Amalgams for In-Space Fabrication and Repair
NASA Technical Reports Server (NTRS)
Grugel, Richard N.
2006-01-01
Amalgams are well known for their use in dental practice as a tooth filling material. They have a number of useful attributes that include room temperature fabrication, corrosion resistance, dimensional stability, and very good compressive strength. These properties well serve dental needs but, unfortunately, amalgams have extremely poor tensile strength, a feature that severely limits other potential applications. Improved material properties (strength and temperature) of amalgams may have application to the freeform fabrication of repairs or parts that might be necessary during an extended space mission. Advantages would include, but are not limited to: the ability to produce complex parts, a minimum number of processing steps, minimum crew interaction, high yield - minimum wasted material, reduced gravity compatibility, minimum final finishing, safety, and minimum power consumption. The work presented here shows how the properties of amalgams can be improved by changing particle geometries in conjunction with novel engineering metals.
Predicting the Size and Timing of Sunspot Maximum for Cycle 24
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2010-01-01
For cycle 24, the minimum value of the 12-month moving average (12-mma) of the AA-geomagnetic index in the vicinity of sunspot minimum (AAm) appears to have occurred in September 2009, measuring about 8.4 nT and following sunspot minimum by 9 months. This is the lowest value of AAm ever recorded, falling below that of 8.9 nT, previously attributed to cycle 14, which also is the smallest maximum amplitude (RM) cycle of the modern era (RM = 64.2). Based on the method of Ohl (the preferential association between RM and AAm for an ongoing cycle), one expects cycle 24 to have RM = 55+/-17 (the +/-1-sigma prediction interval). Instead, using a variation of Ohl's method based on 2-cycle moving averages (2-cma), one expects cycle 23's 2-cma of RM to be about 115.5+/-8.7 (the +/-1-sigma prediction interval), inferring an RM of about 62+/-35 for cycle 24. Hence, it seems clear that cycle 24 will be smaller in size than cycle 23 (RM = 120.8) and likely will be comparable in size to cycle 14. From the Waldmeier effect (the preferential association between the ascent duration (ASC) and RM for an ongoing cycle), one expects cycle 24 to be a slow-rising cycle (ASC >= 48 months), having RM occurrence after December 2012, unless it turns out to be a statistical outlier.
Actual and future trends of extreme values of temperature for the NW Iberian Peninsula
NASA Astrophysics Data System (ADS)
Taboada, J.; Brands, S.; Lorenzo, N.
2009-09-01
It is now well established that yearly averaged temperatures are increasing due to anthropogenic climate change, and this trend has also been detected in the area of Galicia (NW Spain). The main objective of this work is to assess current and future trends of different extreme temperature indices, which are of crucial importance for many impact studies. Station data for the study were provided by the CLIMA database of the regional government of Galicia (NW Spain). As direct GCM output significantly underestimates the variance of daily surface temperature variables in NW Spain, these variables are obtained by applying a statistical downscaling technique (analog method), using 850 hPa temperature and mean sea level pressure as combined predictors. The predictor fields have been extracted from three GCMs participating in the IPCC AR4 under the B1, A1B, and A2 scenarios. The definitions of the extreme indices have been taken from the joint CCl/CLIVAR/JCOMM Expert Team on Climate Change Detection and Indices (ETCCDI), which has defined a set of standard extreme indices to simplify intercomparisons of data from different regions of the world. For temperatures in the period 1960-2006, results show a significant increase in the number of days with maximum temperatures above the 90th percentile and a significant decrease in the number of days with maximum temperatures below the 10th percentile. The tendencies of minimum temperatures are reversed: fewer nights with minimum temperatures below the 10th percentile, and more with minimum temperatures above the 90th percentile. These tendencies can be observed throughout the year but are more pronounced in summer. We have also calculated the relationship between the above-mentioned extreme indices and different teleconnection patterns appearing in the North Atlantic area. Results show that local tendencies are associated with trends of the EA (Eastern Atlantic) and SCA (Scandinavian) patterns.
The NAO (North Atlantic Oscillation) also shows some relationship with these tendencies, but only for cold days and nights in winter. The results of the applied statistical downscaling technique indicate that observed trends in maximum and minimum temperatures in NW Spain are expected to continue in the coming decades because of anthropogenic climate change. The common tendency is that hot days increase while cold nights diminish throughout the year. As expected, these tendencies differ between scenarios: they are more marked for the A2 and A1B scenarios than for the B1 scenario. Moreover, the three models behave differently under the same scenario, leaving considerable uncertainty about the future. Nevertheless, we conclude that more frequent hot days, as well as an increasing probability of summertime heat waves, are to be expected in the coming decades. Cold days tend to diminish, decreasing the probability of wintertime cold waves and leaving a greater part of the area under study without frost days throughout the year.
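A percentile-based warm-days index of the kind standardized by the ETCCDI can be sketched as follows. This is a simplified illustration (the operational ETCCDI indices use calendar-day percentile windows over a fixed base period; the data here are synthetic, and the function name `tx90p` is used only by analogy with the index name):

```python
import numpy as np

def tx90p(tmax, base):
    """Fraction of days whose maximum temperature exceeds the 90th
    percentile of a base-period sample (a simplified 'warm days' index)."""
    p90 = np.percentile(base, 90)
    return float(np.mean(np.asarray(tmax) > p90))

rng = np.random.default_rng(0)
base = rng.normal(20.0, 5.0, 1000)   # synthetic base-period daily Tmax (deg C)
warm = base + 1.0                    # the same days, uniformly 1 deg C warmer
print(round(tx90p(base, base), 2))   # 0.1 by construction
print(tx90p(warm, base) > tx90p(base, base))  # True: warming inflates the index
```

The complementary cold-days index (days below the 10th percentile) would simply flip the comparison.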
Practical implementation of Channelized Hotelling Observers: Effect of ROI size
Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-01-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO’s performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO’s performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies. PMID:28943699
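The finding that the minimum ROI size is dictated by truncation of the lower-frequency Gabor channels can be illustrated numerically. In this sketch (not the authors' code), the link `sigma = beta / f` between envelope width and center frequency, and all parameter values, are assumptions for illustration: the lower the channel frequency, the wider its envelope, and the more energy a fixed-size ROI cuts off.

```python
import numpy as np

def gabor(width, f, theta, beta=1.0):
    """2-D Gabor channel: Gaussian envelope times a cosine carrier.
    The envelope width is assumed to scale inversely with frequency f."""
    sigma = beta / f
    half = width // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * f * u)

def truncated_energy_fraction(width, f):
    """Fraction of the filter's energy retained inside a width x width ROI,
    a proxy for how badly a small ROI truncates a low-frequency channel."""
    big = gabor(4 * width + 1, f, 0.0)    # near-full support reference
    small = gabor(width, f, 0.0)          # ROI-limited version
    return (small**2).sum() / (big**2).sum()

# A 33-pixel ROI keeps almost all of a high-frequency channel's energy
# but truncates a low-frequency channel severely.
print(truncated_energy_fraction(33, 1/8) > truncated_energy_fraction(33, 1/32))
```

This is consistent with the paper's observation that the minimum ROI size grows as lower-frequency channels are added.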
46 CFR 169.549 - Ring lifebuoys and water lights.
Code of Federal Regulations, 2014 CFR
2014-10-01
(a)(1) The minimum number of life buoys and the minimum number to which water lights must be … chapter and be international orange in color. (2) Each water light must be approved under subpart 161.010 …
46 CFR 169.549 - Ring lifebuoys and water lights.
Code of Federal Regulations, 2012 CFR
2012-10-01
(a)(1) The minimum number of life buoys and the minimum number to which water lights must be … chapter and be international orange in color. (2) Each water light must be approved under subpart 161.010 …
2016-09-01
… Laboratory … Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS), by JL Cogan … analysis. As expected, accuracy generally tended to decline as the large-scale data aged, but appeared to improve slightly as the age of the large … Table 7: Minimum and maximum mean RMDs for each WRF time (or GFS data age) category …
Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J
2017-09-01
Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when the minimum duration was <0.5 s and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when the minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as the minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
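The threshold-plus-minimum-duration detection described above can be sketched in a few lines. This is a minimal illustration, not the authors' processing pipeline; the sample trace and thresholds are invented for the example (10-Hz samples, so one sample spans 0.1 s):

```python
def count_efforts(velocity, threshold, min_duration, dt=0.1):
    """Count efforts where velocity stays >= threshold for at least
    min_duration seconds, given samples spaced dt seconds apart."""
    min_samples = round(min_duration / dt)
    count = run = 0
    for v in velocity + [None]:          # sentinel flushes the final run
        if v is not None and v >= threshold:
            run += 1
        else:
            if run >= min_samples:
                count += 1
            run = 0
    return count

trace = [0, 5, 5, 5, 0, 5, 0, 5, 5, 5, 5, 5, 0]   # m/s, 10-Hz samples
print(count_efforts(trace, 4.17, 0.3))  # 2: runs of 3 and 5 samples qualify
print(count_efforts(trace, 4.17, 0.5))  # 1: only the 5-sample run qualifies
```

The example makes the paper's point concrete: the same trace yields different effort counts purely from the minimum-duration setting.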
On dependence of seismic activity on 11 year variations in solar activity and/or cosmic rays
NASA Astrophysics Data System (ADS)
Zhantayev, Zhumabek; Khachikyan, Galina; Breusov, Nikolay
2014-05-01
It has been found in recent decades that the seismic activity of the Earth tends to increase with decreasing solar activity (increasing cosmic rays). A good example of this effect may be the growing number of catastrophic earthquakes during the recent, rather long solar minimum. Such results support the idea that a solar-lithosphere relationship exists, which is, no doubt, part of the total pattern of solar-terrestrial relationships. The physical mechanism of solar-terrestrial relationships is not yet developed. It is believed at present that one of the main contenders for such a mechanism may be the global electric circuit (GEC): vertical current loops piercing and electrodynamically coupling all geospheres. It is also believed that the upper boundary of the GEC is located at the magnetopause, where the magnetic field of the solar wind reconnects with the geomagnetic field, allowing solar wind energy to penetrate into the earth's environment. The effectiveness of GEC operation depends on the intensity of cosmic rays (CR), which ionize the air in the middle atmosphere and provide its conductivity. In connection with the foregoing, it can be expected that: (i) quantitatively, the increase in seismic activity from solar maximum to solar minimum may be in the same range as the increase in CR flux; and (ii) in those regions of the globe where the crust is threaded by magnetic field lines with L ≈ 2.0, which are populated by anomalous cosmic rays (ACR), the relationship of seismic activity with variations in solar activity will be manifested most clearly, since there is a pronounced dependence of ACR on solar activity variations.
Checking assumption (i) with data from the global seismological catalog of the NEIC, USGS for 1973-2010, it was found that the yearly number of earthquakes with magnitude M≥4.5 varies within the 11-year solar cycle over a quantitative range of about 7-8%, increasing toward solar minimum, which agrees both qualitatively and quantitatively with the variations of CR over the 11-year solar cycle. Checking assumption (ii), it was found that during the period from 1973 to 2010, twenty earthquakes with magnitude M≥7.0 occurred in seismic areas where geomagnetic field lines with L = 2.0-2.2 are anchored in the earth's crust. Surprisingly, all of these strong earthquakes occurred only during the declining phase of the 11-year solar cycle and were absent during the ascending phase. This result supports expectation (ii) and can be taken into account for forecasting strong earthquake occurrence in seismic areas where the crust is threaded by geomagnetic field lines with L ≈ 2.0. In conclusion, the results support the modern idea that earthquake occurrence is related to the operation of the global electric circuit, but more research is required to study this problem in more detail.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population sizes of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential, where the length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
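One of the adjustment steps listed above, the correction for unequal sex ratios, has a standard closed form (Wright's formula); the numbers below are illustrative, not from the paper:

```python
def ne_unequal_sex_ratio(n_males, n_females):
    """Effective population size under an unequal sex ratio
    (Wright's formula): Ne = 4 * Nm * Nf / (Nm + Nf)."""
    return 4.0 * n_males * n_females / (n_males + n_females)

print(ne_unequal_sex_ratio(25, 25))  # 50.0: equal sexes, Ne equals census size
print(ne_unequal_sex_ratio(10, 40))  # 32.0: the same 50 animals, skewed 1:4
```

This shows why the census number needed to reach Ne = 50 exceeds 50 whenever breeding is unevenly distributed between the sexes.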
How many surgery appointments should be offered to avoid undesirable numbers of 'extras'?
Kendrick, T; Kerry, S
1999-04-01
Patients seen as 'extras' (or 'fit-ins') are usually given less time for their problems than those in pre-booked appointments. Consequently, long queues of 'extras' should be avoided. The aim was to determine whether a predictable relationship exists between the number of available appointments at the start of the day and the number of extra patients who must be fitted in, which might be used to help plan a practice appointment system. Numbers of available appointments at the start of the day and numbers of 'extras' seen were recorded prospectively in 1995 and 1997 in one group general practice. Minimum numbers of available appointments at the start of the day, below which undesirably large numbers of extra patients could be predicted, were determined using logistic regression applied to the 1995 data. The predictive values of the minimum numbers calculated for 1995, in terms of predicting undesirable numbers of 'extras', were then determined when applied to the 1997 data. Numbers of extra patients seen correlated negatively with available appointments at the start of the day for all days of the week, with coefficients ranging from -0.66 to -0.80. Minimum numbers of available appointments below which undesirably large numbers of extras could be predicted were 26 for Mondays and four for the other weekdays. When applied to 1997 data, these minimum numbers gave positive and negative predictive values of 76% and 82% respectively, similar to their values for 1995, despite increases in patient attendance and changes in the day-to-day pattern of surgery provision between the two years. A predictable relationship exists between the number of available appointments at the start of the day and the number of extras who must be fitted in, which may be used to help plan the appointment system for some years ahead, at least in this relatively stable suburban practice.
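The logistic-regression approach can be sketched as follows. The coefficients `b0` and `b1` below are hypothetical stand-ins (the paper does not report its fitted coefficients); the point is only the mechanics of turning a fitted model into a minimum-appointments threshold:

```python
import math

def p_undesirable(appts, b0=3.0, b1=-0.25):
    """Hypothetical fitted logistic model: probability of an undesirable
    number of 'extras', given appointments available at the start of the day."""
    return 1 / (1 + math.exp(-(b0 + b1 * appts)))

def min_appointments(p_max=0.5):
    """Smallest number of start-of-day appointments keeping the predicted
    probability of an undesirable day below p_max."""
    n = 0
    while p_undesirable(n) >= p_max:
        n += 1
    return n

print(min_appointments())  # 13 with these illustrative coefficients
```

With a real dataset, `b0` and `b1` would be fitted per weekday, yielding day-specific thresholds like the 26 (Mondays) versus 4 (other weekdays) reported above.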
Varying the forcing scale in low Prandtl number dynamos
NASA Astrophysics Data System (ADS)
Brandenburg, A.; Haugen, N. E. L.; Li, Xiang-Yu; Subramanian, K.
2018-06-01
Small-scale dynamos are expected to operate in all astrophysical fluids that are turbulent and electrically conducting, for example the interstellar medium, stellar interiors, and accretion disks, where they may also be affected by, or compete with, large-scale dynamos. However, the possibility of small-scale dynamos being excited at small and intermediate ratios of viscosity to magnetic diffusivity (the magnetic Prandtl number) has been debated, and the possibility of their depending on the large-scale forcing wavenumber has been raised. Here we show, using four values of the forcing wavenumber, that the small-scale dynamo does not depend on the scale separation between the size of the simulation domain and the integral scale of the turbulence, i.e., the forcing scale. Moreover, the spectral bottleneck in turbulence, which has been implicated in raising the excitation conditions of small-scale dynamos, is found to be invariant under changes of the forcing wavenumber. However, when forcing at the lowest few wavenumbers, the effective forcing wavenumber that enters the definition of the magnetic Reynolds number is found to be about twice the minimum wavenumber of the domain. Our work is relevant to future studies of small-scale dynamos, of which several applications are discussed.
Characterizing Protease Specificity: How Many Substrates Do We Need?
Schauperl, Michael; Fuchs, Julian E.; Waldner, Birgit J.; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.
2015-01-01
Calculation of cleavage entropies allows one to quantify, map, and compare protease substrate specificity by an information-entropy-based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). Thus, a statistical analysis of its numerical stability is crucial to estimate the systematic error made by estimating specificity based on a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, a linear dependence of the metric with respect to 1/n was found. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors. Analyzing the errors, a minimum number of 30 substrates was found to be necessary to characterize substrate specificity, in terms of amino acid variability, for a protease (S4-S4') with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to be helpful in rationalizing the biological functions of proteases and in assisting rational drug design. PMID:26559682
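The quantity at the heart of the metric is the Shannon entropy of the residue distribution at each substrate position. The sketch below shows that basic computation only (the paper's cleavage entropy involves normalization and the 1/n extrapolation discussed above, which are omitted here; the random substrates are synthetic):

```python
import math, random

def position_entropy(substrates, pos):
    """Shannon entropy (bits) of the amino-acid distribution at one
    substrate position; low entropy means high specificity there."""
    counts = {}
    for s in substrates:
        counts[s[pos]] = counts.get(s[pos], 0) + 1
    n = len(substrates)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(1)
aa = "ACDEFGHIKLMNPQRSTVWY"
subs = ["".join(random.choice(aa) for _ in range(4)) for _ in range(30)]
h = position_entropy(subs, 0)
print(0.0 < h <= math.log2(20))  # True: bounded above by log2(20) ≈ 4.32 bits
```

A fully conserved position (every substrate showing the same residue) gives entropy 0, the signature of strict specificity; with only 30 substrates the sample entropy is itself noisy, which is exactly why the paper's error analysis is needed.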
Computing the Partition Function for Kinetically Trapped RNA Secondary Structures
Lorenz, William A.; Clote, Peter
2011-01-01
An RNA secondary structure is locally optimal if there is no lower-energy structure that can be obtained by the addition or removal of a single base pair, where energy is defined according to the widely accepted Turner nearest-neighbor model. Locally optimal structures form kinetic traps, since any evolution away from a locally optimal structure must involve energetically unfavorable folding steps. Here, we present a novel, efficient algorithm to compute the partition function over all locally optimal secondary structures of a given RNA sequence. Our software, RNAlocopt, runs in polynomial time and space. Additionally, RNAlocopt samples a user-specified number of structures from the Boltzmann subensemble of all locally optimal structures. We apply RNAlocopt to show that (1) the number of locally optimal structures is far smaller than the total number of structures; indeed, the number of locally optimal structures is approximately equal to the square root of the number of all structures; (2) the structural diversity of this subensemble may be either similar to or quite different from the structural diversity of the entire Boltzmann ensemble, a situation that depends on the type of input RNA; and (3) the (modified) maximum expected accuracy structure, computed by taking into account base-pairing frequencies of locally optimal structures, is a more accurate prediction of the native structure than other current thermodynamics-based methods. The software RNAlocopt constitutes a technical breakthrough in the study of the folding landscape for RNA secondary structures. For the first time, locally optimal structures (kinetic traps in the Turner energy model) can be rapidly generated for long RNA sequences, which was previously impossible with methods that involved exhaustive enumeration. Use of locally optimal structures leads to state-of-the-art secondary structure prediction, as benchmarked against methods involving the computation of minimum free energy and of maximum expected accuracy.
Web server and source code available at http://bioinformatics.bc.edu/clotelab/RNAlocopt/. PMID:21297972
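The defining condition, "no neighboring structure has strictly lower energy", can be illustrated on a toy one-dimensional landscape, where a neighbor is a state one index away (a deliberately simplified analogy; real RNA neighbors differ by one base pair and energies come from the Turner model):

```python
def local_minima(energies):
    """Indices that are kinetic traps in a toy 1-D landscape:
    no neighboring state has strictly lower energy."""
    n = len(energies)
    return [i for i, e in enumerate(energies)
            if all(energies[j] >= e for j in (i - 1, i + 1) if 0 <= j < n)]

print(local_minima([3, 1, 2, 0, 4]))  # [1, 3]: two traps, one the global minimum
```

Even this toy shows why local optima are far rarer than states overall: each trap must beat every one of its neighbors.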
Prevalence of autosomal dominant polycystic kidney disease in the European Union.
Willey, Cynthia J; Blais, Jaime D; Hall, Anthony K; Krasa, Holly B; Makin, Andrew J; Czerwiec, Frank S
2017-08-01
Autosomal dominant polycystic kidney disease (ADPKD) is a leading cause of end-stage renal disease, but estimates of its prevalence vary by >10-fold. The objective of this study was to examine the public health impact of ADPKD in the European Union (EU) by estimating minimum prevalence (point prevalence of known cases) and screening prevalence (minimum prevalence plus cases expected after population-based screening). A review of the epidemiology literature from January 1980 to February 2015 identified population-based studies that met criteria for methodological quality. These examined large German and British populations, providing direct estimates of minimum prevalence and screening prevalence. In a second approach, patients from the 2012 European Renal Association‒European Dialysis and Transplant Association (ERA-EDTA) Registry and literature-based inflation factors that adjust for disease severity and screening yield were used to estimate prevalence across 19 EU countries (N = 407 million). Population-based studies yielded minimum prevalences of 2.41 and 3.89/10 000, respectively, and corresponding estimates of screening prevalences of 3.3 and 4.6/10 000. A close correspondence existed between estimates in countries where both direct and registry-derived methods were compared, which supports the validity of the registry-based approach. Using the registry-derived method, the minimum prevalence was 3.29/10 000 (95% confidence interval 3.27-3.30), and if ADPKD screening was implemented in all countries, the expected prevalence was 3.96/10 000 (3.94-3.98). ERA-EDTA-based prevalence estimates and application of a uniform definition of prevalence to population-based studies consistently indicate that the ADPKD point prevalence is <5/10 000, the threshold for rare disease in the EU. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA.
Systemic and Local Vaccination against Breast Cancer with Minimum Autoimmune Sequelae
2012-10-01
Award Number: W81XWH-10-1-0466. Title: Systemic and Local Vaccination against Breast Cancer with Minimum Autoimmune Sequelae. Report date: September 2012. … eliminate the tumor by vaccination and local ablation to render long-term immune protection without excessive autoimmune sequelae. Complementing this …
Systemic And Local Vaccination Against Breast Cancer With Minimum Autoimmune Sequelae
2011-10-01
Award Number: W81XWH-10-1-0466. Title: Systemic and Local Vaccination against Breast Cancer with Minimum Autoimmune Sequelae. Report date: 2011. … eliminate the tumor by vaccination and local ablation to render long-term immune protection without excessive autoimmune sequelae. Complementing this …
Code of Federal Regulations, 2010 CFR
2010-10-01
… Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Mothership Under … -out allocation (2,220). Column G: Number of Chinook salmon deducted from the annual threshold amount of …
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in classifiers, which cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would offer great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
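The general logic of a minimum-training-size determination can be sketched with a learning-curve model. This is an illustrative 1/sqrt(n) model, not the SSNR protocol itself; the constants `b` and `tol` are invented:

```python
def min_training_samples(b, tol):
    """Smallest n whose expected shortfall from the asymptotic accuracy,
    modeled as b / sqrt(n), is within tol (illustrative learning-curve
    model, not the paper's SSNR-based protocol)."""
    n = 1
    while b / n ** 0.5 > tol:
        n += 1
    return n

print(min_training_samples(0.5, 0.02))  # 625: shortfall 0.5/sqrt(625) = 0.02
print(min_training_samples(0.5, 0.05))  # 100
```

The quadratic cost of tightening the tolerance (halving `tol` quadruples n under this model) is what makes a principled minimum-size criterion, such as the SSNR protocol, practically valuable.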
Direct Monte Carlo simulation of chemical reaction systems: Simple bimolecular reactions
NASA Astrophysics Data System (ADS)
Piersall, Shannon D.; Anderson, James B.
1991-07-01
In applications to several simple reaction systems we have explored a "direct simulation" method for predicting and understanding the behavior of gas-phase chemical reaction systems. This Monte Carlo method, originated by Bird, has been found remarkably successful in treating a number of difficult problems in rarefied gas dynamics. Extension to chemical reactions offers a powerful tool for treating reaction systems with nonthermal distributions, with coupled gas-dynamic and reaction effects, with emission and absorption of radiation, and with many other effects difficult to treat in any other way. The usual differential equations of chemical kinetics are eliminated. For a bimolecular reaction of the type A+B→C+D with a rate sufficiently low to allow continued thermal equilibrium of the reactants, we find that direct simulation reproduces the expected second-order kinetics. Simulations for a range of temperatures yield the activation energies expected for the reaction models specified. For faster reactions under conditions leading to a depletion of energetic reactant species, the expected slowing of reaction rates and departures from equilibrium distributions are observed. The minimum sample sizes required for adequate simulations are as low as 1000 molecules for these cases. The calculations are found to be simple and straightforward for the homogeneous systems considered. Although computation requirements may be excessively high for very slow reactions, they are reasonably low for fast reactions, for which nonequilibrium effects are most important.
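The second-order-kinetics check described above can be reproduced with a much simpler event-driven (Gillespie-type) simulation than Bird's full method, which also tracks molecular positions and velocities. In this sketch, reactive events fire with exponential waiting times at rate k·A·B, and the sample size of 1000 molecules echoes the minimum quoted in the abstract; the rate constant is invented:

```python
import random

def simulate_bimolecular(a0, b0, k, t_end, seed=0):
    """Stochastic simulation of A + B -> C + D: exponential waiting
    times with rate k*A*B, one reactive event per step."""
    rng = random.Random(seed)
    a, b, t = a0, b0, 0.0
    while a > 0 and b > 0:
        t += rng.expovariate(k * a * b)
        if t > t_end:
            break
        a -= 1
        b -= 1
    return a

# With a0 = b0, second-order kinetics predicts a(t) = a0 / (1 + a0*k*t),
# i.e. about 667 molecules remaining here; the simulation should land nearby.
a_left = simulate_bimolecular(1000, 1000, 1e-4, 5.0)
print(a_left)
```

The agreement with the deterministic second-order solution improves as the molecule count grows, consistent with the paper's observation about minimum sample sizes.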
Deutsch, Eric W; Ball, Catherine A; Berman, Jules J; Bova, G Steven; Brazma, Alvis; Bumgarner, Roger E; Campbell, David; Causton, Helen C; Christiansen, Jeffrey H; Daian, Fabrice; Dauga, Delphine; Davidson, Duncan R; Gimenez, Gregory; Goo, Young Ah; Grimmond, Sean; Henrich, Thorsten; Herrmann, Bernhard G; Johnson, Michael H; Korb, Martin; Mills, Jason C; Oudes, Asa J; Parkinson, Helen E; Pascal, Laura E; Pollet, Nicolas; Quackenbush, John; Ramialison, Mirana; Ringwald, Martin; Salgado, David; Sansone, Susanna-Assunta; Sherlock, Gavin; Stoeckert, Christian J; Swedlow, Jason; Taylor, Ronald C; Walashek, Laura; Warford, Anthony; Wilkinson, David G; Zhou, Yi; Zon, Leonard I; Liu, Alvin Y; True, Lawrence D
2015-01-01
One purpose of the biomedical literature is to report results in sufficient detail so that the methods of data collection and analysis can be independently replicated and verified. Here we present for consideration a minimum information specification for gene expression localization experiments, called the “Minimum Information Specification For In Situ Hybridization and Immunohistochemistry Experiments (MISFISHIE)”. It is modelled after the MIAME (Minimum Information About a Microarray Experiment) specification for microarray experiments. Data specifications like MIAME and MISFISHIE specify the information content without dictating a format for encoding that information. The MISFISHIE specification describes six types of information that should be provided for each experiment: Experimental Design, Biomaterials and Treatments, Reporters, Staining, Imaging Data, and Image Characterizations. This specification has benefited the consortium within which it was initially developed and is expected to benefit the wider research community. We welcome feedback from the scientific community to help improve our proposal. PMID:18327244
Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka
2018-06-01
The need to accelerate large-scale protein-ligand docking in virtual screening against huge compound databases led researchers to propose a strategy that memorizes the evaluation result of a partial structure of a compound and reuses it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. More efficient memory usage can therefore be expected to lead to further acceleration, and optimal memory usage can be achieved by solving a minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem that exploits, as constraints, the characteristics of the graph generated for this problem. The proposed algorithm, which optimizes memory usage, was approximately seven times faster than existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
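For readers unfamiliar with the underlying problem, a generic successive-shortest-paths solver is sketched below on a tiny graph. This is the textbook baseline the authors' specialized algorithm improves upon, not their method; the example graph is invented:

```python
def min_cost_flow(n, edges, s, t, flow):
    """Successive-shortest-paths min-cost flow on n nodes.
    edges: list of (u, v, capacity, cost). Returns total cost of
    sending `flow` units from s to t."""
    INF = float("inf")
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:          # forward + residual reverse edge
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total = 0
    while flow > 0:
        dist = [INF] * n
        dist[s] = 0
        prev = [None] * n
        for _ in range(n - 1):             # Bellman-Ford: residual costs
            for u in range(n):             # may be negative
                if dist[u] == INF:
                    continue
                for i, (v, cap, cost, _r) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
        if dist[t] == INF:
            raise ValueError("not enough capacity")
        d, v = flow, t                     # bottleneck on the cheapest path
        while v != s:
            u, i = prev[v]
            d = min(d, graph[u][i][1])
            v = u
        v = t                              # push d units along the path
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= d
            graph[v][graph[u][i][3]][1] += d
            v = u
        total += d * dist[t]
        flow -= d
    return total

# route 0 -> 3: 2 cheap units via node 1 (cost 2 each), 1 unit via node 2 (cost 3)
edges = [(0, 1, 2, 1), (1, 3, 2, 1), (0, 2, 2, 2), (2, 3, 2, 1)]
print(min_cost_flow(4, edges, 0, 3, 3))  # 7
```

In the docking application, nodes would correspond to cached partial-structure evaluations and edge costs to memory traffic; the paper's contribution is exploiting that graph's special structure to go well beyond this generic solver.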
Multistrange baryon elliptic flow in Au+Au collisions at √(s_NN) = 200 GeV.
Adams, J; Aggarwal, M M; Ahammed, Z; Amonett, J; Anderson, B D; Arkhipkin, D; Averichev, G S; Badyal, S K; Bai, Y; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellingeri-Laurikainen, A; Bellwied, R; Berger, J; Bezverkhny, B I; Bharadwaj, S; Bhasin, A; Bhati, A K; Bhatia, V S; Bichsel, H; Bielcik, J; Bielcikova, J; Billmeier, A; Bland, L C; Blyth, C O; Blyth, S L; Bonner, B E; Botje, M; Boucham, A; Bouchet, J; Brandin, A V; Bravar, A; Bystersky, M; Cadman, R V; Cai, X Z; Caines, H; Calderón de la Barca Sánchez, M; Castillo, J; Catu, O; Cebra, D; Chajecki, Z; Chaloupka, P; Chattopadhyay, S; Chen, H F; Chen, J H; Chen, Y; Cheng, J; Cherney, M; Chikanian, A; Christie, W; Coffin, J P; Cormier, T M; Cosentino, M R; Cramer, J G; Crawford, H J; Das, D; Das, S; Daugherity, M; de Moura, M M; Dedovich, T G; DePhillips, M; Derevschikov, A A; Didenko, L; Dietel, T; Dogra, S M; Dong, W J; Dong, X; Draper, J E; Du, F; Dubey, A K; Dunin, V B; Dunlop, J C; Dutta Mazumdar, M R; Eckardt, V; Edwards, W R; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Estienne, M; Fachini, P; Faivre, J; Fatemi, R; Fedorisin, J; Filimonov, K; Filip, P; Finch, E; Fine, V; Fisyak, Y; Fornazier, K S F; Fu, J; Gagliardi, C A; Gaillard, L; Gans, J; Ganti, M S; Geurts, F; Ghazikhanian, V; Ghosh, P; Gonzalez, J E; Gos, H; Grachov, O; Grebenyuk, O; Grosnick, D; Guertin, S M; Guo, Y; Gupta, A; Gupta, N; Gutierrez, T D; Hallman, T J; Hamed, A; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Hepplemann, S; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horner, M J; Huang, H Z; Huang, S L; Hughes, E W; Humanic, T J; Igo, G; Ishihara, A; Jacobs, P; Jacobs, W W; Jedynak, M; Jiang, H; Jones, P G; Judd, E G; Kabana, S; Kang, K; Kaplan, M; Keane, D; Kechechyan, A; Khodyrev, V Yu; Kiryluk, J; Kisiel, A; Kislov, E M; Klay, J; Klein, S R; Koetke, D D; Kollegger, T; Kopytine, M; Kotchenda, L; Kowalik, K L; Kramer, M; Kravtsov, P; Kravtsov, V I; Krueger, K; Kuhn, C; 
Kulikov, A I; Kumar, A; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Laue, F; Lauret, J; Lebedev, A; Lednicky, R; Lehocka, S; LeVine, M J; Li, C; Li, Q; Li, Y; Lin, G; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, H; Liu, J; Liu, L; Liu, Q J; Liu, Z; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Lu, Y; Ludlam, T; Lynn, D; Ma, G L; Ma, J G; Ma, Y G; Magestro, D; Mahajan, S; Mahapatra, D P; Majka, R; Mangotra, L K; Manweiler, R; Margetis, S; Markert, C; Martin, L; Marx, J N; Matis, H S; Matulenko, Yu A; McClain, C J; McShane, T S; Meissner, F; Melnick, Yu; Meschanin, A; Miller, M L; Minaev, N G; Mironov, C; Mischke, A; Mishra, D K; Mitchell, J; Mohanty, B; Molnar, L; Moore, C F; Morozov, D A; Munhoz, M G; Nandi, B K; Nayak, S K; Nayak, T K; Nelson, J M; Netrakanti, P K; Nikitin, V A; Nogach, L V; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Pal, S K; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Peitzmann, T; Perevoztchikov, V; Perkins, C; Peryt, W; Petrov, V A; Phatak, S C; Picha, R; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potekhin, M; Potrebenikova, E; Potukuchi, B V K S; Prindle, D; Pruneau, C; Putschke, J; Rakness, G; Raniwala, R; Raniwala, S; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Reinnarth, J; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevskiy, O V; Romero, J L; Rose, A; Roy, C; Ruan, L; Russcher, M; Sahoo, R; Sakrejda, I; Salur, S; Sandweiss, J; Sarsour, M; Savin, I; Sazhin, P S; Schambach, J; Scharenberg, R P; Schmitz, N; Schweda, K; Seger, J; Seyboth, P; Shahaliev, E; Shao, M; Shao, W; Sharma, M; Shen, W Q; Shestermanov, K E; Shimanskiy, S S; Sichtermann, E; Simon, F; Singaraju, R N; Smirnov, N; Snellings, R; Sood, G; Sorensen, P; Sowinski, J; Speltz, J; Spinka, H M; Srivastava, B; Stadnik, A; Stanislaus, T D S; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Suaide, A A P; Sugarbaker, E; Suire, 
C; Sumbera, M; Surrow, B; Swanger, M; Symons, T J M; Szanto de Toledo, A; Tai, A; Takahashi, J; Tang, A H; Tarnowsky, T; Thein, D; Thomas, J H; Timmins, A R; Timoshenko, S; Tokarev, M; Trentalange, S; Tribble, R E; Tsai, O D; Ulery, J; Ullrich, T; Underwood, D G; Van Buren, G; van der Kolk, N; van Leeuwen, M; Vander Molen, A M; Varma, R; Vasilevski, I M; Vasiliev, A N; Vernet, R; Vigdor, S E; Viyogi, Y P; Vokal, S; Voloshin, S A; Waggoner, W T; Wang, F; Wang, G; Wang, G; Wang, X L; Wang, Y; Wang, Y; Wang, Z M; Ward, H; Watson, J W; Webb, J C; Westfall, G D; Wetzler, A; Whitten, C; Wieman, H; Wissink, S W; Witt, R; Wood, J; Wu, J; Xu, N; Xu, Z; Xu, Z Z; Yamamoto, E; Yepes, P; Yurevich, V I; Zborovsky, I; Zhang, H; Zhang, W M; Zhang, Y; Zhang, Z P; Zhong, C; Zoulkarneev, R; Zoulkarneeva, Y; Zubarev, A N; Zuo, J X
2005-09-16
We report on the first measurement of elliptic flow v2(pT) of multistrange baryons Xi- + Xi+ and Omega- + Omega+ in heavy-ion collisions. In minimum-bias Au+Au collisions at √s_NN = 200 GeV, a significant amount of elliptic flow, comparable to that of other nonstrange baryons, is observed for multistrange baryons, which are expected to be particularly sensitive to the dynamics of the partonic stage of heavy-ion collisions. The pT dependence of v2 of the multistrange baryons confirms the number-of-constituent-quark scaling previously observed for lighter hadrons. These results support the idea that a substantial fraction of the observed collective motion is developed at the early partonic stage in ultrarelativistic nuclear collisions at the Relativistic Heavy Ion Collider.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of each human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. We define an objective function based on expected capture conditions such as the camera-subject distance, the pan and tilt angles of capture, face visibility, and others. This objective function serves to balance the number of captures per subject against the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
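The weighted objective described above can be sketched as a toy scoring function. The specific terms, the multiplicative combination, and the preferred 5 m standoff below are invented for illustration and are not the paper's actual formulation:

```python
import math

# Hypothetical capture-quality score for one PTZ camera/subject pair.
# Terms and weights are invented; the paper's actual objective differs.
def capture_score(dist_m, pan_deg, tilt_deg, face_visibility):
    """Higher is better; each factor lies in [0, 1]."""
    d_term = math.exp(-((dist_m - 5.0) / 5.0) ** 2)   # prefer ~5 m standoff
    a_term = math.cos(math.radians(pan_deg)) * math.cos(math.radians(tilt_deg))
    return d_term * max(0.0, a_term) * face_visibility

print(capture_score(5.0, 0.0, 0.0, 1.0))     # frontal capture at ideal range
print(capture_score(12.0, 45.0, 10.0, 0.5))  # far, oblique, half-visible face
```

The multiplicative form means any single bad factor (e.g. zero face visibility) vetoes the capture, which is one simple way to trade capture count against capture quality.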
An optimal diagnostic strategy for finding malfunctioning components in systems
NASA Technical Reports Server (NTRS)
Wong, J. T.
1983-01-01
A solution to the following problem is presented: Given that an n-component functional system is down, it is required to find a malfunctioning component of the system such that the expected expenditure is minimized.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
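As a rough illustration of the problem setup only (not the MRE algorithm itself, which derives a probability density from a truncated-exponential prior), a bound-constrained solve of Gm = d can be sketched with projected gradient descent; the matrix G, the bounds, and the starting point at the prior expected value below are toy values:

```python
# Minimal sketch: bound-constrained least squares for Gm = d, L <= m <= U,
# via projected gradient descent. This is a stand-in for the inverse-problem
# setup, NOT the MRE method; all data below are invented toy values.
import numpy as np

rng = np.random.default_rng(0)
n = 5
G = rng.normal(size=(8, n))                  # toy forward operator
m_true = np.array([0.2, 0.5, 0.1, 0.8, 0.3])
d = G @ m_true                               # noise-free synthetic data

lo, hi = np.zeros(n), np.ones(n)             # prior lower/upper bounds on m
m = np.full(n, 0.5)                          # start at a prior expected value
step = 1.0 / np.linalg.norm(G, 2) ** 2       # safe step: 1 / sigma_max(G)^2
for _ in range(10000):
    m = np.clip(m - step * G.T @ (G @ m - d), lo, hi)

print(np.round(m, 3))
```

With consistent data and the true model strictly inside the bounds, the iteration recovers m_true; in the MRE setting the answer would instead be the posterior expected value of m.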
An early solar dynamo prediction: Cycle 23 is approximately cycle 22
NASA Technical Reports Server (NTRS)
Schatten, Kenneth H.; Pesnell, W. Dean
1993-01-01
In this paper, we briefly review the 'dynamo' and 'geomagnetic precursor' methods of long-term solar activity forecasting. These methods depend upon the most basic aspect of dynamo theory to predict future activity: future magnetic field arises directly from the magnification of pre-existing magnetic field. We then generalize the dynamo technique, allowing the method to be used at any phase of the solar cycle, through the development of the 'Solar Dynamo Amplitude' (SODA) index. This index is sensitive to the magnetic flux trapped within the Sun's convection zone but insensitive to the phase of the solar cycle. Since magnetic fields inside the Sun can become buoyant, one may think of the acronym SODA as describing the amount of buoyant flux. Using the present value of the SODA index, we estimate that the next cycle's smoothed peak activity will be about 210 +/- 30 solar flux units for the 10.7 cm radio flux and a sunspot number of 170 +/- 25. This suggests that solar cycle #23 will be large, comparable to cycle #22. The estimated peak is expected to occur near 1999.7 +/- 1 year. Since the current approach is novel (using data prior to solar minimum), these estimates may improve when the upcoming solar minimum is reached.
How to resolve the SLOSS debate: lessons from species-diversity models.
Tjørve, Even
2010-05-21
The SLOSS debate--whether a single large reserve will conserve more species than several small ones--of the 1970s and 1980s never came to a resolution. The first rule of reserve design states that one large reserve will conserve the most species, a rule which has been heavily contested. Empirical data seem to undermine the reliance on general rules, indicating that the best strategy varies from case to case. Modeling has also been deployed in this debate. We may divide the modeling approaches to the SLOSS enigma into dynamic and static approaches. Dynamic approaches, covered by the equilibrium theory of island biogeography and by metapopulation theory, look at immigration, emigration, and extinction. Static approaches, such as the one in this paper, illustrate how several factors affect the number of reserves that will save the most species. This article approaches the effect of different factors through the application of species-diversity models. These models combine species-area curves for two or more reserves, correcting for the species overlap between them. Such models generate several predictions on how different factors affect the optimal number of reserves. The main predictions are: fewer and larger reserves are favored by increased species overlap between reserves, by faster growth in the number of species with reserve-area increase, by higher minimum-area requirements, by spatial aggregation, and by uneven species abundances. The effect of increased distance between smaller reserves depends on two counteracting factors: decreased species density caused by isolation (which enhances the minimum-area effect) and decreased overlap between isolates. The first decreases the optimal number of reserves; the second increases it. The effect of total reserve-system area depends both on the shape of the species-area curve and on whether overlap between reserves changes with scale.
The approach to modeling presented here has several implications for conservation strategies. It illustrates well how the SLOSS enigma can be reduced to a question of the shape of the species-area curve that is expected or generated from reserves of different sizes, and a question of overlap between isolates (or reserves). Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
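The check-splitting step can be illustrated on a toy protomatrix. The 3x3 base protograph below is invented and is not one of the paper's codes; the sketch only shows the mechanics of splitting one check into two checks joined by a new degree-2 variable node, which lowers the code rate while adding exactly one degree-2 node:

```python
# Toy illustration of check-node splitting in a protograph (invented example,
# not one of the paper's constructions). Rows are checks, columns variables.
import numpy as np

B = np.array([[1, 1, 1],
              [1, 1, 1],
              [1, 1, 1]])          # base protograph: all variable degrees >= 3

def split_check(B, row):
    """Split check `row` into two checks joined by a new degree-2 variable."""
    edges = np.flatnonzero(B[row])
    half = len(edges) // 2
    r1 = np.zeros(B.shape[1] + 1, int)
    r2 = np.zeros(B.shape[1] + 1, int)
    r1[edges[:half]] = B[row, edges[:half]]   # first half of the edges
    r2[edges[half:]] = B[row, edges[half:]]   # second half of the edges
    r1[-1] = r2[-1] = 1                       # the connecting degree-2 variable
    rest = np.delete(B, row, axis=0)
    rest = np.hstack([rest, np.zeros((rest.shape[0], 1), int)])
    return np.vstack([rest, r1, r2])

B2 = split_check(B, 0)
deg2 = int(np.sum(B2.sum(axis=0) == 2))   # number of degree-2 variable nodes
n_checks = B2.shape[0]
print(B2, deg2, n_checks)
```

Each split adds one check and one degree-2 variable, so repeated splitting keeps the degree-2 count below the check count, consistent with the condition stated in the abstract for preserving linear minimum distance.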
Minimum Detectable Dose as a Measure of Bioassay Programme Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, Eugene H.
2003-01-01
This paper suggests that minimum detectable dose (MDD) be used to describe the capability of bioassay programs for which intakes are expected to be rare. This allows expression of the capability in units that correspond directly to primary dose limits. The concept uses the well-established analytical statistic minimum detectable amount (MDA) as the starting point and assumes MDA detection at a prescribed time post intake. The resulting dose can then be used as an indication of the adequacy or capability of the program for demonstrating compliance with the performance criteria. MDDs can be readily tabulated or plotted to demonstrate the effectiveness of different types of monitoring programs. The inclusion of cost factors for bioassay measurements can allow optimisation.
Minimum detectable dose as a measure of bioassay programme capability.
Carbaugh, E H
2003-01-01
This paper suggests that minimum detectable dose (MDD) be used to describe the capability of bioassay programmes for which intakes are expected to be rare. This allows expression of the capability in units that correspond directly to primary dose limits. The concept uses the well established analytical statistic minimum detectable amount (MDA) as the starting point, and assumes MDA detection at a prescribed time post-intake. The resulting dose can then be used as an indication of the adequacy or capability of the programme for demonstrating compliance with the performance criteria. MDDs can be readily tabulated or plotted to demonstrate the effectiveness of different types of monitoring programmes. The inclusion of cost factors for bioassay measurements can allow optimisation.
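The MDD concept reduces to a simple chain: the MDA fixes the smallest detectable bioassay result, the intake retention (excretion) fraction at the assumed time post-intake converts that result to an intake, and a dose coefficient converts the intake to committed dose. The retention fraction and dose coefficient below are invented illustrative numbers, not ICRP values:

```python
# Hedged sketch of the MDD idea. All numeric inputs are made-up
# illustrative values, not actual ICRP retention or dose data.

def minimum_detectable_dose(mda_bq, retention_fraction, dose_coeff_sv_per_bq):
    """MDD = (MDA / retention fraction at time t) * dose coefficient."""
    intake = mda_bq / retention_fraction        # smallest detectable intake (Bq)
    return intake * dose_coeff_sv_per_bq        # committed dose (Sv)

# e.g. a urine MDA of 1 Bq/d, with 1% of the intake excreted per day at the
# assumed monitoring time, and a hypothetical dose coefficient of 1e-8 Sv/Bq:
mdd = minimum_detectable_dose(1.0, 0.01, 1e-8)
print(mdd)  # committed dose in Sv
```

Tabulating this quantity over monitoring intervals is what lets the MDD be compared directly against primary dose limits, as the abstract describes.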
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-15
.... The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and the need... DP, Amdt 2 Alexandria, MN, Chandler Field, RNAV (GPS) RWY 22, Orig Bemidji, MN, Bemidji Rgnl, RNAV (GPS) RWY 25, Orig Granite Falls, MN, Granite Falls Muni/Lenzen-Roe Meml Fld, Takeoff Minimums and...
Minimum Wage Increases and the Working Poor. Changing Domestic Priorities Discussion Paper.
ERIC Educational Resources Information Center
Mincy, Ronald B.
Most economists agree that the difficulties of targeting minimum wage increases to low-income families make such increases ineffective tools for reducing poverty. This paper provides estimates of the impact of minimum wage increases on the poverty gap and the number of poor families, and shows which factors are barriers to decreasing poverty…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 8003(b) and (e)? 222.36 Section 222.36 Education Regulations of the Offices of the Department of... for Federally Connected Children Under Section 8003(b) and (e) of the Act § 222.36 What minimum number... of those children under section 8003(b) and (e)? (a) Except as provided in paragraph (d) of this...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 8003(b) and (e)? 222.36 Section 222.36 Education Regulations of the Offices of the Department of... for Federally Connected Children Under Section 8003(b) and (e) of the Act § 222.36 What minimum number... of those children under section 8003(b) and (e)? (a) Except as provided in paragraph (d) of this...
Vedenov, Dmitry; Alhotan, Rashed A; Wang, Runlian; Pesti, Gene M
2017-02-01
Nutritional requirements and responses of all organisms are estimated using various models representing the response to different dietary levels of the nutrient in question. To help nutritionists design experiments for estimating responses and requirements, we developed a simulation workbook using Microsoft Excel. The objective of the present study was to demonstrate the influence of different numbers of nutrient levels, ranges of nutrient levels and replications per nutrient level on the estimates of requirements based on common nutritional response models. The user provides estimates of the shape of the response curve, the requirement and other parameters, and the observation-to-observation variation. The Excel workbook then produces 1 to 1000 randomly simulated responses based on the given response curve and estimates the standard errors of the requirement (and the other parameters) from different models as an indication of the expected power of the experiment. Interpretations are based on the assumption that the smaller the standard error of the requirement, the more powerful the experiment. The user can see the potential effects of using one or more subjects, different nutrient levels, etc., on the expected outcome of future experiments. From a theoretical perspective, each organism should have some enzyme-catalysed reaction whose rate is limited by the availability of some limiting nutrient. The response to the limiting nutrient should therefore be similar to enzyme kinetics. In conclusion, the workbook eliminates some of the guesswork involved in designing experiments and determining the minimum number of subjects needed to achieve desired outcomes.
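The workbook's logic can be sketched in a few lines: simulate noisy responses from an assumed response model at the chosen nutrient levels and replications, refit the model to each simulated data set, and take the spread of the fitted requirements as the expected standard error. The sketch below uses a broken-line model with invented parameters and noise level, standing in for whichever response model the user chooses:

```python
# Hedged sketch of the simulation idea with a broken-line response model.
# The requirement (0.8), plateau, slope, and noise SD are invented.
import numpy as np

rng = np.random.default_rng(1)

def broken_line(x, req, plateau, slope):
    return plateau - slope * np.maximum(0.0, req - x)

def fit_requirement(x, y, grid):
    """Grid-search the breakpoint; OLS for the other two parameters."""
    best_sse, best_r = np.inf, None
    for r in grid:
        X = np.column_stack([np.ones_like(x), -np.maximum(0.0, r - x)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_sse, best_r = sse, r
    return best_r

levels = np.linspace(0.2, 1.4, 7)        # 7 dietary levels
x = np.repeat(levels, 4)                 # 4 replicates per level
grid = np.linspace(0.3, 1.3, 101)
ests = [fit_requirement(x, broken_line(x, 0.8, 10.0, 6.0)
                        + rng.normal(0, 0.3, x.size), grid)
        for _ in range(200)]
print(float(np.mean(ests)), float(np.std(ests)))  # mean and SE of requirement
```

Re-running with fewer levels or replicates inflates the simulated standard error, which is exactly the design comparison the workbook is meant to support.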
Yu, Yajuan; Chen, Bo; Huang, Kai; Wang, Xiang; Wang, Dong
2014-01-01
Based on Life Cycle Assessment (LCA) and Eco-indicator 99 method, a LCA model was applied to conduct environmental impact and end-of-life treatment policy analysis for secondary batteries. This model evaluated the cycle, recycle and waste treatment stages of secondary batteries. Nickel-Metal Hydride (Ni-MH) batteries and Lithium ion (Li-ion) batteries were chosen as the typical secondary batteries in this study. Through this research, the following results were found: (1) A basic number of cycles should be defined. A minimum cycle number of 200 would result in an obvious decline of environmental loads for both battery types. Batteries with high energy density and long life expectancy have small environmental loads. Products and technology that help increase energy density and life expectancy should be encouraged. (2) Secondary batteries should be sorted out from municipal garbage. Meanwhile, different types of discarded batteries should be treated separately under policies and regulations. (3) The incineration rate has obvious impact on the Eco-indicator points of Nickel-Metal Hydride (Ni-MH) batteries. The influence of recycle rate on Lithium ion (Li-ion) batteries is more obvious. These findings indicate that recycling is the most promising direction for reducing secondary batteries’ environmental loads. The model proposed here can be used to evaluate environmental loads of other secondary batteries and it can be useful for proposing policies and countermeasures to reduce the environmental impact of secondary batteries. PMID:24646862
Yu, Yajuan; Chen, Bo; Huang, Kai; Wang, Xiang; Wang, Dong
2014-03-18
Based on Life Cycle Assessment (LCA) and Eco-indicator 99 method, a LCA model was applied to conduct environmental impact and end-of-life treatment policy analysis for secondary batteries. This model evaluated the cycle, recycle and waste treatment stages of secondary batteries. Nickel-Metal Hydride (Ni-MH) batteries and Lithium ion (Li-ion) batteries were chosen as the typical secondary batteries in this study. Through this research, the following results were found: (1) A basic number of cycles should be defined. A minimum cycle number of 200 would result in an obvious decline of environmental loads for both battery types. Batteries with high energy density and long life expectancy have small environmental loads. Products and technology that help increase energy density and life expectancy should be encouraged. (2) Secondary batteries should be sorted out from municipal garbage. Meanwhile, different types of discarded batteries should be treated separately under policies and regulations. (3) The incineration rate has obvious impact on the Eco-indicator points of Nickel-Metal Hydride (Ni-MH) batteries. The influence of recycle rate on Lithium ion (Li-ion) batteries is more obvious. These findings indicate that recycling is the most promising direction for reducing secondary batteries' environmental loads. The model proposed here can be used to evaluate environmental loads of other secondary batteries and it can be useful for proposing policies and countermeasures to reduce the environmental impact of secondary batteries.
Trends and variability in the hydrological regime of the Mackenzie River Basin
NASA Astrophysics Data System (ADS)
Abdul Aziz, Omar I.; Burn, Donald H.
2006-03-01
Trends and variability in the hydrological regime were analyzed for the Mackenzie River Basin in northern Canada. The procedure utilized the Mann-Kendall non-parametric test to detect trends, the Trend Free Pre-Whitening (TFPW) approach for correcting time-series data for autocorrelation and a bootstrap resampling method to account for the cross-correlation structure of the data. A total of 19 hydrological and six meteorological variables were selected for the study. Analysis was conducted on hydrological data from a network of 54 hydrometric stations and meteorological data from a network of 10 stations. The results indicated that several hydrological variables exhibit a greater number of significant trends than are expected to occur by chance. Noteworthy were strong increasing trends in the winter monthly flows from December to April as well as in the annual minimum flow, and weak decreasing trends in the early summer and late fall flows as well as in the annual mean flow. An earlier onset of the spring freshet is noted over the basin. The results are expected to assist water resources managers and policy makers in making better planning decisions in the Mackenzie River Basin.
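A minimal sketch of the two core pieces, the Mann-Kendall S statistic and TFPW (Sen-slope detrending followed by lag-1 pre-whitening), is given below. The test series is synthetic, and the significance step (the variance of S and its normal approximation) is omitted:

```python
# Sketch of the Mann-Kendall statistic and the TFPW correction.
# Synthetic data; the significance test on S is omitted for brevity.
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S: sum of signs over all ordered pairs."""
    n = len(x)
    return sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))

def tfpw(x):
    """Trend-free pre-whitening: remove Sen-slope trend, whiten lag-1 AR,
    then add the trend back before testing."""
    n = len(x)
    t = np.arange(n)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n) for j in range(i + 1, n)]
    b = np.median(slopes)                    # Sen's slope estimate
    y = x - b * t                            # detrended series
    r1 = np.corrcoef(y[:-1], y[1:])[0, 1]    # lag-1 autocorrelation
    yw = y[1:] - r1 * y[:-1]                 # pre-whitened residuals
    return yw + b * t[1:]                    # restore the trend

x = 0.1 * np.arange(30) + np.sin(np.arange(30))   # trend + serial structure
print(mann_kendall_s(x), mann_kendall_s(tfpw(x)))
```

TFPW matters because positive lag-1 autocorrelation inflates the false-detection rate of the raw Mann-Kendall test; pre-whitening removes that inflation while preserving the trend being tested.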
Bantam System Technology Project Ground System Operations Concept and Plan
NASA Technical Reports Server (NTRS)
Moon, Jesse M.; Beveridge, James R.
1997-01-01
The Low Cost Booster Technology Program, also known as the Bantam Booster program, is a NASA sponsored initiative to establish a viable commercial technology to support the market for placing small payloads in low earth orbit. This market is currently served by large boosters which orbit a number of small payloads on a single launch vehicle, or by these payloads taking up available space on major commercial launches. Even by sharing launch costs, the minimum cost to launch one of these small satellites is in the 6 to 8 million dollar range. Additionally, there is a shortage of available launch opportunities which can be shared in this manner. The goal of the Bantam program is to develop two competing launch vehicles, with launch costs in the neighborhood of 1.5 million dollars to launch a 150 kg payload into low earth orbit (200 nautical mile sun synchronous). Not only could the cost of the launch be significantly less than in the current situation, but the payload sponsor could also expect better service for the expenditure: the ability to specify the orbit, and a dedicated vehicle. By developing two distinct launch vehicles, market forces are expected to aid in keeping customer costs low.
20 CFR 229.4 - Applying for the overall minimum.
Code of Federal Regulations, 2010 CFR
2010-04-01
... from employment and self-employment in order to determine whether the claimant or annuitant qualifies for the overall minimum. (Approved by the Office of Management and Budget under control number 3220...
20 CFR 229.4 - Applying for the overall minimum.
Code of Federal Regulations, 2011 CFR
2011-04-01
... from employment and self-employment in order to determine whether the claimant or annuitant qualifies for the overall minimum. (Approved by the Office of Management and Budget under control number 3220...
Tactical Miniature Crystal Oscillator.
1980-08-01
manufactured by this process are expected to require 30 days to achieve minimum aging rates. (4) FUNDAMENTAL CRYSTAL RETRACE MEASUREMENT. An important crystal...considerable measurement time to detect differences and characterize components. Before investing considerable time in a candidate reactive element, a
Density measurement verification for hot mixed asphalt concrete pavement construction.
DOT National Transportation Integrated Search
2010-06-01
Oregon Department of Transportation (ODOT) requires a minimum density for the construction of dense-graded hot mix asphalt concrete (HMAC) pavements to ensure the likelihood that the pavement will not experience distresses that reduce the expected se...
Density measurement verification for hot mix asphalt concrete pavement construction.
DOT National Transportation Integrated Search
2010-06-01
Oregon Department of Transportation (ODOT) requires a minimum density for the construction of dense-graded hot mix asphalt concrete (HMAC) pavements to ensure the likelihood that the pavement will not experience distresses that reduce the expected se...
Maintaining traffic sign retroreflectivity : impacts on state and local agencies
DOT National Transportation Integrated Search
2007-04-01
This report analyzes the impacts that might be expected from the adoption of proposed minimum maintained retroreflectivity levels for traffic signs to improve night visibility. The report evaluates the broad spectrum of concerns expressed by State an...
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
NASA Astrophysics Data System (ADS)
Mills, Cameron; Tiwari, Vaibhav; Fairhurst, Stephen
2018-05-01
The observation of gravitational wave signals from binary black hole and binary neutron star mergers has established the field of gravitational wave astronomy. It is expected that future networks of gravitational wave detectors will possess great potential in probing various aspects of astronomy. An important consideration for the successive improvement of current detectors or the establishment of new sites is knowledge of the minimum number of detectors required to perform precision astronomy. We attempt to answer this question by assessing the ability of future detector networks to detect and localize binary neutron star mergers on the sky. Good localization ability is crucial for many of the scientific goals of gravitational wave astronomy, such as electromagnetic follow-up, measuring the properties of compact binaries throughout cosmic history, and cosmology. We find that although two detectors at improved sensitivity are sufficient to get a substantial increase in the number of observed signals, at least three detectors of comparable sensitivity are required to localize the majority of the signals, typically to within around 10 deg², adequate for follow-up with most wide-field-of-view optical telescopes.
Electroconvulsive therapy: a Canadian perspective.
Smith, W E; Richman, A
1984-12-01
Recent ECT practices in Canada are reviewed from a historical perspective with respect to specific criticisms. Utilization is decreasing; utilization rates vary widely between Provinces and between regions; disproportionate numbers of females have been receiving ECT; a substantial group of patients diagnosed as neurotic and schizophrenic continue to receive ECT; criteria and guidelines for its use are not consistently applied. Expected rates of ECT use are estimated, based on theory and practice as well as on published data on the epidemiology of affective disorders. Data on actual Canadian usage are reviewed and compared with an estimated minimum of 30-45+ cases per year of non-bipolar depression per 100,000 population requiring ECT. Results show that there may be a substantial number of patients in some Provinces for whom ECT is the best available treatment and who are not receiving it. There is some ethical concern associated with possible under-use of ECT as the best therapy available for certain patient groups. Clinical cases and patterns of care should be reviewed at the hospital level to determine how best to effect improvements in the use of this treatment.
Using optimal transport theory to estimate transition probabilities in metapopulation dynamics
Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.
2017-01-01
This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
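A hedged sketch of the idea: with counts at each location at two times and a squared-Euclidean cost between location coordinates, an optimal transport plan can be computed and its rows normalized into transition probabilities. The sketch below uses entropy-regularized Sinkhorn iterations rather than the exact OT solve, and the colony coordinates and counts are invented:

```python
# Entropy-regularized optimal transport (Sinkhorn) as a stand-in for the
# exact OT solve; colony coordinates and counts below are invented.
import numpy as np

def sinkhorn(a, b, C, eps=0.5, iters=5000):
    """Return a transport plan with marginals a and b under cost C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # 3 colony locations
a = np.array([100.0, 50.0, 50.0])    # counts at time 1
b = np.array([60.0, 80.0, 60.0])     # counts at time 2
C = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # squared dist.

P = sinkhorn(a / a.sum(), b / b.sum(), C)
T = P / P.sum(axis=1, keepdims=True)   # row-normalize: transition probabilities
print(np.round(T, 3))
```

The squared-Euclidean cost acts as the constraint that makes the otherwise underdetermined movement problem identifiable, which is the role the abstract assigns to the cost function.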
Stochastic Optimization in The Power Management of Bottled Water Production Planning
NASA Astrophysics Data System (ADS)
Antoro, Budi; Nababan, Esther; Mawengkang, Herman
2018-01-01
This paper reviews a model developed to minimize production costs in bottled water production planning through stochastic optimization. Planning is the means by which management achieves the goals that have been set, and each management level in an organization needs planning activities. The model built is a two-stage stochastic model that aims to minimize the cost of bottled water production while accounting for power supply interruptions during the production process. The model was developed to minimize production cost under the assumption that the packing raw materials available are sufficient for each kind of bottle. The minimum cost for each kind of bottled water production is expressed as an expectation over scenarios, each weighted by its scenario probability. The uncertainty is represented by the number of production runs affected and the timing of power supply interruptions. This ensures that the number of interruptions that occur does not exceed the limit of the contract agreement the company has made with its power suppliers.
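A toy two-stage flavor of such a model (not the paper's actual formulation): choose a first-stage production quantity, let power-interruption scenarios cap the realized output, and minimize the expected production-plus-shortage cost over the scenario probabilities. All numbers below are invented:

```python
# Toy two-stage stochastic production sketch; scenario probabilities,
# capacities, demand, and unit costs are all invented for illustration.
import numpy as np

scenarios = [(0.6, 900.0), (0.3, 700.0), (0.1, 500.0)]  # (prob, capacity)
demand, c_prod, c_short = 800.0, 1.0, 4.0               # units and unit costs

def expected_cost(q):
    """Expected cost of planning first-stage quantity q."""
    cost = 0.0
    for p, cap in scenarios:
        produced = min(q, cap)                 # interruptions cap real output
        shortfall = max(0.0, demand - produced)
        cost += p * (c_prod * produced + c_short * shortfall)
    return cost

grid = np.arange(0, 1001, 10)                  # candidate production plans
q_best = min(grid, key=expected_cost)
print(q_best, expected_cost(q_best))
```

Here enumerating a grid stands in for the solver; the point is the structure of the objective, an expectation of second-stage (recourse) costs over interruption scenarios.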
Exponential bound in the quest for absolute zero
NASA Astrophysics Data System (ADS)
Stefanatos, Dionisis
2017-10-01
In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.
Exponential bound in the quest for absolute zero.
Stefanatos, Dionisis
2017-10-01
In most studies for the quantification of the third law of thermodynamics, the minimum temperature which can be achieved with a long but finite-time process scales as a negative power of the process duration. In this article, we use our recent complete solution for the optimal control problem of the quantum parametric oscillator to show that the minimum temperature which can be obtained in this system scales exponentially with the available time. The present work is expected to motivate further research in the active quest for absolute zero.
A finite-state, finite-memory minimum principle, part 2
NASA Technical Reports Server (NTRS)
Sandell, N. R., Jr.; Athans, M.
1975-01-01
In part 1 of this paper, a minimum principle was found for the finite-state, finite-memory (FSFM) stochastic control problem. In part 2, conditions for the sufficiency of the minimum principle are stated in terms of the informational properties of the problem. This is accomplished by introducing the notion of a signaling strategy. Then a min-H algorithm based on the FSFM minimum principle is presented. This algorithm converges, after a finite number of steps, to a person-by-person extremal solution.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-10
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and... Three Rivers, MI, Three Rivers Muni Dr. Haines, Takeoff Minimums and Obstacle DP, Orig Brainerd, MN, Brainerd Lakes Rgnl, ILS OR LOC/DME RWY 34, Amdt 1 Park Rapids, MN, Park Rapids Muni-Konshok Field, NDB RWY...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2013 CFR
2013-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2014 CFR
2014-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2012 CFR
2012-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to Each Catcher... Allocation and Annual Threshold Amount, and Percent Used To Calculate IPA Minimum Participation Assigned to... threshold amount of 13,516 Column H Percent used to calculate IPA minimum participation Vessel name USCG...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas
The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM) specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.
ERIC Educational Resources Information Center
Currents, 2000
2000-01-01
A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)
Brenneman, Susan K; Shen, Wei; Brekke, Lee; Paczkowski, Rosirene; Bancroft, Tim; Kaplan, Sherrie H; Greenfield, Sheldon; Berger, Marc; Buesching, Don P
2014-09-01
To assess the ability of ENterprising SElective Multi-instrument BLend for hEterogeneity analysis (ENSEMBLE) Minimum Dataset instrument dimensions to discriminate among subgroups of patients expected to have differential outcomes. Patients with Type 2 diabetes, knee osteoarthritis, ischemic heart disease or heart failure completed a survey designed to represent three dimensions (health, personality and behavior). Health-related outcomes and utilization were investigated using claims data. Discriminant validity and associations between the dimensions and outcomes were assessed. A total of 2625 patients completed the survey. The dimensions discriminated 50-100% of the outcome levels across disease cohorts; behavior dimension scores did not differ significantly among the healthcare utilization level subgroups in any disease cohort. ENSEMBLE Minimum Dataset dimensions discriminated health-related outcome levels among patients with varied diseases.
Minimum-Time Consensus-Based Approach for Power System Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Tao; Wu, Di; Sun, Yannan
2016-02-01
This paper presents minimum-time consensus-based distributed algorithms for power system applications, such as load shedding and economic dispatch. The proposed algorithms are capable of solving these problems in a minimum number of time steps, rather than only asymptotically as in most existing studies. Moreover, these algorithms are applicable to both undirected and directed communication networks. Simulation results are used to validate the proposed algorithms.
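For contrast with the minimum-time algorithms (whose construction is not given in the abstract), the conventional baseline they improve on is repeated weighted averaging, which reaches the network-wide average only asymptotically. A minimal sketch, with a hand-chosen weight matrix and illustrative node values:

```python
import numpy as np

# Baseline asymptotic average consensus on a 4-node path network.
# W is a symmetric, doubly stochastic weight matrix chosen by hand here;
# repeated averaging drives every node toward the global mean, but only
# asymptotically -- never in a finite number of steps.
W = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.50, 0.25, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.25, 0.75],
])
x = np.array([10.0, 20.0, 30.0, 40.0])   # illustrative local measurements
target = x.mean()                         # value all nodes should agree on
for _ in range(500):
    x = W @ x                             # one round of neighbor averaging
print(x, target)
```

After many rounds every entry of `x` is close to the mean, 25.0, but never exactly equal to it in finitely many steps; the paper's algorithms terminate exactly instead.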
Optimizing conceptual aircraft designs for minimum life cycle cost
NASA Technical Reports Server (NTRS)
Johnson, Vicki S.
1989-01-01
A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the choice of optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the minimum-TOGW or minimum-fuel aircraft did not always have the lowest life cycle cost when the number of engines was considered.
Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.
Bouhrara, Mustapha; Spencer, Richard G
2018-06-01
The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
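The Fisher-matrix-to-CRLB workflow described above can be sketched for the simplest (Gaussian-noise) case, which is exactly the case the paper argues is insufficient for multi-coil data. The mono-exponential model, parameter values, and estimator below are illustrative assumptions, not the paper's diffusion-kurtosis model:

```python
import numpy as np

# Hedged sketch, Gaussian noise only (the paper's point is that multi-coil
# noise instead follows a noncentral chi-distribution). Assumed model:
# S(b) = S0 * exp(-b * D), parameters theta = (S0, D).
rng = np.random.default_rng(0)
b = np.linspace(0.0, 2.0, 16)              # illustrative "b-values"
S0_true, D_true, sigma = 100.0, 1.0, 1.0

signal = S0_true * np.exp(-b * D_true)

# Fisher matrix for Gaussian noise: I = J^T J / sigma^2, J the Jacobian.
J = np.column_stack([np.exp(-b * D_true),                  # dS/dS0
                     -S0_true * b * np.exp(-b * D_true)])  # dS/dD
fisher = J.T @ J / sigma**2
crlb_sd = np.sqrt(np.diag(np.linalg.inv(fisher)))          # lower bound on SDs

# Monte Carlo comparison using a simple linearized least-squares estimator
# (one Gauss-Newton step from the true parameters -- illustrative only).
ests = []
for _ in range(2000):
    y = signal + rng.normal(0.0, sigma, b.size)
    step = np.linalg.lstsq(J, y - signal, rcond=None)[0]
    ests.append(np.array([S0_true, D_true]) + step)
mc_sd = np.array(ests).std(axis=0)
print(crlb_sd, mc_sd)
```

Because the estimator here is linear in the noise, the Monte Carlo SDs match the CRLB closely; the paper's contribution is the analogous Fisher-matrix calculation when the noise is noncentral χ rather than Gaussian.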
Geology and assessment of undiscovered oil and gas resources of the Yukon Flats Basin Province, 2008
Bird, Kenneth J.; Stanley, Richard G.; Moore, Thomas E.; Gautier, Donald L.
2017-12-22
The hydrocarbon potential of the Yukon Flats Basin Province in central Alaska was assessed in 2004 as part of an update to the National Oil and Gas Assessment. Three assessment units (AUs) were identified and assessed using a methodology somewhat different from that of the 2008 Circum-Arctic Resource Appraisal (CARA). An important difference in the methodology of the two assessments is that the 2004 assessment specified a minimum accumulation size of 0.5 million barrels of oil equivalent (MMBOE), whereas the 2008 CARA assessment specified a minimum size of 50 MMBOE. The 2004 assessment concluded that >95 percent of the estimated mean undiscovered oil and gas resources occur in a single AU, the Tertiary Sandstone AU. This is also the only AU of the three that extends north of the Arctic Circle. For the CARA project, the number of oil and gas accumulations in the 2004 assessment of the Tertiary Sandstone AU was re-evaluated in terms of the >50-MMBOE minimum accumulation size. By this analysis, and assuming the resource to be evenly distributed across the AU, 0.23 oil fields and 1.20 gas fields larger than 50 MMBOE are expected in the part of the AU north of the Arctic Circle. The geology suggests, however, that the area north of the Arctic Circle has a lower potential for oil and gas accumulations than the area to the south, where the sedimentary section is thicker, larger volumes of hydrocarbons may have been generated, and potential structural traps are probably more abundant. Because of the low potential implied for the area of the AU north of the Arctic Circle, the Yukon Flats Tertiary Sandstone AU was not quantitatively assessed for the 2008 CARA.
Low-noise encoding of active touch by layer 4 in the somatosensory cortex.
Hires, Samuel Andrew; Gutnisky, Diego A; Yu, Jianing; O'Connor, Daniel H; Svoboda, Karel
2015-08-06
Cortical spike trains often appear noisy, with the timing and number of spikes varying across repetitions of stimuli. Spiking variability can arise from internal (behavioral state, unreliable neurons, or chaotic dynamics in neural circuits) and external (uncontrolled behavior or sensory stimuli) sources. The amount of irreducible internal noise in spike trains, an important constraint on models of cortical networks, has been difficult to estimate, since behavior and brain state must be precisely controlled or tracked. We recorded from excitatory barrel cortex neurons in layer 4 during active behavior, where mice control tactile input through learned whisker movements. Touch was the dominant sensorimotor feature, with >70% spikes occurring in millisecond timescale epochs after touch onset. The variance of touch responses was smaller than expected from Poisson processes, often reaching the theoretical minimum. Layer 4 spike trains thus reflect the millisecond-timescale structure of tactile input with little noise.
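Sub-Poisson spiking variability of the kind described above is commonly quantified by the Fano factor, the ratio of spike-count variance to mean. A hedged sketch with synthetic counts (not the paper's recordings):

```python
import numpy as np

# Illustration of sub-Poisson variability via the Fano factor,
# var(count) / mean(count). Synthetic data, not the barrel cortex recordings.
rng = np.random.default_rng(1)
n_trials = 20000

poisson_counts = rng.poisson(lam=2.0, size=n_trials)          # Fano ~ 1
# Binomial counts are sub-Poisson: Fano = 1 - p, here ~ 0.5.
sub_poisson_counts = rng.binomial(n=4, p=0.5, size=n_trials)

def fano(counts):
    return counts.var() / counts.mean()

print(fano(poisson_counts), fano(sub_poisson_counts))
```

A Poisson process has Fano factor 1; the abstract's "smaller than expected from Poisson" responses correspond to Fano factors below 1, approaching the theoretical minimum set by the discreteness of spike counts.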
Opportunistic brood theft in the context of colony relocation in an Indian queenless ant
Paul, Bishwarup; Paul, Manabi; Annagiri, Sumana
2016-01-01
Brood is a very valuable part of an ant colony, and behaviours that increase its number with minimum investment are expected to be favoured by natural selection. Brood theft has been well documented in ants belonging to the subfamilies Myrmicinae and Formicinae. In this study we report opportunistic brood theft in the context of nest relocation in Diacamma indicum, belonging to the primitively eusocial subfamily Ponerinae. Pupae were the preferred stolen item both in laboratory conditions and in the natural habitat, and a small percentage of colony members, acting as thieves, stole about 12% of the brood of the victim colony. Stolen brood were not consumed but became slaves. We propose a new dimension to the risks of relocation in the form of brood theft by conspecific neighbours and speculate that examination of this phenomenon in other primitively eusocial species will help understand the origin of brood theft in ants. PMID:27796350
The gravitomagnetic interaction and its relationship to other relativistic gravitational effects
NASA Technical Reports Server (NTRS)
Nordtvedt, Kenneth
1991-01-01
To better understand the relationship between the expected precession rates of an orbiting gyroscope (GP-B) and other observable consequences in the solar system of relativistic, post-Newtonian gravity, a phenomenological model of post-Newtonian gravity was developed which presupposes the very minimum possible concerning the nature and foundations of the gravitational interaction. Solar system observations, chiefly interplanetary ranging, fix all the parameters in the phenomenological model to various levels of precision. This permits prediction of gyroscope precession rates to better than 10 percent accuracy. A number of new precession terms are calculated which would exist if gravity were not a metric field phenomenon, but this would clash with other empirical observations of post-Newtonian effects in gravity. It is shown that gravitomagnetism, the post-Newtonian gravitational correction to the interactions between moving matter, plays a ubiquitous role in determining a wide variety of gravitational effects, including the precession of orbiting gyroscopes.
A Search for r-Modes from 1825 to the Present
NASA Technical Reports Server (NTRS)
Wolff, Charles L.
1998-01-01
Global oscillations (r-modes) of the Sun's outer convective envelope with periods of approximately 1 month and longer have been detected in several short data strings of several years' duration. To test whether r-modes might persist beyond one 11-year cycle, the daily sunspot numbers from 1825 to the present were analyzed. Good evidence, though at a confidence level of less than 3 sigma, was found for most of the 14 r-modes with spherical harmonic index lambda less than or equal to 5 that can exist in the presence of solar differential rotation. The characteristic rotation rate of almost every such r-mode was detected, displaced systematically from its expected value by only 0.15%. If this probable detection is real, then most low-harmonic r-modes have lifetimes exceeding one century and the rotation of the Sun's outer layers varies by less than 0.05%, except possibly at solar minimum.
Dynamo theory prediction of solar activity
NASA Technical Reports Server (NTRS)
Schatten, Kenneth H.
1988-01-01
The dynamo theory technique to predict decadal-time-scale solar activity variations is introduced. The technique was developed following puzzling correlations involving geomagnetic precursors of solar activity. Based upon this, a dynamo theory method was developed to predict solar activity. The method was used successfully in solar cycle 21 by Schatten, Scherrer, Svalgaard, and Wilcox, after testing with 8 prior solar cycles. Schatten and Sofia used the technique to predict an exceptionally large cycle, peaking early (in 1990) with a sunspot value near 170, likely the second largest on record. Sunspot numbers are increasing, suggesting that: (1) a large cycle is developing, and (2) the cycle may even surpass the largest cycle (19). A Spörer butterfly method shows that the cycle can now be expected to peak in the latter half of 1989, consistent with an amplitude comparable to the value predicted near the last solar minimum.
78 FR 19734 - Notice of Proposed Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-02
... and Enforcement. ACTION: Notice and request for comments. SUMMARY: In compliance with the Paperwork... information for Permit Applications--Minimum Requirements for Legal, Financial, Compliance, and Related... expected burden and cost. DATES: Comments on the proposed information collection must be received by June 3...
Amphibole and Phlogopite Formation on the R Chondrite Parent Body: An Experimental Investigation
NASA Astrophysics Data System (ADS)
Lunning, N. G.; Waters, L. E.; McCoy, T. J.
2017-07-01
High-temperature hydrated minerals can form at the pressures and the temperatures expected for the interiors of planetesimals. Under water-saturated conditions, minimum silicate melting can initiate at temperatures as low as 870°C at 40 MPa.
10 CFR 706.2 - Basis and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ENERGY SECURITY POLICIES AND PRACTICES RELATING TO LABOR-MANAGEMENT RELATIONS General § 706.2 Basis and... objectives for labor-management relations in the DOE program, namely: (a) Wholehearted acceptance by... efficient management expected from DOE contractors; (e) Minimum interference with the traditional rights and...
Student Involvement Can Be Stressful: Implications and Interventions.
ERIC Educational Resources Information Center
Floerchinger, Debra S.
1988-01-01
Involvement on campus varies from student organization leadership positions to paid paraprofessional positions that also carry strong leadership expectations. A minimum amount of student development knowledge is essential for advisers to function successfully and to interact ethically with students. (MLW)
The Maximums and Minimums of a Polynomial, or Maximizing Profits and Minimizing Aircraft Losses.
ERIC Educational Resources Information Center
Groves, Brenton R.
1984-01-01
Plotting a polynomial over the range of real numbers when its derivative contains complex roots is discussed. The polynomials are graphed by calculating the minimums, maximums, and zeros of the function. (MNS)
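The graphing approach described can be sketched directly: extrema come from the real roots of the derivative, while complex derivative roots contribute none. The example polynomial below is my own choice, picked so that the derivative has one real and two complex roots:

```python
import numpy as np

# Sketch of the method: find extrema of a polynomial from the real roots of
# its derivative. Example (an assumption, not from the article):
# p(x) = x^4/4 - x^3/3 + x^2/2 - x, so p'(x) = (x - 1)(x^2 + 1),
# which has one real root (x = 1) and two complex roots (x = +/- i).
p = np.array([0.25, -1.0 / 3.0, 0.5, -1.0, 0.0])  # highest power first
dp = np.polyder(p)                                 # derivative coefficients
d2p = np.polyder(dp)                               # second derivative

roots = np.roots(dp)
real_crit = roots[np.abs(roots.imag) < 1e-9].real  # discard complex roots
for x in real_crit:
    kind = "minimum" if np.polyval(d2p, x) > 0 else "maximum"
    print(x, kind, np.polyval(p, x))
```

Despite being a quartic, this polynomial has a single extremum (a minimum at x = 1), because two of its derivative's three roots are complex.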
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
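The simplified comparison described above amounts to a one-sample t-test of the subsample mean against the standard-population mean. A hedged sketch with invented measurements (the population mean, sample values, and critical value are illustrative; t_crit is the tabulated two-sided value for df = 9, alpha = 0.05):

```python
import math

# One-sample t-test of n subsample measurements against the
# standard-population mean mu0. All numbers below are illustrative
# assumptions, not data from the till study.
mu0 = 45.0      # standard population mean (e.g., percent sand)
t_crit = 2.262  # tabulated two-sided t for df = 9, alpha = 0.05

measurements = [44.1, 46.3, 45.2, 44.8, 45.9, 44.5, 46.0, 45.1, 44.7, 45.6]
n = len(measurements)
mean = sum(measurements) / n
s = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
t = (mean - mu0) / (s / math.sqrt(n))

# |t| < t_crit corresponds to a point lying between the two hyperbola
# branches: the new measurements are consistent with the standard population.
reliable = abs(t) < t_crit
print(t, reliable)
```

Here |t| falls well inside the critical value, so the measurements would be judged reliable; a point outside the branches (|t| >= t_crit) would call for repeating the measurements.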
False CAM alarms from radon fluctuations.
Hayes, Robert
2003-11-01
The root cause of many false continuous air monitor (CAM) alarms is revealed for CAMs that use constant spectral shape assumptions in transuranic (TRU) alpha activity determination algorithms. This paper shows that when atmospheric radon levels continually decrease and bottom out at a minimum level, reduced false TRU count rates are not only expected but measured. Similarly, when the radon levels continually increase to a maximum level, elevated false TRU count rates were measured as predicted. The basis for expecting this dependence on changes in radon levels is discussed.
Effects of anchoring and adjustment in the evaluation of product pricing.
Elaad, Eitan; Sayag, Neta; Ezer, Aliya
2010-08-01
Anchoring and adjustment comprise a heuristic that creates expectations. Two types of anchors were applied on participants' evaluation of products: the price reference of the product (maximum, minimum, or no price reference) and the context in which the products were evaluated (the prestige of the shopping center). Results showed that both factors anchored evaluations of products' value. Context effects were explained by the different expectations of visitors in prestigious (looking for quality) and less prestigious (seeking a bargain) centers.
Death and rebirth of neural activity in sparse inhibitory networks
NASA Astrophysics Data System (ADS)
Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro
2017-05-01
Inhibition is a key aspect of neural dynamics, playing a fundamental role in the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategic for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e., in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
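The globally coupled "neuronal death" regime can be sketched with a minimal pulse-coupled model. This is my own toy construction under stated assumptions (leaky integrate-and-fire units, Euler integration, instantaneous mean-field inhibitory pulses), not the authors' model:

```python
import numpy as np

# Toy sketch of the globally coupled case: stronger inhibition g
# monotonically silences the less excitable neurons ("neuronal death").
# LIF dynamics dv/dt = a_i - v with threshold 1, reset 0; each spike
# delivers an instantaneous inhibitory kick -g/N to every neuron.
def active_neurons(g, n=100, t_end=200.0, dt=0.01, seed=2):
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.5, 1.5, n)          # heterogeneous excitabilities
    v = np.zeros(n)
    fired = np.zeros(n, dtype=bool)
    steps = int(t_end / dt)
    for step in range(steps):
        v += dt * (a - v)                 # Euler step of LIF dynamics
        spikes = v >= 1.0
        if spikes.any():
            v[spikes] = 0.0               # reset spiking neurons
            v -= g * spikes.sum() / n     # global inhibitory pulse
            if step > steps // 2:         # count only after transient
                fired |= spikes
    return int(fired.sum())

weak, strong = active_neurons(0.0), active_neurons(2.0)
print(weak, strong)
```

With this fully connected coupling the number of firing neurons can only decrease as g grows; the paper's point is that random pruning of connections breaks this monotonicity and produces a minimum followed by fluctuation-driven "rebirth".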
Annual plants change in size over a century of observations.
Leger, Elizabeth A
2013-07-01
Studies have documented changes in animal body sizes over the last century, but very little is known about changes in plant sizes, even though reduced plant productivity is potentially responsible for declines in size of other organisms. Here, I ask whether warming trends in the Great Basin have affected plant size by measuring specimens preserved on herbarium sheets collected between 1893 and 2011. I asked how maximum and minimum temperatures, precipitation, and the Pacific Decadal Oscillation (PDO) in the year of collection affected plant height, leaf size, and flower number, and asked whether changes in climate resulted in decreasing sizes for seven annual forbs. Species had contrasting responses to climate factors, and would not necessarily be expected to respond in parallel to climatic shifts. There were generally positive relationships between plant size and increased minimum and maximum temperatures, which would have been predicted to lead to small increases in plant sizes over the observation period. While one species increased in size and flower number over the observation period, five of the seven species decreased in plant height, four of these decreased in leaf size, and one species also decreased in flower production. One species showed no change. The mechanisms behind these size changes are unknown, and the limited data available on these species (germination timing, area of occupancy, relative abundance) did not explain why some species shrank while others grew or did not change in size over time. These results show that multiple annual forbs are decreasing in size, but that even within the same functional group, species may have contrasting responses to similar environmental stimuli. Changes in plant size could have cascading effects on other members of these communities, and differential responses to directional change may change the composition of plant communities over time. © 2013 Blackwell Publishing Ltd.
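The core analysis described, testing for a long-term size trend across dated herbarium specimens, reduces to regressing specimen size on collection year. A hedged sketch on synthetic data (the decline rate, noise level, and sample size are assumptions, not the study's measurements):

```python
import numpy as np

# Sketch of the style of analysis: ordinary least-squares regression of
# specimen height on collection year. Synthetic data only -- the true
# slope of -0.03 cm/yr and noise level are arbitrary assumptions.
rng = np.random.default_rng(3)
years = rng.integers(1893, 2012, size=200)                 # collection years
height = 20.0 - 0.03 * (years - 1893) + rng.normal(0.0, 2.0, years.size)

slope, intercept = np.polyfit(years, height, 1)
print(slope)   # a negative slope indicates shrinking plants over the record
```

In the study, five of seven species showed a negative trend of this kind in height, while one grew and one showed no change.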
Analysis of UAS DAA Surveillance in Fast-Time Simulations without DAA Mitigation
NASA Technical Reports Server (NTRS)
Thipphavong, David P.; Santiago, Confesor; Isaacson, David R.; Lee, Seung Man; Refai, Mohamad Said; Snow, James William
2015-01-01
Realization of the expected proliferation of Unmanned Aircraft System (UAS) operations in the National Airspace System (NAS) depends on the development and validation of performance standards for UAS Detect and Avoid (DAA) Systems. The RTCA Special Committee 228 is charged with leading the development of draft Minimum Operational Performance Standards (MOPS) for UAS DAA Systems. NASA, as a participating member of RTCA SC-228 is committed to supporting the development and validation of draft requirements for DAA surveillance system performance. A recent study conducted using NASA's ACES (Airspace Concept Evaluation System) simulation capability begins to address questions surrounding the development of draft MOPS for DAA surveillance systems. ACES simulations were conducted to study the performance of sensor systems proposed by the SC-228 DAA Surveillance sub-group. Analysis included but was not limited to: 1) number of intruders (both IFR and VFR) detected by all sensors as a function of UAS flight time, 2) number of intruders (both IFR and VFR) detected by radar alone as a function of UAS flight time, and 3) number of VFR intruders detected by all sensors as a function of UAS flight time. The results will be used by SC-228 to inform decisions about the surveillance standards of UAS DAA systems and future requirements development and validation efforts.
Syndromic surveillance system based on near real-time cattle mortality monitoring.
Torres, G; Ciaravino, V; Ascaso, S; Flores, V; Romero, L; Simón, F
2015-05-01
Early detection of an infectious disease incursion will minimize the impact of outbreaks in livestock. Syndromic surveillance based on the analysis of readily available data can enhance traditional surveillance systems and allow veterinary authorities to react in a timely manner. This study was based on monitoring the number of cattle carcasses sent for rendering in the veterinary unit of Talavera de la Reina (Spain). The aim was to develop a system to detect deviations from expected values which would signal unexpected health events. Historical weekly collected dead cattle (WCDC) time series, stabilized by the Box-Cox transformation and adjusted by the least squares method, were used to build a univariate cyclic regression model based on a Fourier transformation. Three different models, according to type of production system, were built to estimate the baseline expected number of WCDC. Two types of risk signals were generated: point risk signals, when the observed value was greater than the upper 95% confidence interval of the expected baseline, and cumulative risk signals, generated by a modified cumulative sum algorithm, when the cumulative sums of reported deaths were above the cumulative sum of expected deaths. Data from 2011 were used to prospectively validate the model, generating seven risk signals. None of them was correlated with infectious disease events, but some coincided in time with very high climatic temperatures recorded in the region. The harvest effect was also observed during the first week of the study year. Establishing appropriate risk-signal thresholds is a limiting factor of predictive models; thresholds need to be adjusted based on experience gained during use of the models. To increase the sensitivity and specificity of the predictions, epidemiological interpretation of non-specific risk signals should be complemented by other sources of information.
The methodology developed in this study can enhance other existing early detection surveillance systems. Syndromic surveillance based on mortality monitoring can reduce the detection time for certain disease outbreaks associated with mild mortality only detected at regional level. The methodology can be adapted to monitor other parameters routinely collected at farm level which can be influenced by communicable diseases. Copyright © 2015 Elsevier B.V. All rights reserved.
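The cyclic-regression baseline with a 95% upper band can be sketched on synthetic weekly counts. Everything below (seasonal amplitude, noise level, the injected excess-mortality week) is an illustrative assumption, not the Talavera de la Reina data:

```python
import numpy as np

# Hedged sketch of the point-risk-signal idea: fit a one-harmonic cyclic
# (Fourier) regression to weekly mortality counts and flag weeks exceeding
# the upper 95% band of the expected baseline. Synthetic data only.
rng = np.random.default_rng(4)
weeks = np.arange(156)                                  # three years, weekly
baseline = 50 + 10 * np.sin(2 * np.pi * weeks / 52)     # seasonal pattern
counts = baseline + rng.normal(0.0, 2.0, weeks.size)
counts[150] += 15                                       # injected excess mortality

# Design matrix: intercept plus the first annual harmonic.
X = np.column_stack([np.ones(weeks.size),
                     np.sin(2 * np.pi * weeks / 52),
                     np.cos(2 * np.pi * weeks / 52)])
coef, *_ = np.linalg.lstsq(X, counts, rcond=None)
expected = X @ coef
resid_sd = (counts - expected).std()
flags = np.flatnonzero(counts > expected + 1.96 * resid_sd)  # point risk signals
print(flags)
```

The injected excess-mortality week stands far above the band and is flagged; a handful of ordinary weeks may also cross the threshold, which is the specificity problem the abstract's closing caveat is about. A production system would add the cumulative-sum signals on top of this.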
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-25
..., individual SIAP and Takeoff Minimums and ODP copies may be obtained from: 1. FAA Public Inquiry Center (APA... number of SIAPs, their complex nature, and the need for a special format make their verbatim publication...
The double-lined spectroscopic binary Iota Pegasi
NASA Technical Reports Server (NTRS)
Fekel, F. C.; Tomkin, J.
1983-01-01
Reticon observations of the spectroscopic binary Iota Peg at 6430 A show the secondary star's weak but well defined lines. Determinations have accordingly been made of the secondary velocity curve as well as that of the primary, together with the orbits and the minimum masses of the two components. The minimum masses, 1.31 ± 0.02 and 0.81 ± 0.01 solar masses, are sufficiently close to the expected actual masses to suggest eclipses, despite the relatively long 10.2-day period. The spectral type of the secondary is estimated to be G8 V.
1982-08-01
Data (number of points: 1988)

    Channel        Minimum    Maximum
    1  PHMG        -130.13    130.00
    2  PS3         -218.12    294.77
    3  T3          -341.54    738.15
    4  T5          -464.78    623.47
    5  PT51          12.317   ...

Cruise and take-off mode data (number of points: 4137)

    Channel        Minimum    Maximum
    1  PHMG        -130.13    130.00
    2  PS3         -218.12    376.60
    3  T3          -482.72    ...
Implications of potential future grand solar minimum for ozone layer and climate
NASA Astrophysics Data System (ADS)
Arsenovic, Pavle; Rozanov, Eugene; Anet, Julien; Stenke, Andrea; Schmutz, Werner; Peter, Thomas
2018-03-01
Continued anthropogenic greenhouse gas (GHG) emissions are expected to cause further global warming throughout the 21st century. Understanding the role of natural forcings and their influence on global warming is thus of great interest. Here we investigate the impact of a recently proposed 21st-century grand solar minimum on atmospheric chemistry and climate using the SOCOL3-MPIOM chemistry-climate model with an interactive ocean. We examine five model simulations for the period 2000-2199, following the greenhouse gas concentration scenario RCP4.5 and a range of different solar forcings. The reference simulation is forced by perpetual repetition of solar cycle 23 until the year 2199. This reference is compared with grand solar minimum simulations, assuming strong declines in solar activity of 3.5 and 6.5 W m-2, respectively, which either last until 2199 or recover during the 22nd century. A decline in solar activity of 6.5 W m-2 is found to yield up to a doubling of the GHG-induced stratospheric and mesospheric cooling. Under the grand solar minimum scenario, tropospheric temperatures are also projected to decrease compared to the reference. On the global scale a reduced solar forcing compensates for at most 15 % of the expected greenhouse warming at the end of the 21st century and around 25 % at the end of the 22nd century. The regional effects are predicted to be significant, in particular in northern high-latitude winter. In the stratosphere, the reduction of around 15 % in incoming ultraviolet radiation leads to a decrease in ozone production of up to 8 %, which overcompensates for the anticipated ozone increase due to reduced stratospheric temperatures and an acceleration of the Brewer-Dobson circulation. This, in turn, delays the recovery of the total ozone column from anthropogenic halogen-induced depletion, with global ozone returning to pre-ozone-hole values only upon completion of the grand solar minimum.
Shrot, Yoav; Frydman, Lucio
2011-04-01
A topic of active investigation in 2D NMR concerns the minimum number of scans required to acquire this kind of spectrum, particularly when that number is dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling normally used to monitor the indirect domain, relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way down to one, by spatially encoding the indirect-domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains, often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensing reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of up to 80% in the gradient strengths demanded in this type of experiment. Copyright © 2011 Elsevier Inc. All rights reserved.
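One simple member of the iterative-reconstruction family the abstract invokes is ISTA (iterative soft-thresholding), which solves the l1-regularised least-squares problem underlying compressed sensing. The sketch below is generic, not the authors' reconstruction pipeline; the measurement matrix `A` and the parameters `lam` and `step` are assumptions:

```python
def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return [(abs(x) - t) * (1.0 if x > 0 else -1.0) if abs(x) > t else 0.0
            for x in v]

def ista(A, y, lam=0.05, step=0.1, iters=500):
    """Iterative soft-thresholding (ISTA): minimise
    0.5 * ||A x - y||^2 + lam * ||x||_1, recovering a sparse vector x
    from (possibly undersampled) measurements y.  step must stay below
    1 / ||A^T A|| for convergence."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y, then gradient A^T r
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * grad[j] for j in range(n)],
                           step * lam)
    return x
```

In the NMR setting, sparsity of the spectrum is what lets fewer (or weaker-gradient) measurements suffice; the l1 penalty encodes that prior.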
Teaching Labor Market Survey Methodology in Rehabilitation Counseling
ERIC Educational Resources Information Center
Barros-Bailey, Mary
2012-01-01
Labor Market Survey (LMS) and labor market analysis knowledge and methodologies are minimum competencies expected of rehabilitation counselors through credentialing and accreditation boards. However, LMS knowledge and methodology is an example of a contemporary oral tradition that is universally recognized in rehabilitation and disability services…
Investigation of the medical applications of the unique biocarbons developed by NASA
NASA Technical Reports Server (NTRS)
Mooney, V.
1976-01-01
Experience with 127 percutaneous implants in 43 patients and volunteers is discussed. Pure carbon has demonstrated the highest level of success. It is indicated that prolonged success of these implants can be expected if mechanical factors are reduced to a minimum.
7 CFR 636.9 - Cost-share agreements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF... a minimum duration of one year after the completion of conservation activities identified in the... O&M agreement that describes the O&M for each conservation activity and the agency expectation that...
7 CFR 636.9 - Cost-share agreements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF... a minimum duration of one year after the completion of conservation activities identified in the... O&M agreement that describes the O&M for each conservation activity and the agency expectation that...
7 CFR 636.9 - Cost-share agreements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF... a minimum duration of one year after the completion of conservation activities identified in the... O&M agreement that describes the O&M for each conservation activity and the agency expectation that...
7 CFR 636.9 - Cost-share agreements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES CONSERVATION SERVICE, DEPARTMENT OF... a minimum duration of one year after the completion of conservation activities identified in the... O&M agreement that describes the O&M for each conservation activity and the agency expectation that...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
38 CFR 36.4365 - Appraisal requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... statement must also give an estimate of the expected useful life of the roof, elevators, heating and cooling, plumbing and electrical systems assuming normal maintenance. A minimum of 10 years estimated remaining... operation of offsite facilities—(1) Title requirements. Evidence must be presented that the offsite facility...
Associative polymers bridging between layers of multilamellar vesicles.
NASA Astrophysics Data System (ADS)
Choi, Seo; Bhatia, Surita
2006-03-01
Multilamellar vesicles can be found in a variety of pharmaceutical formulations, personal care products, and home care products. Hydrophobically modified associative polymers are often used to stabilize the vesicles or to control the rheological properties of these formulations. The hydrophobic groups are expected to insert themselves into the vesicle bilayers. Recent experimental work shows that hydrophobically modified polymers may form bridges between vesicles or may bridge between layers of a single vesicle. The latter configuration forces an interlayer spacing roughly equal to the radius of gyration of the backbone between associative groups. We have performed simple mean-field calculations on ideal telechelic associative polymers between concentric spherical surfaces. We find that the free energy per chain has an attractive minimum when the layer spacing is approximately N^(1/2) l, consistent with experimental results. The depth of the minimum depends on both chain length and curvature and, as expected, when the curvature becomes small the result for telechelic chains between flat surfaces is recovered.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage.
Cadena, Brian C
2014-03-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants' location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, this mobility biases downward the demand elasticity estimated from teen employment; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents.
NASA Astrophysics Data System (ADS)
Castro, Andrew; Alice-Usa Collaboration; Alice-Tpc Collaboration
2017-09-01
The Time Projection Chamber (TPC) currently used by ALICE (A Large Ion Collider Experiment at CERN) is a gaseous tracking detector used to study both proton-proton and heavy-ion collisions at the Large Hadron Collider (LHC). In order to accommodate the higher-luminosity collisions planned for LHC Run 3 starting in 2021, the ALICE TPC will undergo a major upgrade during the next LHC shutdown. The TPC is limited to a readout rate of 1000 Hz in minimum bias events due to the intrinsic dead time associated with ion backflow in the multi-wire proportional chambers (MWPCs) of the TPC. The upgrade will handle the increase in event readout to 50 kHz for heavy-ion minimum bias triggered events expected at the Run 3 luminosity by replacing the MWPCs with stacks of four Gas Electron Multiplier (GEM) foils. The GEM layers will combine different hole pitches to reduce the dead time while maintaining the spatial and energy resolution of the existing TPC. Undertaking the upgrade of the TPC represents a massive endeavor in terms of design, production, construction, quality assurance, and installation; the upgrade is therefore coordinated over a number of institutes worldwide. The talk will cover the physics motivation for the upgrade, the ALICE-USA contribution to the construction of Inner Readout Chambers (IROCs), and QA from the first chambers built in the U.S.
Multiple-rule bias in the comparison of classification rules
Yousefi, Mohammadmahdi R.; Hua, Jianping; Dougherty, Edward R.
2011-01-01
Motivation: There is growing discussion in the bioinformatics community concerning overoptimism of reported results. Two approaches contributing to overoptimism in classification are (i) the reporting of results on datasets for which a proposed classification rule performs well and (ii) the comparison of multiple classification rules on a single dataset that purports to show the advantage of a certain rule. Results: This article provides a careful probabilistic analysis of the second issue and the ‘multiple-rule bias’, resulting from choosing a classification rule having minimum estimated error on the dataset. It quantifies this bias corresponding to estimating the expected true error of the classification rule possessing minimum estimated error and it characterizes the bias from estimating the true comparative advantage of the chosen classification rule relative to the others by the estimated comparative advantage on the dataset. The analysis is applied to both synthetic and real data using a number of classification rules and error estimators. Availability: We have implemented in C code the synthetic data distribution model, classification rules, feature selection routines and error estimation methods. The code for multiple-rule analysis is implemented in MATLAB. The source code is available at http://gsp.tamu.edu/Publications/supplementary/yousefi11a/. Supplementary simulation results are also included. Contact: edward@ece.tamu.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21546390
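The core of the multiple-rule bias can be illustrated with a toy Monte Carlo that abstracts each classification rule to a true error plus independent estimator noise. This is a simplification of the paper's probabilistic analysis, and every parameter below is illustrative:

```python
import random

def multiple_rule_bias(true_errors, est_sd=0.05, trials=20000, seed=1):
    """Monte Carlo sketch of multiple-rule bias.  Each candidate rule's
    estimated error is modelled as its true error plus independent
    Gaussian estimator noise; the reported rule is the one with the
    minimum estimated error.  Returns the average optimism: the true
    error of the chosen rule minus its (too small) reported estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        ests = [e + rng.gauss(0.0, est_sd) for e in true_errors]
        i = min(range(len(ests)), key=ests.__getitem__)
        total += true_errors[i] - ests[i]
    return total / trials
```

With five equally good rules the reported minimum estimate is systematically below the chosen rule's true error, and the optimism grows with the number of rules compared, even though each individual estimator is unbiased.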
On the Relation Between Spotless Days and the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2005-01-01
Spotless days are examined as a predictor for the size and timing of a sunspot cycle. For cycles 16-23 the first spotless day for a new cycle, which occurs during the decline of the old cycle, is found to precede the minimum amplitude of the new cycle by approximately 34 mo, with a range of 25-40 mo. Reports indicate that the first spotless day for cycle 24 occurred in January 2004, suggesting that minimum amplitude for cycle 24 should be expected before April 2007, probably sometime during the latter half of 2006. If true, then cycle 23 will be classified as a cycle of shorter period, further implying that cycle 24 likely will be a cycle of larger than average minimum and maximum amplitudes and faster than average rise, peaking sometime in 2010.
Deutsch, Eric W; Ball, Catherine A; Berman, Jules J; Bova, G Steven; Brazma, Alvis; Bumgarner, Roger E; Campbell, David; Causton, Helen C; Christiansen, Jeffrey H; Daian, Fabrice; Dauga, Delphine; Davidson, Duncan R; Gimenez, Gregory; Goo, Young Ah; Grimmond, Sean; Henrich, Thorsten; Herrmann, Bernhard G; Johnson, Michael H; Korb, Martin; Mills, Jason C; Oudes, Asa J; Parkinson, Helen E; Pascal, Laura E; Pollet, Nicolas; Quackenbush, John; Ramialison, Mirana; Ringwald, Martin; Salgado, David; Sansone, Susanna-Assunta; Sherlock, Gavin; Stoeckert, Christian J; Swedlow, Jason; Taylor, Ronald C; Walashek, Laura; Warford, Anthony; Wilkinson, David G; Zhou, Yi; Zon, Leonard I; Liu, Alvin Y; True, Lawrence D
2008-03-01
One purpose of the biomedical literature is to report results in sufficient detail that the methods of data collection and analysis can be independently replicated and verified. Here we present reporting guidelines for gene expression localization experiments: the minimum information specification for in situ hybridization and immunohistochemistry experiments (MISFISHIE). MISFISHIE is modeled after the Minimum Information About a Microarray Experiment (MIAME) specification for microarray experiments. Both guidelines define what information should be reported without dictating a format for encoding that information. MISFISHIE describes six types of information to be provided for each experiment: experimental design, biomaterials and treatments, reporters, staining, imaging data and image characterizations. This specification has benefited the consortium within which it was developed and is expected to benefit the wider research community. We welcome feedback from the scientific community to help improve our proposal.
Galactic CR in the Heliosphere according to NM data, 3. Results for even solar cycles 20 and 22.
NASA Astrophysics Data System (ADS)
Dorman, L.; Dorman, I.; Iucci, N.; Parisi, M.; Villoresi, G.; Zukerman, I.
We find that the maximum of the correlation coefficient between cosmic ray (CR) intensity and solar activity (SA) variations occurs for the even cycles 20 and 22 at a time lag about two to three times shorter than for the odd cycles 19 and 21. We conclude that this difference is caused by CR drift effects: during even cycles, drifts produce a small increase of the CR global modulation (additional to that caused by the convection-diffusion mechanism) in the period from minimum to maximum of SA, and from the maximum of SA to the minimum about the same decrease of CR modulation. This substantially decreases the observed time lag between CR and SA in even solar cycles. We analyze monthly and 11-month smoothed data of CR intensity observed by neutron monitors with different cut-off rigidities for the even solar cycles 20 and 22. We use a special model describing the connection between solar activity (characterized by monthly sunspot numbers) and the CR convection-diffusion global modulation, taking into account the time lag of processes in the Heliosphere relative to the active processes on the Sun. To take drifts into account we use models described in the literature. We first correct the observed long-term CR modulation for drifts with different amplitudes, from 0 (no drifts) through 0.15%, 0.25%, ... up to 4%. For each expected drift amplitude we determine the correlation coefficient between the expected CR variations and those observed by neutron monitors with different cut-off rigidities, for solar wind transport times from the Sun to the boundary of the modulation region of 1 to 60 average months (corresponding approximately to modulation-region dimensions of about 6 to 360 AU). We compare the observed results for even solar cycles 20 and 22.
Mathematics. Suggested Learner Outcomes: Grades 9-12.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Education, Oklahoma City.
This publication provides suggested learner outcomes for guiding instruction and evaluation of students in grades 9-12 in Oklahoma. The goals are intended to provide teachers, administrators, school boards, parents, and other concerned citizens with a clear understanding of expected minimum learner outcomes for each mathematics course. Teachers…
24 CFR 3280.603 - General requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... heat tape located on the underside of the manufactured home within 2 feet of the water supply inlet... service for a reasonable life expectancy. (2) Conservation. Water closets shall be selected and adjusted to use the minimum quantity of water consistent with proper performance and cleaning. (3) Connection...
28 CFR 545.24 - Inmate work conditions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 545.24 Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INSTITUTIONAL MANAGEMENT WORK... appropriate minimum standards for health and safety. Safety equipment is to be available where needed. (e) An inmate is expected to perform the work assignment in a safe manner, using safety equipment as instructed...
45 CFR 156.145 - Determination of minimum value.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 156.145 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO... the expected spending for health care costs in a benefit year so that: (i) Any current year HSA...
45 CFR 156.145 - Determination of minimum value.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 156.145 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO... the expected spending for health care costs in a benefit year so that: (i) Any current year HSA...
Quantifying the Impact of Additional Laboratory Tests on the Quality of a Geomechanical Model
NASA Astrophysics Data System (ADS)
Fillion, Marie-Hélène; Hadjigeorgiou, John
2017-05-01
In an open-pit mine operation, the design of safe and economically viable slopes can be significantly influenced by the quality and quantity of collected geomechanical data. In several mining jurisdictions, codes and standards are available for reporting exploration data, but similar codes or guidelines are not formally available or enforced for geotechnical design. Current recommendations suggest a target level of confidence in the rock mass properties used for slope design. As these guidelines are qualitative and somewhat subjective, questions arise regarding the minimum number of tests to perform in order to reach the proposed level of confidence. This paper investigates the impact of defining a priori the required number of laboratory tests to conduct on rock core samples, based on the geomechanical database of an operating open-pit mine in South Africa. To illustrate the process, this review focuses on uniaxial compressive strength properties. Available strength data for two project stages were analysed using small-sampling theory and the confidence interval approach. The results showed that, for some geotechnical domains, the number of specimens was too low to obtain a reliable strength value even when more specimens were tested than the minimum proposed by the ISRM Suggested Methods. Furthermore, the testing sequence used has an impact on the minimum number of specimens required. Current best practice cannot capture all possibilities regarding the geomechanical property distributions, and there is a demonstrated need for a method to determine the minimum number of specimens required while minimising the influence of the testing sequence.
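The confidence-interval sizing idea referenced above can be sketched with small-sampling (Student-t) theory: find the smallest n whose 95% confidence half-width t·s/√n falls within an allowable fraction of the mean. A minimal sketch, with an abbreviated t-table; the mean and standard deviation used in the test are illustrative inputs, not the mine's uniaxial compressive strength (UCS) data:

```python
import math

# Two-sided 95% Student-t critical values t(0.975, df), abbreviated table.
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 6: 2.447,
       7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228, 15: 2.131, 20: 2.086,
       25: 2.060, 30: 2.042}

def t95(df):
    """Nearest tabulated value at or below df (slightly conservative,
    since t decreases with df)."""
    usable = [k for k in sorted(T95) if k <= df]
    return T95[usable[-1]] if usable else T95[1]

def min_specimens(mean, sd, rel_error=0.10, max_n=200):
    """Smallest number of specimens n for which the 95% confidence
    half-width t * sd / sqrt(n) falls within rel_error * mean."""
    target = rel_error * mean
    for n in range(2, max_n + 1):
        if t95(n - 1) * sd / math.sqrt(n) <= target:
            return n
    return None
```

Because the sample sd is re-estimated as tests accumulate, the n this returns shifts with the order in which specimens are tested, which is the testing-sequence effect the paper highlights.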
Software Development Cost Estimation Executive Summary
NASA Technical Reports Server (NTRS)
Hihn, Jairus M.; Menzies, Tim
2006-01-01
Identify simple fully validated cost models that provide estimation uncertainty with cost estimate. Based on COCOMO variable set. Use machine learning techniques to determine: a) Minimum number of cost drivers required for NASA domain based cost models; b) Minimum number of data records required and c) Estimation Uncertainty. Build a repository of software cost estimation information. Coordinating tool development and data collection with: a) Tasks funded by PA&E Cost Analysis; b) IV&V Effort Estimation Task and c) NASA SEPG activities.
Cherner, M; Suarez, P; Lazzaretto, D; Fortuny, L Artiola I; Mindt, Monica Rivera; Dawes, S; Marcotte, Thomas; Grant, I; Heaton, R
2007-03-01
The large number of primary Spanish speakers both in the United States and the world makes it imperative that appropriate neuropsychological assessment instruments be available to serve the needs of these populations. In this article we describe the norming process for Spanish speakers from the U.S.-Mexico border region on the Brief Visuospatial Memory Test-Revised and the Hopkins Verbal Learning Test-Revised. We computed the rates of impairment that would be obtained by applying the original published norms for these tests to raw scores from the normative sample, and found substantial overestimates compared to expected rates. As expected, these overestimates were most salient at the lowest levels of education, given the under-representation of poorly educated subjects in the original normative samples. Results suggest that demographically corrected norms derived from healthy Spanish-speaking adults with a broad range of education are less likely to result in diagnostic errors. At minimum, demographic corrections for the tests in question should include the influence of literacy or education, in addition to the traditional adjustments for age. Because the age range of our sample was limited, the norms presented should not be applied to elderly populations.
An integrated hyperspectral and SAR satellite constellation for environment monitoring
NASA Astrophysics Data System (ADS)
Wang, Jinnian; Ren, Fuhu; Xie, Chou; An, Jun; Tong, Zhanbo
2017-09-01
A fully-integrated hyperspectral optical and SAR (Synthetic Aperture Radar) constellation of small earth observation satellites will be deployed over multiple launches during the next five years, beginning last December. The constellation is expected to comprise a minimum of 16 satellites (8 SAR and 8 optical) flying in two orbital planes, with each plane consisting of four satellite pairs equally spaced around the orbit plane. Each pair of satellites will consist of a hyperspectral/multispectral optical satellite and a high-resolution SAR satellite (X-band) flying in tandem. The constellation is expected to offer a number of innovative capabilities for environment monitoring. As a pre-launch experiment, two hyperspectral earth observation minisatellites, Spark 01 and 02, were launched as secondary payloads together with Tansat in December 2016 on a CZ-2D rocket. The satellites feature a wide-range hyperspectral imager. The ground resolution is 50 m, covering the spectral range from visible to near infrared (420 nm - 1000 nm) with a swath width of 100 km. The imager has an average spectral resolution of 5 nm with 148 channels, and a single satellite can obtain hyperspectral imagery covering 2.5 million km2 per day, for global coverage every 16 days. This paper describes the potential applications of constellation imagery in environment monitoring.
An estimate of the number of tropical tree species.
Slik, J W Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L; Bellingham, Peter J; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L M; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K; Chazdon, Robin L; Robin, Chazdon L; Clark, Connie; Clark, David B; Clark, Deborah A; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A O; Eisenlohr, Pedro V; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A; Joly, Carlos A; de Jong, Bernardus H J; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F; Lawes, Michael J; Amaral, Ieda Leao do; Letcher, Susan G; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H; Meilby, Henrik; Melo, Felipe P L; Metcalfe, Daniel J; Medjibe, Vincent P; Metzger, Jean Paul; Millet, Jerome; Mohandass, D; Montero, Juan C; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, 
Ekananda; Permana, Andrea; Piedade, Maria T F; Pitman, Nigel C A; Poorter, Lourens; Poulsen, Axel D; Poulsen, John; Powers, Jennifer; Prasad, Rama C; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; Dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A; Santos, Fernanda; Sarker, Swapan K; Satdichanh, Manichanh; Schmitt, Christine B; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I-Fang; Sunderland, Terry; Sunderand, Terry; Suresh, H S; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L C H; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A; Webb, Campbell O; Whitfeld, Timothy; Wich, Serge A; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C Yves; Yap, Sandra L; Yoneda, Tsuyoshi; Zahawi, Rakan A; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L; Garcia Luize, Bruno; Venticinque, Eduardo M
2015-06-16
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher's alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼ 40,000 and ∼ 53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼ 19,000-25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼ 4,500-6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa.
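The estimate above rests on Fisher's log-series, under which the expected species count among N stems is S = alpha * ln(1 + N/alpha). A minimal sketch of the fitting step: solve that relation for alpha by bisection (the left side is increasing in alpha, so a simple bracket works), then the same formula extrapolates to a larger stem total. The pantropical stem total the paper extrapolates to is not reproduced here.

```python
import math

def fishers_alpha(S, N, lo=1.0, hi=1e7):
    """Solve S = alpha * ln(1 + N / alpha) for Fisher's alpha by
    bisection; f(alpha) = alpha * ln(1 + N/alpha) - S is increasing
    in alpha, so the bracket [lo, hi] shrinks monotonically."""
    f = lambda a: a * math.log(1.0 + N / a) - S
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def expected_species(alpha, N):
    """Expected number of species among N stems under Fisher's log-series."""
    return alpha * math.log(1.0 + N / alpha)
```

With the inventory totals quoted in the abstract (11,371 species among 657,630 stems) the fitted alpha comes out near 2,000; applying `expected_species` to a pantropical stem total is what yields richness estimates in the ~40,000-53,000 range.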
Organizational Deviance and Multi-Factor Leadership
ERIC Educational Resources Information Center
Aksu, Ali
2016-01-01
Organizational deviant behaviors can be defined as behaviors that deviate from standards and run counter to the organization's expectations. Since such behaviors are thought to damage the organization, keeping deviant behaviors to a minimum is necessary for a healthy organization. The aim of this research is…
Positive Classroom Environments = Positive Academic Results
ERIC Educational Resources Information Center
Wilson-Fleming, LaTerra; Wilson-Younger, Dylinda
2012-01-01
This article discusses the effects of a positive classroom environment and its impact on student behavior and achievement. It also provides strategies for developing expectations for student achievement and the importance of parental involvement. A positive classroom environment is essential in keeping behavior problems to a minimum. There are a…
47 CFR 64.604 - Mandatory minimum standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... from a location primarily used as his or her home. (iv) A VRS provider leasing or licensing an... determined annually by the Commission. The contribution factor shall be based on the ratio between expected... who discloses to a designated manager of the provider, the Commission, the TRS Fund administrator or...
Code of Federal Regulations, 2010 CFR
2010-10-01
... within the expected Federal and non-Federal cash resources. (l) For those entities where CMIA and its... INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS Post-Award Requirements § 19.22... established in section § 19.21. (2) Cash advances to a recipient organization shall be limited to the minimum...
A Comparison of Item Selection Techniques for Testlets
ERIC Educational Resources Information Center
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.
2010-01-01
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
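The first of the selection criteria compared above, maximum Fisher information, has a compact closed form under the two-parameter logistic (2PL) model: I(theta) = a^2 * P * (1 - P). A minimal sketch of item-level selection; the testlet variants in the study instead aggregate information (or posterior quantities) over the items of a testlet:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P),
    maximised when the examinee's ability theta equals b."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def pick_item(theta_hat, items):
    """Maximum-Fisher-information selection: choose the candidate item
    (a, b) that is most informative at the current ability estimate."""
    return max(items, key=lambda ab: item_information(theta_hat, *ab))
```

For equal discriminations this always picks the item whose difficulty is closest to the current ability estimate, which is why information-based CATs adapt difficulty to the examinee.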
Okeke, Claudia C; Allen, Loyd V
2009-01-01
The standard operating procedures suggested in this article are presented to compounding pharmacies to ensure the quality of the environment in which a CSP is prepared. Since United States Pharmacopeia Chapter 797 provides minimum standards, each facility should aim for a best-practice gold standard. The standard operating procedures should be tailored to meet the expectations and design of each facility. Compounding personnel are expected to know and understand each standard operating procedure to allow for complete execution of the procedures.
Government mandates and employer-sponsored health insurance: who is still not covered?
Vanness, David J; Wolfe, Barbara L
2002-06-01
We characterize employer-sponsored health insurance offering strategies in light of benefit non-discrimination and minimum wage regulation when workers have heterogeneous earnings and partially unobservable demand for (and cost of) insurance. We then empirically examine how earnings and expected medical expenses are associated with low wage workers' ability to obtain insurance before and after enactment of federal benefit non-discrimination rules. We find no evidence that the non-discrimination rules helped low wage workers (especially those with high own or children's expected medical expenses) to obtain insurance.
An Analysis of Minimum System Requirements to Support Computerized Adaptive Testing.
1986-09-01
This paper discusses the minimum system requirements needed to develop a computerized adaptive test (CAT). It lists some of the benefits of adaptive testing and establishes a set of…
Self-aligned quadruple patterning using spacer on spacer integration optimization for N5
NASA Astrophysics Data System (ADS)
Thibaut, Sophie; Raley, Angélique; Mohanty, Nihar; Kal, Subhadeep; Liu, Eric; Ko, Akiteru; O'Meara, David; Tapily, Kandabara; Biolsi, Peter
2017-04-01
To meet scaling requirements, the semiconductor industry has extended 193 nm immersion lithography beyond its minimum pitch limitation using multiple patterning schemes such as self-aligned double patterning, self-aligned quadruple patterning (SAQP) and litho-etch/litho-etch iterations. These techniques have been implemented in numerous variants over the last few years. Spacer-on-spacer pitch-splitting integration has been shown to offer multiple advantages over the conventional pitch-splitting approach. Reducing the number of pattern transfer steps associated with sacrificial layers results in a significant decrease in cost and an overall simplification of the double pitch-split technique. While attractive, the SAQP spacer-on-spacer flow brings challenges of its own, namely material set selection, etch chemistry development for adequate selectivities, and mandrel and spacer shape engineering to improve edge placement error (EPE). In this paper we follow up and extend upon our previous learning and examine in more detail the robustness of the integration with regard to final pattern transfer and full-wafer critical dimension uniformity. Furthermore, since the number of intermediate steps is reduced, one expects improved uniformity and pitch walking control. This assertion is verified through a thorough pitch walking analysis.
A clustering-based fuzzy wavelet neural network model for short-term load forecasting.
Kodogiannis, Vassilis S; Amina, Mahdi; Petrounias, Ilias
2013-10-01
Load forecasting is a critical element of power system operation, involving prediction of the future level of demand to serve as the basis for supply and demand planning. This paper presents the development of a novel clustering-based fuzzy wavelet neural network (CB-FWNN) model and validates its predictions on short-term electric load forecasting for the power system of the Greek island of Crete. The proposed model is obtained from the traditional Takagi-Sugeno-Kang fuzzy system by replacing the THEN part of the fuzzy rules with a "multiplication" wavelet neural network (MWNN). Multidimensional Gaussian activation functions are used in the IF part of the fuzzy rules. A fuzzy subtractive clustering scheme is employed as a pre-processing technique to find an adequate initial set and number of clusters, and ultimately the number of multiplication nodes in the MWNN, while Gaussian mixture models fitted with the Expectation-Maximization algorithm are used to define the multidimensional Gaussians. The results corresponding to the minimum and maximum power load indicate that the proposed load forecasting model provides significantly accurate forecasts compared to conventional neural network models.
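The Expectation-Maximization fit mentioned above can be sketched for a minimal two-component, one-dimensional Gaussian mixture. This is an illustration of the general EM technique, not the paper's model: the data and starting values below are synthetic, and the paper's mixtures are multidimensional.

```python
import math
import random

random.seed(0)
# Synthetic 1-D data: two well-separated Gaussian clusters (illustrative only)
data = [random.gauss(0.0, 1.0) for _ in range(300)] + \
       [random.gauss(5.0, 1.0) for _ in range(300)]

# Hypothetical starting values for means, standard deviations and weights
mu = [-1.0, 6.0]
sigma = [1.0, 1.0]
pi = [0.5, 0.5]

def pdf(x, m, s):
    """Gaussian density at x with mean m and standard deviation s."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(50):
    # E-step: responsibility of each component for each data point
    resp = []
    for x in data:
        w = [pi[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
        tot = sum(w)
        resp.append([wk / tot for wk in w])
    # M-step: re-estimate weights, means and standard deviations
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(
            sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)
```

After convergence the estimated means land near the true cluster centers (0 and 5) and the mixture weights remain a valid probability distribution.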
Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks
NASA Astrophysics Data System (ADS)
Tsakmalis, Anestis; Chatzinotas, Symeon; Ottersten, Bjorn
2018-02-01
In this paper, a sequential probing method for interference constraint learning is proposed to allow a centralized Cognitive Radio Network (CRN) to access the frequency band of a Primary User (PU) in an underlay cognitive scenario with a designed PU protection specification. The main idea is that the CRN probes the PU and subsequently eavesdrops on the reverse PU link to acquire the binary ACK/NACK packet. This feedback indicates whether the probing-induced interference is harmful or not and can be used to learn the PU interference constraint. The cognitive part of this sequential probing process is the selection of the power levels of the Secondary Users (SUs), which aims to learn the PU interference constraint with a minimum number of probing attempts while setting a limit on the number of harmful probing-induced interference events, or equivalently of NACK packet observations, over a time window. This constrained design problem is studied within the Active Learning (AL) framework, and an optimal solution is derived and implemented with a sophisticated, accurate and fast Bayesian learning method, Expectation Propagation (EP). The performance of this solution is demonstrated through numerical simulations and compared with modified versions of AL techniques we developed in earlier work.
Recent Immigrants as Labor Market Arbitrageurs: Evidence from the Minimum Wage*
Cadena, Brian C.
2014-01-01
This paper investigates the local labor supply effects of changes to the minimum wage by examining the response of low-skilled immigrants’ location decisions. Canonical models emphasize the importance of labor mobility when evaluating the employment effects of the minimum wage; yet few studies address this outcome directly. Low-skilled immigrant populations shift toward labor markets with stagnant minimum wages, and this result is robust to a number of alternative interpretations. This mobility provides behavior-based evidence in favor of a non-trivial negative employment effect of the minimum wage. Further, it reduces the estimated demand elasticity using teens; employment losses among native teens are substantially larger in states that have historically attracted few immigrant residents. PMID:24999288
Kloster, Stine; Danquah, Ida Høgstedt; Holtermann, Andreas; Aadahl, Mette; Tolstrup, Janne Schurmann
2017-01-01
Harmful health effects associated with sedentary behavior may be attenuated by breaking up long periods of sitting by standing or walking. However, studies assess interruptions in sitting time differently, making comparisons between studies difficult. It has not previously been described how the definition of minimum break duration affects sitting outcomes. Therefore, the aim was to address how definitions of break length affect total sitting time, number of sit-to-stand transitions, prolonged sitting periods and time accumulated in prolonged sitting periods among office workers. Data were collected from 317 office workers. Thigh position was assessed with an ActiGraph GT3X+ fixed on the right thigh. Data were exported with varying bout length of breaks. Afterward, sitting outcomes were calculated for the respective break lengths. Absolute numbers of sit-to-stand transitions decreased, and number of prolonged sitting periods and total time accumulated in prolonged sitting periods increased, with increasing minimum break length. Total sitting time was not influenced by varying break length. The definition of minimum break length influenced the sitting outcomes with the exception of total sitting time. A standard definition of break length is needed for comparison and interpretation of studies in the evolving research field of sedentary behavior.
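A toy calculation shows the study's central point: the minimum-break definition changes the count of sitting interruptions but leaves total sitting time untouched. The posture record below is made up for illustration; it is not the ActiGraph data or the authors' processing code.

```python
# One hypothetical activity record as (posture, duration_in_minutes) pairs
record = [('sit', 40), ('stand', 0.5), ('sit', 30), ('stand', 10),
          ('sit', 65), ('stand', 2), ('sit', 20)]

def count_breaks(record, min_break_min):
    """Count sitting interruptions lasting at least min_break_min minutes."""
    return sum(1 for posture, dur in record
               if posture == 'stand' and dur >= min_break_min)

def total_sitting(record):
    """Total sitting time, which does not depend on the break definition."""
    return sum(dur for posture, dur in record if posture == 'sit')

# Raising the minimum break length from 15 s to 5 min shrinks the break
# count (3 -> 2 -> 1 here), while total sitting time stays at 155 minutes.
```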
Fernandez, E; Williams, D G
2009-10-01
The implementation of the European Working Time Directive (WTD) has reduced the hours worked by trainees in the UK to a maximum of 56 h per week. With a further and final reduction to 48 h per week scheduled for August 2009, there is concern amongst doctors about the impact on training and on patient care. Paediatric anaesthesia is one of the specialist areas of anaesthesia for which the Royal College of Anaesthetists (RCoA) recommends a minimum caseload during the period of advanced training. We conducted a retrospective analysis of theatre logbook data from 62 Specialist Registrars (SpRs) who had completed a 12 month period of advanced training in paediatric anaesthesia in our institution between 2000 and 2007. After the implementation of the WTD 56 h week in 2004, the mean total number of cases performed by SpRs per year decreased from 441 to 336, a 24% reduction. We found a statistically significant reduction across all age groups with the largest reduction in the under 1 month of age group. The post-WTD group did not meet the RCoA recommended total minimum caseload or the minimum number of cases of <1 yr of age. Since the implementation of the WTD, there has been a significant reduction in the number of cases performed by SpRs in paediatric anaesthesia and they are no longer achieving the RCoA recommended minimum numbers for advanced training.
Longevity in Calumma parsonii, the World's largest chameleon.
Tessa, Giulia; Glaw, Frank; Andreone, Franco
2017-03-01
Large body size of ectothermic species can be correlated with high life expectancy. We assessed the longevity of the World's largest chameleon, the Parson's chameleon Calumma parsonii from Madagascar, by using skeletochronology of phalanges taken from preserved specimens held in European natural history museums. Due to high bone resorption, we can provide only the minimum age of each specimen. The highest minimum age detected was nine years for a male and eight years for a female, confirming that this species is considerably long-lived among chameleons. Our data also show a strong correlation between snout-vent length and estimated age. Copyright © 2017 Elsevier Inc. All rights reserved.
Optimal Trajectories For Orbital Transfers Using Low And Medium Thrust Propulsion Systems
NASA Technical Reports Server (NTRS)
Cobb, Shannon S.
1992-01-01
For many problems it is reasonable to expect that the minimum time solution is also the minimum fuel solution. However, if the propulsion system is allowed to be turned off and back on, these two solutions may differ. In general, high thrust transfers resemble the well-known impulsive transfers, where the burn arcs are of very short duration. Low and medium thrust transfers differ in that their thrust acceleration levels yield longer burn arcs requiring more revolutions, making low thrust transfers computationally intensive. Here, we consider optimal low and medium thrust orbital transfers.
Electron affinity of perhalogenated benzenes: A theoretical DFT study
NASA Astrophysics Data System (ADS)
Volatron, François; Roche, Cécile
2007-10-01
The potential energy surfaces (PES) of unsubstituted and perhalogenated benzene anions (C6X6-, X = F, Cl, Br, and I) were explored by means of DFT-B3LYP calculations. In the F and Cl cases, seven extrema were located and characterized. In the Br and I cases, only one minimum and two extrema were found. In each case the minimum was recomputed at the CCSD(T) level. The electron affinities of C6X6 were calculated (ZPE included). The results agree well with the experimental determinations when available. The values obtained in the X = Br and X = I cases are expected to be valuable predictions.
NASA Astrophysics Data System (ADS)
Farhang, Nastaran; Safari, Hossein; Wheatland, Michael S.
2018-05-01
Solar flares are an abrupt release of magnetic energy in the Sun's atmosphere due to reconnection of the coronal magnetic field. This occurs in response to turbulent flows at the photosphere that twist the coronal field. Like earthquakes, solar flares represent the behavior of a complex system, and their energy distribution is expected to follow a power law. We present a statistical model based on the principle of minimum energy in a coronal loop undergoing magnetic reconnection, described as an avalanche process. We show that the distribution of peaks for the flaring events in this self-organized critical system is scale-free. The obtained power-law index of 1.84 ± 0.02 for the peaks is in good agreement with satellite observations of soft X-ray flares. The principle of minimum energy can be applied to general avalanche models to describe many other phenomena.
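As a sketch of how a power-law index like the quoted 1.84 can be recovered from event peaks, the standard maximum-likelihood (Hill) estimator can be applied. The sample below is simulated from a known power law, not flare data, and this is a generic estimation recipe rather than the authors' analysis pipeline.

```python
import math
import random

random.seed(1)
alpha_true, xmin = 1.84, 1.0

# Inverse-transform sampling from p(x) ~ x^(-alpha) for x >= xmin:
# with u uniform in [0, 1), x = xmin * (1 - u)^(-1 / (alpha - 1))
sample = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
          for _ in range(20000)]

def powerlaw_mle(xs, xmin):
    """Maximum-likelihood index: alpha = 1 + n / sum(ln(x / xmin))."""
    return 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)

alpha_hat = powerlaw_mle(sample, xmin)
```

With 20,000 events the estimator's standard error is roughly (alpha - 1)/sqrt(n), about 0.006 here, so the recovered index sits close to the true 1.84.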
Antidepressant treatment of depression in rural nursing home residents.
Kerber, Cindy Sullivan; Dyck, Mary J; Culp, Kennith R; Buckwalter, Kathleen
2008-09-01
Under-diagnosis and under-treatment of depression are major problems in nursing home residents. The purpose of this study was to determine antidepressant use among nursing home residents who were diagnosed with depression using three different methods: (1) the Geriatric Depression Scale, (2) Minimum Data Set, and (3) primary care provider assessments. As one would expect, the odds of being treated with an antidepressant were about eight times higher for those diagnosed as depressed by the primary care provider compared to the Geriatric Depression Scale or the Minimum Data Set. Men were less likely to be diagnosed and treated with antidepressants by their primary care provider than women. Depression detected by nurses through the Minimum Data Set was treated at a lower rate with antidepressants, which generates issues related to interprofessional communication, nursing staff communication, and the need for geropsychiatric role models in nursing homes.
Little or no solar wind enters Venus' atmosphere at solar minimum.
Zhang, T L; Delva, M; Baumjohann, W; Auster, H-U; Carr, C; Russell, C T; Barabash, S; Balikhin, M; Kudela, K; Berghofer, G; Biernat, H K; Lammer, H; Lichtenegger, H; Magnes, W; Nakamura, R; Schwingenschuh, K; Volwerk, M; Vörös, Z; Zambelli, W; Fornacon, K-H; Glassmeier, K-H; Richter, I; Balogh, A; Schwarzl, H; Pope, S A; Shi, J K; Wang, C; Motschmann, U; Lebreton, J-P
2007-11-29
Venus has no significant internal magnetic field, which allows the solar wind to interact directly with its atmosphere. A field is induced in this interaction, which partially shields the atmosphere, but we have no knowledge of how effective that shield is at solar minimum. (Our current knowledge of the solar wind interaction with Venus is derived from measurements at solar maximum.) The bow shock is close to the planet, meaning that it is possible that some solar wind could be absorbed by the atmosphere and contribute to the evolution of the atmosphere. Here we report magnetic field measurements from the Venus Express spacecraft in the plasma environment surrounding Venus. The bow shock under low solar activity conditions seems to be in the position that would be expected from a complete deflection by a magnetized ionosphere. Therefore little solar wind enters the Venus ionosphere even at solar minimum.
SOLAR CYCLE 25: ANOTHER MODERATE CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, R. H.; Schüssler, M.; Jiang, J., E-mail: cameron@mps.mpg.de
2016-06-01
Surface flux transport simulations for the descending phase of Cycle 24 using random sources (emerging bipolar magnetic regions) with empirically determined scatter of their properties provide a prediction of the axial dipole moment during the upcoming activity minimum together with a realistic uncertainty range. The expectation value for the dipole moment around 2020 (2.5 ± 1.1 G) is comparable to that observed at the end of Cycle 23 (about 2 G). The empirical correlation between the dipole moment during solar minimum and the strength of the subsequent cycle thus suggests that Cycle 25 will be of moderate amplitude, not much higher than that of the current cycle. However, the intrinsic uncertainty of such predictions resulting from the random scatter of the source properties is considerable and fundamentally limits the reliability with which such predictions can be made before activity minimum is reached.
Refractive Index Dispersion in Ternary Germanate Glasses
NASA Astrophysics Data System (ADS)
Sakaguchi, Shigeki; Todoroki, Shinichi; Rigout, Nathalie
1995-10-01
The refractive index dispersion in germanate oxyfluoride glasses of GeO2-P2O5-MF2 (M=Ca, Zn), which are developed for optical fiber applications, is investigated in the 0.4-4 µm wavelength range by the minimum deviation method. The prepared glasses have a GeO2 content varying from 80 to 30 mol%. The dispersion curves for these glasses tend to shift to shorter wavelengths as the GeO2 content is decreased. Material dispersions are also derived from the refractive index measurements, and the zero-material-dispersion wavelengths (λ0) are found in the vicinity of 1.5 µm. On the basis of the empirical relationship between λ0 and the minimum loss wavelength (λmin), the λmin values are located at around 1.8 µm. A minimum loss as low as 0.08 dB/km is expected for the present germanate glasses.
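Locating a zero-material-dispersion wavelength numerically amounts to finding where the curvature d²n/dλ² of the refractive index changes sign. The abstract does not give Sellmeier coefficients for the germanate glasses, so the sketch below uses the well-known fused silica coefficients as a stand-in material (its λ0 is near 1.27 µm rather than the 1.5 µm reported above).

```python
import math

# Sellmeier coefficients for fused silica (stand-in material, not the
# germanate glasses of the paper); C values are resonance wavelengths
# squared, in µm².
B = [0.6961663, 0.4079426, 0.8974794]
C = [0.0684043**2, 0.1162414**2, 9.896161**2]

def n(lam):
    """Refractive index at wavelength lam (µm) from the Sellmeier equation."""
    lam2 = lam * lam
    return math.sqrt(1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C)))

def d2n(lam, h=1e-3):
    """Second derivative d²n/dλ² by central finite differences."""
    return (n(lam + h) - 2.0 * n(lam) + n(lam - h)) / (h * h)

# Material dispersion D is proportional to -(λ/c) d²n/dλ², so the
# zero-material-dispersion wavelength is where d²n/dλ² crosses zero.
lam = 1.0
while d2n(lam) > 0.0:
    lam += 0.001
lambda_zd = lam  # about 1.27 µm for fused silica
```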
77 FR 8896 - Notice of Proposed Information Collection for 1029-0036
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
OSM is requesting a 3-year extension of the information collection request for Surface Mining Permit Applications--Minimum Requirements for Reclamation and Operation Plan. OMB Control Number: 1029-0036.
Proposal for Support of Miami Inner City Marine Summer Intern Program, Dade County.
1987-12-21
NUMBER OF POSITIONS: one. MINIMUM AGE: 16. SPECIAL REQUIREMENTS: General Science; basic knowledge of library procedures; an interest in library science is helpful; minimum grade point average 3.0. DRESS REQUIREMENTS: discuss with employer. JOB DESCRIPTION: catalogs and files new sets of…
76 FR 34294 - Proposed Collection; Comment Request for Form 8827
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
The IRS is soliciting comments concerning Form 8827, Credit for Prior Year Minimum Tax--Corporations. Written comments should be received on or before August 12, 2011 to be assured of consideration.
78 FR 29433 - Proposed Collection; Comment Request for Form 8801
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-20
Under the Paperwork Reduction Act, Public Law 104-13 (44 U.S.C. 3506(c)(2)(A)), the IRS is soliciting comments concerning Form 8801, Credit For Prior Year Minimum Tax--Individuals, Estates and Trusts. Written comments are requested.
NASA Technical Reports Server (NTRS)
Schatten, K. H.; Scherrer, P. H.; Svalgaard, L.; Wilcox, J. M.
1978-01-01
On physical grounds it is suggested that the sun's polar field strength near a solar minimum is closely related to the following cycle's solar activity. Four methods of estimating the sun's polar magnetic field strength near solar minimum are employed to provide an estimate of cycle 21's yearly mean sunspot number at solar maximum of 140 plus or minus 20. This estimate is considered to be a first order attempt to predict the cycle's activity using one parameter of physical importance.
The number counts and infrared backgrounds from infrared-bright galaxies
NASA Technical Reports Server (NTRS)
Hacking, P. B.; Soifer, B. T.
1991-01-01
Extragalactic number counts and diffuse backgrounds at 25, 60, and 100 microns are predicted using new luminosity functions and improved spectral-energy distribution density functions derived from IRAS observations of nearby galaxies. Galaxies at redshifts z less than 3 that are like those in the local universe should produce a minimum diffuse background of 0.0085, 0.038, and 0.13 MJy/sr at 25, 60, and 100 microns, respectively. Models with significant luminosity evolution predict backgrounds about a factor of 4 greater than this minimum.
Sunspot variation and selected associated phenomena: A look at solar cycle 21 and beyond
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1982-01-01
Solar sunspot cycles 8 through 21 are reviewed. Mean time intervals are calculated for maximum to maximum, minimum to minimum, minimum to maximum, and maximum to minimum phases for cycles 8 through 20 and 8 through 21. Simple cosine functions with a period of 132 years are compared to, and found to be representative of, the variation of smoothed sunspot numbers at solar maximum and minimum. A comparison of cycles 20 and 21 is given, leading to a projection for activity levels during the Spacelab 2 era (tentatively, November 1984). A prediction is made for cycle 22. Major flares are observed to peak several months subsequent to the solar maximum during cycle 21 and to be at minimum level several months after the solar minimum. Additional remarks are given for flares, gradual rise and fall radio events and 2800 MHz radio emission. Certain solar activity parameters, especially as they relate to the near term Spacelab 2 time frame are estimated.
Daviaud, Emmanuelle; Chopra, Mickey
2008-01-01
To quantify staff requirements in primary health care facilities in South Africa through an adaptation of the WHO workload indicator of staff needs tool. We use a model to estimate staffing requirements at primary health care facilities. The model integrates several empirically-based assumptions including time and type of health worker required for each type of consultation, amount of management time required, amount of clinical support required and minimum staff requirements per type of facility. We also calculate the number of HIV-related consultations per district. The model incorporates type of facility, monthly travelling time for mobile clinics, opening hours per week, yearly activity and current staffing and calculates the expected staffing per category of staff per facility and compares it to the actual staffing. Across all the districts there is either an absence of doctors visiting clinics or too few doctors to cover the opening times of community health centres. Overall the number of doctors is only 7% of the required amount. There is 94% of the required number of professional nurses but with wide variations between districts, with a few districts having excesses while most have shortages. The number of enrolled nurses is 60% of what it should be. There are 17% too few enrolled nurse assistants. Across all districts there is wide variation in staffing levels between facilities leading to inefficient use of professional staff. The application of an adapted WHO workload tool identified important human resource planning issues.
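The core of a workload-based staffing calculation in the spirit of the adapted WHO workload indicator of staff needs (WISN) tool can be sketched as follows. The consultation times, available working minutes, and minimum-staff rule below are hypothetical placeholders, not the study's calibrated assumptions.

```python
import math

def required_staff(consultations_per_year, minutes_per_consultation,
                   available_minutes_per_worker_year, minimum_staff=1):
    """Staff needed to absorb the annual workload, rounded up,
    subject to a minimum staffing level per facility."""
    workload_minutes = consultations_per_year * minutes_per_consultation
    need = math.ceil(workload_minutes / available_minutes_per_worker_year)
    return max(minimum_staff, need)

# Hypothetical example: 30,000 consultations/year at 15 min each,
# 100,000 available working minutes per nurse per year -> 5 nurses
nurses_needed = required_staff(30_000, 15, 100_000)
```

Comparing this expected staffing against actual staffing per facility, as the model does, is then a simple subtraction per staff category.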
Immediate loading with fixed full-arch prostheses in the maxilla: Review of the literature
Peñarrocha-Oltra, David; Covani, Ugo; Peñarrocha-Diago, Miguel
2014-01-01
Objectives: To critically review the evidence-based literature on immediate loading of implants with fixed full-arch prostheses in the maxilla to determine 1) currently recommended performance criteria and 2) the outcomes that can be expected with this procedure. Study Design: Studies from 2001 to 2011 on immediate loading with fixed full-arch maxillary prostheses were reviewed. Clinical series with at least 5 patients and 12 months of follow-up were included. Case reports, studies with missing data and repeatedly published studies were excluded. In each study the following was assessed: type of study, implant type, number of patients, number of implants, number of implants per patient, use of post-extraction implants, minimum implant length and diameter, type of prosthesis, time until loading, implant survival rate, prosthesis survival rate, marginal bone loss, complications and mean follow-up time. Criteria for patient selection, implant primary stability and bone regeneration were also studied. Results: Thirteen studies were included, reporting a total of 2484 immediately loaded implants in 365 patients. Currently accepted performance criteria regarding patient and implant selection, and surgical and prosthetic procedures were deduced from the reviewed articles. Implant survival rates ranged from 87.5% to 100%, prosthesis survival rates from 93.8% to 100%, and radiographic marginal bone loss from 0.8 mm to 1.6 mm. No intraoperative complications and only minor prosthetic complications were reported. Conclusions: The literature on immediate loading with fixed full-arch prostheses in the maxilla shows that a successful outcome can be expected if adequate criteria are used to evaluate the patient, choose the implant and perform the surgical and prosthetic treatment.
Lack of homogeneity within studies limits the relevance of the conclusions that can be drawn, and more randomized controlled studies are necessary to enable comparison between immediate and conventional loading procedures. Key words: Immediate loading, full-arch, dental implants, loading protocols. PMID:24880445
A Multi-Hop Clustering Mechanism for Scalable IoT Networks.
Sung, Yoonyoung; Lee, Sookyoung; Lee, Meejeong
2018-03-23
It is expected that up to 26 billion Internet of Things (IoT) devices equipped with sensors and wireless communication capabilities will be connected to the Internet by 2020 for various purposes. In a large-scale IoT network, having each node connected to the Internet with an individual connection may face serious scalability issues. The scalability problem may be alleviated by grouping the nodes of the IoT network into clusters and having a representative node in each cluster connect to the Internet on behalf of the other nodes in the cluster, instead of having a per-node Internet connection. In this paper, we propose a multi-hop clustering mechanism for IoT networks to minimize the number of required Internet connections. Specifically, the objective of the proposed mechanism is to select the minimum number of coordinators, which take the role of a representative node for their cluster, i.e., having the Internet connection on behalf of the rest of the nodes in the cluster, and to map a partition of the IoT nodes onto the selected set of coordinators so as to minimize the total distance between the nodes and their respective coordinator under a constraint on the maximum hop count between them. Since this problem can be mapped to the set cover problem, which is known to be NP-hard, we pursue a heuristic approach and analyze the complexity of the proposed solution. Through a set of experiments with varying parameters, the proposed scheme shows a 63-87.3% reduction in Internet connections depending on the number of IoT nodes, while the optimal solution achieves 65.6-89.9% in a small-scale network. Moreover, the performance characteristics of the proposed mechanism coincide with the expected performance characteristics of the optimal solution in a large-scale network.
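Because coordinator selection maps to set cover, the classic greedy heuristic applies: repeatedly choose the candidate that covers the most still-uncovered nodes. The sketch below illustrates that generic heuristic; the node IDs and reachability sets are made up, and this is not necessarily the paper's exact algorithm.

```python
def greedy_cover(universe, candidates):
    """Greedy set cover: candidates maps candidate id -> set of nodes
    it can reach (e.g. within the maximum hop count)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the candidate covering the most uncovered nodes
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some nodes cannot be covered by any candidate")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Illustrative topology: 7 nodes, 4 candidate coordinators
nodes = range(1, 8)
reach = {'a': {1, 2, 3, 4}, 'b': {3, 4, 5}, 'c': {5, 6, 7}, 'd': {1, 7}}
coordinators = greedy_cover(nodes, reach)
```

Greedy set cover carries the standard logarithmic approximation guarantee, which is consistent with the near-optimal connection reductions reported above.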
Alban, L; Barfod, K; Petersen, J V; Dahl, J; Ajufo, J C; Sandø, G; Krog, H H; Aabo, S
2010-11-01
Salmonella in pork can be combated during pre- or post-harvest. For large slaughterhouses, post-harvest measures like decontamination might be cost-effective, while this is less likely for small-to-medium sized slaughterhouses, where pre-harvest measures might be more relevant. We describe an extended surveillance-and-control programme for Salmonella in finisher pigs which, to establish equivalence to the Swedish control programme, is intended for implementation on the Danish island of Bornholm. The effect of the programme on food safety was estimated by analysing Salmonella data from pig carcasses originating from herds that would have qualified for the programme during 2006-2008. Food safety was interpreted as the prevalence of Salmonella on carcasses as well as the estimated number of human cases of salmonellosis related to pork produced within the programme. Data from the Danish Salmonella programme were obtained from Bornholm. We used a simulation model developed to estimate the number of human cases based on the prevalence of Salmonella on carcass swabs. Herds are only accepted into the programme if they have one or fewer seropositive samples within the previous 6 months. In this way, the Salmonella load is kept to a minimum. The programme is not yet in operation, and pigs that qualify for the programme are currently mixed at slaughter with those that do not qualify. Therefore, we had to assess the impact on the carcass prevalence indirectly. The prevalence of Salmonella in carcass swabs among qualifying herds was 0.46% for the 3 years as a whole, with 2006 the year with the highest prevalence. According to the simulation, the expected number of human cases relating to pork produced within the programme was below 10. When the programme is in operation, an extra effect of separating pigs within the programme from those outside is expected to lower the prevalence of Salmonella even further. © 2010 Blackwell Verlag GmbH.
Lollivier, S
1984-06-01
This study uses data from tax declarations of 40,000 French households for 1975 to propose a model that permits quantification of the effects of certain significant factors on the economic activity of married women. The PROBIT model of analysis of variance was used to determine the specific effect of several variables, including the woman's age, number of children under 25 years of age in the household, age of the youngest child, husband's income and socioprofessional status, wife's level and type of education, size of community of residence, and region of residence. The principal factors influencing activity rates were found to be educational level, age, and number of children: activity rates of mothers of 1 child were close to those of childless women, but dropped by about 30% for mothers of 2 and even more for mothers of 3 or more children. The influence of place of residence and of the husband's income was associated with lesser disparities. The reasons for variations in female labor force participation can be viewed as analogous to a balance: underlying factors can increase or decrease the income the woman hopes to earn (offered income) as well as the minimum income for which she will work (required salary). A TOBIT model was constructed in which offered income was a function of age, education, geographic location, and number of children, and required salary was a function of variables related to the husband, including income and socioprofessional status. For most of the effects considered, the observed variation in activity rates resulted from variations in offered income. The husband's income influences only the required salary. The offered income decreases and the required salary increases when the number of children is 2 or more, reducing the rate of activity. More educated women have slightly greater salary expectations but command much higher salaries, resulting in an increased rate of professional activity.
A Multi-Hop Clustering Mechanism for Scalable IoT Networks
2018-01-01
It is expected that up to 26 billion Internet of Things (IoT) devices equipped with sensors and wireless communication capabilities will be connected to the Internet by 2020 for various purposes. In a large-scale IoT network, having each node connected to the Internet with an individual connection may face serious scalability issues. The scalability problem of the IoT network may be alleviated by grouping the nodes of the IoT network into clusters and having a representative node in each cluster connect to the Internet on behalf of the other nodes in the cluster, instead of having a per-node Internet connection and communication. In this paper, we propose a multi-hop clustering mechanism for IoT networks to minimize the number of required Internet connections. Specifically, the objective of the proposed mechanism is to select the minimum number of coordinators, which take the role of a representative node for the cluster, i.e., having the Internet connection on behalf of the rest of the nodes in the cluster, and to map a partition of the IoT nodes onto the selected set of coordinators to minimize the total distance between the nodes and their respective coordinator, under a constraint on the maximum hop count between the IoT nodes and their respective coordinator. Since this problem can be mapped to the set cover problem, which is known to be NP-hard, we pursue a heuristic approach to solve the problem and analyze the complexity of the proposed solution. Through a set of experiments with varying parameters, the proposed scheme shows a 63-87.3% reduction in the number of Internet connections depending on the number of IoT nodes, while that of the optimal solution is 65.6-89.9% in a small-scale network. Moreover, it is shown that the performance characteristics of the proposed mechanism coincide with the expected performance characteristics of the optimal solution in a large-scale network. PMID:29570691
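The coordinator-selection step can be sketched as a greedy set-cover heuristic: each candidate coordinator "covers" the nodes within the hop-count bound, and we repeatedly pick the candidate covering the most uncovered nodes. This is a standard approximation under the stated assumptions, not necessarily the paper's exact algorithm, which also balances node-to-coordinator distances:

```python
from collections import deque

def k_hop_ball(adj, src, k):
    """Nodes reachable from src within k hops (including src), via BFS."""
    seen = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if seen[u] == k:
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return set(seen)

def greedy_coordinators(adj, k):
    """Greedy set cover: repeatedly pick the node whose k-hop ball
    covers the most still-uncovered nodes."""
    uncovered = set(adj)
    balls = {u: k_hop_ball(adj, u, k) for u in adj}
    coordinators = []
    while uncovered:
        best = max(adj, key=lambda u: len(balls[u] & uncovered))
        coordinators.append(best)
        uncovered -= balls[best]
    return coordinators

# Toy 6-node path graph; every node must be within 1 hop of a coordinator
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(greedy_coordinators(adj, 1))
```

The greedy rule gives the classical ln(n) approximation guarantee for set cover, which is why a heuristic of this shape is a natural starting point for the NP-hard formulation.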
Effects of anisotropic conduction and heat pipe interaction on minimum mass space radiators
NASA Technical Reports Server (NTRS)
Baker, Karl W.; Lund, Kurt O.
1991-01-01
Equations are formulated for the two dimensional, anisotropic conduction of heat in space radiator fins. The transverse temperature field was obtained by the integral method, and the axial field by numerical integration. A shape factor, defined for the axial boundary condition, simplifies the analysis and renders the results applicable to general heat pipe/conduction fin interface designs. The thermal results are summarized in terms of the fin efficiency, a radiation/axial conductance number, and a transverse conductance surface Biot number. These relations, together with those for mass distribution between fins and heat pipes, were used in predicting the minimum radiator mass for fixed thermal properties and fin efficiency. This mass is found to decrease monotonically with increasing fin conductivity. Sensitivities of the minimum mass designs to the problem parameters are determined.
Self-Advocacy and Perceptions of College Readiness among Students with ADHD
ERIC Educational Resources Information Center
Stamp, Lucy; Banerjee, Manju; Brown, Franklin C.
2014-01-01
This study examined issues related to college adjustment and self-advocacy from the perspective of students diagnosed with a primarily inattentive presentation of Attention Deficit Hyperactivity Disorder (ADHD) who were unable to meet minimum academic expectations in their first attempt at college. Data were gathered from 12 students with ADHD who,…
40 CFR 63.7521 - What fuel analyses and procedures must I use?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., at a point prior to mixing with other dissimilar fuel types. (iv) For each fuel type, the analytical methods, with the expected minimum detection levels, to be used for the measurement of selected total metals, chlorine, or mercury. (v) If you request to use an alternative analytical method other than those...
Five Steps for Improving Evaluation Reports by Using Different Data Analysis Methods.
ERIC Educational Resources Information Center
Thompson, Bruce
Although methodological integrity is not the sole determinant of the value of a program evaluation, decision-makers do have a right, at a minimum, to be able to expect competent work from evaluators. This paper explores five areas where evaluators might improve methodological practices. First, evaluation reports should reflect the limited…
49 CFR 194.103 - Significant and substantial harm; operator's statement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... a stress level greater than 50 percent of the specified minimum yield strength of the pipe, (4) Is located within a 5 mile (8 kilometer) radius of potentially affected public drinking water intakes and could reasonably be expected to reach public drinking water intakes, or (5) Is located within a 1 mile...
49 CFR 194.103 - Significant and substantial harm; operator's statement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... a stress level greater than 50 percent of the specified minimum yield strength of the pipe, (4) Is located within a 5 mile (8 kilometer) radius of potentially affected public drinking water intakes and could reasonably be expected to reach public drinking water intakes, or (5) Is located within a 1 mile...
Travel and Tourism Module. An Advanced-Level Option For Distribution and Marketing.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. Bureau of Occupational Education Curriculum Development.
Intended as an advanced option for distributive education students in the twelfth grade, this travel and tourism module is designed to cover a minimum of ten weeks or a maximum of twenty weeks. Introductory material includes information on employment demands, administrative considerations, course format, teaching suggestions, expected outcomes,…
Affective Management Strategies for Behavior Disordered Students--Elementary and Secondary Levels.
ERIC Educational Resources Information Center
Burkholder, Lynn D.; And Others
The Positive Education Program (PEP) in Cuyahoga, Ohio, incorporates a token economy and group process approach into the daily school routine for emotionally disturbed and behaviorally handicapped students. At elementary and secondary levels, minimum rules and expectations as well as privileges awarded for behaviors are clearly set forth. The…
2006-05-01
bedridden, who are wheelchair-bound, or who have short life expectancies). Key Research Accomplishments • 27 FE models were created and analyzed...minimally invasive procedure is a viable option for, at a minimum, situations with low cyclic loading, such as for patients who are bedridden, who are wheel
Upward ant distribution shift corresponds with minimum, not maximum, temperature tolerance
Robert J. Warren; Lacy Chick
2013-01-01
Rapid climate change may prompt species distribution shifts upward and poleward, but species movement in itself is not sufficient to establish climate causation. Other dynamics, such as disturbance history, may prompt species distribution shifts resembling those expected from rapid climate change. Links between species distributions, regional climate trends and...
Forest/non-forest stratification in Georgia with Landsat Thematic Mapper data
William H. Cooke
2000-01-01
Geographically accurate Forest Inventory and Analysis (FIA) data may be useful for training, classification, and accuracy assessment of Landsat Thematic Mapper (TM) data. Minimum expectation for maps derived from Landsat data is accurate discrimination of several land cover classes. Landsat TM costs have decreased dramatically, but acquiring cloud-free scenes at...
How Do We Make Inclusive Education Happen When Exclusion Is a Political Predisposition?
ERIC Educational Resources Information Center
Slee, Roger
2013-01-01
Convening a conference under the banner: Making Inclusion Happen, reminds us that the struggle for disabled people's rights to the minimum expectations of citizenship; access to education, work, housing, health care, civic connection remains urgent. Notwithstanding the hard fought for United Nations, human rights charters and national…
ERIC Educational Resources Information Center
Van Boxtel, Joanne M.
2017-01-01
The Common Core State Standards (CCSS) have been described as the next chapter in American education with the promise to deliver "fewer, clearer, and higher" standards aimed at preparing "all" students for college "and" career (Rothman, 2013). Though CCSS articulates minimum expectations for what college and…
ERIC Educational Resources Information Center
Gribble, Nigel; Dender, Alma; Lawrence, Emma; Manning, Kirrily; Falkmer, Torbjorn
2014-01-01
In the increasingly global world, skills in cultural competence now form part of the minimum standards of practice required for allied health professionals. During an international work-integrated learning (WIL) placement, allied health students' cultural competence is expected to be enhanced. The present study scrutinized reflective journals of…
Application of quadratic optimization to supersonic inlet control.
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Zeller, J. R.
1972-01-01
This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake, the inlet, of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, computation time required is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequency values which were not significantly larger than those of the nonquadratic solution.
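The minimum mean-square shock-position controller is a linear-quadratic (LQ) state-feedback design. A minimal sketch using a scalar discrete-time model with hypothetical dynamics; the paper's actual inlet model is higher order and continuous-time:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati equation to convergence
    and return the gain K minimizing sum of x'Qx + u'Ru."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical scalar shock-position model: x[k+1] = 0.95 x[k] + 0.1 u[k] + noise
A = np.array([[0.95]])
B = np.array([[0.1]])
Q = np.array([[1.0]])  # penalize mean-square shock displacement
R = np.array([[0.1]])  # penalize control effort
K = dlqr_gain(A, B, Q, R)
print(K)  # closed loop x[k+1] = (A - B K) x[k] is stable
```

Trading off Q against R is the discrete analogue of the paper's comparison between shock-motion variance and control-effort requirements; the nonquadratic unstart-frequency index has no such one-shot solution, which is why it costs more computation.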
Electron Pitch-Angle Distribution in Pressure Balance Structures Measured by Ulysses/SWOOPS
NASA Technical Reports Server (NTRS)
Yamauchi, Yohei; Suess, Steven T.; Sakurai, Takashi; Six, N. Frank (Technical Monitor)
2002-01-01
Pressure balance structures (PBSs) are a common feature in the high-latitude solar wind near solar minimum. From previous studies, PBSs are believed to be remnants of coronal plumes. Yamauchi et al. [2002] investigated the magnetic structures of the PBSs, applying a minimum variance analysis to Ulysses/Magnetometer data. They found that PBSs contain structures like current sheets or plasmoids, and suggested that PBSs are associated with network activity such as magnetic reconnection in the photosphere at the base of polar plumes. We have investigated energetic electron data from Ulysses/SWOOPS to see whether bi-directional electron flow exists and we have found evidence supporting the earlier conclusions. We find that 45 out of 53 PBSs show local bi-directional or isotropic electron flux or flux associated with current-sheet structure. Only five events show the pitch-angle distribution expected for Alfvenic fluctuations. We conclude that PBSs do contain magnetic structures such as current sheets or plasmoids that are expected as a result of network activity at the base of polar plumes.
49 CFR 238.230 - Safety appliances-new equipment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... a minimum weld strength, based on yield, of three times the strength of the number of SAE grade 2, 1...; (v) The weld is designed for infinite fatigue life in the application that it will be placed; (vi... upon request. At a minimum, this record shall include the date, time, location, identification of the...
Child Labour Remains "Massive Problem."
ERIC Educational Resources Information Center
World of Work, 2002
2002-01-01
Despite significant progress in efforts to abolish child labor, an alarming number of children are engaged in its worst forms. Although 106 million are engaged in acceptable labor (light work for those above the minimum age for employment), 246 million are involved in child labor that should be abolished (under minimum age, hazardous work). (JOW)
Minimum viable populations: Is there a 'magic number' for conservation practitioners?
Curtis H. Flather; Gregory D. Hayward; Steven R. Beissinger; Philip A. Stephens
2011-01-01
Establishing species conservation priorities and recovery goals is often enhanced by extinction risk estimates. The need to set goals, even in data-deficient situations, has prompted researchers to ask whether general guidelines could replace individual estimates of extinction risk. To inform conservation policy, recent studies have revived the concept of the minimum...
29 CFR 779.507 - Fourteen-year minimum.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the age minimum to 14 years where he finds that such employment is confined to periods which will not... in a limited number of occupations where the work is performed outside school hours and is confined... through 570.38 of this chapter), employment of minors in this age group is not permitted in the following...
29 CFR 779.507 - Fourteen-year minimum.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the age minimum to 14 years where he finds that such employment is confined to periods which will not... in a limited number of occupations where the work is performed outside school hours and is confined... through 570.38 of this chapter), employment of minors in this age group is not permitted in the following...
29 CFR 779.507 - Fourteen-year minimum.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the age minimum to 14 years where he finds that such employment is confined to periods which will not... in a limited number of occupations where the work is performed outside school hours and is confined... through 570.38 of this chapter), employment of minors in this age group is not permitted in the following...
29 CFR 779.507 - Fourteen-year minimum.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the age minimum to 14 years where he finds that such employment is confined to periods which will not... in a limited number of occupations where the work is performed outside school hours and is confined... through 570.38 of this chapter), employment of minors in this age group is not permitted in the following...
G-NEST: A gene neighborhood scoring tool to identify co-conserved, co-expressed genes
USDA-ARS?s Scientific Manuscript database
In previous studies, gene neighborhoods--spatial clusters of co-expressed genes in the genome--have been defined using arbitrary rules such as requiring adjacency, a minimum number of genes, a fixed window size, or a minimum expression level. In the current study, we developed a Gene Neighborhood Sc...
40 CFR 180.960 - Polymers; exemptions from the requirement of a tolerance.
Code of Federal Regulations, 2013 CFR
2013-07-01
...-hydroxypoly (oxypropylene) and/or poly (oxyethylene) polymers where the alkyl chain contains a minimum of six... (oxypropylene) poly(oxyethylene) block copolymer; the minimum poly(oxypropylene) content is 27 moles and the... number average molecular weight (in amu), 900,000 62386-95-2 Monophosphate ester of the block copolymer α...
Minimum Income Allocation System (RMI): a longitudinal view.
Cordazzo, Philippe
2005-10-01
In 2000, for the first time, the number of minimum income allocation system (RMI) recipients decreased. In 2001, this drop in the number of recipients began to stabilize, and the number started to increase again in 2002. The author observed a stabilization of the number of new recipients, whereas the number of exits decreased. This situation differs across local departments. The probability of RMI entry is higher for populations living in the south and southeast of France. RMI recipients of the more recent cohorts leave more quickly and in proportionally greater numbers than do recipients of the older cohorts. This phenomenon is alarming because most exits occur during the first 2 years spent in the RMI system and because the probability of leaving then decreases sharply. The author thus observed that a significant portion of recipients (28%) is still present after 5 years or more in the RMI system.
Multi-valued logic gates based on ballistic transport in quantum point contacts.
Seo, M; Hong, C; Lee, S-Y; Choi, H K; Kim, N; Chung, Y; Umansky, V; Mahalu, D
2014-01-22
Multi-valued logic gates, which can handle quaternary numbers as inputs, are developed by exploiting the ballistic transport properties of quantum point contacts in series. The principle of a logic gate that finds the minimum of two quaternary number inputs is demonstrated. The device is scalable to allow multiple inputs, which makes it possible to find the minimum of multiple inputs in a single gate operation. Also, the principle of a half-adder for quaternary number inputs is demonstrated. First, an adder that adds up two quaternary numbers and outputs the sum of inputs is demonstrated. Second, a device to express the sum of the adder into two quaternary digits [Carry (first digit) and Sum (second digit)] is demonstrated. All the logic gates presented in this paper can in principle be extended to allow decimal number inputs with high quality QPCs.
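The logical functions realized by the quantum point contact devices are plain base-4 arithmetic; a sketch of the arithmetic (not the device physics):

```python
def q_min(a, b):
    """Minimum of two quaternary digits (0-3) - the 'min' gate's function."""
    return min(a, b)

def q_half_adder(a, b):
    """Half-adder for quaternary digits: (Carry, Sum) digits in base 4."""
    total = a + b
    return total // 4, total % 4

print(q_min(3, 1))         # 1
print(q_half_adder(3, 2))  # (1, 1): 3 + 2 = 5 = 11 in base 4
```

Extending to decimal inputs, as the authors suggest, corresponds to replacing the modulus 4 with 10, at the cost of requiring more distinguishable conductance plateaus per contact.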
A unified viscous theory of lift and drag of 2-D thin airfoils and 3-D thin wings
NASA Technical Reports Server (NTRS)
Yates, John E.
1991-01-01
A unified viscous theory of 2-D thin airfoils and 3-D thin wings is developed with numerical examples. The viscous theory of the load distribution is unique and tends to the classical inviscid result with Kutta condition in the high Reynolds number limit. A new theory of 2-D section induced drag is introduced with specific applications to three cases of interest: (1) constant angle of attack; (2) parabolic camber; and (3) a flapped airfoil. The first case is also extended to a profiled leading edge foil. The well-known drag due to absence of leading edge suction is derived from the viscous theory. It is independent of Reynolds number for zero thickness and varies inversely with the square root of the Reynolds number based on the leading edge radius for profiled sections. The role of turbulence in the section induced drag problem is discussed. A theory of minimum section induced drag is derived and applied. For low Reynolds number the minimum drag load tends to the constant angle of attack solution and for high Reynolds number to an approximation of the parabolic camber solution. The parabolic camber section induced drag is about 4 percent greater than the ideal minimum at high Reynolds number. Two new concepts, the viscous induced drag angle and the viscous induced separation potential, are introduced. The separation potential is calculated for three 2-D cases and for a 3-D rectangular wing. The potential is calculated with input from a standard doublet lattice wing code without recourse to any boundary layer calculations. Separation is indicated in regions where it is observed experimentally. The classical induced drag is recovered in the 3-D high Reynolds number limit with an additional contribution that is Reynolds-number dependent. The 3-D viscous theory of minimum induced drag yields an equation for the optimal spanwise and chordwise load distribution. The design of optimal wing tip planforms and camber distributions is possible with the viscous 3-D wing theory.
The influence of climate variables on dengue in Singapore.
Pinto, Edna; Coelho, Micheline; Oliver, Leuda; Massad, Eduardo
2011-12-01
In this work we correlated dengue cases with climatic variables for the city of Singapore. This was done through a Poisson Regression Model (PRM) that considers dengue cases as the dependent variable and the climatic variables (rainfall, maximum and minimum temperature, and relative humidity) as independent variables. We also used Principal Components Analysis (PCA) to choose the variables that influence the increase in the number of dengue cases in Singapore, where PC₁ (principal component 1) is represented by temperature and rainfall and PC₂ (principal component 2) by relative humidity. We calculated the probability of occurrence of new cases of dengue and the relative risk of occurrence of dengue cases influenced by each climatic variable. The months from July to September showed the highest probabilities of occurrence of new cases of the disease throughout the year, based on an analysis of time series of maximum and minimum temperature. An interesting result was that for every 2-10°C of variation of the maximum temperature, there was an average increase of 22.2-184.6% in the number of dengue cases. For the minimum temperature, we observed that for the same variation there was an average increase of 26.1-230.3% in the number of dengue cases from April to August. The precipitation and the relative humidity were discarded from the Poisson Regression Model after correlation analysis because they did not correlate well with the dengue cases. Additionally, the relative risk of occurrence of cases of the disease under the influence of temperature variation ranged from 1.2-2.8 for maximum temperature and from 1.3-3.3 for minimum temperature. Therefore, temperature (maximum and minimum) was the best predictor of the increased number of dengue cases in Singapore.
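In a Poisson regression with log link, the percentage increase in expected case counts for a temperature change ΔT follows directly from the fitted coefficient, since a unit change multiplies the expected count by exp(β). A sketch with a hypothetical coefficient, not the paper's fitted value:

```python
import math

def percent_increase(beta, delta_t):
    """log E[cases] = a + beta*T, so a shift of delta_t multiplies
    the expected count by exp(beta * delta_t)."""
    return (math.exp(beta * delta_t) - 1.0) * 100.0

def relative_risk(beta, delta_t):
    """Ratio of expected counts at T + delta_t versus T."""
    return math.exp(beta * delta_t)

beta = 0.10  # hypothetical: each +1 deg C scales expected cases by e^0.10
print(round(percent_increase(beta, 2.0), 1))  # ~22.1% for a 2 deg C rise
print(round(relative_risk(beta, 10.0), 2))    # ~2.72 for a 10 deg C rise
```

This multiplicative structure is why the reported increases grow so steeply across the 2-10°C range: they compound exponentially rather than linearly in ΔT.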
van Lettow, Monique; Tweya, Hannock; Rosenberg, Nora E; Trapence, Clement; Kayoyo, Virginia; Kasende, Florence; Kaunda, Blessings; Hosseinipour, Mina C; Eliya, Michael; Cataldo, Fabian; Gugsa, Salem; Phiri, Sam
2017-07-11
Malawi introduced an ambitious public health program known as "Option B+" which provides all HIV-infected pregnant and breastfeeding women with lifelong combination antiretroviral therapy, regardless of WHO clinical stage or CD4 cell count. The PMTCT Uptake and REtention (PURE) study aimed at evaluating the effect of peer-support on care-seeking and retention in care. PURE Malawi was a three-arm cluster randomized controlled trial that compared facility-based and community-based models of peer support to standard of care under Option B+ strategy. Each arm was expected to enroll a minimum of 360 women with a total minimum sample size of 1080 participants. 21 sites (clusters) were selected for inclusion in the study. This paper describes the site selection, recruitment, enrollment process and baseline characteristics of study sites and women enrolled in the trial. Study implementation was managed by 3 partner organizations; each responsible for 7 study sites. The trial was conducted in the South East, South West, and Central West zones of Malawi, the zones where the implementing partners operate. Study sites included 2 district hospitals, 2 mission hospitals, 2 rural hospitals, 13 health centers and 1 private clinic. Enrollment occurred from November 2013 to November 2014, over a median period of 31 weeks (range 17-51) by site. A total of 1269 HIV-infected pregnant (1094) and breastfeeding (175) women, who were eligible to initiate ART under Option B+, were enrolled. Each site reached or surpassed the minimum sample size. Comparing the number of women enrolled versus antenatal cohort reports, sites recruited a median of 90% (IQR 75-100) of eligible reported women. In the majority of sites the ratio of pregnant and lactating women enrolled in the study was similar to the ratio of reported pregnant and lactating women starting ART in the same sites. The median age of all women was 27 (IQR 22-31) years. 
All women have ≥20 months of possible follow-up time; 96% ≥ 2 years (24-32 months). The PURE Malawi study showed that 3 implementing partner organizations could successfully recruit a complex cohort of pregnant and lactating women across 3 geographical zones in Malawi within a reasonable timeline. This study is registered at clinicaltrials.gov - ID Number NCT02005835 . Registered 4 December, 2013.
Optimum flight paths of turbojet aircraft
NASA Technical Reports Server (NTRS)
Miele, Angelo
1955-01-01
The climb of turbojet aircraft is analyzed and discussed, including the accelerations. Three particular flight performances are examined: minimum time of climb, climb with minimum fuel consumption, and steepest climb. The theoretical results obtained from a previous study are put in a form suitable for application under the following simplifying assumptions: the Mach number is considered an independent variable instead of the velocity; the variations of the airplane mass due to fuel consumption are disregarded; the airplane polar is assumed to be parabolic; the path curvatures and the squares of the path angles are disregarded in the projection of the equation of motion on the normal to the path; lastly, an ideal turbojet with performance independent of the velocity is assumed. The optimum Mach number for each flight condition is obtained from the solution of a sixth order equation in which the coefficients are functions of two fundamental parameters: the ratio of minimum drag in level flight to the thrust and the Mach number which represents the flight at constant altitude and maximum lift-drag ratio.
An estimate of the number of tropical tree species
Slik, J. W. Ferry; Arroyo-Rodríguez, Víctor; Aiba, Shin-Ichiro; Alvarez-Loayza, Patricia; Alves, Luciana F.; Ashton, Peter; Balvanera, Patricia; Bastian, Meredith L.; Bellingham, Peter J.; van den Berg, Eduardo; Bernacci, Luis; da Conceição Bispo, Polyanna; Blanc, Lilian; Böhning-Gaese, Katrin; Boeckx, Pascal; Bongers, Frans; Boyle, Brad; Bradford, Matt; Brearley, Francis Q.; Breuer-Ndoundou Hockemba, Mireille; Bunyavejchewin, Sarayudh; Calderado Leal Matos, Darley; Castillo-Santiago, Miguel; Catharino, Eduardo L. M.; Chai, Shauna-Lee; Chen, Yukai; Colwell, Robert K.; Chazdon, Robin L.; Clark, Connie; Clark, David B.; Clark, Deborah A.; Culmsee, Heike; Damas, Kipiro; Dattaraja, Handanakere S.; Dauby, Gilles; Davidar, Priya; DeWalt, Saara J.; Doucet, Jean-Louis; Duque, Alvaro; Durigan, Giselda; Eichhorn, Karl A. O.; Eisenlohr, Pedro V.; Eler, Eduardo; Ewango, Corneille; Farwig, Nina; Feeley, Kenneth J.; Ferreira, Leandro; Field, Richard; de Oliveira Filho, Ary T.; Fletcher, Christine; Forshed, Olle; Franco, Geraldo; Fredriksson, Gabriella; Gillespie, Thomas; Gillet, Jean-François; Amarnath, Giriraj; Griffith, Daniel M.; Grogan, James; Gunatilleke, Nimal; Harris, David; Harrison, Rhett; Hector, Andy; Homeier, Jürgen; Imai, Nobuo; Itoh, Akira; Jansen, Patrick A.; Joly, Carlos A.; de Jong, Bernardus H. J.; Kartawinata, Kuswata; Kearsley, Elizabeth; Kelly, Daniel L.; Kenfack, David; Kessler, Michael; Kitayama, Kanehiro; Kooyman, Robert; Larney, Eileen; Laumonier, Yves; Laurance, Susan; Laurance, William F.; Lawes, Michael J.; do Amaral, Ieda Leao; Letcher, Susan G.; Lindsell, Jeremy; Lu, Xinghui; Mansor, Asyraf; Marjokorpi, Antti; Martin, Emanuel H.; Meilby, Henrik; Melo, Felipe P. 
L.; Metcalfe, Daniel J.; Medjibe, Vincent P.; Metzger, Jean Paul; Millet, Jerome; Mohandass, D.; Montero, Juan C.; de Morisson Valeriano, Márcio; Mugerwa, Badru; Nagamasu, Hidetoshi; Nilus, Reuben; Ochoa-Gaona, Susana; Onrizal; Page, Navendu; Parolin, Pia; Parren, Marc; Parthasarathy, Narayanaswamy; Paudel, Ekananda; Permana, Andrea; Piedade, Maria T. F.; Pitman, Nigel C. A.; Poorter, Lourens; Poulsen, Axel D.; Poulsen, John; Powers, Jennifer; Prasad, Rama C.; Puyravaud, Jean-Philippe; Razafimahaimodison, Jean-Claude; Reitsma, Jan; dos Santos, João Roberto; Roberto Spironello, Wilson; Romero-Saltos, Hugo; Rovero, Francesco; Rozak, Andes Hamuraby; Ruokolainen, Kalle; Rutishauser, Ervan; Saiter, Felipe; Saner, Philippe; Santos, Braulio A.; Santos, Fernanda; Sarker, Swapan K.; Satdichanh, Manichanh; Schmitt, Christine B.; Schöngart, Jochen; Schulze, Mark; Suganuma, Marcio S.; Sheil, Douglas; da Silva Pinheiro, Eduardo; Sist, Plinio; Stevart, Tariq; Sukumar, Raman; Sun, I.-Fang; Sunderland, Terry; Suresh, H. S.; Suzuki, Eizi; Tabarelli, Marcelo; Tang, Jangwei; Targhetta, Natália; Theilade, Ida; Thomas, Duncan W.; Tchouto, Peguy; Hurtado, Johanna; Valencia, Renato; van Valkenburg, Johan L. C. H.; Van Do, Tran; Vasquez, Rodolfo; Verbeeck, Hans; Adekunle, Victor; Vieira, Simone A.; Webb, Campbell O.; Whitfeld, Timothy; Wich, Serge A.; Williams, John; Wittmann, Florian; Wöll, Hannsjoerg; Yang, Xiaobo; Adou Yao, C. Yves; Yap, Sandra L.; Yoneda, Tsuyoshi; Zahawi, Rakan A.; Zakaria, Rahmad; Zang, Runguo; de Assis, Rafael L.; Garcia Luize, Bruno; Venticinque, Eduardo M.
2015-01-01
The high species richness of tropical forests has long been recognized, yet there remains substantial uncertainty regarding the actual number of tropical tree species. Using a pantropical tree inventory database from closed canopy forests, consisting of 657,630 trees belonging to 11,371 species, we use a fitted value of Fisher’s alpha and an approximate pantropical stem total to estimate the minimum number of tropical forest tree species to fall between ∼40,000 and ∼53,000, i.e., at the high end of previous estimates. Contrary to common assumption, the Indo-Pacific region was found to be as species-rich as the Neotropics, with both regions having a minimum of ∼19,000–25,000 tree species. Continental Africa is relatively depauperate with a minimum of ∼4,500–6,000 tree species. Very few species are shared among the African, American, and the Indo-Pacific regions. We provide a methodological framework for estimating species richness in trees that may help refine species richness estimates of tree-dependent taxa. PMID:26034279
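Under Fisher's log-series, the expected number of species among N stems at diversity α is S = α ln(1 + N/α), which is the extrapolation underlying estimates like the one above. A sketch with illustrative numbers, not the paper's fitted values:

```python
import math

def logseries_species(alpha, n_stems):
    """Expected species count under Fisher's log-series:
    S = alpha * ln(1 + N / alpha)."""
    return alpha * math.log(1.0 + n_stems / alpha)

# Illustrative only: alpha fitted on inventory plots, then extrapolated
# to a (hypothetical) regional stem total
alpha = 1000.0        # hypothetical Fisher's alpha
n_regional = 3.0e11   # hypothetical stem total
print(round(logseries_species(alpha, n_regional)))
```

Because S grows only logarithmically in N, the estimate is far more sensitive to the fitted α than to uncertainty in the pantropical stem total.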
Dependence of the quantum speed limit on system size and control complexity
NASA Astrophysics Data System (ADS)
Lee, Juneseo; Arenz, Christian; Rabitz, Herschel; Russell, Benjamin
2018-06-01
We extend the work in 2017 New J. Phys. 19 103015 by deriving a lower bound for the minimum time necessary to implement a unitary transformation on a generic, closed quantum system with an arbitrary number of classical control fields. This bound is explicitly analyzed for a specific N-level system similar to those used to represent simple models of an atom, or the first excitation sector of a Heisenberg spin chain, both of which are of interest in quantum control for quantum computation. Specifically, it is shown that the resultant bound depends on the dimension of the system and on the number of controls used to implement a specific target unitary operation. The value of the bound, determined numerically, and an estimate of the true minimum gate time are systematically compared for a range of system dimensions and numbers of controls; special attention is drawn to the relationship between these two variables. It is seen that the bound captures the scaling of the minimum time well for the systems studied and is quantitatively correct to within an order of magnitude.
Impact of socioeconomic status on municipal solid waste generation rate.
Khan, D; Kumar, A; Samadder, S R
2016-03-01
The solid waste generation rate was expected to vary across socioeconomic groups due to many environmental and social factors. This paper reports the assessment of solid waste generation based on different socioeconomic parameters such as education, occupation, family income, and number of family members. A questionnaire survey was conducted in the study area to identify the different socioeconomic groups that may affect the solid waste generation rate and composition. The average waste generated in the municipality is 0.41 kg/capita/day, with the maximum waste generated by the lower middle socioeconomic group (LMSEG) at an average of 0.46 kg/capita/day. Waste characterization indicated that there was not much difference in the composition of wastes among socioeconomic groups except for ash residue and plastic. Ash residue was found to increase moving down the socioeconomic groups, with a maximum (31%) in the lower socioeconomic group (LSEG). The study area is a coal-based city; hence the use of coal and wood as cooking fuel in the lower socioeconomic groups explains the high ash content. Plastic waste is maximum (15%) in the higher socioeconomic group (HSEG) and minimum (1%) in the LSEG. Food waste is a major component of generated waste in almost every socioeconomic group, with a maximum (38%) in the HSEG and a minimum (28%) in the LSEG. This study provides new insights on the role of various socioeconomic parameters in the generation of household wastes. Copyright © 2016 Elsevier Ltd. All rights reserved.
On Channel-Discontinuity-Constraint Routing in Wireless Networks
Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.
2011-01-01
Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g., in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, it is of interest to compute a path between two nodes together with a channel assignment for its links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, a condition termed the "Channel-Discontinuity-Constraint" (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties: a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-path costs increase by at most a factor t = (1 − 2 sin(θ/2))^(−2). We propose a novel distributed algorithm to compute the spanner using an expected O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n²) fixed-size messages, by developing an extension of Edmonds' algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n²) time, improving on the previous best algorithm, which requires O(n³) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646
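The constraint itself is simple to state: along a path, no two consecutive links may use the same channel. A minimal sketch of that check (illustrative only; this is not the paper's spanner or matching algorithm):

```python
def satisfies_cdc(path_channels):
    """True iff no two consecutive links on the path share a channel,
    i.e. the Channel-Discontinuity-Constraint holds.

    path_channels: channel assigned to each link along the path, in order.
    """
    return all(a != b for a, b in zip(path_channels, path_channels[1:]))
```

For example, `satisfies_cdc([1, 2, 1, 3])` holds because channel 1 is reused only non-consecutively, while `satisfies_cdc([1, 1, 2])` fails at the first pair of links.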
Tang, J. Y.; Riley, W. J.
2016-02-05
We present a generic flux limiter to account for mass limitations from an arbitrary number of substrates in a biogeochemical reaction network. The flux limiter is based on the observation that substrate (e.g., nitrogen, phosphorus) limitation in biogeochemical models can be represented so as to ensure mass-conservative and non-negative numerical solutions to the governing ordinary differential equations. Application of the flux limiter involves two steps: (1) formulation of the biogeochemical processes with a matrix of stoichiometric coefficients and (2) application of Liebig's law of the minimum using the dynamic stoichiometric relationship of the reactants. This approach contrasts with the ad hoc down-regulation approaches implemented in many existing models of carbon and nutrient interactions (such as CLM4.5 and the ACME (Accelerated Climate Modeling for Energy) Land Model (ALM)), which are error prone when adding new processes, even for experienced modelers. Through an example implementation with a CENTURY-like decomposition model that includes carbon, nitrogen, and phosphorus, we show that our approach (1) produced almost identical results to those from the ad hoc down-regulation approaches under non-limiting nutrient conditions, (2) properly resolved the negative solutions under substrate-limited conditions where the simple clipping approach failed, and (3) successfully avoided the conceptual ambiguities implied by those ad hoc down-regulation approaches. We expect our approach will make future biogeochemical models easier to improve and more robust.
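The two steps above can be sketched in a few lines. The following is a minimal illustration of the idea (a stoichiometric matrix plus a law-of-the-minimum scaling that keeps pools non-negative), not the authors' implementation; the function name and the one-pass scaling scheme are assumptions for illustration:

```python
import numpy as np

def liebig_limited_rates(S, r, x, dt):
    """Scale reaction rates so no substrate pool goes negative over dt.

    S : (n_sub, n_rxn) stoichiometric matrix, negative entries = consumption
    r : unlimited reaction rates
    x : current substrate masses (all >= 0)
    """
    # total mass each substrate would lose at the unlimited rates
    cons = -np.minimum(S, 0.0) @ r * dt
    # per-substrate scale factor in (0, 1]; 1 where demand fits the pool
    sub_scale = np.where(
        cons > x,
        np.divide(x, cons, out=np.ones_like(x), where=cons > 0),
        1.0,
    )
    # Liebig's law of the minimum: each reaction is limited by the most
    # limiting substrate it consumes
    rxn_scale = np.ones_like(r)
    for j in range(S.shape[1]):
        consumed = S[:, j] < 0
        if consumed.any():
            rxn_scale[j] = sub_scale[consumed].min()
    return r * rxn_scale
```

Because each reaction's scale is bounded by every substrate's scale, total consumption of each pool stays at or below its current mass, avoiding both negative solutions and hard clipping.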
On the normalization of the minimum free energy of RNAs by sequence length.
Trotta, Edoardo
2014-01-01
The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
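The hyperbolic bias described above is easy to see in a toy calculation. Assuming (purely for illustration; the coefficients below are hypothetical, not the paper's fit) that MFE grows affinely with length, MFE(N) ≈ a + b·N with intercept a > 0 and slope b < 0, then the naive index MFE/N = b + a/N varies hyperbolically with N, whereas subtracting the intercept before dividing removes the length dependence:

```python
# Hypothetical affine model of MFE vs. length: MFE(N) = a + b*N
a, b = 10.0, -0.3   # illustrative intercept and slope (kcal/mol)

def naive_density(N):
    """MFE divided by length: equals b + a/N, a hyperbola in N."""
    return (a + b * N) / N

def corrected_density(N):
    """Intercept-corrected index: constant (= b) for every length."""
    return (a + b * N - a) / N
```

With these numbers, `naive_density` drifts from −0.1 at N = 50 toward −0.3 as N grows, while `corrected_density` stays at −0.3, which is why a naive per-nucleotide normalization misleads when comparing sequences of different sizes.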
Howard, Larry L
2014-01-01
Gains in life expectancy around the world have increasingly placed pressure on governments to ensure that the elderly receive assistance with activities of daily living. This research examines the impact of government oversight of Medicaid payment policies on access to nursing home care services in the United States. Variation in price levels induced by a federal policy shift in 1997 is used to identify the effect of Medicaid reimbursements on the number of nursing homes and beds available. Court rulings prior to the policy change are used to categorically define a range of oversight treatments at the state level. Difference-in-differences estimates indicate a significant decline in access to nursing home care services for individuals living in states in which courts consistently ruled that Medicaid reimbursements did not meet the minimum standard implied by federal law. The findings suggest that nursing home care services were made more accessible through a combination of legislative and judicial oversight of Medicaid payment policies. © The Author(s) 2014.
Galiana, Gigi; Constable, R. Todd
2014-01-01
Purpose: Previous nonlinear gradient research has focused on trajectories that reconstruct images with a minimum number of echoes. Here we describe sequences in which the nonlinear gradients vary in time to acquire the image in a single readout. The readout is designed to be very smooth so that it can be compressed to minimal time without violating peripheral nerve stimulation limits, yielding an image from a single 4 ms echo. Theory and Methods: This sequence was inspired by considering the code of each voxel, i.e., the phase accumulation that a voxel follows through the readout, an approach connected to traditional encoding theory. We present simulations for the initial sequence, a low slew rate analog, and higher resolution reconstructions. Results: Extremely fast acquisitions are achievable, though, as one would expect, SNR is reduced relative to slower Cartesian sampling schemes because of the high gradient strengths. Conclusions: The prospect that nonlinear gradients can acquire images in a single <10 ms echo makes this a novel and interesting approach to image encoding. PMID:24465837
NASA Astrophysics Data System (ADS)
Odenwald, Sten F.; Green, James L.
2007-06-01
We calculate the economic impact on the existing geosynchronous Earth-orbiting satellite population of an 1859-caliber superstorm event were it to occur between 2008 and 2018 during the next solar activity cycle. From a detailed model for transponder capacity and leasing, we have investigated the total revenue loss over the entire solar cycle as a function of superstorm onset year and intensity. Our Monte Carlo simulations of 1,000 possible superstorms, of varying intensity and onset year, suggest that the minimum revenue loss could be of the order of $30 billion. The losses would be larger than this if more than 20 satellites are disabled, if future launch rates do not keep up with the expected rate of retirements, or if the number of spare transponders falls below ~30%. Consequently, revenue losses can be reduced significantly below $30 billion if the current satellite population undergoes net growth beyond 300 units during Solar Cycle 24 and a larger margin of unused transponders is maintained.
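The Monte Carlo structure described (draw a random onset year and intensity per synthetic storm, accumulate a loss figure) can be sketched as follows. All distributions and the loss formula here are hypothetical placeholders, not the authors' transponder-leasing model; only the $30 billion floor and the 2008-2018 window come from the abstract:

```python
import random

def simulate_losses(n_storms=1000, seed=0):
    """Toy Monte Carlo over synthetic superstorms.

    Each storm gets a random onset year in the cycle and a random
    intensity relative to the 1859 event; an earlier onset leaves more
    of the cycle's lease revenue at risk (illustrative scaling only).
    Returns (minimum loss, mean loss) in $B.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_storms):
        onset = rng.randint(2008, 2018)      # onset year within the window
        intensity = rng.uniform(0.5, 2.0)    # relative to the 1859 event
        base_loss = 30.0                     # $B floor cited in the abstract
        losses.append(base_loss * intensity * (2019 - onset) / 11)
    return min(losses), sum(losses) / len(losses)
```

Sweeping onset year and intensity this way is what lets the study report a distribution of outcomes rather than a single point estimate.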
Software simulations of the detection of rapidly moving asteroids by a charge-coupled device
NASA Astrophysics Data System (ADS)
McMillan, R. S.; Stoll, C. P.
1982-10-01
A rendezvous of an unmanned probe with an earth-approaching asteroid has been given a high priority in the planning of interplanetary missions for the 1990s. Even without a space mission, much could be learned about the history of asteroids and comet nuclei if more information were available concerning asteroids with orbits that cross or approach the orbit of earth. It is estimated that the total number of earth-crossers accessible to ground-based survey telescopes should be approximately 1000. However, owing to the small size and rapid angular motion expected of many of these objects, an average of only one object is discovered per year. Attention is given to the development of the software necessary to distinguish such rapidly moving asteroids from stars and noise in continuously scanned CCD exposures of the night sky. Model and input parameters are considered along with detector sensitivity, aspects of minimum detectable displacement, and the point-spread function of the CCD.
Effect of temperature on the population dynamics of Aedes aegypti
NASA Astrophysics Data System (ADS)
Yusoff, Nuraini; Tokachil, Mohd Najir
2015-10-01
Aedes aegypti is one of the main vectors in the transmission of dengue fever. Its abundance may cause the spread of the disease to be more intense. In studies of its biological life cycle, temperature was found to increase the development rate of each stage of this species and thus accelerate the development from egg to adult. In this paper, a Lefkovitch matrix model is used to study the stage-structured population dynamics of Aedes aegypti, with temperature taken into account in constructing the transition matrix. As a case study, temperatures recorded at the Subang Meteorological Station for the years 2006 to 2010 are used. Population dynamics of Aedes aegypti at the maximum, average and minimum temperature for each year are simulated and compared. It is expected that the higher the temperature, the faster the mosquitoes will breed. The results are compared with the number of dengue fever incidences to examine their relationship.
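A Lefkovitch model projects a stage-abundance vector forward by repeated multiplication with a stage-transition matrix. A minimal sketch for an egg/larva/pupa/adult structure follows; all survival, transition and fecundity entries below are hypothetical round numbers, whereas in the study they would be derived from the temperature-dependent development rates:

```python
import numpy as np

# Hypothetical Lefkovitch matrix for stages (egg, larva, pupa, adult).
# Sub-diagonal entries: advance to the next stage; diagonal: remain in
# stage; top-right: per-adult egg production.
L = np.array([
    [0.0, 0.0, 0.0, 50.0],   # adults lay eggs (fecundity)
    [0.3, 0.4, 0.0,  0.0],   # eggs hatch / larvae remain larvae
    [0.0, 0.3, 0.2,  0.0],   # larvae pupate / pupae remain pupae
    [0.0, 0.0, 0.6,  0.7],   # pupae emerge / adults survive
])

n = np.array([100.0, 0.0, 0.0, 0.0])   # start from 100 eggs
for _ in range(10):                     # project 10 time steps
    n = L @ n

# dominant eigenvalue approximates the long-run per-step growth rate
growth_rate = max(abs(np.linalg.eigvals(L)))
```

Raising temperature would, in the study's framework, shorten stage durations and thus change the matrix entries; comparing the dominant eigenvalue across the yearly maximum, average and minimum temperature matrices is one way to compare the simulated dynamics.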
Zang, Qing; Hsieh, C L; Zhao, Junyu; Chen, Hui; Li, Fengjuan
2013-09-01
The detector circuit is the core component of the filter polychromator used for scattered-light analysis in Thomson scattering diagnostics, and is responsible for the precision and stability of the system. A high signal-to-noise ratio and stability are primary requirements for the diagnostic. Recently, an upgraded detector circuit for weak-light detection in the Experimental Advanced Superconducting Tokamak (EAST) edge Thomson scattering system has been designed, which can be used for the measurement of large electron temperature (T(e)) gradients and low electron densities (n(e)). In this new circuit, a thermoelectric-cooled avalanche photodiode with its auxiliary circuitry is employed to increase stability and enhance the signal-to-noise ratio (SNR); in particular, the circuit is no longer influenced by ambient temperature. These features are expected to improve the accuracy of the EAST Thomson diagnostic dramatically. The mechanical construction of the circuit was also redesigned for heat-sinking and installation. All parameters are optimized, and the SNR is dramatically improved. The minimum number of detectable photons is only 10.
Registered nurses' perceptions of nurse staffing ratios and new hospital payment regulations.
Buerhaus, Peter I; Donelan, Karen; DesRoches, Catherine; Hess, Robert
2009-01-01
Two regulatory initiatives weigh heavily on the nursing workforce: establishing minimum patient-to-nurse staffing ratios in hospitals and payment policy that eliminates payment to hospitals for negative consequences of care. Although the majority of RNs favor ratios, the results also indicate that a good number of RNs either do not support ratios or are unsure, suggesting that while strong support for ratios exists, the support is not universal. With regard to the Centers for Medicare and Medicaid Services hospital payment regulations, while many RNs expect this policy change to increase the emphasis on prevention and on additional education and training, RNs also believe they will be blamed if adverse patient conditions occur. A clear majority think that their work will increase, and only a small percentage of RNs think the regulations will lead to added respect, more staffing, higher pay, or higher status. Beyond affecting the clinical environment, both regulations will affect RNs' economic value in the eyes of the hospitals that employ them.
Mortality amongst Paris fire-fighters.
Deschamps, S; Momas, I; Festy, B
1995-12-01
This paper reports the first mortality cohort study undertaken in France to examine the association between fire-fighting and cause of death. The cohort consisted of 830 male members of the Brigade des sapeurs-pompiers de Paris (BSPP). These professionals had served for a minimum of 5 years as of 1 January 1977. They were monitored for a 14-year period ending 1 January 1991. Compared with the average French male, the Paris fire-fighters were found to have a far lower overall mortality (SMR = 0.52 [0.35-0.75]). None of the cause-specific SMRs was significantly different from unity. However, a greater number of deaths than expected was observed for genito-urinary cancer (SMR = 3.29), digestive cancer (SMR = 1.14), respiratory cancer (SMR = 1.12) and cerebrovascular disease (SMR = 1.16). The low overall SMR observed is consistent with the healthy worker effect. The cause-specific SMRs will be confirmed or invalidated by further analysis as the follow-up of this cohort continues.
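The SMR figures above are ratios of observed deaths to the deaths expected if the cohort experienced the reference population's rates. A minimal sketch (the counts below are illustrative, not the study's actual tallies):

```python
def smr(observed, expected):
    """Standardized mortality ratio: observed deaths divided by the
    deaths expected under reference-population rates.
    SMR < 1 means fewer deaths than expected; SMR > 1 means more."""
    return observed / expected

# e.g. 26 observed deaths against 50 expected reproduces the cohort's
# overall SMR of 0.52 (hypothetical counts, for illustration only)
overall = smr(26, 50)
```

An overall SMR of 0.52 means roughly half the deaths expected of comparable French males, the pattern attributed here to the healthy worker effect.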
NASA Astrophysics Data System (ADS)
Popova, E.; Zharkova, V. V.; Shepherd, S. J.; Zharkov, S.
2016-12-01
Using the principal components of solar magnetic field variations derived from the synoptic maps for solar cycles 21-24 with Principal Components Analysis (PCA) (Zharkova et al., 2015), we confirm our previous prediction of an upcoming Maunder minimum in cycles 25-27, or in 2020-2055. We also use a summary curve of the two eigenvectors of solar magnetic field oscillations (or two dynamo waves) to extrapolate solar activity backwards over three millennia and to compare it with relevant historic and Holocene data. Extrapolation of the summary curve confirms eight grand cycles of 350-400 years superimposed on 22-year cycles caused by the beating effect of the two dynamo waves generated in the two (deep and shallow) layers of the solar interior. The grand cycles in different periods comprise a different number of individual 22-year cycles; the longer the grand cycles, the larger the number of 22-year cycles and the smaller their amplitudes. We also report the super-grand cycle of about 2000 years often found in solar activity with spectral analysis. Furthermore, the summary curve reproduces a remarkable resemblance to the sunspot and terrestrial activity reported in the past: the recent Maunder Minimum (1645-1715), Dalton minimum (1790-1815), Wolf minimum (1200), Homeric minimum (800-900 BC), the Medieval Warm Period (900-1200), the Roman Warm Period (400-10 BC) and so on. Temporal variations of these dynamo waves are modelled with the two-layer mean dynamo model with meridional circulation, revealing a remarkable resemblance of the butterfly diagram to the one derived for the last Maunder minimum in the 17th century and predicting the one for the upcoming Maunder minimum in 2020-2055.
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or records to be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or records to be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Code of Federal Regulations, 2012 CFR
2012-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or records to be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Extremal entanglement witnesses
NASA Astrophysics Data System (ADS)
Hansen, Leif Ove; Hauge, Andreas; Myrheim, Jan; Sollid, Per Øyvind
2015-02-01
We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus TrΩρ≥0 when Ω is a witness and ρ is a pure product state, ρ=ψψ† with ψ=ϕ⊗χ. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value f(ϕ,χ)=TrΩρ as a function of vectors ϕ and χ is a positive semidefinite biquadratic form. Every zero of f(ϕ,χ) imposes strong real-linear constraints on f and Ω. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of f(ϕ,χ) is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the witnesses on a real Hilbert space.
Shuttle payload minimum cost vibroacoustic tests
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
This paper is directed toward the development of the methodology needed to evaluate cost effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.
Low thrust optimal orbital transfers
NASA Technical Reports Server (NTRS)
Cobb, Shannon S.
1994-01-01
For many optimal transfer problems it is reasonable to expect that the minimum-time solution is also the minimum-fuel solution. However, if one allows the propulsion system to be turned off and back on, it is clear that these two solutions may differ. In general, high-thrust transfers resemble the well-known impulsive transfers, where the burn arcs are of very short duration. Low- and medium-thrust transfers differ in that their thrust acceleration levels yield longer burn arcs and thus require more revolutions. In this research, we considered two approaches to solving this problem: modifying a powered-flight guidance algorithm previously developed for higher-thrust transfers, and investigating an 'averaging technique'.
50 CFR 680.21 - Crab harvesting cooperatives.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Minimum number of members. Each crab harvesting cooperative must include at least four unique QS holding... permanent business address, telephone number, facsimile number, and e-mail address (if available) of the... instructed on the application. Forms are available on the NMFS Alaska Region website at http...
50 CFR 680.21 - Crab harvesting cooperatives.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Minimum number of members. Each crab harvesting cooperative must include at least four unique QS holding... permanent business address, telephone number, facsimile number, and e-mail address (if available) of the... instructed on the application. Forms are available on the NMFS Alaska Region website at http...
Azimuthal magnetorotational instability with super-rotation
NASA Astrophysics Data System (ADS)
Rüdiger, G.; Schultz, M.; Gellert, M.; Stefani, F.
2018-02-01
It is demonstrated that the azimuthal magnetorotational instability (AMRI) also works with radially increasing rotation rates, contrary to the standard magnetorotational instability for axial fields, which requires negative shear. The stability against non-axisymmetric perturbations of a conducting Taylor-Couette flow with positive shear under the influence of a toroidal magnetic field is considered for a background field between the cylinders that is current-free. For small magnetic Prandtl number Pm the curves of neutral stability converge in the (Hartmann number, Reynolds number) plane, approximating the stability curve obtained in the inductionless limit Pm → 0. The numerical solutions indicate the existence of a lower limit of the shear rate. For large Pm the curves scale with the magnetic Reynolds number of the outer cylinder, but the flow is always stable for magnetic Prandtl number unity, as is typical for double-diffusive instabilities. We are particularly interested in the minimum Hartmann number for neutral stability. For models with a resting or almost resting inner cylinder and with perfectly conducting cylinder material, the minimum Hartmann number occurs for a radius ratio of r_in/r_out = 0.9. The corresponding critical Reynolds numbers are smaller than 10^4.
Aerodynamics of a Transitioning Turbine Stator Over a Range of Reynolds Numbers
NASA Technical Reports Server (NTRS)
Boyle, R. J.; Lucci, B. L.; Verhoff, V. G.; Camperchioli, W. P.; La, H.
1998-01-01
Midspan aerodynamic measurements for a three-vane, four-passage linear turbine vane cascade are given. The vane axial chord was 4.45 cm. Surface pressures and loss coefficients were measured at exit Mach numbers of 0.3, 0.7, and 0.9. Reynolds number was varied by a factor of six at the two highest Mach numbers, and by a factor of ten at the lowest Mach number. Measurements were made with and without a turbulence grid. Inlet turbulence intensities were less than 1% and greater than 10%. Length scales were also measured. Pressurized air fed the test section and exited to a low-pressure exhaust system. Maximum inlet pressure was two atmospheres. The minimum inlet pressure for an exit Mach number of 0.9 was one-third of an atmosphere, and at a Mach number of 0.3, the minimum pressure was half this value. The purpose of the test was to provide data for verification of turbine vane aerodynamic analyses, especially at low Reynolds numbers. Predictions obtained using a Navier-Stokes analysis with an algebraic turbulence model are also given.
Atomistic modeling of dropwise condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikarwar, B. S., E-mail: bssikarwar@amity.edu; Singh, P. L.; Muralidhar, K.
The basic aim of atomistic modeling of the condensation of water is to determine the size of the stable cluster and connect phenomena occurring at the atomic scale to the macroscale. In this paper, a population balance model is described in terms of rate equations to obtain the number density distribution of the resulting clusters. The residence time is taken to be large enough that sufficient time is available for all the adatoms existing in the vapor phase to lose their latent heat and condense. The simulation assumes clusters of a given size to be formed from clusters of smaller sizes, but not by the disintegration of larger clusters. The largest stable cluster size in the number density distribution is taken to be representative of the minimum drop radius formed in a dropwise condensation process. A numerical confirmation of this result against predictions based on a thermodynamic model has been obtained. Results show that the number density distribution is sensitive to the surface diffusion coefficient and the rate of vapor flux impinging on the substrate. The minimum drop radius increases with the diffusion coefficient and the impinging vapor flux; however, the dependence is weak. The minimum drop radius predicted from thermodynamic considerations matches the prediction of the cluster model, though the former does not take into account the effect of surface properties on the nucleation phenomena. For a chemically passive surface, the diffusion coefficient and the residence time depend on the surface texture via the coefficient of friction. Thus, physical texturing provides a means of changing, within limits, the minimum drop radius. The study reveals that surface texturing at the scale of the minimum drop radius does not provide controllability of macro-scale dropwise condensation at large timescales when a dynamic steady state is reached.
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2011-01-01
On the basis of 12-month moving averages (12-mma) of monthly mean sunspot number (R), sunspot cycle 24 had its minimum amplitude (Rm = 1.7) in December 2008. At 12 mo past minimum, R measured 8.3, and at 18 mo past minimum, it measured 16.4. Thus far, the maximum month-to-month rate of rise in 12-mma values of monthly mean sunspot number (AR(t)max) has been 1.7, having occurred at elapsed times past minimum amplitude (t) of 14 and 15 mo. Compared to other sunspot cycles of the modern era, cycle 24's Rm and AR(t)max (as observed so far) are the smallest on record, suggesting that it likely will be a slow-rising, long-period sunspot cycle of below-average maximum amplitude (RM). Supporting this view is the now-observed relative strength of cycle 24's geomagnetic minimum amplitude as measured using the 12-mma value of the aa-geomagnetic index (aam = 8.4), which also is the smallest on record, having occurred at t equals 8 and 9 mo. From the method of Ohl (the inferred preferential association between RM and aam), one predicts RM = 55 +/- 17 (the ±1 se prediction interval) for cycle 24. Furthermore, from the Waldmeier effect (the inferred preferential association between the ascent duration (ASC) and RM), one predicts an ASC longer than 48 mo for cycle 24; hence, maximum amplitude occurrence should be after December 2012. Application of the Hathaway-Wilson-Reichmann shape-fitting function, using RM = 70 and ASC = 56 mo, is found to adequately fit the early sunspot number growth of cycle 24.
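The method of Ohl amounts to regressing each cycle's maximum amplitude RM on the preceding geomagnetic minimum aam. A minimal sketch of that prediction step follows; the slope and intercept are hypothetical placeholders chosen only so that aam = 8.4 reproduces the abstract's RM ≈ 55, not the published regression coefficients:

```python
def predict_rm(aam, slope=6.55, intercept=0.0):
    """Ohl-style prediction of a cycle's maximum sunspot amplitude RM
    from the preceding minimum of the aa-geomagnetic index (aam).
    slope/intercept would in practice be fit to past cycles'
    (aam, RM) pairs; the defaults here are illustrative only."""
    return intercept + slope * aam

rm_cycle24 = predict_rm(8.4)   # cycle 24's record-low aam
```

The record-low aam thus translates directly into a below-average RM forecast, which is the logic behind the 55 +/- 17 prediction quoted above.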
Nonlocality and the critical Reynolds numbers of the minimum state magnetohydrodynamic turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Ye; Oughton, Sean
2011-07-15
Magnetohydrodynamic (MHD) systems can be strongly nonlinear (turbulent) when their kinetic and magnetic Reynolds numbers are high, as is the case in many astrophysical and space plasma flows. Unfortunately, these high Reynolds numbers are typically much greater than those currently attainable in numerical simulations of MHD turbulence. A natural question to ask is how researchers can be sure that their simulations have reproduced all of the most influential physics of the flows and magnetic fields. In this paper, a metric is defined to indicate whether the necessary physics of interest has been captured. It is found that current computing resources will typically not be sufficient to achieve this minimum-state metric.
On the combinatorics of sparsification.
Huang, Fenix Wd; Reidys, Christian M
2012-10-22
We study the sparsification of dynamic-programming-based folding algorithms for RNA structures. Sparsification is a method that significantly improves the computation of minimum free energy (mfe) RNA structures. We provide a quantitative analysis of the sparsification of a particular decomposition rule, Λ∗. This rule splits an interval of RNA secondary and pseudoknot structures of fixed topological genus. Key to quantifying sparsification is the size of the so-called candidate sets. Here we assume mfe-structures to be specifically distributed (see Assumption 1) within arbitrary and irreducible RNA secondary and pseudoknot structures of fixed topological genus. We then present a combinatorial framework which allows, by means of probabilities of irreducible sub-structures, the computation of the expectation of the Λ∗-candidate set w.r.t. a uniformly random input sequence. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) in the case of RNA secondary structures as well as RNA pseudoknot structures. Furthermore, for RNA secondary structures we also analyze a simplified loop-based energy model. Our combinatorial analysis is then compared to the expected number of Λ∗-candidates obtained from folding mfe-structures. In the case of mfe-folding of RNA secondary structures with a simplified loop-based energy model, our results imply that sparsification provides a significant, constant improvement of 91% (theory), to be compared to a 96% (experimental, simplified arc-based model) reduction. However, we do not observe a linear-factor improvement. Finally, in the case of the "full" loop-energy model we can report a reduction of 98% (experiment). Sparsification was initially attributed a linear-factor improvement. This conclusion was based on the so-called polymer-zeta property, which stems from interpreting polymer chains as self-avoiding walks. Subsequent findings, however, reveal that the O(n) improvement is not correct.
The combinatorial analysis presented here shows that, assuming a specific distribution (see Assumption 1), of mfe-structures within irreducible and arbitrary structures, the expected number of Λ∗-candidates is Θ(n2). However, the constant reduction is quite significant, being in the range of 96%. We furthermore show an analogous result for the sparsification of the Λ∗-decomposition rule for RNA pseudoknotted structures of genus one. Finally we observe that the effect of sparsification is sensitive to the employed energy model.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-29
... the transit, and a requirement to maintain a minimum underkeel clearance to prevent groundings. Based...' at Mean Lower Low Water (MLLW), and a minimum channel width of 600'. While most shoaling was removed... number of small entities. The term ``small entities'' comprises small businesses, not-for-profit...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... as the anticipated impact is so minimal. For the same reason, the FAA certifies that this amendment will not have a significant economic impact on a substantial number of small entities under the... Rgnl, Takeoff Minimums and Obstacle DP, Orig Beaver Falls, PA, Beaver County, LOC RWY 10, Amdt 4...
Code of Federal Regulations, 2011 CFR
2011-07-01
... than the minimum wage? (a) A separate application must be made for each plant or establishment...; (5) If requesting authorization for the employment of learners at subminimum wages for a learning... period prior to making application; (7) Total number of nonsupervisory workers in the particular plant or...
40 CFR 63.1385 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable emission limits: (1) Method 1 (40 CFR part 60, appendix A) for the selection of the sampling port location and number of sampling ports; (2) Method 2 (40 CFR part 60, appendix A) for volumetric flow rate.... Each run shall consist of a minimum run time of 2 hours and a minimum sample volume of 60 dry standard...
ERIC Educational Resources Information Center
Kidney, John
This self-instructional module, the eleventh in a series of 16 on techniques for coordinating work experience programs, deals with federal and state employment laws. Addressed in the module are federal and state employment laws pertaining to minimum wage for student learners, minimum wage for full-time students, unemployment insurance, child labor…
29 CFR 778.220 - “Show-up” or “reporting” pay.
Code of Federal Regulations, 2010 CFR
2010-07-01
... scheduled work on any day will receive a minimum of 4 hours' work or pay. The employee thus receives not... failure to provide expected work during regular hours. One of the primary purposes of such an arrangement... that an employee entitled to overtime pay after 40 hours a week whose workweek begins on Monday and who...
Building Florida's Future: Quality and Access or Business as Usual?
ERIC Educational Resources Information Center
Board of Governors, State University System of Florida, 2006
2006-01-01
How many of Florida's four million children should expect to attend the State University System someday? And what should they find when they arrive? The bare minimum? Or world-class universities with facilities on a par with the best the nation has to offer? This report states that a "business as usual" approach has corroded the link…
Bricklayer: Apprenticeship Course Outline. Apprenticeship and Industry Training. 0110
ERIC Educational Resources Information Center
Alberta Advanced Education and Technology, 2010
2010-01-01
The graduate of the Bricklayer apprenticeship training is a journeyperson who will be able to: (1) responsibly do all work tasks expected of a journeyperson; (2) supervise, train and coach apprentices; (3) produce a better quality product than the minimum acceptable by industry standard; (4) use and maintain tools and equipment to the standards of…
Bricklayer: Apprenticeship Course Outline. Apprenticeship and Industry Training. 0110.1
ERIC Educational Resources Information Center
Alberta Advanced Education and Technology, 2010
2010-01-01
The graduate of the Bricklayer apprenticeship training is a journeyperson who will be able to: (1) responsibly do all work tasks expected of a journeyperson; (2) supervise, train and coach apprentices; (3) produce a better quality product than the minimum acceptable by industry standard; (4) use and maintain tools and equipment to the standards of…
Standards for the Language Competence of French Immersion Teachers: Is There a Danger of Erosion
ERIC Educational Resources Information Center
Veilleux, Ingrid; Bournot-Trites, Monique
2005-01-01
We examined standards used by Canadian universities and British Columbia school districts to verify the language competence of French Immersion (FI) teachers in a time of shortage, confirmed by 56 per cent of school districts surveyed. Parents and Directors of Human Resources agreed on their minimum expectations about French teachers'…
Strategies to Reduce Underage Alcohol Use: Typology and Brief Overview. Revised
ERIC Educational Resources Information Center
Stewart, Kathryn
2009-01-01
This guide updates the original one published in September 1999. It discusses the effectiveness of minimum legal drinking age laws and provides a conceptual framework for understanding the array of strategies available to prevent underage alcohol use. It also provides a simple assessment of the level of effectiveness that might be expected from…
ERIC Educational Resources Information Center
Smith, Clifton L.
This guide, developed by a project to revise the minimum core competencies for the advanced marketing course in secondary marketing education in Missouri, contains four sections. The first section explains competency-based marketing education, including its mission, nature, curriculum, and the fundamentals of competency-based instruction. The…
ERIC Educational Resources Information Center
Smith, Clifton L.
This guide, developed by a project to revise the minimum core competencies for the Fundamentals of Marketing course in secondary marketing education in Missouri, contains four sections. The first section explains competency-based marketing education, including its mission, nature, curriculum, and the fundamentals of competency-based instruction.…
Variation of Solar, Interplanetary and Geomagnetic Parameters during Solar Cycles 21-24
NASA Astrophysics Data System (ADS)
Oh, Suyeon; Kim, Bogyeong
2013-06-01
The length of solar cycle 23 was prolonged to about 13 years. Many studies have speculated that the solar cycle 23/24 minimum indicates the onset of a grand minimum of solar activity, such as the Maunder Minimum. We check the trends of solar (sunspot number, solar magnetic fields, total solar irradiance, solar radio flux, and frequency of solar X-ray flares), interplanetary (interplanetary magnetic field, solar wind, and galactic cosmic ray intensity), and geomagnetic (Ap index) parameters (SIG parameters) during solar cycles 21-24. Most SIG parameters have remarkably low values during the solar cycle 23/24 minimum. Since the 1970s, when monitoring of the space environment by ground observatories and satellites began, such prevalently low values of the SIG parameters have never been seen. We suggest that these unprecedented conditions of the SIG parameters originate from the weakened solar magnetic fields. Meanwhile, the deep 23/24 solar cycle minimum might be the portent of a grand minimum in which the global mean temperature of the lower atmosphere is as low as in the Dalton or Maunder minimum periods.
Selected low-flow frequency statistics for continuous-record streamgage locations in Maryland, 2010
Doheny, Edward J.; Banks, William S.L.
2010-01-01
According to a 2008 report by the Governor's Advisory Committee on the Management and Protection of the State's Water Resources, Maryland's population grew by 35 percent between 1970 and 2000, and is expected to increase by an additional 27 percent between 2000 and 2030. Because domestic water demand generally increases in proportion to population growth, Maryland will be facing increased pressure on water resources over the next 20 years. Water-resources decisions should be based on sound, comprehensive, long-term data and low-flow frequency statistics from all available streamgage locations with unregulated streamflow and adequate record lengths. To provide the Maryland Department of the Environment with tools for making future water-resources decisions, the U.S. Geological Survey initiated a study in October 2009 to compute low-flow frequency statistics for selected streamgage locations in Maryland with 10 or more years of continuous streamflow records. This report presents low-flow frequency statistics for 114 continuous-record streamgage locations in Maryland. The computed statistics presented for each streamgage location include the mean 7-, 14-, and 30-consecutive-day minimum daily low-flow discharges for recurrence intervals of 2, 10, and 20 years, and are based on approved streamflow records that include a minimum of 10 complete climatic years of record as of June 2010. Descriptive information for each of these streamgage locations, including the station number, station name, latitude, longitude, county, physiographic province, and drainage area, also is presented. The statistics are planned for incorporation into StreamStats, which is a U.S. 
Geological Survey Web application for obtaining stream information, and is being used by water-resource managers and decision makers in Maryland to address water-supply planning and management, water-use appropriation and permitting, wastewater and industrial discharge permitting, and setting minimum required streamflows to protect freshwater biota and ecosystems.
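The building block of these statistics, the n-consecutive-day minimum mean daily discharge, reduces to a rolling mean followed by a minimum over the record; the 2-, 10-, and 20-year recurrence-interval values then come from frequency analysis of the resulting annual series. A generic sketch of the first step (the function name and flow values are invented for illustration):

```python
def n_day_min_flow(daily_flows, n=7):
    """Minimum of the n-consecutive-day mean of daily discharges."""
    if len(daily_flows) < n:
        raise ValueError("record shorter than the averaging window")
    # rolling n-day means over every consecutive window, then the minimum
    means = [sum(daily_flows[i:i + n]) / n
             for i in range(len(daily_flows) - n + 1)]
    return min(means)

# invented daily discharges (e.g. in cubic feet per second)
flows = [12.0, 11.0, 9.0, 8.0, 7.5, 7.0, 6.8, 7.2, 8.5, 10.0]
low7 = n_day_min_flow(flows, n=7)
```

Applying this per climatic year yields the annual series to which a low-flow frequency distribution is fitted.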
Optimal adaptation to extreme rainfalls in current and future climate
NASA Astrophysics Data System (ADS)
Rosbjerg, Dan
2017-01-01
More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases, the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time-span. Immediate as well as delayed adaptation is considered.
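The optimization described above can be sketched numerically. In this minimal illustration (the log-linear coefficients are assumptions for demonstration, not values from the study), the expected annual damage falls with the design return period T while the annual adaptation cost rises, and the optimal adaptation level is the T minimizing their sum:

```python
import math

def total_annual_cost(T, d0=5.0, d1=1.2, c0=0.5, c1=0.8):
    """Total annual cost at design level T (illustrative coefficients)."""
    damage_if_exceeded = d0 + d1 * math.log(T)   # log-linear damage relation
    expected_damage = damage_if_exceeded / T     # exceeded with probability 1/T per year
    adaptation_cost = c0 + c1 * math.log(T)      # log-linear capital + operational cost
    return expected_damage + adaptation_cost

# search the optimal return period over a grid of candidate design levels
candidates = range(2, 501)
T_opt = min(candidates, key=total_annual_cost)
```

Under climate change, the same search would be repeated with design levels scaled by a time-dependent climate factor and costs discounted to net present value.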
Optimal adaptation to extreme rainfalls under climate change
NASA Astrophysics Data System (ADS)
Rosbjerg, Dan
2017-04-01
More intense and frequent rainfalls have increased the number of urban flooding events in recent years, prompting adaptation efforts. Economic optimization is considered an efficient tool to decide on the design level for adaptation. The costs associated with a flooding to the T-year level and the annual capital and operational costs of adapting to this level are described with log-linear relations. The total flooding costs are developed as the expected annual damage of flooding above the T-year level plus the annual capital and operational costs for ensuring no flooding below the T-year level. The value of the return period T that corresponds to the minimum of the sum of these costs will then be the optimal adaptation level. The change in climate, however, is expected to continue in the next century, which calls for expansion of the above model. The change can be expressed in terms of a climate factor (the ratio between the future and the current design level) which is assumed to increase in time. This implies increasing costs of flooding in the future for many places in the world. The optimal adaptation level is found for immediate as well as for delayed adaptation. In these cases the optimum is determined by considering the net present value of the incurred costs during a sufficiently long time span. Immediate as well as delayed adaptation is considered.
Layered motion segmentation and depth ordering by tracking edges.
Smith, Paul; Drummond, Tom; Cipolla, Roberto
2004-04-01
This paper presents a new Bayesian framework for motion segmentation--dividing a frame from an image sequence into layers representing different moving objects--by tracking edges between frames. Edges are found using the Canny edge detector, and the Expectation-Maximization algorithm is then used to fit motion models to these edges and also to calculate the probabilities of the edges obeying each motion model. The edges are also used to segment the image into regions of similar color. The most likely labeling for these regions is then calculated by using the edge probabilities, in association with a Markov Random Field-style prior. The relative depth ordering of the different motion layers is also determined as an integral part of the process. An efficient implementation of this framework is presented for segmenting two motions (foreground and background) using two frames. It is then demonstrated how, by tracking the edges into further frames, the probabilities may be accumulated to provide an even more accurate and robust estimate, and segment an entire sequence. Further extensions are then presented to address the segmentation of more than two motions. Here, a hierarchical method of initializing the Expectation-Maximization algorithm is described, and it is demonstrated that the Minimum Description Length principle may be used to automatically select the best number of motion layers. The results from over 30 sequences (demonstrating both two and three motions) are presented and discussed.
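The E/M alternation at the heart of the fitting step can be illustrated with a toy reduction (this is not the paper's actual motion model; a hypothetical 1-D residual per edge stands in for the full parametric motion fit): the E-step softly assigns each edge to one of two models, and the M-step re-estimates each model from its weighted data.

```python
import math

def em_two_models(residuals, iters=50, sigma2=1.0):
    """Toy EM: soft-assign 1-D edge residuals to two models
    (Gaussians with free means and a fixed, shared variance)."""
    mu = [min(residuals), max(residuals)]   # crude initialisation
    weights = []
    for _ in range(iters):
        # E-step: responsibility of each model for each residual
        weights = []
        for r in residuals:
            p = [math.exp(-(r - m) ** 2 / (2 * sigma2)) for m in mu]
            total = sum(p)
            weights.append([pk / total for pk in p])
        # M-step: re-estimate each model mean from its weighted residuals
        for k in range(2):
            den = sum(w[k] for w in weights)
            mu[k] = sum(w[k] * r for w, r in zip(weights, residuals)) / den
    return mu, weights

# two clusters of edge residuals, one per hypothetical motion model
mu, w = em_two_models([0.0, 0.1, -0.1, 5.0, 5.1, 4.9])
```

The returned weights play the role of the per-edge motion-model probabilities that feed the region labeling.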
Code of Federal Regulations, 2011 CFR
2011-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or records to be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Code of Federal Regulations, 2010 CFR
2010-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or records to be... universe is: the number of admissions in the last year for each of the two quality control samples under...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yarmand, H; Winey, B; Craft, D
2014-06-15
Purpose: To efficiently find quality-guaranteed treatment plans with the minimum number of beams for stereotactic body radiation therapy using RayStation. Methods: For a pre-specified pool of candidate beams, we use RayStation (treatment planning software for clinical use) to identify the deliverable plan that uses all the beams, with minimum dose to organs at risk (OARs) and with dose to the tumor and other structures in specified ranges. We then use the dose matrix information for the generated apertures from RayStation to solve a linear program to find the ideal plan with the same objective and constraints, allowing use of all beams. Finally, we solve a mixed integer programming formulation of the beam angle optimization (BAO) problem with the objective of minimizing the number of beams while remaining within a predetermined epsilon-optimality of the ideal plan with respect to the dose to OARs. Since treatment plan optimization is a multicriteria optimization problem, the planner can exploit the multicriteria optimization capability of RayStation to navigate the ideal dose distribution Pareto surface and select a plan of desired target coverage versus OAR sparing, and then use the proposed technique to reduce the number of beams while guaranteeing quality. For the numerical experiments, two liver cases and one lung case with 33 non-coplanar beams are considered. Results: The ideal plan uses an impractically large number of beams. The proposed technique reduces the number of beams to the range of practical application (5 to 9 beams) while remaining in the epsilon-optimal range of 1% to 5% optimality gap. Conclusion: The proposed method can be integrated into a general algorithm for fast navigation of the ideal dose distribution Pareto surface, finding the treatment plan with the minimum number of beams, which determines the delivery time, within the epsilon-optimality range of the desired ideal plan.
The project was supported by the Federal Share of program income earned by Massachusetts General Hospital on C06 CA059267, Proton Therapy Research and Treatment Center and partially by RaySearch Laboratories.
46 CFR 193.60-5 - Number required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 7 2014-10-01 2014-10-01 false Number required. 193.60-5 Section 193.60-5 Shipping... EQUIPMENT Fire Axes § 193.60-5 Number required. (a) All vessels shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 193.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 76.60-5 - Number required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 3 2012-10-01 2012-10-01 false Number required. 76.60-5 Section 76.60-5 Shipping COAST... § 76.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 76.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 76.60-5 - Number required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 3 2014-10-01 2014-10-01 false Number required. 76.60-5 Section 76.60-5 Shipping COAST... § 76.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 76.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 193.60-5 - Number required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 7 2013-10-01 2013-10-01 false Number required. 193.60-5 Section 193.60-5 Shipping... EQUIPMENT Fire Axes § 193.60-5 Number required. (a) All vessels shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 193.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 193.60-5 - Number required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false Number required. 193.60-5 Section 193.60-5 Shipping... EQUIPMENT Fire Axes § 193.60-5 Number required. (a) All vessels shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 193.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 76.60-5 - Number required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 3 2013-10-01 2013-10-01 false Number required. 76.60-5 Section 76.60-5 Shipping COAST... § 76.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 76.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 193.60-5 - Number required.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false Number required. 193.60-5 Section 193.60-5 Shipping... EQUIPMENT Fire Axes § 193.60-5 Number required. (a) All vessels shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 193.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 76.60-5 - Number required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 3 2011-10-01 2011-10-01 false Number required. 76.60-5 Section 76.60-5 Shipping COAST... § 76.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 76.60-5(a) Gross tons Over Not over Number of axes...
46 CFR 193.60-5 - Number required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 7 2012-10-01 2012-10-01 false Number required. 193.60-5 Section 193.60-5 Shipping... EQUIPMENT Fire Axes § 193.60-5 Number required. (a) All vessels shall carry at least the minimum number of... necessary for the proper protection of the vessel. Table 193.60-5(a) Gross tons Over Not over Number of axes...
Egge, Elianne; Bittner, Lucie; Andersen, Tom; Audic, Stéphane; de Vargas, Colomban; Edvardsen, Bente
2013-01-01
Next-generation sequencing of ribosomal DNA is increasingly used to assess the diversity and structure of microbial communities. Here we test the ability of 454 pyrosequencing to detect the number of species present, and to assess the relative abundance, in terms of cell numbers and biomass, of protists in the phylum Haptophyta. We used a mock community consisting of equal numbers of cells of 11 haptophyte species and compared targeting DNA and RNA/cDNA with two different V4 SSU rDNA haptophyte-biased primer pairs. Further, we tested four different bioinformatic filtering methods to reduce errors in the resulting sequence dataset. With a sequencing depth of 11000–20000 reads, targeting cDNA with the Haptophyta-specific primers Hap454, we detected all 11 species. A rarefaction analysis of the expected number of species recovered as a function of sampling depth suggested that a minimum of 1400 reads was required here to recover all species in the mock community. Relative read abundance did not correlate with relative cell numbers. Although the species represented by the largest biomass was also proportionally most abundant among the reads, there was generally a weak correlation between proportional read abundance and proportional biomass of the different species, both with DNA and with cDNA as template. The 454 sequencing generated considerable spurious diversity, more with cDNA than with DNA as template. With initial filtering based only on match with barcode and primer, we observed 100-fold more operational taxonomic units (OTUs) at 99% similarity than the number of species present in the mock community. Filtering based on quality scores, or denoising with PyroNoise, resulted in ten times more OTU99% than the number of species. Denoising with AmpliconNoise reduced the number of OTU99% to match the number of species present in the mock community. Based on our analyses, we propose a strategy to more accurately depict haptophyte diversity using 454 pyrosequencing. PMID:24069303
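The rarefaction step can be sketched with the standard hypergeometric formula for the expected number of species recovered at a given sampling depth (a generic illustration; the read counts below are made up, not the study's data):

```python
from math import comb

def expected_species(counts, n):
    """Expected number of species detected in a random subsample of n reads,
    given full per-species read counts. math.comb returns 0 when k > n,
    which correctly handles species whose count exceeds N - n."""
    N = sum(counts)
    # a species with c reads is missed with probability C(N-c, n)/C(N, n)
    return sum(1 - comb(N - c, n) / comb(N, n) for c in counts)

# two species with 5 reads each: a single read recovers one species on average
e1 = expected_species([5, 5], 1)       # 1.0
e_all = expected_species([5, 5], 10)   # 2.0: the full sample sees every species
```

Plotting this expectation against n gives the rarefaction curve from which a minimum required sampling depth can be read off.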
NASA Astrophysics Data System (ADS)
Harrison, R. A.; Davies, J. A.; Barnes, D.; Byrne, J. P.; Perry, C. H.; Bothmer, V.; Eastwood, J. P.; Gallagher, P. T.; Kilpua, E. K. J.; Möstl, C.; Rodriguez, L.; Rouillard, A. P.; Odstrčil, D.
2018-05-01
We present a statistical analysis of coronal mass ejections (CMEs) imaged by the Heliospheric Imager (HI) instruments on board NASA's twin-spacecraft STEREO mission between April 2007 and August 2017 for STEREO-A and between April 2007 and September 2014 for STEREO-B. The analysis exploits a catalogue that was generated within the FP7 HELCATS project. Here, we focus on the observational characteristics of CMEs imaged in the heliosphere by the inner (HI-1) cameras, while following papers will present analyses of CME propagation through the entire HI fields of view. More specifically, in this paper we present distributions of the basic observational parameters - namely occurrence frequency, central position angle (PA) and PA span - derived from nearly 2000 detections of CMEs in the heliosphere by HI-1 on STEREO-A or STEREO-B from the minimum between Solar Cycles 23 and 24 to the maximum of Cycle 24; STEREO-A analysis includes a further 158 CME detections from the descending phase of Cycle 24, by which time communication with STEREO-B had been lost. We compare heliospheric CME characteristics with properties of CMEs observed at coronal altitudes, and with sunspot number. As expected, heliospheric CME rates correlate with sunspot number, and are not inconsistent with coronal rates once instrumental factors/differences in cataloguing philosophy are considered. As well as being more abundant, heliospheric CMEs, like their coronal counterparts, tend to be wider during solar maximum. Our results confirm previous coronagraph analyses suggesting that CME launch sites do not simply migrate to higher latitudes with increasing solar activity. At solar minimum, CMEs tend to be launched from equatorial latitudes, while at maximum, CMEs appear to be launched over a much wider latitude range; this has implications for understanding the CME/solar source association. 
Our analysis provides some supporting evidence for the systematic dragging of CMEs to lower latitude as they propagate outwards.
Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T
2015-01-01
Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses the indicator 2/n to identify a linear fit, where n is the number of terms in a series. The ratios Rmax = (amax - amin)/(Sn - amin*n) and Rmin = (amax - amin)/(amax*n - Sn) are always equal to 2/n, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the y = c form, then Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as 2/n * (1 + k1) and 2/n * (1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and a transformation technique that transforms the data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of the outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of the data that do not agree with the linear fit is very small, of the order of ±10^-4%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
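The two indicator ratios are easy to compute. The following minimal sketch (the function name and the value of k1 are illustrative, not from the paper) shows that both ratios equal 2/n for a series that is linear in its index, and that corrupting one element pushes Rmax past the outlier threshold 2/n * (1 + k1):

```python
def fit_ratios(series):
    """Return (Rmax, Rmin); both equal 2/n when the series is linear in its index."""
    n = len(series)
    a_max, a_min, s_n = max(series), min(series), sum(series)
    r_max = (a_max - a_min) / (s_n - a_min * n)
    r_min = (a_max - a_min) / (a_max * n - s_n)
    return r_max, r_min

xs = [3.0 + 2.0 * i for i in range(10)]   # arithmetic (linear-in-index) series, n = 10
r_max, r_min = fit_ratios(xs)             # both equal 2/10 = 0.2

xs[4] = 100.0                             # corrupt one element
r_max2, _ = fit_ratios(xs)
k1 = 0.5                                  # illustrative threshold parameter
is_outlier = r_max2 > (2 / len(xs)) * (1 + k1)
```

The transformation step of the paper would first map data expected to follow a general linear trend into this constant-slope form before applying the test.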
AEGIS: a wildfire prevention and management information system
NASA Astrophysics Data System (ADS)
Kalabokidis, Kostas; Ager, Alan; Finney, Mark; Athanasis, Nikos; Palaiologou, Palaiologos; Vasilakos, Christos
2016-03-01
We describe a Web-GIS wildfire prevention and management platform (AEGIS) developed as an integrated and easy-to-use decision support tool to manage wildland fire hazards in Greece (http://aegis.aegean.gr). The AEGIS platform assists with early fire warning, fire planning, fire control and coordination of firefighting forces by providing online access to information that is essential for wildfire management. The system uses a number of spatial and non-spatial data sources to support key system functionalities. Land use/land cover maps were produced by combining field inventory data with high-resolution multispectral satellite images (RapidEye). These data support wildfire simulation tools that allow the users to examine potential fire behavior and hazard with the Minimum Travel Time fire spread algorithm. End-users provide a minimum number of inputs, such as fire duration, ignition point and weather information, to conduct a fire simulation. AEGIS offers three types of simulations, i.e., single-fire propagation, point-scale calculation of potential fire behavior, and burn probability analysis, similar to the FlamMap fire behavior modeling software. Artificial neural networks (ANNs) were utilized for wildfire ignition risk assessment based on various parameters, training methods, activation functions, pre-processing methods and network structures. The combination of ANNs and expected burned-area maps is used to generate an integrated output map of fire hazard prediction. The system also incorporates weather information obtained from remote automatic weather stations and weather forecast maps. The system and associated computation algorithms leverage parallel processing techniques (i.e., High Performance Computing and Cloud Computing) that ensure the computational power required for real-time application. All AEGIS functionalities are accessible to authorized end-users through a web-based graphical user interface. 
An innovative smartphone application, AEGIS App, also provides mobile access to the web-based version of the system.
Nelson, Kenneth
2008-01-01
This article draws attention to the Europeanization of social policy and the development of minimum income protection in a large number of welfare democracies. The empirical analyses are based on unique institutional and comparative data on benefit levels from the Social Assistance and Minimum Income Protection Interim Dataset. There is some evidence of convergence in benefit levels among the European countries in the new millennium, but there is no clear proof of universal ambitions to fight poverty or of the existence of a single European social model. There are still welfare front-runners and those who lag behind in this regard, not only among industrial welfare democracies in general but also in Europe.
46 CFR 34.60-5 - Number required-T/ALL.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 1 2011-10-01 2011-10-01 false Number required-T/ALL. 34.60-5 Section 34.60-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS FIREFIGHTING EQUIPMENT Fire Axes § 34.60-5 Number required—T/ALL. (a) All tankships shall carry at least the minimum number of fire axes as set...
46 CFR 34.60-5 - Number required-T/ALL.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 1 2010-10-01 2010-10-01 false Number required-T/ALL. 34.60-5 Section 34.60-5 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY TANK VESSELS FIREFIGHTING EQUIPMENT Fire Axes § 34.60-5 Number required—T/ALL. (a) All tankships shall carry at least the minimum number of fire axes as set...
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2010 CFR
2010-04-01
... written. (2) Whenever a betting station is opened for wagering or turned over to a new writer/cashier, the... number), station number, the writer/cashier identifier, and the date and time. (3) A betting ticket shall...; (ii) Gaming operation name (or identification number) and station number; (iii) Race track, race...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Regional Director. The certification will include, at a minimum, the number of members who voted, the number of affirmative votes, and the number of negative votes. During the course of the voting period the..., and internet website posting. (3) Does not include communications intended to be read only by the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... Regional Director. The certification will include, at a minimum, the number of members who voted, the number of affirmative votes, and the number of negative votes. During the course of the voting period the..., and internet website posting. (3) Does not include communications intended to be read only by the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... Regional Director. The certification will include, at a minimum, the number of members who voted, the number of affirmative votes, and the number of negative votes. During the course of the voting period the..., and internet website posting. (3) Does not include communications intended to be read only by the...
76 FR 80393 - Notice of Inventory Completion: Field Museum of Natural History, Chicago, IL
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... the Remains In March 1901, human remains representing, at minimum, six individuals (catalog numbers... child's basket (catalog number 70830) and an abalone shell comprised of one larger piece of shell and...
Pseudo paths towards minimum energy states in network dynamics
NASA Astrophysics Data System (ADS)
Hedayatifar, L.; Hassanibesheli, F.; Shirazi, A. H.; Vasheghani Farahani, S.; Jafari, G. R.
2017-10-01
The dynamics of networks governed by Heider balance theory move towards lower-tension states. The condition derived from this theory forces agents to reevaluate and modify their interactions to achieve equilibrium. These possible changes in a network's topology can be considered as various paths that guide systems to minimum-energy states. Based on this theory, the final destination of a system could reside at a local energy minimum, a "jammed state", or at the global minimum energy, a balanced state. The question we would like to address is whether jammed states appear merely by chance, or whether there exist pseudo paths that bind a system towards a jammed state. We introduce an indicator, based on the Inverse Participation Ratio (IPR) method, for suspecting the location of a jammed state. We identify a margin before a local minimum within which the number of possible paths drastically decreases, a condition that proves adequate for ending up in a jammed state.
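The IPR quantity named in the abstract above can be sketched generically; this is the textbook definition of the inverse participation ratio for a weight vector, not the paper's specific network indicator (the function name and examples are illustrative):

```python
import numpy as np

def inverse_participation_ratio(v):
    """IPR of a vector: sum of squared normalized weights.

    Values near 1 indicate the weight is localized on one component;
    values near 1/N indicate it is spread evenly over N components.
    """
    v = np.asarray(v, dtype=float)
    p = v**2 / np.sum(v**2)  # normalized weights summing to 1
    return float(np.sum(p**2))

# Fully localized vector -> IPR = 1
print(inverse_participation_ratio([1, 0, 0, 0]))   # 1.0
# Uniform weight over 4 components -> IPR = 1/4
print(inverse_participation_ratio([1, 1, 1, 1]))   # 0.25
```

A sharp drop in the number of accessible paths would correspond, in this picture, to the IPR rising towards localization.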
Code of Federal Regulations, 2010 CFR
2010-10-01
...) For each fiscal year covered by the plan, the Tribe's proposed minimum number of hours per week that... the hours of work participation, it must so indicate in its TFAP along with a definition of... the purposes of TANF; (3) The work activities that count towards these work requirements; (4) Any...
An evaluation of Appalachian Trail hikers' knowledge of minimum impact skills and practices
Peter Newman; Robert Manning; Jim Bacon; Alan Graefe; Gerard Kyle
2002-01-01
As the number of visitors to national parks and related areas continues to rise and the types of visitors and activities continue to diversify, educating visitors in minimum skills can help to protect parks and related areas. Educating visitors in these skills can be a challenge, especially on the Appalachian Trail (AT) that travels through state, federal, municipal...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-07
.... The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and the need... Rivers, MI, Three Rivers Muni Dr. Haines, NDB RWY 27, Amdt 7A, CANCELLED Brainerd, MN, Brainerd Lakes Rgnl, RNAV (GPS) RWY 5, Amdt 1 Brainerd, MN, Brainerd Lakes Rgnl, RNAV (GPS) RWY 12, Amdt 1 Brainerd...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... on 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex..., ILS OR LOC/DME RWY 24, Amdt 2B Faribault, MN, Faribault Muni, RNAV (GPS) RWY 12, Amdt1 Faribault, MN, Faribault Muni, RNAV (GPS) RWY 30, Amdt1 Minneapolis, MN, Minneapolis-St Paul Intl/Wold-Chamberlain, Takeoff...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-03
.... The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and the need.../Springfield, MA, Barnes Muni, RNAV (GPS) RWY 20, Amdt 1 Moose Lake, MN, Moose Lake Carlton County, GPS RWY 4, Orig, CANCELED Moose Lake, MN, Moose Lake Carlton County, RNAV (GPS) RWY 4, Orig Indianola, MS...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-14
... 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and... DP, Amdt 3A Ely, MN, Ely Muni, VOR-A, Orig, CANCELLED Paynesville, MN, Paynesville Muni, RNAV (GPS) RWY 11, Orig Paynesville, MN, Paynesville Muni, RNAV (GPS) RWY 29, Orig Kansas City, MO, Kansas City...
Schouls, Leo M.; van der Heide, Han G. J.; Vauterin, Luc; Vauterin, Paul; Mooi, Frits R.
2004-01-01
Bordetella pertussis, the causative agent of whooping cough, has remained endemic in The Netherlands despite extensive nationwide vaccination since 1953. In the 1990s, several epidemic periods have resulted in many cases of pertussis. We have proposed that strain variation has played a major role in the upsurges of this disease in The Netherlands. Therefore, molecular characterization of strains is important in identifying the causes of pertussis epidemiology. For this reason, we have developed a multiple-locus variable-number tandem repeat analysis (MLVA) typing system for B. pertussis. By combining the MLVA profile with the allelic profile based on multiple-antigen sequence typing, we were able to further differentiate strains. The relationships between the various genotypes were visualized by constructing a minimum spanning tree. MLVA of Dutch strains of B. pertussis revealed that the genotypes of the strains isolated in the prevaccination period were diverse and clearly distinct from the strains isolated in the 1990s. Furthermore, there was a decrease in diversity in the strains from the late 1990s, with a remarkable clonal expansion that coincided with the epidemic periods. Using this genotyping, we have been able to show that B. pertussis is much more dynamic than expected. PMID:15292152
Tunable thin film filters for intelligent WDM networks
NASA Astrophysics Data System (ADS)
Cahill, Michael; Bartolini, Glenn; Lourie, Mark; Domash, Lawrence
2006-08-01
Optical transmission systems have evolved rapidly in recent years with the emergence of new technologies for gain management, wavelength multiplexing, tunability, and switching. WDM networks are increasingly expected to be agile, flexible, and reconfigurable which in turn has led to a need for monitoring to be more widely distributed within the network. Automation of many actions performed on these networks, such as channel provisioning and power balancing, can only be realized by the addition of optical channel monitors (OCMs). These devices provide information about the optical transmission system including the number of optical channels, channel identification, wavelength, power, and in some cases optical signal-to-noise ratio (OSNR). Until recently OCMs were costly and bulky and thus the number of OCMs used in optical networks was often kept to a minimum. We describe a family of tunable thin film filters which have greatly reduced the cost and physical footprint of channel monitors, making possible 'monitoring everywhere' for intelligent optical networks which can serve long haul, metro and access requirements from a single technology platform. As examples of specific applications we discuss network issues such as auto provisioning, wavelength collision avoidance, power balancing, OSNR balancing, gain equalization, alien wavelength recognition, interoperability, and other requirements assigned to the emerging concept of an Optical Control Plane.
Yoon, J H; Feeney, D A; Jessen, C R; Walter, P A
2008-02-01
A retrospective analysis of survival times in dogs with intranasal tumors was performed, comparing those treated with hypofractionated or full-course Co-60 radiotherapy protocols, alone or with surgical adjuvant therapy, and those receiving no radiation treatment. One hundred thirty-nine dogs presented to the University of Minnesota Veterinary Medical Center for treatment of histologically confirmed nasal neoplasia between July 1983 and October 2001 met the criteria for review. Statistically analyzed parameters included age at diagnosis; tumor histologic classification; fractionation schedule (number of treatments and number of treatment days/week; classified as hypofractionated if 2 or fewer treatments/week); calculated minimum tumor dose/fraction; calculated total minimum tumor dose (classified as hypofractionated if less than 37 Gy in six or fewer fractions); number of radiotherapy portals; a treatment gap of more than 7 days in a full-course (3-5 treatments/week, 3-3.5-week treatment time) radiotherapy protocol; the influence of eye shields on survival following single-portal DV fields; the survey radiographic extent of the disease; and the presence or absence of cytoreductive surgery. There was a significant relationship with survival only for protocols using 3 or more treatments/week and at least 37 Gy cumulative minimum tumor dose. However, there was no significant relationship between survival and either total minimum tumor dose or dose/fraction, and no significant relationships between survival and any of the other variables analyzed, including tumor histologic type.
Surveillance Range and Interference Impacts on Self-Separation Performance
NASA Technical Reports Server (NTRS)
Idris, Husni; Consiglio, Maria C.; Wing, David J.
2011-01-01
Self-separation is a concept of flight operations that aims to provide user benefits and increase airspace capacity by transferring traffic separation responsibility from ground-based controllers to the flight crew. Self-separation is enabled by cooperative airborne surveillance, such as that provided by the Automatic Dependent Surveillance-Broadcast (ADS-B) system and airborne separation assistance technologies. This paper describes an assessment of the impact of ADS-B system performance on the performance of self-separation as a step towards establishing far-term ADS-B performance requirements. Specifically, the impacts of ADS-B surveillance range and interference limitations were analyzed under different traffic density levels. The analysis was performed using a batch simulation of aircraft performing self-separation assisted by NASA's Autonomous Operations Planner prototype flight-deck tool, in two-dimensional airspace. An aircraft detected conflicts within a look-ahead time of ten minutes and resolved them using strategic closed trajectories, or tactical open maneuvers if the time to loss of separation was below a threshold. While a complex interaction was observed between the impacts of surveillance range and interference, as both factors are physically coupled, self-separation performance followed expected trends. An increase in surveillance range resulted in a decrease in the number of conflict detections, an increase in the average conflict detection lead time, and an increase in the percentage of conflict resolutions that were strategic. The majority of the benefit was observed when surveillance range was increased to a value corresponding to the conflict detection look-ahead time. The benefits were attenuated at higher interference levels.
Increase in traffic density resulted in a significant increase in the number of conflict detections, as expected, but had no effect on the conflict detection lead time and the percentage of conflict resolutions that were strategic. With surveillance range corresponding to ADS-B minimum operational performance standards for Class A3 equipment and without background interference, a significant portion of conflict resolutions, 97 percent, were achieved in the preferred strategic mode. The majority of conflict resolutions, 71 percent, were strategic even with very high interference (over three times that expected in 2035).
Will Solar Cycles 25 and 26 Be Weaker than Cycle 24?
NASA Astrophysics Data System (ADS)
Javaraiah, J.
2017-11-01
The study of variations in solar activity is important for understanding the underlying mechanism of solar activity and for predicting the level of activity in view of the activity impact on space weather and global climate. Here we have used the amplitudes (the peak values of the 13-month smoothed international sunspot number) of Solar Cycles 1 - 24 to predict the relative amplitudes of the solar cycles during the rising phase of the upcoming Gleissberg cycle. We fitted a cosine function to the amplitudes and times of the solar cycles after subtracting a linear fit of the amplitudes. The best cosine fit shows overall properties (periods, maxima, minima, etc.) of Gleissberg cycles, but with large uncertainties. We obtain a pattern of the rising phase of the upcoming Gleissberg cycle, but there is considerable ambiguity. Using the epochs of violations of the Gnevyshev-Ohl rule (G-O rule) and the `tentative inverse G-O rule' of solar cycles during the period 1610 - 2015, and also using the epochs where the orbital angular momentum of the Sun steeply decreased during the period 1600 - 2099, we infer that Solar Cycle 25 will be weaker than Cycle 24. Cycles 25 and 26 will have almost the same strength, and their epochs are at the minimum between the current and upcoming Gleissberg cycles. In addition, Cycle 27 is expected to be stronger than Cycle 26 and weaker than Cycle 28, and Cycle 29 is expected to be stronger than both Cycles 28 and 30. The maximum of Cycle 29 is expected to represent the next Gleissberg maximum. Our analysis also suggests a much lower value (30 - 40) for the maximum amplitude of the upcoming Cycle 25.
Bounds on the minimum number of recombination events in a sample history.
Myers, Simon R; Griffiths, Robert C
2003-01-01
Recombination is an important evolutionary factor in many organisms, including humans, and understanding its effects is an important task facing geneticists. Detecting past recombination events is thus important; this article introduces statistics that give a lower bound on the number of recombination events in the history of a sample, on the basis of the patterns of variation in the sample DNA. Such lower bounds are appropriate, since many recombination events in the history are typically undetectable, so the true number of historical recombinations is unobtainable. The statistics can be calculated quickly by computer and improve upon the earlier bound of Hudson and Kaplan (1985). A method is developed to combine bounds on local regions in the data to produce more powerful improved bounds. The method is flexible to different models of recombination occurrence. The approach gives recombination event bounds between all pairs of sites, to help identify regions with more detectable recombinations, and these bounds can be viewed graphically. Under coalescent simulations, there is a substantial improvement over the earlier method (of up to a factor of 2) in the expected number of recombination events detected by one of the new minima, across a wide range of parameter values. The method is applied to data from a region within the lipoprotein lipase gene and the amount of detected recombination is substantially increased. Further, there is strong clustering of detected recombination events in an area near the center of the region. A program implementing these statistics, which was used for this article, is available from http://www.stats.ox.ac.uk/mathgen/programs.html. PMID:12586723
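The earlier Hudson-Kaplan bound that this work improves upon can be sketched via the four-gamete test: under the infinite-sites model, a pair of sites exhibiting all four gamete types implies at least one recombination between them, and counting disjoint such intervals gives the classical R_m lower bound. This is a minimal generic sketch of that baseline, not the improved statistics of the article (function names are illustrative):

```python
def four_gametes(col_a, col_b):
    """True if two sites (0/1 allele columns) show all four gamete types,
    implying a recombination between them under infinite sites."""
    return len(set(zip(col_a, col_b))) == 4

def hudson_kaplan_rm(haplotypes):
    """Classical R_m lower bound: maximum number of disjoint incompatible
    intervals, found greedily by earliest right endpoint.

    haplotypes: list of rows, each a sequence of 0/1 alleles.
    """
    n_sites = len(haplotypes[0])
    cols = [[row[j] for row in haplotypes] for j in range(n_sites)]
    intervals = [(i, j) for i in range(n_sites) for j in range(i + 1, n_sites)
                 if four_gametes(cols[i], cols[j])]
    intervals.sort(key=lambda ij: ij[1])
    rm, last_end = 0, -1
    for i, j in intervals:
        if i >= last_end:     # open intervals (i, j) and (last) are disjoint
            rm += 1
            last_end = j
    return rm

# All four gametes at two sites -> at least one recombination
print(hudson_kaplan_rm([[0, 0], [0, 1], [1, 0], [1, 1]]))  # 1
```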
46 CFR 34.60-5 - Number required-T/ALL.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 1 2012-10-01 2012-10-01 false Number required-T/ALL. 34.60-5 Section 34.60-5 Shipping... Number required—T/ALL. (a) All tankships shall carry at least the minimum number of fire axes as set... protection of the tankship. Table 34.60-5(a) Gross tons Over Not over Number of axes 50 1 50 200 2 200 500 3...
Gallo, N D; Levin, L A
Oxygen minimum zones (OMZs) and oxygen limited zones (OLZs) are important oceanographic features in the Pacific, Atlantic, and Indian Ocean, and are characterized by hypoxic conditions that are physiologically challenging for demersal fish. Thickness, depth of the upper boundary, minimum oxygen levels, local temperatures, and diurnal, seasonal, and interannual oxycline variability differ regionally, with the thickest and shallowest OMZs occurring in the subtropics and tropics. Although most fish are not hypoxia-tolerant, at least 77 demersal fish species from 16 orders have evolved physiological, behavioural, and morphological adaptations that allow them to live under the severely hypoxic, hypercapnic, and at times sulphidic conditions found in OMZs. Tolerance to OMZ conditions has evolved multiple times in multiple groups with no single fish family or genus exploiting all OMZs globally. Severely hypoxic conditions in OMZs lead to decreased demersal fish diversity, but fish density trends are variable and dependent on region-specific thresholds. Some OMZ-adapted fish species are more hypoxia-tolerant than most megafaunal invertebrates and are present even when most invertebrates are excluded. Expansions and contractions of OMZs in the past have affected fish evolution and diversity. Current patterns of ocean warming are leading to ocean deoxygenation, causing the expansion and shoaling of OMZs, which is expected to decrease demersal fish diversity and alter trophic pathways on affected margins. Habitat compression is expected for hypoxia-intolerant species, causing increased susceptibility to overfishing for fisheries species. Demersal fisheries are likely to be negatively impacted overall by the expansion of OMZs in a warming world. © 2016 Elsevier Ltd. All rights reserved.
The minimum distance approach to classification
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1971-01-01
Work to advance the state of the art of minimum distance classification is reported. This is accomplished through a combination of theoretical and comprehensive experimental investigations based on multispectral scanner data. A survey of the literature for suitable distance measures was conducted, and the results of this survey are presented. It is shown that minimum distance classification, using density estimators and Kullback-Leibler numbers as the distance measure, is equivalent to a form of maximum likelihood sample classification. It is also shown that, for the parametric case, minimum distance classification is equivalent to nearest neighbor classification in the parameter space.
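The simplest member of this family, minimum distance to class means, can be sketched as follows; the Kullback-Leibler-based variant discussed in the report is not reproduced here, and the data are illustrative:

```python
import numpy as np

def fit_class_means(X, y):
    """Mean feature vector per class from training rows X with labels y."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def minimum_distance_classify(x, means):
    """Assign x to the class whose mean is nearest in Euclidean distance."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Two well-separated clusters of 2-D "pixel" feature vectors
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.1]])
y = np.array([0, 0, 1, 1])
means = fit_class_means(X, y)
print(minimum_distance_classify(np.array([0.3, 0.2]), means))  # 0
```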
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
NASA Astrophysics Data System (ADS)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita
2014-06-01
Traditional portfolio optimization methods in the likes of Markowitz' mean-variance model and semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality due to the fact that maximum and minimum values from the data may largely influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides more stable information ratio.
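The static Markowitz baseline the abstract contrasts against has a closed form for the global minimum-variance portfolio, w = C⁻¹1 / (1ᵀC⁻¹1); the paper's stochastic variant is not reproduced, and the covariance values below are illustrative:

```python
import numpy as np

def min_variance_weights(cov):
    """Global minimum-variance portfolio weights (fully invested, no
    constraints): w = C^{-1} 1 / (1' C^{-1} 1)."""
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)  # C^{-1} 1 without an explicit inverse
    return w / w.sum()

# Uncorrelated assets with equal variance -> equal weights
print(min_variance_weights(np.eye(3)))
# A lower-variance asset receives more weight
print(min_variance_weights(np.diag([1.0, 4.0])))  # [0.8, 0.2]
```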
Optimal estimation of the optomechanical coupling strength
NASA Astrophysics Data System (ADS)
Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André
2018-06-01
We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.
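The quantum Cramér-Rao-type inequality referred to above takes, in its standard form (a generic statement; the paper's specific estimator and bound are not reproduced):

```latex
\mathrm{Var}(\hat{g}) \;\ge\; \frac{1}{M\, F_Q(g)},
```

where $\hat{g}$ is an unbiased estimator of the coupling strength $g$, $M$ is the number of independent repetitions of the measurement, and $F_Q(g)$ is the quantum Fisher information of the probe state with respect to $g$.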
Keogh, Pauraic; Ray, Noel J; Lynch, Christopher D; Burke, Francis M; Hannigan, Ailish
2004-12-01
This investigation determined the minimum exposure times consistent with optimised surface microhardness parameters for a commercial resin composite cured using a "first-generation" light-emitting diode activation lamp. Disk specimens were exposed and surface microhardness numbers measured at the top and bottom surfaces for elapsed times of 1 hour and 24 hours. Bottom/top microhardness number ratios were also calculated. Most microhardness data increased significantly over the elapsed time interval but microhardness ratios (bottom/top) were dependent on exposure time only. A minimum exposure of 40 secs is appropriate to optimise microhardness parameters for the combination of resin composite and lamp investigated.
ERIC Educational Resources Information Center
Karazsia, Bryan T.; Smith, Lena
2016-01-01
In the present study, faculty who teach in clinical and counseling doctor of philosophy (PhD) or doctor of psychology (PsyD) programs completed surveys regarding preferences for prospective student preparations to graduate programs. Faculty expectations of minimum and ideal undergraduate training were highest for scientific methods, though…
A Tri-Reference Point Theory of Decision Making under Risk
ERIC Educational Resources Information Center
Wang, X. T.; Johnson, Joseph G.
2012-01-01
The tri-reference point (TRP) theory takes into account minimum requirements (MR), the status quo (SQ), and goals (G) in decision making under risk. The 3 reference points demarcate risky outcomes and risk perception into 4 functional regions: success (expected value of x greater than or equal to G), gain (SQ less than x less than G), loss (MR…
How Minimal Grade Goals and Self-Control Capacity Interact in Predicting Test Grades
ERIC Educational Resources Information Center
Bertrams, Alex
2012-01-01
The present research examined the prediction of school students' grades in an upcoming math test via their minimal grade goals (i.e., the minimum grade in an upcoming test one would be satisfied with). Due to its significance for initiating and maintaining goal-directed behavior, self-control capacity was expected to moderate the relation between…
Principles of Air Defense and Air Vehicle Penetration
2000-03-01
Range: For reliable detection, the target signal must reach some minimum or threshold value called S... When internal noise is the only interfer...analyze air defense and air vehicle penetration. Unique expected value models are developed with frequent numerical examples. Radar...penetrator in the presence of spurious returns from internal and external noise will be discussed. Tracking: With sufficient sensor information to determine
Task Analyses for Difficult-to-Assess Collective Tasks
2014-02-01
FOR THE KLE MISSION Review and rehearse social nuances, customs, and etiquette of the host nation, e.g., gift exchange expectations. Practice... etiquette to foster rapport with the leader and demonstrate cultural awareness. Negotiate with the key leader in a manner that demonstrates...requirements for police applicants by establishing minimum entry requirements, physical fitness tests, literacy tests, and medical screening protocols
NASA Astrophysics Data System (ADS)
Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine
2018-05-01
The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
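The margin-of-error criterion used above to fix the minimum number of realizations follows the standard sample-size formula, n ≥ (z·s/E)², for a confidence half-width E given a pilot estimate s of the standard deviation; this is a generic sketch under a normality assumption, not the paper's exact procedure, and the numbers are illustrative:

```python
import math

def min_realizations(sample_std, margin, z=1.96):
    """Minimum number of RVE realizations so that the half-width of the
    ~95% confidence interval on the mean property is at most `margin`.

    Assumes the sample mean is approximately normally distributed.
    """
    return math.ceil((z * sample_std / margin) ** 2)

# Pilot std of 0.5 (property units), target half-width 0.1:
print(min_realizations(0.5, 0.1))  # 97
```

A higher volume fraction or a clustered microstructure raises the realization-to-realization scatter (larger `sample_std`), which is consistent with the larger minimum number of realizations reported above.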
NASA Astrophysics Data System (ADS)
Hidalgo, P.; Escribano, R.
2015-12-01
A shallow oxygen minimum zone (OMZ) is a critical component of the coastal upwelling ecosystem off Chile. This OMZ causes oxygen-deficient water to enter the photic layer, affecting plankton communities with low tolerance to hypoxia. Variable, and usually species-dependent, responses of zooplankton to hypoxic conditions are found. Most dominant species avoid hypoxia by restricting their vertical distribution, while others can temporarily enter, and even spend part of their life cycle within, the OMZ. Whatever the case, low-oxygen conditions appear to affect virtually all vital rates of zooplankton, such as mortality, fecundity, development, growth, and metabolism; early developmental stages seem more sensitive, with significant consequences for population and community dynamics. For most study cases, these effects are negative at the individual and population levels. Observations and predictions of increasing upwelling intensity over the last 20-30 years indicate a gradual shoaling of the OMZ, and thus an expected enhancement of these negative effects of hypoxia on the zooplankton community. Unknown processes of adaptation and community-structure adjustment are expected to take place, with uncertain consequences for the food web of this highly productive eastern boundary current ecosystem.
Quantum dynamics of the Einstein-Rosen wormhole throat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunstatter, Gabor; Peltola, Ari; Louko, Jorma
2011-02-15
We consider the polymer quantization of the Einstein wormhole throat theory for an eternal Schwarzschild black hole. We numerically solve the difference equation describing the quantum evolution of an initially Gaussian, semiclassical wave packet. As expected from previous work on loop quantum cosmology, the wave packet remains semiclassical until it nears the classical singularity, at which point it enters a quantum regime in which the fluctuations become large. The expectation value of the radius reaches a minimum as the wave packet is reflected from the origin and emerges to form a near-Gaussian but asymmetrical semiclassical state at late times. The value of the minimum depends in a nontrivial way on the initial mass/energy of the pulse, its width, and the polymerization scale. For wave packets that are sufficiently narrow near the bounce, the semiclassical bounce radius is obtained. Although the numerics become difficult to control in this limit, we argue that for pulses of finite width the bounce persists as the polymerization scale goes to zero, suggesting that in this model the loop quantum gravity effects mimicked by polymer quantization do not play a crucial role in the quantum bounce.
ERIC Educational Resources Information Center
Berent, Jerzy
This survey analysis compares fertility levels in the United States and European countries, discusses socioeconomic influences in ultimate expected family size, and examines birth rate trends. The average number of ultimately expected children varies from 2.13 children per woman in Bulgaria to 2.80 in Spain. Eighty to 90 percent of U.S. and…
46 CFR 95.60-5 - Number required.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Number required. 95.60-5 Section 95.60-5 Shipping COAST... EQUIPMENT Fire Axes § 95.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of fire axes as set forth in Table 95.60-5(a). Nothing in this paragraph shall be construed...
46 CFR 95.60-5 - Number required.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Number required. 95.60-5 Section 95.60-5 Shipping COAST... EQUIPMENT Fire Axes § 95.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of fire axes as set forth in Table 95.60-5(a). Nothing in this paragraph shall be construed...
46 CFR 95.60-5 - Number required.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Number required. 95.60-5 Section 95.60-5 Shipping COAST... EQUIPMENT Fire Axes § 95.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of fire axes as set forth in Table 95.60-5(a). Nothing in this paragraph shall be construed...
46 CFR 95.60-5 - Number required.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Number required. 95.60-5 Section 95.60-5 Shipping COAST... EQUIPMENT Fire Axes § 95.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of fire axes as set forth in Table 95.60-5(a). Nothing in this paragraph shall be construed...
46 CFR 95.60-5 - Number required.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Number required. 95.60-5 Section 95.60-5 Shipping COAST... EQUIPMENT Fire Axes § 95.60-5 Number required. (a) All vessels except barges shall carry at least the minimum number of fire axes as set forth in Table 95.60-5(a). Nothing in this paragraph shall be construed...
Predictions of Sunspot Cycle 24: A Comparison with Observations
NASA Astrophysics Data System (ADS)
Bhatt, N. J.; Jain, R.
2017-12-01
Space weather is largely affected by explosions on the Sun, viz. solar flares and CMEs, which in turn depend upon the magnitude of solar activity, i.e., the number of sunspots and their magnetic configuration. Owing to these space weather effects, predictions of the sunspot cycle are important. Precursor techniques, particularly those employing geomagnetic indices, are often used to predict the maximum amplitude of a sunspot cycle. Based on the average geomagnetic activity index aa (1868 onwards) for the year of the sunspot minimum and the preceding four years, Bhatt et al. (2009) made two predictions for sunspot cycle 24, taking 2008 as the year of sunspot minimum: (i) the annual maximum amplitude would be 92.8±19.6 (1-sigma accuracy), indicating a somewhat weaker cycle 24 compared to cycles 21-23, and (ii) the smoothed monthly mean sunspot number maximum would occur in October 2012±4 months (1-sigma accuracy). However, observations reveal that the sunspot minimum extended into 2009, and the maximum amplitude attained was 79, with a monthly mean sunspot number maximum of 102.3 in February 2014. In view of these observations, and particularly owing to the extended solar minimum in 2009, we re-examined our prediction model and revised the prediction results. We find that (i) the annual maximum amplitude of cycle 24 = 71.2±19.6 and (ii) the smoothed monthly mean sunspot number maximum occurred in January 2014±4 months. We discuss the failures and successes of our approach and present improved predictions for the maximum amplitude as well as the timing, which are now in good agreement with the observations. We also present the limitations of our forecasting with regard to long-term predictions. We show that, if the year of sunspot minimum activity and the magnitude of geomagnetic activity during sunspot minimum are taken correctly, our prediction method appears to be a reliable indicator for forecasting the sunspot amplitude of the following solar cycle.
References: Bhatt, N.J., Jain, R., & Aggarwal, M.: 2009, Sol. Phys. 260, 225
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?
Code of Federal Regulations, 2011 CFR
2011-04-01
...; (ii) Gaming operation name (or identification number) and station number; (iii) Race track, race number, horse identification or event identification, as applicable; (iv) Type of bet(s), each bet amount... wagering on race events while on duty, including during break periods. (g) Computer reports standards. (1...
Lee, S H; Kang, J S; Min, J S; Yoon, K S; Strycharz, J P; Johnson, R; Mittapalli, O; Margam, V M; Sun, W; Li, H-M; Xie, J; Wu, J; Kirkness, E F; Berenbaum, M R; Pittendrigh, B R; Clark, J M
2010-10-01
The human body louse, Pediculus humanus humanus, has one of the smallest insect genomes, containing ∼10 775 annotated genes. Annotation of detoxification [cytochrome P450 monooxygenase (P450), glutathione-S-transferase (GST), esterase (Est) and ATP-binding cassette transporter (ABC transporter)] genes revealed that they are dramatically reduced in P. h. humanus compared to other insects except for Apis mellifera. There are 37 P450, 13 GST and 17 Est genes present in P. h. humanus, approximately half the number found in Drosophila melanogaster and Anopheles gambiae. The number of putatively functional ABC transporter genes in P. h. humanus and Ap. mellifera is the same (36), but both have fewer than An. gambiae (44) or Dr. melanogaster (65). The reduction of detoxification genes in P. h. humanus may be a result of this louse's simple life history, in which it does not encounter a wide variety of xenobiotics. Neuronal component genes are highly conserved across different insect species, as expected because of their critical function. Although reduced in number, P. h. humanus still retains at least a minimum repertoire of genes known to confer metabolic or toxicokinetic resistance to xenobiotics (e.g. Cyp3 clade P450s, Delta GSTs, B clade Ests and B/C subfamily ABC transporters), suggestive of its high potential for resistance development. © 2010 The Authors. Insect Molecular Biology © 2010 The Royal Entomological Society.
Code of Federal Regulations, 2014 CFR
2014-01-01
... evaluation, the NSPM will make every attempt to notify the sponsor at least one (1) week, but in no case less... may change from one 12-month period to the next 12-month period as long as the sponsor sponsors and uses at least one FFS at least once during the prescribed period. No minimum number of hours or minimum...
Code of Federal Regulations, 2012 CFR
2012-01-01
... evaluation, the NSPM will make every attempt to notify the sponsor at least one (1) week, but in no case less... may change from one 12-month period to the next 12-month period as long as the sponsor sponsors and uses at least one FFS at least once during the prescribed period. No minimum number of hours or minimum...
Code of Federal Regulations, 2013 CFR
2013-01-01
... evaluation, the NSPM will make every attempt to notify the sponsor at least one (1) week, but in no case less... may change from one 12-month period to the next 12-month period as long as the sponsor sponsors and uses at least one FFS at least once during the prescribed period. No minimum number of hours or minimum...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-23
... on 8260-15A. The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex... 31, Amdt 14 CANCELLED Cook, MN, Cook Muni, RNAV (GPS) RWY 13, Orig Cook, MN, Cook Muni, RNAV (GPS) RWY 31, Amdt 1 Ely, MN, Ely Muni, RNAV (GPS) RWY 12, Amdt 1 Ely, MN, Ely Muni, RNAV (GPS) RWY 30, Amdt...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-17
.... The large number of SIAPs, Takeoff Minimums and ODPs, in addition to their complex nature and the need.... Part 97 is amended to read as follows: Effective 20 OCT 2011 Albert Lea, MN, Albert Lea Muni, RNAV (GPS) RWY 17, Amdt 2 Albert Lea, MN, Albert Lea Muni, RNAV (GPS) RWY 35, Amdt 1 Albert Lea, MN, Albert Lea...
NASA Astrophysics Data System (ADS)
Askin, Osman; Irmak, Riza; Avsever, Mustafa
2015-05-01
For states with advanced technology, the effective use of electronic warfare and cyber warfare will be the main determining factor in winning a war in the future operational environment. Thanks to high technology, developed states will be able to conclude the conflicts they enter with minimal human casualties and at minimal cost. Considering the growing number of world economic problems and the development of human rights and humanitarian law, it is easy to understand the importance of minimal cost and minimal loss of human life. In this paper, the concepts of cyber warfare and electronic warfare are examined in conjunction with their historical development, and the relationship between them is explained. Finally, assessments are offered about the use of cyber electronic warfare in the coming years.
Present and Future Water Supply for Mammoth Cave National Park, Kentucky
Cushman, R.V.; Krieger, R.A.; McCabe, John A.
1965-01-01
The increase in the number of visitors during the past several years at Mammoth Cave National Park has rendered the present water supply inadequate. Emergency measures were necessary during August 1962 to supplement the available supply. The Green River is the largest potential source of water supply for Mammoth Cave. The 30-year minimum daily discharge is 40 mgd (million gallons per day). The chemical quality is now good, but in the past the river has been contaminated by oil-field-brine wastes. By mixing it with water from the existing supply, Green River water could be diluted to provide water of satisfactory quality in the event of future brine pollution. The Nolin River is the next largest potential source of water (minimum releases from Nolin Reservoir, 97-129 mgd). The quality is satisfactory, but use of this source would require an 8-mile pipeline. The present water supply comes from springs draining a perched aquifer in the Haney Limestone Member of the Golconda Formation on Flint Ridge. Chemical quality is excellent, but the minimum observed flow of all the springs on Flint Ridge plus Bransford well was only 121,700 gpd (gallons per day). This supply is adequate for present needs but not for future requirements; it could be augmented with water from the Green River. Wet Prong Buffalo Creek is the best of several small-stream supplies in the vicinity of Mammoth Cave. Minimum flow of the creek is probably about 300,000 gpd and the quality is good. The supply is about 5 miles from Mammoth Cave. This supply may also be utilized for a future separate development in the northern part of the park. The maximum recorded yield of wells drilled into the basal ground water in the Ste. Genevieve and St. Louis Limestone is 36 gpm (gallons per minute). Larger supplies may be developed if a large underground stream is struck.
Quality can be expected to be good unless the well is drilled too far below the basal water table and intercepts poorer quality water at a lower level. This source of supply might be used to augment the present supply, but locating the trunk conduits might be difficult. Water in alluvium adjacent to the Green River and perched water in the Big Clifty Sandstone Member of the Golconda Formation and Girkin Formation have little potential as a water supply.
Trends in Middle East climate extreme indices from 1950 to 2003
NASA Astrophysics Data System (ADS)
Zhang, Xuebin; Aguilar, Enric; Sensoy, Serhat; Melkonyan, Hamlet; Tagiyeva, Umayra; Ahmed, Nader; Kutaladze, Nato; Rahimzadeh, Fatemeh; Taghipour, Afsaneh; Hantosh, T. H.; Albert, Pinhas; Semawi, Mohammed; Karam Ali, Mohammad; Said Al-Shabibi, Mansoor Halal; Al-Oulan, Zaid; Zatari, Taha; Al Dean Khelet, Imad; Hamoud, Saleh; Sagir, Ramazan; Demircan, Mesut; Eken, Mehmet; Adiguzel, Mustafa; Alexander, Lisa; Peterson, Thomas C.; Wallis, Trevor
2005-11-01
A climate change workshop for the Middle East brought together scientists and data for the region to produce the first area-wide analysis of climate extremes for the region. This paper reports trends in extreme precipitation and temperature indices that were computed during the workshop and additional indices data that became available after the workshop. Trends in these indices were examined for 1950-2003 at 52 stations covering 15 countries, including Armenia, Azerbaijan, Bahrain, Cyprus, Georgia, Iran, Iraq, Israel, Jordan, Kuwait, Oman, Qatar, Saudi Arabia, Syria, and Turkey. Results indicate that there have been statistically significant, spatially coherent trends in temperature indices that are related to temperature increases in the region. Significant, increasing trends have been found in the annual maximum of daily maximum and minimum temperature, the annual minimum of daily maximum and minimum temperature, the number of summer nights, and the number of days where daily temperature has exceeded its 90th percentile. Significant negative trends have been found in the number of days when daily temperature is below its 10th percentile and daily temperature range. Trends in precipitation indices, including the number of days with precipitation, the average precipitation intensity, and maximum daily precipitation events, are weak in general and do not show spatial coherence. The workshop attendees have generously made the indices data available for the international research community.
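One of the percentile-based indices discussed above, the count of days exceeding the 90th percentile of a reference sample, can be sketched as follows. The nearest-rank percentile and the toy data are simplifying assumptions of this sketch; operational indices such as TX90p use calendar-day base-period percentiles with bootstrapping.

```python
# Minimal sketch of a workshop-style extreme index: the number of days
# whose daily value exceeds the 90th percentile of a reference period.

def percentile(values, p):
    """Nearest-rank percentile (p in 0..100) — a simplifying choice."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, int(round(p / 100.0 * len(s))) - 1))
    return s[k]

def days_above_p90(daily, reference):
    """Count days in `daily` exceeding the reference 90th percentile."""
    thr = percentile(reference, 90)
    return sum(1 for v in daily if v > thr)

reference = list(range(1, 101))    # toy base period: values 1..100
daily = [50, 91, 95, 99, 20, 92]   # toy sample of daily maxima
count = days_above_p90(daily, reference)
```

Trend analysis would then be run on the annual series of such counts, station by station, as the abstract describes.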
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
1999-01-01
Recently, Ahluwalia reviewed the solar and geomagnetic data for the last 6 decades and remarked that these data "indicate the existence of a three-solar-activity-cycle quasiperiodicity in them." Furthermore, on the basis of this inferred quasiperiodicity, he asserted that cycle 23 represents the initial cycle in a new three-cycle string, implying that it "will be more modest (a la cycle 17) with an annual mean sunspot number count of 119.3 +/- 30 at the maximum", a prediction that is considerably below the consensus prediction of 160 +/- 30 by Joselin et al. and similar predictions by others based on a variety of predictive techniques. Several major sticking points of Ahluwalia's presentation, however, must be readdressed, and these issues form the basis of this comment. First, Ahluwalia appears to have based his analysis on a data set of Ap index values that is erroneous. For example, he depicts for the interval 1932-1997 the variation of the Ap index in terms of annual averages, contrasting them against annual averages of sunspot number (SSN), and he lists for cycles 17-23 the minimum and maximum value of each, as well as the years in which they occur and a quantity which he calls "Amplitude" (defined as the numeric difference between the maximum and minimum values). In particular, he identifies the minimum Ap index (i.e., the minimum value of the Ap index in the vicinity of sunspot cycle minimum, which usually occurs in the year following sunspot minimum and which will be called hereafter, simply, Ap min) and the years in which it occurs for cycles 17-23, respectively.
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
NASA Astrophysics Data System (ADS)
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores the curious number popularly known as the Kaprekar constant, along with the Kaprekar numbers. A large number of courses and differing classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. The minimum and maximum numbers of iterations needed to reach the Kaprekar constant for four-digit numbers are obtained through linear programming techniques.
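The Kaprekar iteration itself is easy to state in code; the routine and its zero-padding convention below are a minimal sketch, not the paper's linear-programming formulation.

```python
# Kaprekar iteration for four-digit numbers: repeatedly subtract the
# ascending-digit arrangement from the descending one; every four-digit
# number whose digits are not all equal reaches the fixed point 6174.

KAPREKAR = 6174

def kaprekar_steps(n):
    """Iterations for a four-digit n (digits not all equal) to reach 6174."""
    steps = 0
    while n != KAPREKAR:
        digits = f"{n:04d}"                         # keep leading zeros
        hi = int("".join(sorted(digits, reverse=True)))
        lo = int("".join(sorted(digits)))
        n = hi - lo
        steps += 1
    return steps

# 3524 -> 3087 -> 8352 -> 6174: three iterations
steps_3524 = kaprekar_steps(3524)
```

Enumerating all valid four-digit inputs shows the iteration count ranges from 0 (for 6174 itself) to 7, the extremes the abstract refers to.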
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model.
Shinzato, Takashi
2015-01-01
In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important for an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with those obtained by the operations research approach and by a numerical simulation using the optimal portfolio. The results of our method and the numerical simulation are in agreement, but they differ from those of the operations research approach.
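The gap between "minimum of the expectation" and "expectation of the minimum" can be seen on a generic toy example: for any two random risks R1, R2, E[min(R1, R2)] <= min(E[R1], E[R2]). The simulation below is only an illustration of that inequality, not the replica-method analysis of the paper.

```python
# Toy demonstration that the expected minimum of two random risks is
# below the minimum of their expectations (Jensen-type inequality).
import random

random.seed(0)
samples = [(random.gauss(1.0, 0.5), random.gauss(1.0, 0.5))
           for _ in range(100000)]

# Expectation of the per-sample minimum (risk of always picking the
# ex-post better option)...
exp_of_min = sum(min(a, b) for a, b in samples) / len(samples)
# ...versus the minimum of the two expectations (risk of committing to
# one strategy in advance).
min_of_exp = min(sum(a for a, _ in samples) / len(samples),
                 sum(b for _, b in samples) / len(samples))
```

With both risks distributed N(1, 0.5), the expectation of the minimum comes out clearly below 1, while each individual expectation is close to 1.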
ERIC Educational Resources Information Center
Schwartzman, Steven
1993-01-01
Discusses the surprising result that the expected number of marbles of one color drawn from a set of marbles of two colors after two draws without replacement is the same as the expected number of that color marble after two draws with replacement. Presents mathematical models to help explain this phenomenon. (MDH)
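The surprising equality can be verified exactly with a short computation; the helper functions and the 3-red/5-blue example are hypothetical choices for illustration, not taken from the article.

```python
# Exact check: with r red and b blue marbles, the expected number of red
# marbles in two draws is 2r/(r+b) whether or not the first marble is
# replaced (a consequence of linearity of expectation).
from fractions import Fraction as F

def expected_red_with_replacement(r, b):
    p = F(r, r + b)
    return 2 * p  # each draw contributes p, independently

def expected_red_without_replacement(r, b):
    n = r + b
    p_first = F(r, n)
    # second draw: condition on the colour of the first
    p_second = p_first * F(r - 1, n - 1) + (1 - p_first) * F(r, n - 1)
    return p_first + p_second

e_with = expected_red_with_replacement(3, 5)
e_without = expected_red_without_replacement(3, 5)
```

For 3 red and 5 blue marbles both computations give exactly 3/4, as linearity of expectation predicts.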
The emergence of a new chlorophytan system, and Dr. Kornmann's contribution thereto
NASA Astrophysics Data System (ADS)
van den Hoek, C.; Stam, W. T.; Olsen, J. L.
1988-09-01
In traditional chlorophytan systems the organizational level was the primary character for the distinction of main groups (classes and orders). For instance, in Fott (1971), the flagellate level corresponds with the Volvocales, the coccoid level with the Chlorococcales, the filamentous level with the Ulotrichales, the siphonocladous level with the Siphonocladales, and the siphonous level with the Bryopsidales. The new system presented here is an elaboration and emendation of recently proposed taxonomies and their underlying phylogenetic hypotheses, and it is mainly based on ultrastructural features which have become available over the last 15 years. The following criteria are used for the distinction of classes and orders: (1) architecture of the flagellate cell (flagellate cells are considered as the depositories of primitive characters); (2) type of mitosis-cytokinesis; (3) place of meiosis in the life history and, consequently, the sexual life history type; (4) organizational level and thallus architecture; (5) habitat type (marine versus freshwater and terrestrial); (6) chloroplast type. The following classes are presented: Prasinophyceae, Chlamydophyceae, Ulvophyceae (orders Codiolales, Ulvales, Cladophorales, Bryopsidales, Dasycladales), Pleurastrophyceae (?), Chlorophyceae s.s. (orders Cylindrocapsales, Oedogoniales, Chaetophorales), Zygnematophyceae, Trentepohliophyceae, Charophyceae (orders Klebsormidiales, Coleochaetales, Charales). The new system no longer reflects the traditional hypothesis of a stepwise evolutionary progression of organizational levels in which the flagellate level represents the most primitive lineage, the coccoid and sarcinoid levels lineages of intermediate derivation, and the filamentous, siphonocladous and siphonous levels the most derived lineages. Instead, it is now hypothesized that these levels have arisen over and over again in different chlorophytan lineages which are primarily characterized by their type of flagellate cell.
The flagellate green algal classes Prasinophyceae (with organic body scales) and Chlamydophyceae probably represent bundles of highly conservative lineages that diverged very long ago. Consequently, extant genera and species in these classes can be expected to have emerged long ago. Fossil evidence points to a minimum age of 600 Ma for certain extant Prasinophycean genera, and molecular evidence to a minimum age of 400-500 Ma for a few Chlamydomonas species. On the contrary, the most derived “green algal” lineage, the Angiosperms, can be expected to consist of, on average, much younger genera and species. Fossil evidence points to a minimum age of genera of 5-60 Ma. Lineages of intermediate evolutionary derivation (Ulvophyceae, Chlorophyceae, Charophyceae) can be expected to encompass genera and species of intermediate age. Fossil and (limited) molecular evidence point to a minimum age of 230-70 Ma for extant genera in Bryopsidales, Dasycladales and Cladophorales (Ulvophyceae) and of 250-80 Ma for extant genera in Charales (Charophyceae).
Jiang, Zheyu; Ramapriya, Gautham Madenoor; Tawarmalani, Mohit; ...
2018-04-20
Heat and mass integration to consolidate distillation columns in a multicomponent distillation configuration can lead to a number of new energy-efficient and cost-effective configurations. In this paper, we identify a powerful and simple-to-use fact about heat and mass integration. The newly developed heat and mass integrated configurations, which we call HMP configurations, involve first introducing thermal couplings to all intermediate transfer streams, followed by consolidating columns associated with a lighter pure product reboiler and a heavier pure product condenser. A systematic method of enumerating all HMP configurations is introduced. We compare the energy savings of HMP configurations with the well-known fully thermally coupled (FTC) configurations. We demonstrate that HMP configurations can have very similar and sometimes even the same minimum total vapor duty requirement as the FTC configuration, while using far fewer column sections, intermediate transfer streams, and thermal couplings than the FTC configurations.
NASA Technical Reports Server (NTRS)
Hoyt, Douglas V.; Schatten, Kenneth H.; Nesmes-Ribes, Elizabeth
1994-01-01
In the one hundred years since Wolf died, little effort has gone into research to see if improved reconstructions of sunspot numbers can be made. We have gathered more than 349,000 observations of daily sunspot group counts from more than 350 observers active from 1610 to 1993. Based upon group counts alone, it is possible to make an objective and homogeneous reconstruction of sunspot numbers. From our study, it appears that the Sun has steadily increased in activity since 1700 with the exception of a brief decrease in the Dalton Minimum (1795-1823). The significant results here are the greater depth of the Dalton Minimum, the generally lower activity throughout the 1700's, and the gradual rise in activity from the Maunder Minimum to the present day. This solar activity reconstruction is quite similar to those Wolf published before 1868 rather than the revised Wolf reconstructions after 1873 which used geomagnetic fluctuations.
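Reconstructions of this kind rest on the group sunspot number, commonly quoted for Hoyt-Schatten-style work as R_G = (12.08/N) * Σ k_i G_i over N observers, where G_i is observer i's daily group count and k_i an observer correction factor. The observer data below are hypothetical, and the formula is given in its commonly quoted form rather than taken from this paper.

```python
# Sketch of the group sunspot number: the 12.08 normalization is the
# conventional factor tying group counts to the Wolf number scale.
# The counts and correction factors below are made-up examples.

def group_sunspot_number(counts, factors):
    """counts: group counts G_i per observer; factors: corrections k_i."""
    n = len(counts)
    return 12.08 / n * sum(k * g for g, k in zip(counts, factors))

# Three hypothetical observers reporting 4, 5 and 3 groups on one day
rg = group_sunspot_number([4, 5, 3], [1.0, 0.9, 1.1])
```

Averaging such daily values over months and years yields the homogeneous activity series described in the abstract.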
Minimum triplet covers of binary phylogenetic X-trees.
Huber, K T; Moulton, V; Steel, M
2017-12-01
Trees with labelled leaves and with all other vertices of degree three play an important role in systematic biology and other areas of classification. A classical combinatorial result ensures that such trees can be uniquely reconstructed from the distances between the leaves (when the edges are given any strictly positive lengths). Moreover, a linear number of these pairwise distance values suffices to determine both the tree and its edge lengths. A natural set of pairs of leaves is provided by any 'triplet cover' of the tree (based on the fact that each non-leaf vertex is the median vertex of three leaves). In this paper we describe a number of new results concerning triplet covers of minimum size. In particular, we characterize such covers in terms of an associated graph being a 2-tree. Also, we show that minimum triplet covers are 'shellable' and thereby provide a set of pairs for which the inter-leaf distance values will uniquely determine the underlying tree and its associated branch lengths.
Trade-offs between driving nodes and time-to-control in complex networks
Pequito, Sérgio; Preciado, Victor M.; Barabási, Albert-László; Pappas, George J.
2017-01-01
Recent advances in control theory provide us with efficient tools to determine the minimum number of driving (or driven) nodes to steer a complex network towards a desired state. Furthermore, we often need to do it within a given time window, so it is of practical importance to understand the trade-offs between the minimum number of driving/driven nodes and the minimum time required to reach a desired state. Therefore, we introduce the notion of actuation spectrum to capture such trade-offs, which we used to find that in many complex networks only a small fraction of driving (or driven) nodes is required to steer the network to a desired state within a relatively small time window. Furthermore, our empirical studies reveal that, even though synthetic network models are designed to present structural properties similar to those observed in real networks, their actuation spectra can be dramatically different. Thus, it supports the need to develop new synthetic network models able to replicate controllability properties of real-world networks. PMID:28054597
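For context, the best-known way to compute a minimum driver-node set is the maximum-matching condition of structural controllability (Liu, Slotine and Barabási, 2011): N_D = max(N - |M|, 1), where M is a maximum matching of the bipartite graph pairing each node's "out" copy with its successors' "in" copies. The sketch below implements that classic result, not the actuation-spectrum construction introduced in this paper.

```python
# Minimum driver nodes via maximum bipartite matching (Kuhn's
# augmenting-path algorithm); nodes are 0..n-1, edges are directed.

def max_matching(n, edges):
    succ = [[] for _ in range(n)]
    for u, v in edges:
        succ[u].append(v)
    match_in = [None] * n  # match_in[v] = u means edge u->v is matched

    def augment(u, seen):
        for v in succ[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_in[v] is None or augment(match_in[v], seen):
                match_in[v] = u
                return True
        return False

    return sum(1 for u in range(n) if augment(u, set()))

def min_driver_nodes(n, edges):
    return max(n - max_matching(n, edges), 1)

# A directed path 0 -> 1 -> 2 -> 3 needs a single driver node.
nd_path = min_driver_nodes(4, [(0, 1), (1, 2), (2, 3)])
# A star 0 -> {1, 2, 3} needs three: the hub can drive only one leaf
# independently at a time.
nd_star = min_driver_nodes(4, [(0, 1), (0, 2), (0, 3)])
```

The actuation spectrum of the paper then asks how this count grows as the allowed time-to-control shrinks.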
The need for leadership training in long-term care settings.
Davis, Jullet A
2016-10-03
Purpose Globally, in 1980, approximately 5.8 per cent of the world population was 65 years old and older. By 2050, this number will nearly triple to 16 per cent. From a leadership perspective, there is at least one challenge (among many other challenges) to consider. This paper (viewpoint) aims to provide support for the growing need for academically prepared managers. Design/methodology/approach This paper is a viewpoint which presents several characteristics of the long-term care (LTC) field that support the need for academically trained leaders. Findings LTC leaders in all countries must be sufficiently versed in numerous management areas to provide leadership when called on by those assigned to their care. Given local area variations in population needs present across all countries, it may be unwise to advocate for national, countrywide standardization of requirements. Yet, older adults accessing LTC services should expect a minimum level of knowledge from all of their providers - not just those who provide direct, hands-on care. However, similar to those who provide direct care, leaders should receive competency-based education with specific attention to effective communication skills, team-based approaches to care delivery, information technologies and population health. Originality/value Although much of the extant literature focuses on the delivery of care to older persons, there is a dearth of literature addressing the role of LTC leaders in light of global aging. Establishing a minimum level of academic training and increasing transparency focused on the positive experiences of elders residing in LTC facilities should help dispel the notion that placement in an LTC facility reflects filial failure.
NASA Astrophysics Data System (ADS)
Ozheredov, V. A.; Breus, T. K.; Obridko, V. N.
2012-12-01
As follows from the statement of the Third Official Solar Cycle 24 Prediction Panel created by the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), and the International Space Environment Service (ISES) based on the results of an analysis of many solar cycle 24 predictions, there has been no consensus on the amplitude and time of the maximum. There are two different scenarios: 90 units and August 2012 or 140 units and October 2011. The aim of our study is to revise the solar cycle 24 predictions by a comparative analysis of data obtained by three different methods: the singular spectral method, the nonlinear neural-based method, and the precursor method. As a precursor for solar cycle 24, we used the dynamics of the solar magnetic fields forming solar spots with Wolf numbers Rz. According to the prediction on the basis of the neural-based approach, it was established that the maximum of solar cycle 24 is expected to be 70. The precursor method predicted 50 units for the amplitude and April of 2012 for the time of the maximum. In view of the fact that the data used in the precursor method were averaged over 4.4 years, the amplitude of the maximum can be 20-30% larger (i.e., around 60-70 units), which is close to the values predicted by the neural-based method. The protracted minimum of solar cycle 23 and predicted low values of the maximum of solar cycle 24 are reminiscent of the historical Dalton minimum.
State-Level Community Benefit Regulation and Nonprofit Hospitals' Provision of Community Benefits.
Singh, Simone R; Young, Gary J; Loomer, Lacey; Madison, Kristin
2018-04-01
Do nonprofit hospitals provide enough community benefits to justify their tax exemptions? States have sought to enhance nonprofit hospitals' accountability and oversight through regulation, including requirements to report community benefits, conduct community health needs assessments, provide minimum levels of community benefits, and adhere to minimum income eligibility standards for charity care. However, little research has assessed these regulations' impact on community benefits. Using 2009-11 Internal Revenue Service data on community benefit spending for more than eighteen hundred hospitals and the Hilltop Institute's data on community benefit regulation, we investigated the relationship between these four types of regulation and the level and types of hospital-provided community benefits. Our multivariate regression analyses showed that only community health needs assessments were consistently associated with greater community benefit spending. The results for reporting and minimum spending requirements were mixed, while minimum income eligibility standards for charity care were unrelated to community benefit spending. State adoption of multiple types of regulation was consistently associated with higher levels of hospital-provided community benefits, possibly because regulatory intensity conveys a strong signal to the hospital community that more spending is expected. This study can inform efforts to design regulations that will encourage hospitals to provide community benefits consistent with policy makers' goals. Copyright © 2018 by Duke University Press.
75 FR 21955 - Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-26
... only the minimum required burdens on credit unions, consumers, and the public; are appropriate for the... Administration--Completed Actions Regulation Sequence Title Identifier Number Number 412 Privacy of Consumer... Actions 412. PRIVACY OF CONSUMER FINANCIAL INFORMATION Legal Authority: 15 USC 6801 et seq Abstract: NCUA...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-08
... minimum of one individual was removed from an unknown location in Arkansas. The bone is perforated at the... as part of the Howe Collection (Catalog number A234). The bone was subsequently assigned Index number...
MODULATION OF GALACTIC COSMIC RAYS OBSERVED AT L1 IN SOLAR CYCLE 23
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fludra, A., E-mail: Andrzej.Fludra@stfc.ac.uk
2015-01-20
We analyze a unique 15 yr record of galactic cosmic-ray (GCR) measurements made by the SOHO Coronal Diagnostic Spectrometer NIS detectors, recording integrated GCR numbers with energies above 1.0 GeV between 1996 July and 2011 June. We are able to closely reproduce the main features of the SOHO/CDS GCR record using the modulation potential calculated from neutron monitor data by Usoskin et al. The GCR numbers show a clear solar cycle modulation: they decrease by 50% from the 1997 minimum to the 2000 maximum of the solar cycle, then return to the 1997 level in 2007 and continue to rise, in 2009 December reaching a level 25% higher than in 1997. This 25% increase is in contrast with the behavior of Ulysses/KET GCR protons extrapolated to 1 AU in the ecliptic plane, showing the same level in 2008-2009 as in 1997. The GCR numbers are inversely correlated with the tilt angle of the heliospheric current sheet. In particular, the continued increase of SOHO/CDS GCRs from 2007 until 2009 is correlated with the decrease of the minimum tilt angle from 30° in mid-2008 to 5° in late 2009. The GCR level then drops sharply from 2010 January, again consistent with a rapid increase of the tilt angle to over 35°. This shows that the extended 2008 solar minimum was different from the 1997 minimum in terms of the structure of the heliospheric current sheet.
ERIC Educational Resources Information Center
Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene
2009-01-01
An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…
EPA has initiated a process to revise certain requirements in the WPS. By the end of FY2018, EPA expects to publish a Notice of Proposed Rulemaking to solicit public input on proposed revisions to the WPS requirements for minimum ages, designated represen
Minimum Expected Risk Estimation for Near-neighbor Classification
2006-04-01
We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN)...estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant...the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-04
... part 51, and Sec. 97.20 of Title 14 of the Code of Federal Regulations. The large number of SIAPs... the airport and its location, the procedure and the amendment number. The Rule This amendment to 14... this amendment will not have a significant economic impact on a substantial number of small entities...
NASA Astrophysics Data System (ADS)
Singh, Priya; Sarkar, Subir K.; Bandyopadhyay, Pradipta
2014-07-01
We present the results of a high-statistics equilibrium study of the folding/unfolding transition for the 20-residue mini-protein Trp-cage (TC5b) in water. The ECEPP/3 force field is used and the interaction with water is treated by a solvent-accessible surface area method. A Wang-Landau type simulation is used to calculate the density of states and the conditional probabilities for the various values of the radius of gyration and the number of native contacts at fixed values of energy—along with a systematic check on their convergence. All thermodynamic quantities of interest are calculated from this information. The folding-unfolding transition corresponds to a peak in the temperature dependence of the computed specific heat. This is corroborated further by the structural signatures of folding in the distributions for radius of gyration and the number of native contacts as a function of temperature. The potentials of mean force are also calculated for these variables, both separately and jointly. A local free energy minimum, in addition to the global minimum, is found in a temperature range substantially below the folding temperature. The free energy at this second minimum is approximately 5 kBT higher than the value at the global minimum.
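The Wang-Landau idea used above can be illustrated on a toy system where the exact density of states is known: n independent two-state units with energy equal to the number of "up" units, so g(E) is binomial. This is only a sketch of the flat-histogram random walk (with a simplified modification-factor schedule and no flatness check), not the paper's ECEPP/3 protein simulation.

```python
import random
import math

def wang_landau(n=10, stages=18, steps_per_stage=20000, seed=1):
    """Wang-Landau estimate of log g(E) for n independent two-state units,
    E = number of up units (exact answer: g(E) = C(n, E))."""
    rng = random.Random(seed)
    spins = [0] * n
    E = 0
    log_g = [0.0] * (n + 1)   # running estimate of log density of states
    ln_f = 1.0                # modification factor, halved each stage
    for _ in range(stages):
        for _ in range(steps_per_stage):
            i = rng.randrange(n)
            E_new = E + (1 if spins[i] == 0 else -1)
            # accept the flip with probability min(1, g(E)/g(E_new))
            if math.log(rng.random()) < log_g[E] - log_g[E_new]:
                spins[i] ^= 1
                E = E_new
            log_g[E] += ln_f   # penalize the level we are currently on
        ln_f /= 2.0            # simplified schedule (no flatness criterion)
    return log_g
```

Only differences log_g[E] - log_g[E'] are meaningful, since the overall normalization of g is arbitrary.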
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, the strip exchange. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a permutation using a minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
Schmidt-number witnesses and bound entanglement
NASA Astrophysics Data System (ADS)
Sanpera, Anna; Bruß, Dagmar; Lewenstein, Maciej
2001-05-01
The Schmidt number of a mixed state characterizes the minimum Schmidt rank of the pure states needed to construct it. We investigate the Schmidt number of an arbitrary mixed state by studying Schmidt-number witnesses that detect it. We present a canonical form of such witnesses and provide constructive methods for their optimization. Finally, we present strong evidence that all bound entangled states with positive partial transpose in C3⊗C3 have Schmidt number 2.
National Credit Union Administration Semiannual Regulatory Agenda
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-26
... only the minimum required burdens on credit unions, consumers, and the public; are appropriate for the... Administration--Completed Actions Regulation Sequence Title Identifier Number Number 412 Privacy of Consumer... Actions 412. PRIVACY OF CONSUMER FINANCIAL INFORMATION Legal Authority: 15 USC 6801 et seq Abstract: NCUA...
Yilmaz, Sevgi; Toy, Süleyman; Demircioglu Yildiz, Nalan; Yilmaz, Hasan
2009-01-01
The main purpose of this study was to determine the effects of population growth, along with increases in urbanisation, motor vehicle use and green area amount, on temperature values, using a 55-year data set for Erzurum, a barely industrialised city and one of the coldest and highest-elevation cities in Turkey. Although the semi-decadal increases, whose means are 0.1 degrees C for mean, minimum and maximum temperatures, are not clear enough to support a strong conclusion even in the light of the figures and tables, statistical analysis showed that population growth and increases in the number of vehicles, the number of buildings and the green area amount in the city have no significant effect on mean temperatures. However, the relationships between population growth and maximum temperature, and between the number of vehicles and minimum temperature, were found to be statistically significant.
A case at last for age-phased reduction in equity.
Samuelson, P A
1989-01-01
Maximizing expected utility over a lifetime leads one who has constant relative risk aversion and faces random-walk securities returns to be "myopic" and hold the same fraction of portfolio in equities early and late in life--a defiance of folk wisdom and casual introspection. By assuming one needs to assure at retirement a minimum ("subsistence") level of wealth, the present analysis deduces a pattern of greater risk-taking when young than when old. When a subsistence minimum is needed at every period of life, the rentier paradoxically is least risk tolerant in youth--the Robert C. Merton paradox that traces to the decline with age of the present discounted value of the subsistence-consumption requirements. Conversely, the decline with age of capitalized human capital reverses the Merton effect. PMID:2813438
The roles of the trading time risks on stock investment return and risks in stock price crashes
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Dong, Zhi-Wei; Yang, Guo-Hui; Long, Chao
2017-03-01
The roles of the trading time risks (TTRs) on stock investment return and risks are investigated under the condition of stock price crashes, with Hushen300 (CSI300) and Dow Jones Industrial Average (^DJI) data, respectively. In order to describe the TTR, we employ the escape time over which the stock price drops from the maximum to the minimum value within a data window length (DWL). After theoretical and empirical research on the probability density function of return, the results for both ^DJI and CSI300 indicate that: (i) as DWL increases, the expected return and its stability are weakened; (ii) an optimal TTR is related to a maximum return and minimum risk of stock investment in stock price crashes.
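A rough reading of the escape-time construction above can be sketched as counting the steps from the window maximum to the subsequent minimum. This is an assumed simplification for illustration; the paper's exact definition may differ.

```python
import numpy as np

def escape_time(prices, start, dwl):
    """Trading-time-risk proxy: number of steps from the maximum price to
    the subsequent minimum inside one data window of length dwl."""
    w = np.asarray(prices[start:start + dwl], dtype=float)
    i_max = int(np.argmax(w))                   # position of the window peak
    i_min = i_max + int(np.argmin(w[i_max:]))   # first minimum after the peak
    return i_min - i_max
```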
Michael, Andrew J.; Ross, Stephanie L.; Stenner, Heidi D.
2002-01-01
The paucity of strong-motion stations near the 1999 Hector Mine earthquake makes it impossible to conduct instrumental studies of key questions about near-fault strong-motion patterns associated with this event. However, observations of displaced rocks allow a qualitative investigation of these problems. By observing the slope of the desert surface and the frictional coefficient between these rocks and the desert surface, we estimate the minimum horizontal acceleration needed to displace the rocks. Combining this information with observations of how many rocks were displaced in different areas near the fault, we infer the level of shaking. Given current empirical shaking attenuation relationships, the number of rocks that moved is slightly lower than expected; this implies that slightly lower than expected shaking occurred during the Hector Mine earthquake. Perhaps more importantly, stretches of the fault with 4 m of total displacement at the surface displaced few nearby rocks on 15° slopes, suggesting that the horizontal accelerations were below 0.2g within meters of the fault scarp. This low level of shaking suggests that the shallow parts of this rupture did not produce strong accelerations. Finally, we did not observe an increased incidence of displaced rocks along the fault zone itself. This suggests that, despite observations of fault-zone-trapped waves generated by aftershocks of the Hector Mine earthquake, such waves were not an important factor in controlling peak ground acceleration during the mainshock.
The DEEP-South: Scheduling and Data Reduction Software System
NASA Astrophysics Data System (ADS)
Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team
2015-08-01
The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective of the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction and analysis of huge amounts of data with minimum human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a Database Management System (DBMS). The LDR is designed to detect moving objects in CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analyses made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.
Minimizing inappropriate medications in older populations: a 10-step conceptual framework.
Scott, Ian A; Gray, Leonard C; Martin, Jennifer H; Mitchell, Charles A
2012-06-01
The increasing burden of harm resulting from the use of multiple drugs in older patient populations represents a major health problem in developed countries. Approximately 1 in 4 older patients admitted to hospitals are prescribed at least 1 inappropriate medication, and up to 20% of all inpatient deaths are attributable to potentially preventable adverse drug reactions. To minimize this drug-related iatrogenesis, we propose a quality use of medicine framework that comprises 10 sequential steps: 1) ascertain all current medications; 2) identify patients at high risk of or experiencing adverse drug reactions; 3) estimate life expectancy in high-risk patients; 4) define overall care goals in the context of life expectancy; 5) define and confirm current indications for ongoing treatment; 6) determine the time until benefit for disease-modifying medications; 7) estimate the magnitude of benefit versus harm in relation to each medication; 8) review the relative utility of different drugs; 9) identify drugs that may be discontinued; and 10) implement and monitor a drug minimization plan with ongoing reappraisal of drug utility and patient adherence by a single nominated clinician. The framework aims to reduce drug use in older patients to the minimum number of essential drugs, and its utility is demonstrated in reference to a hypothetic case study. Further studies are warranted in validating this framework as a means for assisting clinicians to make more appropriate prescribing decisions in at-risk older patients. Copyright © 2012 Elsevier Inc. All rights reserved.
Incidence of laryngeal cancer and exposure to acid mists.
Steenland, K; Schnorr, T; Beaumont, J; Halperin, W; Bloom, T
1988-01-01
To determine the relation between exposure to acid mist and laryngeal cancer, the smoking habits, drinking habits, and incidence of laryngeal cancer of 879 male steelworkers exposed to acid mists during pickling operations were ascertained. Sulphuric acid mist was the primary exposure for most men in this cohort. These men had all worked in a pickling operation for a minimum of six months before 1965, with an average duration of exposure of 9.5 years. Exposures to sulphuric acid in the 1970s averaged about 0.2 mg/m3, and earlier exposures were probably similar. Interviews were conducted with all cohort members or their next of kin in 1986, and medical records of decedents were reviewed. Nine workers were identified who had been diagnosed as having laryngeal cancer, using a conservative case definition that required medical record confirmation for any case among decedents and confirmation by a physician for any case among live individuals. Using data from national surveys of cancer incidence as referent rates, 3.44 laryngeal cancers would have been expected. Excess smoking by the exposed cohort compared with the United States population resulted in an upward adjustment of the expected number of cases of laryngeal cancer to 3.92. The standardised incidence rate ratio for laryngeal cancer was 2.30 (9/3.92), with a one sided p value of 0.01 (assuming a Poisson distribution). The finding of excess laryngeal cancer in this cohort is consistent with four other studies published since 1981. PMID:3203082
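The quoted rate ratio and its one-sided significance under the stated Poisson assumption can be reproduced with a short calculation. This is a sketch; the study's exact computational method is not given in the abstract, so only the ratio and the order of magnitude of the p-value are checked.

```python
from math import exp, factorial

def poisson_sf(k_obs, lam):
    """One-sided p-value P(X >= k_obs) for X ~ Poisson(lam)."""
    cdf = sum(exp(-lam) * lam**k / factorial(k) for k in range(k_obs))
    return 1.0 - cdf

observed, expected = 9, 3.92       # cases vs smoking-adjusted expectation
sir = observed / expected          # standardised incidence ratio, ~2.30
p = poisson_sf(observed, expected) # small one-sided tail probability
```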
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Sunarsih; Kartono
2018-01-01
In this paper, a mathematical model in quadratic programming with fuzzy parameters is proposed to determine the optimal strategy for an integrated inventory control and supplier selection problem with fuzzy demand. To solve the corresponding optimization problem, we use expected-value-based fuzzy programming. Numerical examples are performed to evaluate the model. From the results, the optimal amount of each product to be purchased from each supplier in each time period and the optimal amount of each product to be stored in the inventory in each time period were determined with minimum total cost, and the inventory level was sufficiently close to the reference level.
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Mcdonald, G.
1982-01-01
An analysis of thermal cycle life data for four sets of eight thermal barrier coated specimens representing arc currents (plasma gun power) of 525, 600, 800, or 950 amps is presented. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were thermally cycled to 1040 C in a Mach 0.3 Jet A/air burner flame. The experimental results indicate the existence of a minimum or threshold power level below which coating life expectancy is less than 500 cycles. Above the threshold power level, coating life expectancy more than doubles and increases with arc current.
Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita
2014-06-19
Traditional portfolio optimization methods in the likes of Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality, because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
NASA Astrophysics Data System (ADS)
Hendricks, R. C.; McDonald, G.
1982-02-01
An analysis of thermal cycle life data for four sets of eight thermal barrier coated specimens representing arc currents (plasma gun power) of 525, 600, 800, or 950 amps is presented. The ZrO2-8Y2O3/NiCrAlY plasma spray coated Rene 41 rods were thermally cycled to 1040 C in a Mach 0.3 Jet A/air burner flame. The experimental results indicate the existence of a minimum or threshold power level below which coating life expectancy is less than 500 cycles. Above the threshold power level, coating life expectancy more than doubles and increases with arc current.
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest sites (the minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km2. On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
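Since the abstract notes that RWR can be implemented easily in R or a spreadsheet, an equally small Python sketch is possible: weight each species by the reciprocal of the number of sites it occupies, score each site by the sum of the weights of its species, and rank. The greedy coverage check for the minimum-set reading is an illustrative addition, and all data here are made up.

```python
import numpy as np

def rwr_ranking(occ):
    """occ: binary site-by-species matrix. Returns site indices ranked by
    rarity-weighted richness (RWR), highest score first."""
    rarity = 1.0 / occ.sum(axis=0)   # one weight per species: 1 / occupancy
    scores = occ @ rarity            # RWR score per site
    return np.argsort(-scores)

def sites_to_cover_all(occ, order):
    """Walk the ranked list until every species is represented at least once."""
    covered = np.zeros(occ.shape[1], dtype=bool)
    for n, s in enumerate(order, start=1):
        covered |= occ[s].astype(bool)
        if covered.all():
            return n
    return None
```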
Wells, Brooke E; Starks, Tyrel J; Parsons, Jeffrey T; Golub, Sarit
2013-01-01
As the mechanisms of the associations between substance use and risky sex remain unclear, this study investigates the interactive roles of conflicts about casual sex and condom use and expectancies of the sexual effects of substances in those associations among gay men. Conflict interacted with expectancies to predict sexual behavior under the influence; low casual sex conflict coupled with high expectancies predicted the highest number of casual partners, and high condom use conflict and high expectancies predicted the highest number of unprotected sex acts. Results have implications for intervention efforts that aim to improve sexual decision-making and reduce sexual expectancies. PMID:23584507
NASA Technical Reports Server (NTRS)
Liu, David (Donghang)
2011-01-01
This paper reports reliability evaluation of BME ceramic capacitors for possible high reliability space-level applications. The study is focused on the construction and microstructure of BME capacitors and their impacts on capacitor life reliability. First, examinations of the construction and microstructure of commercial-off-the-shelf (COTS) BME capacitors show great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and approximately 0.5 micrometers, which is much less than that of most PME capacitors. The primary reason that a BME capacitor can be fabricated with more internal electrode layers and thinner dielectric layers is that it has a fine-grained microstructure and does not shrink much during ceramic sintering. This gives BME capacitors a very high volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT) and regular life testing as per MIL-PRF-123. Most BME capacitors were found to fail with an early dielectric wearout, followed by a rapid wearout failure mode during the HALT test. When most of the early wearout failures were removed, BME capacitors exhibited a minimum mean time-to-failure of more than 10(exp 5) years. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically between 10 and 20.
This may suggest that the number of grains per dielectric layer is more critical than the thickness itself for determining the rated voltage and the life expectancy of the BME capacitor. Since BME capacitors have a much smaller grain size than PME capacitors, it is reasonable to predict that BME capacitors with thinner dielectric layers may have an equivalent life expectancy to that of PME capacitors with thicker dielectric layers.
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.
Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei
2017-01-20
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic that is of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation from the minimum entropy to the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and can achieve high accuracy and efficiency.
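The 2D distribution entropy ingredient can be sketched as the Shannon entropy of an occupancy histogram of projected points. This is a minimal illustration only; the IME search itself, and the exact binning used in the paper, are not specified here and are assumed.

```python
import numpy as np

def distribution_entropy_2d(points, bins=16):
    """Shannon entropy (bits) of the 2D occupancy histogram of an (N, 2)
    point set. Intuition: a tightly clustered or well-aligned distribution
    concentrates mass in few cells and yields low entropy."""
    h, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    p = h.ravel() / h.sum()        # normalize cell counts to probabilities
    p = p[p > 0]                   # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())
```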
Morphology, nurse plants, and minimum apical temperatures for young Carnegiea gigantea
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobel, P.S.
1980-06-01
The northern limit of Carnegiea gigantea (Engelm.) Britton and Rose apparently depends on minimum apical temperatures. Diameters, apical spine coverage, and effects of nurse plants on incoming long-wave (infrared (ir)) radiation, all of which affect apical temperatures, were therefore determined for stems of C. gigantea up to 4 m tall at four sites along a north-south transect in Arizona. A simulation model indicated that the increase in diameter accompanying stem growth raised the minimum apical temperature more than 3 C. Thus, plants with the shortest stems would be expected to be the most vulnerable to freezing damage; indeed, freezing damage on stems <0.5 m tall without nurse plants was fairly common at the colder sites. Nurse plants obstructed a greater portion of the sky for C. gigantea at the colder sites; e.g., the effective environmental temperature for ir radiation at such locations was raised more than 10 C for stems under 1 m tall. If the northern limit of C. gigantea reflects wintertime survival of juveniles, nurse plants could extend the range by offering some protection against freezing.
Galactic Cosmic Ray Intensity in the Upcoming Minimum of the Solar Activity Cycle
NASA Astrophysics Data System (ADS)
Krainev, M. B.; Bazilevskaya, G. A.; Kalinin, M. S.; Svirzhevskaya, A. K.; Svirzhevskii, N. S.
2018-03-01
During the prolonged and deep minimum of solar activity between cycles 23 and 24, an unusual behavior of the heliospheric characteristics and increased intensity of galactic cosmic rays (GCRs) near the Earth's orbit were observed. The maximum of the current solar cycle 24 is lower than the previous one, and the decline in solar and, therefore, heliospheric activity is expected to continue in the next cycle. In these conditions, it is important, both for an understanding of the process of GCR modulation in the heliosphere and for applied purposes (evaluation of the radiation safety of planned space flights, etc.), to estimate quantitatively the possible GCR characteristics near the Earth in the upcoming solar minimum (2019-2020). Our estimation is based on the prediction of the heliospheric characteristics that are important for cosmic ray modulation, as well as on numerical calculations of GCR intensity. Additionally, we consider the distribution of the intensity and other GCR characteristics in the heliosphere and discuss the intercycle variations in the GCR characteristics that are integral for the whole heliosphere (total energy, mean energy, and charge).
Simons, Kelsey; Connolly, Robert P; Bonifas, Robin; Allen, Priscilla D; Bailey, Kathleen; Downes, Deirdre; Galambos, Colleen
2012-02-01
The Minimum Data Set 3.0 has introduced a higher set of expectations for assessment of residents' psychosocial needs, including new interviewing requirements, new measures of depression and resident choice, and new discharge screening procedures. Social service staff are primary providers of psychosocial assessment and care in nursing homes; yet, research demonstrates that many do not possess the minimum qualifications, as specified in federal regulations, to effectively provide these services given the clinical complexity of this client population. Likewise, social service caseloads generally exceed manageable levels. This article addresses the need for enhanced training and support of social service and interdisciplinary staff in long term care facilities in light of the new Minimum Data Set 3.0 assessment procedures as well as new survey and certification guidelines emphasizing quality of life. A set of recommendations will be made with regard to training, appropriate role functions within the context of interdisciplinary care, and needs for more realistic staffing ratios. Copyright © 2012 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.
Linear Mode HgCdTe Avalanche Photodiodes for Photon Counting Applications
NASA Technical Reports Server (NTRS)
Sullivan, William, III; Beck, Jeffrey; Scritchfield, Richard; Skokan, Mark; Mitra, Pradip; Sun, Xiaoli; Abshire, James; Carpenter, Darren; Lane, Barry
2015-01-01
An overview of recent improvements in the understanding and maturity of linear mode photon counting with HgCdTe electron-initiated avalanche photodiodes is presented. The first HgCdTe LMPC 2x8 format array, fabricated in 2011 with 64 micron pitch, was a remarkable success in terms of demonstrating a high single photon signal to noise ratio of 13.7 with an excess noise factor of 1.3-1.4, a 7 ns minimum time between events, and a broad spectral response extending from 0.4 micron to 4.2 micron. The main limitations were a false event rate greater than 1 MHz (more than 10x higher than expected), an APD gain 5-7x lower than expected, and a photon detection efficiency of only 50% when greater than 60% was expected. This paper discusses the reasons behind these limitations and the implementation of their mitigations, with new results.
A Conceptual Model of Military Recruitment
2009-10-01
Hiring Expectancies – Expectancy (VIE) Theory (Vroom, 1996) states individuals choose among a set of employment alternatives on the basis of the
Simulation of Collision of Arbitrary Shape Particles with Wall in a Viscous Fluid
NASA Astrophysics Data System (ADS)
Mohaghegh, Fazlolah; Udaykumar, H. S.
2016-11-01
Collision of finite-size arbitrary-shape particles with a wall in a viscous flow is modeled using an immersed boundary method. A potential function indicating the distance from the interface is introduced for the particles and the wall. The potential can be defined using either an analytical expression or the level set method. The collision starts when the indicator potentials of the particle and wall overlap beyond a minimum cutoff. A simplified mass-spring model is used to apply the collision forces. Instead of using a dashpot to damp the energy, the spring stiffness is adjusted during the bounce. The results for the case of collision of a falling sphere with the bottom wall agree well with experiments. Moreover, it is shown that the results are independent of the minimum collision cutoff distance value. Finally, when the particle's shape is ellipsoidal, the rotation of the particle after the collision becomes important and noticeable: at low Stokes number values, the particle almost adheres to the wall on one side and rotates until it reaches the minimum gravitational potential. At high Stokes numbers, the particle bounces and loses energy until it reaches a situation with a low Stokes number.
Bit error rate tester using fast parallel generation of linear recurring sequences
Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.
2003-05-06
A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high-speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
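The companion-matrix decimation can be illustrated on a toy 4-bit LFSR: one matrix-vector multiply over GF(2) advances the state one step, so the matrix raised to the m-th power advances it m steps at once. The polynomial x^4 + x + 1 and the sizes here are assumed for illustration; the patent does not fix them.

```python
import numpy as np

TAPS = [0, 1]   # feedback taps for x^4 + x + 1: new bit = s0 XOR s1 (assumed)
N = 4

def step(state):
    """One shift step of a Fibonacci LFSR over GF(2): drop s0, append feedback."""
    fb = 0
    for t in TAPS:
        fb ^= state[t]
    return state[1:] + [fb]

def companion():
    """Companion matrix C such that (C @ state) mod 2 == step(state)."""
    C = np.zeros((N, N), dtype=np.uint8)
    for i in range(N - 1):
        C[i, i + 1] = 1          # shift rows: new s_i = old s_{i+1}
    for t in TAPS:
        C[N - 1, t] ^= 1         # feedback row collects the tap bits
    return C
```

In the patent's scheme each of the k parallel generators would apply the precomputed power C^(n*k) as an XOR network; here the same idea is checked numerically by comparing the matrix power against sequential stepping.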
European Scientific Notes. Volume 36, Number 4.
1982-03-30
... of gray cast iron, Mampaey demonstrated that the minimum riser dimensions needed to obtain ... He is now making a similar study of nodular cast iron ... through the nonprofit industrial association Fabrimetal (Fédération des entreprises de l'industrie ...) ... Recherches Scientifiques et Techniques de l'Industrie ...
Drop-Weight Impact Test on U-Shape Concrete Specimens with Statistical and Regression Analyses
Zhu, Xue-Chao; Zhu, Han; Li, Hao-Ran
2015-01-01
According to the principle and method of the drop-weight impact test, the impact resistance of concrete was measured using self-designed U-shape specimens and a newly designed drop-weight impact test apparatus. A series of drop-weight impact tests were carried out with four different masses of drop hammers (0.875, 0.8, 0.675 and 0.5 kg). The test results show that the impact resistance results fail to follow a normal distribution. As expected, U-shaped specimens can predetermine the location of the cracks very well, and it is also easy to record crack propagation during the test. The maximum coefficient of variation in this study is 31.2%, lower than the values obtained from the American Concrete Institute (ACI) impact tests in the literature. Regression analysis shows a good linear relationship between the first-crack and ultimate-failure impact resistance. It is suggested that a minimum number of specimens be tested to reliably measure the properties of the material, based on the observed levels of variation. PMID:28793540
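The coefficient-of-variation statistic quoted above is computed mechanically as the sample standard deviation over the mean. The data in the check below are made up for illustration; the paper's measurements are not reproduced here.

```python
import numpy as np

def coefficient_of_variation(x):
    """Coefficient of variation in percent: sample std (ddof=1) over mean."""
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()
```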
Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger
2013-08-21
A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters was used in the scheme; these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu positron emission tomography (PET) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of PET imaging agents.
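A linear QSPR of this kind reduces to an ordinary least-squares fit of logD against a small descriptor vector. The sketch below uses purely synthetic descriptors to illustrate the fit-then-predict workflow; it is not the authors' force-field-derived parameterization:

```python
import numpy as np

def fit_qspr(descriptors, logd):
    """Ordinary least-squares fit of a linear QSPR model,
    logD ~ w . x + b, from a training set of descriptor vectors."""
    X = np.hstack([descriptors, np.ones((len(descriptors), 1))])
    coef, *_ = np.linalg.lstsq(X, logd, rcond=None)
    return coef

def predict_qspr(coef, descriptors):
    """Predict logD for new compounds from the fitted coefficients."""
    X = np.hstack([descriptors, np.ones((len(descriptors), 1))])
    return X @ coef
```

In practice the model would be fitted on the training set of 24 compounds and then applied, unchanged, to the independent validation set, exactly as the train/validate split in the abstract describes.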
Eriksen, Marcus; Lebreton, Laurent C M; Carson, Henry S; Thiel, Martin; Moore, Charles J; Borerro, Jose C; Galgani, Francois; Ryan, Peter G; Reisser, Julia
2014-01-01
Plastic pollution is ubiquitous throughout the marine environment, yet estimates of the global abundance and weight of floating plastics have lacked data, particularly from the Southern Hemisphere and remote regions. Here we report an estimate of the total number of plastic particles and their weight floating in the world's oceans, based on 24 expeditions (2007-2013) across all five sub-tropical gyres, coastal Australia, the Bay of Bengal and the Mediterranean Sea, conducting surface net tows (N = 680) and visual survey transects of large plastic debris (N = 891). Using an oceanographic model of floating debris dispersal calibrated by our data, and correcting for wind-driven vertical mixing, we estimate a minimum of 5.25 trillion particles weighing 268,940 tons. When comparing the four size classes (two microplastic classes, <4.75 mm, and the meso- and macroplastic classes, >4.75 mm), a tremendous loss of microplastics is observed from the sea surface compared to expected rates of fragmentation, suggesting there are mechanisms at play that remove <4.75 mm plastic particles from the ocean surface.
Bioinformatic mining of EST-SSR loci in the Pacific oyster, Crassostrea gigas.
Wang, Y; Ren, R; Yu, Z
2008-06-01
A set of expressed sequence tag-simple sequence repeat (EST-SSR) markers of the Pacific oyster, Crassostrea gigas, was developed through bioinformatic mining of the GenBank public database. As of June 30, 2007, a total of 5132 EST sequences from GenBank were downloaded and screened for di-, tri- and tetra-nucleotide repeats, with criteria set at a minimum of 5, 4 and 4 repeats for the three categories of SSRs, respectively. Seventeen polymorphic microsatellite markers were characterized. Allele numbers ranged from 3 to 10, and the observed and expected heterozygosity values varied from 0.125 to 0.770 and from 0.113 to 0.732, respectively. Eleven loci were at Hardy-Weinberg equilibrium (HWE); the other six loci showed significant departure from HWE (P < 0.01), suggesting the possible presence of null alleles. A pairwise check of linkage disequilibrium (LD) indicated that 11 of 136 pairs of loci showed significant LD (P < 0.01), likely due to the departures from HWE at single markers. Cross-species amplification was examined for five other Crassostrea species and reasonable results were obtained, suggesting that these markers will be useful in oyster genetics.
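The repeat screen described above can be sketched as a regular-expression scan with the 5/4/4 minimum repeat counts from the abstract. This is a simplified illustration; production SSR-mining tools also handle mononucleotide, compound, and interrupted repeats:

```python
import re

# Minimum repeat counts for di-, tri- and tetra-nucleotide motifs,
# matching the 5/4/4 screening criteria described in the abstract.
MIN_REPEATS = {2: 5, 3: 4, 4: 4}

def find_ssrs(seq):
    """Scan a DNA sequence for perfect di-, tri- and tetra-nucleotide
    repeats meeting the minimum repeat-count criteria.
    Returns (motif, repeat_count, start_index) tuples."""
    hits = []
    for motif_len, min_n in MIN_REPEATS.items():
        # ([ACGT]{k})\1{m-1,} matches a motif of length k repeated >= m times
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_n - 1))
        for m in pattern.finditer(seq.upper()):
            hits.append((m.group(1), len(m.group(0)) // motif_len, m.start()))
    return hits
```

For example, `find_ssrs("TTTACACACACACACGGG")` reports the (AC)6 di-nucleotide repeat starting at position 3, which passes the five-repeat threshold, while shorter stretches are filtered out.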
Micrometeoroid and Orbital Debris Risk Assessment With Bumper 3
NASA Technical Reports Server (NTRS)
Hyde, J.; Bjorkman, M.; Christiansen, E.; Lear, D.
2017-01-01
The Bumper 3 computer code is the primary tool used by NASA for micrometeoroid and orbital debris (MMOD) risk analysis. Bumper 3 (and its predecessors) has been used to analyze a variety of manned and unmanned spacecraft. The code uses NASA's latest micrometeoroid (MEM-R2) and orbital debris (ORDEM 3.0) environment definition models and is updated frequently with ballistic limit equations that describe the hypervelocity impact performance of spacecraft materials. Bumper 3 uses these inputs, along with a finite element representation of the spacecraft geometry, to provide a deterministic calculation of the expected number of failures. The Bumper 3 software is configuration controlled by the NASA/JSC Hypervelocity Impact Technology (HVIT) Group. This paper demonstrates the MMOD risk assessment techniques used with Bumper 3 by NASA's HVIT Group. The Permanent Multipurpose Module (PMM) was added to the International Space Station in 2011. A Bumper 3 MMOD risk assessment of this module shows the techniques used to create the input model and assign the property IDs. The methodology used to optimize the MMOD shielding for minimum mass, while still meeting structural penetration requirements, is also demonstrated.
Performance Report: A timeline for the synchrotron calibration of AXAF
NASA Technical Reports Server (NTRS)
Tananbaum, H. D.; Graessle, D.
1994-01-01
Presented herein are the known elements of the timeline for synchrotron reflectance calibrations of HRMA witness samples (Section 2). In Section 3, lists of measurements to be done on each witness flat are developed. The elements are then arranged into timelines for the three beamlines we expect to employ in covering the full 50-12,000 eV energy range (Section 4). Although the required AXAF operational range is only 0.1-10 keV, we must calibrate the extent to which radiation just outside this band may contaminate our in-band response. In Section 5, we describe the working relationships which exist with each of the beamlines, and estimate the time available for AXAF measurements on each. From the timelines and the available time, we calculate the number of flats which could be measured in full detail over the duration of the program for each beamline. A suggestion is made regarding a minimum required baseline of witness flats from each element coating run or qualification run to be used in the calibration. We intend that this suggestion open discussion of the issue of witness flat deployment.
Performance measurements of hybrid PIN diode arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, J.G.; Arens, J.F.; Kramer, G.
We report on the successful effort to develop hybrid PIN diode arrays and to demonstrate their potential as components of vertex detectors. Hybrid pixel arrays have been fabricated by the Hughes Aircraft Co. by bump bonding readout chips developed by Hughes to an array of PIN diodes manufactured by Micron Semiconductor Inc. These hybrid pixel arrays were constructed in two configurations: one array format has 10 × 64 pixels, each 120 µm square, and the other has 256 × 256 pixels, each 30 µm square. In both cases, the thickness of the PIN diode layer is 300 µm. Measurements of detector performance show that excellent position resolution can be achieved by interpolation. By determining the centroid of the charge cloud, which spreads charge into a number of neighboring pixels, a spatial resolution of a few microns has been attained. The noise has been measured to be about 300 electrons (rms) at room temperature, as expected from kTC and dark-current considerations, yielding a signal-to-noise ratio of about 100 for minimum ionizing particles. 4 refs., 13 figs.
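The centroid interpolation mentioned above is simple to sketch: the hit position is estimated as the charge-weighted mean of the pixel centers in the cluster. The pixel pitch and charge values below are illustrative, not measurements from the paper:

```python
def centroid_position(charges, centers):
    """Charge-weighted centroid of a cluster of pixel signals.
    `charges` are the collected charges (arbitrary units) and
    `centers` the corresponding pixel-center coordinates (um).
    Because the charge cloud spreads over neighboring pixels,
    the weighted mean resolves position to a fraction of a pixel."""
    total = sum(charges)
    return sum(q * x for q, x in zip(charges, centers)) / total
```

For a 120 µm pitch with charge split 1:3 between two neighboring pixels centered at 0 and 120 µm, the estimate lands at 90 µm, well inside the pixel rather than at its center, which is how sub-pixel (few-micron) resolution is obtained.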
Modeling distortion of HIT by an Actuator Disk in a periodic domain
NASA Astrophysics Data System (ADS)
Ghate, Aditya; Ghaisas, Niranjan; Lele, Sanjiva
2017-11-01
We study the distortion of incompressible, homogeneous isotropic turbulence (HIT) by a dragging actuator disk with a fixed thrust coefficient (under the large Reynolds number limit), using Large Eddy Simulation (LES). The HIT inflow is tailored to ensure that the largest length scales in the flow are smaller than the actuator disk diameter in order to minimize the meandering of the turbulent wake and isolate the length scales that undergo distortion. The numerical scheme (Fourier collocation with dealiasing) and the SGS closure (anisotropic minimum dissipation model) are carefully selected to minimize numerical artifacts expected due to the inviscid assumption. The LES is used to characterize the following three properties of the flow: a) the distortion of HIT due to the expanding streamtube, resulting in strong anisotropy; b) the turbulent pressure modulation across the actuator disk; and c) the turbulent wake state. Finally, we attempt to model the initial distortion and the pressure modulation using a WKB variant of RDT solved numerically using a set of discrete Gabor modes. Funding provided by Precourt Institute for Energy at Stanford University.
Saheli, P T; Rowe, R K; Petersen, E J; O'Carroll, D M
2017-05-01
The new applications for carbon nanotubes (CNTs) in various fields, and consequently their greater production volume, have increased their potential release to the environment. Landfills are one of the major locations where carbon nanotubes are expected to be disposed of, and it is important to ensure that they can limit the release of CNTs. Diffusion of multiwall carbon nanotubes (MWCNTs) dispersed in an aqueous medium through a high-density polyethylene (HDPE) geomembrane (as a part of the landfill barrier system) was examined. Based on the laboratory tests, the permeation coefficient was estimated to be less than 5.1×10⁻¹⁵ m²/s. The potential performance of an HDPE geomembrane and geosynthetic clay liner (GCL) as parts of a composite liner in containing MWCNTs was modelled for six different scenarios. The results suggest that the low permeation coefficient of an HDPE geomembrane makes it an effective diffusive barrier for MWCNTs, and that by keeping geomembrane defects (e.g., number of holes and length of wrinkles) to a minimum during construction, a composite liner commonly used in municipal solid waste landfills will effectively contain MWCNTs.
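A permeation coefficient of this kind plugs directly into Fick's first law to give a steady-state flux estimate across the geomembrane. The thickness and concentration difference below are assumed round numbers for illustration, not values from the study:

```python
def steady_flux(permeation_m2_s, conc_diff_kg_m3, thickness_m):
    """Steady-state diffusive mass flux through a geomembrane,
    J = P * dc / L  (Fick's first law, with the permeation
    coefficient P = D * S lumping diffusion and partitioning).
    Returns flux in kg per m^2 per second."""
    return permeation_m2_s * conc_diff_kg_m3 / thickness_m

# Upper-bound permeation coefficient from the abstract, with an
# assumed 1.5 mm geomembrane and a hypothetical 1 kg/m^3 difference:
flux = steady_flux(5.1e-15, 1.0, 1.5e-3)
```

Even with this worst-case coefficient, the resulting flux is on the order of 10⁻¹² kg/m²/s, which is why the geomembrane acts as an effective diffusive barrier when construction defects are kept to a minimum.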
Overview and Evaluation of Bluetooth Low Energy: An Emerging Low-Power Wireless Technology
Gomez, Carles; Oller, Joaquim; Paradells, Josep
2012-01-01
Bluetooth Low Energy (BLE) is an emerging low-power wireless technology developed for short-range control and monitoring applications that is expected to be incorporated into billions of devices in the next few years. This paper describes the main features of BLE, explores its potential applications, and investigates the impact of various critical parameters on its performance. BLE represents a trade-off between energy consumption, latency, piconet size, and throughput that mainly depends on parameters such as connInterval and connSlaveLatency. According to theoretical results, the lifetime of a BLE device powered by a coin cell battery ranges between 2.0 days and 14.1 years. The number of simultaneous slaves per master ranges between 2 and 5,917. The minimum latency for a master to obtain a sensor reading is 676 μs, although simulation results show that, under high bit error rate, average latency increases by up to three orders of magnitude. The paper provides experimental results that complement the theoretical and simulation findings, and indicates implementation constraints that may reduce BLE performance.
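The strong dependence of device lifetime on connection parameters can be illustrated with a simple average-current energy model. The battery capacity, sleep current, and per-event charge below are assumed round numbers; the paper's analysis is more detailed and also accounts for connSlaveLatency and protocol overheads:

```python
def lifetime_days(battery_mah, sleep_ua, event_charge_uc, conn_interval_s):
    """Rough lifetime of a BLE slave on a coin cell battery.
    Average current (uA) = sleep current + charge per connection
    event (uC) divided by the connection interval (s), since
    1 uC/s = 1 uA. Lifetime = capacity / average current."""
    avg_ua = sleep_ua + event_charge_uc / conn_interval_s
    return (battery_mah * 1000.0) / avg_ua / 24.0

# Assumed example: 230 mAh CR2032, 1 uA sleep, 30 uC per event.
short_interval = lifetime_days(230.0, 1.0, 30.0, 1.0)   # connInterval = 1 s
long_interval = lifetime_days(230.0, 1.0, 30.0, 4.0)    # connInterval = 4 s
```

Stretching connInterval spreads the per-event charge over more time, so the average current falls toward the sleep floor and lifetime rises sharply, which is the mechanism behind the days-to-years lifetime range reported above.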
NASA Astrophysics Data System (ADS)
Delmelle, Eric M.; Thill, Jean-Claude; Peeters, Dominique; Thomas, Isabelle
2014-07-01
In rapidly growing urban areas, it is deemed vital to expand (or contract) an existing network of public facilities to meet anticipated changes in the level of demand. We present a multi-period capacitated median model for school network facility location planning that minimizes transportation costs while functional costs are subject to a budget constraint. The proposed Vintage Flexible Capacitated Location Problem (ViFCLP) has the flexibility to account for a minimum school-age closing requirement, while the maximum capacity of each school can be adjusted by the addition of modular units. Non-closest assignments are controlled by the introduction of a parameter penalizing excess travel. The applicability of the ViFCLP is illustrated on a large US school system (Charlotte-Mecklenburg, North Carolina) where high school demand is expected to grow faster with distance to the city center. Higher school capacities and a greater penalty on the travel impedance parameter reduce the number of non-closest assignments. The proposed model is beneficial to policy makers seeking to improve the provision and efficiency of public services over a multi-period planning horizon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zang, Qing; Zhao, Junyu; Chen, Hui
2013-09-15
The detector circuit is the core component of the filter polychromator used for scattered-light analysis in Thomson scattering diagnostics, and is responsible for the precision and stability of the system. A high signal-to-noise ratio and stability are primary requirements for the diagnostic. Recently, an upgraded detector circuit for weak-light detection in the Experimental Advanced Superconducting Tokamak (EAST) edge Thomson scattering system has been designed, which can be used for the measurement of large electron temperature (T_e) gradients and low electron densities (n_e). In this new circuit, a thermoelectric-cooled avalanche photodiode with the aid circuit is used to increase stability and enhance the signal-to-noise ratio (SNR); in particular, the circuit is no longer influenced by ambient temperature. These features are expected to improve the accuracy of the EAST Thomson diagnostic dramatically. The mechanical construction of the circuit was redesigned as well, for heat-sinking and installation. All parameters are optimized, and the SNR is dramatically improved. The minimum number of detectable photons is only 10.
2012-01-01
Background Symmetry and regularity of gait are essential outcomes of gait retraining programs, especially in lower-limb amputees. This study aims to present an algorithm to automatically compute symmetry and regularity indices, and to assess the minimum number of strides needed for appropriate evaluation of gait symmetry and regularity through autocorrelation of acceleration signals. Methods Ten transfemoral amputees (AMP) and ten control subjects (CTRL) were studied. Subjects wore an accelerometer and were asked to walk for 70 m at their natural speed (twice). Reference values of the step and stride regularity indices (Ad1 and Ad2) were obtained by autocorrelation analysis of the vertical and antero-posterior acceleration signals, excluding the initial and final strides. The Ad1 and Ad2 coefficients were then computed at different stages by analyzing increasing portions of the signals (considering both the signals cleaned of initial and final strides, and the whole signals). At each stage, the difference between the Ad1 and Ad2 values and the corresponding reference values was compared with the minimum detectable difference (MDD) of the index. If that difference was less than the MDD, it was assumed that the portion of signal used in the analysis was of sufficient length to allow reliable estimation of the autocorrelation coefficient. Results All Ad1 and Ad2 indices were lower in AMP than in CTRL (P < 0.0001). Excluding the initial and final strides from the analysis, the minimum number of strides needed for reliable computation of step symmetry and stride regularity was about 2.2 and 3.5, respectively. Analyzing the whole signals, the minimum number of strides increased to about 15 and 20, respectively. Conclusions Without the need to identify and eliminate the phases of gait initiation and termination, twenty strides can provide a reasonable amount of information to reliably estimate gait regularity in transfemoral amputees. PMID:22316184
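The autocorrelation-based indices can be sketched as follows: center the acceleration signal, compute its unbiased normalized autocorrelation, and read off the coefficients at the average step and stride lags. This is a sketch of the Moe-Nilssen-style procedure the study builds on; lag estimation and the MDD criterion are omitted, and the lags must be supplied by the caller:

```python
import numpy as np

def regularity_indices(acc, step_samples, stride_samples):
    """Step (Ad1) and stride (Ad2) regularity indices from the
    unbiased, normalized autocorrelation of a trunk acceleration
    signal, evaluated at the average step and stride lags.
    Values near 1 indicate highly symmetric/regular gait."""
    x = np.asarray(acc, dtype=float)
    x = x - x.mean()          # remove the mean before correlating
    n = len(x)

    def ac(lag):
        # unbiased autocorrelation coefficient at the given lag
        return np.dot(x[:n - lag], x[lag:]) / ((n - lag) * x.var())

    return ac(step_samples), ac(stride_samples)
```

On a perfectly periodic signal with period equal to the step time, both coefficients approach 1; asymmetry between left and right steps depresses Ad1 relative to Ad2, which is the contrast the study exploits between amputees and controls.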