* Minimum # Experimental Samples DNA Volume (ul) Genomic DNA Concentration (ng/ul) Low Input DNA Volume (ul . **Please inquire about additional cost for low input option. Genotyping Minimum # Experimental Samples DNA sample quality. If you do submit WGA samples, you should anticipate a higher non-random missing data rate
Aseptic minimum volume vitrification technique for porcine parthenogenetically activated blastocyst.
Lin, Lin; Yu, Yutao; Zhang, Xiuqing; Yang, Huanming; Bolund, Lars; Callesen, Henrik; Vajta, Gábor
2011-01-01
Minimum volume vitrification may provide extremely high cooling and warming rates if the sample and the surrounding medium contacts directly with the respective liquid nitrogen and warming medium. However, this direct contact may result in microbial contamination. In this work, an earlier aseptic technique was applied for minimum volume vitrification. After equilibration, samples were loaded on a plastic film, immersed rapidly into factory derived, filter-sterilized liquid nitrogen, and sealed into sterile, pre-cooled straws. At warming, the straw was cut, the filmstrip was immersed into a 39 degree C warming medium, and the sample was stepwise rehydrated. Cryosurvival rates of porcine blastocysts produced by parthenogenetical activation did not differ from control, vitrified blastocysts with Cryotop. This approach can be used for minimum volume vitrification methods and may be suitable to overcome the biological dangers and legal restrictions that hamper the application of open vitrification techniques.
40 CFR 63.1385 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable emission limits: (1) Method 1 (40 CFR part 60, appendix A) for the selection of the sampling port location and number of sampling ports; (2) Method 2 (40 CFR part 60, appendix A) for volumetric flow rate.... Each run shall consist of a minimum run time of 2 hours and a minimum sample volume of 60 dry standard...
Information-Theoretic Assessment of Sample Imaging Systems
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Park, Stephen K.; Rahman, Zia-ur
1999-01-01
By rigorously extending modern communication theory to the assessment of sampled imaging systems, we develop the formulations that are required to optimize the performance of these systems within the critical constraints of image gathering, data transmission, and image display. The goal of this optimization is to produce images with the best possible visual quality for the wide range of statistical properties of the radiance field of natural scenes that one normally encounters. Extensive computational results are presented to assess the performance of sampled imaging systems in terms of information rate, theoretical minimum data rate, and fidelity. Comparisons of this assessment with perceptual and measurable performance demonstrate that (1) the information rate that a sampled imaging system conveys from the captured radiance field to the observer is closely correlated with the fidelity, sharpness and clarity with which the observed images can be restored and (2) the associated theoretical minimum data rate is closely correlated with the lowest data rate with which the acquired signal can be encoded for efficient transmission.
77 FR 22463 - Importation of Clementines From Spain; Amendment to Inspection Provisions
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-16
... adjust the sampling rate and thereby detect pests that might otherwise go undetected prior to treatment... distribution. The commenter recommended that, for those reasons, 200 fruit per consignment be the minimum... concern that an increase in the sampling rate would require more time for APHIS inspectors to sample the...
The Minimum Wage and the Employment of Teenagers. Recent Research.
ERIC Educational Resources Information Center
Fallick, Bruce; Currie, Janet
A study used individual-level data from the National Longitudinal Study of Youth to examine the effects of changes in the federal minimum wage on teenage employment. Individuals in the sample were classified as either likely or unlikely to be affected by these increases in the federal minimum wage on the basis of their wage rates and industry of…
Quantization noise in digital speech. M.S. Thesis- Houston Univ.
NASA Technical Reports Server (NTRS)
Schmidt, O. L.
1972-01-01
The amount of quantization noise generated in a digital-to-analog converter is dependent on the number of bits or quantization levels used to digitize the analog signal in the analog-to-digital converter. The minimum number of quantization levels and the minimum sample rate were derived for a digital voice channel. A sample rate of 6000 samples per second and lowpass filters with a 3 db cutoff of 2400 Hz are required for 100 percent sentence intelligibility. Consonant sounds are the first speech components to be degraded by quantization noise. A compression amplifier can be used to increase the weighting of the consonant sound amplitudes in the analog-to-digital converter. An expansion network must be installed at the output of the digital-to-analog converter to restore the original weighting of the consonant sounds. This technique results in 100 percent sentence intelligibility for a sample rate of 5000 samples per second, eight quantization levels, and lowpass filters with a 3 db cutoff of 2000 Hz.
Research on Abnormal Detection Based on Improved Combination of K - means and SVDD
NASA Astrophysics Data System (ADS)
Hao, Xiaohong; Zhang, Xiaofeng
2018-01-01
In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is independent and compact in class; Then, according to the training samples, the SVDD algorithm is used to construct the minimum superspheres. The subordinate relationship of the samples is determined by calculating the distance of the minimum superspheres constructed by SVDD. If the test sample is less than the center of the hypersphere, the test sample belongs to this class, otherwise it does not belong to this class, after several comparisons, the final test of the effective detection of the test sample.In this paper, we use KDD CUP99 data set to simulate the proposed anomaly detection algorithm. The results show that the algorithm has high detection rate and low false alarm rate, which is an effective network security protection method.
Chappell, Nick A; Jones, Timothy D; Tych, Wlodek
2017-10-15
Insufficient temporal monitoring of water quality in streams or engineered drains alters the apparent shape of storm chemographs, resulting in shifted model parameterisations and changed interpretations of solute sources that have produced episodes of poor water quality. This so-called 'aliasing' phenomenon is poorly recognised in water research. Using advances in in-situ sensor technology it is now possible to monitor sufficiently frequently to avoid the onset of aliasing. A systems modelling procedure is presented allowing objective identification of sampling rates needed to avoid aliasing within strongly rainfall-driven chemical dynamics. In this study aliasing of storm chemograph shapes was quantified by changes in the time constant parameter (TC) of transfer functions. As a proportion of the original TC, the onset of aliasing varied between watersheds, ranging from 3.9-7.7 to 54-79 %TC (or 110-160 to 300-600 min). However, a minimum monitoring rate could be identified for all datasets if the modelling results were presented in the form of a new statistic, ΔTC. For the eight H + , DOC and NO 3 -N datasets examined from a range of watershed settings, an empirically-derived threshold of 1.3(ΔTC) could be used to quantify minimum monitoring rates within sampling protocols to avoid artefacts in subsequent data analysis. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
NASA Astrophysics Data System (ADS)
Mottram, Catherine M.; Parrish, Randall R.; Regis, Daniele; Warren, Clare J.; Argles, Tom W.; Harris, Nigel B. W.; Roberts, Nick M. W.
2015-07-01
Quantitative constraints on the rates of tectonic processes underpin our understanding of the mechanisms that form mountains. In the Sikkim Himalaya, late structural doming has revealed time-transgressive evidence of metamorphism and thrusting that permit calculation of the minimum rate of movement on a major ductile fault zone, the Main Central Thrust (MCT), by a novel methodology. U-Th-Pb monazite ages, compositions, and metamorphic pressure-temperature determinations from rocks directly beneath the MCT reveal that samples from 50 km along the transport direction of the thrust experienced similar prograde, peak, and retrograde metamorphic conditions at different times. In the southern, frontal edge of the thrust zone, the rocks were buried to conditions of 550°C and 0.8 GPa between 21 and 18 Ma along the prograde path. Peak metamorphic conditions of 650°C and 0.8-1.0 GPa were subsequently reached as this footwall material was underplated to the hanging wall at 17-14 Ma. This same process occurred at analogous metamorphic conditions between 18-16 Ma and 14.5-13 Ma in the midsection of the thrust zone and between 13 Ma and 12 Ma in the northern, rear edge of the thrust zone. Northward younging muscovite 40Ar/39Ar ages are consistently 4 Ma younger than the youngest monazite ages for equivalent samples. By combining the geochronological data with the >50 km minimum distance separating samples along the transport axis, a minimum average thrusting rate of 10 ± 3 mm yr-1 can be calculated. This provides a minimum constraint on the amount of Miocene India-Asia convergence that was accommodated along the MCT.
ERIC Educational Resources Information Center
Yu, Jiang; Williford, William R.
1991-01-01
Used sample from New York State Driver License File to mathematically extend dimension of file so that data purging procedure exerts minimum influence on calculation of drinking-driving recidivism. Examined impact of dimension of data on recidivism rate and mathematically extended file until impact of data dimension was minimum. Calculated New…
40 CFR 86.137-94 - Dynamometer test run, gaseous and particulate emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., if applicable), the temperature recorder, the vehicle cooling fan, and the heated THC analysis... diesel-cycle THC analyzer continuous sample line and filter, methanol-fueled vehicle THC, methanol and... measuring devices to zero. (i) For gaseous bag samples (except THC samples), the minimum flow rate is 0.17...
40 CFR 86.137-94 - Dynamometer test run, gaseous and particulate emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., if applicable), the temperature recorder, the vehicle cooling fan, and the heated THC analysis... diesel-cycle THC analyzer continuous sample line and filter, methanol-fueled vehicle THC, methanol and... measuring devices to zero. (i) For gaseous bag samples (except THC samples), the minimum flow rate is 0.17...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the raw exhaust flow rate based on the measured intake air molar flow rate and the chemical balance..., fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b) Determine...) and dilute exhaust corrected for any removed water. (c) Use good engineering judgment to develop your...
9 CFR 130.30 - Hourly rate and minimum user fees.
Code of Federal Regulations, 2011 CFR
2011-01-01
... covered by a flat rate user fee in § 130.7. (14) Export-related bird banding for identification. (15..., except those services covered by flat rate user fees elsewhere in this part, will be calculated at the... activities covered in § 130.11. (3) Obtaining samples required to be tested, either to obtain import permits...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2012 CFR
2012-07-01
... chemical balance terms as given in § 1065.655(e). You may determine the raw exhaust flow rate based on the measured intake air and dilute exhaust molar flow rates and the dilute exhaust chemical balance terms as... air, fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2013 CFR
2013-07-01
... chemical balance terms as given in § 1065.655(e). You may determine the raw exhaust flow rate based on the measured intake air and dilute exhaust molar flow rates and the dilute exhaust chemical balance terms as... air, fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b...
40 CFR 1065.546 - Verification of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2014 CFR
2014-07-01
... chemical balance terms as given in § 1065.655(e). You may determine the raw exhaust flow rate based on the measured intake air and dilute exhaust molar flow rates and the dilute exhaust chemical balance terms as... air, fuel rate measurements, and fuel properties, consistent with good engineering judgment. (b...
40 CFR 1066.125 - Data updating, recording, and control.
Code of Federal Regulations, 2014 CFR
2014-07-01
... minimum recording frequency, such as for sample flow rates from a CVS that does not have a heat exchanger... exhaust flow rate from a CVS with a heat exchanger upstream of the flow measurement 1 Hz. 40 CFR 1065.545§ 1066.425 Diluted exhaust flow rate from a CVS without a heat exchanger upstream of the flow measurement...
NASA Technical Reports Server (NTRS)
Corbett, Lee B.; Bierman, Paul R.; Graly, Joseph A.; Neumann, Thomas A.; Rood, Dylan H.
2013-01-01
High-latitude landscape evolution processes have the potential to preserve old, relict surfaces through burial by cold-based, nonerosive glacial ice. To investigate landscape history and age in the high Arctic, we analyzed in situ cosmogenic Be(sup 10) and Al (sup 26) in 33 rocks from Upernavik, northwest Greenland. We sampled adjacent bedrock-boulder pairs along a 100 km transect at elevations up to 1000 m above sea level. Bedrock samples gave significantly older apparent exposure ages than corresponding boulder samples, and minimum limiting ages increased with elevation. Two-isotope calculations Al(sup26)/B(sup 10) on 20 of the 33 samples yielded minimum limiting exposure durations up to 112 k.y., minimum limiting burial durations up to 900 k.y., and minimum limiting total histories up to 990 k.y. The prevalence of BE(sup 10) and Al(sup 26) inherited from previous periods of exposure, especially in bedrock samples at high elevation, indicates that these areas record long and complex surface exposure histories, including significant periods of burial with little subglacial erosion. The long total histories suggest that these high elevation surfaces were largely preserved beneath cold-based, nonerosive ice or snowfields for at least the latter half of the Quaternary. Because of high concentrations of inherited nuclides, only the six youngest boulder samples appear to record the timing of ice retreat. These six samples suggest deglaciation of the Upernavik coast at 11.3 +/- 0.5 ka (average +/- 1 standard deviation). There is no difference in deglaciation age along the 100 km sample transect, indicating that the ice-marginal position retreated rapidly at rates of approx.120 m yr(sup-1).
Energy landscapes and properties of biomolecules.
Wales, David J
2005-11-09
Thermodynamic and dynamic properties of biomolecules can be calculated using a coarse-grained approach based upon sampling stationary points of the underlying potential energy surface. The superposition approximation provides an overall partition function as a sum of contributions from the local minima, and hence functions such as internal energy, entropy, free energy and the heat capacity. To obtain rates we must also sample transition states that link the local minima, and the discrete path sampling method provides a systematic means to achieve this goal. A coarse-grained picture is also helpful in locating the global minimum using the basin-hopping approach. Here we can exploit a fictitious dynamics between the basins of attraction of local minima, since the objective is to find the lowest minimum, rather than to reproduce the thermodynamics or dynamics.
Rate dependent deformation of porous sandstone across the brittle-ductile transition
NASA Astrophysics Data System (ADS)
Jefferd, M.; Brantut, N.; Mitchell, T. M.; Meredith, P. G.
2017-12-01
Porous sandstones transition from dilatant, brittle deformation at low pressure, to compactant, ductile deformation at high pressure. Both deformation modes are driven by microcracking, and are expected to exhibit a time dependency due to chemical interactions between the pore fluid and the rock matrix. In the brittle regime, time-dependent failure and brittle creep are well documented. However, much less is understood in the ductile regime. We present results from a series of triaxial deformation experiments, performed in the brittle-ductile transition zone of fluid saturated Bleurswiller sandstone (initial porosity = 23%). Samples were deformed at 40 MPa effective pressure, to 4% axial strain, under either constant strain rate (10-5 s-1) or constant stress (creep) conditions. In addition to stress, axial strain and pore volume change, P wave velocities and acoustic emission were monitored throughout. During constant stress tests, the strain rate initially decreased with increasing strain, before reaching a minimum and accelerating to a constant level beyond 2% axial strain. When plotted against axial strain, the strain rate evolution under constant stress conditions, mirrors the stress evolution during the constant strain rate tests; where strain hardening occurs prior to peak stress, which is followed by strain softening and an eventual plateau. In all our tests, the minimum strain rate during creep occurs at the same inelastic strain as the peak stress during constant strain tests, and strongly decreases with decreasing applied stress. The microstructural state of the rock, as interpreted from similar volumetric strain curves, as well as the P-wave velocity evolution and AE production rate, appears to be solely a function of the total inelastic strain, and is independent of the length of time required to reach said strain. We tested the sensitivity of fluid chemistry on the time dependency, through a series of experiments performed under similar stress conditions, but with chemically inert decane instead of water as the pore fluid. Under the same applied stress, decane saturated samples reached a minimum strain rate 2 orders of magnitude lower than the water saturated samples. This is consistent with a mechanism of subcritical crack growth driven by chemical interactions between the pore fluid and the rock.
Two dimensional eye tracking: Sampling rate of forcing function
NASA Technical Reports Server (NTRS)
Hornseth, J. P.; Monk, D. L.; Porterfield, J. L.; Mcmurry, R. L.
1978-01-01
A study was conducted to determine the minimum update rate of a forcing function display required for the operator to approximate the tracking performance obtained on a continuous display. In this study, frequency analysis was used to determine whether there was an associated change in the transfer function characteristics of the operator. It was expected that as the forcing function display update rate was reduced, from 120 to 15 samples per second, the operator's response to the high frequency components of the forcing function would show a decrease in gain, an increase in phase lag, and a decrease in coherence.
Schelleman-Offermans, Karen; Roodbeen, Ruud T J; Lemmens, Paul H H M
2017-11-01
As of January 2014, the Dutch minimum legal age for the sale and purchase of all alcoholic beverages has increased from 16 to 18 years of age. The effectiveness of a minimum legal age policy in controlling the availability of alcohol for adolescents depends on the extent to which this minimum legal age is complied with in the field. The main aim of the current study is to investigate, for a country with a West-European drinking culture, whether raising the minimum legal age for the sale of alcohol has influenced compliance rates among Dutch alcohol vendors. A total of 1770 alcohol purchase attempts by 15-year-old mystery shoppers were conducted in three independent Dutch representative samples of on- and off-premise alcohol outlets in 2013 (T0), 2014 (T1), and 2016 (T2). The effect of the policy change was estimated controlling for gender and age of the vendor. Mean alcohol sellers' compliance rates significantly increased for 15-year-olds from 46.5% before to 55.7% one year and to 73.9% two years after the policy change. Two years after the policy change, alcohol vendors were up to 3 times more likely to comply with the alcohol age limit policy. After the policy change, mean alcohol compliance rates significantly increased when 15-year-olds attempted to purchase alcohol, an effect which seems to increase over time. Nevertheless, a rise in the compliance rate was already present in the years preceding the introduction of the new minimum legal age. This perhaps signifies a process in which a lowering in the general acceptability of juvenile drinking already started before the increased minimum legal age was introduced and alcohol vendors might have been anticipating this formal legal change. Copyright © 2017 Elsevier B.V. All rights reserved.
Code of Federal Regulations, 2010 CFR
2010-07-01
... wages become effective and what is the special minimum wage rate? 520.409 Section 520.409 Labor... apprentices special minimum wages become effective and what is the special minimum wage rate? (a) An... Division. (b) The wage rate specified by the apprenticeship program becomes the special minimum wage rate...
A new approach to importance sampling for the simulation of false alarms. [in radar systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1987-01-01
In this paper a modified importance sampling technique for improving the convergence of Importance Sampling is given. By using this approach to estimate low false alarm rates in radar simulations, the number of Monte Carlo runs can be reduced significantly. For one-dimensional exponential, Weibull, and Rayleigh distributions, a uniformly minimum variance unbiased estimator is obtained. For Gaussian distribution the estimator in this approach is uniformly better than that of previously known Importance Sampling approach. For a cell averaging system, by combining this technique and group sampling, the reduction of Monte Carlo runs for a reference cell of 20 and false alarm rate of lE-6 is on the order of 170 as compared to the previously known Importance Sampling approach.
40 CFR 86.1437 - Test run-manufacturer.
Code of Federal Regulations, 2010 CFR
2010-07-01
... pipes. Exhaust gas concentrations from vehicle engines equipped with multiple exhaust pipes must be... apply. (1) Exhaust gas sampling algorithm. The analysis of exhaust gas concentrations begins ten seconds after the applicable test mode begins. Exhaust gas concentrations must be analyzed at a minimum rate of...
Protein side chain rotational isomerization: A minimum perturbation mapping study
NASA Astrophysics Data System (ADS)
Haydock, Christopher
1993-05-01
A theory of the rotational isomerization of the indole side chain of tryptophan-47 of variant-3 scorpion neurotoxin is presented. The isomerization potential energy, entropic part of the isomerization free energy, isomer probabilities, transition state theory reaction rates, and indole order parameters are calculated from a minimum perturbation mapping over tryptophan-47 χ1×χ2 torsion space. A new method for calculating the fluorescence anisotropy from molecular dynamics simulations is proposed. The method is based on an expansion that separates transition dipole orientation from chromophore dynamics. The minimum perturbation potential energy map is inverted and applied as a bias potential for a 100 ns umbrella sampling simulation. The entropic part of the isomerization free energy as calculated by minimum perturbation mapping and umbrella sampling are in fairly close agreement. Throughout, the approximation is made that two glutamine and three tyrosine side chains neighboring tryptophan-47 are truncated at the Cβ atom. Comparison with the previous combination thermodynamic perturbation and umbrella sampling study suggests that this truncated neighbor side chain approximation leads to at least a qualitatively correct theory of tryptophan-47 rotational isomerization in the wild type variant-3 scorpion neurotoxin. Analysis of van der Waals interactions in a transition state region indicates that for the simulation of barrier crossing trajectories a linear combination of three specially defined dihedral angles will be superior to a simple side chain dihedral reaction coordinate.
Harless, David W.; Pink, George H.; Spetz, Joanne; Mark, Barbara
2010-01-01
This study assesses whether California’s minimum nurse staffing legislation affected the amount of uncompensated care provided by California hospitals. Using data from California’s Office of Statewide Health Planning and Development, the American Hospital Association Annual Survey and InterStudy, we divide hospitals into quartiles based on pre-regulation staffing levels. Controlling for other factors, we estimate changes in the growth rate of uncompensated care in the three lowest staffing quartiles relative to the quartile of hospitals with the highest staffing level. Our sample includes short-term general hospitals over the period 1999 to 2006. We find that growth rates in uncompensated care are lower in the first three staffing quartiles as compared to the highest quartile; however, results are statistically significant only for county and for-profit hospitals in quartiles one and three. We conclude that minimum nurse staffing ratios may lead some hospitals to limit uncompensated care, likely due to increased financial pressure. PMID:21156707
NASA Astrophysics Data System (ADS)
Larionov, G. A.; Bushueva, O. G.; Gorobets, A. V.; Dobrovol'skaya, N. G.; Kiryukhina, Z. P.; Krasnov, S. F.; Kobylchenko Kuksina, L. V.; Litvin, L. F.; Sudnitsyn, I. I.
2018-02-01
It has been shown in experiments in a hydraulic flume with a knee-shaped bend that the rate of soil erosion more than doubles at the flow impact angles to the channel side from 0° to 50°. At higher channel bends, the experiment could not be performed because of backwater. Results of erosion by water stream approaching the sample surface at angles between 2° and 90° are reported. It has been found that the maximum erosion rate is observed at flow impact angles of about 45°, and the minimum rate at 90°. The minimum soil erosion rate is five times lower than the maximum erosion rate. This is due to the difference in the rate of free water penetration into the upper soil layer, and the impact of the hydrodynamic pressure, which is maximum at the impact angle of 90°. The penetration of water into the interaggregate space results in the breaking of bonds between aggregates, which is the main condition for the capture of particles by the flow.
Treatment of caries in relation to lesion severity: implications for minimum intervention dentistry.
Brennan, D S; Balasubramanian, M; Spencer, A J
2015-01-01
To date there is little evidence of minimum intervention in relation to treatment patterns, particularly for initial carious lesions. The objective of this study was to investigate treatment provided to patients with a main diagnosis of coronal caries in relation to the severity of the caries lesion. A random sample of Australian dentists was surveyed by mailed questionnaires in 2009-2010 (response rate 67%). Data on services, patient characteristics and main diagnosis were collected from a service log. Models of service rates adjusted for age, sex, insurance status and reason for visit showed that compared to the reference category of gross caries lesions, there were higher rates [rate ratio, 95% CI] of restorative services for initial [1.63, 1.31-2.03] and cavitated [1.69, 1.39-2.05] lesions, higher rates of prophylaxis for initial [3.77, 2.09-6.79] and cavitated [3.88, 2.29-6.58] lesions, lower rates of endodontic services for initial [0.07, 0.02-0.30] and cavitated [0.11, 0.04-0.30] lesions, and lower rates of extraction for initial [0.15, 0.06-0.34] and cavitated [0.15, 0.07-0.31] lesions. Treatment of coronal caries was characterized by high rates of restorative services, but gross lesions had lower restorative rates and higher rates of endodontic and extraction services. There was little differentiation in treatment of coronal caries between initial and cavitated lesions, suggesting scope for increased management of initial carious lesions by the adoption of more minimum intervention approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2013 CFR
2013-01-01
... basic pay does not fall below the minimum rate of their band. 9701.324 Section 9701.324 Administrative... basic pay does not fall below the minimum rate of their band. An employee who does not receive a pay... fall below the minimum rate of his or her band as a result of that rating will receive such an increase...
Code of Federal Regulations, 2011 CFR
2011-01-01
... basic pay does not fall below the minimum rate of their band. 9701.324 Section 9701.324 Administrative... basic pay does not fall below the minimum rate of their band. An employee who does not receive a pay... fall below the minimum rate of his or her band as a result of that rating will receive such an increase...
Code of Federal Regulations, 2012 CFR
2012-01-01
... basic pay does not fall below the minimum rate of their band. 9701.324 Section 9701.324 Administrative... basic pay does not fall below the minimum rate of their band. An employee who does not receive a pay... fall below the minimum rate of his or her band as a result of that rating will receive such an increase...
Code of Federal Regulations, 2014 CFR
2014-01-01
... basic pay does not fall below the minimum rate of their band. 9701.324 Section 9701.324 Administrative... basic pay does not fall below the minimum rate of their band. An employee who does not receive a pay... fall below the minimum rate of his or her band as a result of that rating will receive such an increase...
New stimulation pattern design to improve P300-based matrix speller performance at high flash rate
NASA Astrophysics Data System (ADS)
Polprasert, Chantri; Kukieattikool, Pratana; Demeechai, Tanee; Ritcey, James A.; Siwamogsatham, Siwaruk
2013-06-01
Objective. We propose a new stimulation pattern design for the P300-based matrix speller aimed at increasing the minimum target-to-target interval (TTI). Approach. Inspired by the simplicity and strong performance of the conventional row-column (RC) stimulation, the proposed stimulation is obtained by modifying the RC stimulation through alternating row and column flashes which are selected based on the proposed design rules. The second flash of the double-flash components is then delayed for a number of flashing instants to increase the minimum TTI. The trade-off inherited in this approach is the reduced randomness within the stimulation pattern. Main results. We test the proposed stimulation pattern and compare its performance in terms of selection accuracy, raw and practical bit rates with the conventional RC flashing paradigm over several flash rates. By increasing the minimum TTI within the stimulation sequence, the proposed stimulation has more event-related potentials that can be identified compared to that of the conventional RC stimulations, as the flash rate increases. This leads to significant performance improvement in terms of the letter selection accuracy, the raw and practical bit rates over the conventional RC stimulation. Significance. These studies demonstrate that significant performance improvement over the RC stimulation is obtained without additional testing or training samples to compensate for low P300 amplitude at high flash rate. We show that our proposed stimulation is more robust to reduced signal strength due to the increased flash rate than the RC stimulation.
Jiang, Junli; Wang, Bin; Zhu, Zhaoqiong; Yang, Jun; Liu, Jin; Zhang, Wensheng
2017-01-01
Because etomidate induces prolonged adrenal suppression, even following a single bolus, its use as an infused anesthetic is limited. Our previous study indicated that a single administration of the novel etomidate analog methoxyethyletomidate hydrochloride (ET-26-HCl) shows little suppression of adrenocortical function. The aims of the present study were to (1) determine the minimum infusion rate of ET-26-HCl and compare it with those for etomidate and cyclopropyl-methoxycarbonylmetomidate (CPMM), a rapidly metabolized etomidate analog that is currently in clinical trials and (2) to evaluate adrenocortical function after a continuous infusion of ET-26-HCl as part of a broader study investigating whether this etomidate analog is suitable for long infusion in the maintenance of anesthesia. The up-and-down method was used to determine the minimum infusion rates for ET-26-HCl, etomidate and CPMM. Sprague-Dawley rats ( n = 32) were then randomly divided into four groups: etomidate, ET-26-HCl, CPMM, and vehicle control. Rats in each group were infused for 60 min with one of the drugs at its predetermined minimum infusion rate. Blood samples were drawn initially and then every 30 min after drug infusion to determine the adrenocorticotropic hormone-stimulated concentration of serum corticosterone as a measure of adrenocortical function. The minimum infusion rates for etomidate, ET-26-HCl and CPMM were 0.29, 0.62, and 0.95 mg/kg/min, respectively. Compared with controls, etomidate decreased serum corticosterone, as expected, whereas serum corticosterone concentrations following infusion with the etomidate analogs ET-26-HCl or CPMM were not significantly different from those in the control group. The corticosterone concentrations tended to be reduced for the first hour following ET-26-HCl infusion (as compared to vehicle infusion); however, this reduction did not reach statistical significance. Thus, further studies are warranted examining the practicability of using ET-26-HCl as an infused anesthetic.
NASA Astrophysics Data System (ADS)
Rezaei, M.; Kermanpur, A.; Sadeghi, F.
2018-03-01
Fabrication of single crystal (SC) Ni-based gas turbine blades with a minimum crystal misorientation has always been a challenge in gas turbine industry, due to its significant influence on high temperature mechanical properties. This paper reports an experimental investigation and numerical simulation of the SC solidification process of a Ni-based superalloy to study effects of withdrawal rate and starter block size on crystal orientation. The results show that the crystal misorientation of the sample with 40 mm starter block height is decreased with increasing withdrawal rate up to about 9 mm/min, beyond which the amount of misorientation is increased. It was found that the withdrawal rate, height of the starter block and temperature gradient are completely inter-dependent and indeed achieving a SC specimen with a minimum misorientation needs careful optimization of these process parameters. The height of starter block was found to have higher impact on crystal orientation compared to the withdrawal rate. A suitable withdrawal rate regime along with a sufficient starter block height was proposed to produce SC parts with the lowest misorientation.
Dynamic Positron Emission Tomography [PET] in Man Using Small Bismuth Germanate Crystals
DOE R&D Accomplishments Database
Derenzo, S. E.; Budinger, T. F.; Huesman, R. H.; Cahoon, J. L.
1982-04-01
Primary considerations for the design of positron emission tomographs for medical studies in humans are the need for high imaging sensitivity, whole organ coverage, good spatial resolution, high maximum data rates, adequate spatial sampling with minimum mechanical motion, shielding against out of plane activity, pulse height discrimination against scattered photons, and timing discrimination against accidental coincidences. We discuss the choice of detectors, sampling motion, shielding, and electronics to meet these objectives.
5 CFR 532.249 - Minimum rates for hard-to-fill positions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 5 Administrative Personnel 1 2012-01-01 2012-01-01 false Minimum rates for hard-to-fill positions. 532.249 Section 532.249 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.249 Minimum rates for hard-to-fill...
5 CFR 532.249 - Minimum rates for hard-to-fill positions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 5 Administrative Personnel 1 2013-01-01 2013-01-01 false Minimum rates for hard-to-fill positions. 532.249 Section 532.249 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.249 Minimum rates for hard-to-fill...
5 CFR 532.249 - Minimum rates for hard-to-fill positions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 5 Administrative Personnel 1 2014-01-01 2014-01-01 false Minimum rates for hard-to-fill positions. 532.249 Section 532.249 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.249 Minimum rates for hard-to-fill...
Minimum prices for alcohol and educational disparities in alcohol-related mortality.
Herttua, Kimmo; Mäkelä, Pia; Martikainen, Pekka
2015-05-01
Minimum price of alcohol is one of the proposed set of alcohol policies in many high-income countries. However, the extent to which alcohol-related harm is associated with minimum prices across socioeconomic groups is not known. Using Finnish national registers in 1988-2007, we investigated, by means of time-series analysis, the association between minimum prices for alcohol overall, as well as for various types of alcoholic beverages, and alcohol-related mortality, among men and women ages 30-79 years across three educational groups. We defined quarterly aggregations of alcohol-related deaths, based on a sample including 80% of all deaths, in accordance with information on both underlying and contributory causes of death. About 62,500 persons died from alcohol-related causes during the 20-year follow-up. The alcohol-related mortality rate was more than threefold higher among those with a basic education than among those with a tertiary education. Among men with a basic education, an increase of 1% in the minimum price of alcohol was associated with a decrease of 0.03% (95% confidence interval = 0.01, 0.04%) in deaths per 100,000 person-years. Changes in the minimum prices of distilled spirits, intermediate products, and strong beer were also associated with changes in the opposite direction among men with a basic education and among women with a secondary education, whereas among the most highly educated there were no associations between the minimum prices of any beverages and mortality. Moreover, we found no evidence of an association between lower minimum prices for wine and higher rates of alcohol-related mortality in any of the population sub-groups. The results reveal associations between higher minimum prices and lower alcohol-related mortality among men with a basic education and women with a secondary education for all beverage types except wine.
Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun
2015-09-01
A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
Code of Federal Regulations, 2011 CFR
2011-01-01
... falls below the minimum adjusted rate of their band. 9701.337 Section 9701.337 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.337 Treatment of employees whose rate of pay falls... (including a locality or special rate supplement) falls below the minimum adjusted rate of his or her band as...
Code of Federal Regulations, 2012 CFR
2012-01-01
... falls below the minimum adjusted rate of their band. 9701.337 Section 9701.337 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.337 Treatment of employees whose rate of pay falls... (including a locality or special rate supplement) falls below the minimum adjusted rate of his or her band as...
Code of Federal Regulations, 2014 CFR
2014-01-01
... falls below the minimum adjusted rate of their band. 9701.337 Section 9701.337 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.337 Treatment of employees whose rate of pay falls... (including a locality or special rate supplement) falls below the minimum adjusted rate of his or her band as...
Code of Federal Regulations, 2013 CFR
2013-01-01
... falls below the minimum adjusted rate of their band. 9701.337 Section 9701.337 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.337 Treatment of employees whose rate of pay falls... (including a locality or special rate supplement) falls below the minimum adjusted rate of his or her band as...
Code of Federal Regulations, 2010 CFR
2010-10-01
... conference tariff or at the stated minimum level or floor rate for an open-rated commodity published in a..., stated minimum level, or floor rate has at least one foreign-flag carrier as a voting member, or (b) At a rate or tariff agreement rate, or at the stated minimum level or floor rate for an open-rated commodity...
Noninferiority trial designs for odds ratios and risk differences.
Hilton, Joan F
2010-04-30
This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, or algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using k-means clustering algorithm a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established i.e. absolute value of Moderated Welch Test statistic, Minimum Significant Difference, absolute value of Signal-To-Noise ratio and Baumgartner-Weiss-Schindler test statistic. In case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In case of sensitivity, the absolute value of Moderated Welch Test statistic and absolute value of Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample size. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has critical impact on results of pathway enrichment analysis. The absolute value of Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
Contaminant levels, source strengths, and ventilation rates in California retail stores.
Chan, W R; Cohn, S; Sidheswaran, M; Sullivan, D P; Fisk, W J
2015-08-01
This field study measured ventilation rates and indoor air quality in 21 visits to retail stores in California. Three types of stores, such as grocery, furniture/hardware stores, and apparel, were sampled. Ventilation rates measured using a tracer gas decay method exceeded the minimum requirement of California's Title 24 Standard in all but one store. Concentrations of volatile organic compounds (VOCs), ozone, and carbon dioxide measured indoors and outdoors were analyzed. Even though there was adequate ventilation according to standard, concentrations of formaldehyde and acetaldehyde exceeded the most stringent chronic health guidelines in many of the sampled stores. The whole-building emission rates of VOCs were estimated from the measured ventilation rates and the concentrations measured indoor and outdoor. Estimated formaldehyde emission rates suggest that retail stores would need to ventilate at levels far exceeding the current Title 24 requirement to lower indoor concentrations below California's stringent formaldehyde reference level. Given the high costs of providing ventilation, effective source control is an attractive alternative. Field measurements suggest that California retail stores were well ventilated relative to the minimum ventilation rate requirement specified in the Building Energy Efficiency Standards Title 24. Concentrations of formaldehyde found in retail stores were low relative to levels found in homes but exceeded the most stringent chronic health guideline. Looking ahead, California is mandating zero energy commercial buildings by 2030. To reduce the energy use from building ventilation while maintaining or even lowering formaldehyde in retail stores, effective formaldehyde source control measures are vitally important. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Thomas-Gibson, Siwan; Bugajski, Marek; Bretthauer, Michael; Rees, Colin J; Dekker, Evelien; Hoff, Geir; Jover, Rodrigo; Suchanek, Stepan; Ferlitsch, Monika; Anderson, John; Roesch, Thomas; Hultcranz, Rolf; Racz, Istvan; Kuipers, Ernst J; Garborg, Kjetil; East, James E; Rupinski, Maciej; Seip, Birgitte; Bennett, Cathy; Senore, Carlo; Minozzi, Silvia; Bisschops, Raf; Domagk, Dirk; Valori, Roland; Spada, Cristiano; Hassan, Cesare; Dinis-Ribeiro, Mario; Rutter, Matthew D
2017-01-01
The European Society of Gastrointestinal Endoscopy and United European Gastroenterology present a short list of key performance measures for lower gastrointestinal endoscopy. We recommend that endoscopy services across Europe adopt the following seven key performance measures for lower gastrointestinal endoscopy for measurement and evaluation in daily practice at a center and endoscopist level: 1 rate of adequate bowel preparation (minimum standard 90%); 2 cecal intubation rate (minimum standard 90%); 3 adenoma detection rate (minimum standard 25%); 4 appropriate polypectomy technique (minimum standard 80%); 5 complication rate (minimum standard not set); 6 patient experience (minimum standard not set); 7 appropriate post-polypectomy surveillance recommendations (minimum standard not set). Other identified performance measures have been listed as less relevant based on an assessment of their importance, scientific acceptability, feasibility, usability, and comparison to competing measures. PMID:28507745
Code of Federal Regulations, 2010 CFR
2010-04-01
... RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE When... annuity rate under the overall minimum. A spouse's inclusion in the computation of the overall minimum...
Code of Federal Regulations, 2010 CFR
2010-01-01
... minimum wage requirements in determining prevailing rates. 532.205 Section 532.205 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PREVAILING RATE SYSTEMS Prevailing Rate Determinations § 532.205 The use of Federal, State, and local minimum wage requirements in determining prevailing...
77 FR 75896 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2013
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-26
...-11213, Notice No. 16] Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2013... from FRA's Management Information System, the rail industry's random drug testing positive rate has... therefore determined that the minimum annual random drug testing rate for the period January 1, 2013...
The Effect of Minimum Wage Rates on High School Completion
ERIC Educational Resources Information Center
Warren, John Robert; Hamrock, Caitlin
2010-01-01
Does increasing the minimum wage reduce the high school completion rate? Previous research has suffered from (1. narrow time horizons, (2. potentially inadequate measures of states' high school completion rates, and (3. potentially inadequate measures of minimum wage rates. Overcoming each of these limitations, we analyze the impact of changes in…
Persistence and efficacy of termiticides used in preconstruction treatments to soil in Mississippi
J.E. Mulrooney; M.K. Davis; T.L. Wagner; R.L. Ingram
2006-01-01
Laboratory and field studies were conducted to determine the persistence and efficacy of termiticides used as preconstruction treatments against subterranean termites. Bifenthrin (0.067%), chlorpyrifos (0.75%), and imidacloprid (0.05%) ( [AI]; wt:wt) were applied to soil beneath a monolithic concrete slab at their minimum labeled rates. Soil samples were taken from...
Code of Federal Regulations, 2012 CFR
2012-01-01
... fall below the minimum adjusted rate of their band. 9701.336 Section 9701.336 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.336 Treatment of employees whose pay does not fall... or special rate supplement) does not fall below the minimum adjusted rate of his or her band as a...
Code of Federal Regulations, 2013 CFR
2013-01-01
... fall below the minimum adjusted rate of their band. 9701.336 Section 9701.336 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.336 Treatment of employees whose pay does not fall... or special rate supplement) does not fall below the minimum adjusted rate of his or her band as a...
Code of Federal Regulations, 2014 CFR
2014-01-01
... fall below the minimum adjusted rate of their band. 9701.336 Section 9701.336 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.336 Treatment of employees whose pay does not fall... or special rate supplement) does not fall below the minimum adjusted rate of his or her band as a...
Code of Federal Regulations, 2011 CFR
2011-01-01
... fall below the minimum adjusted rate of their band. 9701.336 Section 9701.336 Administrative Personnel... Administration Locality and Special Rate Supplements § 9701.336 Treatment of employees whose pay does not fall... or special rate supplement) does not fall below the minimum adjusted rate of his or her band as a...
NASA Astrophysics Data System (ADS)
Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.
2013-03-01
We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...
Abrams, Thad E; Lund, Brian C; Alexander, Bruce; Bernardy, Nancy C; Friedman, Matthew J
2015-01-01
Posttraumatic stress disorder (PTSD) is a high-priority treatment area for the Veterans Health Administration (VHA), and dissemination patterns of innovative, efficacious therapies can inform areas for potential improvement of diffusion efforts and quality prescribing. In this study, we replicated a prior examination of the period prevalence of prazosin use as a function of distance from Puget Sound, Washington, where prazosin was first tested as an effective treatment for PTSD and where prazosin use was previously shown to be much greater than in other parts of the United States. We tested the following three hypotheses related to prazosin geographic diffusion: (1) a positive geographical correlation exists between the distance from Puget Sound and the proportion of users treated according to a guideline recommended minimum therapeutic target dose (>/=6 mg/d), (2) an inverse geographic correlation exists between prazosin and benzodiazepine use, and (3) no geographical correlation exists between prazosin use and serotonin reuptake inhibitor/serotonin norepinephrine reuptake inhibitor (SSRI/SNRI) use. Among a national sample of veterans with PTSD, overall prazosin utilization increased from 5.5 to 14.8% from 2006 to 2012. During this time period, rates at the Puget Sound VHA location declined from 34.4 to 29.9%, whereas utilization rates at locations a minimum of 2,500 miles away increased from 3.0 to 12.8%. Rates of minimum target dosing fell from 42.6 to 34.6% at the Puget Sound location. In contrast, at distances of at least 2,500 miles from Puget Sound, minimum threshold dosing rates remained stable (range, 18.6 to 17.7%). No discernible association was demonstrated between SSRI/SNRI or benzodiazepine utilization and the geographic distance from Puget Sound. Minimal threshold dosing of prazosin correlated positively with increased diffusion of prazosin use, but there was still a distance diffusion gradient. Although prazosin adoption has improved, geographic differences persist in both prescribing rates and minimum target dosing. Importantly, these regional disparities appear to be limited to prazosin prescribing and are not meaningfully correlated with SSRI/SNRI and benzodiazepine use as indicators of PTSD prescribing quality.
40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...
Kurz, M.D.; Colodner, D.; Trull, T.W.; Moore, R.B.; O'Brien, K.
1990-01-01
In an effort to determine the in situ production rate of spallation-produced cosmogenic 3He, and evaluate its use as a surface exposure chronometer, we have measured cosmogenic helium contents in a suite of Hawaiian radiocarbon-dated lava flows. The lava flows, ranging in age from 600 to 13,000 years, were collected from Hualalai and Mauna Loa volcanoes on the island of Hawaii. Because cosmic ray surface-exposure dating requires the complete absence of erosion or soil cover, these lava flows were selected specifically for this purpose. The 3He production rate, measured within olivine phenocrysts, was found to vary significantly, ranging from 47 to 150 atoms g-1 yr-1 (normalized to sea level). Although there is considerable scatter in the data, the samples younger than 10,000 years are well-preserved and exposed, and the production rate variations are therefore not related to erosion or soil cover. Data averaged over the past 2000 years indicate a sea-level 3He production rate of 125 ?? 30 atoms g-1 yr-1, which agrees well with previous estimates. The longer record suggests a minimum in sea level normalized 3He production rate between 2000 and 7000 years (55 ?? 15 atoms g-1 yr-1), as compared to samples younger than 2000 years (125 ?? 30 atoms g-1 yr-1), and those between 7000 and 10,000 years (127 ?? 19 atoms g-1 yr-1). The minimum in production rate is similar in age to that which would be produced by variations in geomagnetic field strength, as indicated by archeomagnetic data. However, the production rate variations (a factor of 2.3 ?? 0.8) are poorly determined due to the large uncertainties in the youngest samples and questions of surface preservation for the older samples. Calculations using the atmospheric production model of O'Brien (1979) [35], and the method of Lal and Peters (1967) [11], predict smaller production rate variations for similar variation in dipole moment (a factor of 1.15-1.65). Because the production rate variations, archeomagnetic data, and theoretical estimates are not well determined at present, the relationship between dipole moment and production rate will require further study. Precise determination of the production rate is an important uncertainty in the surface-exposure technique, but the data demonstrate that it is feasible to date samples as young as 600 years of age providing that there has been no erosion or soil cover. Therefore, the technique will have important applications for volcanology, glacial geology, geomorphology and archaeology. ?? 1990.
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
Code of Federal Regulations, 2010 CFR
2010-04-01
... annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate... second month after the month the child's disability ends, if the child is 18 years old or older, and not...
NASA Astrophysics Data System (ADS)
Hasse, T. R.; Schook, D. M.
2017-12-01
Geochronometers at centennial scales can aid our understanding of process rates in fluvial geomorphology. Plains cottonwood trees (Populus deltoides ssp. Monilifera) in the high plains of the United States are known to germinate on freshly created deposits such as point bars adjacent to rivers. As the trees mature they may be partially buried (up to a few meters) by additional flood deposits. Cottonwood age gives a minimum age estimate of the stratigraphic surface where the tree germinated and a maximum age estimate for overlying sediments, providing quantitative data on rates of river migration and sediment accumulation. Optically Stimulated Luminescence (OSL) of sand grains can be used to estimate the time since the sand grains were last exposed to sunlight, also giving a minimum age estimate of sediment burial. Both methods have disadvantages: Browsing, partial burial, and other damage to young cottonwoods can increase the time required for the tree to reach a height where it can be sampled with a tree corer, making the germination point a few years to a few decades older than the measured tree age; fluvial OSL samples can have inherited age (when the OSL age is older than the burial age) if the sediment was not completely bleached prior to burial. We collected OSL samples at 8 eroding banks of the Powder River Montana, and tree cores at breast height (±1.2 m) from cottonwood trees growing on the floodplain adjacent to the OSL sample locations. Using the Minimum Age Model (MAM) we found that OSL ages appear to be 500 to 1,000 years older than the adjacent cottonwood trees which range in age (at breast height) from 60 to 185 years. Three explanations for this apparent anomaly in ages are explored. Samples for OSL could be below a stratigraphic unconformity relative to the cottonwood germination elevation. Shallow samples for OSL could be affected by anthropogenic mixing of sediments due to plowing and leveling of hay fields. The OSL samples could have significant inherited ages due to partial bleaching during sediment transport in this high plains river with high suspended sediment loads. The dendrochronology of the adjacent cottonwood trees then offers an independent measurement of the inherited age of the OSL samples.
20 CFR 10.406 - What are the maximum and minimum rates of compensation in disability cases?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false What are the maximum and minimum rates of... Impairment § 10.406 What are the maximum and minimum rates of compensation in disability cases? (a... monthly pay does not include locality adjustments.) Compensation for Death ...
20 CFR 10.406 - What are the maximum and minimum rates of compensation in disability cases?
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true What are the maximum and minimum rates of... Impairment § 10.406 What are the maximum and minimum rates of compensation in disability cases? (a... monthly pay does not include locality adjustments.) Compensation for Death ...
20 CFR 10.406 - What are the maximum and minimum rates of compensation in disability cases?
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true What are the maximum and minimum rates of... Impairment § 10.406 What are the maximum and minimum rates of compensation in disability cases? (a... monthly pay does not include locality adjustments.) Compensation for Death ...
20 CFR 10.406 - What are the maximum and minimum rates of compensation in disability cases?
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false What are the maximum and minimum rates of... Impairment § 10.406 What are the maximum and minimum rates of compensation in disability cases? (a... monthly pay does not include locality adjustments.) Compensation for Death ...
20 CFR 10.406 - What are the maximum and minimum rates of compensation in disability cases?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false What are the maximum and minimum rates of... Impairment § 10.406 What are the maximum and minimum rates of compensation in disability cases? (a... monthly pay does not include locality adjustments.) Compensation for Death ...
Duan, Yifan; Yang, Zhenyu; Lai, Jianqiang; Yu, Dongmei; Chang, Suying; Pang, Xuehong; Jiang, Shan; Zhang, Huanmei; Bi, Ye; Wang, Jie; Scherpbier, Robert W; Zhao, Liyun; Yin, Shian
2018-02-22
Appropriate infant and young child feeding could reduce morbidity and mortality and could improve cognitive development of children. However, nationwide data on exclusive breastfeeding and complementary feeding status in China are scarce. The aim of this study was to assess current exclusive breastfeeding and complementary feeding status in China. A national representative survey (Chinese National Nutrition and Health Survey) of children aged under 6 years was done in 2013. Stratified multistage cluster sampling was used to select study participants. World Health Organization (WHO) infant and young child feeding indicators were firstly used to assess exclusive breastfeeding and complementary feeding practice nationwide. In total, 14,458 children aged under two years (0 to <730 days) were studied from 55 counties in 30 provinces in China. The crude exclusive breastfeeding rate under 6 months was 20.7% (908/4381) and the weighted exclusive breastfeeding rate was 18.6%. The crude prevalence of minimum dietary diversity, minimum meal frequency and minimum acceptable diet were 52.5% (5286/10,071), 69.8% (7027/10,071), and 27.4% (2764/10,071) among children aged 6-23 months, respectively. The weighted rate was 53.7%, 69.1%, and 25.1%, respectively. Residential area, household income and maternal education were positively associated with the three complementary feeding indicators. The exclusive breastfeeding rate under 6 months was low and complementary feeding practice was not optimal in China. Residential area, household income and maternal education might be used to target infants and young children to improve complementary feeding practice.
Duan, Yifan; Yang, Zhenyu; Lai, Jianqiang; Yu, Dongmei; Chang, Suying; Pang, Xuehong; Jiang, Shan; Zhang, Huanmei; Bi, Ye; Wang, Jie; Scherpbier, Robert W.; Zhao, Liyun; Yin, Shian
2018-01-01
Appropriate infant and young child feeding could reduce morbidity and mortality and could improve cognitive development of children. However, nationwide data on exclusive breastfeeding and complementary feeding status in China are scarce. The aim of this study was to assess current exclusive breastfeeding and complementary feeding status in China. A national representative survey (Chinese National Nutrition and Health Survey) of children aged under 6 years was done in 2013. Stratified multistage cluster sampling was used to select study participants. World Health Organization (WHO) infant and young child feeding indicators were firstly used to assess exclusive breastfeeding and complementary feeding practice nationwide. In total, 14,458 children aged under two years (0 to <730 days) were studied from 55 counties in 30 provinces in China. The crude exclusive breastfeeding rate under 6 months was 20.7% (908/4381) and the weighted exclusive breastfeeding rate was 18.6%. The crude prevalence of minimum dietary diversity, minimum meal frequency and minimum acceptable diet were 52.5% (5286/10,071), 69.8% (7027/10,071), and 27.4% (2764/10,071) among children aged 6–23 months, respectively. The weighted rate was 53.7%, 69.1%, and 25.1%, respectively. Residential area, household income and maternal education were positively associated with the three complementary feeding indicators. The exclusive breastfeeding rate under 6 months was low and complementary feeding practice was not optimal in China. Residential area, household income and maternal education might be used to target infants and young children to improve complementary feeding practice. PMID:29470415
NASA Astrophysics Data System (ADS)
Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter
1987-03-01
The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs. Shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses but not the number of overflows determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one, the power spectrum needs a three times higher pulse rate for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantage of the present system is the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be done with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
Occurrence of Listeria spp. in retail meat and dairy products in the area of Addis Ababa, Ethiopia.
Derra, Firehiwot Abera; Karlsmose, Susanne; Monga, Dharam P; Mache, Abebe; Svendsen, Christina Aaby; Félix, Benjamin; Granier, Sophie A; Geyid, Abera; Taye, Girum; Hendriksen, Rene S
2013-06-01
Listeriosis, a bacterial disease in humans and animals, is mostly caused by ingestion of Listeria monocytogenes via contaminated food and/or water, or by a zoonotic infection. Globally, listeriosis has in general a low incidence but a high case fatality rate. The objective of this study was to investigate the occurrence, antimicrobial profiles, and genetic relatedness of L. monocytogenes from raw meat and dairy products (raw milk, cottage cheese, cream cake), collected from the capital and five neighboring towns in Ethiopia. Two hundred forty food samples were purchased from July to December 2006 from food vendors, shops, and supermarkets, using a cross-sectional study design. L. monocytogenes were isolated and subjected to molecular serotyping. The genetic relatedness and antimicrobial susceptibility patterns were investigated using pulsed-field gel electrophoresis (PFGE) and minimum inhibitory concentration determinations. Of 240 food samples tested, 66 (27.5%) were positive for Listeria species. Of 59 viable isolates, 10 (4.1%) were L. monocytogenes. Nine were serotype 4b and one was 2b. Minimum inhibitory concentration determination and PFGE of the 10 L. monocytogenes isolates showed low occurrence of antimicrobial resistance among eight different PFGE types. The findings in this study correspond to similar research undertaken in Ethiopia by detecting L. monocytogenes with similar prevalence rates. Public education is crucial as regards the nature of this organism and relevant prevention measures. Moreover, further research in clinical samples should be carried out to estimate the prevalence and carrier rate in humans, and future investigations on foodborne outbreaks must include L. monocytogenes.
ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis tools by offering a tool based on the framework of Minimum Detectable Effect Sizes (MDES) formulae that can be used in determining sample size requirements and in estimating minimum detectable effect sizes for a range of individual- and…
13 CFR 301.4 - Investment rates.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Investment rates. 301.4 Section... ELIGIBILITY, INVESTMENT RATE AND PROPOSAL AND APPLICATION REQUIREMENTS Investment Rates and Matching Share Requirements § 301.4 Investment rates. (a) Minimum Investment Rate. There is no minimum Investment Rate for a...
13 CFR 301.4 - Investment rates.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false Investment rates. 301.4 Section... ELIGIBILITY, INVESTMENT RATE AND APPLICATION REQUIREMENTS Investment Rates and Matching Share Requirements § 301.4 Investment rates. (a) Minimum Investment Rate. There is no minimum Investment Rate for a Project...
13 CFR 301.4 - Investment rates.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 13 Business Credit and Assistance 1 2012-01-01 2012-01-01 false Investment rates. 301.4 Section... ELIGIBILITY, INVESTMENT RATE AND APPLICATION REQUIREMENTS Investment Rates and Matching Share Requirements § 301.4 Investment rates. (a) Minimum Investment Rate. There is no minimum Investment Rate for a Project...
13 CFR 301.4 - Investment rates.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Investment rates. 301.4 Section... ELIGIBILITY, INVESTMENT RATE AND APPLICATION REQUIREMENTS Investment Rates and Matching Share Requirements § 301.4 Investment rates. (a) Minimum Investment Rate. There is no minimum Investment Rate for a Project...
13 CFR 301.4 - Investment rates.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 13 Business Credit and Assistance 1 2014-01-01 2014-01-01 false Investment rates. 301.4 Section... ELIGIBILITY, INVESTMENT RATE AND APPLICATION REQUIREMENTS Investment Rates and Matching Share Requirements § 301.4 Investment rates. (a) Minimum Investment Rate. There is no minimum Investment Rate for a Project...
Rotstein, Arie; Dotan, Raffy; Zigel, Levana; Greenberg, Tally; Benyamini, Yael; Falk, Bareket
2007-12-01
The purpose of this study was to investigate the effect of pre-test carbohydrate (CHO) ingestion on anaerobic-threshold assessment using the lactate-minimum test (LMT). Fifteen competitive male distance runners capable of running 10 km in 33.5-43 min were used as subjects. LMT was performed following CHO (2x300 mL, 7% solution) or comparable placebo (Pl) ingestion, in a double-blind, randomized order. The LMT consisted of two high-intensity 1 min treadmill runs (17-21 km.h(-1)), followed by an 8 min recovery period. Subsequently, subjects performed 5 min running stages, incremented by 0.6 km.h(-1) and separated by 1 min blood-sampling intervals. Tests were terminated after 3 consecutive increases in blood-lactate concentration ([La]) had been observed. Finger-tip capillary blood was sampled for [La] and blood-glucose determination 30 min before the test's onset, during the recovery phase following the 2 high-intensity runs, and following each of the subsequent 5 min stages. Heart rate (HR) and rating of perceived exertion (RPE) were recorded after each stage. The lactate-minimum speed (LMS) was determined from the individual [La]-velocity plots and was considered reflective of the anaerobic threshold. Pre-test CHO ingestion had no effect on LMS (13.19+/-1.12 km.h(-1) vs. 13.17+/-1.08 km.h(-1) in CHO and Pl, respectively), nor on [La] and glucose concentration at that speed, or on HR and RPE responses. Pre-test CHO ingestion therefore does not affect LMS or the LMT-estimated anaerobic threshold.
20 CFR 10.411 - What are the maximum and minimum rates of compensation in death cases?
Code of Federal Regulations, 2011 CFR
2011-04-01
... maximum and minimum rates of compensation in death cases? (a) Compensation for death may not exceed the... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false What are the maximum and minimum rates of compensation in death cases? 10.411 Section 10.411 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS...
20 CFR 10.411 - What are the maximum and minimum rates of compensation in death cases?
Code of Federal Regulations, 2013 CFR
2013-04-01
... maximum and minimum rates of compensation in death cases? (a) Compensation for death may not exceed the... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true What are the maximum and minimum rates of compensation in death cases? 10.411 Section 10.411 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS...
20 CFR 10.411 - What are the maximum and minimum rates of compensation in death cases?
Code of Federal Regulations, 2012 CFR
2012-04-01
... maximum and minimum rates of compensation in death cases? (a) Compensation for death may not exceed the... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false What are the maximum and minimum rates of compensation in death cases? 10.411 Section 10.411 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS...
20 CFR 10.411 - What are the maximum and minimum rates of compensation in death cases?
Code of Federal Regulations, 2014 CFR
2014-04-01
... maximum and minimum rates of compensation in death cases? (a) Compensation for death may not exceed the... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true What are the maximum and minimum rates of compensation in death cases? 10.411 Section 10.411 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS...
20 CFR 10.411 - What are the maximum and minimum rates of compensation in death cases?
Code of Federal Regulations, 2010 CFR
2010-04-01
... maximum and minimum rates of compensation in death cases? (a) Compensation for death may not exceed the... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false What are the maximum and minimum rates of compensation in death cases? 10.411 Section 10.411 Employees' Benefits OFFICE OF WORKERS' COMPENSATION PROGRAMS...
GEOS-2 C-band radar system project. Spectral analysis as related to C-band radar data analysis
NASA Technical Reports Server (NTRS)
1972-01-01
Work performed on spectral analysis of data from the C-band radars tracking GEOS-2 and on the development of a data compaction method for the GEOS-2 C-band radar data is described. The purposes of the spectral analysis study were to determine the optimum data recording and sampling rates for C-band radar data and to determine the optimum method of filtering and smoothing the data. The optimum data recording and sampling rate is defined as the rate which includes an optimum compromise between serial correlation and the effects of frequency folding. The goal in development of a data compaction method was to reduce to a minimum the amount of data stored, while maintaining all of the statistical information content of the non-compacted data. A digital computer program for computing estimates of the power spectral density function of sampled data was used to perform the spectral analysis study.
Kelly, Brian P.; Rydlund, Jr., Paul H.
2006-01-01
Riverbank filtration substantially improves the source-water quality of the Independence, Missouri well field. Coliform bacteria, Cryptosporidium, Giardia, viruses and selected constituents were analyzed in water samples from the Missouri River, two vertical wells, and a collector well. Total coliform bacteria, Cryptosporidium, Giardia, and total culturable viruses were detected in the Missouri River, but were undetected in samples from wells. Using minimum reporting levels for non-detections in well samples, minimum log removals were 4.57 for total coliform bacteria, 1.67 for Cryptosporidium, 1.67 for Giardia, and 1.15 for total culturable virus. Ground-water flow rates between the Missouri River and wells were calculated from water temperature profiles and ranged between 1.2 and 6.7 feet per day. Log removals based on sample pairs separated by the traveltime between the Missouri River and wells were infinite for total coliform bacteria (minimum detection level equal to zero), between 0.8 and 3.5 for turbidity, between 1.5 and 2.1 for Giardia, and between 0.4 and 2.6 for total culturable viruses. Cryptosporidium was detected once in the Missouri River but no corresponding well samples were available. No clear relation was evident between changes in water quality in the Missouri River and in wells for almost all constituents. Results of analyses for organic wastewater compounds and the distribution of dissolved oxygen, specific conductance, and temperature in the Missouri River indicate water quality on the south side of the river was moderately influenced by the south bank inflows to the river upstream from the Independence well field.
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2014 CFR
2014-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
NASA Astrophysics Data System (ADS)
Cheng, Jiarui; Cao, Yinping; Dou, Yihua; Li, Zhen
2017-10-01
A lab experiment was carried out to study the effects of pipe flow rate, particle concentration and pipe inner diameter ratio on proppant erosion of the reducing wall in hydraulic fracturing. The results show that the erosion rate and erosion distribution are different not only in radial direction but also in circumferential direction of the sample. The upper part of sample always has a minimum erosion rate and erosion area. Besides, the erosion rate of reducing wall is most affected by fluid flow velocity, and the erosion area is most sensitive to the change in the diameter ratio. Meanwhile, the erosion rate of reducing wall in crosslinked fracturing fluid is mainly determined by the fluid flowing state due to the high viscosity of the liquid. In general, the increase in flow velocity and diameter ratio not only cause the expansion of erosion-affected flow region in sudden contraction section, but also lead to more particles impact the wall.
Jayakumar, Amal; Chang, Bonnie X; Widner, Brittany; Bernhardt, Peter; Mulholland, Margaret R; Ward, Bess B
2017-10-01
Biological nitrogen fixation (BNF) was investigated above and within the oxygen-depleted waters of the oxygen-minimum zone of the Eastern Tropical North Pacific Ocean. BNF rates were estimated using an isotope tracer method that overcame the uncertainty of the conventional bubble method by directly measuring the tracer enrichment during the incubations. Highest rates of BNF (~4 nM day -1 ) occurred in coastal surface waters and lowest detectable rates (~0.2 nM day -1 ) were found in the anoxic region of offshore stations. BNF was not detectable in most samples from oxygen-depleted waters. The composition of the N 2 -fixing assemblage was investigated by sequencing of nifH genes. The diazotrophic assemblage in surface waters contained mainly Proteobacterial sequences (Cluster I nifH), while both Proteobacterial sequences and sequences with high identities to those of anaerobic microbes characterized as Clusters III and IV type nifH sequences were found in the anoxic waters. Our results indicate modest input of N through BNF in oxygen-depleted zones mainly due to the activity of proteobacterial diazotrophs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... scrubber followed by fabric filter Wet scrubber Dry scrubber followed by fabric filter and wet scrubber... flow rate Hourly 1×hour ✔ ✔ Minimum pressure drop across the wet scrubber or minimum horsepower or amperage to wet scrubber Continuous 1×minute ✔ ✔ Minimum scrubber liquor flow rate Continuous 1×minute...
The Effect of Minimum Wages on Adolescent Fertility: A Nationwide Analysis.
Bullinger, Lindsey Rose
2017-03-01
To investigate the effect of minimum wage laws on adolescent birth rates in the United States. I used a difference-in-differences approach and vital statistics data measured quarterly at the state level from 2003 to 2014. All models included state covariates, state and quarter-year fixed effects, and state-specific quarter-year nonlinear time trends, which provided plausibly causal estimates of the effect of minimum wage on adolescent birth rates. A $1 increase in minimum wage reduces adolescent birth rates by about 2%. The effects are driven by non-Hispanic White and Hispanic adolescents. Nationwide, increasing minimum wages by $1 would likely result in roughly 5000 fewer adolescent births annually.
Pulsar statistics and their interpretations
NASA Technical Reports Server (NTRS)
Arnett, W. D.; Lerche, I.
1981-01-01
It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.
40 CFR 60.37e - Compliance, performance testing, and monitoring guidelines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... requirements: (1) Establish maximum charge rate and minimum secondary chamber temperature as site-specific... above the maximum charge rate or below the minimum secondary chamber temperature measured as 3-hour... below the minimum secondary chamber temperature shall constitute a violation of the established...
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potentials to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or cost smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.
Screening for tinea unguium by Dermatophyte Test Strip.
Tsunemi, Y; Takehara, K; Miura, Y; Nakagami, G; Sanada, H; Kawashima, M
2014-02-01
The direct microscopy, fungal culture and histopathology that are necessary for the definitive diagnosis of tinea unguium are disadvantageous in that detection sensitivity is affected by the level of skill of the person who performs the testing, and the procedures take a long time. The Dermatophyte Test Strip, which was developed recently, can simply and easily detect filamentous fungi in samples in a short time, and there are expectations for its use as a method for tinea unguium screening. With this in mind, we examined the detection capacity of the Dermatophyte Test Strip for tinea unguium. The presence or absence of fungal elements was judged by direct microscopy and Dermatophyte Test Strip in 165 nail samples obtained from residents in nursing homes for the elderly. Moreover, the minimum sample amount required for positive determination was estimated using 32 samples that showed positive results by Dermatophyte Test Strip. The Dermatophyte Test Strip showed 98% sensitivity, 78% specificity, 84·8% positive predictive value, 97% negative predictive value and a positive and negative concordance rate of 89·1%. The minimum sample amount required for positive determination was 0·002-0·722 mg. The Dermatophyte Test Strip showed very high sensitivity and negative predictive value, and was considered a potentially useful method for tinea unguium screening. Positive determination was considered to be possible with a sample amount of about 1 mg. © 2013 British Association of Dermatologists.
SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy
Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui
2014-01-01
Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063
Ji, Baochao; Xu, Enjie; Cao, Li; Yang, Desheng; Xu, Boyong; Guo, Wentao; Aili, Rehei
2015-02-01
To analyze the results of pathogenic bacteria culture on chronic periprosthetic joint infection after total knee arthroplasty (TKA) and total hip arthroplasty (THA). The medical data of 23 patients with chronic periprosthetic joint infection after TKA or THA from September 2010 to March 2014 were reviewed. Fifteen cases of TKA and 8 cases of THA were included in this study. There were 12 male and 11 female patients with the mean age of 62 years (range from 32 to 79 years), and among them 9 patients with sinus. All patients discontinued antibiotic therapy for a minimum of 2 weeks before arthrocentesis, taking pathogenic bacteria culture and antimicrobial susceptibility test by using synovial fluid taken preoperatively and intraoperatively of revision. Common pathogenic bacteria culture and pathological biopsy were taken on tissues intraoperatively of revision. Culture-negative specimens were prolonged the period of incubation for 2 weeks. The overall culture-positive rate of all 23 patients for 1 week before revision was 30.4% (7/23), and the positive rate of culture-negative samples which prolonged for 2 weeks was 39.1% (9/23). The overall culture-positive rate of patients for 1 week intraoperatively of revision was 60.9% (14/23), and the positive rate of culture-negative samples which prolonged for 2 weeks was 82.6% (19/23). The incubation results of 7 cases (30.4%) preoperatively conformed to that of intraoperation. The culture-positive rate of pathogenic bacteria culture can be increased evidently by discontinuing antimicrobial therapy for a minimum of 2 weeks prior to the definite diagnosis.
NASA Astrophysics Data System (ADS)
Zhao, Liang; Ge, Jian-Hua
2012-12-01
Single-carrier (SC) transmission with frequency-domain equalization (FDE) is today recognized as an attractive alternative to orthogonal frequency-division multiplexing (OFDM) for communication application with the inter-symbol interference (ISI) caused by multi-path propagation, especially in shallow water channel. In this paper, we investigate an iterative receiver based on minimum mean square error (MMSE) decision feedback equalizer (DFE) with symbol rate and fractional rate samplings in the frequency domain (FD) and serially concatenated trellis coded modulation (SCTCM) decoder. Based on sound speed profiles (SSP) measured in the lake and finite-element ray tracking (Bellhop) method, the shallow water channel is constructed to evaluate the performance of the proposed iterative receiver. Performance results show that the proposed iterative receiver can significantly improve the performance and obtain better data transmission than FD linear and adaptive decision feedback equalizers, especially in adopting fractional rate sampling.
How do minimum cigarette price laws affect cigarette prices at the retail level?
Feighery, E; Ribisl, K; Schleicher, N; Zellers, L; Wellington, N
2005-01-01
Objectives: Half of US states have minimum cigarette price laws that were originally passed to protect small independent retailers from unfair price competition with larger retailers. These laws prohibit cigarettes from being sold below a minimum price that is set by a formula. Many of these laws allow cigarette company promotional incentives offered to retailers, such as buydowns and master-type programmes, to be calculated into the formula. Allowing this provision has the potential to lower the allowable minimum price. This study assesses whether stores in states with minimum price laws have higher cigarette prices and lower rates of retailer participation in cigarette company promotional incentive programmes. Design: Retail cigarette prices and retailer participation in cigarette company incentive programmes in 2001 were compared in eight states with minimum price laws and seven states without them. New York State had the most stringent minimum price law at the time of the study because it excluded promotional incentive programmes in its price setting formula; cigarette prices in New York were compared to all other states included in the study. Results: Cigarette prices were not significantly different in our sample of US states with and without cigarette minimum price laws. Cigarette prices were significantly higher in New York stores than in the 14 other states combined. Conclusions: Most existing minimum cigarette price laws appear to have little impact on the retail price of cigarettes. This may be because they allow the use of promotional programmes, which are used by manufacturers to reduce cigarette prices. New York's strategy to disallow these types of incentive programmes may result in higher minimum cigarette prices, and should also be explored as a potential policy strategy to control cigarette company marketing practices in stores. Strict cigarette minimum price laws may have the potential to reduce cigarette consumption by decreasing demand through increased cigarette prices and reduced promotional activities at retail outlets. PMID:15791016
How do minimum cigarette price laws affect cigarette prices at the retail level?
Feighery, E C; Ribisl, K M; Schleicher, N C; Zellers, L; Wellington, N
2005-04-01
Half of US states have minimum cigarette price laws that were originally passed to protect small independent retailers from unfair price competition with larger retailers. These laws prohibit cigarettes from being sold below a minimum price that is set by a formula. Many of these laws allow cigarette company promotional incentives offered to retailers, such as buydowns and master-type programmes, to be calculated into the formula. Allowing this provision has the potential to lower the allowable minimum price. This study assesses whether stores in states with minimum price laws have higher cigarette prices and lower rates of retailer participation in cigarette company promotional incentive programmes. Retail cigarette prices and retailer participation in cigarette company incentive programmes in 2001 were compared in eight states with minimum price laws and seven states without them. New York State had the most stringent minimum price law at the time of the study because it excluded promotional incentive programmes in its price setting formula; cigarette prices in New York were compared to all other states included in the study. Cigarette prices were not significantly different in our sample of US states with and without cigarette minimum price laws. Cigarette prices were significantly higher in New York stores than in the 14 other states combined. Most existing minimum cigarette price laws appear to have little impact on the retail price of cigarettes. This may be because they allow the use of promotional programmes, which are used by manufacturers to reduce cigarette prices. New York's strategy to disallow these types of incentive programmes may result in higher minimum cigarette prices, and should also be explored as a potential policy strategy to control cigarette company marketing practices in stores. Strict cigarette minimum price laws may have the potential to reduce cigarette consumption by decreasing demand through increased cigarette prices and reduced promotional activities at retail outlets.
C.M. Free; R.M. Landis; J. Grogan; M.D. Schulze; M. Lentini; O. Dunisch; NO-VALUE
2014-01-01
Knowledge of tree age-size relationships is essential towards evaluating the sustainability of harvest regulations that include minimum diameter cutting limits and fixed-length cutting cycles. Although many tropical trees form annual growth rings and can be aged from discs or cores, destructive sampling is not always an option for valuable or threatened species. We...
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
Turner, Alan H; Pritchard, Adam C; Matzke, Nicholas J
2017-01-01
Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a 'smoothed' timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches.
Turner, Alan H.; Pritchard, Adam C.; Matzke, Nicholas J.
2017-01-01
Estimating divergence times on phylogenies is critical in paleontological and neontological studies. Chronostratigraphically-constrained fossils are the only direct evidence of absolute timing of species divergence. Strict temporal calibration of fossil-only phylogenies provides minimum divergence estimates, and various methods have been proposed to estimate divergences beyond these minimum values. We explore the utility of simultaneous estimation of tree topology and divergence times using BEAST tip-dating on datasets consisting only of fossils by using relaxed morphological clocks and birth-death tree priors that include serial sampling (BDSS) at a constant rate through time. We compare BEAST results to those from the traditional maximum parsimony (MP) and undated Bayesian inference (BI) methods. Three overlapping datasets were used that span 250 million years of archosauromorph evolution leading to crocodylians. The first dataset focuses on early Sauria (31 taxa, 240 chars.), the second on early Archosauria (76 taxa, 400 chars.) and the third on Crocodyliformes (101 taxa, 340 chars.). For each dataset three time-calibrated trees (timetrees) were calculated: a minimum-age timetree with node ages based on earliest occurrences in the fossil record; a ‘smoothed’ timetree using a range of time added to the root that is then averaged over zero-length internodes; and a tip-dated timetree. Comparisons within datasets show that the smoothed and tip-dated timetrees provide similar estimates. Only near the root node do BEAST estimates fall outside the smoothed timetree range. The BEAST model is not able to overcome limited sampling to correctly estimate divergences considerably older than sampled fossil occurrence dates. Conversely, the smoothed timetrees consistently provide node-ages far older than the strict dates or BEAST estimates for morphologically conservative sister-taxa when they sit on long ghost lineages. In this latter case, the relaxed-clock model appears to be correctly moderating the node-age estimate based on the limited morphological divergence. Topologies are generally similar across analyses, but BEAST trees for crocodyliforms differ when clades are deeply nested but contain very old taxa. It appears that the constant-rate sampling assumption of the BDSS tree prior influences topology inference by disfavoring long, unsampled branches. PMID:28187191
Peel, Joanne R; Mandujano, María del Carmen
2014-12-01
The queen conch Strombus gigas represents one of the most important fishery resources of the Caribbean but heavy fishing pressure has led to the depletion of stocks throughout the region, causing the inclusion of this species into CITES Appendix II and IUCN's Red-List. In Mexico, the queen conch is managed through a minimum fishing size of 200 mm shell length and a fishing quota which usually represents 50% of the adult biomass. The objectives of this study were to determine the intrinsic population growth rate of the queen conch population of Xel-Ha, Quintana Roo, Mexico, and to assess the effects of a regulated fishing impact, simulating the extraction of 50% adult biomass on the population density. We used three different minimum size criteria to demonstrate the effects of minimum catch size on the population density and discuss biological implications. Demographic data was obtained through capture-mark-recapture sampling, collecting all animals encountered during three hours, by three divers, at four different sampling sites of the Xel-Ha inlet. The conch population was sampled each month between 2005 and 2006, and bimonthly between 2006 and 2011, tagging a total of 8,292 animals. Shell length and lip thickness were determined for each individual. The average shell length for conch with formed lip in Xel-Ha was 209.39 ± 14.18 mm and the median 210 mm. Half of the sampled conch with lip ranged between 200 mm and 219 mm shell length. Assuming that the presence of the lip is an indicator for sexual maturity, it can be concluded that many animals may form their lip at greater shell lengths than 200 mm and ought to be considered immature. Estimation of relative adult abundance and densities varied greatly depending on the criteria employed for adult classification. When using a minimum fishing size of 200 mm shell length, between 26.2% and up to 54.8% of the population qualified as adults, which represented a simulated fishing impact of almost one third of the population. When conch extraction was simulated using a classification criteria based on lip thickness, it had a much smaller impact on the population density. We concluded that the best management strategy for S. gigas is a minimum fishing size based on a lip thickness, since it has lower impact on the population density, and given that selective fishing pressure based on size may lead to the appearance of small adult individuals with reduced fecundity. Furthermore, based on the reproductive biology and the results of the simulated fishing, we suggest a minimum lip thickness of ≥ 15 mm, which ensures the protection of reproductive stages, reduces the risk of overfishing, leading to non-viable density reduction.
The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.
ERIC Educational Resources Information Center
Yuen, Terence
2003-01-01
Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)
Code of Federal Regulations, 2011 CFR
2011-01-01
... interest rate and foreign exchange rate contracts are computed on the basis of the credit equivalent amounts of such contracts. Credit equivalent amounts are computed for each of the following off-balance... Equivalent Amounts a. The minimum capital components for interest rate and foreign exchange rate contracts...
76 FR 80781 - Alcohol and Drug Testing: Determination of Minimum Random Testing Rates for 2012
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-27
...-11213, Notice No. 15] RIN 2130-AA81 Alcohol and Drug Testing: Determination of Minimum Random Testing.... According to data from FRA's Management Information System, the rail industry's random drug testing [[Page... Administrator (Administrator) has therefore determined that the minimum annual random drug testing rate for the...
Conroy, M.J.; Nichols, J.D.
1984-01-01
Several important questions in evolutionary biology and paleobiology involve sources of variation in extinction rates. In all cases of which we are aware, extinction rates have been estimated from data in which the probability that an observation (e.g., a fossil taxon) will occur is related both to extinction rates and to what we term encounter probabilities. Any statistical method for analyzing fossil data should at a minimum permit separate inferences on these two components. We develop a method for estimating taxonomic extinction rates from stratigraphic range data and for testing hypotheses about variability in these rates. We use this method to estimate extinction rates and to test the hypothesis of constant extinction rates for several sets of stratigraphic range data. The results of our tests support the hypothesis that extinction rates varied over the geologic time periods examined. We also present a test that can be used to identify periods of high or low extinction probabilities and provide an example using Phanerozoic invertebrate data. Extinction rates should be analyzed using stochastic models, in which it is recognized that stratigraphic samples are random varlates and that sampling is imperfect
Practical remarks on the heart rate and saturation measurement methodology
NASA Astrophysics Data System (ADS)
Kowal, M.; Kubal, S.; Piotrowski, P.; Staniec, K.
2017-05-01
A surface reflection-based method for measuring heart rate and saturation has been introduced as one having a significant advantage over legacy methods in that it lends itself for use in special applications such as those where a person’s mobility is of prime importance (e.g. during a miner’s work) and excluding the use of traditional clips. Then, a complete ATmega1281-based microcontroller platform has been described for performing computational tasks of signal processing and wireless transmission. In the next section remarks have been provided regarding the basic signal processing rules beginning with raw voltage samples of converted optical signals, their acquisition, storage and smoothing. This chapter ends with practical remarks demonstrating an exponential dependence between the minimum measurable heart rate and the readout resolution at different sampling frequencies for different cases of averaging depth (in bits). The following section is devoted strictly to the heart rate and hemoglobin oxygenation (saturation) measurement with the use of the presented platform, referenced to measurements obtained with a stationary certified pulsoxymeter.
Autonomous Sensor Motes Employing Liquid-Bearing Rotary Stages
2014-03-06
breaks off (Fig. 27d) as shown in the sudden change in force, indicating rotor pull off. The minimum of each curve indicates the maximum tensile load...configuration, with marks on the curves at the minimum energy positions are shown in Fig. 39. The minimum energy positions from Fig. 39are plotted as...rates between 5 and 17 Hz rotation rate plotted vs. rotor eccentricity. The minimum energy positions are indicated on each curve . 3.3 Discussion
Macharia, T N; Ochola, S; Mutua, M K; Kimani-Murage, E W
2018-02-01
Studies in urban informal settlements show widespread inappropriate infant and young child feeding (IYCF) practices and high rates of food insecurity. This study assessed the association between household food security and IYCF practices in two urban informal settlements in Nairobi, Kenya. The study adopted a longitudinal design that involved a census sample of 1110 children less than 12 months of age and their mothers aged between 12 and 49 years. A questionnaire was used to collect information on: IYCF practices and household food security. Logistic regression was used to determine the association between food insecurity and IYFC practices. The findings showed high household food insecurity; only 19.5% of the households were food secure based on Household Insecurity Access Score. Infant feeding practices were inappropriate: 76% attained minimum meal frequency; 41% of the children attained a minimum dietary diversity; and 27% attained minimum acceptable diet. With the exception of the minimum meal frequency, infants living in food secure households were significantly more likely to achieve appropriate infant feeding practices than those in food insecure households: minimum meal frequency (adjusted odds ratio (AOR)=1.26, P=0.530); minimum dietary diversity (AOR=1.84, P=0.046) and minimum acceptable diet (AOR=2.35, P=0.008). The study adds to the existing body of knowledge by demonstrating an association between household food security and infant feeding practices in low-income settings. The findings imply that interventions aimed at improving infant feeding practices and ultimately nutritional status need to also focus on improving household food security.
Van Dyke, Miriam E; Komro, Kelli A; Shah, Monica P; Livingston, Melvin D; Kramer, Michael R
2018-07-01
Despite substantial declines since the 1960's, heart disease remains the leading cause of death in the United States (US) and geographic disparities in heart disease mortality have grown. State-level socioeconomic factors might be important contributors to geographic differences in heart disease mortality. This study examined the association between state-level minimum wage increases above the federal minimum wage and heart disease death rates from 1980 to 2015 among 'working age' individuals aged 35-64 years in the US. Annual, inflation-adjusted state and federal minimum wage data were extracted from legal databases and annual state-level heart disease death rates were obtained from CDC Wonder. Although most minimum wage and health studies to date use conventional regression models, we employed marginal structural models to account for possible time-varying confounding. Quasi-experimental, marginal structural models accounting for state, year, and state × year fixed effects estimated the association between increases in the state-level minimum wage above the federal minimum wage and heart disease death rates. In models of 'working age' adults (35-64 years old), a $1 increase in the state-level minimum wage above the federal minimum wage was on average associated with ~6 fewer heart disease deaths per 100,000 (95% CI: -10.4, -1.99), or a state-level heart disease death rate that was 3.5% lower per year. In contrast, for older adults (65+ years old) a $1 increase was on average associated with a 1.1% lower state-level heart disease death rate per year (b = -28.9 per 100,000, 95% CI: -71.1, 13.3). State-level economic policies are important targets for population health research. Copyright © 2018 Elsevier Inc. All rights reserved.
Real time flight simulation methodology
NASA Technical Reports Server (NTRS)
Parrish, E. A.; Cook, G.; Mcvey, E. S.
1976-01-01
An example sensitivity study is presented to demonstrate how a digital autopilot designer could make a decision on minimum sampling rate for computer specification. It consists of comparing the simulated step response of an existing analog autopilot and its associated aircraft dynamics to the digital version operating at various sampling frequencies and specifying a sampling frequency that results in an acceptable change in relative stability. In general, the zero order hold introduces phase lag which will increase overshoot and settling time. It should be noted that this solution is for substituting a digital autopilot for a continuous autopilot. A complete redesign could result in results which more closely resemble the continuous results or which conform better to original design goals.
Investigation of wear land and rate of locally made HSS cutting tool
NASA Astrophysics Data System (ADS)
Afolalu, S. A.; Abioye, A. A.; Dirisu, J. O.; Okokpujie, I. P.; Ajayi, O. O.; Adetunji, O. R.
2018-04-01
Production technology and machining are inseparable with cutting operation playing important roles. Investigation of wear land and rate of cutting tool developed locally (C=0.56%) with an HSS cutting tool (C=0.65%) as a control was carried out. Wear rate test was carried out using Rotopol -V and Impact tester. The samples (12) of locally made cutting tools and one (1) sample of a control HSS cutting tool were weighed to get the initial weight and grit was fixed at a point for the sample to revolve at a specific time of 10 mins interval. Approach of macro transfer particles that involved mechanism of abrasion and adhesion which was termed as mechanical wear to handle abrasion adhesion processes was used in developing equation for growth wear at flank. It was observed from the wear test that best minimum wear rate of 1.09 × 10-8 and 2.053 × 10-8 for the tools developed and control were measured. MATLAB was used to simulate the wear land and rate under different conditions. Validated results of both the experimental and modeling showed that cutting speed has effect on wear rate while cutting time has predicted measure on wear land. Both experimental and modeling result showed best performances of tools developed over the control.
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare efficacy of point count sampling in bottomland hardwood forests, duration of point count, number of point counts, number of visits to each point during a breeding season, and minimum sample size are examined.
7 CFR 51.1416 - Optional determinations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... throughout the lot. (a) Edible kernel content. A minimum sample of at least 500 grams of in-shell pecans shall be used for determination of edible kernel content. After the sample is weighed and shelled... determine edible kernel content for the lot. (b) Poorly developed kernel content. A minimum sample of at...
78 FR 4855 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-23
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2009-0973] Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2013 minimum random drug testing rate at 25 percent of...
76 FR 79204 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-21
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2009-0973] Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2012 minimum random drug testing rate at 50 percent of...
76 FR 1448 - Random Drug Testing Rate for Covered Crewmembers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
... DEPARTMENT OF HOMELAND SECURITY Coast Guard [Docket No. USCG-2009-0973] Random Drug Testing Rate for Covered Crewmembers AGENCY: Coast Guard, DHS. ACTION: Notice of minimum random drug testing rate. SUMMARY: The Coast Guard has set the calendar year 2011 minimum random drug testing rate at 50 percent of...
Chlorination of alumina in kaolinitic clay
NASA Astrophysics Data System (ADS)
Grob, B.; Richarz, W.
1984-09-01
The chlorination of alumina in kaolinitic clay with Cl2 and CO gas mixtures was studied gravimetrically. The effects of the calcination method and of NaCl addition on the reactivity of the clay were examined. Fast reaction rates were achieved only with samples previously exposed to a sulfating treatment. Optimum conditions, with maximum yield and selectivity to A1C13 and minimum SiO2 conversion, were found between 770 and 970 K. At higher temperatures the SiCl4 formed poisons the reactive alumina surface by selective chemisorption with a marked decrease of the reaction rate.
The effect of the learner license Graduated Driver Licensing components on teen drivers' crashes.
Ehsani, Johnathon Pouya; Bingham, C Raymond; Shope, Jean T
2013-10-01
Most studies evaluating the effectiveness of Graduated Driver Licensing (GDL) have focused on the overall system. Studies examining individual components have rarely accounted for the confounding of multiple, simultaneously implemented components. The purpose of this paper is to quantify the effects of a required learner license duration and required hours of supervised driving on teen driver fatal crashes. States that introduced a single GDL component independent of any other during the period 1990-2009 were identified. Monthly and quarterly fatal crash rates per 100,000 population of 16- and 17-year-old drivers were analyzed using single-state time series analysis, adjusting for adult crash rates and gasoline prices. Using the parameter estimates from each state's time series model, the pooled effect of each GDL component on 16- and 17-year-old drivers' fatal crashes was estimated using a random effects meta-analytic model to combine findings across states. In three states, a six-month minimum learner license duration was associated with a significant decline in combined 16- and 17-year-old drivers' fatal crash rates. The pooled effect of the minimum learner license duration across all states in the sample was associated with a significant change in combined 16- and 17-year-old driver fatal crash rates of -.07 (95% Confidence Interval [CI] -.11, -.03). Following the introduction of 30 h of required supervised driving in one state, novice drivers' fatal crash rates increased 35%. The pooled effect across all states in the study sample of having a supervised driving hour requirement was not significantly different from zero (.04, 95% CI -.15, .22). These findings suggest that a learner license duration of at least six-months may be necessary to achieve a significant decline in teen drivers' fatal crash rates. Evidence of the effect of required hours of supervised driving on teen drivers' fatal crash rates was mixed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2014 CFR
2014-01-01
... basic pay falls below the minimum rate of their band. 9701.325 Section 9701.325 Administrative Personnel... Administration Setting and Adjusting Rate Ranges § 9701.325 Treatment of employees whose rate of basic pay falls... under § 9701.323 because of an unacceptable rating of record and whose rate of basic pay falls below the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... basic pay falls below the minimum rate of their band. 9701.325 Section 9701.325 Administrative Personnel... Administration Setting and Adjusting Rate Ranges § 9701.325 Treatment of employees whose rate of basic pay falls... under § 9701.323 because of an unacceptable rating of record and whose rate of basic pay falls below the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... basic pay falls below the minimum rate of their band. 9701.325 Section 9701.325 Administrative Personnel... Administration Setting and Adjusting Rate Ranges § 9701.325 Treatment of employees whose rate of basic pay falls... under § 9701.323 because of an unacceptable rating of record and whose rate of basic pay falls below the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... basic pay falls below the minimum rate of their band. 9701.325 Section 9701.325 Administrative Personnel... Administration Setting and Adjusting Rate Ranges § 9701.325 Treatment of employees whose rate of basic pay falls... under § 9701.323 because of an unacceptable rating of record and whose rate of basic pay falls below the...
Investigating the Sensitivity of Model Intraseasonal Variability to Minimum Entrainment
NASA Astrophysics Data System (ADS)
Hannah, W. M.; Maloney, E. D.
2008-12-01
Previous studies have shown that using a Relaxed Arakawa-Schubert (RAS) convective parameterization with appropriate convective triggers and assumptions about rain re-evaporation produces realistic intraseasonal variability. RAS represents convection with an ensemble of clouds detraining at different heights, each with different entrainment rate, the highest clouds having the lowest entrainment rates. If tropospheric temperature gradients are weak and boundary layer moist static energy is relatively constant, then by limiting the minimum entrainment rate deep convection is suppressed in the presence of dry tropospheric air. This allows moist static energy to accumulate and be discharged during strong intraseasonal convective events, which is consistent with the discharge/recharge paradigm. This study will examine the sensitivity of intra-seasonal variability to changes in minimum entrainment rate in the NCAR-CAM3 with the RAS scheme. Simulations using several minimum entrainment rate thresholds will be investigated. A frequency-wavenumber analysis will show the improvement of the MJO signal as minimum entrainment rate is increased. The spatial and vertical structure of MJO-like disturbances will be examined, including an analysis of the time evolution of vertical humidity distribution for each simulation. Simulated results will be compared to observed MJO events in NCEP-1 reanalysis and CMAP precipitation.
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Behavioral and physiological significance of minimum resting metabolic rate in king penguins.
Halsey, L G; Butler, P J; Fahlman, A; Woakes, A J; Handrich, Y
2008-01-01
Because fasting king penguins (Aptenodytes patagonicus) need to conserve energy, it is possible that they exhibit particularly low metabolic rates during periods of rest. We investigated the behavioral and physiological aspects of periods of minimum metabolic rate in king penguins under different circumstances. Heart rate (f(H)) measurements were recorded to estimate rate of oxygen consumption during periods of rest. Furthermore, apparent respiratory sinus arrhythmia (RSA) was calculated from the f(H) data to determine probable breathing frequency in resting penguins. The most pertinent results were that minimum f(H) achieved (over 5 min) was higher during respirometry experiments in air than during periods ashore in the field; that minimum f(H) during respirometry experiments on water was similar to that while at sea; and that RSA was apparent in many of the f(H) traces during periods of minimum f(H) and provides accurate estimates of breathing rates of king penguins resting in specific situations in the field. Inferences made from the results include that king penguins do not have the capacity to reduce their metabolism to a particularly low level on land; that they can, however, achieve surprisingly low metabolic rates at sea while resting in cold water; and that during respirometry experiments king penguins are stressed to some degree, exhibiting an elevated metabolism even when resting.
29 CFR 511.2 - Initiation of proceedings; notices of hearings.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., giving notice of hearings by industry committees to recommend the minimum rate or rates of wages to be... the minimum rate or rates of wages for all industry in American Samoa. All such orders will make... Section 511.2 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR...
Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias
2011-12-07
The main focus of our study is to investigate how the performance of digital timing methods is affected by sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions such as what will be the minimum sampling frequencies? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, due to the aliasing effect, some artifacts are produced in the timing resolution estimations; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally over the Nyquist rate, a proper signal interpolation is important. A sharp roll-off (higher order) filter is required to separate the baseband signal from its replicates to avoid the aliasing, but in return the computation will be higher. We demonstrated the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed that there is no significant timing resolution degradation down to 1.3 Ghz sampling frequency, and the computation requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool checking constant timing resolution behavior of a given timing pick-off method regardless of the source location change. Lastly, the performance comparison for several digital timing methods is also shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLaren, Joyce; Davidson, Carolyn; Miller, John
Utilities are proposing changes to residential rate structures to address concerns about lost revenue due to increased adoption of distributed solar generation. An investigation of the impacts of increased fixed charges, minimum bills and residential demand charges on PV and non-PV customer bills suggests that minimum bills more accurately capture utilities' revenue requirement than fixed charges, while not acting as a disincentive to efficiency or negatively impacting low-income customers.
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
Bavarnegin, E; Fathabadi, N; Vahabi Moghaddam, M; Vasheghani Farahani, M; Moradi, M; Babakhni, A
2013-03-01
Radon exhalation rates from building materials used in high background radiation areas (HBRA) of Ramsar were measured using an active radon gas analyzer with an emanation container. Radon exhalation rates from these samples varied from below the lower detection limit up to 384 Bq.m(-2) h(-1). The (226)Ra, (232)Th and (40)K contents were also measured using a high resolution HPGe gamma- ray spectrometer system. The activity concentration of (226)Ra, (232)Th and (40)K content varied from below the minimum detection limit up to 86,400 Bq kg(-1), 187 Bq kg(-1) and 1350 Bq kg(-1), respectively. The linear correlation coefficient between radon exhalation rate and radium concentration was 0.90. The result of this survey shows that radon exhalation rate and radium content in some local stones used as basements are extremely high and these samples are main sources of indoor radon emanation as well as external gamma radiation from uranium series. Copyright © 2012 Elsevier Ltd. All rights reserved.
Liu, Zhaojun; Zhou, Jing; Gu, Liankun; Deng, Dajun
2016-08-30
Methylation changes of CpG islands can be determined using PCR-based assays. However, the exact impact of the amount of input templates (TAIT) on DNA methylation analysis has not been previously recognized. Using COL2A1 gene as an input reference, TAIT difference between human tissues with methylation-positive and -negative detection was calculated for two representative genes GFRA1 and P16. Results revealed that TAIT in GFRA1 methylation-positive frozen samples (n = 332) was significantly higher than the methylation-negative ones (n = 44) (P < 0.001). Similar difference was found in P16 methylation analysis. The TAIT-related effect was also observed in methylation-specific PCR (MSP) and denatured high performance liquid chromatography (DHPLC) analysis. Further study showed that the minimum TAIT for a successful MethyLight PCR reaction should be ≥ 9.4 ng (CtCOL2A1 ≤ 29.3), when the cutoff value of the methylated-GFRA1 proportion for methylation-positive detection was set at 1.6%. After TAIT of the methylation non-informative frozen samples (n = 94; CtCOL2A1 > 29.3) was increased above the minimum TAIT, the methylation-positive rate increased from 72.3% to 95.7% for GFRA1 and 26.6% to 54.3% for P16, respectively (Ps < 0.001). Similar results were observed in the FFPE samples. In conclusion, TAIT critically affects results of various PCR-based DNA methylation analyses. Characterization of the minimum TAIT for target CpG islands is essential to avoid false-negative results.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Minimum wage. 551.301 Section 551.301... FAIR LABOR STANDARDS ACT Minimum Wage Provisions Basic Provision § 551.301 Minimum wage. (a)(1) Except... employees wages at rates not less than the minimum wage specified in section 6(a)(1) of the Act for all...
The Danish National Health Survey 2010. Study design and respondent characteristics.
Christensen, Anne Illemann; Ekholm, Ola; Glümer, Charlotte; Andreasen, Anne Helms; Hvidberg, Michael Falk; Kristensen, Peter Lund; Larsen, Finn Breinholt; Ortiz, Britta; Juel, Knud
2012-06-01
In 2010 the five Danish regions and the National Institute of Public Health at the University of Southern Denmark conducted a national representative health survey among the adult population in Denmark. This paper describes the study design and the sample and study population as well as the content of the questionnaire. The survey was based on five regional stratified random samples and one national random sample. The samples were mutually exclusive. A total of 298,550 individuals (16 years or older) were invited to participate. Information was collected using a mixed mode approach (paper and web questionnaires). A questionnaire with a minimum of 52 core questions was used in all six subsamples. Calibrated weights were computed in order to take account of the complex survey design and reduce non-response bias. In all, 177,639 individuals completed the questionnaire (59.5%). The response rate varied from 52.3% in the Capital Region of Denmark sample to 65.5% in the North Denmark Region sample. The response rate was particularly low among young men, unmarried people and among individuals with a different ethnic background than Danish. The survey was a result of extensive national cooperation across sectors, which makes it unique in its field of application, e.g. health surveillance, planning and prioritizing public health initiatives and research. However, the low response rate in some subgroups of the study population can pose problems in generalizing data, and efforts to increase the response rate will be important in the forthcoming surveys.
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
Determining Aliasing in Isolated Signal Conditioning Modules
NASA Technical Reports Server (NTRS)
2009-01-01
The basic concept of aliasing is this: Converting analog data into digital data requires sampling the signal at a specific rate, known as the sampling frequency. The result of this conversion process is a new function, which is a sequence of digital samples. This new function has a frequency spectrum, which contains all the frequency components of the original signal. The Fourier transform mathematics of this process show that the frequency spectrum of the sequence of digital samples consists of the original signal s frequency spectrum plus the spectrum shifted by all the harmonics of the sampling frequency. If the original analog signal is sampled in the conversion process at a minimum of twice the highest frequency component contained in the analog signal, and if the reconstruction process is limited to the highest frequency of the original signal, then the reconstructed signal accurately duplicates the original analog signal. It is this process that can give birth to aliasing.
Reliability of risk-adjusted outcomes for profiling hospital surgical quality.
Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B
2014-05-01
Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
Teuber, Lena; Schukat, Anna; Hagen, Wilhelm; Auel, Holger
2013-01-01
Oxygen minimum zones (OMZs) affect distribution patterns, community structure and metabolic processes of marine organisms. Due to the prominent role of zooplankton, especially copepods, in the marine carbon cycle and the predicted intensification and expansion of OMZs, it is essential to understand the effects of hypoxia on zooplankton distribution and ecophysiology. For this study, calanoid copepods were sampled from different depths (0-1800 m) at eight stations in the eastern tropical Atlantic (3 °47'N to 18 °S) during three expeditions in 2010 and 2011. Their horizontal and vertical distribution was determined and related to the extent and intensity of the OMZ, which increased from north to south with minimum O2 concentrations (12.7 µmol kg(-1)) in the southern Angola Gyre. Calanoid copepod abundance was highest in the northeastern Angola Basin and decreased towards equatorial regions as well as with increasing depth. Maximum copepod biodiversity was observed in the deep waters of the central Angola Basin. Respiration rates and enzyme activities were measured to reveal species-specific physiological adaptations. Enzyme activities of the electron transport system (ETS) and lactate dehydrogenase (LDH) served as proxies for aerobic and anaerobic metabolic activity, respectively. Mass-specific respiration rates and ETS activities decreased with depth of occurrence, consistent with vertical changes in copepod body mass and ambient temperature. Copepods of the families Eucalanidae and Metridinidae dominated within the OMZ. Several of these species showed adaptive characteristics such as lower metabolic rates, additional anaerobic activity and diel vertical migration that enable them to successfully inhabit hypoxic zones.
Teuber, Lena; Schukat, Anna; Hagen, Wilhelm; Auel, Holger
2013-01-01
Oxygen minimum zones (OMZs) affect distribution patterns, community structure and metabolic processes of marine organisms. Due to the prominent role of zooplankton, especially copepods, in the marine carbon cycle and the predicted intensification and expansion of OMZs, it is essential to understand the effects of hypoxia on zooplankton distribution and ecophysiology. For this study, calanoid copepods were sampled from different depths (0–1800 m) at eight stations in the eastern tropical Atlantic (3°47′N to 18°S) during three expeditions in 2010 and 2011. Their horizontal and vertical distribution was determined and related to the extent and intensity of the OMZ, which increased from north to south with minimum O2 concentrations (12.7 µmol kg−1) in the southern Angola Gyre. Calanoid copepod abundance was highest in the northeastern Angola Basin and decreased towards equatorial regions as well as with increasing depth. Maximum copepod biodiversity was observed in the deep waters of the central Angola Basin. Respiration rates and enzyme activities were measured to reveal species-specific physiological adaptations. Enzyme activities of the electron transport system (ETS) and lactate dehydrogenase (LDH) served as proxies for aerobic and anaerobic metabolic activity, respectively. Mass-specific respiration rates and ETS activities decreased with depth of occurrence, consistent with vertical changes in copepod body mass and ambient temperature. Copepods of the families Eucalanidae and Metridinidae dominated within the OMZ. Several of these species showed adaptive characteristics such as lower metabolic rates, additional anaerobic activity and diel vertical migration that enable them to successfully inhabit hypoxic zones. PMID:24223716
20 CFR 225.15 - Overall Minimum PIA.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Security Act based on combined railroad and social security earnings. The Overall Minimum PIA is used in computing the social security overall minimum guaranty amount. The overall minimum guaranty rate annuity... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Employee, Spouse and Divorced Spouse Annuities § 225...
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate population means. The fitting procedure is illustrated from data used to estimate Missouri corn acreage.
Lehrer, Paul; Karavidas, Maria; Lu, Shou-En; Vaschillo, Evgeny; Vaschillo, Bronya; Cheng, Andrew
2010-05-01
Seven professional airplane pilots participated in a one-session test in a Boeing 737-800 simulator. Mental workload for 18 flight tasks was rated by experienced test pilots (hereinafter called "expert ratings") and by study participants' self-report on NASA's Task Load Index (TLX) scale. Pilot performance was rated by a check pilot. The standard deviation of R-R intervals (SDNN) significantly added 3.7% improvement over the TLX in distinguishing high from moderate-load tasks and 2.3% improvement in distinguishing high from combined moderate and low-load tasks. Minimum RRI in the task significantly discriminated high- from medium- and low-load tasks, but did not add significant predictive variance to the TLX. The low-frequency/high-frequency (LF:HF) RRI ratio based on spectral analysis of R-R intervals, and ventricular relaxation time were each negatively related to pilot performance ratings independently of TLX values, while minimum and average RRI were positively related, showing added contribution of these cardiac measures for predicting performance. Cardiac results were not affected by controlling either for respiration rate or motor activity assessed by accelerometry. The results suggest that cardiac assessment can be a useful addition to self-report measures for determining flight task mental workload and risk for performance decrements. Replication on a larger sample is needed to confirm and extend the results. Copyright 2010 Elsevier B.V. All rights reserved.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix comprises visual masking by luminance and contrast technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaetsu, I.; Ito, A.; Hayashi, K.
1973-06-01
The effect of homogeneity of polymerization phase and monomer concentration on the temperature dependence of initial polymerization rate was studied in the radiation-induced radical polymerization of binary systems consisting of glass-forming monomer and solvent. In the polymerization of a completely homogeneous system such as HEMA-propylene glycol, a maximum and a minimum in polymerization rates as a function of temperature, characteristic of the polymerization in glass-forming systems, were observed for all monomer concentrations. However, in the heterogeneous polymerization systems such as HEMA-triacetin and HEMAisoamyl acetate, maximum and minimum rates were observed in monomer-rich compositions but not at low monomer concentrations. Furthermore,more » in the HEMA-dioctyl phthalate polymerization system, which is extremely heterogeneous, no maximum and minimum rates were observed at any monomer concentration. The effect of conversion on the temperature dependence of polymerization rate in homogeneous bulk polymerization of HEMA and GMA was investigated. Maximum and minimum rates were observed clearly in conversions less than 10% in the case of HEMA and less than 50% in the case of GMA, but the maximum and minimum changed to a mere inflection in the curve at higher conversions. A similar effect of polymer concentration on the temperature dependence of polymerization rate in the GMA-poly(methyl methacrylate) system was also observed. It is deduced that the change in temperature dependence of polymerization rate is attributed to the decrease in contribution of mutual termination reaction of growing chain radicals to the polymerization rate. (auth)« less
Changes of Photochemical Properties of Dissolved Organic Matter During a Hydrological Year
NASA Astrophysics Data System (ADS)
Porcal, P.; Dillon, P. J.
2009-05-01
The fate of dissolved organic matter (DOM) in lakes and streams is significantly affected by photochemical transformation of DOM. A series of laboratory photochemical experiments has been conducted to describe long term changes in photochemical properties of DOM. The stream samples used in this study originated from three different watersheds in Dorset area (Ontario, Canada), the first watershed has predominantly coniferous cove, the second one is dominated by maple and birch, and a large wetland dominates to the third one. The first order kinetic constant rate was used as a suitable characteristic of photochemical properties of DOM. The higher rates were observed in samples from watershed dominated by coniferous forest while the lower rates were determined in deciduous forest. Kinetic rates from all three watersheds showed sinusoidal pattern during the hydrological year. The rates increased steadily during autumn and winter and decreased during spring and summer. The highest values were observed during the spring melt events when the fresh DOM was flushed out from terrestrial sources. The minimum rate constants were in summer when the discharge was lower. The photochemical properties of DOM changes during the hydrological year and correspond to the seasonal cycles of terrestrial organic matter.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high degree node. The iterative decoding threshold for proposed rate 1/2 codes are lower, by about 0.2 dB, than the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance to achieve low error floor. Also to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieves low iterative decoding threshold and linear minimum distance. We start with a rate 1/2 protograph LDPC code with degree-3 nodes and one high degree node. Higher rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The condition where all constraints are combined corresponds to the highest rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 for rate 1/2 guarantees linear minimum distance property to be preserved for higher rates. Through examples we show that the iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES... PHYSICAL AND DIGITAL PHONORECORDS Interactive Streaming and Limited Downloads § 385.13 Minimum royalty...
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of statistical parameters involved in classifiers. It cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively based on 3 large-scale cancer microarray datasets provided by the second phase of MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction in improving classifier reliability. PMID:23861920
14 CFR 29.49 - Performance at minimum operating speed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Performance at minimum operating speed. 29... minimum operating speed. (a) For each Category A helicopter, the hovering performance must be determined... than helicopters, the steady rate of climb at the minimum operating speed must be determined over the...
14 CFR 29.49 - Performance at minimum operating speed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Performance at minimum operating speed. 29... minimum operating speed. (a) For each Category A helicopter, the hovering performance must be determined... than helicopters, the steady rate of climb at the minimum operating speed must be determined over the...
Morris, Roisin; MacNeela, Padraig; Scott, Anne; Treacy, Pearl; Hyde, Abbey; O'Brien, Julian; Lehwaldt, Daniella; Byrne, Anne; Drennan, Jonathan
2008-04-01
In a study to establish the interrater reliability of the Irish Nursing Minimum Data Set (I-NMDS) for mental health difficulties relating to the choice of reliability test statistic were encountered. The objective of this paper is to highlight the difficulties associated with testing interrater reliability for an ordinal scale using a relatively homogenous sample and the recommended kw statistic. One pair of mental health nurses completed the I-NMDS for mental health for a total of 30 clients attending a mental health day centre over a two-week period. Data was analysed using the kw and percentage agreement statistics. A total of 34 of the 38 I-NMDS for mental health variables with lower than acceptable levels of kw reliability scores achieved acceptable levels of reliability according to their percentage agreement scores. The study findings implied that, due to the homogeneity of the sample, low variability within the data resulted in the 'base rate problem' associated with the use of kw statistic. Conclusions point to the interpretation of kw in tandem with percentage agreement scores. Suggestions that kw scores were low due to chance agreement and that one should strive to use a study sample with known variability are queried.
Antarctic meteor observations using the Davis MST and meteor radars
NASA Astrophysics Data System (ADS)
Holdsworth, David A.; Murphy, Damian J.; Reid, Iain M.; Morris, Ray J.
2008-07-01
This paper presents the meteor observations obtained using two radars installed at Davis (68.6°S, 78.0°E), Antarctica. The Davis MST radar was installed primarily for observation of polar mesosphere summer echoes, with additional transmit and receive antennas installed to allow all-sky interferometric meteor radar observations. The Davis meteor radar performs dedicated all-sky interferometric meteor radar observations. The annual count rate variation for both radars peaks in mid-summer and minimizes in early Spring. The height distribution shows significant annual variation, with minimum (maximum) peak heights and maximum (minimum) height widths in early Spring (mid-summer). Although the meteor radar count rate and height distribution variations are consistent with a similar frequency meteor radar operating at Andenes (69.3°N), the peak heights show a much larger variation than at Andenes, while the count rate maximum-to-minimum ratios show a much smaller variation. Investigation of the effects of the temporal sampling parameters suggests that these differences are consistent with the different temporal sampling strategies used by the Davis and Andenes meteor radars. The new radiant mapping procedure of [Jones, J., Jones, W., Meteor radiant activity mapping using single-station radar observations, Mon. Not. R. Astron. Soc., 367(3), 1050-1056, doi: 10.1111/j.1365-2966.2006.10025.x, 2006] is investigated. The technique is used to detect the Southern delta-Aquarid meteor shower, and a previously unknown weak shower. Meteoroid speeds obtained using the Fresnel transform are presented. The diurnal, annual, and height variation of meteoroid speeds are presented, with the results found to be consistent with those obtained using specular meteor radars. Meteoroid speed estimates for echoes identified as Southern delta-Aquarid and Sextantid meteor candidates show good agreement with the theoretical pre-atmospheric speeds of these showers (41 km s -1 and 32 km s -1, respectively). The meteoroid speeds estimated for these showers show decreasing speed with decreasing height, consistent with the effects of meteoroid deceleration. Finally, we illustrate how the new radiant mapping and meteoroid speed techniques can be combined for unambiguous meteor shower detection, and use these techniques to detect a previously unknown weak shower.
Technical note: false catastrophic age-at-death profiles in commingled bone deposits.
Sołtysiak, Arkadiusz
2013-12-01
Age-at-death profiles obtained using the minimum number of individuals (MNI) for mass deposits of commingled human remains may be biased by over-representation of subadult individuals. A computer simulation designed in the R environment has shown that this effect may lead to misinterpretation of such samples even in cases where the completeness rate is relatively high. The simulation demonstrates that the use of the Most Likely Number of Individuals (MLNI) substantially reduces this bias. Copyright © 2013 Wiley Periodicals, Inc.
Laboratory Transmission of Venezuelan Equine Encephalomyelitis Virus by the Tick Hyalomma Truncatum
1994-01-01
equine On day 21 after infestation of the first guinea-pig, none encephalomyelitis virus by the tick of 95 unfed nv mphs sampled contained virus...Epi/ootic strains oft Venezuelan equine encephalo- nymphs [minimum infection rate =2 200 1% I"( 1 contained inveliti. \\EE’ virus Alp/tavirus. family...Togaviridae virus mean titre= 102 1 1PFU ’. About 200 unfed nymphs ý:iuse serious disease tin horse % and humans throughout were placed on a guinea-pig at
Spencer, Kevin; Cuckle, Howard S
2002-10-01
To assess the within person biological variability of first trimester maternal serum biochemical markers of trisomy 21 across the 10-14 week gestational period. To evaluate whether repeat sampling and testing of free beta-hCG and PAPP-A during this period would result in an improved detection rate. Women presenting at the first trimester OSCAR clinic have blood collected prior to ultrasound dating and nuchal translucency measurement. All samples are analysed for free beta-hCG and PAPP-A before an accurate estimate of gestation is available. In 10% of cases the gestation is prior to the minimum time for NT measurement (11 weeks) and these women are rebooked for a repeat visit to the clinic at the appropriate time. Our fetal database was interrogated to obtain cases in which two maternal blood samples had been collected and analysed in the 10-14 week period. Using data from the marker correlations and statistical modelling, the impact of repeat testing on detection rate for trisomy 21 at a fixed 5% false positive rate, was assessed. 261 pairs of data were available for analysis collected over a 3 year period. The correlation between free beta-hCG in sample 1 and sample 2 was 0.890 and that for PAPP-A was 0.827. The average within person biological variation for free beta-hCG was 21% and 32% for PAPP-A. The increase in detection rate when using both sets of marker data was 3.5% when using serum biochemistry and maternal age, and 1.3% when using nuchal translucency, serum biochemistry and maternal age. Repeat sampling and testing of maternal serum biochemical markers is unlikely to substantially improve first trimester screening performance. Copyright 2002 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Martin, M. W.; Kubiak, E. T.
1982-01-01
A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag which produce a measurement uncertainty larger than the minimum impulse rate change. To ensure convergence to a minimum impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, rate deadzone, and 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.
Evaluation of a Dust Control for a Small Slab-Riding Dowel Drill for Concrete Pavement
Echt, Alan; Mead, Kenneth
2016-01-01
Purpose To assess the effectiveness of local exhaust ventilation to control respirable crystalline silica exposures to acceptable levels during concrete dowel drilling. Approach Personal breathing zone samples for respirable dust and crystalline silica were collected while laborers drilled holes 3.5 cm diameter by 36 cm deep in a concrete slab using a single-drill slab-riding dowel drill equipped with local exhaust ventilation. Data were collected on air flow, weather, and productivity. Results All respirable dust samples were below the 90 µg detection limit which, when combined with the largest sample volume, resulted in a minimum detectable concentration of 0.31 mg m−3. This occurred in a 32-min sample collected when 27 holes were drilled. Quartz was only detected in one air sample; 0.09 mg m−3 of quartz was found on an 8-min sample collected during a drill maintenance task. The minimum detectable concentration for quartz in personal air samples collected while drilling was performed was 0.02 mg m−3. The average number of holes drilled during each drilling sample was 23. Over the course of the 2-day study, air flow measured at the dust collector decreased from 2.2 to 1.7 m3 s−1. Conclusions The dust control performed well under the conditions of this test. The initial duct velocity with a clean filter was sufficient to prevent settling, but gradually fell below the recommended value to prevent dust from settling in the duct. The practice of raising the drill between each hole may have prevented the dust from settling in the duct. A slightly higher flow rate and an improved duct design would prevent settling without regard to the position of the drill. PMID:26826033
Evaluation of a Dust Control for a Small Slab-Riding Dowel Drill for Concrete Pavement.
Echt, Alan; Mead, Kenneth
2016-05-01
To assess the effectiveness of local exhaust ventilation to control respirable crystalline silica exposures to acceptable levels during concrete dowel drilling. Personal breathing zone samples for respirable dust and crystalline silica were collected while laborers drilled holes 3.5 cm diameter by 36 cm deep in a concrete slab using a single-drill slab-riding dowel drill equipped with local exhaust ventilation. Data were collected on air flow, weather, and productivity. All respirable dust samples were below the 90 µg detection limit which, when combined with the largest sample volume, resulted in a minimum detectable concentration of 0.31 mg m(-3). This occurred in a 32-min sample collected when 27 holes were drilled. Quartz was only detected in one air sample; 0.09 mg m(-3) of quartz was found on an 8-min sample collected during a drill maintenance task. The minimum detectable concentration for quartz in personal air samples collected while drilling was performed was 0.02 mg m(-3). The average number of holes drilled during each drilling sample was 23. Over the course of the 2-day study, air flow measured at the dust collector decreased from 2.2 to 1.7 m(3) s(-1). The dust control performed well under the conditions of this test. The initial duct velocity with a clean filter was sufficient to prevent settling, but gradually fell below the recommended value to prevent dust from settling in the duct. The practice of raising the drill between each hole may have prevented the dust from settling in the duct. A slightly higher flow rate and an improved duct design would prevent settling without regard to the position of the drill. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2016.
Nitrite oxidation in the Namibian oxygen minimum zone.
Füssel, Jessika; Lam, Phyllis; Lavik, Gaute; Jensen, Marlene M; Holtappels, Moritz; Günter, Marcel; Kuypers, Marcel M M
2012-06-01
Nitrite oxidation is the second step of nitrification. It is the primary source of oceanic nitrate, the predominant form of bioavailable nitrogen in the ocean. Despite its obvious importance, nitrite oxidation has rarely been investigated in marine settings. We determined nitrite oxidation rates directly in (15)N-incubation experiments and compared the rates with those of nitrate reduction to nitrite, ammonia oxidation, anammox, denitrification, as well as dissimilatory nitrate/nitrite reduction to ammonium in the Namibian oxygen minimum zone (OMZ). Nitrite oxidation (≤372 nM NO(2)(-) d(-1)) was detected throughout the OMZ even when in situ oxygen concentrations were low to non-detectable. Nitrite oxidation rates often exceeded ammonia oxidation rates, whereas nitrate reduction served as an alternative and significant source of nitrite. Nitrite oxidation and anammox co-occurred in these oxygen-deficient waters, suggesting that nitrite-oxidizing bacteria (NOB) likely compete with anammox bacteria for nitrite when substrate availability became low. Among all of the known NOB genera targeted via catalyzed reporter deposition fluorescence in situ hybridization, only Nitrospina and Nitrococcus were detectable in the Namibian OMZ samples investigated. These NOB were abundant throughout the OMZ and contributed up to ~9% of total microbial community. Our combined results reveal that a considerable fraction of the recently recycled nitrogen or reduced NO(3)(-) was re-oxidized back to NO(3)(-) via nitrite oxidation, instead of being lost from the system through the anammox or denitrification pathways.
Precipitation in a lead calcium tin anode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Gonzalez, Francisco A., E-mail: fco.aurelio@inbox.com; Centro de Innovacion, Investigacion y Desarrollo en Ingenieria y Tecnologia, Universidad Autonoma de Nuevo Leon; Camurri, Carlos G., E-mail: ccamurri@udec.cl
Samples from a hot rolled sheet of a tin and calcium bearing lead alloy were solution heat treated at 300 Degree-Sign C and cooled down to room temperature at different rates; these samples were left at room temperature to study natural precipitation of CaSn{sub 3} particles. The samples were aged for 45 days before analysing their microstructure, which was carried out in a scanning electron microscope using secondary and backscattered electron detectors. Selected X-ray spectra analyses were conducted to verify the nature of the precipitates. Images were taken at different magnifications in both modes of observation to locate the precipitatesmore » and record their position within the images and calculate the distance between them. Differential scanning calorimeter analyses were conducted on selected samples. It was found that the mechanical properties of the material correlate with the minimum average distance between precipitates, which is related to the average cooling rate from solution heat treatment. - Highlights: Black-Right-Pointing-Pointer The distance between precipitates in a lead alloy is recorded. Black-Right-Pointing-Pointer The relationship between the distance and the cooling rate is established. Black-Right-Pointing-Pointer It is found that the strengthening of the alloy depends on the distance between precipitates.« less
Zhong, Hua; Redo-Sanchez, Albert; Zhang, X-C
2006-10-02
We present terahertz (THz) reflective spectroscopic focal-plane imaging of four explosive and bio-chemical materials (2, 4-DNT, Theophylline, RDX and Glutamic Acid) at a standoff imaging distance of 0.4 m. The 2 dimension (2-D) nature of this technique enables a fast acquisition time and is very close to a camera-like operation, compared to the most commonly used point emission-detection and raster scanning configuration. The samples are identified by their absorption peaks extracted from the negative derivative of the reflection coefficient respect to the frequency (-dr/dv) of each pixel. Classification of the samples is achieved by using minimum distance classifier and neural network methods with a rate of accuracy above 80% and a false alarm rate below 8%. This result supports the future application of THz time-domain spectroscopy (TDS) in standoff distance sensing, imaging, and identification.
40 CFR Table 3 to Subpart Jjjjjj... - Operating Limits for Boilers With Emission Limits
Code of Federal Regulations, 2013 CFR
2013-07-01
... as defined in § 63.11237. 4. Dry sorbent or activated carbon injection control Maintain the 30-day rolling average sorbent or activated carbon injection rate at or above the minimum sorbent injection rate or minimum activated carbon injection rate as defined in § 63.11237. When your boiler operates at...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2011 CFR
2011-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2013 CFR
2013-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2014 CFR
2014-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2012 CFR
2012-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
Prediction of obliteration after gamma knife surgery for cerebral arteriovenous malformations.
Karlsson, B; Lindquist, C; Steiner, L
1997-03-01
To define the factors of importance for the obliteration of cerebral arteriovenous malformations (AVMs), thus making a prediction of the probability for obliteration possible. In 945 AVMs of a series of 1319 patients treated with the gamma knife during 1970 to 1990, the relationship between patient, AVMs, and treatment parameters on the one hand and the obliteration of the nidus on the other was analyzed. The obliteration rate increased both with increased minimum (lowest periphery) and average dose and decreased with increased AVM volume. The minimum dose to the AVMs was the decisive dose factor for the treatment result. The higher the minimum dose, the higher the chance for total obliteration. The curve illustrating this relation increased logarithmically to a value of 87%. A higher average dose shortened the latency to AVM obliteration. For the obliterated cases, the larger the malformation, the lower the minimum dose used. This prompted us to relate the obliteration rate to the product minimum dose (AVM volume)1/3 (K index). The obliteration rate increased linearly with the K index up to a value of approximately 27, and for higher K values, the obliteration rate had a constant value of approximately 80%. For the group of 273 cases treated with a minimum dose of at least 25 Gy, the obliteration rate at the study end point (defined as 2-yr latency) was 80% (95% confidence interval = 75-85%). If obliterations that occurred beyond the end point are included, the obliteration rate increased to 85% (81-89%). The probability of obliteration of AVMs after gamma knife surgery is related both to the lowest dose to the AVMs and the AVM volume, and it can be predicted using the K index.
An Examination of Sunspot Number Rates of Growth and Decay in Relation to the Sunspot Cycle
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
On the basis of annual sunspot number averages, sunspot number rates of growth and decay are examined relative to both minimum and maximum amplitudes and the time of their occurrences using cycles 12 through present, the most reliably determined sunspot cycles. Indeed, strong correlations are found for predicting the minimum and maximum amplitudes and the time of their occurrences years in advance. As applied to predicting sunspot minimum for cycle 24, the next cycle, its minimum appears likely to occur in 2006, especially if it is a robust cycle similar in nature to cycles 17-23.
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1970-01-01
The theory used in FASTER-III, a Monte Carlo computer program for the transport of neutrons and gamma rays in complex geometries, is outlined. The program includes the treatment of geometric regions bounded by quadratic and quadric surfaces with multiple radiation sources which have specified space, angle, and energy dependence. The program calculates, using importance sampling, the resulting number and energy fluxes at specified point, surface, and volume detectors. It can also calculate minimum weight shield configuration meeting a specified dose rate constraint. Results are presented for sample problems involving primary neutron, and primary and secondary photon, transport in a spherical reactor shield configuration.
Verifying the operational set-up of a radionuclide air-monitoring station.
Werzi, R; Padoani, F
2007-05-01
A worldwide radionuclide network of 80 stations, part of the International Monitoring System, was designed to monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty. After installation, the stations are certified to comply with the minimum requirements laid down by the Preparatory Commission of the Comprehensive Nuclear-Test-Ban Treaty Organization. Among the several certification tests carried out at each station, the verification of the radionuclide activity concentrations is a crucial one and is based on an independent testing of the airflow rate measurement system and of the gamma detector system, as well as on the assessment of the samples collected during parallel sampling and measured at radionuclide laboratories.
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2013 CFR
2013-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2012 CFR
2012-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2014 CFR
2014-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 783.26 - The section 6(b)(2) minimum wage requirement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false The section 6(b)(2) minimum wage requirement. 783.26... The section 6(b)(2) minimum wage requirement. Section 6(b), with paragraph (2) thereof, requires the... prescribed by” paragraph (1) of the subsection is the minimum wage rate applicable according to the schedule...
Hansen, Heidi; Ben-David, Merav; McDonald, David B
2008-03-01
In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared consumer-style the performance of two genotyping protocols (multiple-tubes and 'comparative method') in respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes, may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
Dry sliding wear behavior of TIG welding clad WC composite coatings
NASA Astrophysics Data System (ADS)
Buytoz, Soner; Ulutan, Mustafa; Yildirim, M. Mustafa
2005-12-01
In this study, melted tungsten carbide powders on the surface of AISI 4340 steel was applied by using tungsten inert gas (TIG) method. It was observed that it has been solidified in different microstructures depending on the production parameters. As a result of microstructure examinations, in the surface modified layers an eutectic and dendrite solidification was observed together with WC, W 2C phases. In the layer produced, the hardness values varied between 950 and 1200 HV. The minimum mass loss was observed in the sample, which was treated in 1.209 mm/s production rate, 0.5 g/s powder feed rate and 13.9 kJ/cm heat input.
36 CFR 223.61 - Establishing minimum stumpage rates.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 2 2014-07-01 2014-07-01 false Establishing minimum stumpage rates. 223.61 Section 223.61 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER, SPECIAL FOREST PRODUCTS, AND FOREST BOTANICAL...
36 CFR 223.61 - Establishing minimum stumpage rates.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 2 2013-07-01 2013-07-01 false Establishing minimum stumpage rates. 223.61 Section 223.61 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER, SPECIAL FOREST PRODUCTS, AND FOREST BOTANICAL...
36 CFR 223.61 - Establishing minimum stumpage rates.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 2 2012-07-01 2012-07-01 false Establishing minimum stumpage rates. 223.61 Section 223.61 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER, SPECIAL FOREST PRODUCTS, AND FOREST BOTANICAL...
36 CFR 223.61 - Establishing minimum stumpage rates.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 2 2011-07-01 2011-07-01 false Establishing minimum stumpage rates. 223.61 Section 223.61 Parks, Forests, and Public Property FOREST SERVICE, DEPARTMENT OF AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER, SPECIAL FOREST PRODUCTS, AND FOREST BOTANICAL...
33 CFR 154.1130 - Requirements for prepositioned response equipment.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Additional Response Plan Requirements for a Trans-Alaska Pipeline Authorization Act (TAPAA) Facility...: (a) On-water recovery equipment with a minimum effective daily recovery rate of 30,000 barrels... of a discharge. (c) On-water recovery equipment with a minimum effective daily recovery rate of 40...
33 CFR 154.1130 - Requirements for prepositioned response equipment.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Additional Response Plan Requirements for a Trans-Alaska Pipeline Authorization Act (TAPAA) Facility...: (a) On-water recovery equipment with a minimum effective daily recovery rate of 30,000 barrels... of a discharge. (c) On-water recovery equipment with a minimum effective daily recovery rate of 40...
33 CFR 154.1130 - Requirements for prepositioned response equipment.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Additional Response Plan Requirements for a Trans-Alaska Pipeline Authorization Act (TAPAA) Facility...: (a) On-water recovery equipment with a minimum effective daily recovery rate of 30,000 barrels... of a discharge. (c) On-water recovery equipment with a minimum effective daily recovery rate of 40...
33 CFR 154.1130 - Requirements for prepositioned response equipment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Additional Response Plan Requirements for a Trans-Alaska Pipeline Authorization Act (TAPAA) Facility...: (a) On-water recovery equipment with a minimum effective daily recovery rate of 30,000 barrels... of a discharge. (c) On-water recovery equipment with a minimum effective daily recovery rate of 40...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
14 CFR 121.335 - Equipment standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Equipment standards. (a) Reciprocating engine powered airplanes. The oxygen apparatus, the minimum rates of oxygen flow, and the supply of oxygen necessary to comply with § 121.327 must meet the standards...) Turbine engine powered airplanes. The oxygen apparatus, the minimum rate of oxygen flow, and the supply of...
Would a Higher Minimum Wage Help Poor Families Headed by Women?
ERIC Educational Resources Information Center
Martin, Linda R.; Giannaros, Demetrios
1990-01-01
Studies suggest negative employment consequences if the minimum wage is increased. This may not affect poverty among households headed by women because the unemployment rate does not seem to play a statistically significant role in determining the poverty rate for this cohort. (Author)
Rakhshan, Vahid
2013-10-01
No meta-analyses or systematic reviews have been conducted to evaluate numerous potential biasing factors contributing to the controversial results on congenitally missing teeth (CMT). We aimed to perform a rather comprehensive meta-analysis and systematic review on this subject. A thorough search was performed during September 2012 until April 2013 to find the available literature regarding CMT prevalence. Besides qualitatively discussing the literature, the meta-sample homogeneity, publication bias, and the effects of sample type, sample size, minimum and maximum ages of included subjects, gender imbalances, and scientific credit of the publishing journals on the reported CMT prevalence were statistically analyzed using Q-test, Egger regression, Spearman coefficient, Kruskal-Wallis, Welch t test (α=0.05), and Mann-Whitney U test (α=0.016, α=0.007). A total of 111 reports were collected. Metadata were heterogeneous (P=0.000). There was not a significant publication bias (Egger Regression P=0.073). Prevalence rates differed in different types of populations (Kruskal-Wallis P=0.001). Studies on orthodontic patients might report slightly (about 1%) higher prevalence (P=0.009, corrected α=0.016). Non-orthodontic dental patients showed a significant 2% decline [P=0.007 (Mann-Whitney U)]. Enrolling more males in researches might significantly reduce the observed prevalence (Spearman ρ=-0.407, P=0.001). Studies with higher minimums of subjects' age showed always slightly less CMT prevalence. This reached about -1.6% around the ages 10 to 13 and was significant for ages 10 to 12 (Welch t test P<0.05). There seems to be no limit over the maximum age (Welch t test P>0.2). Studies' sample sizes were correlated negatively with CMT prevalence (ρ=-0.250, P=0.009). It was not verified whether higher CMT rates have better chances of being published (ρ=0.132, P=0.177). CMT definition should be unified. Samples should be sex-balanced. Enrolling both orthodontic and dental patients in similar proportions might be preferable over sampling from each of those groups. Sampling from children over 12 years seems advantageous. Two or more observers should examine larger samples to reduce the false negative error tied with such samples.
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-05-01
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Genetic Stability of Streptomyces Lividans pIJ702 in Response to Spaceflight
NASA Astrophysics Data System (ADS)
Lim, K. S.; Goins, T. L.; Voeikova, T. A.; Pyle, B. H.
2008-06-01
Streptomyces lividans carrying plasmid pIJ702 encoding genes for thiostrepton resistance (tsr-) and melanin production (mel+) was plated on agar and flown on the Russian satellite Foton-M3 for 16 days. The percentage loss of plasmid expression in flight samples was lower than that in ground samples when both samples were grown in enriched (ISP) media. Differences in media content also affect plasmid expression rate; ISP media have a higher loss of plasmid expression than samples in minimum media when both were grown on ground conditions. Results suggest that stress resulted in the increased expression of plasmid pIJ702 by S. lividans. Screening of thiostrepton resistant white (tsr+ mel-) mutants showed similar proportions of variants in ground samples and flight samples. To determine if there are mutations in the mel gene, DNA extracted from flight and control white mutants was amplified and gel electrophoresis of amplified products show no major mutation in the products. Sequencing of amplified products is required to identify mutations resulting in loss of pigmentation.
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false F.o.b. Origin-Minimum Size... Clauses 52.247-61 F.o.b. Origin—Minimum Size of Shipments. As prescribed in 47.305-16(c), insert the following clause in solicitations and contracts when volume rates may apply: F.o.b. Origin—Minimum Size of...
43 CFR 3504.21 - What are the minimum royalty rates?
Code of Federal Regulations, 2012 CFR
2012-10-01
... or phosphate rock and associated or related minerals. (b) Sodium 2% of the quantity or gross value of... LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) LEASING OF SOLID MINERALS OTHER... the point of shipment to market. (e) Gilsonite No minimum royalty rate. (f) Hardrock Minerals No...
43 CFR 3504.21 - What are the minimum royalty rates?
Code of Federal Regulations, 2014 CFR
2014-10-01
... or phosphate rock and associated or related minerals. (b) Sodium 2% of the quantity or gross value of... LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) LEASING OF SOLID MINERALS OTHER... the point of shipment to market. (e) Gilsonite No minimum royalty rate. (f) Hardrock Minerals No...
43 CFR 3504.21 - What are the minimum royalty rates?
Code of Federal Regulations, 2013 CFR
2013-10-01
... or phosphate rock and associated or related minerals. (b) Sodium 2% of the quantity or gross value of... LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) LEASING OF SOLID MINERALS OTHER... the point of shipment to market. (e) Gilsonite No minimum royalty rate. (f) Hardrock Minerals No...
43 CFR 3504.21 - What are the minimum royalty rates?
Code of Federal Regulations, 2011 CFR
2011-10-01
... or phosphate rock and associated or related minerals. (b) Sodium 2% of the quantity or gross value of... LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR MINERALS MANAGEMENT (3000) LEASING OF SOLID MINERALS OTHER... the point of shipment to market. (e) Gilsonite No minimum royalty rate. (f) Hardrock Minerals No...
36 CFR 223.61 - Establishing minimum stumpage rates.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGRICULTURE SALE AND DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER Timber Sale Contracts Appraisal and Pricing.... No timber may be sold or cut under timber sale contracts for less than minimum stumpage rates except... amounts of material not meeting utilization standards of the timber sale contract. For any timber sale...
9 CFR 130.30 - Hourly rate and minimum user fees.
Code of Federal Regulations, 2010 CFR
2010-01-01
... food testing. (16) Export-related services provided at animal auctions. (17) Various export-related... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Hourly rate and minimum user fees. 130.30 Section 130.30 Animals and Animal Products ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT...
40 CFR 90.508 - Test procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... service or target is less than the minimum rate specified (12 hours per day), then the minimum daily accumulation rate shall be equal to the manufacturer's service target. (3) Service accumulation shall be... nonroad engine sales for the United States market for the applicable year of 7,500 or greater shall...
NASA Astrophysics Data System (ADS)
Sabater, Bartolomé; Marín, Dolores
2018-03-01
The minimum rate principle is applied to the chemical reaction in a steady-state open cell system where, under constant supply of the glucose precursor, reference to time or to glucose consumption does not affect the conclusions.
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determiningcompliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine
2017-11-16
Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, potential use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley.At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials. Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance.For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class. Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted.Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch). To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with postcrush fines percentage of at most 4.1 percent.For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class.Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
Haseli, Y
2016-05-01
The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, the Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, the maximum thermal efficiency, and the maximum power may become equivalent at the condition of fixed heat input.
Nigro, Olivia D; Steward, Grieg F
2015-04-01
Plating environmental samples on vibrio-selective chromogenic media is a commonly used technique that allows one to quickly estimate concentrations of putative vibrio pathogens or to isolate them for further study. Although this approach is convenient, its usefulness depends directly on how well the procedure selects against false positives. We tested whether a chromogenic medium, CHROMagar Vibrio (CaV), used alone (single-plating) or in combination (double-plating) with a traditional medium thiosulfate-citrate-bile-salts (TCBS), could improve the discrimination among three pathogenic vibrio species (Vibrio cholerae, Vibrio parahaemolyticus, and Vibrio vulnificus) and thereby decrease the number of false-positive colonies that must be screened by molecular methods. Assays were conducted on water samples from two estuarine environments (one subtropical, one tropical) in a variety of seasonal conditions. The results of the double-plating method were confirmed by PCR and 16S rRNA sequencing. Our data indicate that there is no significant difference in the false-positive rate between CaV and TCBS when using a single-plating technique, but determining color changes on the two media sequentially (double-plating) reduced the rate of false positive identification in most cases. The improvement achieved was about two-fold on average, but varied greatly (from 0- to 5-fold) and depended on the sampling time and location. The double-plating method was most effective for V. vulnificus in warm months, when overall V. vulnificus abundance is high (false positive rates as low as 2%, n=178). Similar results were obtained for V. cholerae (minimum false positive rate of 16%, n=146). In contrast, the false positive rate for V. parahaemolyticus was always high (minimum of 59%, n=109). Sequence analysis of false-positive isolates indicated that the majority of confounding isolates are from the Vibrionaceae family, however, members of distantly related bacterial groups were also able to grow on vibrio-selective media, even when using the double-plating method. In conclusion, the double-plating assay is a simple means to increase the efficiency of identifying pathogenic vibrios in aquatic environments and to reduce the number of molecular assays required for identity confirmation. However, the high spatial and temporal variability in the performance of the media mean that molecular approaches are still essential to obtain the most accurate vibrio abundance estimates from environmental samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Critical fluid light scattering
NASA Technical Reports Server (NTRS)
Gammon, Robert W.
1988-01-01
The objective is to measure the decay rates of critical density fluctuations in a simple fluid (xenon) very near its liquid-vapor critical point using laser light scattering and photon correlation spectroscopy. Such experiments were severely limited on Earth by the presence of gravity which causes large density gradients in the sample when the compressibility diverges approaching the critical point. The goal is to measure fluctuation decay rates at least two decades closer to the critical point than is possible on earth, with a resolution of 3 microK. This will require loading the sample to 0.1 percent of the critical density and taking data as close as 100 microK to the critical temperature. The minimum mission time of 100 hours will allow a complete range of temperature points to be covered, limited by the thermal response of the sample. Other technical problems have to be addressed such as multiple scattering and the effect of wetting layers. The experiment entails measurement of the scattering intensity fluctuation decay rate at two angles for each temperature and simultaneously recording the scattering intensities and sample turbidity (from the transmission). The analyzed intensity and turbidity data gives the correlation length at each temperature and locates the critical temperature. The fluctuation decay rate data from these measurements will provide a severe test of the generalized hydrodynamic theories of transport coefficients in the critical regions. When compared to equivalent data from binary liquid critical mixtures they will test the universality of critical dynamics.
Persistence and bioaccumulation of oxyfluorfen residues in onion.
Sondhia, Shobha
2010-03-01
A field study was conducted to determine persistence and bioaccumulation of oxyflorfen residues in onion crop at two growth stages. Oxyfluorfen (23.5% EC) was sprayed at 250 and 500 g ai/ha on the crop (variety, N53). Mature onion and soil samples were collected at harvest. Green onion were collected at 55 days from each treated and control plot and analyzed for oxyfluorfen residues by a validated high-performance liquid chromatography method with an accepted recovery of 78-92% at the minimum detectable concentration of 0.003 microg g(-1). Analysis showed 0.015 and 0.005 microg g(-1) residues of oxyfluorfen at 250 g a.i. ha(-1) rate in green and mature onion samples, respectively; however, at 500 g a.i.ha(-1) rates, 0.025 and 0.011 microg g(-1) of oxyfluorfen residues were detected in green and mature onion samples, respectively. Soil samples collected at harvest showed 0.003 and 0.003 microg g(-1) of oxyfluorfen residues at the doses 250 and 500 g a.i. ha(-1), respectively. From the study, a pre-harvest interval of 118 days for onion crop after the herbicide application is suggested.
Integration of a Capacitive EIS Sensor into a FIA System for pH and Penicillin Determination
Rolka, David; Poghossian, Arshak; Schöning, Michael J.
2004-01-01
A field-effect based capacitive EIS (electrolyte-insulator-semiconductor) sensor with a p-Si-SiO2-Ta2O5 structure has been successfully integrated into a commercial FIA (flow-injection analysis) system and system performances have been proven and optimised for pH and penicillin detection. A flow-through cell was designed taking into account the requirement of a variable internal volume (from 12 μl up to 48 μl) as well as an easy replacement of the EIS sensor. FIA parameters (sample volume, flow rate, distance between the injection valve and the EIS sensor) have been optimised in terms of high sensitivity and reproducibility as well as a minimum dispersion of the injected sample zone. An acceptable compromise between different FIA parameters has been found. For the cell design used in this study, best results have been achieved with a flow rate of 1.4 ml/min, distance between the injection valve and the EIS sensor of 6.5 cm, probe volume of 0.75 ml, cell internal volume of 12 μl. A sample throughput of at least 15 samples/h was typically obtained.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.
Camien, M N; Warner, R C
1984-05-01
A method employing high-speed infrasonic mixing for obtaining timed samples for following the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliter each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.
Dispersion durations of P-wave and QT interval in children treated with a ketogenic diet.
Doksöz, Önder; Güzel, Orkide; Yılmaz, Ünsal; Işgüder, Rana; Çeleğen, Kübra; Meşe, Timur
2014-04-01
Limited data are available on the effects of a ketogenic diet on dispersion duration of P-wave and QT-interval measures in children. We searched for the changes in these measures with serial electrocardiograms in patients treated with a ketogenic diet. Twenty-five drug-resistant patients with epilepsy treated with a ketogenic diet were enrolled in this study. Electrocardiography was performed in all patients before the beginning and at the sixth month after implementation of the ketogenic diet. Heart rate, maximum and minimum P-wave duration, P-wave dispersion, and maximum and minimum corrected QT interval and QT dispersion were manually measured from the 12-lead surface electrocardiogram. Minimum and maximum corrected QT and QT dispersion measurements showed nonsignificant increase at month 6 compared with baseline values. Other previously mentioned electrocardiogram parameters also showed no significant changes. A ketogenic diet of 6 months' duration has no significant effect on electrocardiogram parameters in children. Further studies with larger samples and longer duration of follow-up are needed to clarify the effects of ketogenic diet on P-wave dispersion and corrected QT and QT dispersion. Copyright © 2014 Elsevier Inc. All rights reserved.
Aerobic Microbial Respiration In Oceanic Oxygen Minimum Zones.
Kalvelage, Tim; Lavik, Gaute; Jensen, Marlene M; Revsbech, Niels Peter; Löscher, Carolin; Schunck, Harald; Desai, Dhwani K; Hauss, Helena; Kiko, Rainer; Holtappels, Moritz; LaRoche, Julie; Schmitz, Ruth A; Graco, Michelle I; Kuypers, Marcel M M
2015-01-01
Oxygen minimum zones are major sites of fixed nitrogen loss in the ocean. Recent studies have highlighted the importance of anaerobic ammonium oxidation, anammox, in pelagic nitrogen removal. Sources of ammonium for the anammox reaction, however, remain controversial, as heterotrophic denitrification and alternative anaerobic pathways of organic matter remineralization cannot account for the ammonium requirements of reported anammox rates. Here, we explore the significance of microaerobic respiration as a source of ammonium during organic matter degradation in the oxygen-deficient waters off Namibia and Peru. Experiments with additions of double-labelled oxygen revealed high aerobic activity in the upper OMZs, likely controlled by surface organic matter export. Consistently observed oxygen consumption in samples retrieved throughout the lower OMZs hints at efficient exploitation of vertically and laterally advected, oxygenated waters in this zone by aerobic microorganisms. In accordance, metagenomic and metatranscriptomic analyses identified genes encoding for aerobic terminal oxidases and demonstrated their expression by diverse microbial communities, even in virtually anoxic waters. Our results suggest that microaerobic respiration is a major mode of organic matter remineralization and source of ammonium (~45-100%) in the upper oxygen minimum zones, and reconcile hitherto observed mismatches between ammonium producing and consuming processes therein.
Aerobic Microbial Respiration In Oceanic Oxygen Minimum Zones
Kalvelage, Tim; Lavik, Gaute; Jensen, Marlene M.; Revsbech, Niels Peter; Löscher, Carolin; Schunck, Harald; Desai, Dhwani K.; Hauss, Helena; Kiko, Rainer; Holtappels, Moritz; LaRoche, Julie; Schmitz, Ruth A.; Graco, Michelle I.; Kuypers, Marcel M. M.
2015-01-01
Oxygen minimum zones are major sites of fixed nitrogen loss in the ocean. Recent studies have highlighted the importance of anaerobic ammonium oxidation, anammox, in pelagic nitrogen removal. Sources of ammonium for the anammox reaction, however, remain controversial, as heterotrophic denitrification and alternative anaerobic pathways of organic matter remineralization cannot account for the ammonium requirements of reported anammox rates. Here, we explore the significance of microaerobic respiration as a source of ammonium during organic matter degradation in the oxygen-deficient waters off Namibia and Peru. Experiments with additions of double-labelled oxygen revealed high aerobic activity in the upper OMZs, likely controlled by surface organic matter export. Consistently observed oxygen consumption in samples retrieved throughout the lower OMZs hints at efficient exploitation of vertically and laterally advected, oxygenated waters in this zone by aerobic microorganisms. In accordance, metagenomic and metatranscriptomic analyses identified genes encoding for aerobic terminal oxidases and demonstrated their expression by diverse microbial communities, even in virtually anoxic waters. Our results suggest that microaerobic respiration is a major mode of organic matter remineralization and source of ammonium (~45-100%) in the upper oxygen minimum zones, and reconcile hitherto observed mismatches between ammonium producing and consuming processes therein. PMID:26192623
Population demographics and genetic diversity in remnant and translocated populations of sea otters
Bodkin, James L.; Ballachey, Brenda E.; Cronin, M.A.; Scribner, K.T.
1999-01-01
The effects of small population size on genetic diversity and subsequent population recovery are theoretically predicted, but few empirical data are available to describe those relations. We use data from four remnant and three translocated sea otter (Enhydra lutris) populations to examine relations among magnitude and duration of minimum population size, population growth rates, and genetic variation. Metochondrial (mt)DNA haplotype diversity was correlated with the number of years at minimum population size (r = -0.741, p = 0.038) and minimum population size (r = 0.709, p = 0.054). We found no relation between population growth and haplotype diversity, altough growth was significantly greater in translocated than in remnant populations. Haplotype diversity in populations established from two sources was higher than in a population established from a single source and was higher than in the respective source populations. Haplotype frequencies in translocated populations of founding sizes of 4 and 28 differed from expected, indicating genetic drift and differential reproduction between source populations, whereas haplotype frequencies in a translocated population with a founding size of 150 did not. Relations between population demographics and genetic characteristics suggest that genetic sampling of source and translocated populations can provide valuable inferences about translocations.
Systematic investigation of NLTE phenomena in the limit of small departures from LTE
NASA Astrophysics Data System (ADS)
Libby, S. B.; Graziani, F. R.; More, R. M.; Kato, T.
1997-04-01
In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near equilibrium (LTE) high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures form LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchoff's law. We envisage applying our strategy to a number of test problems which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models.
Plasma potassium and diurnal cyclic potassium excretion in the rat.
Rabinowitz, L; Berlin, R; Yamauchi, H
1987-12-01
The relation of the plasma potassium concentration to the daily cyclic variation in potassium excretion was examined in undisturbed, unanesthetized male Sprague-Dawley rats maintained on a liquid diet in a 12-h light-dark environment. Potassium excretion increased from a light-phase minimum of 16 mu eq/h to a peak of 256 mu eq/h 3 h after the beginning of the dark phase. Plasma potassium concentration in arterial blood, sampled in rats at 90-min intervals during these changes in potassium excretion, showed no significant change and was in the range 4.50-4.99 meq/liter. In adrenalectomized rats receiving aldosterone and dexamethasone at constant basal rates by implanted pumps, the daily cycle of potassium excretion was the same as in the intact rats, and plasma potassium was not significantly different when measured at the time of minimum and maximum rates of potassium excretion (4.79 +/- 0.42 vs 5.16 +/- 0.47 meq/liter, mean +/- SD). These results indicate that plasma potassium concentration is not the efferent factor controlling diurnal cyclic changes in potassium excretion in adrenal intact rats and may not be the only significant factor in adrenalectomized-steroid replaced rats.
Use of FTA sampling cards for molecular detection of avian influenza virus in wild birds.
Keeler, Shamus P; Ferro, Pamela J; Brown, Justin D; Fang, Xingwang; El-Attrache, John; Poulson, Rebecca; Jackwood, Mark W; Stallknecht, David E
2012-03-01
Current avian influenza (AI) virus surveillance programs involving wild birds rely on sample collection methods that require refrigeration or low temperature freezing to maintain sample integrity for virus isolation and/or reverse-transcriptase (RT) PCR. Maintaining the cold chain is critical for the success of these diagnostic assays but is not always possible under field conditions. The aim of this study was to test the utility of Finders Technology Associates (FTA) cards for reliable detection of AI virus from cloacal and oropharyngeal swabs of wild birds. The minimum detectable titer was determined, and the effect of room temperature storage was evaluated experimentally using multiple egg-propagated stock viruses (n = 6). Using real time RT-PCR, we compared results from paired cloacal swab and samples collected on FTA cards from both experimentally infected mallards (Anasplatyrhynchos) and hunter-harvested waterfowl sampled along the Texas Gulf Coast. Based on the laboratory trials, the average minimal detectable viral titer was determined to be 1 x 10(4.7) median embryo infectious dose (EID50)/ml (range: 1 x 10(4.3) to 1 x 10(5.4) EID50/ml), and viral RNA was consistently detectable on the FTA cards for a minimum of 20 days and up to 30 days for most subtypes at room temperature (23 C) storage. Real-time RT-PCR of samples collected using the FTA cards showed fair to good agreement in live birds when compared with both real-time RT-PCR and virus isolation of swabs. AI virus detection rates in samples from several wild bird species were higher when samples were collected using the FTA cards compared with cloacal swabs. These results suggest that FTA cards can be used as an alternative sample collection method when traditional surveillance methods are not possible, especially in avian populations that have historically received limited testing or situations in which field conditions limit the ability to properly store or ship swab samples.
P wave dispersion in patients with hypochondriasis.
Atmaca, Murad; Korkmaz, Hasan; Korkmaz, Sevda
2010-11-26
P wave dispersion (Pd), defined as the difference between the maximum and the minimum P wave duration, has been associated with anxiety. Thus, we wondered whether Pd in hypochondriasis which is associated with anxiety differed from that in healthy controls. Pd was measured in 30 hypochondriac patients and same number of physically and mentally healthy age- and gender-matched controls. Hamilton Depression Rating (HDRS) and Hamilton Anxiety Rating Scales (HARS) were scored. The heart rate and left atrium (LA) sizes were not significantly different between groups. However, both Pmax and Pmin values of the patients were significantly higher than those of healthy controls. As for the main variable investigated in the present study, the corrected Pd was significantly longer in the patient group compared to control group. On the basis of this study, we can conclude that Pd may be related to hypochondriasis though our sample is too small to allow us to obtain a clear conclusion. Future studies with larger sample evaluating the effects of treatment are required. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error
Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong
2013-01-01
A novel wildfire segmentation algorithm is proposed with the help of sample training based 2D histogram θ-division and minimum error. Based on minimum error principle and 2D color histogram, the θ-division methods were presented recently, but application of prior knowledge on them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. Then we define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve the accuracy, the combination approach is presented with both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526
NASA Astrophysics Data System (ADS)
Kheifets, A. E.; Khomskaya, I. V.; Korshunov, L. G.; Zel'dovich, V. I.; Frolova, N. Yu.
2018-04-01
The effect of the preliminary high strain-rate deformation, performed via the method of dynamic channel-angular pressing (DCAP), and subsequent annealings on the tribological properties of a dispersionhardened Cu-0.092 wt % Cr-0.086 wt % Zr alloy has been investigated. It has been shown that the surfacelayer material of the alloy with a submicrocrystalline (SMC) structure obtained by the DCAP method can be strengthened using severe plastic deformation by sliding friction at the expense of creating a nanocrystalline structure with crystallites of 15-60 nm in size. It has been shown that the SMC structure obtained by the high strain-rate DCAP deformation decreases the wear rate of the samples upon sliding friction by a factor of 1.4 compared to the initial coarse-grained state. The maximum values of the microhardness and minimum values of the coefficient of friction and shear strength have been obtained in the samples preliminarily subjected to DCAP and aging at 400°C. The attained level of microhardness is 3350 MPa, which exceeds the microhardness of the alloy in the initial coarse-grained state by five times.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the minimum wage required by section 6(a) of the Fair Labor Standards Act? 520.200 Section 520.200... lower than the minimum wage required by section 6(a) of the Fair Labor Standards Act? Section 14(a) of..., for the payment of special minimum wage rates to workers employed as messengers, learners (including...
Lin, Yunzhi
2016-08-15
Responder analysis is in common use in clinical trials, and has been described and endorsed in regulatory guidance documents, especially in trials where "soft" clinical endpoints such as rating scales are used. The procedure is useful, because responder rates can be understood more intuitively than a difference in means of rating scales. However, two major issues arise: 1) such dichotomized outcomes are inefficient in terms of using the information available and can seriously reduce the power of the study; and 2) the results of clinical trials depend considerably on the response cutoff chosen, yet in many disease areas there is no consensus as to what is the most appropriate cutoff. This article addresses these two issues, offering a novel approach for responder analysis that could both improve the power of responder analysis and explore different responder cutoffs if an agreed-upon common cutoff is not present. Specifically, we propose a statistically rigorous clinical trial design that pre-specifies multiple tests of responder rates between treatment groups based on a range of pre-specified responder cutoffs, and uses the minimum of the p-values for formal inference. The critical value for hypothesis testing comes from permutation distributions. Simulation studies are carried out to examine the finite sample performance of the proposed method. We demonstrate that the new method substantially improves the power of responder analysis, and in certain cases, yields power that is approaching the analysis using the original continuous (or ordinal) measure.
Nitrite oxidation in the Namibian oxygen minimum zone
Füssel, Jessika; Lam, Phyllis; Lavik, Gaute; Jensen, Marlene M; Holtappels, Moritz; Günter, Marcel; Kuypers, Marcel MM
2012-01-01
Nitrite oxidation is the second step of nitrification. It is the primary source of oceanic nitrate, the predominant form of bioavailable nitrogen in the ocean. Despite its obvious importance, nitrite oxidation has rarely been investigated in marine settings. We determined nitrite oxidation rates directly in 15N-incubation experiments and compared the rates with those of nitrate reduction to nitrite, ammonia oxidation, anammox, denitrification, as well as dissimilatory nitrate/nitrite reduction to ammonium in the Namibian oxygen minimum zone (OMZ). Nitrite oxidation (⩽372 nM NO2− d−1) was detected throughout the OMZ even when in situ oxygen concentrations were low to non-detectable. Nitrite oxidation rates often exceeded ammonia oxidation rates, whereas nitrate reduction served as an alternative and significant source of nitrite. Nitrite oxidation and anammox co-occurred in these oxygen-deficient waters, suggesting that nitrite-oxidizing bacteria (NOB) likely compete with anammox bacteria for nitrite when substrate availability became low. Among all of the known NOB genera targeted via catalyzed reporter deposition fluorescence in situ hybridization, only Nitrospina and Nitrococcus were detectable in the Namibian OMZ samples investigated. These NOB were abundant throughout the OMZ and contributed up to ∼9% of total microbial community. Our combined results reveal that a considerable fraction of the recently recycled nitrogen or reduced NO3− was re-oxidized back to NO3− via nitrite oxidation, instead of being lost from the system through the anammox or denitrification pathways. PMID:22170426
NASA Astrophysics Data System (ADS)
Kittell, David E.; Cummock, Nick R.; Son, Steven F.
2016-08-01
Small scale characterization experiments using only 1-5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, it is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kittell, David E.; Cummock, Nick R.; Son, Steven F.
2016-08-14
Small scale characterization experiments using only 1–5 g of a baseline ammonium nitrate plus fuel oil (ANFO) explosive are discussed and simulated using an ignition and growth reactive flow model. There exists a strong need for the small scale characterization of non-ideal explosives in order to adequately survey the wide parameter space in sample composition, density, and microstructure of these materials. However, it is largely unknown in the scientific community whether any useful or meaningful result may be obtained from detonation failure, and whether a minimum sample size or level of confinement exists for the experiments. In this work, itmore » is shown that the parameters of an ignition and growth rate law may be calibrated using the small scale data, which is obtained from a 35 GHz microwave interferometer. Calibration is feasible when the samples are heavily confined and overdriven; this conclusion is supported with detailed simulation output, including pressure and reaction contours inside the ANFO samples. The resulting shock wave velocity is most likely a combined chemical-mechanical response, and simulations of these experiments require an accurate unreacted equation of state (EOS) in addition to the calibrated reaction rate. Other experiments are proposed to gain further insight into the detonation failure data, as well as to help discriminate between the role of the EOS and reaction rate in predicting the measured outcome.« less
ERIC Educational Resources Information Center
West, Lloyd Wilbert
An investigation was designed to ascertain the effects of cultural background on selected intelligence tests and to identify instruments which validly measure intellectual ability with a minimum of cultural bias. A battery of tests, selected for factor analytic study, was administered and replicated at four grade levels to a sample of Metis and…
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
... part) Hydrogen chloride 62 parts per million by dry volume 3-run average (1 hour minimum sample time...) Sulfur dioxide 20 parts per million by dry volume 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity 10 percent Three 1-hour blocks consisting of ten 6...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample... per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method... appendix A of this part) Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour...
NASA Astrophysics Data System (ADS)
Bongale, Arunkumar M.; Kumar, Satish; Sachit, T. S.; Jadhav, Priya
2018-03-01
Studies on wear properties of Aluminium based hybrid nano composite materials, processed through powder metallurgy technique, are reported in the present study. Silicon Carbide nano particles and E-glass fibre are reinforced in pure aluminium matrix to fabricate hybrid nano composite material samples. Pin-on-Disc wear testing equipment is used to evaluate dry sliding wear properties of the composite samples. The tests were conducted following the Taguchi’s Design of Experiments method. Signal-to-Noise ratio analysis and Analysis of Variance are carried out on the test data to find out the influence of test parameters on the wear rate. Scanning Electron Microscopic analysis and Energy Dispersive x-ray analysis are conducted on the worn surfaces to find out the wear mechanisms responsible for wear of the composites. Multiple linear regression analysis and Genetic Algorithm techniques are employed for optimization of wear test parameters to yield minimum wear of the composite samples. Finally, a wear model is built by the application of Artificial Neural Networks to predict the wear rate of the composite material, under different testing conditions. The predicted values of wear rate are found to be very close to the experimental values with a deviation in the range of 0.15% to 8.09%.
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.
Definition of hydraulic stability of KVGM-100 hot-water boiler and minimum water flow rate
NASA Astrophysics Data System (ADS)
Belov, A. A.; Ozerov, A. N.; Usikov, N. V.; Shkondin, I. A.
2016-08-01
In domestic power engineering, the methods of quantitative and qualitative-quantitative adjusting the load of the heat supply systems are widely distributed; furthermore, during the greater part of the heating period, the actual discharge of network water is less than estimated values when changing to quantitative adjustment. Hence, the hydraulic circuits of hot-water boilers should ensure the water velocities, minimizing the scale formation and excluding the formation of stagnant zones. The results of the calculations of hot-water KVGM-100 boiler and minimum water flow rate for the basic and peak modes at the fulfillment of condition of the lack of surface boil are presented in the article. The minimal flow rates of water at its underheating to the saturation state and the thermal flows in the furnace chamber were defined. The boiler hydraulic calculation was performed using the "Hydraulic" program, and the analysis of permissible and actual velocities of the water movement in the pipes of the heating surfaces was carried out. Based on the thermal calculations of furnace chamber and thermal- hydraulic calculations of heating surfaces, the following conclusions were drawn: the minimum velocity of water movement (by condition of boiling surface) at lifting movement of environment increases from 0.64 to 0.79 m/s; it increases from 1.14 to 1.38 m/s at down movement of environmental; the minimum water flow rate by the boiler in the basic mode (by condition of the surface boiling) increased from 887 t/h at the load of 20% up to 1074 t/h at the load of 100%. The minimum flow rate is 1074 t/h at nominal load and is achieved at the pressure at the boiler outlet equal to 1.1 MPa; the minimum water flow rate by the boiler in the peak mode by condition of surface boiling increases from 1669 t/h at the load of 20% up to 2021 t/h at the load of 100%.
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.47 - Child's benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
20 CFR 229.55 - Reduction for spouse social security benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...
20 CFR 229.56 - Reduction for child's social security benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...
Amino acid geochemistry of fossil bones from the Rancho La Brea asphalt deposit, California
McMenamin, M.A.S.; Blunt, D.J.; Kvenvolden, K.A.; Miller, S.E.; Marcus, L.F.; Pardi, R.R.
1982-01-01
Low aspartic acid d:l ratios and modern collagenlike concentration values indicate that amino acids in bones from the Rancho La Brea asphalt deposit, Los Angeles, California are better preserved than amino acids in bones of equivalent age that have not been preserved in asphalt. Amino acids were recovered from 10 Rancho La Brea bone samples which range in age from less than 200 to greater than 36,000 yr. The calibrated rates of aspartic acid racemization range from 2.1 to 5.0 ?? 10-6yr-1. Although this wide range of rate constants decreases the level of confidence for age estimates, use of the larger rate constant of 5.0 ?? 10-6yr-1 provides minimum age estimates which fit the known stratigraphic and chronologic records of the Rancho La Brea deposits. ?? 1982.
Close encounters and collisions of comets with the earth
NASA Technical Reports Server (NTRS)
Sekanina, Z.; Yeomans, D. K.
1984-01-01
A computer search for earth-approaching comets among those listed in Marsden's (1983) updated orbit catalog has identified 36 cases at which minimum separation distance was less than 2500 earth radii. A strong representation of short period comets in the sample is noted, and the constant rate of the close approaching comets in the last 300 years is interpreted to suggest the lack of long-period comets intrinsically fainter than an absolute magnitude of about 11. A comet-earth collision rate derived from the statistics of these close encounters implies an average period of 33-64 million years between any two events. This rate is comparable with the frequency of geologically recent global catastrophes which appear to be associated with extraterrestrial object impacts, such as the Cretaceous-Tertiary extinction 65 million years ago and the late Eocene event 34 million years ago.
Variability in daily pH scales with coral reef accretion and community structure
NASA Astrophysics Data System (ADS)
Price, N.; Martz, T.; Brainard, R. E.; Smith, J.
2011-12-01
Little is known about natural variability in pH in coastal waters and how resident organisms respond to current nearshore seawater conditions. We used autonomous sensors (SeaFETs) to record temperature and, for the first time, pH with high temporal (hourly observations; 7 months of sampling) resolution on the reef benthos (5-10m depth) at several islands (Kingman, Palmyra and Jarvis) within the newly designated Pacific Remote Island Areas Marine National Monument (PRIMNM) in the northern Line Islands; these islands are uninhabited and lack potentially confounding local impacts (e.g. pollution and overfishing). Recorded benthic pH values were compared with regional means and minimum thresholds based on seasonal amplitude estimated from surrounding open-ocean climatological data, which represent seawater chemistry values in the absence of feedback from the reef. Each SeaFET sensor was co-located with replicate Calcification/Acidification Units (CAUs) designed to quantify species abundances and net community calcification rates so we could determine which, if any, metrics of natural variability in benthic pH and temperature were related to community development and reef accretion rates. The observed range in daily pH encompassed maximums reported from the last century (8.104 in the early evening) to minimums approaching projected levels within the next 100 yrs (7.824 at dawn) for pelagic waters. Net reef calcification rates, measured as calcium carbonate accretion on CAUs, varied within and among islands and were comparable with rates measured from the Pacific and Caribbean using chemistry-based approaches. Benthic species assemblages on the CAUs were differentiated by the presence of calcifying and fleshy taxa (CAP analysis, mean allocation success 80%, δ2 = 0.886, P = <0.001). In general, accretion rates were higher at sites that had a greater number of hours at high pH values each day. Where daily pH failed to exceed climatological seasonal minimum thresholds, net accretion was slower and fleshy, non-calcifying benthic organisms dominated. Natural variation in benthic pH offers a unique opportunity to study ecological consequences of likely future ocean chemistry.
Assessment of Caspian Seal By-Catch in an Illegal Fishery Using an Interview-Based Approach.
Dmitrieva, Lilia; Kondakov, Andrey A; Oleynikov, Eugeny; Kydyrmanov, Aidyn; Karamendin, Kobey; Kasimbekov, Yesbol; Baimukanov, Mirgaliy; Wilson, Susan; Goodman, Simon J
2013-01-01
The Caspian seal (Pusa caspica) has declined by more than 90% since 1900 and is listed as endangered by IUCN. We made the first quantitative assessment of Caspian seal by-catch mortality in fisheries in the north Caspian Sea by conducting semi-structured interviews in fishing communities along the coasts of Russia (Kalmykia, Dagestan), Kazakhstan and Turkmenistan. We recorded a documented minimum by-catch of 1,215 seals in the survey sample, for the 2008-2009 fishing season, 93% of which occurred in illegal sturgeon fisheries. Due to the illegal nature of the fishery, accurately quantifying total fishing effort is problematic and the survey sample could reflect less than 10% of poaching activity in the north Caspian Sea. Therefore total annual by-catch may be significantly greater than the minimum documented by the survey. The presence of high by-catch rates was supported independently by evidence of net entanglement from seal carcasses, during a mass stranding on the Kazakh coast in May 2009, where 30 of 312 carcasses were entangled in large mesh sturgeon net remnants. The documented minimum by-catch may account for 5 to 19% of annual pup production. Sturgeon poaching therefore not only represents a serious threat to Caspian sturgeon populations, but may also be having broader impacts on the Caspian Sea ecosystem by contributing to a decline in one of the ecosystem's key predators. This study demonstrates the utility of interview-based approaches in providing rapid assessments of by-catch in illegal small-scale fisheries, which are not amenable to study by other methods.
Assessment of Caspian Seal By-Catch in an Illegal Fishery Using an Interview-Based Approach
Dmitrieva, Lilia; Kondakov, Andrey A.; Oleynikov, Eugeny; Kydyrmanov, Aidyn; Karamendin, Kobey; Kasimbekov, Yesbol; Baimukanov, Mirgaliy; Wilson, Susan; Goodman, Simon J.
2013-01-01
The Caspian seal (Pusa caspica) has declined by more than 90% since 1900 and is listed as endangered by IUCN. We made the first quantitative assessment of Caspian seal by-catch mortality in fisheries in the north Caspian Sea by conducting semi-structured interviews in fishing communities along the coasts of Russia (Kalmykia, Dagestan), Kazakhstan and Turkmenistan. We recorded a documented minimum by-catch of 1,215 seals in the survey sample, for the 2008–2009 fishing season, 93% of which occurred in illegal sturgeon fisheries. Due to the illegal nature of the fishery, accurately quantifying total fishing effort is problematic and the survey sample could reflect less than 10% of poaching activity in the north Caspian Sea. Therefore total annual by-catch may be significantly greater than the minimum documented by the survey. The presence of high by-catch rates was supported independently by evidence of net entanglement from seal carcasses, during a mass stranding on the Kazakh coast in May 2009, where 30 of 312 carcasses were entangled in large mesh sturgeon net remnants. The documented minimum by-catch may account for 5 to 19% of annual pup production. Sturgeon poaching therefore not only represents a serious threat to Caspian sturgeon populations, but may also be having broader impacts on the Caspian Sea ecosystem by contributing to a decline in one of the ecosystem’s key predators. This study demonstrates the utility of interview-based approaches in providing rapid assessments of by-catch in illegal small-scale fisheries, which are not amenable to study by other methods. PMID:23840590
NASA Astrophysics Data System (ADS)
Denton, R. E.; Wang, Y.; Webb, P. A.; Tengdin, P. M.; Goldstein, J.; Redfern, J. A.; Reinisch, B. W.
2012-03-01
Using measurements of the electron density ne found from passive radio wave observations by the IMAGE spacecraft RPI instrument on consecutive passes through the magnetosphere, we calculate the long-term (>1 day) refilling rate of equatorial electron density dne,eq/dt from L = 2 to 9. Our events did not exhibit saturation, probably because our data set did not include a deep solar minimum and because saturation is an unusual occurrence, especially outside of solar minimum. The median rate in cm-3/day can be modeled with log10(dne,eq/dt) = 2.22 - 0.006L - 0.0347L2, while the third quartile rate can be modeled with log10(dne,eq/dt) = 3.39 - 0.353L, and the mean rate can be modeled as log10(dne,eq/dt) = 2.74 - 0.269L. These statistical values are found from the ensemble of all observed rates at each L value, including negative rates (decreases in density due to azimuthal structure or radial motion or for other reasons), in order to characterize the typical behavior. The first quartile rates are usually negative for L < 4.7 and close to zero for larger L values. Our rates are roughly consistent with previous observations of ion refilling at geostationary orbit. Most previous studies of refilling found larger refilling rates, but many of these examined a single event which may have exhibited unusually rapid refilling. Comparing refilling rates at solar maximum to those at solar minimum, we found that the refilling rate is larger at solar maximum for small L < 4, about the same at solar maximum and solar minimum for L = 4.2 to 5.8, and is larger at solar minimum for large L > 5.8 such as at geostationary orbit (L ˜ 6.8) (at least to L of about 8). These results agree with previous results for ion refilling at geostationary orbit, may agree with previous results at lower L, and are consistent with some trends for ionospheric density.
NASA Astrophysics Data System (ADS)
Tiano, Laura; Garcia-Robledo, Emilio; Dalsgaard, Tage; Devol, Allan H.; Ward, Bess B.; Ulloa, Osvaldo; Canfield, Donald E.; Peter Revsbech, Niels
2014-12-01
Highly sensitive STOX O2 sensors were used for determination of in situ O2 distribution in the eastern tropical north and south Pacific oxygen minimum zones (ETN/SP OMZs), as well as for laboratory determination of O2 uptake rates of water masses at various depths within these OMZs. Oxygen was generally below the detection limit (few nmol L-1) in the core of both OMZs, suggesting the presence of vast volumes of functionally anoxic waters in the eastern Pacific Ocean. Oxygen was often not detectable in the deep secondary chlorophyll maximum found at some locations, but other secondary maxima contained up to 0.4 μmol L-1. Directly measured respiration rates were high in surface and subsurface oxic layers of the coastal waters, reaching values up to 85 nmol L-1 O2 h-1. Substantially lower values were found at the depths of the upper oxycline, where values varied from 2 to 33 nmol L-1 O2 h-1. Where secondary chlorophyll maxima were found the rates were higher than in the oxic water just above. Incubation times longer than 20 h, in the all-glass containers, resulted in highly increased respiration rates. Addition of amino acids to the water from the upper oxycline did not lead to a significant initial rise in respiration rate within the first 20 h, indicating that the measurement of respiration rates in oligotrophic Ocean water may not be severely affected by low levels of organic contamination during sampling. Our measurements indicate that aerobic metabolism proceeds efficiently at extremely low oxygen concentrations with apparent half-saturation concentrations (Km values) ranging from about 10 to about 200 nmol L-1.
20 CFR 229.49 - Adjustment of benefits under family maximum for change in family group.
Code of Federal Regulations, 2010 CFR
2010-04-01
... for change in family group. 229.49 Section 229.49 Employees' Benefits RAILROAD RETIREMENT BOARD... Overall Minimum Rate § 229.49 Adjustment of benefits under family maximum for change in family group. (a) Increase in family group. If an overall minimum rate is adjusted for the family maximum and an additional...
76 FR 37996 - West Virginia Regulatory Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-29
... Regulations (CSR) to provide for the establishment of a minimum incremental bonding rate as authorized by... minimum incremental bonding rate of $10,000 per increment at CSR 38-2-11.4.a.2. Because these revisions... at CSR 38-2-11.4.a.2. Section 22-3-11(a) of WVSCMRA currently requires mining operators to furnish a...
Fácio, Cássio L; Previato, Lígia F; Machado-Paula, Ligiane A; Matheus, Paulo Cs; Araújo, Edilberto
2016-12-01
This study aimed to assess and compare sperm motility, concentration, and morphology recovery rates, before and after processing through sperm washing followed by swim-up or discontinuous density gradient centrifugation in normospermic individuals. Fifty-eight semen samples were used in double intrauterine insemination procedures; 17 samples (group 1) were prepared with sperm washing followed by swim-up, and 41 (group 2) by discontinuous density gradient centrifugation. This prospective non-randomized study assessed seminal parameters before and after semen processing. A dependent t-test was used for the same technique to analyze seminal parameters before and after semen processing; an independent t-test was used to compare the results before and after processing for both techniques. The two techniques produced decreases in sample concentration (sperm washing followed by swim-up: P<0.000006; discontinuous density gradient centrifugation: P=0.008457) and increases in motility and normal morphology sperm rates after processing. The difference in sperm motility between the two techniques was not statistically significant. Sperm washing followed by swim-up had better morphology recovery rates than discontinuous density gradient centrifugation (P=0.0095); and the density gradient group had better concentration recovery rates than the swim-up group (P=0.0027). The two methods successfully recovered the minimum sperm values needed to perform intrauterine insemination. Sperm washing followed by swim-up is indicated for semen with high sperm concentration and better morphology recovery rates. Discontinuous density gradient centrifugation produced improved concentration recovery rates.
In vitro degradation of ZM21 magnesium alloy in simulated body fluids.
Witecka, Agnieszka; Bogucka, Aleksandra; Yamamoto, Akiko; Máthis, Kristián; Krajňák, Tomáš; Jaroszewicz, Jakub; Święszkowski, Wojciech
2016-08-01
In vitro degradation behavior of squeeze cast (CAST) and equal channel angular pressed (ECAP) ZM21 magnesium alloy (2.0wt% Zn-0.98wt% Mn) was studied using immersion tests up to 4w in three different biological environments. Hanks' Balanced Salt Solution (Hanks), Earle's Balanced Salt Solution (Earle) and Eagle minimum essential medium supplemented with 10% (v/v) fetal bovine serum (E-MEM+10% FBS) were used to investigate the effect of carbonate buffer system, organic compounds and material processing on the degradation behavior of the ZM21 alloy samples. Corrosion rate of the samples was evaluated by their Mg(2+) ion release, weight loss and volume loss. In the first 24h, the corrosion rate sequence of the CAST samples was as following: Hanks>E-MEM+10% FBS>Earle. However, in longer immersion periods, the corrosion rate sequence was Earle>E-MEM+10% FBS≥Hanks. Strong buffering effect provided by carbonate buffer system helped to maintain the pH avoiding drastic increase of the corrosion rate of ZM21 in the initial stage of immersion. Organic compounds also contributed to maintain the pH of the fluid. Moreover, they adsorbed on the sample surface and formed an additional barrier on the insoluble salt layer, which was effective to retard the corrosion of CAST samples. In case of ECAP, however, this effect was overcome by the occurrence of strong localized corrosion due to the lower pH of the medium. Corrosion of ECAP samples was much greater than that of CAST, especially in Hanks, due to higher sensitivity of ECAP to localized corrosion and the presence of Cl(-). The present work demonstrates the importance of using an appropriate solution for a reliable estimation of the degradation rate of Mg-base degradable implants in biological environments, and concludes that the most appropriate solution for this purpose is E-MEM+10% FBS, which has the closest chemical composition to human blood plasma. Copyright © 2016 Elsevier B.V. All rights reserved.
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
20 CFR 229.50 - Age reduction in employee or spouse benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...
29 CFR 780.313 - Piece rate basis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... That Is Exempted From the Minimum Wage and Overtime Pay Requirements Under Section 13(a)(6) Statutory... to the minimum wage provisions of the Act does not meet all the requirements set forth in this section he must be paid at least the minimum wage for each hour worked in a particular workweek...
The minimum control authority of a system of actuators with applications to Gravity Probe-B
NASA Technical Reports Server (NTRS)
Wiktor, Peter; Debra, Dan
1991-01-01
The forcing capabilities of systems composed of many actuators are analyzed in this paper. Multiactuator systems can generate higher forces in some directions than in others. Techniques are developed to find the force in the weakest direction. This corresponds to the worst-case output and is defined as the 'minimum control authority'. The minimum control authority is a function of three things: the actuator configuration, the actuator controller and the way in which the output of the system is limited. Three output limits are studied: (1) fuel-flow rate, (2) power, and (3) actuator output. The three corresponding actuator controllers are derived. These controllers generate the desired force while minimizing either fuel flow rate, power or actuator output. It is shown that using the optimal controller can substantially increase the minimum control authority. The techniques for calculating the minimum control authority are applied to the Gravity Probe-B spacecraft thruster system. This example shows that the minimum control authority can be used to design the individual actuators, choose actuator configuration, actuator controller, and study redundancy.
The impact of minimum wages on population health: evidence from 24 OECD countries.
Lenhart, Otto
2017-11-01
This study examines the relationship between minimum wages and several measures of population health by analyzing data from 24 OECD countries for a time period of 31 years. Specifically, I test for health effects as a result of within-country variations in the generosity of minimum wages, which are measured by the Kaitz index. The paper finds that higher levels of minimum wages are associated with significant reductions of overall mortality rates as well as in the number of deaths due to outcomes that have been shown to be more prevalent among individuals with low socioeconomic status (e.g., diabetes, disease of the circulatory system, stroke). A 10% point increase of the Kaitz index is associated with significant declines in death rates and an increase in life expectancy of 0.44 years. Furthermore, I provide evidence for potential channels through which minimum wages impact population health by showing that more generous minimum wages impact outcomes such as poverty, the share of the population with unmet medical needs, the number of doctor consultations, tobacco consumption, calorie intake, and the likelihood of people being overweight.
Increasing the minimum age of marriage program to improve maternal and child health in Indonesia
NASA Astrophysics Data System (ADS)
Anjarwati
2017-08-01
The objective of the article is to review the importance of understanding the adolescent reproductive health, especially the impact of early marriage to have commitment for health maintenance by increasing the minimum age of marriage. There are countless studies describing the impact of pregnancy at a very young age, the risk that young people must understand to support the program of increasing minimum age of marriage in Indonesia. Increasing the minimum age of marriage is as one of the government programs in improving maternal and child health. It also supports the Indonesian government's program about a thousand days of life. It is required that teens understand the impact of early marriage to prepare for optimal health for future generations. The maternal mortality rate and infant mortality rate in Indonesia is still high because health is not optimal since the early period of pregnancy. These studies reveal that the increased number of early marriages leads to rising divorce rate, maternal mortality rate, and infant mortality and intensifies the risk of cervical cancer. The increase in early marriage is mostly attributed to unwanted pregnancy. It is revealed that early marriage increases the rate of pregnancy at too young an age with the risk of maternal and child health in Indonesia.
NASA Astrophysics Data System (ADS)
Fukuda, Satoru; Nakajima, Teruyuki; Takenaka, Hideaki; Higurashi, Akiko; Kikuchi, Nobuyuki; Nakajima, Takashi Y.; Ishida, Haruma
2013-12-01
satellite aerosol retrieval algorithm was developed to utilize a near-ultraviolet band of the Greenhouse gases Observing SATellite/Thermal And Near infrared Sensor for carbon Observation (GOSAT/TANSO)-Cloud and Aerosol Imager (CAI). At near-ultraviolet wavelengths, the surface reflectance over land is smaller than that at visible wavelengths. Therefore, it is thought possible to reduce retrieval error by using the near-ultraviolet spectral region. In the present study, we first developed a cloud shadow detection algorithm that uses first and second minimum reflectances of 380 nm and 680 nm based on the difference in Rayleigh scattering contribution for these two bands. Then, we developed a new surface reflectance correction algorithm, the modified Kaufman method, which uses minimum reflectance data at 680 nm and the NDVI to estimate the surface reflectance at 380 nm. This algorithm was found to be particularly effective at reducing the aerosol effect remaining in the 380 nm minimum reflectance; this effect has previously proven difficult to remove owing to the infrequent sampling rate associated with the three-day recursion period of GOSAT and the narrow CAI swath of 1000 km. Finally, we applied these two algorithms to retrieve aerosol optical thicknesses over a land area. Our results exhibited better agreement with sun-sky radiometer observations than results obtained using a simple surface reflectance correction technique using minimum radiances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pombet, Denis; Desnoyers, Yvon; Charters, Grant
2013-07-01
The TruPro{sup R} process enables to collect a significant number of samples to characterize radiological materials. This innovative and alternative technique is experimented for the ANDRA quality-control inspection of cemented packages. It proves to be quicker and more prolific than the current methodology. Using classical statistics and geo-statistics approaches, the physical and radiological characteristics of two hulls containing immobilized wastes (sludges or concentrates) in a hydraulic binder are assessed in this paper. The waste homogeneity is also evaluated in comparison to ANDRA criterion. Sensibility to sample size (support effect), presence of extreme values, acceptable deviation rate and minimum number ofmore » data are discussed. The final objectives are to check the homogeneity of the two characterized radwaste packages and also to validate and reinforce this alternative characterization methodology. (authors)« less
Wang, Rong
2015-01-01
In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.
Atmospheric CO2 Records from Sites in the Umweltbundesamt (UBA) Air Sampling Network (1972 - 1997)
Fricke, W. [Umweltbundesamt, Offenbach/Main, Germany; Wallasch, M. [Umweltbundesamt, Offenbach/Main, Germany; Uhse, Karin [Umweltbundesamt, Offenbach/Main, Germany; Schmidt, Martina [University of Heidelberg, Heidelberg, Germany; Levin, Ingeborg [University of Heidelberg, Heidelberg, Germany
1998-01-01
Air samples for the purpose of monitoring atmospheric CO2 were collected from five sites in the UBA air sampling network. Annual atmospheric CO2 concentrations at Brotjacklriegel rose from 331.63 parts per million by volume (ppmv) in 1972 to 353.12 ppmv in 1988. Because of the site's forest location, the monthly atmospheric CO2 record from Brotjacklriegel exhibits very large seasonal amplitude. This amplitude reached almost 40 ppmv in 1985. Minimum mixing ratios are recorded at Brotjacklriegel during July-September; maximum values, during November-March. CO2 concentrations at Deuselbach rose from 340.82 parts per million by volume (ppmv) in 1972 to 363.76 ppmv in 1989. The monthly atmospheric CO2 record from Deuselbach is influenced by local agricultural activities and photosynthetic depletion but does not exhibit the large seasonal amplitude observed at other UBA monitoring sites. Minimum monthly atmospheric CO2 mixing ratios at Deuselbach are typically observed in August but may appear as early as June. Maximum values are seen in the record for November-March. Atmospheric CO2 concentrations at Schauinsland rose from ~328 parts per million by volume (ppmv) in 1972 to ~365 ppmv in 1997. This represents a growth rate of approximately 1.5 ppmv per year. The Schauinsland site is considered the least contaminated of the UBA sites. CO2 concentrations at Waldhof rose from 346.82 parts per million by volume (ppmv) in 1972 to 372.09 ppmv in 1993. The Waldhof site is subject to pollution sources; consequently, the monthly atmospheric CO2 record exhibits a large seasonal amplitude. Atmospheric CO2 concentrations at Westerland rose from ~329 parts per million by volume (ppmv) in 1973 to ~364 ppmv in 1997. The atmospheric CO2 record from Westerland shows a seasonal pattern similar to other UBA sites; minimum values are recorded during July-September; maximum mixing ratios during November-March.
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
40 CFR 63.1257 - Test methods and compliance procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
...)(2), or 63.1256(h)(2)(i)(C) with a minimum residence time of 0.5 seconds and a minimum temperature of... temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B) For a...
Systematic adaptation of data delivery
Bakken, David Edward
2016-02-02
This disclosure describes, in part, a system management component for use in a power grid data network to systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription and the system management component may adjust the data rates in real-time to ensure that the power grid data network does not become overloaded and/or fail. In one example, subscriptions with lower priorities may have their quality of service adjusted before subscriptions with higher priorities. In each instance, the quality of service may be maintained, even if reduced, to meet or exceed the minimum acceptable quality of service for the subscription.
The evaluation of alternate methodologies for land cover classification in an urbanizing area
NASA Technical Reports Server (NTRS)
Smekofski, R. M.
1981-01-01
The usefulness of LANDSAT in classifying land cover and in identifying and classifying land use change was investigated using an urbanizing area as the study area. The question of what was the best technique for classification was the primary focus of the study. The many computer-assisted techniques available to analyze LANDSAT data were evaluated. Techniques of statistical training (polygons from CRT, unsupervised clustering, polygons from digitizer and binary masks) were tested with minimum distance to the mean, maximum likelihood and canonical analysis with minimum distance to the mean classifiers. The twelve output images were compared to photointerpreted samples, ground verified samples and a current land use data base. Results indicate that for a reconnaissance inventory, the unsupervised training with canonical analysis-minimum distance classifier is the most efficient. If more detailed ground truth and ground verification is available, the polygons from the digitizer training with the canonical analysis minimum distance is more accurate.
Zhang, Qing; Zhu, Liang; Feng, Hanhua; Ang, Simon; Chau, Fook Siong; Liu, Wen-Tso
2006-01-18
This paper reported the development of a microfludic device for the rapid detection of viable and nonviable microbial cells through dual labeling by fluorescent in situ hybridization (FISH) and quantum dots (QDs)-labeled immunofluorescent assay (IFA). The coin sized device consists of a microchannel and filtering pillars (gap=1-2 microm) and was demonstrated to effectively trap and concentrate microbial cells (i.e. Giardia lamblia). After sample injection, FISH probe solution and QDs-labeled antibody solution were sequentially pumped into the device to accelerate the fluorescent labeling reactions at optimized flow rates (i.e. 1 and 20 microL/min, respectively). After 2 min washing for each assay, the whole process could be finished within 30 min, with minimum consumption of labeling reagents and superior fluorescent signal intensity. The choice of QDs 525 for IFA resulted in bright and stable fluorescent signal, with minimum interference with the Cy3 signal from FISH detection.
Telephone survey of hospital staff knowledge of medical device surveillance in a Paris hospital.
Mazeau, Valérie; Grenier-Sennelier, Catherine; Paturel, Denys Xavier; Mokhtari, Mostafa; Vidal-Trecan, Gwenaëlle
2004-12-01
Reporting of incidents or near incidents because of medical devices in French hospitals relies on procedures following European and national guidelines. The authors intend to evaluate hospital staff knowledge on these surveillance procedures as a marker of appropriate application. A telephone survey is conducted on a sample of Paris University hospital staff (n = 327) using a structured questionnaire. Two-hundred sixteen persons completed the questionnaire. The response rate was lower among physicians, especially surgeons paid on an hourly basis. Rates of correct answers were different according to age, seniority, job, and department categories. Physicians and nurses correctly answered questions on theoretical knowledge more often than the other job categories. However, on questions dealing with actual practice conditions, correct answers depended more on age and seniority with a U-shaped distribution (minimum rates in intermediate categories of age and seniority).
Release-rate calorimetry of multilayered materials for aircraft seats
NASA Technical Reports Server (NTRS)
Fewell, L. L.; Duskin, F. E.; Spieth, H.; Trabold, E.; Parker, J. A.
1979-01-01
Multilayered samples of contemporary and improved fire resistant aircraft seat materials (foam cushion, decorative fabric, slip sheet, fire blocking layer, and cushion reinforcement layer) were evaluated for their rates of heat release and smoke generation. Top layers (decorative fabric, slip sheet, fire blocking, and cushion reinforcement) with glass fiber block cushion were evaluated to determine which materials based on their minimum contributions to the total heat release of the multilayered assembly may be added or deleted. Top layers exhibiting desirable burning profiles were combined with foam cushion materials. The smoke and heat release rates of multilayered seat materials were then measured at heat fluxes of 1.5 and 3.5 W/sq cm. Choices of contact and silicone adhesives for bonding multilayered assemblies were based on flammability, burn and smoke generation, animal toxicity tests, and thermal gravimetric analysis. Abrasion tests were conducted on the decorative fabric covering and slip sheet to ascertain service life and compatibility of layers.
Disorder dependence electron phonon scattering rate of V82Pd18 - xFex alloys at low temperature
NASA Astrophysics Data System (ADS)
Jana, R. N.; Meikap, A. K.
2018-04-01
We have systematically investigated the disorder dependence electron phonon scattering rate in three dimensional disordered V82Pd18 - xFex alloys. A minimum in temperature dependence resistivity curve has been observed at low temperature T =Tm. In the temperature range 5 K ≤ T ≤Tm the resistivity correction follows ρo 5 / 2T 1 / 2 law. The dephasing scattering time has been calculated from analysis of magnetoresistivity by weak localization theory. The electron dephasing time is dominated by electron-phonon scattering and follows anomalous temperature (T) and disorder (ρ0) dependence behaviour like τe-ph-1 ∝T2 /ρ0, where ρ0 is the impurity resistivity. The magnitude of the saturated dephasing scattering time (τ0) at zero temperature decreases with increasing disorder of the samples. Such anomalous behaviour of dephasing scattering rate is still unresolved.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1977-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP when the rate is a random variable with a probability density function of the form cx super K (l-x) super m and show that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
Optimal estimation for discrete time jump processes
NASA Technical Reports Server (NTRS)
Vaca, M. V.; Tretter, S. A.
1978-01-01
Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
MO-D-213-07: RadShield: Semi- Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing NCRP Report 147 formalism into a Graphical User Interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow the specification in regions and equipment for occupancymore » factors, design goals, number of patients, primary beam directions, source-to-patient distances and workload distributions. Once the user enters the above parameters, the program automatically calculates air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.« less
Turboprop: improved PROPELLER imaging.
Pipe, James G; Zwart, Nicholas
2006-02-01
A variant of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI, called turboprop, is introduced. This method employs an oscillating readout gradient during each spin echo of the echo train to collect more lines of data per echo train, which reduces the minimum scan time, motion-related artifact, and specific absorption rate (SAR) while increasing sampling efficiency. It can be applied to conventional fast spin-echo (FSE) imaging; however, this article emphasizes its application in diffusion-weighted imaging (DWI). The method is described and compared with conventional PROPELLER imaging, and clinical images collected with this PROPELLER variant are shown. Copyright 2006 Wiley-Liss, Inc.
Rodríguez, Inés; Alfonso, Amparo; Alonso, Eva; Rubiolo, Juan A; Roel, María; Vlamis, Aristidis; Katikou, Panagiota; Jackson, Stephen A; Menon, Margassery Lekha; Dobson, Alan; Botana, Luis M
2017-01-20
In 2012, Tetrodotoxin (TTX) was identified in mussels and linked to the presence of Prorocentrum minimum (P. minimum) in Greece. The connexion between TTX and P. minimum was further studied in this paper. First, the presence of TTX-producer bacteria, Vibrio and Pseudomonas spp, was confirmed in Greek mussels. In addition these samples showed high activity as inhibitors of sodium currents (I Na ). P. minimum was before associated with neurotoxic symptoms, however, the nature and structure of toxins produced by this dinoflagellate remains unknown. Three P. minimum strains, ccmp1529, ccmp2811 and ccmp2956, growing in different conditions of temperature, salinity and light were used to study the production of toxic compounds. Electrophysiological assays showed no effect of ccmp2811 strain on I Na , while ccmp1529 and ccmp2956 strains were able to significantly reduce I Na in the same way as TTX. In these samples two new compounds, m/z 265 and m/z 308, were identified and characterized by liquid chromatography tandem high-resolution mass spectrometry. Besides, two TTX-related bacteria, Roseobacter and Vibrio sp, were observed. These results show for the first time that P. minimum produce TTX-like compounds with a similar ion pattern and C9-base to TTX analogues and with the same effect on I Na .
Influence of Sampling Effort on the Estimated Richness of Road-Killed Vertebrate Wildlife
NASA Astrophysics Data System (ADS)
Bager, Alex; da Rosa, Clarissa A.
2011-05-01
Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates the community structure derived from weekly samplings, and by the presence of the ten species most subject to road mortality, and also of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.
Influence of sampling effort on the estimated richness of road-killed vertebrate wildlife.
Bager, Alex; da Rosa, Clarissa A
2011-05-01
Road-killed mammals, birds, and reptiles were collected weekly from highways in southern Brazil in 2002 and 2005. The objective was to assess variation in estimates of road-kill impacts on species richness produced by different sampling efforts, and to provide information to aid in the experimental design of future sampling. Richness observed in weekly samples was compared with sampling for different periods. In each period, the list of road-killed species was evaluated based on estimates the community structure derived from weekly samplings, and by the presence of the ten species most subject to road mortality, and also of threatened species. Weekly samples were sufficient only for reptiles and mammals, considered separately. Richness estimated from the biweekly samples was equal to that found in the weekly samples, and gave satisfactory results for sampling the most abundant and threatened species. The ten most affected species showed constant road-mortality rates, independent of sampling interval, and also maintained their dominance structure. Birds required greater sampling effort. When the composition of road-killed species varies seasonally, it is necessary to take biweekly samples for a minimum of one year. Weekly or more-frequent sampling for periods longer than two years is necessary to provide a reliable estimate of total species richness.
Temperature constraints on the Ginkgo flow of the Columbia River Basalt Group
NASA Astrophysics Data System (ADS)
Ho, Anita M.; Cashman, Katharine V.
1997-05-01
This study provides the first quantitative estimate of heat loss for a Columbia River Basalt Group flow. A glass composition-based geothermometer was experimentally calibrated for a composition representative of the 500-km-long Ginkgo flow of the Columbia River Basalt Group to measure temperature change during transport. Melting experiments were conducted on a bulk sample at 1 atm between 1200 and 1050 °C. Natural glass was sampled from the margin of a feeder dike near Kahlotus, Washington, and from pillow basalt at distances of 120 km (Vantage, Washington), 350 km (Molalla, Oregon), and 370 km (Portland, Oregon). Ginkgo basalt was also sampled at its distal end at Yaquina Head, Oregon (500 km). Comparison of the glass MgO content, K2O in plagioclase, and measured crystallinities in the experimental charges and natural samples tightly constrains the minimum flow temperature to 1085 ± 5 °C. Glass and plagioclase compositions indicate an upper temperature of 1095 ± 5 °C; thus the maximum temperature decrease along the flow axis of the Ginkgo is 20 °C, suggesting cooling rates of 0.02 0.04 °C/km. These cooling rates, substantially lower than rates observed in active and historic flows, are inconsistent with turbulent flow models. Calculated melt temperatures and viscosities of 240 750 Pa · s allow emplacement either as a fast laminar flow under an insulating crust or as a slower, inflated flow.
Electrofishing effort required to estimate biotic condition in southern Idaho Rivers
Maret, Terry R.; Ott, Douglas S.; Herlihy, Alan T.
2007-01-01
An important issue surrounding biomonitoring in large rivers is the minimum sampling effort required to collect an adequate number of fish for accurate and precise determinations of biotic condition. During the summer of 2002, we sampled 15 randomly selected large-river sites in southern Idaho to evaluate the effects of sampling effort on an index of biotic integrity (IBI). Boat electrofishing was used to collect sample populations of fish in river reaches representing 40 and 100 times the mean channel width (MCW; wetted channel) at base flow. Minimum sampling effort was assessed by comparing the relation between reach length sampled and change in IBI score. Thirty-two species of fish in the families Catostomidae, Centrarchidae, Cottidae, Cyprinidae, Ictaluridae, Percidae, and Salmonidae were collected. Of these, 12 alien species were collected at 80% (12 of 15) of the sample sites; alien species represented about 38% of all species (N = 32) collected during the study. A total of 60% (9 of 15) of the sample sites had poor IBI scores. A minimum reach length of about 36 times MCW was determined to be sufficient for collecting an adequate number of fish for estimating biotic condition based on an IBI score. For most sites, this equates to collecting 275 fish at a site. Results may be applicable to other semiarid, fifth-order through seventh-order rivers sampled during summer low-flow conditions.
ERIC Educational Resources Information Center
Zwick, Rebecca
2012-01-01
Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…
Code of Federal Regulations, 2010 CFR
2010-07-01
... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...
Code of Federal Regulations, 2011 CFR
2011-07-01
... minimum pressure drop and liquid flow-rate at or above the operating levels established during the... leak detection system alarm does not sound more than 5 percent of the operating time during a 6-month... control Maintain the minimum sorbent or carbon injection rate at or above the operating levels established...
USSR Report International Affairs.
1986-09-02
minimum interest rate (price of a loan), proportion of the value of a contract to be covered by an easy loan (minimum size of payments in cash...Kuwait Laos Lebanon Malaysia Mongolian People’s Republic Nepal Pakistan Saudi Arabia Singapore Syria Turnover Export Import Turnover...including the blockade imposed on export financing. The latter was started in July 1980 by swiftly increasing the interest rates on foreign trade loans
Evidence for ultrafast outflows in radio-quiet AGNs - III. Location and energetics
NASA Astrophysics Data System (ADS)
Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.
2012-05-01
Using the results of a previous X-ray photoionization modelling of blueshifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this Letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval ˜0.0003-0.03 pc (˜ 102-104rs) from the central black hole, consistent with what is expected for accretion disc winds/outflows. The mass outflow rates are constrained between ˜0.01 and 1 M⊙ yr-1, corresponding to >rsim5-10 per cent of the accretion rates. The average lower/upper limits on the mechanical power are log? 42.6-44.6 erg s-1. However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable to provide a significant contribution to the AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.
Modeling Hybridization Kinetics of Gene Probes in a DNA Biochip Using FEMLAB
Munir, Ahsan; Waseem, Hassan; Williams, Maggie R.; Stedtfeld, Robert D.; Gulari, Erdogan; Tiedje, James M.; Hashsham, Syed A.
2017-01-01
Microfluidic DNA biochips capable of detecting specific DNA sequences are useful in medical diagnostics, drug discovery, food safety monitoring and agriculture. They are used as miniaturized platforms for analysis of nucleic acids-based biomarkers. Binding kinetics between immobilized single stranded DNA on the surface and its complementary strand present in the sample are of interest. To achieve optimal sensitivity with minimum sample size and rapid hybridization, ability to predict the kinetics of hybridization based on the thermodynamic characteristics of the probe is crucial. In this study, a computer aided numerical model for the design and optimization of a flow-through biochip was developed using a finite element technique packaged software tool (FEMLAB; package included in COMSOL Multiphysics) to simulate the transport of DNA through a microfluidic chamber to the reaction surface. The model accounts for fluid flow, convection and diffusion in the channel and on the reaction surface. Concentration, association rate constant, dissociation rate constant, recirculation flow rate, and temperature were key parameters affecting the rate of hybridization. The model predicted the kinetic profile and signal intensities of eighteen 20-mer probes targeting vancomycin resistance genes (VRGs). Predicted signal intensities and hybridization kinetics strongly correlated with experimental data in the biochip (R2 = 0.8131). PMID:28555058
Modeling Hybridization Kinetics of Gene Probes in a DNA Biochip Using FEMLAB.
Munir, Ahsan; Waseem, Hassan; Williams, Maggie R; Stedtfeld, Robert D; Gulari, Erdogan; Tiedje, James M; Hashsham, Syed A
2017-05-29
Microfluidic DNA biochips capable of detecting specific DNA sequences are useful in medical diagnostics, drug discovery, food safety monitoring and agriculture. They are used as miniaturized platforms for analysis of nucleic acids-based biomarkers. Binding kinetics between immobilized single stranded DNA on the surface and its complementary strand present in the sample are of interest. To achieve optimal sensitivity with minimum sample size and rapid hybridization, ability to predict the kinetics of hybridization based on the thermodynamic characteristics of the probe is crucial. In this study, a computer aided numerical model for the design and optimization of a flow-through biochip was developed using a finite element technique packaged software tool (FEMLAB; package included in COMSOL Multiphysics) to simulate the transport of DNA through a microfluidic chamber to the reaction surface. The model accounts for fluid flow, convection and diffusion in the channel and on the reaction surface. Concentration, association rate constant, dissociation rate constant, recirculation flow rate, and temperature were key parameters affecting the rate of hybridization. The model predicted the kinetic profile and signal intensities of eighteen 20-mer probes targeting vancomycin resistance genes (VRGs). Predicted signal intensities and hybridization kinetics strongly correlated with experimental data in the biochip (R² = 0.8131).
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method of Oncomelania hupensis snail in marshland endemic region of schistosomiasis and increase the precision, efficiency and economy of the snail survey. A quadrate experimental field was selected as the subject of 50 m×50 m size in Chayegang marshland near Henghu farm in the Poyang Lake region and a whole-covered method was adopted to survey the snails. The simple random sampling, systematic sampling and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error and absolute sampling error. The minimum sample sizes of the simple random sampling, systematic sampling and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of three methods were all less than 15%. The absolute sampling errors were 0.221 7, 0.302 4 and 0.047 8, respectively. The spatial stratified sampling with altitude as the stratum variable is an efficient approach of lower cost and higher precision for the snail survey.
High pressure die casting of Fe-based metallic glass.
Ramasamy, Parthiban; Szabo, Attila; Borzel, Stefan; Eckert, Jürgen; Stoica, Mihai; Bárdos, András
2016-10-11
Soft ferromagnetic Fe-based bulk metallic glass key-shaped specimens with a maximum and minimum width of 25.4 and 5 mm, respectively, were successfully produced using a high pressure die casting (HPDC) method, The influence of die material, alloy temperature and flow rate on the microstructure, thermal stability and soft ferromagnetic properties has been studied. The results suggest that a steel die in which the molten metal flows at low rate and high temperature can be used to produce completely glassy samples. This can be attributed to the laminar filling of the mold and to a lower heat transfer coefficient, which avoids the skin effect in the steel mold. In addition, magnetic measurements reveal that the amorphous structure of the material is maintained throughout the key-shaped samples. Although it is difficult to control the flow and cooling rate of the molten metal in the corners of the key due to different cross sections, this can be overcome by proper tool geometry. The present results confirm that HPDC is a suitable method for the casting of Fe-based bulk glassy alloys even with complex geometries for a broad range of applications.
High pressure die casting of Fe-based metallic glass
NASA Astrophysics Data System (ADS)
Ramasamy, Parthiban; Szabo, Attila; Borzel, Stefan; Eckert, Jürgen; Stoica, Mihai; Bárdos, András
2016-10-01
Soft ferromagnetic Fe-based bulk metallic glass key-shaped specimens with a maximum and minimum width of 25.4 and 5 mm, respectively, were successfully produced using a high pressure die casting (HPDC) method, The influence of die material, alloy temperature and flow rate on the microstructure, thermal stability and soft ferromagnetic properties has been studied. The results suggest that a steel die in which the molten metal flows at low rate and high temperature can be used to produce completely glassy samples. This can be attributed to the laminar filling of the mold and to a lower heat transfer coefficient, which avoids the skin effect in the steel mold. In addition, magnetic measurements reveal that the amorphous structure of the material is maintained throughout the key-shaped samples. Although it is difficult to control the flow and cooling rate of the molten metal in the corners of the key due to different cross sections, this can be overcome by proper tool geometry. The present results confirm that HPDC is a suitable method for the casting of Fe-based bulk glassy alloys even with complex geometries for a broad range of applications.
High pressure die casting of Fe-based metallic glass
Ramasamy, Parthiban; Szabo, Attila; Borzel, Stefan; Eckert, Jürgen; Stoica, Mihai; Bárdos, András
2016-01-01
Soft ferromagnetic Fe-based bulk metallic glass key-shaped specimens with a maximum and minimum width of 25.4 and 5 mm, respectively, were successfully produced using a high pressure die casting (HPDC) method, The influence of die material, alloy temperature and flow rate on the microstructure, thermal stability and soft ferromagnetic properties has been studied. The results suggest that a steel die in which the molten metal flows at low rate and high temperature can be used to produce completely glassy samples. This can be attributed to the laminar filling of the mold and to a lower heat transfer coefficient, which avoids the skin effect in the steel mold. In addition, magnetic measurements reveal that the amorphous structure of the material is maintained throughout the key-shaped samples. Although it is difficult to control the flow and cooling rate of the molten metal in the corners of the key due to different cross sections, this can be overcome by proper tool geometry. The present results confirm that HPDC is a suitable method for the casting of Fe-based bulk glassy alloys even with complex geometries for a broad range of applications. PMID:27725780
NASA Astrophysics Data System (ADS)
Smith, C. G.; Cable, J. E.; Martin, J. B.; Roy, M.
2008-05-01
Pore water distributions of 222Rn (t1/2 = 3.83 d), obtained during two sampling trips 9-12 May 2005 and 6-8 May 2006, are used to determine spatial and temporal variations of fluid discharge from a seepage face located along the mainland shoreline of Indian River Lagoon, Florida. Porewater samples were collected from a 30 m transect of multi-level piezometers and analyzed for 222Rn via liquid scintillation counting; the mean of triplicate measurements was used to represent the porewater 222Rn activities. Sediment samples were collected from five vibracores (0, 10, 17.5, 20, and 30 m offshore) and emanation rates of 222Rn (sediment supported) were determined using a standard cryogenic extraction technique. A conceptual 222Rn transport model and subsequent numerical model were developed based on the vertical distribution of dissolved and sediment-supported 222Rn and applicable processes occurring along the seepage face (e.g. advection, diffusion, and nonlocal exchange). The model was solved inversely with the addition of two Monte Carlo (MC) simulations to increase the statistical reliability of three parameters: fresh groundwater seepage velocity (v), irrigation intensity (α0), and irrigation attenuation (α1). The first MC simulation ensures that the Nelder-Mead minimization algorithm converges on a global minimum of the merit function and that the parameters estimates are consistent within this global minimum. The second MC simulation provides 90% confidence intervals on the parameter estimates using the measured 222Rn activity variance. Fresh groundwater seepage velocities obtained from the model decrease linearly with distance from the shoreline; seepage velocities range between 0.6 and 42.2 cm d-1. Based on this linear relationship, the terminus of the fresh groundwater seepage is approximately 25 m offshore and total fresh groundwater discharge for the May-2005 and May-2006 sampling trips are 1.16 and 1.45 m3 d-1 m-1 of shoreline, respectively. We hypothesize that the 25% increase in specific discharge between May-2005 and May- 2006 reflects higher recharge via precipitation to the Surficial aquifer during the highly active 2005 Atlantic hurricane season. Irrigation rates generally decrease offshore for both sampling periods; irrigation rates range between 4.9 and 85.7 cm d-1. Physical and biological mechanisms reasonable for the observed irrigation likely include density-driven convection, wave pumping, and bio-irrigation. The inclusion of both advective and nonlocal exchange processes in the model permits the separation of submarine groundwater discharge into fresh submarine groundwater discharge (seepage velocities) and (re)circulated lagoon water (as irrigation).
ERIC Educational Resources Information Center
Carey, Theodore; Carifio, James
2012-01-01
In an effort to reduce failure and drop-out rates, schools have been implementing minimum grading. One form involves raising catastrophically low student quarter grades to a predetermined minimum--typically a 50. Proponents argue it gives struggling students a reasonable chance to recover from failure. Critics contend the practice induces grade…
29 CFR 780.620 - Minimum wage for livestock auction work.
Code of Federal Regulations, 2010 CFR
2010-07-01
... minimum rate required by section 6(a)(1) of the Act for the time spent in livestock auction work. The exemption does not apply unless there is payment for all hours spent in livestock auction work at not less... 29 Labor 3 2010-07-01 2010-07-01 false Minimum wage for livestock auction work. 780.620 Section...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2013 CFR
2013-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2014 CFR
2014-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
42 CFR 84.207 - Bench tests; gas and vapor tests; minimum requirements; general.
Code of Federal Regulations, 2012 CFR
2012-10-01
....) Flowrate (l.p.m.) Number of tests Penetration 1 (p.p.m.) Minimum life 2 (min.) Ammonia As received NH3 1000... minimum life shall be one-half that shown for each type of gas or vapor. Where a respirator is designed... at predetermined concentrations and rates of flow, and that has means for determining the test life...
Zimmerman, Frederick J.; Ralston, James D.; Martin, Diane P.
2011-01-01
Objectives. We examined whether minimum wage policy is associated with access to medical care among low-skilled workers in the United States. Methods. We used multilevel logistic regression to analyze a data set consisting of individual-level indicators of uninsurance and unmet medical need from the Behavioral Risk Factor Surveillance System and state-level ecological controls from the US Census, Bureau of Labor Statistics, and several other sources in all 50 states and the District of Columbia between 1996 and 2007. Results. Higher state-level minimum wage rates were associated with significantly reduced odds of reporting unmet medical need after control for the ecological covariates, substate region fixed effects, and individual demographic and health characteristics (odds ratio = 0.853; 95% confidence interval = 0.750, 0.971). Minimum wage rates were not significantly associated with being uninsured. Conclusions. Higher minimum wages may be associated with a reduced likelihood of experiencing unmet medical need among low-skilled workers, and do not appear to be associated with uninsurance. These findings appear to refute the suggestion that minimum wage laws have detrimental effects on access to health care, as opponents of the policies have suggested. PMID:21164102
McCarrier, Kelly P; Zimmerman, Frederick J; Ralston, James D; Martin, Diane P
2011-02-01
We examined whether minimum wage policy is associated with access to medical care among low-skilled workers in the United States. We used multilevel logistic regression to analyze a data set consisting of individual-level indicators of uninsurance and unmet medical need from the Behavioral Risk Factor Surveillance System and state-level ecological controls from the US Census, Bureau of Labor Statistics, and several other sources in all 50 states and the District of Columbia between 1996 and 2007. Higher state-level minimum wage rates were associated with significantly reduced odds of reporting unmet medical need after control for the ecological covariates, substate region fixed effects, and individual demographic and health characteristics (odds ratio = 0.853; 95% confidence interval = 0.750, 0.971). Minimum wage rates were not significantly associated with being uninsured. Higher minimum wages may be associated with a reduced likelihood of experiencing unmet medical need among low-skilled workers, and do not appear to be associated with uninsurance. These findings appear to refute the suggestion that minimum wage laws have detrimental effects on access to health care, as opponents of the policies have suggested.
NASA Technical Reports Server (NTRS)
Allton, J. H.; Bevill, T. J.
2003-01-01
The strategy of raking rock fragments from the lunar regolith as a means of acquiring representative samples has wide support due to science return, spacecraft simplicity (reliability) and economy [3, 4, 5]. While there exists widespread agreement that raking or sieving the bulk regolith is good strategy, there is lively discussion about the minimum sample size. Advocates of consor-tium studies desire fragments large enough to support petrologic and isotopic studies. Fragments from 5 to 10 mm are thought adequate [4, 5]. Yet, Jolliff et al. [6] demonstrated use of 2-4 mm fragments as repre-sentative of larger rocks. Here we make use of cura-torial records and sample catalogs to give a different perspective on minimum sample size for a robotic sample collector.
Al-Waili, Badria R; Al-Thawadi, Sahar; Hajjar, Sami Al
2013-01-01
In January 2008, the Clinical Laboratory Standard Institute (CLSI) revised the Streptococcus pneumoniae breakpoints for penicillin to define the susceptibility of meningeal and non-meningeal isolates. We studied the impact of these changes. In addition, the pneumococcal resistance rate to other antimicrobial agents was reviewed. Laboratory data on peumococcal isolates collected retrospectively from hospitalized children in tertiary care hospital in Riyadh, Saudi Arabia from January 2006 to March 2012. Only sterile samples were included from cerebrospinal fluids, blood, sterile body fluids and surgical tissue. Other samples such as sputum and non sterile samples were excluded. We included samples from children 14 years old or younger. The minimum inhibitory concentration (MIC) for penicillin, cefuroxime, ceftriaxone and meropenem were determined by using the E-test, while susceptibility to erythromycin, cotrimoxazole and vancomycin were measured using the disc diffusion methods following the guideline of CLSI. Specimens were analyzed in two different periods: from January 2006 to December 2007 and from January 2008 to March 2012. During the two periods there were 208 samples of which 203 were blood samples. Full penicillin resistance was detected in 6.6% in the first period. There was decrease in penicillin nonmeningeal resistance to 1.5% and an increase in resistance in penicillin meningeal 68.2% in the second period (P=.0001). There was an increase in rate of resistance among S pneumoniae isolates over the two periods to parenteral cefuroxime, erythromycin and cotrimoxazole by 34.6%, 35.5% and 51.9%, respectively. Total meropenem resistance found 4.3% and no vancomycin resistance was detected. The current study supports the use of the revised CLSI susceptibility breakpoints that promote using penicillin to treat nonmeningeal pneumococcal disease, and might slow the development of resistance to broader-spectrum antibiotics.
Elevated temperature ductility of types 304 and 316 stainless steel. [640/sup 0/ to 750/sup 0/C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikka, V. K.
1978-01-01
Austenitic stainless steel types 304 and 316 are known for their high ductility and toughness. However, the present study shows that certain combinations of strain rate and test temperature can result in a significant loss in elevated-temperature ductility. Such a phenomenon is referred to as ductility minimum. The strain rate, below which ductility loss is initiated, decreases with decrease in test temperature. Besides strain rate and temperature, the ductility minimum was also affected by nitrogen content and thermal aging conditions. Thermal aging at 649/sup 0/C was observed to eliminate the ductility minimum at 649/sup 0/C in both types 304 andmore » 316 stainless steel. Such an aging treatment resulted in a higher ductility than the unaged value. Aging at 593/sup 0/C still resulted in some loss in ductility. Current results suggest that ductility-minimum conditions for stainless steel should be considered in design, thermal aging data analysis, and while studying the effects of chemical composition.« less
Conklin, Annalijn I; Ponce, Ninez A; Crespi, Catherine M; Frank, John; Nandi, Arijit; Heymann, Jody
2018-04-01
To examine changes in minimum wage associated with changes in women's weight status. Longitudinal study of legislated minimum wage levels (per month, purchasing power parity-adjusted, 2011 constant US dollar values) linked to anthropometric and sociodemographic data from multiple Demographic and Health Surveys (2000-2014). Separate multilevel models estimated associations of a $10 increase in monthly minimum wage with the rate of change in underweight and obesity, conditioning on individual and country confounders. Post-estimation analysis computed predicted mean probabilities of being underweight or obese associated with higher levels of minimum wage at study start and end. Twenty-four low-income countries. Adult non-pregnant women (n 150 796). Higher minimum wages were associated (OR; 95 % CI) with reduced underweight in women (0·986; 0·977, 0·995); a decrease that accelerated over time (P-interaction=0·025). Increasing minimum wage was associated with higher obesity (1·019; 1·008, 1·030), but did not alter the rate of increase in obesity prevalence (P-interaction=0·8). A $10 rise in monthly minimum wage was associated (prevalence difference; 95 % CI) with an average decrease of about 0·14 percentage points (-0·14; -0·23, -0·05) for underweight and an increase of about 0·1 percentage points (0·12; 0·04, 0·20) for obesity. The present longitudinal multi-country study showed that a $10 rise in monthly minimum wage significantly accelerated the decline in women's underweight prevalence, but had no association with the pace of growth in obesity prevalence. Thus, modest rises in minimum wage may be beneficial for addressing the protracted underweight problem in poor countries, especially South Asia and parts of Africa.
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
Belkahia, Hanène; Ben Said, Mourad; El Mabrouk, Narjesse; Saidani, Mariem; Cherni, Chayma; Ben Hassen, Mariem; Bouattour, Ali; Messadi, Lilia
2017-09-01
In cattle, anaplasmosis is a tick-borne rickettsial disease caused by Anaplasma marginale, A. centrale, A. phagocytophilum, and A. bovis. To date, no information concerning the seasonal dynamics of single and/or mixed infections by different Anaplasma species in bovines are available in Tunisia. In this work, a total of 1035 blood bovine samples were collected in spring (n=367), summer (n=248), autumn (n=244) and winter (n=176) from five different governorates belonging to three bioclimatic zones from the North of Tunisia. Molecular survey of A. marginale, A. centrale and A. bovis in cattle showed that average prevalence rates were 4.7% (minimum 4.1% in autumn and maximum 5.6% in summer), 7% (minimum 3.9% in winter and maximum 10.7% in autumn) and 4.9% (minimum 2.7% in spring and maximum 7.3% in summer), respectively. A. phagocytophilum was not detected in all investigated cattle. Seasonal variations of Anaplasma spp. infection and co-infection rates in overall and/or according to each bioclimatic area were recorded. Molecular characterization of A. marginale msp4 gene indicated a high sequence homology of revealed strains with A. marginale sequences from African countries. Alignment of 16S rRNA A. centrale sequences showed that Tunisian strains were identical to the vaccine strain from several sub-Saharan African and European countries. The comparison of the 16S rRNA sequences of A. bovis variants showed a perfect homology between Tunisian variants isolated from cattle, goats and sheep. These present data are essential to estimate the risk of bovine anaplasmosis in order to develop integrated control policies against multi-species pathogen communities, infecting humans and different animal species, in the country. Copyright © 2017 Elsevier B.V. All rights reserved.
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected significantly higher number of thrips larvae compared with the visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, the sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (ie directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error-rule was more likely to choose the correct model than were the other tree-selection rules 1) with weaker relationships and equally important explanatory variables; and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
The Effect of Minimum Wages on the Labor Force Participation Rates of Teenagers.
ERIC Educational Resources Information Center
Wessels, Walter J.
In light of pressure on Congress to raise the minimum wage from $5.15 to $6.15 per hour, a study looked at the effects such a raise would have on more than 10 million workers, many of them teenagers. The study used quarterly data on the labor force participation rates of teenagers from 1978 through 1999 and other studies to assess the effects of…
Aeroacoustic and aerodynamic applications of the theory of nonequilibrium thermodynamics
NASA Technical Reports Server (NTRS)
Horne, W. Clifton; Smith, Charles A.; Karamcheti, Krishnamurty
1991-01-01
Recent developments in the field of nonequilibrium thermodynamics associated with viscous flows are examined and related to developments to the understanding of specific phenomena in aerodynamics and aeroacoustics. A key element of the nonequilibrium theory is the principle of minimum entropy production rate for steady dissipative processes near equilibrium, and variational calculus is used to apply this principle to several examples of viscous flow. A review of nonequilibrium thermodynamics and its role in fluid motion are presented. Several formulations are presented of the local entropy production rate and the local energy dissipation rate, two quantities that are of central importance to the theory. These expressions and the principle of minimum entropy production rate for steady viscous flows are used to identify parallel-wall channel flow and irrotational flow as having minimally dissipative velocity distributions. Features of irrotational, steady, viscous flow near an airfoil, such as the effect of trailing-edge radius on circulation, are also found to be compatible with the minimum principle. Finally, the minimum principle is used to interpret the stability of infinitesimal and finite amplitude disturbances in an initially laminar, parallel shear flow, with results that are consistent with experiment and linearized hydrodynamic stability theory. These results suggest that a thermodynamic approach may be useful in unifying the understanding of many diverse phenomena in aerodynamics and aeroacoustics.
Rural drinking water at supply and household levels: quality and management.
Hoque, Bilqis A; Hallman, Kelly; Levy, Jason; Bouis, Howarth; Ali, Nahid; Khan, Feroze; Khanam, Sufia; Kabir, Mamun; Hossain, Sanower; Shah Alam, Mohammad
2006-09-01
Access to safe drinking water has been an important national goal in Bangladesh and other developing countries. While Bangladesh has almost achieved accepted bacteriological drinking water standards for water supply, high rates of diarrheal disease morbidity indicate that pathogen transmission continues through water supply chain (and other modes). This paper investigates the association between water quality and selected management practices by users at both the supply and household levels in rural Bangladesh. Two hundred and seventy tube-well water samples and 300 water samples from household storage containers were tested for fecal coliform (FC) concentrations over three surveys (during different seasons). The tube-well water samples were tested for arsenic concentration during the first survey. Overall, the FC was low (the median value ranged from 0 to 4 cfu/100ml) in water at the supply point (tube-well water samples) but significantly higher in water samples stored in households. At the supply point, 61% of tube-well water samples met the Bangladesh and WHO standards of FC; however, only 37% of stored water samples met the standards during the first survey. When arsenic contamination was also taken into account, only 52% of the samples met both the minimum microbiological and arsenic content standards of safety. The contamination rate for water samples from covered household storage containers was significantly lower than that of uncovered containers. The rate of water contamination in storage containers was highest during the February-May period. It is shown that safe drinking water was achieved by a combination of a protected and high quality source at the initial point and maintaining quality from the initial supply (source) point through to final consumption. It is recommended that the government and other relevant actors in Bangladesh establish a comprehensive drinking water system that integrates water supply, quality, handling and related educational programs in order to ensure the safety of drinking water supplies.
Discriminant WSRC for Large-Scale Plant Species Recognition.
Zhang, Shanwen; Zhang, Chuanlei; Zhu, Yihai; You, Zhuhong
2017-01-01
In sparse representation based classification (SRC) and weighted SRC (WSRC), it is time-consuming to solve the global sparse representation problem. A discriminant WSRC (DWSRC) is proposed for large-scale plant species recognition, including two stages. Firstly, several subdictionaries are constructed by dividing the dataset into several similar classes, and a subdictionary is chosen by the maximum similarity between the test sample and the typical sample of each similar class. Secondly, the weighted sparse representation of the test image is calculated with respect to the chosen subdictionary, and then the leaf category is assigned through the minimum reconstruction error. Different from the traditional SRC and its improved approaches, we sparsely represent the test sample on a subdictionary whose base elements are the training samples of the selected similar class, instead of using the generic overcomplete dictionary on the entire training samples. Thus, the complexity to solving the sparse representation problem is reduced. Moreover, DWSRC is adapted to newly added leaf species without rebuilding the dictionary. Experimental results on the ICL plant leaf database show that the method has low computational complexity and high recognition rate and can be clearly interpreted.
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
UNCLASSIFIED/UNLIMITED 13. SUPPLEMENTARY NOTES 14. ABSTRACT MMSE (minimum mean- square error) target sample estimation using non-orthogonal basis...orthogonal, they can still be used in a minimum mean‐ square error (MMSE) estimator that models the object echo as a weighted sum of the multi‐aspect basis...problem. 3 Introduction Minimum mean‐ square error (MMSE) estimation is applied to target imaging with synthetic aperture
Reducing tobacco use and access through strengthened minimum price laws.
McLaughlin, Ian; Pearson, Anne; Laird-Metke, Elisa; Ribisl, Kurt
2014-10-01
Higher prices reduce consumption and initiation of tobacco products. A minimum price law that establishes a high statutory minimum price and prohibits the industry's discounting tactics for tobacco products is a promising pricing strategy as an alternative to excise tax increases. Although some states have adopted minimum price laws on the basis of statutorily defined price "markups" over the invoice price, existing state laws have been largely ineffective at increasing the retail price. We analyzed 3 new variations of minimum price laws that hold great potential for raising tobacco prices and reducing consumption: (1) a flat rate minimum price law similar to a recent enactment in New York City, (2) an enhanced markup law, and (3) a law that incorporates both elements.
Modeling creep deformation of a two-phase TiAI/Ti3Al alloy with a lamellar microstructure
NASA Astrophysics Data System (ADS)
Bartholomeusz, Michael F.; Wert, John A.
1994-10-01
A two-phase TiAl/Ti3Al alloy with a lamellar microstructure has been previously shown to exhibit a lower minimum creep rate than the minimum creep rates of the constituent TiAl and Ti3Al single-phase alloys. Fiducial-line experiments described in the present article demonstrate that the creep rates of the constituent phases within the two-phase TiAl/Ti3Al lamellar alloy tested in compression are more than an order of magnitude lower than the creep rates of single-phase TiAl and Ti3Al alloys tested in compression at the same stress and temperature. Additionally, the fiducial-line experiments show that no interfacial sliding of the phases in the TiAl/Ti3Al lamellar alloy occurs during creep. The lower creep rate of the lamellar alloy is attributed to enhanced hardening of the constituent phases within the lamellar microstructure. A composite-strength model has been formulated to predict the creep rate of the lamellar alloy, taking into account the lower creep rates of the constituent phases within the lamellar micro-structure. Application of the model yields a very good correlation between predicted and experimentally observed minimum creep rates over moderate stress and temperature ranges.
Elango, Rajavel; Humayun, Mohammad A; Turner, Justine M; Rafii, Mahroukh; Langos, Veronika; Ball, Ronald O; Pencharz, Paul B
2017-10-01
Background: The total sulfur amino acid (TSAA) and minimum Met requirements have been previously determined in healthy children. TSAA metabolism is altered in kidney disease. Whether TSAA requirements are altered in children with chronic renal insufficiency (CRI) is unknown. Objective: We sought to determine the TSAA (Met in the absence of Cys) requirements and minimum Met (in the presence of excess Cys) requirements in children with CRI. Methods: Five children (4 boys, 1 girl) aged 10 ± 2.6 y with CRI were randomly assigned to receive graded intakes of Met (0, 5, 10, 15, 25, and 35 mg · kg -1 · d -1 ) with no Cys in the diet. Four of the children (3 boys, 1 girl) were then randomly assigned to receive graded dietary intakes of Met (0, 2.5, 5, 7.5, 10, and 15 mg · kg -1 · d -1 ) with 21 mg · kg -1 · d -1 Cys. The mean TSAA and minimum Met requirements were determined by measuring the oxidation of l-[1- 13 C]Phe to 13 CO 2 (F 13 CO 2 ). A 2-phase linear-regression crossover analysis of the F 13 CO 2 data identified a breakpoint at minimal F 13 CO 2 Urine samples collected from all study days and from previous studies of healthy children were measured for sulfur metabolites. Results: The mean and population-safe (upper 95% CI) intakes of TSAA and minimum Met in children with CRI were determined to be 12.6 and 15.9 mg · kg -1 · d -1 and 7.3 and 10.9 mg · kg -1 · d -1 , respectively. In healthy school-aged children the mean and upper 95% CI intakes of TSAA and minimum Met were determined to be 12.9 and 17.2 mg · kg -1 · d -1 and 5.8 and 7.3 mg · kg -1 · d -1 , respectively. A comparison of the minimum Met requirements between healthy children and children with CRI indicated significant ( P < 0.05) differences. Conclusion: These results suggest that children with CRI have a similar mean and population-safe TSAA to that of healthy children, suggesting adequate Cys synthesis via transsulfuration, but higher minimum Met requirement, suggesting reduced remethylation rates. © 2017 American Society for Nutrition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutton, Spencer M.; Fisk, William J.
For a stand-alone retail building, a primary school, and a secondary school in each of the 16 California climate zones, the EnergyPlus building energy simulation model was used to estimate how minimum mechanical ventilation rates (VRs) affect energy use and indoor air concentrations of an indoor-generated contaminant. The modeling indicates large changes in heating energy use, but only moderate changes in total building energy use, as minimum VRs in the retail building are changed. For example, predicted state-wide heating energy consumption in the retail building decreases by more than 50% and total building energy consumption decreases by approximately 10% asmore » the minimum VR decreases from the Title 24 requirement to no mechanical ventilation. The primary and secondary schools have notably higher internal heat gains than in the retail building models, resulting in significantly reduced demand for heating. The school heating energy use was correspondingly less sensitive to changes in the minimum VR. The modeling indicates that minimum VRs influence HVAC energy and total energy use in schools by only a few percent. For both the retail building and the school buildings, minimum VRs substantially affected the predicted annual-average indoor concentrations of an indoor generated contaminant, with larger effects in schools. The shape of the curves relating contaminant concentrations with VRs illustrate the importance of avoiding particularly low VRs.« less
Shrinkage Estimators for a Composite Measure of Quality Conceptualized as a Formative Construct
Shwartz, Michael; Peköz, Erol A; Christiansen, Cindy L; Burgess, James F; Berlowitz, Dan
2013-01-01
Objective To demonstrate the value of shrinkage estimators when calculating a composite quality measure as the weighted average of a set of individual quality indicators. Data Sources Rates of 28 quality indicators (QIs) calculated from the minimum dataset from residents of 112 Veterans Health Administration nursing homes in fiscal years 2005–2008. Study Design We compared composite scores calculated from the 28 QIs using both observed rates and shrunken rates derived from a Bayesian multivariate normal-binomial model. Principal Findings Shrunken-rate composite scores, because they take into account unreliability of estimates from small samples and the correlation among QIs, have more intuitive appeal than observed-rate composite scores. Facilities can be profiled based on more policy-relevant measures than point estimates of composite scores, and interval estimates can be calculated without assuming the QIs are independent. Usually, shrunken-rate composite scores in 1 year are better able to predict the observed total number of QI events or the observed-rate composite scores in the following year than the initial year observed-rate composite scores. Conclusion Shrinkage estimators can be useful when a composite measure is conceptualized as a formative construct. PMID:22716650
Kubota, Yoshihisa; Takahashi, Hiroyuki; Watanabe, Yoshito; Fuma, Shoichi; Kawaguchi, Isao; Aoki, Masanari; Kubota, Masahide; Furuhata, Yoshiaki; Shigemura, Yusaku; Yamada, Fumio; Ishikawa, Takahiro; Obara, Satoshi; Yoshida, Satoshi
2015-04-01
The dose rates of radiation absorbed by wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi Nuclear Power Plant accident were estimated. The large Japanese field mouse (Apodemus speciosus), also called the wood mouse, was the major rodent species captured in the sampling area, although other species of rodents, such as small field mice (Apodemus argenteus) and Japanese grass voles (Microtus montebelli), were also collected. The external exposure of rodents calculated from the activity concentrations of radiocesium ((134)Cs and (137)Cs) in litter and soil samples using the ERICA (Environmental Risk from Ionizing Contaminants: Assessment and Management) tool under the assumption that radionuclides existed as the infinite plane isotropic source was almost the same as those measured directly with glass dosimeters embedded in rodent abdomens. Our findings suggest that the ERICA tool is useful for estimating external dose rates to small animals inhabiting forest floors; however, the estimated dose rates showed large standard deviations. This could be an indication of the inhomogeneous distribution of radionuclides in the sampled litter and soil. There was a 50-fold difference between minimum and maximum whole-body activity concentrations measured in rodents at the time of capture. The radionuclides retained in rodents after capture decreased exponentially over time. Regression equations indicated that the biological half-life of radiocesium after capture was 3.31 d. At the time of capture, the lowest activity concentration was measured in the lung and was approximately half of the highest concentration measured in the mixture of muscle and bone. The average internal absorbed dose rate was markedly smaller than the average external dose rate (<10% of the total absorbed dose rate). The average total absorbed dose rate to wild rodents inhabiting the sampling area was estimated to be approximately 52 μGy h(-1) (1.2 mGy d(-1)), even 3 years after the accident. This dose rate exceeds 0.1-1 mGy d(-1) derived consideration reference level for Reference rat proposed by the International Commission on Radiological Protection (ICRP). Copyright © 2015 Elsevier Ltd. All rights reserved.
Orthopaedic surgery in natural disaster and conflict settings: how can quality care be ensured?
Alvarado, Oscar; Trelles, Miguel; Tayler-Smith, Katie; Joseph, Holdine; Gesline, Rodné; Wilna, Thélusma Eli; Mohammad Omar, Mohammad Karim; Faiz Mohammad, Niaz Mohammad; Muhima Mastaki, John; Chingumwa Buhu, Richard; Caluwaerts, An; Dominguez, Lynette
2015-10-01
Médecins sans Frontières (MSF) is one of the main providers of orthopaedic surgery in natural disaster and conflict settings and strictly imposes a minimum set of context-specific standards before any surgery can be performed. Based on MSF's experience of performing orthopaedic surgery in a number of such settings, we describe: (a) whether it was possible to implement the minimum standards for one of the more rigorous orthopaedic procedures--internal fixation--and when possible, the time frame, (b) the volume and type of interventions performed and (c) the intra-operative mortality rates and postoperative infection rates. We conducted a retrospective review of routine programme data collected between 2007 and 2014 from three MSF emergency surgical interventions in Haiti (following the 2010 earthquake) and three ongoing MSF projects in Kunduz (Afghanistan), Masisi (Democratic Republic of the Congo) and Tabarre (Haiti). The minimum standards for internal fixation were achieved in one emergency intervention site in Haiti, and in Kunduz and Tabarre, taking up to 18 months to implement in Kunduz. All sites achieved the minimum standards to perform amputations, reductions and external fixations, with a total of 9,409 orthopaedic procedures performed during the study period. Intraoperative mortality rates ranged from 0.6 to 1.9 % and postoperative infection rates from 2.4 to 3.5 %. In settings affected by natural disaster or conflict, a high volume and wide repertoire of orthopaedic surgical procedures can be performed with good outcomes when minimum standards are in place. More demanding procedures like internal fixation may not always be feasible.
Automated storm water sampling on small watersheds
Harmel, R.D.; King, K.W.; Slade, R.M.
2003-01-01
Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including: setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in selection of an appropriate sampling strategy based on user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
Mann, L.J.
1989-01-01
Concern has been expressed that some of the approximately 30,900 curies of tritium disposed to the Snake River Plain aquifer from 1952 to 1988 at the INEL (Idaho National Engineering Laboratory) have migrated to springs discharging to the Snake River in the Twin Falls-Hagerman area. To document tritium concentrations in springflow, 17 springs were sampled in November 1988 and 19 springs were sampled in March 1989. Tritium concentrations were less than the minimum detectable concentration of 0.5 pCi/mL (picocuries/mL) in November 1988 and less than the minimum detectable concentration of 0.2 pCi/mL in March 1989; the minimum detectable concentration was smaller in March 1989 owing to a longer counting time in the liquid scintillation system. The maximum contaminant level of tritium in drinking water as established by the U.S. Environmental Protection Agency is 20 pCi/mL. U.S. Environmental Protection Agency sample analyses indicate that the tritium concentration has decreased in the Snake River near Buhl since the 1970's. In 1974-79, tritium concentrations were less than 0.3 +/-0.2 pCi/mL in 3 of 20 samples; in 1983-88, 17 of 23 samples contained less than 0.3 +/-0.2 pCi/mL of tritium; the minimum detectable concentration is 0.2 pCi/mL. On the basis of decreasing tritium concentrations in the Snake River, their correlation to cessation of atmospheric weapons tests tritium concentrations in springflow less than the minimum detectable concentration, and the distribution of tritium in groundwater at the INEL, aqueous disposal of tritium at the INEL has had no measurable effect on tritium concentrations in springflow from the Snake River Plain aquifer and in the Snake River near Buhl. (USGS)
Studies on droplet evaporation and combustion in high pressures
NASA Technical Reports Server (NTRS)
Sato, J.
1993-01-01
High pressure droplet evaporation and combustion have been studied up to 15 MPa under normal and microgravity fields. From the evaporation studies, it has been found that in the supercritical environments, the droplet evaporation rate and lifetime take a maximum and a minimum at an ambient pressure over the critical pressure. Its maximum and minimum points move toward the lower ambient pressures if the ambient temperature is increased. It has been found from the combustion studies that the burning life time takes a minimum at an ambient pressure being equal to the critical pressure. It is attributable to both the pressure dependency of the diffusion rate and the droplet evaporation characteristics described above.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
A multi-purpose readout electronics for CdTe and CZT detectors for x-ray imaging applications
NASA Astrophysics Data System (ADS)
Yue, X. B.; Deng, Z.; Xing, Y. X.; Liu, Y. N.
2017-09-01
A multi-purpose readout electronics based on the DPLMS digital filter has been developed for CdTe and CZT detectors for X-ray imaging applications. Different filter coefficients can be synthesized optimized either for high energy resolution at relatively low counting rate or for high rate photon-counting with reduced energy resolution. The effects of signal width constraints, sampling rate and length were numerical studied by Mento Carlo simulation with simple CRRC shaper input signals. The signal width constraint had minor effect and the ENC was only increased by 6.5% when the signal width was shortened down to 2 τc. The sampling rate and length depended on the characteristic time constants of both input and output signals. For simple CR-RC input signals, the minimum number of the filter coefficients was 12 with 10% increase in ENC when the output time constant was close to the input shaping time. A prototype readout electronics was developed for demonstration, using a previously designed analog front ASIC and a commercial ADC card. Two different DPLMS filters were successfully synthesized and applied for high resolution and high counting rate applications respectively. The readout electronics was also tested with a linear array CdTe detector. The energy resolutions of Am-241 59.5 keV peak were measured to be 6.41% in FWHM for the high resolution filter and to be 13.58% in FWHM for the high counting rate filter with 160 ns signal width constraint.
Solomon, Dagmawit; Aderaw, Zewdie; Tegegne, Teketo Kassaw
2017-10-12
Dietary diversity has long been recognized as a key element of high quality diets. Minimum Dietary Diversity (MDD) is the consumption of four or more food groups from the seven food groups. Globally, only few children are receiving nutritionally adequate and diversified foods. More than two-thirds of malnutrition related child deaths are associated with inappropriate feeding practice during the first two years of life. In Ethiopia, only 7 % of children age 6-23 months had received the minimum acceptable diet. Therefore, the main aim of this study was to determine the level of minimum dietary diversity practice and identify the associated factors among children aged 6-23 months in Addis Ababa, Ethiopia. A health facility based cross sectional study was undertaken in the three sub-cities of Addis Ababa from 26th February to 28th April, 2016. A multi-stage sampling technique was used to sample the 352 study participants or mothers who had children aged 6-23 months. Data were collected by using a structured and pretested questionnaire, cleaned and entered into Epi info 7 and analyzed using SPSS 24 software. Logistic regression was fitted and odds ratio with 95% confidence interval (CI) with p-value less than 0.05 was used to identify factors associated with minimum dietary diversity. In this study, the overall children with minimum dietary diversity score were found to be 59.9%. Mother's educational attainment and a higher household monthly income were positively associated with the minimum dietary diversity practice. Similarly, mothers' knowledge on dietary diversity and child feeding was positively associated with minimum dietary diversity child feeding practice, with an adjusted odds ratio of 1.98 (95% CI: 1.11-3.53). In this study, the consumption of minimum dietary diversity was found to be high. In spite of this, more efforts need to be done to achieve the recommended minimum dietary diversity intake for all children aged between 6 and 23 months.
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... test protocol and the means by which sampling variability and analytical variability were accounted for... also establish the design minimum and average temperature in the combustion zone and the combustion... the design minimum and average temperatures across the catalyst bed inlet and outlet. (C) For a boiler...
40 CFR 761.323 - Sample preparation.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 761.323 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL... Remediation Waste Samples § 761.323 Sample preparation. (a) The comparison study requires analysis of a minimum of 10 samples weighing at least 300 grams each. Samples of PCB remediation waste used in the...
40 CFR 761.323 - Sample preparation.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 761.323 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL... Remediation Waste Samples § 761.323 Sample preparation. (a) The comparison study requires analysis of a minimum of 10 samples weighing at least 300 grams each. Samples of PCB remediation waste used in the...
40 CFR 761.323 - Sample preparation.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 761.323 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) TOXIC SUBSTANCES CONTROL... Remediation Waste Samples § 761.323 Sample preparation. (a) The comparison study requires analysis of a minimum of 10 samples weighing at least 300 grams each. Samples of PCB remediation waste used in the...
Code of Federal Regulations, 2011 CFR
2011-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Code of Federal Regulations, 2010 CFR
2010-07-01
...) (grains per dry standard cubic foot (gr/dscf)) 115 (0.05) 69 (0.03) 34 (0.015) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part 60, or EPA Reference Method...-run average (1-hour minimum sample time per run) EPA Reference Method 10 or 10B of appendix A-4 of...
Determination of minimum suction level necessary for field dental units.
Charlton, David G
2010-04-01
A significant problem with most field dental units is that their suction is too weak to effectively remove debris from the mouth. The purpose of this study was to determine the minimum clinically acceptable suction level for routine dentistry. A vacuum pump was connected to a high-volume dental evacuation line in a simulated clinical setting and different suction airflow rates were evaluated by nine evaluator dentists for their capability to effectively remove amalgam debris and water. Airflow levels were rated as "clinically acceptable" or "clinically unacceptable" by each evaluator. Data were analyzed using a chi2 test for trend. Analysis indicated a significant linear trend between airflow and ratings (p < 0.0001). The first airflow level considered by all evaluators as producing clinically acceptable suction was 4.5 standard cubic feet per minute (0.127 standard cubic meters per minute). This value should be the minimum level required for all military field dental units.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwamoto, A.; Mito, T.; Takahata, K.
Heat transfer of large copper plates (18 x 76 mm) in liquid helium has been measured as a function of orientation and treatment of the heat transfer surface. The results relate to applications of large scale superconductors. In order to clarify the influence of the area where the surface treatment peels off, the authors studied five types of heat transfer surface areas including: (a) 100% polished copper sample, (b) and (c) two 50% oxidized copper samples having different patterns of oxidation, (d) 75% oxidized copper sample, (e) 90% oxidized copper sample, and (f) 100% oxidized copper sample. They observed thatmore » the critical heat flux depends on the heat transfer surface orientation. The critical heat flux is a maximum at angles of 0{degrees} - 30{degrees} and decreases monotonically with increasing angles above 30{degrees}, where the angle is taken in reference to the horizontal axis. On the other hand, the minimum heat flux is less dependent on the surface orientation. More than 75% oxidation on the surface makes the critical heat flux increase. The minimum heat fluxes of the 50 and 90% oxidized Cu samples approximately agree with that of the 100% oxidized Cu sample. Experiments and calculations show that the critical and the minimum heat fluxes are a bilinear function of the fraction of oxidized surface area.« less
The role of the oceanic oxygen minima in generating biodiversity in the deep sea
NASA Astrophysics Data System (ADS)
Rogers, Alex D.
2000-01-01
Many studies on the deep-sea benthic biota have shown that the most species-rich areas lie on the continental margins between 500 and 2500 m, which coincides with the present oxygen-minimum in the world's oceans. Some species have adapted to hypoxic conditions in oxygen-minimum zones, and some can even fulfil all their energy requirements through anaerobic metabolism for at least short periods of time. It is, however, apparent that the geographic and vertical distribution of many species is restricted by the presence of oxygen-minimum zones. Historically, cycles of global warming and cooling have led to periods of expansion and contraction of oxygen-minimum layers throughout the world's oceans. Such shifts in the global distribution of oxygen-minimum zones have presented many opportunities for allopatric speciation in organisms inhabiting slope habitats associated with continental margins, oceanic islands and seamounts. On a smaller scale, oxygen-minimum zones can be seen today as providing a barrier to gene-flow between allopatric populations. Recent studies of the Arabian Sea and in other regions of upwelling also have shown that the presence of an oxygen-minimum layer creates a strong vertical gradient in physical and biological parameters. The reduced utilisation of the downward flux of organic material in the oxygen-minimum zone results in an abundant supply of food for organisms immediately below it. The occupation of this area by species exploiting abundant food supplies may lead to strong vertical gradients in selective pressures for optimal rates of growth, modes of reproduction and development and in other aspects of species biology. The presence of such strong selective gradients may have led to an increase in habitat specialisation in the lower reaches of oxygen-minimum zones and an increased rate of speciation.
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
On-Site Incineration of Contaminated Soil: A Study into U.S. Navy Applications
1991-08-01
venturi scrubber Minimum water flow rate and p1l to absorber Minimum water/alkaline reagent flow to dry scrubber Minimum particulate scrubber blowdown...remove hydrochloric acid and sulfur dioxide from flue gases using, for example, wet scrubbers and limestone adsorption towers, respectively. Modified...Reagent preparation 8) Bllending 26) Fugitive emission control 9) Pretreatment 27) Scrubber liquid cooling 10) Blended and pretreated solid waste
ERIC Educational Resources Information Center
Brandon, Peter D.
The potential effects of raising the minimum wage on the earnings of mothers moving from welfare to work were examined by analyzing the differences that existed in the late 1980s in the various states' minimum wage rates and data from three waves of the Survey of Income and Program Participation for the years 1985-1990 (during which time 13 states…
Zhao, Jinhui; Martin, Gina; Macdonald, Scott; Vallance, Kate; Treno, Andrew; Ponicki, William; Tu, Andrew; Buxton, Jane
2013-01-01
Objectives. We investigated whether periodic increases in minimum alcohol prices were associated with reduced alcohol-attributable hospital admissions in British Columbia. Methods. The longitudinal panel study (2002–2009) incorporated minimum alcohol prices, density of alcohol outlets, and age- and gender-standardized rates of acute, chronic, and 100% alcohol-attributable admissions. We applied mixed-method regression models to data from 89 geographic areas of British Columbia across 32 time periods, adjusting for spatial and temporal autocorrelation, moving average effects, season, and a range of economic and social variables. Results. A 10% increase in the average minimum price of all alcoholic beverages was associated with an 8.95% decrease in acute alcohol-attributable admissions and a 9.22% reduction in chronic alcohol-attributable admissions 2 years later. A Can$ 0.10 increase in average minimum price would prevent 166 acute admissions in the 1st year and 275 chronic admissions 2 years later. We also estimated significant, though smaller, adverse impacts of increased private liquor store density on hospital admission rates for all types of alcohol-attributable admissions. Conclusions. Significant health benefits were observed when minimum alcohol prices in British Columbia were increased. By contrast, adverse health outcomes were associated with an expansion of private liquor stores. PMID:23597383
NASA Astrophysics Data System (ADS)
Webb, N.; Chappell, A.; Van Zee, J.; Toledo, D.; Duniway, M.; Billings, B.; Tedela, N.
2017-12-01
Anthropogenic land use and land cover change (LULCC) influence global rates of wind erosion and dust emission, yet our understanding of the magnitude of the responses remains poor. Field measurements and monitoring provide essential data to resolve aeolian sediment transport patterns and assess the impacts of human land use and management intensity. Data collected in the field are also required for dust model calibration and testing, as models have become the primary tool for assessing LULCC-dust cycle interactions. However, there is considerable uncertainty in estimates of dust emission due to the spatial variability of sediment transport. Field sampling designs are currently rudimentary and considerable opportunities are available to reduce the uncertainty. Establishing the minimum detectable change is critical for measuring spatial and temporal patterns of sediment transport, detecting potential impacts of LULCC and land management, and for quantifying the uncertainty of dust model estimates. Here, we evaluate the effectiveness of common sampling designs (e.g., simple random sampling, systematic sampling) used to measure and monitor aeolian sediment transport rates. Using data from the US National Wind Erosion Research Network across diverse rangeland and cropland cover types, we demonstrate how only large changes in sediment mass flux (of the order 200% to 800%) can be detected when small sample sizes are used, crude sampling designs are implemented, or when the spatial variation is large. We then show how statistical rigour and the straightforward application of a sampling design can reduce the uncertainty and detect change in sediment transport over time and between land use and land cover types.
Uribe-Leitz, Tarsicio; Esquivel, Micaela M; Molina, George; Lipsitz, Stuart R; Verguet, Stéphane; Rose, John; Bickler, Stephen W; Gawande, Atul A; Haynes, Alex B; Weiser, Thomas G
2015-09-01
We previously identified a range of 4344-5028 annual operations per 100,000 people to be related to desirable health outcomes. From this and other evidence, the Lancet Commission on Global Surgery recommends a minimum rate of 5000 operations per 100,000 people. We evaluate rates of growth and estimate the time it will take to reach this minimum surgical rate threshold. We aggregated country-level surgical rate estimates from 2004 to 2012 into the twenty-one Global Burden of Disease (GBD) regions. We calculated mean rates of surgery proportional to population size for each year and assessed the rate of growth over time. We then extrapolated the time it will take each region to reach a surgical rate of 5000 operations per 100,000 population based on linear rates of change. All but two regions experienced growth in their surgical rates during the past 8 years. Fourteen regions did not meet the recommended threshold in 2012. If surgical capacity continues to grow at current rates, seven regions will not meet the threshold by 2035. Eastern Sub-Saharan Africa will not reach the recommended threshold until 2124. The rates of growth in surgical service delivery are exceedingly variable. At current rates of surgical and population growth, 6.2 billion people (73% of the world's population) will be living in countries below the minimum recommended rate of surgical care in 2035. A strategy for strengthening surgical capacity is essential if these targets are to be met in a timely fashion as part of the integrated health system development.
14 CFR 23.1443 - Minimum mass flow of supplemental oxygen.
Code of Federal Regulations, 2010 CFR
2010-01-01
... discretion. (c) If first-aid oxygen equipment is installed, the minimum mass flow of oxygen to each user may... upon an average flow rate of 3 liters per minute per person for whom first-aid oxygen is required. (d...
State Labor Legislation Enacted in 1973
ERIC Educational Resources Information Center
Levy, David A.
1974-01-01
The primary areas considered by State legislatures in 1973 included higher minimum wage rates and broader coverage of minimum wage laws, improved occupational safety, collective bargaining procedures for public employees, elimination of discrimination in employment, and updating of child labor standards. (Author)
Reducing Tobacco Use and Access Through Strengthened Minimum Price Laws
Pearson, Anne; Laird-Metke, Elisa; Ribisl, Kurt
2014-01-01
Higher prices reduce consumption and initiation of tobacco products. A minimum price law that establishes a high statutory minimum price and prohibits the industry’s discounting tactics for tobacco products is a promising pricing strategy as an alternative to excise tax increases. Although some states have adopted minimum price laws on the basis of statutorily defined price “markups” over the invoice price, existing state laws have been largely ineffective at increasing the retail price. We analyzed 3 new variations of minimum price laws that hold great potential for raising tobacco prices and reducing consumption: (1) a flat rate minimum price law similar to a recent enactment in New York City, (2) an enhanced markup law, and (3) a law that incorporates both elements. PMID:25121820
The minimum follow-up required for radial head arthroplasty: a meta-analysis.
Laumonerie, P; Reina, N; Kerezoudis, P; Declaux, S; Tibbo, M E; Bonnevialle, N; Mansat, P
2017-12-01
The primary aim of this study was to define the standard minimum follow-up required to produce a reliable estimate of the rate of re-operation after radial head arthroplasty (RHA). The secondary objective was to define the leading reasons for re-operation. Four electronic databases, between January 2000 and March 2017 were searched. Articles reporting reasons for re-operation (Group I) and results (Group II) after RHA were included. In Group I, a meta-analysis was performed to obtain the standard minimum follow-up, the mean time to re-operation and the reason for failure. In Group II, the minimum follow-up for each study was compared with the standard minimum follow-up. A total of 40 studies were analysed: three were Group I and included 80 implants and 37 were Group II and included 1192 implants. In Group I, the mean time to re-operation was 1.37 years (0 to 11.25), the standard minimum follow-up was 3.25 years; painful loosening was the main indication for re-operation. In Group II, 33 Group II articles (89.2%) reported a minimum follow-up of < 3.25 years. The literature does not provide a reliable estimate of the rate of re-operation after RHA. The reproducibility of results would be improved by using a minimum follow-up of three years combined with a consensus of the definition of the reasons for failure after RHA. Cite this article: Bone Joint J 2017;99-B:1561-70. ©2017 The British Editorial Society of Bone & Joint Surgery.
Regional influences on reconstructed global mean sea level
NASA Astrophysics Data System (ADS)
Natarov, Svetlana I.; Merrifield, Mark A.; Becker, Janet M.; Thompson, Phillip R.
2017-04-01
Reconstructions of global mean sea level (GMSL) based on tide gauge measurements tend to exhibit common multidecadal rate fluctuations over the twentieth century. GMSL rate changes may result from physical drivers, such as changes in radiative forcing or land water storage. Alternatively, these fluctuations may represent artifacts due to sampling limitations inherent in the historical tide gauge network. In particular, a high percentage of tide gauges used in reconstructions, especially prior to the 1950s, are from Europe and North America in the North Atlantic region. Here a GMSL reconstruction based on the reduced space optimal interpolation algorithm is deconstructed, with the contributions of individual tide gauge stations quantified and assessed regionally. It is demonstrated that the North Atlantic region has a disproportionate influence on reconstructed GMSL rate fluctuations prior to the 1950s, notably accounting for a rate minimum in the 1920s and contributing to a rate maximum in the 1950s. North Atlantic coastal sea level fluctuations related to wind-driven ocean volume redistribution likely contribute to these estimated GMSL rate inflections. The findings support previous claims that multidecadal rate changes in GMSL reconstructions are likely related to the geographic distribution of tide gauge stations within a sparse global network.
Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio
2017-01-01
The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.
Essers, Geurt; Kramer, Anneke; Andriesse, Boukje; van Weel, Chris; van der Vleuten, Cees; van Dulmen, Sandra
2013-05-22
Assessment of medical communication performance usually focuses on rating generically applicable, well-defined communication skills. However, in daily practice, communication is determined by (specific) context factors, such as acquaintance with the patient, or the presented problem. Merely valuing the presence of generic skills may not do justice to the doctor's proficiency.Our aim was to perform an exploratory study on how assessment of general practitioner (GP) communication performance changes if context factors are explicitly taken into account. We used a mixed method design to explore how ratings would change. A random sample of 40 everyday GP consultations was used to see if previously identified context factors could be observed again. The sample was rated twice using a widely used assessment instrument (the MAAS-Global), first in the standard way and secondly after context factors were explicitly taken into account, by using a context-specific rating protocol to assess communication performance in the workplace. In between first and second rating, the presence of context factors was established. Item score differences were calculated using paired sample t-tests. In 38 out of 40 consultations, context factors prompted application of the context-specific rating protocol. Mean overall score on the 7-point MAAS-Global scale increased from 2.98 in standard to 3.66 in the context-specific rating (p<0.00); the effect size for the total mean score was 0.84. In earlier research the minimum standard score for adequate communication was set at 3.17. Applying the protocol, the mean overall score rose above the level set in an earlier study for the MAAS-Global scores to represent 'adequate GP communication behaviour'. Our findings indicate that incorporating context factors in communication assessment thus makes a meaningful difference and shows that context factors should be considered as 'signal' instead of 'noise' in GP communication assessment. Explicating context factors leads to a more deliberate and transparent rating of GP communication performance.
Luo, Yong; Wu, Dapeng; Zeng, Shaojiang; Gai, Hongwei; Long, Zhicheng; Shen, Zheng; Dai, Zhongpeng; Qin, Jianhua; Lin, Bingcheng
2006-09-01
A novel sample injection method for chip CE was presented. This injection method uses hydrostatic pressure, generated by emptying the sample waste reservoir, for sample loading and electrokinetic force for dispensing. The injection was performed on a double-cross microchip. One cross, created by the sample and separation channels, is used for formation of a sample plug. Another cross, formed by the sample and controlling channels, is used for plug control. By varying the electric field in the controlling channel, the sample plug volume can be linearly adjusted. Hydrostatic pressure takes advantage of its ease of generation on a microfluidic chip, without any electrode or external pressure pump, thus allowing a sample injection with a minimum number of electrodes. The potential of this injection method was demonstrated by a four-separation-channel chip CE system. In this system, parallel sample separation can be achieved with only two electrodes, which is otherwise impossible with conventional injection methods. Hydrostatic pressure maintains the sample composition during the sample loading, allowing the injection to be free of injection bias.
Beman, J Michael; Leilei Shih, Joy; Popp, Brian N
2013-01-01
Nitrogen (N) is an essential nutrient in the sea and its distribution is controlled by microorganisms. Within the N cycle, nitrite (NO2−) has a central role because its intermediate redox state allows both oxidation and reduction, and so it may be used by several coupled and/or competing microbial processes. In the upper water column and oxygen minimum zone (OMZ) of the eastern tropical North Pacific Ocean (ETNP), we investigated aerobic NO2− oxidation, and its relationship to ammonia (NH3) oxidation, using rate measurements, quantification of NO2−-oxidizing bacteria via quantitative PCR (QPCR), and pyrosequencing. 15NO2− oxidation rates typically exhibited two subsurface maxima at six stations sampled: one located below the euphotic zone and beneath NH3 oxidation rate maxima, and another within the OMZ. 15NO2− oxidation rates were highest where dissolved oxygen concentrations were <5 μM, where NO2− accumulated, and when nitrate (NO3−) reductase genes were expressed; they are likely sustained by NO3− reduction at these depths. QPCR and pyrosequencing data were strongly correlated (r2=0.79), and indicated that Nitrospina bacteria numbered up to 9.25% of bacterial communities. Different Nitrospina groups were distributed across different depth ranges, suggesting significant ecological diversity within Nitrospina as a whole. Across the data set, 15NO2− oxidation rates were decoupled from 15NH4+ oxidation rates, but correlated with Nitrospina (r2=0.246, P<0.05) and NO2− concentrations (r2=0.276, P<0.05). Our findings suggest that Nitrospina have a quantitatively important role in NO2− oxidation and N cycling in the ETNP, and provide new insight into their ecology and interactions with other N-cycling processes in this biogeochemically important region of the ocean. PMID:23804152
Pesticide data for selected Wyoming streams, 1976-78
Butler, David L.
1987-01-01
In 1976, the U.S. Geological Survey, in cooperation with the Wyoming Department of Agriculture, started a monitoring program to determine pesticide concentrations in Wyoming streams. This program was incorporated into the water-quality data-collection system already in operation. Samples were collected at 20 sites for analysis of various insecticides, herbicides, polychlorinated biphenyls, and polychlorinated napthalenes.\\The results through 1978 revealed small concentrations of pesticides in water and bottom-material samples were DDE (39 percent of the concentrations equal to or greater than the minimum reported concentrations of the analytical methods), DDD (20 percent), dieldrin (21 percent), and polychlorinated biphenyls (29 percent). The herbicides most commonly found in water samples were 2,4-D (29 percent of the concentrations equal to or greater than the minimum reported concentrations of the analytical method) and picloram (23 percent). Most concentrations were significantly less than concentrations thought to be harmful to freshwater aquatic life based on available toxicity data. However for some pesticides, U.S. Environmental Protection Agency water-quality criteria for freshwater aquatic life are based on bioaccumulation factors that result in criteria concentrations less than the minimum reported concentrations of the analytical methods. It is not known if certain pesticides were present at concentrations less than the minimum reported concentrations that exceeded these criteria.
Davidson, C A; Griffith, C J; Peters, A C; Fielding, L M
1999-01-01
The minimum bacterial detection limits and operator reproducibility of the Biotrace Clean-Tracetrade mark Rapid Cleanliness Test and traditional hygiene swabbing were determined. Areas (100 cm2) of food grade stainless steel were separately inoculated with known levels of Staphylococcus aureus (NCTC 6571) and Escherichia coli (ATCC 25922). Surfaces were sampled either immediately after inoculation while still wet, or after 60 min when completely dry. For both organisms the minimum detection limit of the ATP Clean-Tracetrade mark Rapid Cleanliness Test was 10(4) cfu/100 cm2 (p < 0.05) and was the same for wet and dry surfaces. Both organism type and surface status (i.e. wet or dry) influenced the minimum detection limits of hygiene swabbing, which ranged from 10(2) cfu/100 cm2 to >10(7) cfu/100 cm2. Hygiene swabbing percentage recovery rates for both organisms were less than 0.1% for dried surfaces but ranged from 0.33% to 8.8% for wet surfaces. When assessed by six technically qualified operators, the Biotrace Clean-Tracetrade mark Rapid Cleanliness Test gave superior reproducibility for both clean and inoculated surfaces, giving mean coefficients of variation of 24% and 32%, respectively. Hygiene swabbing of inoculated surfaces gave a mean CV of 130%. The results are discussed in the context of hygiene monitoring within the food industry. Copyright 1999 John Wiley & Sons, Ltd.
The effects of ketamine on the minimum alveolar concentration of isoflurane in cats.
Pascoe, Peter J; Ilkiw, Janet E; Craig, Carolyn; Kollias-Baker, Cynthia
2007-01-01
To determine the minimum alveolar concentration (MAC) of isoflurane during the infusion of ketamine. Prospective, experimental trial. Twelve adult spayed female cats weighing 5.1 +/- 0.9 kg. Six cats were anesthetized with isoflurane in oxygen, intubated and attached to a circle-breathing system with mechanical ventilation. Catheters were placed in a peripheral vein for the infusion of fluids and ketamine, and the jugular vein for blood sampling for the measurement of ketamine concentrations. An arterial catheter was placed to allow blood pressure measurement and sampling for the measurement of PaCO2, PaO2 and pH. PaCO2 was maintained between 29 and 41 mmHg (3.9-5.5 kPa) and body temperature was kept between 37.8 and 39.3 degrees C. Following instrumentation, the MAC of isoflurane was determined in triplicate using a tail clamp method. A loading dose (2 mg kg(-1) over 5 minutes) and an infusion (23 microg kg(-1) minute(-1)) of ketamine was started and MAC was redetermined starting 30 minutes later. Two further loading doses and infusions were used, 2 mg kg(-1) and 6 mg kg(-1) with 46 and 115 microg kg(-1) minute(-1), respectively and MAC was redetermined. Cardiopulmonary measurements were taken before application of the noxious stimulus. The second group of six cats was used for the measurement of steady state plasma ketamine concentrations at each of the three infusion rates used in the initial study and the appropriate MAC value determined from the first study. The MAC decreased by 45 +/- 17%, 63 +/- 18%, and 75 +/- 17% at the infusion rates of 23, 46, and 115 microg kg(-1) minute(-1). These infusion rates corresponded to ketamine plasma concentrations of 1.75 +/- 0.21, 2.69 +/- 0.40, and 5.36 +/- 1.19 microg mL(-1). Arterial blood pressure and heart rate increased significantly with ketamine. Recovery was protracted. The MAC of isoflurane was significantly decreased by an infusion of ketamine and this was accompanied by an increase in heart rate and blood pressure. Because of the prolonged recovery in our cats, further work needs to be performed before using this in patients.
On a thermonuclear origin for the 1980-81 deep light minimum of the symbiotic nova PU Vul
NASA Technical Reports Server (NTRS)
Sion, Edward M.
1993-01-01
The puzzling 1980-81 deep light minimum of the symbiotic nova PU Vul is discussed in terms of a sequence of quasi-static evolutionary models of a hot, 0.5 solar mass white dwarf accreting H-rich matter at a rate 1 x 10 exp -8 solar mass/yr. On the basis of the morphological behavior of the models, it is suggested that the deep light minimum of PU Vul could have been the result of two successive, closely spaced, hydrogen shell flashes on an accreting white dwarf whose core thermal structure and accreted H-rich envelope was not in a long-term thermal 'cycle-averaged' steady state with the rate of accretion.
40 CFR 761.323 - Sample preparation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Remediation Waste Samples § 761.323 Sample preparation. (a) The comparison study requires analysis of a minimum of 10 samples weighing at least 300 grams each. Samples of PCB remediation waste used in the... PCB remediation waste at the cleanup site, or must be the same kind of material as that waste. For...
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operation requirements as follows: (i) All COMS shall complete a minimum of one cycle of sampling and analyzing for each successive 10-second period and one cycle of data recording for each successive 6-minute period. (ii) All CEMS for measuring emissions other than opacity shall complete a minimum of one cycle of...
30 CFR 7.308 - Lockwasher equivalency test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... hole and continuously and uniformly tightened at a speed not to exceed 30 rpm until the fastening's... cycles. (b) Acceptable performance. The minimum torque value required to start removal of the fastening from the installed position (minimum breakway torque) for any cycle of any test sample shall be greater...
30 CFR 7.308 - Lockwasher equivalency test.
Code of Federal Regulations, 2011 CFR
2011-07-01
... hole and continuously and uniformly tightened at a speed not to exceed 30 rpm until the fastening's... cycles. (b) Acceptable performance. The minimum torque value required to start removal of the fastening from the installed position (minimum breakway torque) for any cycle of any test sample shall be greater...
30 CFR 7.308 - Lockwasher equivalency test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... hole and continuously and uniformly tightened at a speed not to exceed 30 rpm until the fastening's... cycles. (b) Acceptable performance. The minimum torque value required to start removal of the fastening from the installed position (minimum breakway torque) for any cycle of any test sample shall be greater...
30 CFR 7.308 - Lockwasher equivalency test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... hole and continuously and uniformly tightened at a speed not to exceed 30 rpm until the fastening's... cycles. (b) Acceptable performance. The minimum torque value required to start removal of the fastening from the installed position (minimum breakway torque) for any cycle of any test sample shall be greater...
30 CFR 7.308 - Lockwasher equivalency test.
Code of Federal Regulations, 2010 CFR
2010-07-01
... hole and continuously and uniformly tightened at a speed not to exceed 30 rpm until the fastening's... cycles. (b) Acceptable performance. The minimum torque value required to start removal of the fastening from the installed position (minimum breakway torque) for any cycle of any test sample shall be greater...
40 CFR 1065.546 - Validation of minimum dilution ratio for PM batch sampling.
Code of Federal Regulations, 2010 CFR
2010-07-01
... flows and/or tracer gas concentrations for transient and ramped modal cycles to validate the minimum... mode-average values instead of continuous measurements for discrete mode steady-state duty cycles... molar flow data. This involves determination of at least two of the following three quantities: Raw...
Teenagers and the Minimum Wage in Retail Trade
ERIC Educational Resources Information Center
Cotterill, Philip G.; Wadycki, Walter J.
1976-01-01
The impact of minimum wage policy on the hiring of teenagers in relation to adult laborers in retail trade has been assessed through analysis of a study sample of 353 male and 391 female retail trade employees who were part of the 1967 Survey of Economic Opportunity. (LH)
Biver, Marc; Filella, Montserrat
2016-05-03
The toxicity of Cd being well established and that of Te suspected, the bulk, surface-normalized steady-state dissolution rates of two industrially important binary tellurides-polycrystalline cadmium and bismuth tellurides- were studied over the pH range 3-11, at various temperatures (25-70 °C) and dissolved oxygen concentrations (0-100% O2 in the gas phase). The behavior of both tellurides is strikingly different. The dissolution rates of CdTe monotonically decreased with increasing pH, the trend becoming more pronounced with increasing temperature. Activation energies were of the order of magnitude associated with surface controlled processes; they decreased with decreasing acidity. At pH 7, the CdTe dissolution rate increased linearly with dissolved oxygen. In anoxic solution, CdTe dissolved at a finite rate. In contrast, the dissolution rate of Bi2Te3 passed through a minimum at pH 5.3. The activation energy had a maximum in the rate minimum at pH 5.3 and fell below the threshold for diffusion control at pH 11. No oxygen dependence was detected. Bi2Te3 dissolves much more slowly than CdTe; from one to more than 3.5 orders of magnitude in the Bi2Te3 rate minimum. Both will readily dissolve under long-term landfill deposition conditions but comparatively slowly.
Modeling the Atmospheric Phase Effects of a Digital Antenna Array Communications System
NASA Technical Reports Server (NTRS)
Tkacenko, A.
2006-01-01
In an antenna array system such as that used in the Deep Space Network (DSN) for satellite communication, it is often necessary to account for the effects due to the atmosphere. Typically, the atmosphere induces amplitude and phase fluctuations on the transmitted downlink signal that invalidate the assumed stationarity of the signal model. The degree to which these perturbations affect the stationarity of the model depends both on parameters of the atmosphere, including wind speed and turbulence strength, and on parameters of the communication system, such as the sampling rate used. In this article, we focus on modeling the atmospheric phase fluctuations in a digital antenna array communications system. Based on a continuous-time statistical model for the atmospheric phase effects, we show how to obtain a related discrete-time model based on sampling the continuous-time process. The effects of the nonstationarity of the resulting signal model are investigated using the sample matrix inversion (SMI) algorithm for minimum mean-squared error (MMSE) equalization of the received signal
Acute stress induces increases in salivary IL-10 levels.
Szabo, Yvette Z; Newton, Tamara L; Miller, James J; Lyle, Keith B; Fernandez-Botran, Rafael
2016-09-01
The purpose of this study was to investigate the stress-reactivity of the anti-inflammatory cytokine, IL-10, in saliva and to determine how salivary IL-10 levels change in relation to those of IL-1β, a pro-inflammatory cytokine, following stress. Healthy young adults were randomly assigned to retrieve a negative emotional memory (n = 46) or complete a modified version of the Trier Social Stress Test (n = 45). Saliva samples were taken 10 min before (baseline) and 50 min after (post-stressor) onset of a 10-min stressor, and were assayed using a high sensitivity multiplex assay for cytokines. Measurable IL-10 levels (above the minimum detectable concentration) were found in 96% of the baseline samples, and 98% of the post-stressor samples. Flow rate-adjusted salivary IL-10 levels as well as IL-1β/IL-10 ratios showed moderate but statistically significant increases in response to stress. Measurement of salivary IL-10 and pro-/anti-inflammatory cytokine ratios may be useful, noninvasive tools, in stress research.
Prevalence study of compulsive buying in a sample with low individual monthly income.
Leite, Priscilla Lourenço; Silva, Adriana Cardoso
2015-01-01
Compulsive buying can be characterized as an almost irresistible impulse to acquire various items. This is a current issue and the prevalence rate in the global population is around 5 to 8%. Some surveys indicate that the problem is growing in young and low-income populations. To evaluate the prevalence of compulsive buying among people with low personal monthly incomes and analyze relationships with socio-demographic data. The Compulsive Buying Scale was administered to screen for compulsive buying and the Hospital Anxiety and Depression Scale was used to assess anxiety and depression in a sample of 56 participants. Pearson coefficients were used to test for correlations. The results indicated that 44.6% presented an average family income equal to or greater than 2.76 minimum wages. It is possible that compulsive buying is not linked to the purchasing power since it was found in a low-income population. Despite the small sample, the results of this study are important for understanding the problem in question.
Results from the first five years of radiation exposure monitoring aboard the ISS
NASA Astrophysics Data System (ADS)
Golightly, M.; Semones, E.; Shelfer, T.; Johnson, S.; Zapp, N.; Weyland, M.
NASA uses a variety of radiation monitoring devices aboard the International Space Station as part of its space flight radiation health program. This operational monitoring system consists of passive dosimeters, internal and external charged particle telescopes, and a tissue equivalent proportional counter (TEPC). Sixteen passive dosimeters, each consisting of TLD-100, TLD-300, TLD-600, and TLD-700 chips in a small acrylic holder, are placed throughout the habitable volume of the ISS. The TEPC and internal charged particle telescopes are portable and can be relocated to multiple locations in the Lab Module or Service Module. The external charged particle telescopes are mounted to a fixed boom attached to the starboard truss. Passive dosimeters were used in eleven monitoring periods over the period 20 May 1999 to 04 May 2003. Over this period exposure rates from TLD-100 measurements ranged from 0.120-0.300 mGy/d. Exposure rates inside the habitable volume are non-uniform: exposures vary by a factor of ˜ 1.7 from minimum to maximum, with the greatest non-uniformity occurring in the Lab Module. Highest daily exposure rates are near the window in the Lab Module, inside the Joint Airlock, and the sleep stations inside the Service Module, while the lowest rates occur inside the polyethylene-lined Temporary Sleep Station in the Lab Module, adjacent to the port ``arm'' of Node 1, and the aft end of the Service Module. The minimum exposure rates as measured by the passive dosimeters occurred in the spring of 2002, very close to the solar F10.7 emission maximum (Feb 2002), and two years after the sunspot maximum (Apr 2000). Exposure rates have since gradually increased as the sun's activity transitions towards solar minimum conditions. Since 01 Jun 2002, dose rates measured by the IV-CPDS, estimated from the count rate in first detector of the telescope's stack, ranged from ˜ 0.170-0.390 mGy/d. The maximum measured dose rate occurred 28 Oct 2003 during the ``Halloween'' space weather event. Interestingly, the minimum dose rate occurred 31 Oct 2003, near the end of the same remarkable space weather event, when the Earth was experiencing a significant Forbush decrease. The average IV-CPDS-measured dose rate increased from 0.194 to 0.234 mGy/d since 01 Jun 2002--an increase of ˜ 21% and a further indication that the low-Earth radiation environment is transitioning from solar maximum conditions towards solar minimum.
Method and apparatus for determination of material residual stress
NASA Technical Reports Server (NTRS)
Chern, Engmin J. (Inventor); Flom, Yury (Inventor)
1993-01-01
A device for the determination of residual stress in a material sample consisting of a sensor coil, adjacent to the material sample, whose resistance varies according to the amount of stress within the material sample, a mechanical push-pull machine for imparting a gradually increasing compressional and tensional force on the material sample, and an impedance gain/phase analyzer and personal computer (PC) for sending an input signal to and receiving an input signal from the sensor coil is presented. The PC will measure and record the change in resistance of the sensor coil and the corresponding amount of strain of the sample. The PC will then determine, from the measurements of change of resistance and corresponding strain of the sample, the point at which the resistance of the sensor coil is at a minimum and the corresponding value and type of strain of the sample at that minimum resistance point, thereby, enabling a calculation of the residual stress in the sample.
Evidence for Ultra-Fast Outflows in Radio-Quiet AGNs: III - Location and Energetics
NASA Technical Reports Server (NTRS)
Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.
2012-01-01
Using the results of a previous X-ray photo-ionization modelling of blue-shifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval approx.0.0003-0.03pc (approx.10(exp 2)-10(exp 4)tau(sub s) from the central black hole, consistent with what is expected for accretion disk winds/outflows. The mass outflow rates are constrained between approx.0.01- 1 Stellar Mass/y, corresponding to approx. or >5-10% of the accretion rates. The average lower-upper limits on the mechanical power are logE(sub K) approx. or = 42.6-44.6 erg/s. However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable to provide a significant contribution to the AGN r.osmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyferts galaxies .
Cheng, Jiyi; Gu, Chenglin; Zhang, Dapeng; Wang, Dien; Chen, Shih-Chi
2016-04-01
In this Letter, we present an ultrafast nonmechanical axial scanning method for two-photon excitation (TPE) microscopy based on binary holography using a digital micromirror device (DMD), achieving a scanning rate of 4.2 kHz, scanning range of ∼180 μm, and scanning resolution (minimum step size) of ∼270 nm. Axial scanning is achieved by projecting the femtosecond laser to a DMD programmed with binary holograms of spherical wavefronts of increasing/decreasing radii. To guide the scanner design, we have derived the parametric relationships between the DMD parameters (i.e., aperture and pixel size), and the axial scanning characteristics, including (1) maximum optical power, (2) minimum step size, and (3) scan range. To verify the results, the DMD scanner is integrated with a custom-built TPE microscope that operates at 60 frames per second. In the experiment, we scanned a pollen sample via both the DMD scanner and a precision z-stage. The results show the DMD scanner generates images of equal quality throughout the scanning range. The overall efficiency of the TPE system was measured to be ∼3%. With the high scanning rate, the DMD scanner may find important applications in random-access imaging or high-speed volumetric imaging that enables visualization of highly dynamic biological processes in 3D with submillisecond temporal resolution.
Du, Xiaozhen; Wang, Wei; Helena van Velthoven, Michelle; Chen, Li; Scherpbier, Robert W.; Zhang, Yanfeng; Wu, Qiong; Li, Ye; Rao, Xiuqin; Car, Josip
2013-01-01
Background Face–to–face interviews by trained field workers are commonly used in household surveys. However, this data collection method is labor–intensive, time–consuming, expensive, prone to interviewer and recall bias and not easily scalable to increase sample representativeness. Objective To explore the feasibility of using text messaging to collect information on infant and young child feeding practice in rural China. Methods Our study was part of a clustered randomized controlled trial that recruited 591 mothers of children aged 12 to 29 months in rural China. We used the test–retest method: first we collected data through face–to–face interviews and then through text messages. We asked the same five questions on standard infant and young child feeding indicators for both methods and asked caregivers how they fed their children yesterday. We assessed the response rate of the text messaging method and compared data agreement of the two methods. Finding In the text messaging survey, the response rate for the first question and the completion rate were 56.5% and 48.7%, respectively. Data agreement between the two methods was excellent for whether the baby was breastfed yesterday (question 1) (kappa, κ = 0.81), moderate for the times of drinking infant formula, fresh milk or yoghurt yesterday (question 2) (intraclass correlation coefficient, ICC = 0.46) and whether iron fortified food or iron supplement was consumed (question 3) (κ = 0.44), and poor for 24–hour dietary recall (question 4) (ICC = 0.13) and times of eating solid and semi–solid food yesterday (question 5) (ICC = 0.06). There was no significant difference in data agreement between the two surveys at different time intervals. For infant and young child feeding indicators from both surveys, continued breastfeeding at 1 year (P = 1.000), continued breastfeeding at 2 years (P = 0.688) and minimum meal frequency (P = 0.056) were not significantly different, whereas minimum dietary diversity, minimum accepted diet and consumption of iron–rich or iron fortified foods were significantly different (P < 0.001). Conclusions The response rate for our text messaging survey was moderate compared to response rate of other studies using text messaging method and the data agreement between the two methods varied for different survey questions and infant and young child feeding indicators. Future research is needed to increase the response rate and improve data validity of text messaging data collection. PMID:24363921
Cherner, M; Suarez, P; Lazzaretto, D; Fortuny, L Artiola I; Mindt, Monica Rivera; Dawes, S; Marcotte, Thomas; Grant, I; Heaton, R
2007-03-01
The large number of primary Spanish speakers both in the United States and the world makes it imperative that appropriate neuropsychological assessment instruments be available to serve the needs of these populations. In this article we describe the norming process for Spanish speakers from the U.S.-Mexico border region on the Brief Visuospatial Memory Test-revised and the Hopkins Verbal Learning Test-revised. We computed the rates of impairment that would be obtained by applying the original published norms for these tests to raw scores from the normative sample, and found substantial overestimates compared to expected rates. As expected, these overestimates were most salient at the lowest levels of education, given the under-representation of poorly educated subjects in the original normative samples. Results suggest that demographically corrected norms derived from healthy Spanish-speaking adults with a broad range of education, are less likely to result in diagnostic errors. At minimum, demographic corrections for the tests in question should include the influence of literacy or education, in addition to the traditional adjustments for age. Because the age range of our sample was limited, the norms presented should not be applied to elderly populations.
Spatial variability in airborne pollen concentrations.
Raynor, G S; Ogden, E C; Hayes, J V
1975-03-01
Tests were conducted to determine the relationship between airborne pollen concentrations and distance. Simultaneous samples were taken in 171 tests with sets of eight rotoslide samplers spaced from one to 486 M. apart in straight lines. Use of all possible pairs gave 28 separation distances. Tests were conducted over a 2-year period in urban and rural locations distant from major pollen sources during both tree and ragweed pollen seasons. Samples were taken at a height of 1.5 M. during 5-to 20-minute periods. Tests were grouped by pollen type, location, year, and direction of the wind relative to the line. Data were analyzed to evaluate variability without regard to sampler spacing and variability as a function of separation distance. The mean, standard deviation, coefficient of variation, ratio of maximum to the mean, and ratio of minimum to the mean were calculated for each test, each group of tests, and all cases. The average coefficient of variation is 0.21, the maximum over the mean, 1.39 and the minimum over the mean, 0.69. No relationship was found with experimental conditions. Samples taken at the minimum separation distance had a mean difference of 18 per cent. Differences between pairs of samples increased with distance in 10 of 13 groups. These results suggest that airborne pollens are not always well mixed in the lower atmosphere and that a sample becomes less representative with increasing distance from the sampling location.
Gallego, E; Perales, J F; Roca, F J; Guardino, X
2014-02-01
Closed landfills can be a source of VOC and odorous nuisances to their atmospheric surroundings. A self-designed cylindrical air flux chamber was used to measure VOC surface emissions in a closed industrial landfill located in Cerdanyola del Vallès, Catalonia, Spain. The two main objectives of the study were the evaluation of the performance of the chamber setup in typical measurement conditions and the determination of the emission rates of 60 different VOC from that industrial landfill, generating a valuable database that can be useful in future studies related to industrial landfill management. Triplicate samples were taken in five selected sampling points. VOC were sampled dynamically using multi-sorbent bed tubes (Carbotrap, Carbopack X, Carboxen 569) connected to SKC AirCheck 2000 pumps. The analysis was performed by automatic thermal desorption coupled with a capillary gas chromatograph/mass spectrometry detector. The emission rates of sixty VOC were calculated for each sampling point in an effort to characterize surface emissions. To calculate average, minimum and maximum emission values for each VOC, the results were analyzed by three different methods: Global, Kriging and Tributary area. Global and Tributary area methodologies presented similar values, with total VOC emissions of 237 ± 48 and 222 ± 46 g day(-1), respectively; however, Kriging values were lower, 77 ± 17 gd ay(-1). The main contributors to the total emission rate were aldehydes (nonanal and decanal), acetic acid, ketones (acetone), aromatic hydrocarbons and alcohols. Most aromatic hydrocarbon (except benzene, naphthalene and methylnaphthalenes) and aldehyde emission rates exhibited strong correlations with the rest of VOC of their family, indicating a possible common source of these compounds. B:T ratio obtained from the emission rates of the studied landfill suggested that the factors that regulate aromatic hydrocarbon distributions in the landfill emissions are different from the ones from urban areas. Environmental conditions (atmospheric pressure, temperature and relative humidity) did not alter the pollutant emission fluxes. © 2013.
Two-sample binary phase 2 trials with low type I error and low sample size
Litwin, Samuel; Basickes, Stanley; Ross, Eric A.
2017-01-01
Summary We address design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
NASA Astrophysics Data System (ADS)
Zhang, Ying; Mao, Xuefei; Liu, Jixin; Wang, Min; Qian, Yongzhong; Gao, Chengling; Qi, Yuehan
2016-04-01
In this work, a solid sampling device consisting of a tungsten coil trap, porous carbon vaporizer and on-line ashing furnace of a Ni-Cr coil was interfaced with inductively coupled plasma mass spectrometry (ICP-MS). A modified double gas circuit system was employed that was composed of carrier and supplemental gas lines controlled by separate gas mass flow controllers. For Cd determination in food samples using the assembled solid sampling ICP-MS, the optimal ashing and vaporization conditions, flow rate of the argon-hydrogen (Ar/H2) (v:v = 24:1) carrier gas and supplemental gas, and minimum sampling mass were investigated. Under the optimized conditions, the limit of quantification was 0.5 pg and the relative standard deviation was within a 10.0% error range (n = 10). Furthermore, the mean spiked recoveries for various food samples were 99.4%-105.9% (n = 6). The Cd concentrations measured by the proposed method were all within the certified values of the reference materials or were not significantly different (P > 0.05) from those of the microwave digestion ICP-MS method, demonstrating the good accuracy and precision of the solid sampling ICP-MS method for Cd determination in food samples.
Robust linear discriminant analysis with distance based estimators
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina
2017-11-01
Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and allocating future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA highly relies on the sample mean and pooled sample covariance matrix which are known to be sensitive to outliers. To alleviate these conflicts, a new robust LDA using distance based estimators known as minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data study were conducted to examine on the performance of the proposed RLDR measured in terms of misclassification error rates. The computational result showed that the proposed RLDR is better than the classical LDR and was comparable with the existing robust LDR.
Holmes, John; Meng, Yang; Meier, Petra S; Brennan, Alan; Angus, Colin; Campbell-Burton, Alexia; Guo, Yelan; Hill-McManus, Daniel; Purshouse, Robin C
2014-01-01
Summary Background Several countries are considering a minimum price policy for alcohol, but concerns exist about the potential effects on drinkers with low incomes. We aimed to assess the effect of a £0·45 minimum unit price (1 unit is 8 g/10 mL ethanol) in England across the income and socioeconomic distributions. Methods We used the Sheffield Alcohol Policy Model (SAPM) version 2.6, a causal, deterministic, epidemiological model, to assess effects of a minimum unit price policy. SAPM accounts for alcohol purchasing and consumption preferences for population subgroups including income and socioeconomic groups. Purchasing preferences are regarded as the types and volumes of alcohol beverages, prices paid, and the balance between on-trade (eg, bars) and off-trade (eg, shops). We estimated price elasticities from 9 years of survey data and did sensitivity analyses with alternative elasticities. We assessed effects of the policy on moderate, hazardous, and harmful drinkers, split into three socioeconomic groups (living in routine or manual households, intermediate households, and managerial or professional households). We examined policy effects on alcohol consumption, spending, rates of alcohol-related health harm, and opportunity costs associated with that harm. Rates of harm and costs were estimated for a 10 year period after policy implementation. We adjusted baseline rates of mortality and morbidity to account for differential risk between socioeconomic groups. Findings Overall, a minimum unit price of £0·45 led to an immediate reduction in consumption of 1·6% (−11·7 units per drinker per year) in our model. Moderate drinkers were least affected in terms of consumption (−3·8 units per drinker per year for the lowest income quintile vs 0·8 units increase for the highest income quintile) and spending (increase in spending of £0·04 vs £1·86 per year). The greatest behavioural changes occurred in harmful drinkers (change in consumption of −3·7% or −138·2 units per drinker per year, with a decrease in spending of £4·01), especially in the lowest income quintile (−7·6% or −299·8 units per drinker per year, with a decrease in spending of £34·63) compared with the highest income quintile (−1·0% or −34·3 units, with an increase in spending of £16·35). Estimated health benefits from the policy were also unequally distributed. Individuals in the lowest socioeconomic group (living in routine or manual worker households and comprising 41·7% of the sample population) would accrue 81·8% of reductions in premature deaths and 87·1% of gains in terms of quality-adjusted life-years. Interpretation Irrespective of income, moderate drinkers were little affected by a minimum unit price of £0·45 in our model, with the greatest effects noted for harmful drinkers. Because harmful drinkers on low incomes purchase more alcohol at less than the minimum unit price threshold compared with other groups, they would be affected most by this policy. Large reductions in consumption in this group would however coincide with substantial health gains in terms of morbidity and mortality related to reduced alcohol consumption. Funding UK Medical Research Council and Economic and Social Research Council (grant G1000043). PMID:24522180
Holmes, John; Meng, Yang; Meier, Petra S; Brennan, Alan; Angus, Colin; Campbell-Burton, Alexia; Guo, Yelan; Hill-McManus, Daniel; Purshouse, Robin C
2014-05-10
Several countries are considering a minimum price policy for alcohol, but concerns exist about the potential effects on drinkers with low incomes. We aimed to assess the effect of a £0·45 minimum unit price (1 unit is 8 g/10 mL ethanol) in England across the income and socioeconomic distributions. We used the Sheffield Alcohol Policy Model (SAPM) version 2.6, a causal, deterministic, epidemiological model, to assess effects of a minimum unit price policy. SAPM accounts for alcohol purchasing and consumption preferences for population subgroups including income and socioeconomic groups. Purchasing preferences are regarded as the types and volumes of alcohol beverages, prices paid, and the balance between on-trade (eg, bars) and off-trade (eg, shops). We estimated price elasticities from 9 years of survey data and did sensitivity analyses with alternative elasticities. We assessed effects of the policy on moderate, hazardous, and harmful drinkers, split into three socioeconomic groups (living in routine or manual households, intermediate households, and managerial or professional households). We examined policy effects on alcohol consumption, spending, rates of alcohol-related health harm, and opportunity costs associated with that harm. Rates of harm and costs were estimated for a 10 year period after policy implementation. We adjusted baseline rates of mortality and morbidity to account for differential risk between socioeconomic groups. Overall, a minimum unit price of £0.45 led to an immediate reduction in consumption of 1.6% (-11.7 units per drinker per year) in our model. Moderate drinkers were least affected in terms of consumption (-3.8 units per drinker per year for the lowest income quintile vs 0.8 units increase for the highest income quintile) and spending (increase in spending of £0.04 vs £1.86 per year). The greatest behavioural changes occurred in harmful drinkers (change in consumption of -3.7% or -138.2 units per drinker per year, with a decrease in spending of £4.01), especially in the lowest income quintile (-7.6% or -299.8 units per drinker per year, with a decrease in spending of £34.63) compared with the highest income quintile (-1.0% or -34.3 units, with an increase in spending of £16.35). Estimated health benefits from the policy were also unequally distributed. Individuals in the lowest socioeconomic group (living in routine or manual worker households and comprising 41.7% of the sample population) would accrue 81.8% of reductions in premature deaths and 87.1% of gains in terms of quality-adjusted life-years. Irrespective of income, moderate drinkers were little affected by a minimum unit price of £0.45 in our model, with the greatest effects noted for harmful drinkers. Because harmful drinkers on low incomes purchase more alcohol at less than the minimum unit price threshold compared with other groups, they would be affected most by this policy. Large reductions in consumption in this group would however coincide with substantial health gains in terms of morbidity and mortality related to reduced alcohol consumption. UK Medical Research Council and Economic and Social Research Council (grant G1000043). Copyright © 2014 Holmes et al. Open Access article distributed under the terms of CC BY. Published by Elsevier Ltd. All rights reserved.
Daily temperature records from a mesonet in the foothills of the Canadian Rocky Mountains, 2005-2010
NASA Astrophysics Data System (ADS)
Wood, Wendy H.; Marshall, Shawn J.; Whitehead, Terri L.; Fargey, Shannon E.
2018-03-01
Near-surface air temperatures were monitored from 2005 to 2010 in a mesoscale network of 230 sites in the foothills of the Rocky Mountains in southwestern Alberta, Canada. The monitoring network covers a range of elevations from 890 to 2880 m above sea level and an area of about 18 000 km2, sampling a variety of topographic settings and surface environments with an average spatial density of one station per 78 km2. This paper presents the multiyear temperature dataset from this study, with minimum, maximum, and mean daily temperature data available at https://doi.org/10.1594/PANGAEA.880611. In this paper, we describe the quality control and processing methods used to clean and filter the data and assess its accuracy. Overall data coverage for the study period is 91 %. We introduce a weather-system-dependent gap-filling technique to estimate the missing 9 % of data. Monthly and seasonal distributions of minimum, maximum, and mean daily temperature lapse rates are shown for the region.
A probabilistic asteroid impact risk model: assessment of sub-300 m impacts
NASA Astrophysics Data System (ADS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2017-06-01
A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
NASA Astrophysics Data System (ADS)
Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.
2018-06-01
Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. While significant positive trends in total column ozone could be identified at all latitudes by ˜ 2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ˜ 2050, and in the tropics and high latitudes deep into the second half of the 21st century.
Tomblin, J. Bruce; Peng, Shu-Chen; Spencer, Linda J.; Lu, Nelson
2011-01-01
Purpose This study characterized the development of speech sound production in prelingually deaf children with a minimum of 8 years of cochlear implant (CI) experience. Method Twenty-seven pediatric CI recipients' spontaneous speech samples from annual evaluation sessions were phonemically transcribed. Accuracy for these speech samples was evaluated in piecewise regression models. Results As a group, pediatric CI recipients showed steady improvement in speech sound production following implantation, but the improvement rate declined after 6 years of device experience. Piecewise regression models indicated that the slope estimating the participants' improvement rate was statistically greater than 0 during the first 6 years postimplantation, but not after 6 years. The group of pediatric CI recipients' accuracy of speech sound production after 4 years of device experience reasonably predicts their speech sound production after 5–10 years of device experience. Conclusions The development of speech sound production in prelingually deaf children stabilizes after 6 years of device experience, and typically approaches a plateau by 8 years of device use. Early growth in speech before 4 years of device experience did not predict later rates of growth or levels of achievement. However, good predictions could be made after 4 years of device use. PMID:18695018
NASA Astrophysics Data System (ADS)
Woeger, Julia; Kinoshita, Shunichi; Wolfgang, Eder; Briguglio, Antonino; Hohenegger, Johann
2016-04-01
Operculina complanata was collected in 20 and 50 m depth around the Island of Sesoko belonging to Japans southernmost prefecture Okinawa in a series of monthly sampling over a period of 16 months (Apr.2014-July2015). A minimum of 8 specimens (4 among the smallest and 4 among the largest) per sampling were cultured in a long term experiment that was set up to approximate conditions in the field as closely as possible. A set up allowing recognition of individual specimens enabled consistent documentation of chamber formation, which in combination with μ-CT-scanning after the investigation period permitted the assignment of growth steps to specific time periods. These data were used to fit various mathematical models to describe growth (exponential-, logistic-, generalized logistic-, Gompertz-function) and chamber building rate (Michaelis-Menten-, Bertalanffy- function) of Operculina complanata. The mathematically retrieved maximum lifespan and mean chamber building rate found in cultured Operculina complanata were further compared to first results obtained by the simultaneously conducted "natural laboratory approach". Even though these comparisons hint at a somewhat stunted growth and truncated life spans of Operculina complanata in culture, they represent a possibility to assess and improve the quality of further cultivation set ups, opening new prospects to a better understanding of the their theoretical niches.
Heart Rate During Sleep: Implications for Monitoring Training Status
Waldeck, Miriam R.; Lambert, Michael I.
2003-01-01
Resting heart rate has sometimes been used as a marker of training status. It is reasonable to assume that the relationship between heart rate and training status should be more evident during sleep when extraneous factors that may influence heart rate are reduced. Therefore the aim of the study was to assess the repeatability of monitoring heart rate during sleep when training status remained unchanged, to determine if this measurement had sufficient precision to be used as a marker of training status. The heart rate of ten female subjects was monitored for 24 hours on three occasions over three weeks whilst training status remained unchanged. Average, minimum and maximum heart rate during sleep was calculated. The average heart rate of the group during sleep was similar on each of the three tests (65 ± 9, 63 ± 6 and 67 ± 7 beats·min-1 respectively). The range in minimum heart rate variation during sleep for all subjects over the three testing sessions was from 0 to 10 beats·min-1 (mean = 5 ± 3 beats·min-1) and for maximum heart rate variation was 2 to 31 beats·min-1 (mean = 13 ± 9 beats·min-1). In summary it was found that on an individual basis the minimum heart rate during sleep varied by about 8 beats·min-1. This amount of intrinsic day-to-day variation needs to be considered when changes in heart rate that may occur with changes in training status are interpreted. PMID:24688273
Luo, Xiao-Feng; Jiao, Jian-Hua; Zhang, Wen-Yue; Pu, Han-Ming; Qu, Bao-Jin; Yang, Bing-Ya; Hou, Min; Ji, Min-Jun
2016-07-07
To investigate clarithromycin resistance positions 2142, 2143 and 2144 of the 23SrRNA gene in Helicobacter pylori (H. pylori) by nested-allele specific primer-polymerase chain reaction (nested-ASP-PCR). The gastric tissue and saliva samples from 99 patients with positive results of the rapid urease test (RUT) were collected. The nested-ASP-PCR method was carried out with the external primers and inner allele-specific primers corresponding to the reference strain and clinical strains. Thirty gastric tissue and saliva samples were tested to determine the sensitivity of nested-ASP-PCR and ASP-PCR methods. Then, clarithromycin resistance was detected for 99 clinical samples by using different methods, including nested-ASP-PCR, bacterial culture and disk diffusion. The nested-ASP-PCR method was successfully established to test the resistance mutation points 2142, 2143 and 2144 of the 23SrRNA gene of H. pylori. Among 30 samples of gastric tissue and saliva, the H. pylori detection rate of nested-ASP-PCR was 90% and 83.33%, while the detection rate of ASP-PCR was just 63% and 56.67%. Especially in the saliva samples, nested-ASP-PCR showed much higher sensitivity in H. pylori detection and resistance mutation rates than ASP-PCR. In the 99 RUT-positive gastric tissue and saliva samples, the H. pylori-positive detection rate by nested-ASP-PCR was 87 (87.88%) and 67 (67.68%), in which there were 30 wild-type and 57 mutated strains in gastric tissue and 22 wild-type and 45 mutated strains in saliva. Genotype analysis showed that three-points mixed mutations were quite common, but different resistant strains were present in gastric mucosa and saliva. Compared to the high sensitivity shown by nested-ASP-PCR, the positive detection of bacterial culture with gastric tissue samples was 50 cases, in which only 26 drug-resistant strains were found through analyzing minimum inhibitory zone of clarithromycin. The nested-ASP-PCR assay showed higher detection sensitivity than ASP-PCR and drug sensitivity testing, which could be performed to evaluate clarithromycin resistance of H. pylori.
Luo, Xiao-Feng; Jiao, Jian-Hua; Zhang, Wen-Yue; Pu, Han-Ming; Qu, Bao-Jin; Yang, Bing-Ya; Hou, Min; Ji, Min-Jun
2016-01-01
AIM: To investigate clarithromycin resistance positions 2142, 2143 and 2144 of the 23SrRNA gene in Helicobacter pylori (H. pylori) by nested-allele specific primer-polymerase chain reaction (nested-ASP-PCR). METHODS: The gastric tissue and saliva samples from 99 patients with positive results of the rapid urease test (RUT) were collected. The nested-ASP-PCR method was carried out with the external primers and inner allele-specific primers corresponding to the reference strain and clinical strains. Thirty gastric tissue and saliva samples were tested to determine the sensitivity of nested-ASP-PCR and ASP-PCR methods. Then, clarithromycin resistance was detected for 99 clinical samples by using different methods, including nested-ASP-PCR, bacterial culture and disk diffusion. RESULTS: The nested-ASP-PCR method was successfully established to test the resistance mutation points 2142, 2143 and 2144 of the 23SrRNA gene of H. pylori. Among 30 samples of gastric tissue and saliva, the H. pylori detection rate of nested-ASP-PCR was 90% and 83.33%, while the detection rate of ASP-PCR was just 63% and 56.67%. Especially in the saliva samples, nested-ASP-PCR showed much higher sensitivity in H. pylori detection and resistance mutation rates than ASP-PCR. In the 99 RUT-positive gastric tissue and saliva samples, the H. pylori-positive detection rate by nested-ASP-PCR was 87 (87.88%) and 67 (67.68%), in which there were 30 wild-type and 57 mutated strains in gastric tissue and 22 wild-type and 45 mutated strains in saliva. Genotype analysis showed that three-points mixed mutations were quite common, but different resistant strains were present in gastric mucosa and saliva. Compared to the high sensitivity shown by nested-ASP-PCR, the positive detection of bacterial culture with gastric tissue samples was 50 cases, in which only 26 drug-resistant strains were found through analyzing minimum inhibitory zone of clarithromycin. CONCLUSION: The nested-ASP-PCR assay showed higher detection sensitivity than ASP-PCR and drug sensitivity testing, which could be performed to evaluate clarithromycin resistance of H. pylori. PMID:27433095
Code of Federal Regulations, 2011 CFR
2011-07-01
....011) 3-run average (1-hour minimum sample time per run) EPA Reference Method 5 of appendix A-3 of part... by volume (ppmv) 20 5.5 11 3-run average (1-hour minimum sample time per run) EPA Reference Method 10... dscf) 16 (7.0) or 0.013 (0.0057) 0.85 (0.37) or 0.020 (0.0087) 9.3 (4.1) or 0.054 (0.024) 3-run average...
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
NASA Astrophysics Data System (ADS)
Maruyama, Keisuke; Hanafusa, Hiroaki; Ashihara, Ryuhei; Hayashi, Shohei; Murakami, Hideki; Higashi, Seiichiro
2015-06-01
We have investigated high-temperature and rapid annealing of a silicon carbide (SiC) wafer by atmospheric pressure thermal plasma jet (TPJ) irradiation for impurity activation. To reduce the temperature gradient in the SiC wafer, a DC current preheating system and the lateral back-and-forth motion of the wafer were introduced. A maximum surface temperature of 1835 °C within 2.4 s without sample breakage was achieved, and aluminum (Al), phosphorus (P), and arsenic (As) activations in SiC were demonstrated. We have investigated precise control of heating rate (Rh) and cooling rate (Rc) during rapid annealing of P+-implanted 4H-SiC and its impact on impurity activation. No dependence of resistivity on Rh was observed, while increasing Rc significantly decreased resistivity. A minimum resistivity of 0.0025 Ω·cm and a maximum carrier concentration of 2.9 × 1020 cm-3 were obtained at Rc = 568 °C/s.
Pyrolysis of tyre powder using microwave thermogravimetric analysis: Effect of microwave power.
Song, Zhanlong; Yang, Yaqing; Zhou, Long; Zhao, Xiqiang; Wang, Wenlong; Mao, Yanpeng; Ma, Chunyuan
2017-02-01
The pyrolytic characteristics of tyre powder treated under different microwave powers (300, 500, and 700 W) were studied via microwave thermogravimetric analysis. The product yields at different power levels were studied, along with comparative analysis of microwave pyrolysis and conventional pyrolysis. The feedstock underwent preheating, intense pyrolysis, and final pyrolysis in sequence. The main and secondary weight loss peaks observed during the intense pyrolysis stage were attributed to the decomposition of natural rubbers and synthetic rubbers, respectively. The total mass loss rates, bulk temperatures, and maximum temperatures were distinctively higher at higher powers. However, the maximum mass loss rate (0.005 s -1 ), the highest yields of liquid product (53%), and the minimum yields of residual solid samples (43.83%) were obtained at 500 W. Compared with conventional pyrolysis, microwave pyrolysis exhibited significantly different behaviour with faster reaction rates, which can decrease the decomposition temperatures of both natural and synthetic rubber by approximately 110 °C-140 °C.
NASA Astrophysics Data System (ADS)
Torres Beltran, M.
2016-02-01
The Scientific Committee on Oceanographic Research (SCOR) Working Group 144 "Microbial Community Responses to Ocean Deoxygenation" workshop held in Vancouver, British Columbia in July 2014 had the primary objective of kick-starting the establishment of a minimal core of technologies, techniques and standard operating procedures (SOPs) to enable compatible process rate and multi-molecular data (DNA, RNA and protein) collection in marine oxygen minimum zones (OMZs) and other oxygen starved waters. Experimental activities conducted in Saanich Inlet, a seasonally anoxic fjord on Vancouver Island British Columbia, were designed to compare and cross-calibrate in situ sampling devices (McLane PPS system) with conventional bottle sampling and incubation methods. Bottle effects on microbial community composition, and activity were tested using different filter combinations and sample volumes to compare PPS/IPS (0.4 µm) versus Sterivex (0.22 µm) filtration methods with and without prefilters (2.7 µm). Resulting biomass was processed for small subunit ribosomal RNA gene sequencing across all three domains of life on the 454 platform followed by downstream community structure analyses. Significant community shifts occurred within and between filter fractions for in situ versus on-ship processed samples. For instance, the relative abundance of several bacterial phyla including Bacteroidetes, Delta and Gammaproteobacteria decreased five-fold on-ship when compared to in situ filtration. Similarly, experimental mesocosms showed similar community structure and activity to in situ filtered samples indicating the need to cross-calibrate incubations to constrain bottle effects. In addition, alpha and beta diversity significantly changed as function of filter size and volume, as well as the operational taxonomic units identified using indicator species analysis for each filter size. Our results provide statistical support that microbial community structure is systematically biased by filter fraction methods and highlight the need for establishing compatible techniques among researchers that facilitate comparative and reproducible science for the whole community.
Landkamer, Lee L.; Harvey, Ronald W.; Scheibe, Timothy D.; Ryan, Joseph N.
2013-01-01
A colloid transport model is introduced that is conceptually simple yet captures the essential features of colloid transport and retention in saturated porous media when colloid retention is dominated by the secondary minimum because an electrostatic barrier inhibits substantial deposition in the primary minimum. This model is based on conventional colloid filtration theory (CFT) but eliminates the empirical concept of attachment efficiency. The colloid deposition rate is computed directly from CFT by assuming all predicted interceptions of colloids by collectors result in at least temporary deposition in the secondary minimum. Also, a new paradigm for colloid re-entrainment based on colloid population heterogeneity is introduced. To accomplish this, the initial colloid population is divided into two fractions. One fraction, by virtue of physiochemical characteristics (e.g., size and charge), will always be re-entrained after capture in a secondary minimum. The remaining fraction of colloids, again as a result of physiochemical characteristics, will be retained “irreversibly” when captured by a secondary minimum. Assuming the dispersion coefficient can be estimated from tracer behavior, this model has only two fitting parameters: (1) the fraction of the initial colloid population that will be retained “irreversibly” upon interception by a secondary minimum, and (2) the rate at which reversibly retained colloids leave the secondary minimum. These two parameters were correlated to the depth of the Derjaguin-Landau-Verwey-Overbeek (DLVO) secondary energy minimum and pore-water velocity, two physical forces that influence colloid transport. Given this correlation, the model serves as a heuristic tool for exploring the influence of physical parameters such as surface potential and fluid velocity on colloid transport.
Oxygen enhanced switching to combustion of lower rank fuels
Kobayashi, Hisashi; Bool, III, Lawrence E.; Wu, Kuang Tsai
2004-03-02
A furnace that combusts fuel, such as coal, of a given minimum energy content to obtain a stated minimum amount of energy per unit of time is enabled to combust fuel having a lower energy content, while still obtaining at least the stated minimum energy generation rate, by replacing a small amount of the combustion air fed to the furnace by oxygen. The replacement of oxygen for combustion air also provides reduction in the generation of NOx.
Yakimov, A I; Nikiforov, A I; Dvurechenskii, A V; Ulyanov, V V; Volodin, V A; Groetzschel, R
2006-09-28
The effect of Ge deposition rate on the morphology and structural properties of self-assembled Ge/Si(001) islands was studied. Ge/Si(001) layers were grown by solid-source molecular-beam epitaxy at 500 °C. We adjusted the Ge coverage, 6 monolayers (ML), and varied the Ge growth rate by a factor of 100, R = 0.02-2 ML s(-1), to produce films consisting of hut-shaped Ge islands. The samples were characterized by scanning tunnelling microscopy, Raman spectroscopy, and Rutherford backscattering measurements. The mean lateral size of Ge nanoclusters decreases from 14.1 nm at R = 0.02 ML s(-1) to 9.8 nm at R = 2 ML s(-1). The normalized width of the size distribution shows non-monotonic behaviour as a function of R and has a minimum value of 19% at R = 2 ML s(-1). Ge nanoclusters fabricated at the highest deposition rate demonstrate the best structural quality and the highest Ge content (∼0.9).
14 CFR Appendix C to Part 141 - Instrument Rating Course
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Instrument Rating Course C Appendix C to... Rating Course 1. Applicability. This appendix prescribes the minimum curriculum for an instrument rating course and an additional instrument rating course, required under this part, for the following ratings...
14 CFR Appendix C to Part 141 - Instrument Rating Course
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Instrument Rating Course C Appendix C to... Rating Course 1. Applicability. This appendix prescribes the minimum curriculum for an instrument rating course and an additional instrument rating course, required under this part, for the following ratings...
14 CFR Appendix C to Part 141 - Instrument Rating Course
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Instrument Rating Course C Appendix C to... Rating Course 1. Applicability. This appendix prescribes the minimum curriculum for an instrument rating course and an additional instrument rating course, required under this part, for the following ratings...
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
Emerging technologies in medical applications of minimum volume vitrification
Zhang, Xiaohui; Catalano, Paolo N; Gurkan, Umut Atakan; Khimji, Imran; Demirci, Utkan
2011-01-01
Cell/tissue biopreservation has broad public health and socio-economic impact affecting millions of lives. Cryopreservation technologies provide an efficient way to preserve cells and tissues targeting the clinic for applications including reproductive medicine and organ transplantation. Among these technologies, vitrification has displayed significant improvement in post-thaw cell viability and function by eliminating harmful effects of ice crystal formation compared to the traditional slow freezing methods. However, high cryoprotectant agent concentrations are required, which induces toxicity and osmotic stress to cells and tissues. It has been shown that vitrification using small sample volumes (i.e., <1 μl) significantly increases cooling rates and hence reduces the required cryoprotectant agent levels. Recently, emerging nano- and micro-scale technologies have shown potential to manipulate picoliter to nanoliter sample sizes. Therefore, the synergistic integration of nanoscale technologies with cryogenics has the potential to improve biopreservation methods. PMID:21955080
A functional description of the Buffered Telemetry Demodulator (BTD)
NASA Technical Reports Server (NTRS)
Tsou, H.; Shah, B.; Lee, R.; Hinedi, S.
1993-01-01
This article gives a functional description of the buffered telemetry demodulator (BTD), which operates on recorded digital samples to extract the symbols from the received signal. The key advantages of the BTD are as follows: (1) its ability to reprocess the signal to reduce acquisition time; (2) its ability to use future information about the signal and to perform smoothing on past samples; and (3) its minimum transmission bandwidth requirement as each sub carrier harmonic is processed individually. The first application of the BTD would be the Galileo S-band contingency mission, where the signal is so weak that reprocessing to reduce the acquisition time is crucial. Moreover, in the event of employing antenna arraying with full spectrum combining, only the sub carrier harmonics need to be transmitted between sites, resulting in significant reduction in data rate transmission requirements. Software implementation of the BTD is described for various general-purpose computers.
Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M
2018-05-01
Among the earliest macroecological patterns documented, is the range and body size relationship, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications in lineages evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences on lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range sizes required by the species' body size, to untangle its role on the diversification of a Neotropical species-rich bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. The species with highest and lowest distances to minimum range size had lower speciation rates, while species close to medium distances values had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
This disclosure describes, in part, a system management component and failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, desired latency, minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and a source of the anomaly. Based on the identified anomaly, data rates and or datamore » paths may be adjusted in real-time to ensure that the power grid data network does not become overloaded and/or fail.« less
14 CFR 91.1053 - Crewmember experience.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Fractional Ownership... and ratings: (1) Total flight time for all pilots: (i) Pilot in command—A minimum of 1,500 hours. (ii) Second in command—A minimum of 500 hours. (2) For multi-engine turbine-powered fixed-wing and powered...
No minimum threshold for ozone-induced changes in soybean canopy fluxes
USDA-ARS?s Scientific Manuscript database
Tropospheric ozone concentrations [O3] are increasing at rates that exceed any other pollutant. This highly reactive gas drives reductions in plant productivity and canopy water use while also increasing canopy temperature and sensible heat flux. It is not clear whether a minimum threshold of ozone ...
20 CFR 229.48 - Family maximum.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Family maximum. 229.48 Section 229.48... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.48 Family maximum. (a) Family... month on one person's earnings record is limited. This limited amount is called the family maximum. The...
46 CFR 164.023-13 - Production tests and inspections.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Constant Rate of Traverse tensile testing machine, capable of initial clamp separation of ten inches and a... the acceptance testing values but not less than the performance minimums. (2) Length/weight values must be within 5 percent of the acceptance testing values but not less than the performance minimums...
Time-Series Evidence of the Effect of the Minimum Wage on Youth Employment and Unemployment.
ERIC Educational Resources Information Center
Brown, Charles; And Others
1983-01-01
The study finds that a 10 percent increase in the federal minimum wage (or the coverage rate) would reduce teenage (16-19) employment by about one percent, which is at the lower end of the range of estimates from previous studies. (Author/SSH)
29 CFR 4.105 - The Act as amended.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provision for periodic adjustment of minimum wage rates and fringe benefits payable thereunder by the... the Act's coverage to white collar workers. Accordingly, the minimum wage protection of the Act now... to impose on successor contractors certain requirements (see § 4.1b) with respect to payment of wage...
Shrot, Yoav; Frydman, Lucio
2011-04-01
A topic of active investigation in 2D NMR relates to the minimum number of scans required for acquiring this kind of spectra, particularly when these are dictated by sampling rather than by sensitivity considerations. Reductions in this minimum number of scans have been achieved by departing from the regular sampling used to monitor the indirect domain, and relying instead on non-uniform sampling and iterative reconstruction algorithms. Alternatively, so-called "ultrafast" methods can compress the minimum number of scans involved in 2D NMR all the way to a minimum number of one, by spatially encoding the indirect domain information and subsequently recovering it via oscillating field gradients. Given ultrafast NMR's simultaneous recording of the indirect- and direct-domain data, this experiment couples the spectral constraints of these orthogonal domains - often calling for the use of strong acquisition gradients and large filter widths to fulfill the desired bandwidth and resolution demands along all spectral dimensions. This study discusses a way to alleviate these demands, and thereby enhance the method's performance and applicability, by combining spatial encoding with iterative reconstruction approaches. Examples of these new principles are given based on the compressed-sensed reconstruction of biomolecular 2D HSQC ultrafast NMR data, an approach that we show enables a decrease of the gradient strengths demanded in this type of experiments by up to 80%. Copyright © 2011 Elsevier Inc. All rights reserved.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
NASA Astrophysics Data System (ADS)
Cheng, Hu; Zhang, Junran; Li, Yanchun; Li, Gong; Li, Xiaodong; Liu, Jing
2018-01-01
We have designed and implemented a novel DLD for controlling pressure and compression/decompression rate. Combined with the use of the symmetric diamond anvil cells (DACs), the DLD adopts three piezo-electric (PE) actuators and three static load screws to remotely control pressure in accurate and consistent manner at room temperature. This device allows us to create different loading mechanisms and frames for a variety of existing and commonly used diamond cells rather than designing specialized or dedicated diamond cells with various drives. The sample pressure compression/decompression rate that we have achieved is up to 58.6/43.3 TPa/s, respectively. The minimum of load time is less than 1 ms. The DLD is a powerful tool for exploring the effects of rapid (de)compression on the structure of materials and the properties of materials.
Development of an optical inspection platform for surface defect detection in touch panel glass
NASA Astrophysics Data System (ADS)
Chang, Ming; Chen, Bo-Cheng; Gabayno, Jacque Lynn; Chen, Ming-Fu
2016-04-01
An optical inspection platform combining parallel image processing with high resolution opto-mechanical module was developed for defect inspection of touch panel glass. Dark field images were acquired using a 12288-pixel line CCD camera with 3.5 µm per pixel resolution and 12 kHz line rate. Key features of the glass surface were analyzed by parallel image processing on combined CPU and GPU platforms. Defect inspection of touch panel glass, which provided 386 megapixel image data per sample, was completed in roughly 5 seconds. High detection rate of surface scratches on the touch panel glass was realized with minimum defects size of about 10 µm after inspection. The implementation of a custom illumination source significantly improved the scattering efficiency on the surface, therefore enhancing the contrast in the acquired images and overall performance of the inspection system.
20 CFR 345.302 - Definition of terms and phrases used in experience-rating.
Code of Federal Regulations, 2011 CFR
2011-04-01
... pooled charge ratio thus picks up the cost of benefits paid to employees of employers whose rate of... raised in order to bring that rate to the minimum rate of zero is multiplied by the employer's 1-year...
Arslan, Erşan; Aras, Dicle
2016-01-01
[Purpose] The aim of this study was to compare the body composition, heart rate variability, and aerobic and anaerobic performance between competitive cyclists and triathletes. [Subjects] Six cyclists and eight triathletes with experience in competitions voluntarily participated in this study. [Methods] The subjects’ body composition was measured with an anthropometric tape and skinfold caliper. Maximal oxygen consumption and maximum heart rate were determined using the incremental treadmill test. Heart rate variability was measured by 7 min electrocardiographic recording. The Wingate test was conducted to determine anaerobic physical performance. [Results] There were significant differences in minimum power and relative minimum power between the triathletes and cyclists. Anthropometric characteristics and heart rate variability responses were similar among the triathletes and cyclists. However, triathletes had higher maximal oxygen consumption and lower resting heart rates. This study demonstrated that athletes in both sports have similar body composition and aerobic performance characteristics. PMID:27190476
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or recordsto be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or recordsto be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Code of Federal Regulations, 2012 CFR
2012-04-01
... of the PHA's quality control sample is as follows: Universe Minimum number of files or recordsto be... universe is: the number of admissions in the last year for each of the two quality control samples under...
Benthic macroinvertebrate field sampling effort required to ...
This multi-year pilot study evaluated a proposed field method for its effectiveness in the collection of a benthic macroinvertebrate sample adequate for use in the condition assessment of streams and rivers in the Neuquén Province, Argentina. A total of 13 sites, distributed across three rivers, were sampled. At each site, benthic macroinvertebrates were collected at 11 transects. Each sample was processed independently in the field and laboratory. Based on a literature review and resource considerations, the collection of 300 organisms (minimum) at each site was determined to be necessary to support a robust condition assessment, and therefore, selected as the criterion for judging the adequacy of the method. This targeted number of organisms was collected at all sites, at a minimum, when collections from all 11 transects were combined. Subsequent bootstrapping analysis of data was used to estimate whether collecting at fewer transects would reach the minimum target number of organisms for all sites. In a subset of sites, the total number of organisms frequently fell below the target when fewer than 11 transects collections were combined.Site conditions where <300 organisms might be collected are discussed. These preliminary results suggest that the proposed field method results in a sample that is adequate for robust condition assessment of the rivers and streams of interest. When data become available from a broader range of sites, the adequacy of the field
Robust Means and Covariance Matrices by the Minimum Volume Ellipsoid (MVE).
ERIC Educational Resources Information Center
Blankmeyer, Eric
P. Rousseeuw and A. Leroy (1987) proposed a very robust alternative to classical estimates of mean vectors and covariance matrices, the Minimum Volume Ellipsoid (MVE). This paper describes the MVE technique and presents a BASIC program to implement it. The MVE is a "high breakdown" estimator, one that can cope with samples in which as…
Generating a Simulated Fluid Flow over a Surface Using Anisotropic Diffusion
NASA Technical Reports Server (NTRS)
Rodriguez, David L. (Inventor); Sturdza, Peter (Inventor)
2016-01-01
A fluid-flow simulation over a computer-generated surface is generated using a diffusion technique. The surface is comprised of a surface mesh of polygons. A boundary-layer fluid property is obtained for a subset of the polygons of the surface mesh. A gradient vector is determined for a selected polygon, the selected polygon belonging to the surface mesh but not one of the subset of polygons. A maximum and minimum diffusion rate is determined along directions determined using the gradient vector corresponding to the selected polygon. A diffusion-path vector is defined between a point in the selected polygon and a neighboring point in a neighboring polygon. An updated fluid property is determined for the selected polygon using a variable diffusion rate, the variable diffusion rate based on the minimum diffusion rate, maximum diffusion rate, and the gradient vector.
The application of improved neural network in hydrocarbon reservoir prediction
NASA Astrophysics Data System (ADS)
Peng, Xiaobo
2013-03-01
This paper use BP neural network techniques to realize hydrocarbon reservoir predication easier and faster in tarim basin in oil wells. A grey - cascade neural network model is proposed and it is faster convergence speed and low error rate. The new method overcomes the shortcomings of traditional BP neural network convergence slow, easy to achieve extreme minimum value. This study had 220 sets of measured logging data to the sample data training mode. By changing the neuron number and types of the transfer function of hidden layers, the best work prediction model is analyzed. The conclusion is the model which can produce good prediction results in general, and can be used for hydrocarbon reservoir prediction.
Study of coatings for improved fire and decay resistance of mine timbers
NASA Technical Reports Server (NTRS)
Baum, B.
1977-01-01
The purpose of this program was to find a fire- and rot-retardant polymer/fungicide reaction product for coating mine timbers. Fire-retardant polymers were screened as films and coatings on fir wood. Curable polyimide appeared to be flame retardant and evolved a minimum of fumes when exposed to a flame. Several organic and metal, low toxicity, fungicides were reacted with the polyimide in-situ on the wood. These coated samples were screened for fungus resistance. All formulations rated well - even the polyimide film without additives was fungicidal. The fir wood control itself resisted internal damage during the ten weeks of fungus exposure. A more severe test for fungus resistance will be required.
NASA Astrophysics Data System (ADS)
Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki
The paper shows a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both of the multithreading are optimized for two kinds of tasks on wireless sensor networks, and those are real-time tasks and best-effort ones. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates the hybrid multithreading realizes compactness and low overheads, which are comparable to those of TinyOS, through quantitative evaluation. The evaluation results show PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).
An important issue surrounding assessment of riverine fish assemblages is the minimum amount of sampling distance needed to adequately determine biotic condition. Determining adequate sampling distance is important because sampling distance affects estimates of fish assemblage c...
Xirasagar, Sudha; Stoskopf, Carleen H; Shrader, William R; Glover, Saundra H
2004-01-01
This paper presents a qualitative analysis of states' small group health insurance reforms that impact small group premiums, mostly enacted by the states during 1996-99, following the federal Health Insurance Portability and Accountability Act in 1996. It draws from an intensive review of statutes of 48 states and the District of Columbia as of 1999. It analyses regulations related to insurer pricing and rating practices concerning rating criteria and rating bands, pricing incentives, premium stability from year to year, minimum loss rations, reinsurance and carve-out coverage for the medically uninsurable. It also covers regulations targeting employer purchasing and coverage practices such as pooled purchasing and adverse selection. This is the second of a two-part series analyzing states' small group market reforms, the first being devoted to state reforms to promote access and improving the value of health plans offered in this market (Xirasagar et al. 2004). The variety in pricing and rating reforms illustrate the differences in the depth of reforms across states, and represent a far wider range of potential actuarial combinations than the sample of reforms documented in past literature.
Struwe, Weston B; Agravat, Sanjay; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P; Costello, Catherine E; Dell, Anne; Ten Feizi; Haslam, Stuart M; Karlsson, Niclas G; Khoo, Kay-Hooi; Kolarich, Daniel; Liu, Yan; McBride, Ryan; Novotny, Milos V; Packer, Nicolle H; Paulson, James C; Rapp, Erdmann; Ranzinger, Rene; Rudd, Pauline M; Smith, David F; Tiemeyer, Michael; Wells, Lance; York, William S; Zaia, Joseph; Kettner, Carsten
2016-09-01
The minimum information required for a glycomics experiment (MIRAGE) project was established in 2011 to provide guidelines to aid in data reporting from all types of experiments in glycomics research including mass spectrometry (MS), liquid chromatography, glycan arrays, data handling and sample preparation. MIRAGE is a concerted effort of the wider glycomics community that considers the adaptation of reporting guidelines as an important step towards critical evaluation and dissemination of datasets as well as broadening of experimental techniques worldwide. The MIRAGE Commission published reporting guidelines for MS data and here we outline guidelines for sample preparation. The sample preparation guidelines include all aspects of sample generation, purification and modification from biological and/or synthetic carbohydrate material. The application of MIRAGE sample preparation guidelines will lead to improved recording of experimental protocols and reporting of understandable and reproducible glycomics datasets. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Horner, Katy M; Byrne, Nuala M; King, Neil A
2014-10-01
To determine whether changes in appetite and energy intake (EI) can be detected and play a role in the effectiveness of interventions, it is necessary to identify their variability under normal conditions. We assessed the reproducibility of subjective appetite ratings and ad libitum test meal EI after a standardised pre-load in overweight and obese males. Fifteen overweight and obese males (BMI 30.3 ± 4.9 kg/m(2), aged 34.9 ± 10.6 years) completed two identical test days, 7 days apart. Participants were provided with a standardised fixed breakfast (1676 kJ) and 5 h later an ad libitum pasta lunch. An electronic appetite rating system was used to assess subjective ratings before and after the fixed breakfast, and periodically during the postprandial period. EI was assessed at the ad libitum lunch meal. Sample size estimates for paired design studies were calculated. Appetite ratings demonstrated a consistent oscillating pattern between test days, and were more reproducible for mean postprandial than fasting ratings. The correlation between ad libitum EI on the two test days was r = 0.78 (P <0.01). Using a paired design and a power of 0.8, a minimum of 12 participants would be needed to detect a 10 mm change in 5 h postprandial mean ratings and 17 to detect a 500 kJ difference in ad libitum EI. Intra-individual variability of appetite and ad libitum test meal EI in overweight and obese males is comparable to previous reports in normal weight adults. Sample size requirements for studies vary depending on the parameter of interest and sensitivity needed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Treatment of unicameral bone cyst: systematic review and meta analysis.
Kadhim, Muayad; Thacker, Mihir; Kadhim, Amjed; Holmes, Laurens
2014-03-01
Different treatment modalities have been utilized to treat unicameral bone cyst (UBC), but evidence has not been fully described to support one treatment over another and the optimal treatment is controversial. The aim of this quantitative systematic review was to assess the effectiveness of different UBC treatment modalities. We utilized Pubmed to isolate retrospective studies on patients with UBC who received any kind of treatment. The included studies needed to have a minimum sample size of 15 patients, and have provided data on radiographic healing outcome. Sixty-two articles were selected for the meta-analysis from a total of 463 articles. The cumulative sample size was 3,211 patients with 3,217 UBC, and male to female ratio was 2.2:1. The summary or pool estimate of methylprednisolone acetate (MPA) injection resulted in a healing rate of (77.4 %) that was comparable to bone marrow injection (77.9 %). A higher healing rate was observed with MPA injection when inner wall disruption was performed. The pool estimate of bone marrow with demineralized bone matrix injection was high (98.7 %). UBC healing rate after surgical curettage was comparable whether autograft or allograft was utilized (90 %). UBC treatment with flexible intramedullary nails without curettage provided almost 100% healing rate, while continuous decompression with cannulated screws provided 89 % healing rate. Conservative treatment indicated a healing rate of 64.2, 95 % CI (26.7-101.8). Active treatment for UBC provided variable healing rates and the outcomes were favorable relative to conservative treatment. Due to the heterogeneity of the studies and reporting bias, the interpretation of these findings should be handled with caution.
Koehler, Kari; Center, Alyson; Cavender-Bares, Jeannine
2012-02-01
• It has long been hypothesized that species are limited to the north by minimum temperature and to the south by competition, resulting in a trade-off between freezing tolerance and growth rate. We investigated the extent to which the climatic origins of populations from four live oak species (Quercus series Virentes) were associated with freezing tolerance and growth rate, and whether species fitted a model of locally adapted populations, each with narrow climatic tolerances, or of broadly adapted populations with wide climatic tolerances. • Acorns from populations of four species across a tropical-temperate gradient were grown under common tropical and temperate conditions. Growth rate, seed mass, and leaf and stem freezing traits were compared with source minimum temperatures. • Maximum growth rates under tropical conditions were negatively correlated with freezing tolerance under temperate conditions. The minimum source temperature predicted the freezing tolerance of populations under temperate conditions. The tropical species Q. oleoides was differentiated from the three temperate species, and variation among species was greater than among populations. • The trade-off between freezing tolerance and growth rate supports the range limit hypothesis. Limited variation within species indicates that the distributions of species may be driven more strongly by broad climatic factors than by highly local conditions. © 2011 The Authors. New Phytologist © 2011 New Phytologist Trust.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Furutani, Eiko; Nishigaki, Yuki; Kanda, Chiaki; Takeda, Toshihiro; Shirakami, Gotaro
2013-01-01
This paper proposes a novel hypnosis control method using Auditory Evoked Potential Index (aepEX) as a hypnosis index. In order to avoid side effects of an anesthetic drug, it is desirable to reduce the amount of an anesthetic drug during surgery. For this purpose many studies of hypnosis control systems have been done. Most of them use Bispectral Index (BIS), another hypnosis index, but it has problems of dependence on anesthetic drugs and nonsmooth change near some particular values. On the other hand, aepEX has an ability of clear distinction between patient consciousness and unconsciousness and independence of anesthetic drugs. The control method proposed in this paper consists of two elements: estimating the minimum effect-site concentration for maintaining appropriate hypnosis and adjusting infusion rate of an anesthetic drug, propofol, using model predictive control. The minimum effect-site concentration is estimated utilizing the property of aepEX pharmacodynamics. The infusion rate of propofol is adjusted so that effect-site concentration of propofol may be kept near and always above the minimum effect-site concentration. Simulation results of hypnosis control using the proposed method show that the minimum concentration can be estimated appropriately and that the proposed control method can maintain hypnosis adequately and reduce the total infusion amount of propofol.
Brinda, John C; Stark, Lloyd R; Clark, Theresa A; Greenwood, Joshua L
2016-01-01
Embryonic sporophytes of the moss Aloina ambigua are inducibly desiccation tolerant (DT). Hardening to DT describes a condition of temporary tolerance to a rapid-drying event conferred by a previous slow-drying event. This paper aimed to determine whether sporophytic embryos of a moss can be hardened to DT, to assess how the rate of desiccation influences the post-rehydration dynamics of recovery, hardening and dehardening, and to determine the minimum rate of drying for embryos and shoots. Embryos were exposed to a range of drying rates using wetted filter paper in enclosed Petri dishes, monitoring relative humidity (RH) inside the dish and equilibrating tissues with 50% RH. Rehydrated embryos and shoots were subjected to a rapid-drying event at intervals, allowing assessments of recovery, hardening and dehardening times. The minimum rate of slow drying for embryonic survival was ∼3·5 h and for shoots ∼9 h. Hardening to DT was dependent upon the prior rate of drying. When the rate of drying was extended to 22 h, embryonic hardening was strong (>50% survival) with survival directly proportional to the post-rehydration interval preceding rapid drying. The recovery time (repair/reassembly) was so short as to be undetectable in embryos and shoots desiccated gradually; however, embryos dried in <3·5 h exhibited a lag time in development of ∼4 d, consistent with recovery. Dehardening resulted in embryos incapable of surviving a rapid-drying event. The ability of moss embryos to harden to DT and the influence of prior rate of drying on the dynamics of hardening are shown for the first time. The minimum rate of drying is introduced as a new metric for assessing ecological DT, defined as the minimum duration at sub-turgor during a drying event in which upon rehydration the plant organ of interest survives relatively undamaged from the desiccating event. © The Author 2015. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Brinda, John C.; Stark, Lloyd R.; Clark, Theresa A.; Greenwood, Joshua L.
2016-01-01
Background and Aims Embryonic sporophytes of the moss Aloina ambigua are inducibly desiccation tolerant (DT). Hardening to DT describes a condition of temporary tolerance to a rapid-drying event conferred by a previous slow-drying event. This paper aimed to determine whether sporophytic embryos of a moss can be hardened to DT, to assess how the rate of desiccation influences the post-rehydration dynamics of recovery, hardening and dehardening, and to determine the minimum rate of drying for embryos and shoots. Methods Embryos were exposed to a range of drying rates using wetted filter paper in enclosed Petri dishes, monitoring relative humidity (RH) inside the dish and equilibrating tissues with 50 % RH. Rehydrated embryos and shoots were subjected to a rapid-drying event at intervals, allowing assessments of recovery, hardening and dehardening times. Key Results The minimum rate of slow drying for embryonic survival was ∼3·5 h and for shoots ∼9 h. Hardening to DT was dependent upon the prior rate of drying. When the rate of drying was extended to 22 h, embryonic hardening was strong (>50 % survival) with survival directly proportional to the post-rehydration interval preceding rapid drying. The recovery time (repair/reassembly) was so short as to be undetectable in embryos and shoots desiccated gradually; however, embryos dried in <3·5 h exhibited a lag time in development of ∼4 d, consistent with recovery. Dehardening resulted in embryos incapable of surviving a rapid-drying event. Conclusions The ability of moss embryos to harden to DT and the influence of prior rate of drying on the dynamics of hardening are shown for the first time. The minimum rate of drying is introduced as a new metric for assessing ecological DT, defined as the minimum duration at sub-turgor during a drying event in which upon rehydration the plant organ of interest survives relatively undamaged from the desiccating event. PMID:26354931
Seven-Day Low Streamflows in the United States, 1940-2014
This map shows percentage changes in the minimum annual rate of water carried by rivers and streams across the country, based on the long-term rate of change from 1940 to 2014. Minimum streamflow is based on the consecutive seven-day period with the lowest average flow during a given year. Blue triangles represent an increase in low stream flow volumes, and brown triangles represent a decrease. Streamflow data were collected by the U.S. Geological Survey. For more information: www.epa.gov/climatechange/science/indicators
Code of Federal Regulations, 2010 CFR
2010-01-01
... sheet interest rate and foreign exchange rate contracts: a. Interest Rate Contracts i. Single currency... Contracts i. Cross-currency interest rate swaps. ii. Forward foreign exchange rate contracts. iii. Currency... exposure is zero. Mark-to-market values are measured in United States dollars, regardless of the currency...
Liu, Shaorong; Elkin, Christopher; Kapur, Hitesh
2003-11-01
We describe a microfabricated hybrid device that consists of a microfabricated chip containing multiple twin-T injectors attached to an array of capillaries that serve as the separation channels. A new fabrication process was employed to create two differently sized round channels in a chip. Twin-T injectors were formed by the smaller round channels that match the bore of the separation capillaries and separation capillaries were incorporated to the injectors through the larger round channels that match the outer diameter of the capillaries. This allows for a minimum dead volume and provides a robust chip/capillary interface. This hybrid design takes full advantage, such as sample stacking and purification and uniform signal intensity profile, of the unique chip injection scheme for DNA sequencing while employing long straight capillaries for the separations. In essence, the separation channel length is optimized for both speed and resolution since it is unconstrained by chip size. To demonstrate the reliability and practicality of this hybrid device, we sequenced over 1000 real-world samples from Human Chromosome 5 and Ciona intestinalis, prepared at Joint Genome Institute. We achieved average Phred20 read of 675 bases in about 70 min with a success rate of 91%. For the similar type of samples on MegaBACE 1000, the average Phred20 read is about 550-600 bases in 120 min separation time with a success rate of about 80-90%.
Detection of Nosema bombycis by FTA cards and loop-mediated isothermal amplification (LAMP).
Yan, Wei; Shen, Zhongyuan; Tang, Xudong; Xu, Li; Li, Qianlong; Yue, Yajie; Xiao, Shengyan; Fu, Xuliang
2014-10-01
We successfully established a detection method which exhibited a markedly higher sensitivity than previously developed detection methods for Nosema bombycis by combining glass beads, FTA card, and LAMP. Spores of N. bombycis were first broken by acid-washed glass beads; the DNA was subsequently extracted and purified with the FTA card, and LAMP was performed using primers (LSU296) designed based on the sequence of the LSU rRNA of N. bombycis. The minimum detection concentration was 10 spores/mL. When this method was used to detect pebrine disease in silkworm egg, the detection rate for 500 silkworm eggs, in which only one egg was infected with N. bombycis, was 100 % under our optimized conditions. If the number of eggs in the sample increased to 800 or 1,000, the sample was divided into two equal portions, and the eggs were smashed with glass beads after the addition of 1 mL of TE buffer. The liquid in two tubes was later mixed and applied to the FTA card, and the detection rates were 100 %. Furthermore, the LAMP method established in our study could detect N. bombycis infection in silkworm 24 h earlier than microscopy.
Gender differences in treatment progress of drug-addicted patients.
Fernández-Montalvo, Javier; López-Goñi, José J; Azanza, Paula; Arteaga, Alfonso; Cacho, Raúl
2017-03-01
The authors of this study explored the differences in treatment progress between men and women who were addicted to drugs. The differential rate of completion of/dropout from treatment in men and women with substance dependence was established. Moreover, comparisons between completers and dropouts, accounting for gender, were carried out for several variables related to treatment progress and clinical profile. A sample of 183 addicted patients (96 male and 87 female) who sought outpatient treatment between 2002 and 2006 was assessed. Information on socio-demographic, consumption, and associated characteristics was collected. A detailed tracking of each patient's progress was maintained for a minimum period of 8 years to assess treatment progression. The treatment dropout rate in the whole sample was 38.8%, with statistically significant differences between women (47.1%) and men (31.3%). Women who dropped out of treatment presented a more severe profile in most of the psychopathologic variables than women who completed it. Moreover, women who dropped out from treatment presented a more severe profile than men who dropped out. According to these results, drug-addicted women showed worse therapeutic progress than men with similar histories. Thus, women must be provided with additional targeted intervention to promote better treatment outcomes.
Ellipsoids for anomaly detection in remote sensing imagery
NASA Astrophysics Data System (ADS)
Grosklos, Guenchik; Theiler, James
2015-05-01
For many target and anomaly detection algorithms, a key step is the estimation of a centroid (relatively easy) and a covariance matrix (somewhat harder) that characterize the background clutter. For a background that can be modeled as a multivariate Gaussian, the centroid and covariance lead to an explicit probability density function that can be used in likelihood ratio tests for optimal detection statistics. But ellipsoidal contours can characterize a much larger class of multivariate density function, and the ellipsoids that characterize the outer periphery of the distribution are most appropriate for detection in the low false alarm rate regime. Traditionally the sample mean and sample covariance are used to estimate ellipsoid location and shape, but these quantities are confounded both by large lever-arm outliers and non-Gaussian distributions within the ellipsoid of interest. This paper compares a variety of centroid and covariance estimation schemes with the aim of characterizing the periphery of the background distribution. In particular, we will consider a robust variant of the Khachiyan algorithm for minimum-volume enclosing ellipsoid. The performance of these different approaches is evaluated on multispectral and hyperspectral remote sensing imagery using coverage plots of ellipsoid volume versus false alarm rate.
30 CFR 77.606-1 - Rubber gloves; minimum requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COAL MINES Trailing Cables § 77.606-1 Rubber gloves; minimum requirements. (a) Rubber gloves (lineman's gloves) worn while handling high-voltage trailing cables shall be rated at least 20,000 volts and shall... gloves (wireman's gloves) worn while handling trailing cables energized by 660 to 1,000 volts shall be...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.49 - Adjustment of benefits under family maximum for change in family group.
Code of Federal Regulations, 2011 CFR
2011-04-01
... for change in family group. 229.49 Section 229.49 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.49 Adjustment of benefits under family maximum for change in family group. (a...
ERIC Educational Resources Information Center
Manset, Genevieve; Washburn, Sandra J.
2000-01-01
This article reviews the research related to minimum competency testing (MCT) as a requirement for high school graduation for students with learning disabilities. It examines whether inclusive MCT requirements lead to positive educational outcomes, raises issues of accommodations and alternative diplomas, possible increased dropout rates, and…
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
20 CFR 229.53 - Reduction for social security benefits on employee's wage record.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for social security benefits on... UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.53 Reduction for social security benefits on employee's wage record. The total annuity...
Final Report on Minimum Work Expectations of Recent [Nursing] Graduates.
ERIC Educational Resources Information Center
Scott, Robert E.
To determine the importance of job tasks and/or activities for the nurse aide, the licensed practical nurse (LPN), and the associate degree nurse (ADN), nursing instructors, LPNs and employers were surveyed in Kansas in 1978 using a minimum work behavior expectation instrument. Respondents were asked to rate approximately 200 discrete job tasks…
García-Diego, Fernando-Juan; Verticchio, Elena; Beltrán, Pedro; Siani, Anna Maria
2016-08-15
Monitoring temperature and relative humidity of the environment to which artefacts are exposed is fundamental in preventive conservation studies. The common approach in setting measuring instruments is the choice of a high sampling rate to detect short fluctuations and increase the accuracy of statistical analysis. However, in recent cultural heritage standards the evaluation of variability is based on moving average and short fluctuations and therefore massive acquisition of data in slowly-changing indoor environments could end up being redundant. In this research, the sampling frequency to set a datalogger in a museum room and inside a microclimate frame is investigated by comparing the outcomes obtained from datasheets associated with different sampling conditions. Thermo-hygrometric data collected in the Sorolla room of the Pio V Museum of Valencia (Spain) were used and the widely consulted recommendations issued in UNI 10829:1999 and EN 15757:2010 standards and in the American Society of Heating, Air-Conditioning and Refrigerating Engineers (ASHRAE) guidelines were applied. Hourly sampling proved effective in obtaining highly reliable results. Furthermore, it was found that in some instances daily means of data sampled every hour can lead to the same conclusions as those of high frequency. This allows us to improve data logging design and manageability of the resulting datasheets.