ERIC Educational Resources Information Center
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing power analysis tools by offering a tool, based on the framework of Minimum Detectable Effect Size (MDES) formulae, that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and…
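The abstract does not reproduce the MDES formulae themselves. As a hedged illustration only, the sketch below uses the standard large-sample formula for an individually randomized design, MDES = M · sqrt((1 − R²) / (P(1 − P)·n)), with the multiplier M approximated by z₁₋α/₂ + z₁₋β; the function name and defaults are assumptions, not taken from the tool:

```python
from math import sqrt
from statistics import NormalDist

def mdes_individual(n, p_treat=0.5, r2=0.0, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect size (in standard-deviation
    units) for an individually randomized design, using the large-sample
    normal multiplier z_(1-alpha/2) + z_(power) (about 2.80 for defaults).

    n       -- total sample size
    p_treat -- proportion assigned to treatment
    r2      -- variance explained by covariates
    """
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)
    return multiplier * sqrt((1 - r2) / (p_treat * (1 - p_treat) * n))
```

For example, with n = 400 and equal allocation, the detectable effect is about 0.28 standard deviations; halving the sample size inflates the MDES by a factor of sqrt(2).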
40 CFR 1042.310 - Engine selection for Category 1 and Category 2 engines.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Category 2 engines. (a) Determine minimum sample sizes as follows: (1) For Category 1 engines, the minimum sample size is one engine or one percent of the projected U.S.-directed production volume for all your Category 1 engine families, whichever is greater. (2) For Category 2 engines, the minimum sample size is...
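The Category 1 rule quoted above reduces to a one-line calculation. Rounding the 1-percent figure up to a whole engine is an assumption here, since the truncated excerpt does not state the rounding convention:

```python
from math import ceil

def min_sample_size_category1(projected_us_volume):
    """Category 1 engines: the greater of one engine or 1 percent of the
    projected U.S.-directed production volume across all Category 1
    engine families (rounding up assumed)."""
    return max(1, ceil(0.01 * projected_us_volume))
```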
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, we examined the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and minimum sample sizes.
Exploratory Factor Analysis with Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.
2009-01-01
Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-05-01
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, a hospital database or case registry); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in the classifiers, and these cannot be reliably estimated from only a small number of training samples. It is therefore of vital importance to determine the minimum number of training samples needed to ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into routine clinical applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS: AICc, Akaike's Information Criterion with small-sample-size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater…; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic… Reliable parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities, and low capture biases. For NGS-CR, sample…
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency, and economy of the snail survey, a 50 m × 50 m experimental field was selected in the Chayegang marshland near Henghu Farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling, and stratified random sampling methods were applied to calculate the minimum sample size, relative sampling error, and absolute sampling error. The minimum sample sizes for the simple random, systematic, and stratified random sampling methods were 300, 300, and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024, and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, with lower cost and higher precision, for the snail survey.
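The abstract does not reproduce the sample-size formulas used. A common textbook version for simple random sampling of a proportion, with an optional finite-population correction, might look like the following sketch (all names and defaults are illustrative assumptions, not taken from the study):

```python
from math import ceil
from statistics import NormalDist

def srs_min_sample_size(p, d, population=None, conf=0.95):
    """Minimum sample size for estimating a proportion p with absolute
    error d under simple random sampling; applies the finite-population
    correction when a population size is supplied."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n0 = z**2 * p * (1 - p) / d**2          # infinite-population size
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return ceil(n0)
```

With p = 0.5 and d = 0.05, the classic answer of 385 falls out; supplying a finite population shrinks the requirement.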
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, of which two were important (i.e., directly related to the response) and two were not. We explored performance under three relationship strengths and two explanatory-variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables, and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, the one-standard-error rule was more likely than the other tree-selection rules to choose the correct model: 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
Post-stratified estimation: with-in strata and total sample size recommendations
James A. Westfall; Paul L. Patterson; John W. Coulston
2011-01-01
Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but detected a significantly higher number of thrips larvae than did visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
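Density-dependent minimum sample sizes of this kind typically come from Taylor's power law: n = (a / D²) · m^(b−2), where m is mean density, D the fixed precision (SE/mean), and a, b the power-law coefficients. The coefficients below are NOT from the paper; they are back-solved from the two visual-sampling figures quoted in the abstract (n = 15 at m = 1, n = 9 at m = 10) purely to illustrate the shape of the relationship:

```python
from math import log10

# Illustrative coefficients back-solved from the abstract's two data
# points; the study's fitted Taylor coefficients are not reported here.
A = 0.9375               # hypothetical Taylor intercept
B = 2 + log10(9 / 15)    # hypothetical Taylor slope (about 1.78)

def min_sample_size(mean_density, precision=0.25, a=A, b=B):
    """Minimum number of plants to sample at a given mean pest density
    for a fixed-precision (SE/mean) enumerative sampling plan."""
    return (a / precision ** 2) * mean_density ** (b - 2)
```

Because b < 2 for an aggregated population, the required sample size falls as density rises, matching the trend reported for both pests.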
NASA Technical Reports Server (NTRS)
Allton, J. H.; Bevill, T. J.
2003-01-01
The strategy of raking rock fragments from the lunar regolith as a means of acquiring representative samples has wide support due to science return, spacecraft simplicity (reliability), and economy [3, 4, 5]. While there is widespread agreement that raking or sieving the bulk regolith is a good strategy, there is lively discussion about the minimum sample size. Advocates of consortium studies desire fragments large enough to support petrologic and isotopic studies. Fragments from 5 to 10 mm are thought adequate [4, 5]. Yet, Jolliff et al. [6] demonstrated use of 2-4 mm fragments as representative of larger rocks. Here we make use of curatorial records and sample catalogs to give a different perspective on minimum sample size for a robotic sample collector.
(I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.
van Rijnsoever, Frank J
2017-01-01
I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.
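The "random chance" scenario described above can be sketched as a small Monte-Carlo simulation, under the simplifying assumption that each sampled information source expresses each code independently with a fixed probability (this is a sketch of the idea, not the author's actual simulation code):

```python
import random

def saturation_sample_size(code_probs, trials=200, seed=1):
    """Monte-Carlo estimate of the mean number of information sources
    that must be sampled at random before every code in `code_probs`
    has been observed at least once (theoretical saturation)."""
    rng = random.Random(seed)
    sizes = []
    for _ in range(trials):
        unseen = set(range(len(code_probs)))
        n = 0
        while unseen:
            n += 1
            # each source expresses each code with its own probability
            for code, p in enumerate(code_probs):
                if p > rng.random():
                    unseen.discard(code)
        sizes.append(n)
    return sum(sizes) / len(sizes)
```

Consistent with the abstract's finding, raising the mean probability of observing codes shrinks the required sample far more than changing the number of codes does.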
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. The tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power, and effect size. How to use the tables is also discussed. PMID:27891446
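As a hedged illustration of the kind of calculation behind such tables, the sketch below uses the widely cited Buderer-style formula: the number of diseased subjects needed so that estimated sensitivity has a desired confidence-interval half-width, inflated by prevalence to a total sample size. The exact formulation used to build the PASS tables may differ:

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(se, prevalence, d=0.05, alpha=0.05):
    """Total sample size so that the estimated sensitivity `se` has a
    confidence interval of half-width `d`; the required number of
    diseased subjects is scaled up by the disease prevalence."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_cases = z**2 * se * (1 - se) / d**2   # diseased subjects needed
    return ceil(n_cases / prevalence)
```

An analogous function for specificity would divide by (1 − prevalence) instead, since it is driven by the non-diseased subjects.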
Small-Sample DIF Estimation Using SIBTEST, Cochran's Z, and Log-Linear Smoothing
ERIC Educational Resources Information Center
Lei, Pui-Wa; Li, Hongli
2013-01-01
Minimum sample sizes of about 200 to 250 per group are often recommended for differential item functioning (DIF) analyses. However, there are times when sample sizes for one or both groups of interest are smaller than 200 due to practical constraints. This study attempts to examine the performance of Simultaneous Item Bias Test (SIBTEST),…
Peel, Joanne R; Mandujano, María del Carmen
2014-12-01
The queen conch Strombus gigas represents one of the most important fishery resources of the Caribbean but heavy fishing pressure has led to the depletion of stocks throughout the region, causing the inclusion of this species into CITES Appendix II and IUCN's Red-List. In Mexico, the queen conch is managed through a minimum fishing size of 200 mm shell length and a fishing quota which usually represents 50% of the adult biomass. The objectives of this study were to determine the intrinsic population growth rate of the queen conch population of Xel-Ha, Quintana Roo, Mexico, and to assess the effects of a regulated fishing impact, simulating the extraction of 50% adult biomass on the population density. We used three different minimum size criteria to demonstrate the effects of minimum catch size on the population density and discuss biological implications. Demographic data was obtained through capture-mark-recapture sampling, collecting all animals encountered during three hours, by three divers, at four different sampling sites of the Xel-Ha inlet. The conch population was sampled each month between 2005 and 2006, and bimonthly between 2006 and 2011, tagging a total of 8,292 animals. Shell length and lip thickness were determined for each individual. The average shell length for conch with formed lip in Xel-Ha was 209.39 ± 14.18 mm and the median 210 mm. Half of the sampled conch with lip ranged between 200 mm and 219 mm shell length. Assuming that the presence of the lip is an indicator for sexual maturity, it can be concluded that many animals may form their lip at greater shell lengths than 200 mm and ought to be considered immature. Estimation of relative adult abundance and densities varied greatly depending on the criteria employed for adult classification. 
When using a minimum fishing size of 200 mm shell length, between 26.2% and 54.8% of the population qualified as adults, which represented a simulated fishing impact of almost one third of the population. When conch extraction was simulated using a classification criterion based on lip thickness, it had a much smaller impact on the population density. We concluded that the best management strategy for S. gigas is a minimum fishing size based on lip thickness, since it has a lower impact on the population density, and given that selective fishing pressure based on size may lead to the appearance of small adult individuals with reduced fecundity. Furthermore, based on the reproductive biology and the results of the simulated fishing, we suggest a minimum lip thickness of ≥ 15 mm, which ensures the protection of reproductive stages and reduces the risk of overfishing leading to non-viable density reductions.
Zelt, Ronald B.; Hobza, Christopher M.; Burton, Bethany L.; Schaepe, Nathaniel J.; Piatak, Nadine
2017-11-16
Sediment management is a challenge faced by reservoir managers who have several potential options, including dredging, for mitigation of storage capacity lost to sedimentation. As sediment is removed from reservoir storage, use of the sediment for socioeconomic or ecological benefit could potentially defray some costs of its removal. Rivers that transport a sandy sediment load will deposit the sand load along a reservoir-headwaters reach, where the current of the river slackens progressively as its bed approaches and then descends below the reservoir water level. Given a rare combination of factors, a reservoir deposit of alluvial sand has potential to be suitable for use as proppant for hydraulic fracturing in unconventional oil and gas development. In 2015, the U.S. Geological Survey began a program of researching potential sources of proppant sand from reservoirs, with an initial focus on the Missouri River subbasins that receive sand loads from the Nebraska Sand Hills. This report documents the methods and results of assessments of the suitability of river delta sediment as proppant for a pilot study area in the delta headwaters of Lewis and Clark Lake, Nebraska and South Dakota. Results from surface-geophysical surveys of electrical resistivity guided borings to collect 3.7-meter-long cores at 25 sites on delta sandbars using the direct-push method to recover duplicate, 3.8-centimeter-diameter cores in April 2015. In addition, the U.S. Geological Survey collected samples of upstream sand sources in the lower Niobrara River valley. At the laboratory, samples were dried, weighed, washed, dried, and weighed again. Exploratory analysis of natural sand for determining its suitability as a proppant involved application of a modified subset of the standard protocols known as American Petroleum Institute (API) Recommended Practice (RP) 19C. The RP19C methods were not intended for exploration-stage evaluation of raw materials.
Results for the washed samples are not directly applicable to evaluations of suitability for use as fracture sand because, except for particle-size distribution, the API-recommended practices for assessing proppant properties (sphericity, roundness, bulk density, and crush resistance) require testing of specific proppant size classes. An optical imaging particle-size analyzer was used to make measurements of particle-size distribution and particle shape. Measured samples were sieved to separate the dominant-size fraction, and the separated subsample was further tested for roundness, sphericity, bulk density, and crush resistance. For the bulk washed samples collected from the Missouri River delta, the geometric mean size averaged 0.27 millimeters (mm), 80 percent of the samples were predominantly sand in the API 40/70 size class, and 17 percent were predominantly sand in the API 70/140 size class. Distributions of geometric mean size among the four sandbar complexes were similar, but samples collected from sandbar complex B were slightly coarser sand than those from the other three complexes. The average geometric mean sizes among the four sandbar complexes ranged only from 0.26 to 0.30 mm. For 22 main-stem sampling locations along the lower Niobrara River, geometric mean size averaged 0.26 mm, an average of 61 percent was sand in the API 40/70 size class, and 28 percent was sand in the API 70/140 size class. Average composition for lower Niobrara River samples was 48 percent medium sand, 37 percent fine sand, and about 7 percent each very fine sand and coarse sand fractions. On average, samples were moderately well sorted. Particle shape and strength were assessed for the dominant-size class of each sample. For proppant strength, crush resistance was tested at a predetermined level of stress (34.5 megapascals [MPa], or 5,000 pounds-force per square inch).
To meet the API minimum requirement for proppant, after the crush test not more than 10 percent of the tested sample should be finer than the precrush dominant-size class. For particle shape, all samples surpassed the recommended minimum criteria for sphericity and roundness, with most samples being well-rounded. For proppant strength, of 57 crush-resistance tested Missouri River delta samples of 40/70-sized sand, 23 (40 percent) were interpreted as meeting the minimum criterion at 34.5 MPa, or 5,000 pounds-force per square inch. Of 12 tested samples of 70/140-sized sand, 9 (75 percent) of the Missouri River delta samples had less than 10 percent fines by volume following crush testing, achieving the minimum criterion at 34.5 MPa. Crush resistance for delta samples was strongest at sandbar complex A, where 67 percent of tested samples met the 10-percent fines criterion at the 34.5-MPa threshold. This frequency was higher than was indicated by samples from sandbar complexes B, C, and D that had rates of 50, 46, and 42 percent, respectively. The group of sandbar complex A samples also contained the largest percentages of samples dominated by the API 70/140 size class, which overall had a higher percentage of samples meeting the minimum criterion compared to samples dominated by coarser size classes; however, samples from sandbar complex A that had the API 40/70 size class tested also had a higher rate for meeting the minimum criterion (57 percent) than did samples from sandbar complexes B, C, and D (50, 43, and 40 percent, respectively). For samples collected along the lower Niobrara River, of the 25 tested samples of 40/70-sized sand, 9 samples passed the API minimum criterion at 34.5 MPa, but only 3 samples passed the more-stringent criterion of 8 percent postcrush fines. 
All four tested samples of 70/140 sand passed the minimum criterion at 34.5 MPa, with a postcrush fines percentage of at most 4.1 percent. For two reaches of the lower Niobrara River, where hydraulic sorting was energized artificially by the hydraulic head drop at and immediately downstream from Spencer Dam, suitability of channel deposits for potential use as fracture sand was confirmed by test results. All reach A washed samples were well-rounded and had sphericity scores above 0.65, and samples for 80 percent of sampled locations met the crush-resistance criterion at the 34.5-MPa stress level. A conservative lower-bound estimate of sand volume in the reach A deposits was about 86,000 cubic meters. All reach B samples were well-rounded, but sphericity averaged 0.63, a little less than the average for upstream reaches A and SP. All four samples tested passed the crush-resistance test at 34.5 MPa. Of three reach B sandbars, two had no more than 3 percent fines after the crush test, surpassing more stringent criteria for crush resistance that accept a maximum of 6 percent fines following the crush test for the API 70/140 size class. Relative to the crush-resistance test results for the API 40/70 size fraction of two samples of mine output from Loup River settling-basin dredge spoils near Genoa, Nebr., four of five reach A sample locations compared favorably. The four samples had increases in fines composition of 1.6–5.9 percentage points, whereas fines in the two mine-output samples increased by an average 6.8 percentage points.
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable in cases of unequal variances, non-normality, and unequal sample sizes. Given the specified alpha and power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is smaller than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
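The paper's sample-size formulas themselves are not reproduced in the abstract. As background, the statistic those formulas are powered for, Yuen's trimmed-mean test, can be sketched as follows (a minimal implementation assuming 20% symmetric trimming; the paper's notation and formulas may differ):

```python
from math import sqrt

def yuen_statistic(x, y, trim=0.2):
    """Yuen's two-sample test statistic on trimmed means, using
    winsorized sums of squares for the standard error."""
    def trimmed_parts(a):
        a = sorted(a)
        g = int(trim * len(a))              # observations trimmed per tail
        t = a[g:len(a) - g]                 # trimmed sample
        tmean = sum(t) / len(t)
        if g:
            # winsorized sample: trimmed tails replaced by boundary values
            w = [a[g]] * g + t + [a[len(a) - g - 1]] * g
        else:
            w = list(a)
        wmean = sum(w) / len(w)
        ssw = sum((v - wmean) ** 2 for v in w)  # winsorized sum of squares
        h = len(a) - 2 * g                      # effective sample size
        return tmean, ssw / (h * (h - 1))
    m1, d1 = trimmed_parts(x)
    m2, d2 = trimmed_parts(y)
    return (m1 - m2) / sqrt(d1 + d2)
```

In practice the statistic is referred to a t distribution with Welch-style estimated degrees of freedom, which is where the unequal-variance robustness comes from.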
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
NASA Technical Reports Server (NTRS)
De, Salvo L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected values of consumer's risk and fraction nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the hypergeometric distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as MIL-STD-105E, use the binomial or Poisson equations as an estimate of the hypergeometric when performing inspection by attributes, primarily because of the difficulty of calculating the factorials required by the hypergeometric. Sampling plans based on the binomial or Poisson equations result in the maximum sample size possible with the hypergeometric, and the difference in sample sizes can be significant. For example, a lot of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) under a binomial sampling plan, but only 273 under a hypergeometric sampling plan. The hypergeometric thus yields a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, sample size determination is limited to sample sizes of 1500 or less.
The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
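The iterative idea described above, building up the zero-defect acceptance probability one draw at a time instead of evaluating large factorials, can be sketched for integer nonconformance counts (the real HYPERSAMP also handles non-whole "equivalent" nonconformances, which this sketch does not). The sketch reproduces the worked example from the abstract: a 400-unit lot, 1.0% error rate (4 units), and 99% confidence gives n = 273:

```python
def hypergeometric_min_n(lot_size, defectives, consumer_risk):
    """Smallest sample size n such that drawing zero nonconforming units
    demonstrates, at the given consumer's risk, that the lot does not
    contain `defectives` nonconforming units.

    P(0 defects in n draws) = C(N-D, n) / C(N, n), accumulated one
    factor at a time to avoid computing any factorials."""
    p_zero = 1.0
    for n in range(1, lot_size + 1):
        # probability that the n-th draw is also conforming
        p_zero *= (lot_size - defectives - n + 1) / (lot_size - n + 1)
        if p_zero <= consumer_risk:
            return n
    return lot_size
```

Relaxing the consumer's risk (e.g., 95% rather than 99% confidence) shrinks the required sample, which is the trade-off these plans are built around.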
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
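The stated threshold of 562 can be approximately reproduced with the standard two-sample z-approximation under the abstract's assumptions (σ = 20 mL/min, α = 0.05, power = 80%, 10% loss to follow-up); the authors' exact method likely differs slightly, for example in rounding or in using t rather than z:

```python
from math import ceil
from statistics import NormalDist

def total_trial_size(delta, sd=20.0, alpha=0.05, power=0.80, loss=0.10):
    """Approximate total two-arm sample size needed to detect a
    between-group GFR difference `delta` (mL/min), inflated for
    loss to follow-up. Large-sample z formula."""
    z = NormalDist().inv_cdf
    per_arm = 2 * (sd * (z(1 - alpha / 2) + z(power)) / delta) ** 2
    return ceil(2 * per_arm / (1 - loss))

total_trial_size(5)   # → 559, close to the 562 quoted in the abstract
```

The quadratic dependence on delta explains the paper's finding: quadrupling the detectable difference from 5 to 10 mL/min cuts the required total to roughly a quarter.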
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
ERIC Educational Resources Information Center
Zwick, Rebecca
2012-01-01
Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…
NASA Astrophysics Data System (ADS)
Saruwatari, Shunsuke; Suzuki, Makoto; Morikawa, Hiroyuki
The paper presents a compact hard real-time operating system for wireless sensor nodes called PAVENET OS. PAVENET OS provides hybrid multithreading: preemptive multithreading and cooperative multithreading. Both forms of multithreading are optimized for the two kinds of tasks found on wireless sensor networks: real-time tasks and best-effort tasks. PAVENET OS can efficiently perform hard real-time tasks that cannot be performed by TinyOS. The paper demonstrates through quantitative evaluation that hybrid multithreading achieves compactness and low overheads comparable to those of TinyOS. The evaluation results show that PAVENET OS performs 100 Hz sensor sampling with 0.01% jitter while performing wireless communication tasks, whereas optimized TinyOS has 0.62% jitter. In addition, PAVENET OS has a small footprint and low overheads (minimum RAM size: 29 bytes, minimum ROM size: 490 bytes, minimum task switch time: 23 cycles).
Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni
2017-12-01
The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about trait variability and minimizing sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITV_BI) and among populations (ITV_POP), relatively few studies have analyzed intraspecific variability within individuals (ITV_WI). Here, we provide an analysis of ITV_WI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline level of ITV_WI for the two traits and derived the minimum and optimal sampling sizes needed to account for ITV_WI, comparing our sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance in the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size needed to adequately capture the studied functional traits was 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss data by applying a signal threshold. Directed DOEPOD uses a nonparametric approach to the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
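The 90/95 criterion has a well-known binomial consequence (a standard result, not the sequential DOEPOD procedure itself): a zero-miss demonstration requires at least 29 detected flaws, because the exact lower 95% confidence bound on POD only reaches 0.90 once 0.90^n ≤ 0.05:

```python
# Standard binomial fact underlying 90/95 POD demonstrations: if all n
# flaws of a given size are detected (zero misses), the exact lower 95%
# confidence bound on POD exceeds 0.90 only once 0.90**n <= 0.05.
n = 1
while 0.90 ** n > 0.05:
    n += 1
print(n)  # prints 29: the minimum zero-miss sample size
```

This is why binomial POD demonstrations are sample-hungry, and why sequential procedures such as DOEPOD aim to minimize the number of specimens needed.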
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
T-Method is one of the techniques governed under the Mahalanobis Taguchi System, developed specifically for multivariate data prediction. Prediction using the T-Method is possible even with a very limited sample size. Users of the T-Method must clearly understand the population data trend, since the method does not consider the effect of outliers. Outliers may cause apparent non-normality, under which classical methods break down. Robust parameter estimates exist that provide satisfactory results both when the data contain outliers and when they are free of them; among these are the robust location and scale estimators known as Shamos-Bickel (SB) and Hodges-Lehmann (HL), which serve as counterparts to the classical mean and standard deviation. Embedding these estimators into the T-Method normalization stage may help enhance the accuracy of the T-Method and allows the robustness of the T-Method itself to be analyzed. In the higher-sample-size case study, the T-Method had the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB had the lowest error percentage (4.67%) on data without extreme outliers, with a minimal error difference relative to the T-Method. The trend in prediction error percentages was reversed in the lower-sample-size case study. The results show that with a minimum sample size, where the risk posed by outliers is low, the T-Method performs better; with a larger sample size containing extreme outliers, the T-Method likewise shows better prediction than the alternatives. For the case studies conducted in this research, normalization using the T-Method gave satisfactory results, and adapting HL and SB (or the ordinary mean and standard deviation) into it is not worthwhile, since doing so changes the error percentages only minimally. Normalization using the T-Method is still considered to carry lower risk with respect to outlier effects.
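For readers unfamiliar with the robust estimators named above, here is a minimal sketch of the Hodges-Lehmann location and Shamos-Bickel scale estimators using their standard textbook definitions (the data values are invented for illustration; this is not code from the study):

```python
from itertools import combinations
from statistics import median

def hodges_lehmann(x):
    """Hodges-Lehmann location: median of all pairwise averages.
    (Some definitions also include each point paired with itself.)"""
    return median((a + b) / 2 for a, b in combinations(x, 2))

def shamos_bickel(x):
    """Shamos-Bickel scale: median of all pairwise absolute differences,
    times ~1.048 for consistency with the normal standard deviation."""
    return 1.048 * median(abs(a - b) for a, b in combinations(x, 2))

data = [9.8, 10.1, 10.3, 9.9, 10.0, 42.0]  # invented data, one gross outlier
print(hodges_lehmann(data))   # stays near 10 despite the outlier
print(shamos_bickel(data))    # stays near 0.3 despite the outlier
```

Unlike the mean and standard deviation, both estimates are nearly unaffected by the single extreme value, which is the property the study exploits in the normalization stage.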
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of systems biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
Population demographics and genetic diversity in remnant and translocated populations of sea otters
Bodkin, James L.; Ballachey, Brenda E.; Cronin, M.A.; Scribner, K.T.
1999-01-01
The effects of small population size on genetic diversity and subsequent population recovery are theoretically predicted, but few empirical data are available to describe those relations. We use data from four remnant and three translocated sea otter (Enhydra lutris) populations to examine relations among magnitude and duration of minimum population size, population growth rates, and genetic variation. Mitochondrial (mt)DNA haplotype diversity was correlated with the number of years at minimum population size (r = -0.741, p = 0.038) and minimum population size (r = 0.709, p = 0.054). We found no relation between population growth and haplotype diversity, although growth was significantly greater in translocated than in remnant populations. Haplotype diversity in populations established from two sources was higher than in a population established from a single source and was higher than in the respective source populations. Haplotype frequencies in translocated populations of founding sizes of 4 and 28 differed from expected, indicating genetic drift and differential reproduction between source populations, whereas haplotype frequencies in a translocated population with a founding size of 150 did not. Relations between population demographics and genetic characteristics suggest that genetic sampling of source and translocated populations can provide valuable inferences about translocations.
50 CFR 648.83 - Multispecies minimum fish sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Multispecies minimum fish sizes. 648.83... Measures for the NE Multispecies and Monkfish Fisheries § 648.83 Multispecies minimum fish sizes. (a) Minimum fish sizes. (1) Minimum fish sizes for recreational vessels and charter/party vessels that are not...
studies. Investigators must supply positive and negative controls. Current pricing for CIDR Program studies is for a minimum study size of 90 samples, increasing in multiples of 90. Please inquire whether the assay is included for CIDR Program studies. FFPE samples are supported for MethylationEPIC
A Comparison of the Fit of Empirical Data to Two Latent Trait Models. Report No. 92.
ERIC Educational Resources Information Center
Hutten, Leah R.
Goodness of fit of raw test score data was compared using two latent trait models: the Rasch model and the Birnbaum three-parameter logistic model. Data were taken from various achievement tests and the Scholastic Aptitude Test (Verbal). A minimum sample size of 1,000 was required, and the minimum test length was 40 items. Results indicated that…
The Consequences of Indexing the Minimum Wage to Average Wages in the U.S. Economy.
ERIC Educational Resources Information Center
Macpherson, David A.; Even, William E.
The consequences of indexing the minimum wage to average wages in the U.S. economy were analyzed. The study data were drawn from the 1974-1978 May Current Population Survey (CPS) and the 180 monthly CPS Outgoing Rotation Group files for 1979-1993 (approximate annual sample sizes of 40,000 and 180,000, respectively). The effects of indexing on the…
50 CFR 648.103 - Minimum fish sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum fish sizes. 648.103 Section 648... Summer Flounder Fisheries § 648.103 Minimum fish sizes. (a) The minimum size for summer flounder is 14... carrying more than five crew members. (c) The minimum sizes in this section apply to whole fish or to any...
40 CFR 53.40 - General provisions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes and three wind speeds as specified in table D-2. A minimum of 3 replicate measurements of sampling... sampling effectiveness (percent) versus aerodynamic particle diameter (µm) for each of the three wind...
40 CFR 53.40 - General provisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 50 percent cutpoint of a test sampler shall be determined in a wind tunnel using 10 particle sizes and three wind speeds as specified in table D-2. A minimum of 3 replicate measurements of sampling... sampling effectiveness (percent) versus aerodynamic particle diameter (µm) for each of the three wind...
50 CFR 648.124 - Minimum fish sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum fish sizes. 648.124 Section 648... Scup Fishery § 648.124 Minimum fish sizes. (a) The minimum size for scup is 9 inches (22.9 cm) TL for... charter boat, or more than five crew members if a party boat. (c) The minimum size applies to whole fish...
Muposhi, Victor K; Gandiwa, Edson; Bartels, Paul; Makuza, Stanley M; Madiri, Tinaapi H
2016-01-01
The selective nature of trophy hunting may cause changes in desirable phenotypic traits in harvested species. A decline in trophy size of preferred species may reduce hunting destination competitiveness, thus compromising the sustainability of trophy hunting as a conservation tool. We explored the trophy quality and trends in harvesting patterns (i.e., 2004-2015) of Cape buffalo (Syncerus caffer), African elephant (Loxodonta africana), greater kudu (Tragelaphus strepsiceros) and sable (Hippotragus niger) in Matetsi Safari Area, northwest Zimbabwe. We used long-term data on horn and tusk size, age, quota size allocation and offtake levels of the selected species. To analyse the effect of year, area and age on trophy size, quota size and offtake levels, we used linear mixed models. A one-sample t-test was used to compare observed trophy size with the Safari Club International (SCI) minimum score. Trophy sizes for Cape buffalo and African elephant were below the SCI minimum score. Greater kudu trophy sizes were within the minimum score threshold, whereas sable trophy sizes were above the SCI minimum score between 2004 and 2015. Age at harvest for Cape buffalo, kudu and sable increased whilst that of elephant remained constant between 2004 and 2015. Quota size allocated for buffalo and the corresponding offtake levels declined over time. Offtake levels of African elephant and greater kudu declined whilst the quota size did not change between 2004 and 2015. The quota size for sable increased whilst the offtake levels fluctuated without changing for the period 2004-2015. The trophy size and harvesting patterns in these species pose a conservation and management dilemma regarding the sustainability of trophy hunting in this area. We recommend: (1) temporal and spatial rotational resting of hunting areas to create refuge to improve trophy quality and maintenance of genetic diversity, and (2) introduction of a variable trophy fee pricing system based on trophy size.
NASA Technical Reports Server (NTRS)
Rao, R. G. S.; Ulaby, F. T.
1977-01-01
The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in sample sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
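The "optimal allocation" referred to in conclusion (3) is classically Neyman allocation, in which each stratum's sample size is proportional to the stratum size times the stratum standard deviation. A minimal sketch with hypothetical strata (depth layers with invented sizes and moisture SDs, not values from the paper):

```python
def neyman_allocation(n_total, strata):
    """Neyman (optimal) allocation: stratum sample size proportional to
    stratum size N_h times stratum standard deviation S_h."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# hypothetical depth layers: (stratum size N_h, moisture SD S_h)
strata = [(100, 4.0), (100, 2.0), (100, 1.0)]
print(neyman_allocation(28, strata))  # [16, 8, 4]: more samples where SD is high
```

Compared with proportional allocation, this concentrates sampling effort in the most variable layers, which is why stratified designs benefit most when variability differs strongly with depth.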
Technical note: Alternatives to reduce adipose tissue sampling bias.
Cruz, G D; Wang, Y; Fadel, J G
2014-10-01
Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias and to determine the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and size and number of adipocytes were determined with a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefit of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as sample number increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates of the minimum sampling.
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General... fish size requirements established in this section. Minimum Fish Sizes (Total Length/Tail Length) Total...
NASA Technical Reports Server (NTRS)
Beyerle, F. J.
1973-01-01
A biodetection grinder for sampling aerospace materials for microorganisms without killing them was constructed. The device employs a shearing action to generate controllable sized particles with a minimum of energy input. Tests were conducted on materials ranging from soft plastics to hard rocks.
50 CFR 648.165 - Bluefish minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Bluefish minimum fish sizes. 648.165... Measures for the Atlantic Bluefish Fishery § 648.165 Bluefish minimum fish sizes. If the MAFMC determines through its annual review or framework adjustment process that minimum fish sizes are necessary to ensure...
50 CFR 648.165 - Bluefish minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Bluefish minimum fish sizes. 648.165... Measures for the Atlantic Bluefish Fishery § 648.165 Bluefish minimum fish sizes. If the MAFMC determines through its annual review or framework adjustment process that minimum fish sizes are necessary to ensure...
50 CFR 648.165 - Bluefish minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Bluefish minimum fish sizes. 648.165... Measures for the Atlantic Bluefish Fishery § 648.165 Bluefish minimum fish sizes. If the MAFMC determines through its annual review or framework adjustment process that minimum fish sizes are necessary to ensure...
50 CFR 648.162 - Minimum fish sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum fish sizes. 648.162 Section 648... Atlantic Bluefish Fishery § 648.162 Minimum fish sizes. If the Council determines through its annual review or framework adjustment process that minimum fish sizes are necessary to assure that the fishing...
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
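As one example of the kind of calculation this article surveys, the textbook Fisher z approximation for the sample size needed to detect a nonzero correlation can be sketched as follows (a standard formula, not code from the article):

```python
from math import ceil, log
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a correlation r != 0 with a
    two-sided test, using Fisher's z transform (textbook formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    c = 0.5 * log((1 + r) / (1 - r))     # Fisher z of r
    return ceil(((z_a + z_b) / c) ** 2 + 3)

print(n_for_correlation(0.3))  # 85 subjects for a modest correlation
print(n_for_correlation(0.5))  # far fewer for a strong correlation
```

The same pattern, critical values divided by a standardized effect, underlies the t-test, ANOVA, and Chi-squared calculations the article describes.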
50 CFR 648.126 - Scup minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Scup minimum fish sizes. 648.126 Section... Scup Fishery § 648.126 Scup minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The... whole fish or any part of a fish found in possession, e.g., fillets. These minimum sizes may be adjusted...
50 CFR 648.126 - Scup minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Scup minimum fish sizes. 648.126 Section... Scup Fishery § 648.126 Scup minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The... whole fish or any part of a fish found in possession, e.g., fillets. These minimum sizes may be adjusted...
50 CFR 648.126 - Scup minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Scup minimum fish sizes. 648.126 Section... Scup Fishery § 648.126 Scup minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The... whole fish or any part of a fish found in possession, e.g., fillets. These minimum sizes may be adjusted...
Mechanical and Physical Properties of ASTM C33 Sand
2008-02-01
ERDC/GSL TR-08-2: Grain-size distribution (ASTM D 422), 1 test run on the total sand sample; Proctor density curves (ASTM D 698 and D...); Proctor (Figure 4). Because of the noncohesive nature of the SP material, a series of relative density tests measuring both minimum and maximum... density tests were conducted with moisture added to the sand. A summary of the minimum and maximum densities is given in Table 2. During Proctor...
50 CFR 648.147 - Black sea bass minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Black sea bass minimum fish sizes. 648... Measures for the Black Sea Bass Fishery § 648.147 Black sea bass minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The minimum size for black sea bass is 11 inches (27.94 cm) total length for...
50 CFR 648.147 - Black sea bass minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Black sea bass minimum fish sizes. 648... Measures for the Black Sea Bass Fishery § 648.147 Black sea bass minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The minimum size for black sea bass is 11 inches (27.94 cm) total length for...
50 CFR 648.147 - Black sea bass minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Black sea bass minimum fish sizes. 648... Measures for the Black Sea Bass Fishery § 648.147 Black sea bass minimum fish sizes. (a) Moratorium (commercially) permitted vessels. The minimum size for black sea bass is 11 inches (27.94 cm) total length for...
50 CFR 648.124 - Minimum fish sizes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Minimum fish sizes. 648.124 Section 648... Scup Fishery § 648.124 Minimum fish sizes. Link to an amendment published at 76 FR 60633, Sept. 29... if a party boat. (c) The minimum size applies to whole fish or any part of a fish found in possession...
50 CFR 648.72 - Minimum surf clam size.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Atlantic Surf Clam and Ocean Quahog Fisheries § 648.72 Minimum surf clam size. (a) Minimum length. The minimum length for surf clams is 4.75 inches (12.065 cm). (b) Determination of compliance. No more than 50... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum surf clam size. 648.72 Section 648...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false F.o.b. Origin-Minimum Size... Clauses 52.247-61 F.o.b. Origin—Minimum Size of Shipments. As prescribed in 47.305-16(c), insert the following clause in solicitations and contracts when volume rates may apply: F.o.b. Origin—Minimum Size of...
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
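The subsampling logic described above can be sketched as a small Monte Carlo: simulate log-log data with a known allometric slope, fit OLS regressions at several sample sizes, and count how often isometry (slope = 1) is rejected. All parameter values here are illustrative, not estimates from the Alligator dataset:

```python
import math
import random

def allometry_detection_rate(n, slope=1.15, sigma=0.05, reps=2000, seed=0):
    """Fraction of simulated log-log samples of size n in which OLS rejects
    isometry (H0: slope = 1, two-sided). Slope, noise, and size range are
    illustrative assumptions, not values from the study."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x = [rng.uniform(1.0, 2.0) for _ in range(n)]         # log body size
        y = [slope * xi + rng.gauss(0.0, sigma) for xi in x]  # log trait
        mx = sum(x) / n
        my = sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        a = my - b * mx
        sse = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
        se = math.sqrt(sse / (n - 2) / sxx)
        if abs((b - 1.0) / se) > 1.96:   # rough z cutoff in place of t
            rejections += 1
    return rejections / reps

for n in (5, 10, 20, 40):
    print(n, allometry_detection_rate(n))  # detection rate rises with n
```

At small n the simulated rejection rate falls well below conventional power targets, which is exactly the Type II error pattern the study reports: true allometry going undetected in fossil-sized samples.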
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed-indexed journals and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Lack of size selectivity for paddlefish captured in hobbled gillnets
Scholten, G.D.; Bettoli, P.W.
2007-01-01
A commercial fishery for paddlefish Polyodon spathula caviar exists in Kentucky Lake, a reservoir on the lower Tennessee River. A 152-mm (bar-measure) minimum mesh size restriction on entanglement gear was enacted in 2002, and the minimum size limit was increased to 864 mm eye-fork length to reduce the possibility of recruitment overfishing. Paddlefish were sampled in 2003-2004 using experimental monofilament gillnets with panels of 89, 102, 127, 152, 178, and 203-mm meshes, and the efficacy of the mesh size restriction was evaluated. Following the standards of commercial gear used in that fishery, nets were "hobbled" (i.e., 128 m × 3.6 m nets were tied down to 2.4 m; 91 m × 9.1 m nets were tied down to 7.6 m). The mean lengths of paddlefish (N_total = 576 fish) captured in each mesh were similar among most meshes, and bycatch rates of sublegal fish did not vary with mesh size. Selectivity curves could not be modeled because the mean and modal lengths of fish captured in each mesh did not increase with mesh size. Ratios of fish girth to mesh perimeter (G:P) for individual fish were often less than 1.0, as a result of the largest meshes capturing small paddlefish. It is unclear whether the lack of size selectivity was due to the hobbling of the gillnets, the unique morphology of paddlefish, or the fact that they swim with their mouths agape when filter feeding. The lack of size selectivity by hobbled gillnets fished in Kentucky Lake means that managers cannot influence the size of paddlefish captured by commercial gillnet gear by changing minimum mesh size regulations. © 2006 Elsevier B.V. All rights reserved.
50 CFR 648.93 - Monkfish minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Monkfish minimum fish sizes. 648.93... Measures for the NE Multispecies and Monkfish Fisheries § 648.93 Monkfish minimum fish sizes. (a) General provisions. All monkfish caught by vessels issued a valid Federal monkfish permit must meet the minimum fish...
50 CFR 622.492 - Minimum size limit.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Queen Conch Resources of Puerto Rico and the U.S. Virgin Islands § 622.492 Minimum size limit. (a) The minimum size...
50 CFR 622.492 - Minimum size limit.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Queen Conch Resources of Puerto Rico and the U.S. Virgin Islands § 622.492 Minimum size limit. (a) The minimum size...
Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos
2018-01-03
Statistical power assessment is an important component of hypothesis-driven research but until relatively recently (mid-1990s) no methods were available for assessing power in experiments involving continuum data and in particular those involving one-dimensional (1D) time series. The purpose of this study was to describe how continuum-level power analyses can be used to plan hypothesis-driven biomechanics experiments involving 1D data. In particular, we demonstrate how theory- and pilot-driven 1D effect modeling can be used for sample-size calculations for both single- and multi-subject experiments. For theory-driven power analysis we use the minimum jerk hypothesis and single-subject experiments involving straight-line, planar reaching. For pilot-driven power analysis we use a previously published knee kinematics dataset. Results show that powers on the order of 0.8 can be achieved with relatively small sample sizes, five and ten for within-subject minimum jerk analysis and between-subject knee kinematics, respectively. However, the appropriate sample size depends on a priori justifications of biomechanical meaning and effect size. The main advantage of the proposed technique is that it encourages a priori justification regarding the clinical and/or scientific meaning of particular 1D effects, thereby robustly structuring subsequent experimental inquiry. In short, it shifts focus from a search for significance to a search for non-rejectable hypotheses. Copyright © 2017 Elsevier Ltd. All rights reserved.
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. 
Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
[A Review on the Use of Effect Size in Nursing Research].
Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae
2015-10-01
The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size were explained. Formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example using G*Power 3, the most widely used software for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
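For the two-sample case discussed above, the most common effect size is Cohen's d: the mean difference divided by the pooled standard deviation. A minimal sketch (the data values are invented for illustration, not taken from the paper):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d for two independent samples, using the pooled SD."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation weights each group's variance by its df.
    sp = sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2))
    return (mean(x) - mean(y)) / sp

a = [5.1, 4.8, 5.6, 5.3, 4.9]
b = [4.2, 4.5, 4.1, 4.6, 4.0]
print(round(cohens_d(a, b), 2))  # → 2.95, a very large effect
```

Conventional benchmarks (0.2 small, 0.5 medium, 0.8 large) then feed directly into a power calculation such as the ones G*Power performs.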
An Efficient, Robust, and Inexpensive Grinding Device for Herbal Samples like Cinchona Bark.
Hansen, Steen Honoré; Holmfred, Else; Cornett, Claus; Maldonado, Carla; Rønsted, Nina
2015-01-01
An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery of analytes in extracts of Cinchona bark was investigated using HPLC.
50 CFR 648.83 - Multispecies minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 22 (55.9 cm) Haddock 18 (45.7 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 14 (35.6 cm) Yellowtail flounder 13 (33.0 cm) American plaice...
50 CFR 648.83 - Multispecies minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 19 (48.3 cm) Haddock 16 (40.6 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 13 (33 cm) Yellowtail flounder 12 (30.5 cm) American plaice (dab...
50 CFR 648.83 - Multispecies minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 19 (48.3 cm) Haddock 16 (40.6 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 13 (33 cm) Yellowtail flounder 12 (30.5 cm) American plaice (dab...
50 CFR 648.83 - Multispecies minimum fish sizes.
Code of Federal Regulations, 2011 CFR
2011-10-01
... vessels are subject to the following minimum fish sizes, determined by total length (TL): Minimum Fish Sizes (TL) for Commercial Vessels Species Size(inches) Cod 22 (55.9 cm) Haddock 18 (45.7 cm) Pollock 19 (48.3 cm) Witch flounder (gray sole) 14 (35.6 cm) Yellowtail flounder 13 (33.0 cm) American plaice...
2013-09-30
performance of algorithms detecting dives, strokes, clicks, respiration and gait changes. (ii) Calibration errors: Size and power constraints in...acceptance parameters used to detect and classify events. For example, swim stroke detection requires parameters defining the minimum magnitude and the min...and max duration of a stroke. Species-dependent parameters can be selected from existing DTAG data but other parameters depend on the size of the
Extraction of citral oil from lemongrass (Cymbopogon Citratus) by steam-water distillation technique
NASA Astrophysics Data System (ADS)
Alam, P. N.; Husin, H.; Asnawi, T. M.; Adisalamun
2018-04-01
In Indonesia, citral oil is produced from lemongrass (Cymbopogon citratus) by a traditional technique that gives a low yield. To improve the yield, an appropriate extraction technology is required. In this research, a steam-water distillation technique was applied to extract the essential oil from the lemongrass. The effects of sample particle size and bed volume on the yield and quality of the citral oil produced were investigated. Drying and refining times of 2 hours were used as fixed variables. The results show that the minimum citral oil yield of 0.53% was obtained with a sample particle size of 3 cm and a bed volume of 80%, whereas the maximum yield of 1.95% was obtained with a particle size of 15 cm and a bed volume of 40%. The lowest specific gravity of 0.80 and the highest specific gravity of 0.905 were obtained with a particle size of 8 cm and a bed volume of 80%, and with a particle size of 12 cm and a bed volume of 70%, respectively. The lowest refractive index of 1.480 and the highest refractive index of 1.495 were obtained with a particle size of 8 cm and a bed volume of 70%, and with a particle size of 15 cm and a bed volume of 40%, respectively. The produced citral oil was soluble in 70% alcohol at a ratio of 1:1, and the citral concentration obtained was around 79%.
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to the different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
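Choosing a randomization rate to minimize the variance of the test statistic has a simple classical analogue, Neyman allocation, in which patients are assigned in proportion to each arm's outcome standard deviation. The paper's Bayesian design updates the rate sequentially; the sketch below shows only this fixed frequentist rule, with invented SD values:

```python
def neyman_allocation(sigma1, sigma2):
    """Fraction of patients assigned to arm 1 that minimizes the variance
    of the difference-in-means estimator: allocate in proportion to SD."""
    return sigma1 / (sigma1 + sigma2)

# Equal SDs -> the familiar balanced 1:1 randomization
print(neyman_allocation(1.0, 1.0))  # → 0.5
# Arm 1 twice as variable -> two thirds of patients go to arm 1
print(neyman_allocation(2.0, 1.0))
```

In a sequential design, the unknown SDs would be replaced at each interim look by their current (posterior) estimates, which is what makes the randomization rate adaptive.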
NASA Astrophysics Data System (ADS)
Lehmkuhl, John F.
1984-03-01
The concept of minimum populations of wildlife and plants has only recently been discussed in the literature. Population genetics has emerged as a basic underlying criterion for determining minimum population size. This paper presents a genetic framework and procedure for determining minimum viable population size and dispersion strategies in the context of multiple-use land management planning. A procedure is presented for determining minimum population size based on maintenance of genetic heterozygosity and reduction of inbreeding. A minimum effective population size (Ne) of 50 breeding animals is taken from the literature as the minimum short-term size to keep inbreeding below 1% per generation. Steps in the procedure adjust Ne to account for variance in progeny number, unequal sex ratios, overlapping generations, population fluctuations, and the period of habitat/population constraint. The result is an approximate census number that falls within a range of effective population size of 50-500 individuals. This population range defines the time range of short- to long-term population fitness and evolutionary potential. The length of the term is a relative function of the species' generation time. Two population dispersion strategies are proposed: core population and dispersed population.
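Two of the adjustments listed (unequal sex ratios and population fluctuations) have standard population-genetics formulas: Wright's sex-ratio correction and the harmonic mean of census sizes over generations. A minimal sketch with illustrative numbers (not the paper's data):

```python
def ne_sex_ratio(n_males, n_females):
    """Effective population size under an unequal sex ratio
    (Wright's formula: Ne = 4*Nm*Nf / (Nm + Nf))."""
    return 4 * n_males * n_females / (n_males + n_females)

def ne_fluctuating(census_sizes):
    """Effective size across generations of fluctuating census size:
    the harmonic mean, which is dominated by the smallest generation."""
    return len(census_sizes) / sum(1 / n for n in census_sizes)

print(ne_sex_ratio(10, 40))                      # → 32.0: 50 animals, but Ne < 50
print(round(ne_fluctuating([100, 1000, 100]), 1))  # → 142.9, far below the mean of 400
```

Both corrections pull Ne below the raw census count, which is why the procedure ends with a census number larger than the nominal Ne of 50.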
50 CFR 648.233 - Minimum Fish Sizes. [Reserved
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Minimum Fish Sizes. [Reserved] 648.233 Section 648.233 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND... Measures for the Spiny Dogfish Fishery § 648.233 Minimum Fish Sizes. [Reserved] ...
Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population
Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.
2001-01-01
For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3- year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, relative performances of different methods can vary with population size, sample size, and degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performances over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.
50 CFR 622.208 - Minimum mesh size applicable to rock shrimp off Georgia and Florida.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Minimum mesh size applicable to rock... mesh size applicable to rock shrimp off Georgia and Florida. (a) The minimum mesh size for the cod end of a rock shrimp trawl net in the South Atlantic EEZ off Georgia and Florida is 1 7/8 inches (4.8 cm...
50 CFR 622.208 - Minimum mesh size applicable to rock shrimp off Georgia and Florida.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Minimum mesh size applicable to rock... mesh size applicable to rock shrimp off Georgia and Florida. (a) The minimum mesh size for the cod end of a rock shrimp trawl net in the South Atlantic EEZ off Georgia and Florida is 1 7/8 inches (4.8 cm...
Experimental and numerical modeling research of rubber material during microwave heating process
NASA Astrophysics Data System (ADS)
Chen, Hailong; Li, Tao; Li, Kunling; Li, Qingling
2018-05-01
This paper investigates the heating behavior of block rubber by experimental and simulated methods, using the COMSOL Multiphysics 5.0 software for the numerical simulation. The effects of microwave frequency, power, and sample size on the temperature distribution are examined. The effect of frequency on the temperature distribution is obvious: the maximum and minimum temperatures of the block rubber first increase and then decrease as the frequency increases. The microwave heating efficiency is highest at a frequency of 2450 MHz, although a more uniform temperature distribution is obtained at the other frequencies. The influence of microwave power on the temperature distribution is also remarkable: the smaller the power, the more uniform the temperature distribution in the block rubber, while the effect of power on the heating efficiency is not obvious. The effect of sample size on the temperature distribution is also evident: the smaller the sample size, the more uniform the temperature distribution in the block rubber, but the lower the microwave heating efficiency. The results can serve as references for research on heating rubber material by microwave technology.
Hydroxyapatite coatings containing Zn and Si on Ti-6Al-4V alloy by plasma electrolytic oxidation
NASA Astrophysics Data System (ADS)
Hwang, In-Jo; Choe, Han-Cheol
2018-02-01
In this study, hydroxyapatite coatings containing Zn and Si were formed on Ti-6Al-4V alloy by plasma electrolytic oxidation and examined using various experimental instruments. The pore size depends on the electrolyte concentration, and the particle size and number of pores increase in both the surface and pore regions. For the Zn/Si samples, the pore size was larger than that of the Zn samples. The maximum pore size decreased and the minimum pore size increased up to 10Zn/Si, and Zn and Si affect the formation of pore shapes. As the Zn ion concentration increases, the particle size tends to increase and the number of particles on the surface decreases, whereas the size and number of particles in the pores increase. Zn is mainly detected in the pores, and Si is mainly detected on the surface. The crystallite size of anatase increased with Zn ion concentration, whereas it decreased when Si ions were added.
7 CFR 51.2927 - Marking and packing requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Requirements § 51.2927 Marking and packing requirements. The minimum size or numerical count of the apricots in any package shall be plainly labeled, stenciled, or otherwise marked on the package. (a) Numerical count. When the numerical count is used the fruit in any sample shall not vary more than one-fourth inch...
7 CFR 51.2927 - Marking and packing requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Requirements § 51.2927 Marking and packing requirements. The minimum size or numerical count of the apricots in any package shall be plainly labeled, stenciled, or otherwise marked on the package. (a) Numerical count. When the numerical count is used the fruit in any sample shall not vary more than one-fourth inch...
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
NASA Astrophysics Data System (ADS)
Luthra, Deepali; Kumar, Sacheen
2018-05-01
Fingerprints are very important evidence at a crime scene and must be developed clearly, in the shortest possible time, to solve the case. Metal oxide nanoparticles could be a means of developing latent fingerprints. Zinc oxide and tin oxide nanoparticles were prepared using a chemical precipitation technique, then dried and characterized by X-ray diffraction, UV-Visible spectroscopy, and FTIR. The zinc oxide crystallite size was found to be 14.75 nm with minimum reflectance at 360 nm, whereas tin oxide had a crystallite size of 90 nm with minimum reflectance at 321 nm. Using these powdered samples, latent fingerprints were developed on glass, plastic, and glossy cardboard. Zinc oxide was found to be a better candidate than tin oxide for fingerprint development on all three types of substrate.
Experimental scheme and restoration algorithm of block compression sensing
NASA Astrophysics Data System (ADS)
Zhang, Linxia; Zhou, Qun; Ke, Jun
2018-01-01
Compressed Sensing (CS) exploits the sparseness of a target to obtain its image with much less data than required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compression sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, the orthogonal matching pursuit algorithm (OMP) and the total variation minimization algorithm (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
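OMP itself is short enough to sketch: at each step it selects the dictionary column most correlated with the current residual, then re-fits all selected columns by least squares. A minimal NumPy sketch (assuming NumPy is available; the orthonormal DCT test dictionary and the 2-sparse signal are our own toy example, not the paper's measurement system):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of A,
    re-fitting the coefficients by least squares after each pick."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy dictionary: orthonormal DCT-II basis, so exact recovery is guaranteed.
m = 16
i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
A = np.cos(np.pi * (i + 0.5) * j / m)
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(m)
x_true[[3, 11]] = [1.0, -2.0]          # 2-sparse signal
x_hat = omp(A, A @ x_true, k=2)
print(np.allclose(x_hat, x_true))      # → True
```

Real CS systems use overcomplete or random measurement matrices, where recovery holds only probabilistically; the orthonormal basis here just makes the greedy steps easy to verify by hand.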
50 CFR 622.454 - Minimum size limit.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Spiny Lobster Fishery of Puerto Rico and the U.S. Virgin Islands § 622.454 Minimum size limit. (a) The minimum...
50 CFR 622.454 - Minimum size limit.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Spiny Lobster Fishery of Puerto Rico and the U.S. Virgin Islands § 622.454 Minimum size limit. (a) The minimum...
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which can affect the final result, is the choice of a metric for the ranking of genes, and applying the default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate, and computational load was established: the absolute value of the Moderated Welch Test statistic, the Minimum Significant Difference, the absolute value of the Signal-to-Noise ratio, and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-to-Noise ratio gave stable results, while the Baumgartner-Weiss-Schindler test and the Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and the Minimum Significant Difference has the best overall specificity of gene set analysis.
When the number of non-normally distributed genes is high, using Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 4 2010-10-01 2010-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4...-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-4 Minimum cable conductor size. Each cable conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4...-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-4 Minimum cable conductor size. Each cable conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
50 CFR 648.75 - Shucking at sea and minimum surfclam size.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 50 Wildlife and Fisheries 12 2012-10-01 2012-10-01 false Shucking at sea and minimum surfclam size... Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.75 Shucking at sea and minimum surfclam size. (a) Shucking at sea—(1) Observers. (i) The Regional Administrator may allow the shucking of...
50 CFR 648.75 - Shucking at sea and minimum surfclam size.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false Shucking at sea and minimum surfclam size... Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.75 Shucking at sea and minimum surfclam size. (a) Shucking at sea—(1) Observers. (i) The Regional Administrator may allow the shucking of...
50 CFR 648.75 - Shucking at sea and minimum surfclam size.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Shucking at sea and minimum surfclam size... Measures for the Atlantic Surf Clam and Ocean Quahog Fisheries § 648.75 Shucking at sea and minimum surfclam size. (a) Shucking at sea—(1) Observers. (i) The Regional Administrator may allow the shucking of...
Khatkar, Mehar S; Nicholas, Frank W; Collins, Andrew R; Zenger, Kyall R; Cavanagh, Julie A L; Barris, Wes; Schnabel, Robert D; Taylor, Jeremy F; Raadsma, Herman W
2008-04-24
The extent of linkage disequilibrium (LD) within a population determines the number of markers that will be required for successful association mapping and marker-assisted selection. Most studies on LD in cattle reported to date are based on microsatellite markers or small numbers of single nucleotide polymorphisms (SNPs) covering one or only a few chromosomes. This is the first comprehensive study of the extent of LD in cattle, analyzing data on 1,546 Holstein-Friesian bulls genotyped for 15,036 SNP markers covering all regions of all autosomes. Furthermore, most studies in cattle have used relatively small sample sizes and, consequently, may have had biased estimates of measures commonly used to describe LD. We examine minimum sample sizes required to estimate LD without bias and loss in accuracy. Finally, relatively little information is available on comparative LD structures in other mammalian species such as human and mouse, and we compare LD structure in cattle with public-domain data from both human and mouse. We computed three LD estimates, D', Dvol and r2, for 1,566,890 syntenic SNP pairs and a sample of 365,400 non-syntenic pairs. Mean D' is 0.189 among syntenic SNPs, and 0.105 among non-syntenic SNPs; mean r2 is 0.024 among syntenic SNPs and 0.0032 among non-syntenic SNPs. All three measures of LD for syntenic pairs decline with distance; the decline is much steeper for r2 than for D' and Dvol. The values of D' and Dvol are quite similar. Significant LD in cattle extends to 40 kb (when estimated as r2) and 8.2 Mb (when estimated as D'). The mean values for LD at large physical distances are close to those for non-syntenic SNPs. The minor allele frequency threshold affects the distribution and extent of LD. For unbiased and accurate estimates of LD across marker intervals spanning < 1 kb to > 50 Mb, minimum sample sizes of 400 (for D') and 75 (for r2) are required. The bias due to small sample sizes increases with the inter-marker interval.
LD in cattle is much less extensive than in a mouse population created from crossing inbred lines, and more extensive than in humans. For association mapping in Holstein-Friesian cattle, for a given design, at least one SNP is required for each 40 kb, giving a total requirement of at least 75,000 SNPs for a low power whole-genome scan (median r2 > 0.19) and up to 300,000 markers at 10 kb intervals for a high power genome scan (median r2 > 0.62). For estimation of LD by D' and Dvol with sufficient precision, a sample size of at least 400 is required, whereas for r2 a minimum sample of 75 is adequate.
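The two LD statistics compared above have standard definitions in terms of two-locus haplotype and allele frequencies: D = p_AB − p_A·p_B, r² = D²/(p_A·q_A·p_B·q_B), and D' = |D|/D_max. A minimal sketch (the function name and example frequencies are ours, for illustration only):

```python
def ld_measures(p_ab, p_a, p_b):
    """D' and r^2 from the haplotype frequency p_AB and the allele
    frequencies p_A, p_B at two biallelic loci."""
    d = p_ab - p_a * p_b                      # raw disequilibrium coefficient
    qa, qb = 1 - p_a, 1 - p_b
    r2 = d * d / (p_a * qa * p_b * qb)
    # D' normalizes D by its maximum attainable value given the allele freqs.
    d_max = min(p_a * qb, qa * p_b) if d > 0 else min(p_a * p_b, qa * qb)
    return abs(d) / d_max, r2

# Perfect LD: allele A only ever occurs with allele B, so D' = r^2 = 1
print(tuple(round(v, 6) for v in ld_measures(p_ab=0.3, p_a=0.3, p_b=0.3)))  # → (1.0, 1.0)
```

The asymmetry the study reports (significant D' out to 8.2 Mb but r² only to 40 kb) reflects exactly this normalization: D' can be 1 even when allele frequencies differ, while r² is penalized by frequency mismatch.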
Improved Time-Lapsed Angular Scattering Microscopy of Single Cells
NASA Astrophysics Data System (ADS)
Cannaday, Ashley E.
By measuring angular scattering patterns from biological samples and fitting them with a Mie theory model, one can estimate the organelle size distribution within many cells. Quantitative organelle sizing of ensembles of cells using this method has been well established. Our goal is to develop the methodology to extend this approach to the single cell level, measuring the angular scattering at multiple time points and estimating the non-nuclear organelle size distribution parameters. The diameters of individual organelle-size beads were successfully extracted using scattering measurements with a minimum deflection angle of 20 degrees. However, the accuracy of size estimates can be limited by the angular range detected. In particular, simulations by our group suggest that, for cell organelle populations with a broader size distribution, the accuracy of size prediction improves substantially if the minimum detection angle is 15 degrees or less. The system was therefore modified to collect scattering angles down to 10 degrees. To confirm experimentally that size predictions will become more stable when lower scattering angles are detected, initial validations were performed on individual polystyrene beads ranging in diameter from 1 to 5 microns. We found that the lower minimum angle enabled the width of this delta-function size distribution to be predicted more accurately. Scattering patterns were then acquired and analyzed from single mouse squamous cell carcinoma cells at multiple time points. The scattering patterns exhibit angular dependencies that look unlike those of any single sphere size, but are well-fit by a broad distribution of sizes, as expected. To determine the fluctuation level in the estimated size distribution due to measurement imperfections alone, formaldehyde-fixed cells were measured. Subsequent measurements on live (non-fixed) cells revealed an order of magnitude greater fluctuation in the estimated sizes compared to fixed cells.
With our improved and better-understood approach to single cell angular scattering, we are now capable of reliably detecting changes in organelle size predictions due to biological causes above our measurement error of 20 nm, which enables us to apply our system to future studies of the investigation of various single cell biological processes.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimation of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) statement was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimum and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than a 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively.
When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
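The abstract's central point — plugging an underestimated sample SD into a standard sample-size formula yields an underpowered trial — can be sketched with the usual two-sample normal approximation. This is a minimal illustration, not the authors' simulation code; the population SD of 44 is taken from the abstract, while the underestimated SD of 30 is an arbitrary example value.

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal

def two_sample_n(sd, delta, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample z-test (normal approximation)."""
    z = Z.inv_cdf(1 - alpha / 2) + Z.inv_cdf(power)
    return ceil(2 * (z * sd / delta) ** 2)

def actual_power(n, sd_true, delta, alpha=0.05):
    """Power actually achieved with per-group n when the true SD is sd_true."""
    return Z.cdf(sqrt(n / 2) * delta / sd_true - Z.inv_cdf(1 - alpha / 2))

# Population SD 44, target effect Cohen's d = 0.5 (i.e. delta = 22)
n_true = two_sample_n(44, 22)         # n planned with the correct SD
n_low  = two_sample_n(30, 22)         # n planned with an underestimated sample SD
p_low  = actual_power(n_low, 44, 22)  # power actually achieved -> under 50%
```

With the correct SD the formula gives 63 per group; the underestimated SD gives only 30 per group, which delivers roughly 49% power against the true SD — matching the abstract's "less than 50% chance" finding.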
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and α = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a detectable difference of ±25 percent of the mean could be achieved with five counts per factor level. Sample sizes sufficient to detect actual differences were >200 counts for the Wood Thrush (Hylocichla mustelina) but <10 counts for the Prothonotary Warbler (Protonotaria citrea). Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits.
Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
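The connection between the reported MSE values and a per-level minimum count can be illustrated with a standard two-group calculation that treats the ANOVA MSE as the error variance. This is a sketch under stated assumptions: the MSE of 9.28 comes from the abstract, but the detectable difference of 3 birds per count and the α/power settings are illustrative choices, not values from the study.

```python
from math import ceil
from statistics import NormalDist

def min_counts_per_level(mse, detectable_diff, alpha=0.05, power=0.80):
    """Per-level point counts needed to detect a mean difference of
    `detectable_diff` between two factor levels, treating the ANOVA MSE
    as the error variance (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * mse * z ** 2 / detectable_diff ** 2)

# MSE = 9.28 for total individuals (from the study); a detectable
# difference of 3 birds per count is an illustrative choice.
n = min_counts_per_level(9.28, 3)
```

As the abstract notes, the required count is extremely sensitive to the detectable difference: halving it roughly quadruples the sample size, which is why the reported estimates span orders of magnitude.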
7 CFR 51.2952 - Size specifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... specifications. Size shall be specified in accordance with the facts in terms of one of the following classifications: (a) Mammoth size. Mammoth size means walnuts of which not over 12 percent, by count, pass through... foregoing classifications, size of walnuts may be specified in terms of minimum diameter, or minimum and...
7 CFR 51.2952 - Size specifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... specifications. Size shall be specified in accordance with the facts in terms of one of the following classifications: (a) Mammoth size. Mammoth size means walnuts of which not over 12 percent, by count, pass through... foregoing classifications, size of walnuts may be specified in terms of minimum diameter, or minimum and...
7 CFR 51.1216 - Size requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) The numerical count or a count-size based on equivalent tray pack size designations or the minimum... numerical count is not shown the minimum diameter shall be plainly stamped, stenciled, or otherwise marked...
7 CFR 51.1216 - Size requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) The numerical count or a count-size based on equivalent tray pack size designations or the minimum... numerical count is not shown the minimum diameter shall be plainly stamped, stenciled, or otherwise marked...
TWO-PHASE FORMATION IN SOLUTIONS OF TOBACCO MOSAIC VIRUS AND THE PROBLEM OF LONG-RANGE FORCES
Oster, Gerald
1950-01-01
In a nearly salt-free medium, a dilute tobacco mosaic virus solution of rod-shaped virus particles of uniform length forms two phases; the bottom, optically anisotropic phase has a greater virus concentration than the top, optically isotropic phase. For a sample containing particles of various lengths, the bottom phase contains longer particles than does the top, and the concentrations in the top and bottom phases are nearly equal. The longer the particles, the lower the minimum concentration necessary for two-phase formation. Increasing the salt concentration increases the minimum concentration. The formation of two phases is explained in terms of geometrical considerations without recourse to the concept of long-range attractive forces. The minimum concentration for two-phase formation is that concentration at which correlation in orientation between the rod-shaped particles begins to take place. This concentration is determined by the thermodynamically effective size and shape of the particles as obtained from the concentration dependence of the osmotic pressure of the solutions measured by light scattering. The effective volume of the particles is introduced into Onsager's theory for correlation of orientation of uniform-size rods, and good agreement with experiment is obtained. The theory is extended to a mixture of non-uniform-size rods and to the case in which the salt concentration is varied, and agreement with experiment is obtained. The thermodynamically effective volume of the particles and its dependence on salt concentration are explained in terms of the shape of the particles and the electrostatic repulsion between them. Current theories of the hydration of proteins and of long-range forces are critically discussed. The bottom layer of freshly purified tobacco mosaic virus samples shows Bragg diffraction of visible light.
The diffraction data indicate that the virus particles in solution form three-dimensional crystals approximately the size of crystalline inclusion bodies found in the cells of plants suffering from the disease. PMID:15422102
Evaluating Goodness-of-Fit Indexes for Testing Measurement Invariance.
ERIC Educational Resources Information Center
Cheung, Gordon W.; Rensvold, Roger B.
2002-01-01
Examined 20 goodness-of-fit indexes based on the minimum fit function using a simulation under the 2-group situation. Results support the use of the delta comparative fit index, delta Gamma hat, and delta McDonald's Noncentrality Index to evaluate measurement invariance. These three approaches are independent of model complexity and sample size.…
Duret, Manon T; Pachiadaki, Maria G; Stewart, Frank J; Sarode, Neha; Christaki, Urania; Monchy, Sébastien; Srivastava, Ankita; Edgcomb, Virginia P
2015-05-01
Oxygen minimum zones (OMZs) caused by water column stratification appear to be expanding in parts of the world's ocean, with consequences for marine biogeochemical cycles. OMZ formation is often fueled by high surface primary production, and sinking organic particles can be hotspots of interactions and activity within microbial communities. This study investigated the diversity of OMZ protist communities in two biomass size fractions (>30 and 30-1.6 μm filters) from the world's largest permanent OMZ in the Eastern Tropical North Pacific. Diversity was quantified via Illumina MiSeq sequencing of the V4 region of 18S SSU rRNA genes in samples spanning oxygen gradients at two stations. Alveolata and Rhizaria dominated the two size fractions at both sites along the oxygen gradient. Community composition at finer taxonomic levels was partially shaped by oxygen concentration, as communities associated with oxic versus anoxic waters shared only ∼32% of operational taxonomic unit (OTU) (97% sequence identity) composition. Overall, only 9.7% of total OTUs were recovered at both stations and under all oxygen conditions sampled, implying structuring of the eukaryotic community in this area. Size-fractionated communities exhibited different taxonomic features (e.g. Syndiniales Group I in the 1.6-30 μm fraction) that could be explained by the microniches created on sinking particles originating at the surface. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Practical implementation of channelized hotelling observers: effect of ROI size
NASA Astrophysics Data System (ADS)
Ferrero, Andrea; Favazza, Christopher P.; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H.
2017-03-01
Fundamental to the development and application of channelized Hotelling observer (CHO) models is the selection of the region of interest (ROI) to evaluate. For assessment of medical imaging systems, reducing the ROI size can be advantageous. Smaller ROIs enable a greater concentration of interrogable objects in a single phantom image, thereby providing more information from a set of images and reducing the overall image acquisition burden. Additionally, smaller ROIs may promote better assessment of clinical patient images as different patient anatomies present different ROI constraints. To this end, we investigated the minimum ROI size that does not compromise the performance of the CHO model. In this study, we evaluated both simulated images and phantom CT images to identify the minimum ROI size that resulted in an accurate figure of merit (FOM) of the CHO's performance. More specifically, the minimum ROI size was evaluated as a function of the following: number of channels, spatial frequency and number of rotations of the Gabor filters, size and contrast of the object, and magnitude of the image noise. Results demonstrate that a minimum ROI size exists below which the CHO's performance is grossly inaccurate. The minimum ROI size is shown to increase with number of channels and be dictated by truncation of lower frequency filters. We developed a model to estimate the minimum ROI size as a parameterized function of the number of orientations and spatial frequencies of the Gabor filters, providing a guide for investigators to appropriately select parameters for model observer studies.
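A minimal channelized Hotelling observer along the lines described above — simulated white-noise ROIs, a Gabor channel bank, and the detectability index d′ as the figure of merit — might look as follows. The channel parameters, Gaussian signal shape, and noise model are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32  # ROI side length in pixels

def gabor_channels(size, freqs=(1/16, 1/8, 1/4), n_orient=4):
    """Gabor channel bank; envelope width ~ 1/f is an assumed convention."""
    ys, xs = np.mgrid[:size, :size] - size // 2
    chans = []
    for f in freqs:
        w = 1.0 / f  # channel envelope width (pixels)
        env = np.exp(-4 * np.log(2) * (xs**2 + ys**2) / w**2)
        for k in range(n_orient):
            th = np.pi * k / n_orient
            xr = xs * np.cos(th) + ys * np.sin(th)
            chans.append((env * np.cos(2 * np.pi * f * xr)).ravel())
    return np.array(chans)  # shape: (n_channels, size*size)

U = gabor_channels(N)

ys, xs = np.mgrid[:N, :N] - N // 2
signal = 3.0 * np.exp(-(xs**2 + ys**2) / (2 * 3.0**2))  # toy low-contrast object

def cho_dprime(sig, n_train=500, noise_sd=1.0):
    """CHO detectability index d' estimated from training ensembles."""
    g0 = rng.normal(0, noise_sd, (n_train, N * N))                 # signal-absent
    g1 = rng.normal(0, noise_sd, (n_train, N * N)) + sig.ravel()   # signal-present
    v0, v1 = g0 @ U.T, g1 @ U.T                                    # channel outputs
    S = 0.5 * (np.cov(v0.T) + np.cov(v1.T))                        # pooled channel covariance
    dv = v1.mean(0) - v0.mean(0)
    return float(np.sqrt(dv @ np.linalg.solve(S, dv)))             # Hotelling d'
```

Shrinking the ROI in this sketch truncates the low-frequency (wide-envelope) channels first, which is the mechanism the authors identify as setting the minimum usable ROI size.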
A multifunctional force microscope for soft matter with in situ imaging
NASA Astrophysics Data System (ADS)
Roberts, Paul; Pilkington, Georgia A.; Wang, Yumo; Frechette, Joelle
2018-04-01
We present the multifunctional force microscope (MFM), a normal and lateral force-measuring instrument with in situ imaging. In the MFM, forces are calculated from the normal and lateral deflection of a cantilever as measured via fiber optic sensors. The normal motion of the cantilever is controlled by a linear micro-translation stage and a piezoelectric actuator, while the lateral motion of the sample is controlled by another linear micro-translation stage. The micro-translation stages allow for travel distances that span 25 mm with a minimum step size of 50 nm, while the piezo has a minimum step size of 0.2 nm but a maximum range of 100 μm. Custom-designed cantilevers allow forces to be measured over 4 orders of magnitude (from 50 μN to 1 N). We perform probe tack, friction, and hydrodynamic drainage experiments to demonstrate the sensitivity, versatility, and measurable force range of the instrument.
Magnetic properties of M0.3Fe2.7O4 (M = Fe, Zn and Mn) ferrites nanoparticles
NASA Astrophysics Data System (ADS)
Modaresi, Nahid; Afzalzadeh, Reza; Aslibeiki, Bagher; Kameli, Parviz
2018-06-01
In the present article, a comparative study of the structural and magnetic properties of nano-sized M0.3Fe0.7Fe2O4 (M = Fe, Zn and Mn) ferrites is reported. The X-ray diffraction (XRD) patterns show that the crystallite size depends on the cation distribution. Rietveld refinement of the XRD patterns using MAUD software determines the distribution of cations and the unit cell dimensions. The magnetic measurements show that the maximum and minimum values of saturation magnetization are obtained for the Zn- and Mn-doped samples, respectively. The peak temperature of the AC magnetic susceptibility of the Zn- and Fe-doped samples below 300 K indicates superparamagnetic behavior in these samples at room temperature. The AC susceptibility results also confirm the presence of strong interactions between the nanoparticles, which leads to a superspin glass state in the samples at low temperatures.
NASA Astrophysics Data System (ADS)
Lamont, Peter A.; Gage, John D.
2000-01-01
Morphological adaptation to low dissolved oxygen, consisting of enlarged respiratory surface area, is described in polychaete species belonging to the family Spionidae from the Oman margin, where the oxygen minimum zone impinges on the continental slope. Similar adaptation is suggested for species in the family Cossuridae. Such morphological adaptation apparently has not been previously recorded among polychaetes living in hypoxic conditions. The response consists of enlargement in size and branching of the branchiae relative to similar species living in normal levels of dissolved oxygen. Specimens were examined in benthic samples from different depths along a transect through the oxygen minimum zone. Two undescribed spionid species showed a highly significant trend of increasing respiratory area relative to body size with decreasing depth and oxygen within the OMZ. Yet the size and number of branchiae are often used as taxonomic characters. These within-species differences in size and number of branchiae may be a direct response by the phenotype to the intensity of hypoxia. The alternative explanations are that they either reflect a pattern of differential post-settlement selection among a highly variable genotype, or represent early genetic differentiation among depth-isolated sub-populations.
Shen, You-xin; Liu, Wei-li; Li, Yu-hui; Guan, Hui-lin
2014-01-01
A large number of small-sized samples invariably shows that woody species are absent from forest soil seed banks, leading to a large discrepancy with the seedling bank on the forest floor. We ask: 1) Does this conventional sampling strategy limit the detection of seeds of woody species? 2) Are large sample areas and sample sizes needed for higher recovery of seeds of woody species? We collected 100 samples of 10 cm (length) × 10 cm (width) × 10 cm (depth), referred to as a large number of small-sized samples (LNSS), in a 1 ha forest plot and placed them to germinate in a greenhouse, and collected 30 samples of 1 m × 1 m × 10 cm, referred to as a small number of large-sized samples (SNLS), and placed them (10 each) in a nearby secondary forest, shrub land and grassland. Only 15.7% of woody plant species of the forest stand were detected by the 100 LNSS, contrasting with 22.9%, 37.3% and 20.5% of woody plant species detected by SNLS in the secondary forest, shrub land and grassland, respectively. The increase in number of species with sampled area confirmed power-law relationships for the forest stand, the LNSS and the SNLS at all three recipient sites. Our results, although based on one forest, indicate that the conventional LNSS strategy did not yield a high percentage of detection for woody species, whereas the SNLS strategy yielded a higher percentage of detection for woody species in the seed bank if samples were exposed to a better field germination environment. A 4 m2 minimum sample area derived from the power equations is larger than the sampled area in most studies in the literature. An increased sample size also is needed to obtain an increased sample area if the number of samples is to remain relatively low.
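The minimum sample area above comes from inverting a fitted species-area power law, S = c·A^z. A toy version of that inversion is sketched below; the coefficients c and z are hypothetical placeholders, not the values fitted in the study.

```python
# Species-area power law S = c * A**z; C and Z below are hypothetical
# illustration values, not the coefficients fitted in the study.
C, Z = 5.0, 0.25

def species_detected(area_m2, c=C, z=Z):
    """Expected species count recovered from a sampled area (power law)."""
    return c * area_m2 ** z

def min_area_for(target_species, c=C, z=Z):
    """Minimum sampled area needed to reach a target species count,
    obtained by inverting the power law: A = (S/c)**(1/z)."""
    return (target_species / c) ** (1.0 / z)
```

Because z is well below 1, the required area grows steeply with the target species count — which is why small-sample strategies under-detect woody species even with many samples.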
24 CFR 984.105 - Minimum program size.
Code of Federal Regulations, 2010 CFR
2010-04-01
... DEVELOPMENT SECTION 8 AND PUBLIC HOUSING FAMILY SELF-SUFFICIENCY PROGRAM General § 984.105 Minimum program... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Minimum program size. 984.105 Section 984.105 Housing and Urban Development Regulations Relating to Housing and Urban Development...
Noninferiority trial designs for odds ratios and risk differences.
Hilton, Joan F
2010-04-30
This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
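For comparison with the constrained maximum likelihood design above, the textbook normal-approximation sample size for a risk-difference noninferiority test with equal true rates in both arms can be sketched as follows. This is the simple 1:1-allocation formula, not the paper's constrained-ML derivation; the 80% response rate and 10-point margin are illustrative.

```python
from math import ceil
from statistics import NormalDist

def ni_n_per_group(p, margin, alpha=0.025, power=0.90):
    """Per-group n for a risk-difference noninferiority test, assuming
    equal true rates p in both arms, a one-sided level-alpha test, and
    1:1 allocation (simple normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(power)
    return ceil(z ** 2 * 2 * p * (1 - p) / margin ** 2)

n = ni_n_per_group(0.80, 0.10)  # e.g. 80% response rate, 10-point margin
```

The quadratic dependence on the margin is visible immediately: halving the margin roughly quadruples the per-group sample size, which is why margin choice dominates noninferiority design.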
The minimum area requirements (MAR) for giant panda: an empirical study
Qing, Jing; Yang, Zhisong; He, Ke; Zhang, Zejun; Gu, Xiaodong; Yang, Xuyu; Zhang, Wen; Yang, Biao; Qi, Dunwu; Dai, Qiang
2016-01-01
Habitat fragmentation can reduce population viability, especially for area-sensitive species. The Minimum Area Requirements (MAR) of a population is the area required for the population’s long-term persistence. In this study, the response of occupancy probability of giant pandas against habitat patch size was studied in five of the six mountain ranges inhabited by giant panda, which cover over 78% of the global distribution of giant panda habitat. The probability of giant panda occurrence was positively associated with habitat patch area, and the observed increase in occupancy probability with patch size was higher than that due to passive sampling alone. These results suggest that the giant panda is an area-sensitive species. The MAR for giant panda was estimated to be 114.7 km2 based on analysis of its occupancy probability. Giant panda habitats appear more fragmented in the three southern mountain ranges, while they are large and more continuous in the other two. Establishing corridors among habitat patches can mitigate habitat fragmentation, but expanding habitat patch sizes is necessary in mountain ranges where fragmentation is most intensive. PMID:27929520
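The MAR estimate above rests on a fitted occupancy-versus-patch-area relationship. A toy version using a logistic model is sketched below; the coefficients and the occupancy threshold are hypothetical placeholders, not the study's fitted values.

```python
from math import exp, log

# Hypothetical occupancy model: logit P(occupied) = B0 + B1 * ln(area_km2).
# B0 and B1 are illustrative placeholders, not fitted values from the study.
B0, B1 = -2.0, 0.8

def occupancy_prob(area_km2, b0=B0, b1=B1):
    """Logistic occupancy probability as a function of patch area."""
    eta = b0 + b1 * log(area_km2)
    return 1.0 / (1.0 + exp(-eta))

def mar(p_target, b0=B0, b1=B1):
    """Patch area at which occupancy probability reaches p_target,
    found by inverting the logistic model analytically."""
    logit = log(p_target / (1.0 - p_target))
    return exp((logit - b0) / b1)
```

With a fitted curve of this form, the MAR follows directly from the chosen occupancy threshold; the 114.7 km2 figure in the abstract corresponds to the authors' own model and threshold, which this sketch does not reproduce.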
Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B
2008-06-01
To assess reproducibility of core laboratory performance and its impact on sample size calculations. Little information exists about the overall reproducibility of core laboratories, in contradistinction to the performance of individual technicians. Also, qualitative parameters are increasingly being adjudicated as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can provide reproducibility performance comparable to that commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess, and conclusions based on these parameters should arise only from very large trials.
On the Importance of Cycle Minimum in Sunspot Cycle Prediction
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1996-01-01
The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.
Stress dependence of microstructures in experimentally deformed calcite
NASA Astrophysics Data System (ADS)
Platt, John P.; De Bresser, J. H. P.
2017-12-01
Optical measurements of microstructural features in experimentally deformed Carrara marble help define their dependence on stress. These features include dynamically recrystallized grain size (Dr), subgrain size (Sg), minimum bulge size (Lρ), and the maximum scale length for surface-energy-driven grain-boundary migration (Lγ). Taken together with previously published data, Dr defines a paleopiezometer over the stress range 15-291 MPa and the temperature range 500-1000 °C, with a stress exponent of -1.09 (CI -1.27 to -0.95), showing no detectable dependence on temperature. Sg and Dr measured in the same samples are closely similar in size, suggesting that the new grains did not grow significantly after nucleation. Lρ and Lγ measured on each sample define a relationship to stress with an exponent of approximately -1.6, which helps separate a region of dominant strain-energy-driven grain-boundary migration at high stress from a region of dominant surface-energy-driven grain-boundary migration at low stress.
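A piezometer of this form is applied by inverting the power law D = B·σ^m. The exponent m = -1.09 below is the value reported in the abstract, but the pre-factor B is a hypothetical placeholder, not the calibrated constant from the study.

```python
# Grain-size piezometer D = B * sigma**M, with M = -1.09 as reported above.
# B is a hypothetical pre-factor (D in microns, sigma in MPa), not calibrated.
M = -1.09
B = 10 ** 3.5

def recrystallized_grain_size(sigma_mpa, b=B, m=M):
    """Predicted dynamically recrystallized grain size at a given stress."""
    return b * sigma_mpa ** m

def paleostress(d_microns, b=B, m=M):
    """Invert the piezometer: stress implied by a measured grain size."""
    return (d_microns / b) ** (1.0 / m)
```

Because m is negative, higher stresses produce smaller recrystallized grains, and the near-unity magnitude of the exponent means measured grain size maps almost linearly (inversely) onto stress.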
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis.
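Under a normal model with equal group variances, π = Φ(δ): the standard normal CDF evaluated at the population standardized mean difference. The naive plug-in estimator simply applies Φ to the sample value d; note this is the simple estimator, not the exact minimum variance unbiased estimator the paper derives:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def overlap_pi(d):
    """Plug-in estimate of pi = P(treatment obs > control mean)
    from the standardized mean difference d, under a normal model
    with equal variances: pi = Phi(d)."""
    return phi(d)
```

For d = 0 the two groups coincide and π = 0.5; for d = 1 about 84% of treatment observations exceed the control mean.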
Muskellunge growth potential in northern Wisconsin: implications for trophy management
Faust, Matthew D.; Isermann, Daniel A.; Luehring, Mark A.; Hansen, Michael J.
2015-01-01
The growth potential of Muskellunge Esox masquinongy was evaluated by back-calculating growth histories from cleithra removed from 305 fish collected during 1995–2011 to determine whether it was consistent with trophy management goals in northern Wisconsin. Female Muskellunge had a larger mean asymptotic length (49.8 in) than did males (43.4 in). Minimum ultimate size of female Muskellunge (45.0 in) equaled the 45.0-in minimum length limit, but was less than the 50.0-in minimum length limit used on Wisconsin's trophy waters, while the minimum ultimate size of male Muskellunge (34.0 in) was less than the statewide minimum length limit. Minimum reproductive sizes for both sexes were less than Wisconsin's trophy minimum length limits. Mean growth potential of female Muskellunge in northern Wisconsin appears to be sufficient for meeting trophy management objectives and angler expectations. Muskellunge in northern Wisconsin had similar growth potential to those in Ontario populations, but lower growth potential than Minnesota's populations, perhaps because of genetic and environmental differences.
NASA Astrophysics Data System (ADS)
Torres Beltran, M.
2016-02-01
The Scientific Committee on Oceanographic Research (SCOR) Working Group 144 "Microbial Community Responses to Ocean Deoxygenation" workshop held in Vancouver, British Columbia in July 2014 had the primary objective of kick-starting the establishment of a minimal core of technologies, techniques and standard operating procedures (SOPs) to enable compatible process rate and multi-molecular data (DNA, RNA and protein) collection in marine oxygen minimum zones (OMZs) and other oxygen starved waters. Experimental activities conducted in Saanich Inlet, a seasonally anoxic fjord on Vancouver Island, British Columbia, were designed to compare and cross-calibrate in situ sampling devices (McLane PPS system) with conventional bottle sampling and incubation methods. Bottle effects on microbial community composition and activity were tested using different filter combinations and sample volumes to compare PPS/IPS (0.4 µm) versus Sterivex (0.22 µm) filtration methods with and without prefilters (2.7 µm). Resulting biomass was processed for small subunit ribosomal RNA gene sequencing across all three domains of life on the 454 platform, followed by downstream community structure analyses. Significant community shifts occurred within and between filter fractions for in situ versus on-ship processed samples. For instance, the relative abundance of several bacterial phyla including Bacteroidetes, Delta and Gammaproteobacteria decreased five-fold on-ship when compared to in situ filtration. Experimental mesocosms showed community structure and activity similar to those of in situ filtered samples, indicating the need to cross-calibrate incubations to constrain bottle effects. In addition, alpha and beta diversity changed significantly as a function of filter size and volume, as did the operational taxonomic units identified by indicator species analysis for each filter size.
Our results provide statistical support that microbial community structure is systematically biased by filter fraction methods and highlight the need for establishing compatible techniques among researchers that facilitate comparative and reproducible science for the whole community.
Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M
2018-05-01
Among the earliest macroecological patterns documented is the relationship between geographic range and body size, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range sizes required by the species' body size, to untangle its role in the diversification of a Neotropical species-rich bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. Species with the highest and lowest distances to the minimum range size had lower speciation rates, while species at intermediate distances had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned.
NASA Astrophysics Data System (ADS)
Hasan Rhaif Al-Sahlanee, Mayyadah; Maizan Ramli, Ramzun; Abdul Hassan Ali, Miami; Fadhil Tawfiq, Nada; Zahirah Noor Azman, Nurul; Abdul Rahman, Azhar; Shahrim Mustafa, Iskandar; Noor Ashikin Nik Abdul Razak, Nik; Zakiah Yahaya, Nor; Mohammed Al-Marri, Hana; Syuhada Ayob, Nur; Zakaria, Nabela
2017-10-01
Trace elements are essential nutritional components in humans, and their content in tissue has a significant influence on infant size. The aim of this study is to evaluate the concentrations of uranium (U), lead (Pb) and iron (Fe), and the absorption of Pb and Fe, in maternal and umbilical cord blood samples. The concentration and absorption of Pb and Fe in blood samples were determined using an atomic absorption spectrophotometer, while the uranium concentration was determined using a CR-39 detector. Fifty women aged 16-44 years were involved in this study. Results show that the maximum and minimum values of both concentration and absorption in the maternal samples were for Pb and Fe, respectively. In addition, for umbilical cord samples, the maximum values of concentration and absorption were for Fe, and the minimum concentration and absorption were for U and Pb, respectively. A significant correlation between maternal and umbilical cord blood samples was found. This indicates that Pb, U and Fe can transfer easily from the maternal to the fetal body, which affects the growth of the fetus.
Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks
Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire
2009-01-01
This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of real-time applications to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
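The paper's own reduction algorithms are not reproduced in the abstract; as one plausible building block, a fixed-size uniform sample of a sensor stream can be maintained with reservoir sampling (Algorithm R). This is a sketch of a standard technique, not the authors' method:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain a uniform random sample of fixed size k over a
    stream of unknown length (Algorithm R). Each item seen so far
    ends up in the sample with equal probability k/i."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # replace with prob. k/(i+1)
            if j < k:
                sample[j] = item
    return sample
```

A bounded reservoir keeps per-packet work and memory constant, which is the kind of property a deadline-driven sensor application needs.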
Methods for the preparation and analysis of solids and suspended solids for methylmercury
DeWild, John F.; Olund, Shane D.; Olson, Mark L.; Tate, Michael T.
2004-01-01
This report presents the methods and method performance data for the determination of methylmercury concentrations in solids and suspended solids. Using the methods outlined here, the U.S. Geological Survey's Wisconsin District Mercury Laboratory can consistently detect methylmercury in solids and suspended solids at environmentally relevant concentrations. Solids can be analyzed wet or freeze dried with a minimum detection limit of 0.08 ng/g (as-processed). Suspended solids must first be isolated from aqueous matrices by filtration. The minimum detection limit for suspended solids is 0.01 ng per filter resulting in a minimum reporting limit ranging from 0.2 ng/L for a 0.05 L filtered volume to 0.01 ng/L for a 1.0 L filtered volume. Maximum concentrations for both matrices can be extended to cover nearly any amount of methylmercury by limiting sample size.
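The reporting limits quoted above follow from dividing the per-filter detection limit by the filtered volume:

```python
def reporting_limit_ng_per_L(per_filter_ng=0.01, filtered_volume_L=0.05):
    """Minimum reporting limit in ng/L scales inversely with the
    filtered volume: 0.01 ng per filter over 0.05 L gives 0.2 ng/L,
    and over 1.0 L gives 0.01 ng/L, as stated in the report."""
    return per_filter_ng / filtered_volume_L
```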
Estimating the breeding population of long-billed curlew in the United States
Stanley, T.R.; Skagen, S.K.
2007-01-01
Determining population size and long-term trends in population size for species of high concern is a priority of international, national, and regional conservation plans. Long-billed curlews (Numenius americanus) are a species of special concern in North America due to apparent declines in their population. Because long-billed curlews are not adequately monitored by existing programs, we undertook a 2-year study with the goals of 1) determining present long-billed curlew distribution and breeding population size in the United States and 2) providing recommendations for a long-term long-billed curlew monitoring protocol. We selected a stratified random sample of survey routes in 16 western states for sampling in 2004 and 2005, and we analyzed count data from these routes to estimate detection probabilities and abundance. In addition, we evaluated habitat along roadsides to determine how well roadsides represented habitat throughout the sampling units. We estimated there were 164,515 (SE = 42,047) breeding long-billed curlews in 2004, and 109,533 (SE = 31,060) breeding individuals in 2005. These estimates far exceed currently accepted estimates based on expert opinion. We found that habitat along roadsides was representative of long-billed curlew habitat in general. We make recommendations for improving sampling methodology, and we present power curves to provide guidance on minimum sample sizes required to detect trends in abundance.
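Abundance estimates like those above rest on correcting raw counts for imperfect detection. A generic sketch of the basic estimator N_hat = C / p_hat (the study's stratified design and variance estimation are richer than this):

```python
def estimate_abundance(count, p_detect):
    """Correct a raw count for imperfect detection:
    N_hat = C / p_hat. A textbook estimator for illustration,
    not the study's full stratified model."""
    if not 0 < p_detect <= 1:
        raise ValueError("detection probability must be in (0, 1]")
    return count / p_detect
```

For example, 50 birds counted with an estimated detection probability of 0.5 implies roughly 100 birds present.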
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris D.
2010-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by favoring large and/or intense convective systems. Convective systems were defined from TRMM data as a cluster of pixels with an 85 GHz polarization-corrected brightness temperature below 255 K and an area of at least 64 km². The study database consisted of convective systems in West Africa from May-Sep 1998-2007 and in the western Pacific from May-Nov 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences among the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity in distribution shape to the size of the sampling area, length of sample period, and climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is wetter or drier than normal.
Sasaki, Michiya; Ogino, Haruyuki; Hattori, Takatoshi
2018-06-08
In order to prove a small increment in a risk of concern in an epidemiological study, a large sample of a population is generally required. Since the background risk of an end point of interest, such as cancer mortality, is affected by various factors, such as lifestyle (diet, smoking, etc.), adjustment for such factors is necessary. However, it is impossible to inclusively and completely adjust for such factors; therefore, uncertainty in the background risk remains for control and exposed populations, indicating that there is a minimum limit to the lower bound for the provable risk regardless of the sample size. In this case study, we developed and discussed the minimum provable risk considering the uncertainty in background risk for hypothetical populations by referring to recent Japanese statistical information to grasp the extent of the minimum provable risk. Risk of fatal diseases due to radiation exposure, which has recently been the focus of radiological protection, was also examined by comparative assessment of the minimum provable risk for cancer and circulatory diseases. It was estimated that the minimum provable risk for circulatory disease mortality was much greater than that for cancer mortality, approximately five to seven times larger; circulatory disease mortality is more difficult to prove as a radiation risk than cancer mortality under the conditions used in this case study.
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-04-01
In the last three decades, an increasing number of studies analyzed spatial patterns in throughfall to investigate the consequences of rainfall redistribution for biogeochemical and hydrological processes in forests. In the majority of cases, variograms were used to characterize the spatial properties of the throughfall data. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and an appropriate layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation methods on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with heavy outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling), and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. 
Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the numbers recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes << 200, our current knowledge about throughfall spatial variability stands on shaky ground.
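The method-of-moments (Matheron) variogram estimator the authors evaluate can be sketched compactly. This 1-D version is illustrative only; the study samples 2-D plots and also compares robust and residual-maximum-likelihood estimators:

```python
import numpy as np

def mom_variogram(x, z, lags, tol):
    """Matheron's method-of-moments semivariance estimator:
    gamma(h) = mean of 0.5*(z_i - z_j)**2 over all pairs whose
    separation |x_i - x_j| lies within tol of lag h.
    1-D sketch for illustration; real throughfall data are 2-D."""
    x, z = np.asarray(x, float), np.asarray(z, float)
    dist = np.abs(x[:, None] - x[None, :])
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = []
    for h in lags:
        # upper triangle only, so each pair is counted once
        mask = np.triu(np.abs(dist - h) <= tol, k=1)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)
```

For a linear field z = x the estimator returns gamma(h) = 0.5 h², which makes it easy to sanity-check an implementation.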
NASA Astrophysics Data System (ADS)
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. 
Given that most previous throughfall studies relied on method-of-moments variogram estimation and sample sizes ≪200, currently available data are prone to large uncertainties.
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ, where each of β and γ is either a terminal or nonterminal symbol, to Extended Chomsky Normal Form, which also includes A→B. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm in the previous version of Synapse, the improved version uses a novel rule generation method, called ``bridging,'' which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the methods of rule generation and search strategies.
Laser Surface Modification of H13 Die Steel using Different Laser Spot Sizes
NASA Astrophysics Data System (ADS)
Aqida, S. N.; Naher, S.; Brabazon, D.
2011-05-01
This paper presents a laser surface modification process for AISI H13 tool steel using three laser spot sizes, with the aim of achieving reduced grain size and surface roughness. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage and pulse repetition frequency (PRF). A metallographic study and image analysis were carried out to measure the grain size, and the modified surface roughness was measured using a two-dimensional surface profilometer. From the metallographic study, the smallest grain sizes measured on the laser-modified surface were between 0.51 μm and 2.54 μm. The minimum surface roughness, Ra, recorded was 3.0 μm. This surface roughness of the modified die steel is similar to the surface quality of cast products. The correlation of grain size with hardness followed the Hall-Petch relationship. The potential found for increased surface hardness represents an important means of sustaining tooling life.
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c, and two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.
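For the unconstrained densest subgraph problem, Charikar's greedy peeling (repeatedly delete a minimum-degree vertex and keep the densest intermediate graph) gives a 2-approximation and is a useful baseline. This sketch is context for the abstract, not the authors' method; the size-constrained variants (DalkS, DamkS) studied in the paper require different algorithms:

```python
import heapq

def densest_subgraph_greedy(adj):
    """Charikar's greedy peeling for the unconstrained densest
    subgraph (density = edges / vertices). Returns the best vertex
    set seen while peeling minimum-degree vertices, a
    2-approximation of the optimum."""
    adj = {v: set(ns) for v, ns in adj.items()}
    m = sum(len(ns) for ns in adj.values()) // 2
    best, best_density = set(adj), m / max(len(adj), 1)
    heap = [(len(ns), v) for v, ns in adj.items()]
    heapq.heapify(heap)
    while adj:
        deg, v = heapq.heappop(heap)
        if v not in adj or deg != len(adj[v]):
            continue  # stale heap entry; a fresher one exists
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        del adj[v]
        if adj and m / len(adj) > best_density:
            best, best_density = set(adj), m / len(adj)
    return best, best_density
```

On a K4 with one pendant vertex, peeling the pendant exposes the clique, whose density 6/4 = 1.5 beats the whole graph's 7/5.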
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, Erik; Trolinger, James D.; Lacey, Ian
This work reports on the development of a binary pseudo-random test sample optimized to calibrate the MTF of optical microscopes. The sample consists of a number of 1-D and 2-D patterns, with different minimum sizes of spatial artifacts from 300 nm to 2 microns. We describe the mathematical background, fabrication process, and data acquisition and analysis procedure used to return spatial-frequency-based instrument calibration. We show that the developed samples satisfy the characteristics of a test standard: functionality, ease of specification and fabrication, reproducibility, and low sensitivity to manufacturing error.
Online clustering algorithms for radar emitter classification.
Liu, Jun; Lee, Jim P Y; Li, Lingjie; Luo, Zhi-Quan; Wong, K Max
2005-08-01
Radar emitter classification is a special application of data clustering for classifying unknown radar emitters from received radar pulse samples. The main challenges of this task are the high dimensionality of radar pulse samples, small sample group size, and closely located radar pulse clusters. In this paper, two new online clustering algorithms are developed for radar emitter classification: One is model-based using the Minimum Description Length (MDL) criterion and the other is based on competitive learning. Computational complexity is analyzed for each algorithm and then compared. Simulation results show the superior performance of the model-based algorithm over competitive learning in terms of better classification accuracy, flexibility, and stability.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
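With noiseless counts, the maximum likelihood estimate of a Markov transition matrix is simply row normalization of observed state-to-state transition counts; the paper's point is precisely that this deterministic treatment breaks down when the measurements are noisy. A sketch of the naive estimator for contrast:

```python
import numpy as np

def transition_mle(counts):
    """ML estimate of a Markov transition matrix from observed
    transition counts: divide each row by its total. Ignores the
    measurement noise the paper argues should be modelled; rows
    with no observations are returned as zeros."""
    counts = np.asarray(counts, float)
    totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, totals,
                     out=np.zeros_like(counts), where=totals > 0)
```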
Are power calculations useful? A multicentre neuroimaging study
Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward
2014-01-01
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
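The voxel-based calculations in the paper are specific to imaging data, but the underlying trade-off between sample size and detectable effect size follows the textbook normal-approximation formula n = 2*(z_{1-alpha/2} + z_power)^2 / d^2 per group. A self-contained sketch (bisection replaces a library inverse normal CDF, and this generic formula is not the paper's voxel-based method):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def inv_phi(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (phi is monotone)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Sample size per group for a two-sample comparison,
    normal approximation: n = 2*(z_{1-a/2} + z_power)^2 / d^2."""
    z = inv_phi(1 - alpha / 2) + inv_phi(power)
    return 2 * z * z / effect_size ** 2
```

For a medium effect d = 0.5 at alpha = 0.05 and 80% power, this gives roughly 63 participants per group, illustrating how halving the detectable effect quadruples the required sample.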
C.M. Free; R.M. Landis; J. Grogan; M.D. Schulze; M. Lentini; O. Dunisch
2014-01-01
Knowledge of tree age-size relationships is essential towards evaluating the sustainability of harvest regulations that include minimum diameter cutting limits and fixed-length cutting cycles. Although many tropical trees form annual growth rings and can be aged from discs or cores, destructive sampling is not always an option for valuable or threatened species. We...
An evaluation of space acquired data as a tool for wildlife management in Alaska
NASA Technical Reports Server (NTRS)
Vantries, B. J. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Interpretation of ERTS-1 imagery by color-coded densitometric displays and digitally processed data verified that, with adequate in situ quadrat sampling, ERTS-1 data could be extrapolated to describe accurately the vegetative characteristics of analogous sites, and that surface acres of water for waterfowl production were obtainable for ponds a minimum of 5 acres in size.
Schröder, Winfried; Nickel, Stefan; Schönrock, Simon; Meyer, Michaela; Wosniok, Werner; Harmens, Harry; Frontasyeva, Marina V; Alber, Renate; Aleksiayenak, Julia; Barandovski, Lambe; Carballeira, Alejo; Danielsson, Helena; de Temmermann, Ludwig; Godzik, Barbara; Jeran, Zvonka; Karlsson, Gunilla Pihl; Lazo, Pranvera; Leblond, Sebastien; Lindroos, Antti-Jussi; Liiv, Siiri; Magnússon, Sigurður H; Mankovska, Blanka; Martínez-Abaigar, Javier; Piispanen, Juha; Poikolainen, Jarmo; Popescu, Ion V; Qarri, Flora; Santamaria, Jesus Miguel; Skudnik, Mitja; Špirić, Zdravko; Stafilov, Trajce; Steinnes, Eiliv; Stihi, Claudia; Thöni, Lotti; Uggerud, Hilde Thelle; Zechmeister, Harald G
2016-06-01
For analysing element input into ecosystems and associated risks due to atmospheric deposition, element concentrations in moss provide complementary and time-integrated data at high spatial resolution every 5 years since 1990. The paper reviews (1) minimum sample sizes needed for reliable, statistical estimation of mean values at four different spatial scales (European and national level as well as landscape-specific level covering Europe and single countries); (2) trends of heavy metal (HM) and nitrogen (N) concentrations in moss in Europe (1990-2010); (3) correlations between concentrations of HM in moss and soil specimens collected across Norway (1990-2010); and (4) canopy drip-induced site-specific variation of N concentration in moss sampled in seven European countries (1990-2013). While the minimum sample sizes on the European and national level were achieved without exception, for some ecological land classes and elements, the coverage with sampling sites should be improved. The decline in emission and subsequent atmospheric deposition of HM across Europe has resulted in decreasing HM concentrations in moss between 1990 and 2010. In contrast, hardly any changes were observed for N in moss between 2005, when N was included into the survey for the first time, and 2010. In Norway, both, the moss and the soil survey data sets, were correlated, indicating a decrease of HM concentrations in moss and soil. At the site level, the average N deposition inside of forests was almost three times higher than the average N deposition outside of forests.
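A generic minimum sample size formula for estimating a mean within a target margin of error illustrates the kind of calculation behind the minimum sample sizes reviewed above (the moss survey's own criteria are more detailed and scale-dependent):

```python
import math

def min_n_for_mean(sigma, margin, z=1.96):
    """Minimum sample size so the mean is estimated to within
    +/- margin at ~95% confidence (z = 1.96):
    n = ceil((z * sigma / margin)**2). Generic textbook formula
    shown for illustration only."""
    return math.ceil((z * sigma / margin) ** 2)
```

For example, a standard deviation of 10 and a target margin of 2 require 97 sampling sites.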
Francis, Jill J; Johnston, Marie; Robertson, Clare; Glidewell, Liz; Entwistle, Vikki; Eccles, Martin P; Grimshaw, Jeremy M
2010-12-01
In interview studies, sample size is often justified by interviewing participants until reaching 'data saturation'. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (stopping criterion). We demonstrate these principles in two studies, based on the theory of planned behaviour, designed to identify three belief categories (Behavioural, Normative and Control), using an initial analysis sample of 10 and stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or studywise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about undergoing genetic testing. Studywise data saturation was achieved at interview 17. We propose that these principles be specified when reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
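The two principles above (an initial analysis sample, then a stopping criterion of consecutive idea-free interviews) can be sketched as a small function. The input encoding, a count of new ideas contributed by each successive interview, is a hypothetical simplification, not the authors' software:

```python
def saturation_interview(new_ideas_per_interview, initial_sample=10, stopping=3):
    """Return the 1-based interview index at which saturation is declared:
    after `initial_sample` interviews are analyzed, saturation is reached
    once `stopping` consecutive interviews add no new ideas. Returns None
    if the criterion is never met."""
    run = 0
    for i, n_new in enumerate(new_ideas_per_interview, start=1):
        run = run + 1 if n_new == 0 else 0
        # the idea-free run must lie entirely after the initial sample
        if run >= stopping and i >= initial_sample + stopping:
            return i
    return None
```

With the default arguments this mirrors the studies' "initial analysis sample of 10 and stopping criterion of 3".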
DOE Office of Scientific and Technical Information (OSTI.GOV)
Papelis, Charalambos; Um, Wooyong; Russel, Charles E.
2003-03-28
The specific surface area of natural and manmade solid materials is a key parameter controlling important interfacial processes in natural environments and engineered systems, including dissolution reactions and sorption processes at solid-fluid interfaces. To improve our ability to quantify the release of trace elements trapped in natural glasses, the release of hazardous compounds trapped in manmade glasses, or the release of radionuclides from nuclear melt glass, we measured the specific surface area of natural and manmade glasses as a function of particle size, morphology, and composition. Volcanic ash, volcanic tuff, tektites, obsidian glass, and in situ vitrified rock were analyzed. Specific surface area estimates were obtained using krypton as gas adsorbent and the BET model. The range of surface areas measured exceeded three orders of magnitude. A tektite sample had the highest surface area (1.65 m2/g), while one of the samples of in situ vitrified rock had the lowest surface area (0.0016 m2/g). The specific surface area of the samples was a function of particle size, decreasing with increasing particle size. Different types of materials, however, showed variable dependence on particle size, and could be assigned to one of three distinct groups: (1) samples with low surface area dependence on particle size and surface areas approximately two orders of magnitude higher than the surface area of smooth spheres of equivalent size. The specific surface area of these materials was attributed mostly to internal porosity and surface roughness. (2) samples that showed a trend of decreasing surface area dependence on particle size as the particle size increased. The minimum specific surface area of these materials was between 0.1 and 0.01 m2/g and was also attributed to internal porosity and surface roughness. (3) samples whose surface area showed a monotonic decrease with increasing particle size, never reaching an ultimate surface area limit within the particle size range examined. The surface area results were consistent with particle morphology, examined by scanning electron microscopy, and have significant implications for the release of radionuclides and toxic metals in the environment.
Effects of sample size on kernel home range estimates
Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.
1999-01-01
Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
Klinkenberg, Don; Thomas, Ekelijn; Artavia, Francisco F Calvo; Bouma, Annemarie
2011-08-01
Design of surveillance programs to detect infections could benefit from more insight into sampling schemes. We address the effect of sampling schemes for Salmonella Enteritidis surveillance in laying hens. Based on experimental estimates for the transmission rate in flocks, and the characteristics of an egg immunological test, we simulated outbreaks under various sampling schemes, and under the current boot swab program with a 15-week sampling interval. Declaring a flock infected based on a single positive egg was not possible because test specificity was too low. Thus, a threshold number of positive eggs was defined to declare a flock infected, and, for small sample sizes, eggs from previous samplings had to be included in a cumulative sample to guarantee a minimum flock-level specificity. Effectiveness of surveillance was measured by the proportion of outbreaks detected, and by the number of contaminated table eggs brought on the market. The boot swab program detected 90% of the outbreaks, with 75% fewer contaminated eggs compared to no surveillance, whereas the baseline egg program (30 eggs every 15 weeks) detected 86%, with 73% fewer contaminated eggs. We conclude that a larger sample size results in more detected outbreaks, whereas a smaller sampling interval decreases the number of contaminated eggs. Decreasing sample size and interval simultaneously reduces the number of contaminated eggs, but not indefinitely: the advantage of more frequent sampling is counterbalanced by the cumulative sample including less recently laid eggs. Apparently, optimizing surveillance has its limits when test specificity is taken into account. © 2011 Society for Risk Analysis.
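The flock-level specificity constraint described above can be illustrated with a binomial calculation: the probability that an uninfected flock stays below the positive-egg threshold given an imperfect egg test. This is a sketch under an egg-independence assumption, not the authors' simulation model:

```python
from math import comb

def flock_specificity(n_eggs, test_specificity, threshold):
    """P(an uninfected flock yields fewer than `threshold` test-positive
    eggs out of n_eggs), i.e. the flock-level specificity, assuming eggs
    test independently. For small per-sampling sizes, n_eggs can stand
    for a cumulative sample pooled over previous samplings."""
    fp = 1.0 - test_specificity          # per-egg false-positive rate
    return sum(comb(n_eggs, k) * fp ** k * (1.0 - fp) ** (n_eggs - k)
               for k in range(threshold))
```

Raising the threshold, or pooling eggs into a larger cumulative sample before applying it, pushes the flock-level specificity toward a target value; the trade-off the abstract notes is that pooled eggs are less recently laid.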
Minimum and Maximum Times Required to Obtain Representative Suspended Sediment Samples
NASA Astrophysics Data System (ADS)
Gitto, A.; Venditti, J. G.; Kostaschuk, R.; Church, M. A.
2014-12-01
Bottle sampling is a convenient method of obtaining suspended sediment measurements for the development of sediment budgets. While these methods are generally considered reliable, recent analysis of depth-integrated sampling has identified considerable uncertainty in measurements of grain-size concentration between grain-size classes of multiple samples. Point-integrated bottle sampling is assumed to represent the mean concentration of suspended sediment, but the uncertainty surrounding this method is not well understood. Here we examine at-a-point variability in velocity, suspended sediment concentration, grain-size distribution, and grain-size moments to determine whether traditional point-integrated methods provide a representative sample of suspended sediment. We present continuous hour-long observations of suspended sediment from the sand-bedded portion of the Fraser River at Mission, British Columbia, Canada, using a LISST laser-diffraction instrument. Spectral analysis suggests that there is no statistically significant peak in energy density, indicating the absence of periodic fluctuations in flow and suspended sediment. However, a slope break in the spectra at 0.003 Hz corresponds to a period of 5.5 minutes. This coincides with the threshold between large-scale turbulent eddies, which scale with channel width/mean velocity, and hydraulic phenomena related to channel dynamics. This suggests that suspended sediment samples taken over a period longer than 5.5 minutes incorporate variability at scales larger than turbulent phenomena in this channel. Examination of 5.5-minute periods of our time series indicates that ~20% of the time a stable mean value of volumetric concentration is reached within 30 seconds, a typical bottle sample duration. In ~12% of measurements a stable mean was not reached over the 5.5-minute sample duration. The remaining measurements achieve a stable mean in an even distribution over the intervening interval.
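The notion of a "stable mean" can be made concrete with a simple running-mean criterion. This is an illustrative definition; the authors' actual stability test is not specified in the abstract:

```python
def stabilization_index(series, tol=0.05):
    """1-based number of samples after which the running mean stays
    within a fractional tolerance `tol` of the full-record mean;
    None if the running mean never settles inside that band."""
    final = sum(series) / len(series)
    running, cum = [], 0.0
    for i, x in enumerate(series, start=1):
        cum += x
        running.append(cum / i)
    for i in range(len(series)):
        if all(abs(m - final) <= tol * abs(final) for m in running[i:]):
            return i + 1
    return None
```

Applied to 1 Hz concentration records, the returned count (in samples, here seconds) can be compared against the 30-second bottle duration and the 5.5-minute window discussed above.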
Maturation and sexual ontogeny in the spangled emperor Lethrinus nebulosus.
Marriott, R J; Jarvis, N D C; Adams, D J; Gallash, A E; Norriss, J; Newman, S J
2010-04-01
The reproductive development and sexual ontogeny of spangled emperor Lethrinus nebulosus populations in the Ningaloo Marine Park (NMP) were investigated to obtain an improved understanding of its evolved reproductive strategy and data for fisheries management. Evidence derived from (1) analyses of histological data and sampled sex ratios with size and age, (2) the identification of residual previtellogenic oocytes in immature and mature testes sampled during the spawning season and (3) observed changes in testis internal structure with increasing fish size and age, demonstrated a non-functional protogynous hermaphroditic strategy (or functional gonochorism). All the smallest and youngest fish sampled were female until they either changed sex to male at a mean 277.5 mm total length (L(T)) and 2.3 years old or remained female and matured at a larger mean L(T) (392.1 mm) and older age (3.5 years). Gonad masses were similar for males and females over the size range sampled and throughout long reproductive lives (up to a maximum estimated age of c. 31 years), which was another correlate of functional gonochorism. That the mean L(T) at sex change and female maturity were below the current minimum legal size (MLS) limit (410 mm) demonstrated that the current MLS limit is effective for preventing recreational fishers in the NMP from retaining at least half of the juvenile males and females in their landed catches.
Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit
2016-02-01
The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.
Diefenbach, D.R.; Rosenberry, C.S.; Boyd, Robert C.
2004-01-01
Surveillance programs for Chronic Wasting Disease (CWD) in free-ranging cervids often use a standard of being able to detect 1% prevalence when determining minimum sample sizes. However, 1% prevalence may represent >10,000 infected animals in a population of 1 million, and most wildlife managers would prefer to detect the presence of CWD when far fewer infected animals exist. We wanted to detect the presence of CWD in white-tailed deer (Odocoileus virginianus) in Pennsylvania when the disease was present in only 1 of 21 wildlife management units (WMUs) statewide. We used computer simulation to estimate the probability of detecting CWD based on a sampling design to detect the presence of CWD at 0.1% and 1.0% prevalence (23-76 and 225-762 infected deer, respectively) using tissue samples collected from hunter-killed deer. The probability of detection at 0.1% prevalence was <30% with sample sizes of ≤6,000 deer, and the probability of detection at 1.0% prevalence was 46-72% with statewide sample sizes of 2,000-6,000 deer. We believe that testing of hunter-killed deer is an essential part of any surveillance program for CWD, but our results demonstrated the importance of a multifaceted surveillance approach for CWD detection rather than sole reliance on testing hunter-killed deer.
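Under the textbook assumption that infected animals are distributed at random through the sampled population (exactly the assumption the localized, single-WMU scenario above violates), detection probability and minimum sample size follow from a simple binomial argument:

```python
import math

def detection_probability(n_samples, prevalence, sensitivity=1.0):
    """P(at least one infected animal in a random sample of n), binomial
    approximation for a large, well-mixed population."""
    p = prevalence * sensitivity
    return 1.0 - (1.0 - p) ** n_samples

def min_sample_for_detection(prevalence, confidence=0.95, sensitivity=1.0):
    """Smallest n giving at least `confidence` probability of detecting
    the disease, under the same well-mixed assumption."""
    p = prevalence * sensitivity
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))
```

The familiar "299 samples to detect 1% prevalence with 95% confidence" drops out of the second function; the simulation results above show how spatial clustering (and non-random hunter harvest) makes the real detection probability far lower than this well-mixed bound.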
NASA Astrophysics Data System (ADS)
Nowakowski, Pawel; Dallas, Jean-Pierre; Villain, Sylvie; Kopia, Agnieszka; Gavarri, Jean-Raymond
2008-05-01
Nanostructured powders of ruthenium dioxide RuO2 were synthesized via a sol-gel route involving acidic solutions with pH varying between 0.4 and 4.5. The RuO2 nanopowders were characterized by X-ray diffraction and by scanning and transmission electron microscopy (SEM and TEM). Rietveld refinement of the mean crystal structure was performed on the RuO2 nanopowders and on a crystallized standard RuO2 sample. Crystallite sizes measured from X-ray diffraction profiles and TEM analysis varied in the range of 4-10 nm, with a minimum crystallite dimension at pH=1.5. Good agreement was obtained between crystallite sizes calculated from the Williamson-Hall approach applied to the X-ray data and from direct TEM observations. The tetragonal crystal cell parameter (a) and cell volumes of the nanostructured samples were larger than those of the standard RuO2 sample. In addition, the [RuO6] oxygen octahedrons of the rutile structure also depended on crystal size. Catalytic conversion of methane by these RuO2 nanostructured catalysts was studied as a function of pH, catalytic interaction time, air-methane composition, and catalysis temperature, by means of Fourier transform infrared (FTIR) spectroscopy coupled to a homemade catalytic cell. The catalytic efficiency, defined as the FTIR absorption band intensities I(CO2), was maximum for the sample prepared at pH=1.5, and mainly correlated with crystallite dimensions. No significant catalytic effect was observed for sintered RuO2 samples.
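The Williamson-Hall size estimate mentioned above separates size and strain broadening by a straight-line fit of β·cosθ against 4·sinθ. A minimal pure-Python sketch follows; the Cu Kα wavelength (0.15406 nm) and shape factor K = 0.9 are common defaults, not values stated in the paper:

```python
import math

def williamson_hall(peaks, wavelength=0.15406, K=0.9):
    """Crystallite size D (units of `wavelength`, here nm) and microstrain
    from (two_theta_deg, fwhm_rad) diffraction peaks, via the relation
    beta*cos(theta) = K*lambda/D + 4*strain*sin(theta),
    fitted by ordinary least squares. Returns (D, strain)."""
    xs, ys = [], []
    for two_theta, beta in peaks:
        theta = math.radians(two_theta / 2.0)
        xs.append(4.0 * math.sin(theta))
        ys.append(beta * math.cos(theta))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return K * wavelength / intercept, slope
```

The intercept gives the size term and the slope the strain term; with a single peak the method degenerates to the Scherrer equation.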
NASA Technical Reports Server (NTRS)
Mcgwire, K.; Friedl, M.; Estes, J. E.
1993-01-01
This article describes research related to sampling techniques for establishing linear relations between land surface parameters and remotely-sensed data. Predictive relations are estimated between percentage tree cover in a savanna environment and a normalized difference vegetation index (NDVI) derived from the Thematic Mapper sensor. Spatial autocorrelation in original measurements and regression residuals is examined using semi-variogram analysis at several spatial resolutions. Sampling schemes are then tested to examine the effects of autocorrelation on predictive linear models in cases of small sample sizes. Regression models between image and ground data are affected by the spatial resolution of analysis. Reducing the influence of spatial autocorrelation by enforcing minimum distances between samples may also improve empirical models which relate ground parameters to satellite data.
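Semi-variogram analysis of the kind described can be sketched as an empirical pairwise computation. This is a generic illustration of the technique, not the authors' processing chain:

```python
import math

def empirical_semivariogram(points, values, lag_edges):
    """Empirical semivariance per distance bin: for every pair of sample
    locations whose separation falls in [edge_k, edge_k+1), accumulate
    0.5*(z_i - z_j)^2 and average. Bins with no pairs return None."""
    nbins = len(lag_edges) - 1
    sums, counts = [0.0] * nbins, [0] * nbins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            for k in range(nbins):
                if lag_edges[k] <= d < lag_edges[k + 1]:
                    sums[k] += 0.5 * (values[i] - values[j]) ** 2
                    counts[k] += 1
                    break
    return [s / c if c else None for s, c in zip(sums, counts)]
```

A minimum sampling distance can then be read off as the lag at which the semivariance levels off (the range), beyond which samples are effectively uncorrelated, which is the rationale for enforcing minimum distances between samples.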
Eliciting mixed emotions: a meta-analysis comparing models, types, and measures.
Berrios, Raul; Totterdell, Peter; Kellett, Stephen
2015-01-01
The idea that people can experience two oppositely valenced emotions has been controversial ever since early attempts to investigate the construct of mixed emotions. This meta-analysis examined the robustness with which mixed emotions have been elicited experimentally. A systematic literature search identified 63 experimental studies that instigated the experience of mixed emotions. Studies were distinguished according to the structure of the underlying affect model (dimensional or discrete) as well as according to the type of mixed emotions studied (e.g., happy-sad, fearful-happy, positive-negative). The meta-analysis using a random-effects model revealed a moderate to high effect size for the elicitation of mixed emotions (dIG+ = 0.77), which remained consistent regardless of the structure of the affect model, and across different types of mixed emotions. Several methodological and design moderators were tested. Studies using the minimum index (i.e., the minimum value between a pair of oppositely valenced affects) resulted in smaller effect sizes, whereas subjective measures of mixed emotions increased the effect sizes. The presence of more women in the samples was also associated with larger effect sizes. The current study indicates that mixed emotions are a robust, measurable and non-artifactual experience. The results are discussed in terms of the implications for an affect system that has greater versatility and flexibility than previously thought.
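A random-effects pooled effect of the kind reported (dIG+ = 0.77) is commonly computed with the DerSimonian-Laird estimator; a compact sketch follows. The meta-analysis's actual software and weighting scheme are not stated in the abstract, so this is illustrative:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled effect via the DerSimonian-Laird estimator.
    Returns (pooled_effect, tau2), where tau2 is the between-study
    variance. Inputs are per-study effect sizes and their variances."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    Q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (Q - df) / c)           # truncate at zero
    w_star = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star), tau2
```

When the studies are homogeneous (Q ≤ df), tau2 truncates to zero and the estimate collapses to the fixed-effect (inverse-variance) pooled effect.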
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris
2009-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by conditions favoring larger and/or more intense convective systems. Convective systems were defined from Tropical Rainfall Measuring Mission (TRMM) data as a cluster of pixels with an 85-GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 square kilometers. The study database consisted of convective systems in West Africa from May to September 1998-2007, and in the western Pacific from May to November 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences between the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each group, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity in distribution shape to the size of the sampling area, the length of the sample period, and the climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is either wetter or drier than normal.
Gebauer, Roman; Řepka, Radomír; Šmudla, Radek; Mamoňová, Miroslava; Ďurkovič, Jaroslav
2016-01-01
Although spine variation within cacti species or populations is assumed to be large, the minimum sample size of different spine anatomical and morphological traits required for species description has been little studied. In some studies, only 2 spines were used for taxonomic comparison among species. Therefore, the spine structure variation within areoles and individuals of one population of Gymnocalycium kieslingii subsp. castaneum (Ferrari) Slaba was analyzed. Fifteen plants were selected, and from each plant one areole from the basal, middle and upper part of the plant body was sampled. Scanning electron microscopy was used for spine surface description and light microscopy for measurements of spine width, thickness, cross-section area, fiber diameter and fiber cell wall thickness. The spine surface was more visible and less damaged in the upper part of the plant body than in the basal part. Large spine and fiber differences were found between upper and lower parts of the plant body, but also within single areoles. In general, the examined traits in the upper part had values 8-17% higher than in the lower parts. The variation of spine and fiber traits within areoles was lower than the differences between individuals. The minimum sample size was largely influenced by the studied spine and fiber traits, ranging from 1 to 70 spines. The results provide pioneering information useful for spine sample collection in the field for taxonomic, biomechanical and structural studies. Nevertheless, similar studies should be carried out for other cacti species before generalizations can be made. The large spine and fiber variation within areoles observed in our study indicates a very complex spine morphogenesis.
NASA Astrophysics Data System (ADS)
Gañán-Calvo, A. M.; Rebollo-Muñoz, N.; Montanero, J. M.
2013-03-01
We aim to establish the scaling laws for both the minimum rate of flow attainable in the steady cone-jet mode of electrospray, and the size of the resulting droplets in that limit. Use is made of a small body of literature on Taylor cone-jets reporting precise measurements of the transported electric current and droplet size as a function of the liquid properties and flow rate. The projection of the data onto an appropriate non-dimensional parameter space maps a region bounded by the minimum rate of flow attainable in the steady state. To explain these experimental results, we propose a theoretical model based on the generalized concept of physical symmetry, stemming from the system time invariance (steadiness). A group of symmetries rising at the cone-to-jet geometrical transition determines the scaling for the minimum flow rate and related variables. If the flow rate is decreased below that minimum value, those symmetries break down, which leads to dripping. We find that the system exhibits two instability mechanisms depending on the nature of the forces arising against the flow: one dominated by viscosity and the other by the liquid polarity. In the former case, full charge relaxation is guaranteed down to the minimum flow rate, while in the latter the instability condition becomes equivalent to the symmetry breakdown by charge relaxation or separation. When cone-jets are formed without artificially imposing a flow rate, a microjet is issued quasi-steadily. The flow rate naturally ejected this way coincides with the minimum flow rate studied here. This natural flow rate determines the minimum droplet size that can be steadily produced by any electrohydrodynamic means for a given set of liquid properties.
NASA Astrophysics Data System (ADS)
Rezaei, M.; Kermanpur, A.; Sadeghi, F.
2018-03-01
Fabrication of single crystal (SC) Ni-based gas turbine blades with a minimum crystal misorientation has always been a challenge in the gas turbine industry, due to its significant influence on high-temperature mechanical properties. This paper reports an experimental investigation and numerical simulation of the SC solidification process of a Ni-based superalloy to study the effects of withdrawal rate and starter block size on crystal orientation. The results show that the crystal misorientation of the sample with 40 mm starter block height decreases with increasing withdrawal rate up to about 9 mm/min, beyond which the amount of misorientation increases. It was found that the withdrawal rate, height of the starter block and temperature gradient are completely inter-dependent, and achieving a SC specimen with a minimum misorientation indeed requires careful optimization of these process parameters. The height of the starter block was found to have a greater impact on crystal orientation than the withdrawal rate. A suitable withdrawal rate regime along with a sufficient starter block height was proposed to produce SC parts with the lowest misorientation.
A comparative appraisal of two equivalence tests for multiple standardized effects.
Shieh, Gwowen
2016-04-01
Equivalence testing is recommended as a better alternative to the traditional difference-based methods for demonstrating the comparability of two or more treatment effects. Although equivalence tests for two groups are widely discussed, the natural extensions for assessing equivalence between several groups have not been well examined. This article provides a detailed and schematic comparison of the ANOVA F and the studentized range tests for evaluating the comparability of several standardized effects. Power and sample size appraisals of the two markedly distinct approaches are conducted in terms of a constraint on the range of the standardized means when the standard deviation of the standardized means is fixed. Although neither method is uniformly more powerful, the studentized range test has a clear advantage in the sample size required to achieve a given power when the underlying effect configurations are close to the a priori minimum difference for determining equivalence. For actual application of equivalence tests and advance planning of equivalence studies, both SAS and R computer codes are available as supplementary files to implement the calculations of critical values, p-values, power levels, and sample sizes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
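The article's SAS and R supplements are not reproduced here; as a hedged illustration of the equivalence-testing logic in the simpler two-group case, a normal-approximation TOST (two one-sided tests) can be sketched as follows:

```python
from statistics import NormalDist

def tost_two_groups(mean1, mean2, sd, n1, n2, margin):
    """Two one-sided tests (TOST) for equivalence of two means within
    +/-margin, normal approximation with a common known sd. Returns the
    TOST p-value (the larger of the two one-sided p-values); equivalence
    is concluded when this is below alpha."""
    se = sd * (1.0 / n1 + 1.0 / n2) ** 0.5
    z_lower = (mean1 - mean2 + margin) / se   # vs H0: diff <= -margin
    z_upper = (mean1 - mean2 - margin) / se   # vs H0: diff >= +margin
    nd = NormalDist()
    return max(1.0 - nd.cdf(z_lower), nd.cdf(z_upper))
```

Note the asymmetry the abstract exploits for several groups: with identical means, equivalence is demonstrable only when the sample sizes are large enough to shrink the standard error well below the equivalence margin.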
50 CFR 622.50 - Caribbean spiny lobster import prohibitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 8 2010-10-01 2010-10-01 false Caribbean spiny lobster import... ATLANTIC Management Measures § 622.50 Caribbean spiny lobster import prohibitions. (a) Minimum size limits for imported spiny lobster. There are two minimum size limits that apply to importation of spiny...
50 CFR 622.50 - Caribbean spiny lobster import prohibitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 10 2011-10-01 2011-10-01 false Caribbean spiny lobster import... ATLANTIC Management Measures § 622.50 Caribbean spiny lobster import prohibitions. (a) Minimum size limits for imported spiny lobster. There are two minimum size limits that apply to importation of spiny...
50 CFR 648.72 - Minimum surf clam size.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Atlantic Surf Clam and Ocean Quahog Fisheries § 648.72 Minimum surf clam size. Link to an amendment... quahog specifications. (a) Establishing catch quotas. The amount of surfclams or ocean quahogs that may... paragraph (b) of this section. The amount of surfclams available for harvest annually must be specified...
Relation between inflammables and ignition sources in aircraft environments
NASA Technical Reports Server (NTRS)
Scull, Wilfred E
1951-01-01
A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings through which flame will not propagate are presented and discussed. Ignition temperatures and limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressures and minimum size of opening for flame propagation in gasoline-air mixtures are included; inerting of gasoline-air mixtures is discussed.
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T² statistic. Using this statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
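The mechanism this abstract describes, covariate mean imbalance inflating the adjusted standard error, can be illustrated with a small Monte Carlo sketch. The normal data model, the ANCOVA-style pooled-slope adjustment, and all parameter values below are illustrative assumptions, not the paper's computation:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_unadjusted_vs_adjusted(n, cov_shift, rho=0.5, reps=2000):
    """Monte Carlo standard errors of the unadjusted and covariate-adjusted
    treatment mean differences when the covariate means differ by cov_shift.
    All parameters are illustrative."""
    diffs_raw, diffs_adj = [], []
    for _ in range(reps):
        x1 = rng.normal(0.0, 1.0, n)        # covariate, control group
        x2 = rng.normal(cov_shift, 1.0, n)  # covariate, treatment group (shifted)
        y1 = rho * x1 + rng.normal(0.0, np.sqrt(1 - rho**2), n)
        y2 = rho * x2 + rng.normal(0.0, np.sqrt(1 - rho**2), n)
        diffs_raw.append(y2.mean() - y1.mean())
        # ANCOVA-style adjustment with the pooled within-group slope
        xc = np.concatenate([x1 - x1.mean(), x2 - x2.mean()])
        yc = np.concatenate([y1 - y1.mean(), y2 - y2.mean()])
        b = (xc @ yc) / (xc @ xc)
        diffs_adj.append((y2.mean() - y1.mean()) - b * (x2.mean() - x1.mean()))
    return np.std(diffs_raw), np.std(diffs_adj)

for shift in (0.0, 2.0):
    raw, adj = se_unadjusted_vs_adjusted(n=20, cov_shift=shift)
    print(f"covariate shift {shift}: SE unadjusted {raw:.3f}, adjusted {adj:.3f}")
```

With balanced covariate means the adjustment shrinks the standard error; with a large imbalance the uncertainty in the estimated slope, multiplied by the mean difference, can make the adjusted standard error the larger of the two.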
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task that demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities; SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these models are fitted and validated using data from a small number of selected states, they must be calibrated to local conditions when applied to a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor; however, this methodology was never validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis of the bias between the 'true' and 'estimated' calibration factors. The analysis shows that as the true calibration factor deviates further from 1, more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size for various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process. Copyright © 2016 Elsevier Ltd. All rights reserved.
Stress Dependence of Microstructures in Experimentally Deformed Calcite
NASA Astrophysics Data System (ADS)
Platt, J. P.; De Bresser, J. H. P.
2017-12-01
Measurements of dynamically recrystallized grain size (Dr), subgrain size (Sg), minimum bulge size (Blg), and the maximum scale length for surface-energy driven grain-boundary migration (γGBM) in experimentally deformed Carrara marble help define the dependence of these microstructural features on stress and temperature. Measurements were made optically on ultra-thin sections in order to allow these features to be defined during measurement on the basis of microstructural setting and geometry. Taken together with previously published data, Dr defines a paleopiezometer with a stress exponent of -1.09. There is no discernible temperature dependence over the 500°C temperature range of the experiments. Recrystallization occurred mainly by bulging and subgrain rotation, and the two processes operated together, so that it is not possible to separate grains nucleated by the two mechanisms. Sg and Dr measured in the same samples are closely similar in size, suggesting that new grains do not grow significantly after nucleation, and that subgrain size is likely to be the primary control on recrystallized grain size. Blg and γGBM measured on each sample define a relationship to stress with an exponent of approximately -1.6, which helps define the boundary in stress - grain-size space separating a region of dominant strain-energy-driven grain-boundary migration at high stress from a region of dominant surface-energy-driven grain-boundary migration at low stress.
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2011 CFR
2011-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2013 CFR
2013-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2014 CFR
2014-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
48 CFR 52.247-61 - F.o.b. Origin-Minimum Size of Shipments.
Code of Federal Regulations, 2012 CFR
2012-10-01
... be the highest applicable minimum weight which will result in the lowest freight rate (or per car... minimum weight, the Contractor agrees to ship such scheduled quantity in one shipment. The Contractor...
NASA Astrophysics Data System (ADS)
Williams, Rebecca L.; Wakeham, Stuart; McKinney, Rick; Wishner, Karen F.
2014-08-01
The unique physical and biogeochemical characteristics of oxygen minimum zones (OMZs) influence plankton ecology, including zooplankton trophic webs. Using carbon and nitrogen stable isotopes, this study examined zooplankton trophic webs in the Eastern Tropical North Pacific (ETNP) OMZ. δ13C values were used to indicate zooplankton food sources, and δ15N values were used to indicate zooplankton trophic position and nitrogen cycle pathways. Vertically stratified MOCNESS net tows collected zooplankton from 0 to 1000 m at two stations along a north-south transect in the ETNP during 2007 and 2008, the Tehuantepec Bowl and the Costa Rica Dome. Zooplankton samples were separated into four size fractions for stable isotope analyses. Particulate organic matter (POM), assumed to represent a primary food source for zooplankton, was collected with McLane large volume in situ pumps. The isotopic composition and trophic ecology of the ETNP zooplankton community had distinct spatial and vertical patterns influenced by OMZ structure. The most pronounced vertical isotope gradients occurred near the upper and lower OMZ oxyclines. Material with lower δ13C values was apparently produced in the upper oxycline, possibly by chemoautotrophic microbes, and was subsequently consumed by zooplankton. Between-station differences in δ15N values suggested that different nitrogen cycle processes were dominant at the two locations, which influenced the isotopic characteristics of the zooplankton community. A strong depth gradient in zooplankton δ15N values in the lower oxycline suggested an increase in trophic cycling just below the core of the OMZ. Shallow POM (0-110 m) was likely the most important food source for mixed layer, upper oxycline, and OMZ core zooplankton, while deep POM was an important food source for most lower oxycline zooplankton (except for samples dominated by the seasonally migrating copepod Eucalanus inermis). 
There was no consistent isotopic progression among the four zooplankton size classes for these bulk mixed assemblage samples, implying overlapping trophic webs within the total size range considered.
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 4 2012-10-01 2012-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 4 2013-10-01 2013-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
46 CFR 111.60-4 - Minimum cable conductor size.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 4 2014-10-01 2014-10-01 false Minimum cable conductor size. 111.60-4 Section 111.60-4 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS... conductor must be #18 AWG (0.82 mm2) or larger except— (a) Each power and lighting cable conductor must be...
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Well formed; and, (2) Clean and bright. (3) Free from: (i) Blanks; and, (ii) Broken or split shells. (4... minimum diameter, minimum and maximum diameters, or in accordance with one of the size classifications in Table I. Table I Size classifications Maximum size—Will pass through a round opening of the following...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-22
... Requirements on Valencia and Other Late Type Oranges AGENCY: Agricultural Marketing Service, USDA. ACTION...). The interim rule reduced the minimum size for Valencia and other late type oranges shipped to... interim rule also lowered the minimum grade for Valencia and other late type oranges shipped to interstate...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-14
... Requirements on Valencia and Other Late Type Oranges AGENCY: Agricultural Marketing Service, USDA. ACTION...). This rule reduces the minimum size requirement for Valencia and other late type oranges shipped to... also reduces the minimum grade requirement for Valencia and other late type oranges shipped to...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
..., as Modified by Amendment No. 1, to Reduce the Minimum Size of the Nominating and Governance Committee... proposed rule change to reduce the minimum size of the Nominating and Governance Committee (``NGC'') from... the original proposed rule change, it had not yet obtained formal approval from its Board of Directors...
7 CFR 51.2113 - Size requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of range in count of whole almond kernels per ounce or in terms of minimum, or minimum and maximum diameter. When a range in count is specified, the whole kernels shall be fairly uniform in size, and the average count per ounce shall be within the range specified. Doubles and broken kernels shall not be used...
A USANS/SANS study of the accessibility of pores in the Barnett Shale to methane and water
Ruppert, Leslie F.; Sakurovs, Richard; Blach, Tomasz P.; He, Lilin; Melnichenko, Yuri B.; Mildner, David F.; Alcantar-Lopez, Leo
2013-01-01
Shale is an increasingly important source of natural gas in the United States. The gas is held in fine pores that need to be accessed by horizontal drilling and hydrofracturing techniques. Understanding the nature of the pores may provide clues to making gas extraction more efficient. We have investigated two Mississippian Barnett Shale samples, combining small-angle neutron scattering (SANS) and ultrasmall-angle neutron scattering (USANS) to determine the pore size distribution of the shale over the size range 10 nm to 10 μm. By adding deuterated methane (CD4) and, separately, deuterated water (D2O) to the shale, we have identified the fraction of pores that are accessible to these compounds over this size range. The total pore size distribution is essentially identical for the two samples. At pore sizes >250 nm, >85% of the pores in both samples are accessible to both CD4 and D2O. However, differences in accessibility to CD4 are observed in the smaller pore sizes (~25 nm). In one sample, CD4 penetrated the smallest pores as effectively as it did the larger ones. In the other sample, less than 70% of the smallest pores were accessible to CD4, but they were still largely penetrable by water, suggesting that small-scale heterogeneities in methane accessibility occur in the shale samples even though the total porosity does not differ. An additional study investigating the dependence of scattered intensity on the pressure of CD4 allows for an accurate estimation of the pressure at which the scattered intensity is at a minimum. This study provides information about the composition of the material immediately surrounding the pores. Most of the accessible (open) pores in the 25 nm size range can be associated with either mineral matter or high reflectance organic material. However, a complementary scanning electron microscopy investigation shows that most of the pores in these shale samples are contained in the organic components.
The neutron scattering results indicate that the pores are not equally proportioned in the different constituents within the shale. There is some indication from the SANS results that the composition of the pore-containing material varies with pore size; the pore size distribution associated with mineral matter is different from that associated with organic phases.
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
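The benefit-cost comparison described above can be sketched numerically. The normal-approximation sample-size formula, the assumed prior probability of activity, the phase III trial size, and the phase II budget below are hypothetical stand-ins for the paper's risk-adjusted calculation:

```python
from statistics import NormalDist

def phase2_sample_size(alpha, beta, effect):
    """Two-arm, one-sided normal-approximation total sample size (unit variance):
    n = 4 * (z_{1-alpha} + z_{1-beta})^2 / effect^2."""
    z = NormalDist().inv_cdf
    return 4 * (z(1 - alpha) + z(1 - beta)) ** 2 / effect ** 2

def benefit_cost(alpha, beta, target_effect, true_effect=1.0, p_active=0.2,
                 n_phase3=600.0, budget=10000.0):
    """Benefit-cost ratio sketch: risk-adjusted true positives sent to phase III
    divided by total (phase II + phase III) patients, for as many POC trials as
    the phase II budget allows. All default parameters are hypothetical."""
    nd = NormalDist()
    n2 = phase2_sample_size(alpha, beta, target_effect)
    n_trials = budget / n2
    # power of each trial against the true effect size
    power_true = 1 - nd.cdf(nd.inv_cdf(1 - alpha) - true_effect * (n2 / 4) ** 0.5)
    go_rate = p_active * power_true + (1 - p_active) * alpha
    true_positives = n_trials * p_active * power_true
    total_patients = n_trials * n2 + n_trials * go_rate * n_phase3
    return true_positives / total_patients

# small trials powered at 1.5x the minimum effect of interest vs
# large trials powered directly at the minimum effect
small = benefit_cost(alpha=0.05, beta=0.20, target_effect=1.5)
large = benefit_cost(alpha=0.10, beta=0.10, target_effect=1.0)
print(small, large)
```

Under these assumed numbers the many-small-trials design yields the higher ratio, in line with the abstract's conclusion that small POC trials with high Go bars are more cost-effective.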
NASA Astrophysics Data System (ADS)
Pries, V. V.; Proskuriakov, N. E.
2018-04-01
To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the devices and systems of an automatic rotor line, there is always a real probability of defective (incomplete) products entering the output process stream. Therefore, continuous sampling control of product completeness, based on statistical methods, remains an important element in managing the quality of assembly of multi-element mass products on automatic rotor lines. A distinctive feature of continuous sampling control of multi-element product completeness during assembly is that the inspection is destructive: component parts cannot be returned to the process stream after sampling control, which reduces the actual productivity of the assembly equipment. Therefore, the use of statistical procedures for continuous sampling control of multi-element product completeness when assembling on automatic rotor lines requires sampling plans that ensure a minimum control sample size. Comparison of the limiting values of the average outgoing defect level for the continuous sampling plan (CSP) and for the automated continuous sampling plan (ACSP) shows that the ACSP-1 provides lower limiting values of the average outgoing defect level. In addition, the average sample size under the ACSP-1 plan is smaller than under the CSP-1 plan. Thus, applying these statistical methods to assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, makes it possible to automate sampling control procedures and maintain the required quality of assembled products while minimizing sample size.
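As a concrete reference point, the classical CSP-1 continuous sampling plan (100% inspection until i consecutive conforming units, then inspection of a random fraction f, reverting on any inspected defect) can be simulated. The parameters and the Bernoulli defect model below are illustrative, and the ACSP variant discussed in the abstract is not reproduced here:

```python
import random

def csp1_fraction_inspected(p_defect, i=30, f=0.1, n_units=200000, seed=1):
    """Simulate the CSP-1 continuous sampling plan: 100% inspection until i
    consecutive conforming units, then inspect a random fraction f of units;
    any inspected defect returns the line to 100% inspection.
    Returns the average fraction of units inspected."""
    rng = random.Random(seed)
    inspected = 0
    run = 0            # consecutive conforming units seen during 100% inspection
    sampling = False   # True once the clearance number i has been reached
    for _ in range(n_units):
        defective = rng.random() < p_defect
        if sampling:
            if rng.random() < f:
                inspected += 1
                if defective:
                    sampling, run = False, 0
        else:
            inspected += 1
            run = 0 if defective else run + 1
            if run >= i:
                sampling = True
    return inspected / n_units

print(csp1_fraction_inspected(0.001))  # low defect rate: fraction stays near f
print(csp1_fraction_inspected(0.05))   # high defect rate: mostly 100% inspection
```

The simulation shows the trade-off the abstract is concerned with: the worse the process quality, the closer the plan drifts toward full (destructive) inspection, and the larger the effective sample size.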
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. 
A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
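The second-derivative threshold rule the study favors can be sketched on a synthetic one-dimensional edge profile. The logistic profile and the comparison with the maximum-gradient method below are illustrative assumptions, not the authors' imaging pipeline:

```python
import numpy as np

def threshold_from_second_derivative(profile):
    """Return the intensity at the minimum of the second derivative of an
    edge profile -- the threshold selection rule the study found most
    accurate for fluorescent objects."""
    d2 = np.gradient(np.gradient(profile))
    return profile[np.argmin(d2)]

# synthetic edge: dark background rising smoothly to a bright fluorescent object
x = np.linspace(-5.0, 5.0, 201)
profile = 10.0 + 90.0 / (1.0 + np.exp(-2.0 * x))  # background ~10, object ~100

t = threshold_from_second_derivative(profile)
g = profile[np.argmax(np.gradient(profile))]  # maximum-gradient threshold, for comparison
print(t, g)
```

On this toy profile the two rules select visibly different intensities, which is the point of the study: the choice of threshold criterion shifts where the cell boundary is placed and hence the measured size.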
Size dependent compressibility of nano-ceria: Minimum near 33 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodenbough, Philip P. (Chemistry Department, Columbia University, New York, New York 10027); Song, Junhua
2015-04-20
We report the crystallite-size-dependency of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends indicating an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size-control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size.
Soldering to a single atomic layer
NASA Astrophysics Data System (ADS)
Girit, Çağlar Ö.; Zettl, A.
2007-11-01
The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.
Soldering to a single atomic layer
NASA Astrophysics Data System (ADS)
Girit, Caglar; Zettl, Alex
2008-03-01
The standard technique to make electrical contact to nanostructures is electron beam lithography. This method has several drawbacks including complexity, cost, and sample contamination. We present a simple technique to cleanly solder submicron sized, Ohmic contacts to nanostructures. To demonstrate, we contact graphene, a single atomic layer of carbon, and investigate low- and high-bias electronic transport. We set lower bounds on the current carrying capacity of graphene. A simple model allows us to obtain device characteristics such as mobility, minimum conductance, and contact resistance.
Relation Between Inflammables and Ignition Sources in Aircraft Environments
NASA Technical Reports Server (NTRS)
Scull, Wilfred E
1950-01-01
A literature survey was conducted to determine the relation between aircraft ignition sources and inflammables. Available literature applicable to the problem of aircraft fire hazards is analyzed and discussed herein. Data pertaining to the effect of many variables on ignition temperatures, minimum ignition pressures, and minimum spark-ignition energies of inflammables, quenching distances of electrode configurations, and size of openings incapable of flame propagation are presented and discussed. The ignition temperatures and the limits of inflammability of gasoline in air in different test environments, and the minimum ignition pressure and the minimum size of openings for flame propagation of gasoline-air mixtures are included. Inerting of gasoline-air mixtures is discussed.
Biodiversity and body size are linked across metazoans
McClain, Craig R.; Boyer, Alison G.
2009-01-01
Body size variation across the Metazoa is immense, encompassing 17 orders of magnitude in biovolume. Factors driving this extreme diversification in size and the consequences of size variation for biological processes remain poorly resolved. Species diversity is invoked as both a predictor and a result of size variation, and theory predicts a strong correlation between the two. However, evidence has been presented both supporting and contradicting such a relationship. Here, we use a new comprehensive dataset for maximum and minimum body sizes across all metazoan phyla to show that species diversity is strongly correlated with minimum size, maximum size and consequently intra-phylum variation. Similar patterns are also observed within birds and mammals. The observations point to several fundamental linkages between species diversification and body size variation through the evolution of animal life. PMID:19324730
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
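The "copy the protograph N times, then permute the edges" construction mentioned above can be sketched generically. The toy base matrix and the random cyclic shifts below are assumptions for illustration; they are not the proposed code family, whose shifts would be carefully optimized:

```python
import numpy as np

def lift_protograph(base, N, seed=0):
    """Lift a protograph: replace each edge (1 in the base matrix) by an
    N x N cyclic-shift permutation block and each 0 by an N x N zero block.
    This is the generic copy-and-permute step; the shifts here are random,
    not an optimized assignment."""
    rng = np.random.default_rng(seed)
    m, n = base.shape
    H = np.zeros((m * N, n * N), dtype=np.uint8)
    I = np.eye(N, dtype=np.uint8)
    for r in range(m):
        for c in range(n):
            if base[r, c]:
                shift = rng.integers(N)
                H[r*N:(r+1)*N, c*N:(c+1)*N] = np.roll(I, shift, axis=1)
    return H

# toy rate-1/2 protograph: 2 check nodes, 4 variable nodes
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]], dtype=np.uint8)
H = lift_protograph(base, N=8)
print(H.shape)  # (16, 32): any block size is reached by changing N
# row/column weights of H match the protograph's check/variable node degrees
print(H.sum(axis=1).min(), H.sum(axis=1).max())
```

Because every lifted row and column inherits its degree from the protograph, degree-sensitive properties such as the proportion of degree-2 variable nodes, which the abstract flags as critical, are set entirely by the base matrix.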
NASA Astrophysics Data System (ADS)
Luo, Jialiang; Pan, Shunkang; Qiao, Ziqiang; Cheng, Lichun; Wang, Zhenzhong; Lin, Peihao; Chang, Junqing
2018-01-01
The polycrystalline samples PrxHo2-xFe17 (x = 0.0, 0.1, 0.2, 0.3, 0.4) were prepared by arc melting and a high-energy ball milling method. The influences of Pr substitution on phase structure, morphology, saturation magnetization and electromagnetic parameters were investigated by x-ray diffraction, scanning electron microscopy, vibrating-sample magnetometry and a vector network analyzer, respectively. The results show that the particle size increased and the saturation magnetization decreased with increasing Pr content. The minimum absorption peak frequency shifted towards a lower-frequency region with increasing Pr concentration. The minimum RL of Pr0.3Ho1.7Fe17 powder was -41.03 dB at 6.88 GHz with a coating thickness of 2.0 mm. With thicknesses of 1.8-2.8 mm, the minimum reflection loss (RL) of Pr0.3Ho1.7Fe17 powder was less than -20 dB over the whole C-band (4-8 GHz). The microwave-absorbing properties of composites with different weight ratios of Pr0.3Ho1.7Fe17/Co were investigated. The microwave-absorbing peaks of the composites shifted to a lower frequency with increasing Co content. The minimum RL of Pr0.3Ho1.7Fe17/Co(10%) was -42.51 dB at 4.72 GHz with a coating thickness of 2.6 mm. This suggests that the Pr-Ho-Fe alloy will be a promising microwave absorption material at higher-gigahertz frequencies, especially in the C-band.
VizieR Online Data Catalog: AKARI IRC asteroid sample diameters & albedos (Ali-Lagoa+, 2018)
NASA Astrophysics Data System (ADS)
Ali-Lagoa, V.; Mueller, T. G.; Usui, F.; Hasegawa, S.
2017-11-01
Table 1 contains the best-fitting values of size and beaming parameter and corresponding visible geometric albedos for the full AKARI IRC sample. We fitted the near-Earth asteroid thermal model (NEATM) of Harris (1998Icar..131..291H) to the AKARI IRC thermal infrared data (Murakami et al., 2007PASJ...59S.369M, Onaka et al., 2007PASJ...59S.401O, Ishihara et al., 2010A&A...514A...1I, Cat. II/297, Usui et al., 2011PASJ...63.1117U, Cat. J/PASJ/63/1117, Takita et al., 2012PASJ...64..126T, Hasegawa et al., 2013PASJ...65...34H, Cat. J/PASJ/65/34). The NEATM implementation is described in Ali-Lagoa and Delbo' (2017A&A...603A..55A, cat. J/A+A/603/A55). Minimum relative errors of 10, 15, and 20 percent are given for size, beaming parameter and albedo in those cases where the beaming parameter could be fitted. Otherwise, a default value of the beaming parameter is assumed based on Eq. 1 in the article, and the minimum relative errors in size and albedo increase to 20 and 40 percent (see the discussions in Mainzer et al., 2011ApJ...736..100M, Ali-Lagoa et al., 2016A&A...591A..14A, Cat. J/A+A/591/A14). We also provide the asteroid absolute magnitudes and G12 slope parameters retrieved from Oszkiewicz et al. (2012), the number of observations used in each IRC band (S9W and L18W), plus the heliocentric and geocentric distances and phase angle (r, Delta, alpha) based on the ephemerides taken from the MIRIADE service (http://vo.imcce.fr/webservices/miriade/?ephemph). (1 data file).
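Size and albedo values like those in Table 1 are tied to the tabulated absolute magnitude H by the standard asteroid relation D = 1329 km · 10^(-H/5) / sqrt(p_v). A minimal sketch of that relation follows; the example H and albedo values are hypothetical, not entries from the catalog:

```python
from math import sqrt

def diameter_km(H, p_v):
    """Standard asteroid size relation: D = 1329 km * 10^(-H/5) / sqrt(p_v),
    linking absolute magnitude H and geometric visible albedo p_v."""
    return 1329.0 * 10.0 ** (-H / 5.0) / sqrt(p_v)

def albedo(H, D_km):
    """Invert the same relation: the albedo implied by H once a size is fitted."""
    return (1329.0 * 10.0 ** (-H / 5.0) / D_km) ** 2

D = diameter_km(15.0, 0.20)  # hypothetical H = 15 mag, p_v = 0.20
print(D, albedo(15.0, D))
```

Because albedo enters as the square of the inverted relation, a relative error in the fitted size roughly doubles in the derived albedo, consistent with the catalog quoting 20% size errors alongside 40% albedo errors.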
Masses, Dimensionless Kerr Parameters, and Emission Regions in GeV Gamma-Ray-loud Blazars
NASA Astrophysics Data System (ADS)
Xie, G.-Z.; Ma, L.; Liang, E.-W.; Zhou, S.-B.; Xie, Z.-H.
2003-11-01
We have compiled a sample of 17 GeV γ-ray-loud blazars, for which rapid optical variability and γ-ray fluxes are well observed, from the literature. We derive estimates of the masses, the minimum Kerr parameters amin, and the size of the emission regions of the supermassive black holes (SMBHs) for the blazars in the sample from their minimum optical variability timescales and γ-ray fluxes. The results show that (1) the masses derived from the optical variability timescale (MH) are significantly correlated with the masses from the γ-ray luminosity (MKNH); (2) the values of amin of the SMBHs with masses MH >= 10^8.3 Msolar (three out of 17 objects) range from ~0.5 to ~1.0, suggesting that these SMBHs are likely to be Kerr black holes. For the SMBHs with MH < 10^8.3 Msolar, however, amin = 0, suggesting that a nonrotating black hole model cannot be ruled out for these objects. In addition, the values of the size of the emission region, r*, for the two kinds of SMBHs are significantly different. For the SMBHs with amin > 0, the sizes of the emission regions lie almost within the horizon (2rG) and marginally bound orbit (4rG), while for those with amin = 0 they are in the range (4.3-66.4)rG, extending beyond the marginally stable orbit (6rG). These results may imply that (1) the rotational state, the radiating regions, and the physical processes in the inner regions of the two kinds of SMBH are significantly different and (2) the emission mechanisms of GeV γ-ray blazars are related to the SMBHs in their centers but are not related to the two different kinds of SMBH.
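The order of magnitude of a variability-based mass estimate can be sketched from the causality argument alone: an emission region varying on timescale Δt can be at most c·Δt across, and if it spans a few gravitational radii this bounds the mass. The geometry factor k below is an assumption, and relativistic beaming and the paper's γ-ray luminosity estimator are ignored:

```python
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m s^-1
M_sun = 1.989e30    # solar mass, kg

def rg_meters(mass_solar):
    """Gravitational radius r_G = G M / c^2."""
    return G * mass_solar * M_sun / c ** 2

def max_mass_solar(dt_seconds, k=5.0):
    """Causality sketch: emission region size <= c * dt; if it spans about
    k gravitational radii, then M <~ c^3 * dt / (k * G).
    k is an assumed geometry factor; beaming is ignored."""
    return c ** 3 * dt_seconds / (k * G) / M_sun

print(max_mass_solar(3600.0))  # a one-hour optical variability timescale
```

An hour-scale variability timescale bounds the mass at roughly 10^8 solar masses under these assumptions, the same scale as the MH ~ 10^8.3 Msolar dividing line quoted in the abstract.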
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
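A naive "dart throwing" sketch of the kind of maximal Poisson-disk input the framework starts from. This is plain rejection sampling for illustration only, not the authors' relocate/inject/eject operations; the termination rule (stop after many consecutive rejections) yields near-maximality rather than a guarantee.

```python
import math
import random

def poisson_disk_sample(width, height, r, max_misses=2000, seed=1):
    """Approximate maximal Poisson-disk sampling by dart throwing:
    accept a uniform random candidate only if it lies at least r from
    every accepted point; stop once max_misses candidates in a row
    have been rejected (the domain is then nearly covered)."""
    random.seed(seed)
    pts = []
    misses = 0
    while misses < max_misses:
        p = (random.uniform(0.0, width), random.uniform(0.0, height))
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)   # conflict-free: minimum separation r is kept
            misses = 0
        else:
            misses += 1
    return pts
```

Every accepted set is conflict-free by construction, which is the invariant the tuning operations above must also preserve while changing the density one disk at a time.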
The Minimum Binding Energy and Size of Doubly Muonic D3 Molecule
NASA Astrophysics Data System (ADS)
Eskandari, M. R.; Faghihi, F.; Mahdavi, M.
The minimum energy and size of the doubly muonic D3 molecule, in which two of the electrons are replaced by the much heavier muons, are calculated by the well-known variational method. The calculations show that the system possesses two minimum positions, one at the typical muonic distance and the second at the atomic distance. It is shown that at the muonic distance, the effective charge zeff is 2.9. We assumed a symmetric planar vibrational model between the two minima, and the oscillation potential energy is approximated in this region.
Foreign body detection in food materials using compton scattered x-rays
NASA Astrophysics Data System (ADS)
McFarlane, Nigel James Bruce
This thesis investigated the application of X-ray Compton scattering to the problem of foreign body detection in food. The methods used were analytical modelling, simulation and experiment. A criterion was defined for detectability, and a model was developed for predicting the minimum time required for detection. The model was used to predict the smallest detectable cubes of air, glass, plastic and steel. Simulations and experiments were performed on voids and glass in polystyrene phantoms, water, coffee and muesli. Backscatter was used to detect bones in chicken meat. The effects of geometry and multiple scatter on contrast, signal-to-noise, and detection time were simulated. Compton scatter was compared with transmission, and the effect of inhomogeneity was modelled. Spectral shape was investigated as a means of foreign body detection. A signal-to-noise ratio of 7.4 was required for foreign body detection in food. A 0.46 cm cube of glass or a 1.19 cm cube of polystyrene was detectable in a 10 cm cube of water in one second. The minimum time to scan a whole sample varied as the 7th power of the foreign body size, and the 5th power of the sample size. Compton scatter inspection produced higher contrasts than transmission, but required longer measurement times because of the low number of photon counts. Compton scatter inspection of whole samples was very slow compared to production line speeds in the food industry. There was potential for Compton scatter in applications which did not require whole-sample scanning, such as surface inspection. There was also potential in the inspection of inhomogeneous samples. The multiple scatter fraction varied from 25% to 55% for 2 to 10 cm cubes of water, but did not have a large effect on the detection time. The spectral shape gave good contrasts and signal-to-noise ratios in the detection of chicken bones.
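The reported scaling laws can be combined into a single extrapolation rule. Assuming (as the detectability argument suggests) that scan time falls with the 7th power of foreign-body size d and rises with the 5th power of sample size D, i.e. t ∝ D^5 / d^7, a reference measurement can be rescaled; the reference values here are hypothetical, not figures from the thesis.

```python
def scan_time(t_ref, d_ref, D_ref, d, D):
    """Extrapolate whole-sample scan time from a reference measurement
    (t_ref seconds, for foreign-body size d_ref in sample size D_ref),
    assuming t scales as (d_ref / d)^7 * (D / D_ref)^5."""
    return t_ref * (d_ref / d) ** 7 * (D / D_ref) ** 5
```

The steep seventh-power dependence is why whole-sample scanning becomes impractical for small foreign bodies at production-line speeds.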
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-20
... to Proposed Rule Change To Amend FINRA Rule 6433 (Minimum Quotation Size Requirements for OTC Equity... proposed rule change to amend FINRA Rule 6433 (Minimum Quotation Size Requirements for OTC Equity... investors, three from an inter-dealer quotation system and two from a member firm.\\4\\ FINRA responded to...
50 CFR 648.104 - Summer flounder minimum fish sizes.
Code of Federal Regulations, 2012 CFR
2012-10-01
....99 cm) TL for all vessels that do not qualify for a moratorium permit under § 648.4(a)(3), and... (commercial) permitted vessels. The minimum size for summer flounder is 14 inches (35.6 cm) TL for all vessels issued a moratorium permit under § 648.4(a)(3), except on board party and charter boats carrying...
50 CFR 648.104 - Summer flounder minimum fish sizes.
Code of Federal Regulations, 2013 CFR
2013-10-01
... cm) TL for all vessels that do not qualify for a moratorium permit under § 648.4(a)(3), and charter... (commercial) permitted vessels. The minimum size for summer flounder is 14 inches (35.6 cm) TL for all vessels issued a moratorium permit under § 648.4(a)(3), except on board party and charter boats carrying...
50 CFR 648.104 - Summer flounder minimum fish sizes.
Code of Federal Regulations, 2014 CFR
2014-10-01
... cm) TL for all vessels that do not qualify for a moratorium permit under § 648.4(a)(3), and charter... (commercial) permitted vessels. The minimum size for summer flounder is 14 inches (35.6 cm) TL for all vessels issued a moratorium permit under § 648.4(a)(3), except on board party and charter boats carrying...
Lessio, Federico; Alma, Alberto
2006-04-01
The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005, in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample size and fixed precision level stop lines were calculated to develop appropriate sampling plans. Model validation was performed, using independent field data, by means of Resampling Validation of Sample Plans (RVSP) resampling software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities with a precision level of D0 = 0.30; but for a more accurate estimate (D0 = 0.10), the required sample size needs to be 292 plants. Green's fixed precision level stop lines seem to be more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired levels. However, at a prefixed precision level of 0.10, sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and Flavescence dorée in Italy is discussed.
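Under Taylor's power law, s² = a·m^b, the enumerative minimum sample size at a fixed precision level D (standard error divided by the mean) is n = a·m^(b−2)/D². The sketch below uses the article's b = 1.49 but a hypothetical coefficient a; note that the ratio of sample sizes at D = 0.10 versus D = 0.30 is (0.30/0.10)² = 9, consistent with the 292-versus-32 plants reported above.

```python
def min_sample_size(mean_density, a, b=1.49, precision=0.30):
    """Enumerative minimum sample size from Taylor's power law
    s^2 = a * m^b:  n = a * m^(b - 2) / D^2,
    where m is the mean density per plant and D the desired precision
    (standard error over mean). The coefficient a is site-specific."""
    return a * mean_density ** (b - 2.0) / precision ** 2
```

Because b < 2, the required sample size falls as pest density rises, which is why low-density (early-season) sampling is the demanding case.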
NASA Technical Reports Server (NTRS)
Kuhlman, J. M.; Ku, T. J.
1981-01-01
A two dimensional advanced panel far-field potential flow model of the undistorted, interacting wakes of multiple lifting surfaces was developed which allows the determination of the spanwise bound circulation distribution required for minimum induced drag. This model was implemented in a FORTRAN computer program, the use of which is documented in this report. The nonplanar wakes are broken up into variable sized, flat panels, as chosen by the user. The wake vortex sheet strength is assumed to vary linearly over each of these panels, resulting in a quadratic variation of bound circulation. Panels are infinite in the streamwise direction. The theory is briefly summarized herein; sample results are given for multiple, nonplanar, lifting surfaces, and the use of the computer program is detailed in the appendixes.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true.
We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
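Prospective power analysis of the kind recommended in point (1) is simple to automate. The sketch below uses the normal approximation for a two-sided, two-sample comparison of means at a minimum biologically significant standardized effect size d; it slightly understates the exact noncentral-t answer and is illustrative only.

```python
import math
from statistics import NormalDist

def samples_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided two-sample
    comparison of means (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size."""
    z = NormalDist()
    n = 2.0 * ((z.inv_cdf(1.0 - alpha / 2.0) + z.inv_cdf(power))
               / effect_size) ** 2
    return math.ceil(n)
```

For example, detecting a medium standardized effect (d = 0.5) with 80% power at α = 0.05 requires roughly 63 samples per group under this approximation.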
Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun
2018-06-01
The era of big data is coming, and evidence-based medicine is attracting increasing attention to improve decision making in medical practice via integrating evidence from well designed and conducted clinical research. Meta-analysis is a statistical technique widely used in evidence-based medicine for analytically combining the findings from independent clinical trials to provide an overall estimate of a treatment's effectiveness. The sample mean and standard deviation are two commonly used statistics in meta-analysis, but some trials report the median, the minimum and maximum values, or sometimes the first and third quartiles instead. Thus, to pool results in a consistent format, researchers need to transform that information back to the sample mean and standard deviation. In this article, we investigate the optimal estimation of the sample mean for meta-analysis from both theoretical and empirical perspectives. A major drawback in the literature is that the sample size, despite its importance, is either ignored or used in a stepwise but somewhat arbitrary manner, e.g. in the famous method proposed by Hozo et al. We solve this issue by incorporating the sample size in a smoothly changing weight in the estimators to reach the optimal estimation. Our proposed estimators not only improve the existing ones significantly but also share the same virtue of simplicity. The real data application indicates that our proposed estimators are capable of serving as "rules of thumb" and will be widely applied in evidence-based medicine.
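A sketch of the kind of size-dependent weighting the article proposes for the {min, median, max, n} scenario. The weight w = 4/(4 + n^0.75) is the commonly quoted form and is shown here for illustration, contrasted with Hozo et al.'s classic estimator, which ignores n.

```python
def estimate_mean(minimum, median, maximum, n):
    """Estimate the sample mean from {min, median, max, n} using a
    smoothly changing, size-dependent weight (Luo et al.-style;
    the commonly quoted form w = 4 / (4 + n**0.75) is used here).
    As n grows, the weight on the extremes shrinks and the estimate
    approaches the median."""
    w = 4.0 / (4.0 + n ** 0.75)
    return w * (minimum + maximum) / 2.0 + (1.0 - w) * median

def estimate_mean_hozo(minimum, median, maximum):
    """Hozo et al.'s n-free estimator: (a + 2m + b) / 4."""
    return (minimum + 2.0 * median + maximum) / 4.0
```

For symmetric summaries both estimators agree with the median; they diverge for skewed data, where the size-dependent weight is the advantage the article demonstrates.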
Uddin, Rokon; Burger, Robert; Donolato, Marco; Fock, Jeppe; Creagh, Michael; Hansen, Mikkel Fougt; Boisen, Anja
2016-11-15
We present a biosensing platform for the detection of proteins based on agglutination of aptamer coated magnetic nano- or microbeads. The assay, from sample to answer, is integrated on an automated, low-cost microfluidic disc platform. This ensures fast and reliable results due to a minimum of manual steps involved. The detection of the target protein was achieved in two ways: (1) optomagnetic readout using magnetic nanobeads (MNBs); (2) optical imaging using magnetic microbeads (MMBs). The optomagnetic readout of agglutination is based on optical measurement of the dynamics of MNB aggregates whereas the imaging method is based on direct visualization and quantification of the average size of MMB aggregates. By enhancing magnetic particle agglutination via application of strong magnetic field pulses, we obtained identical limits of detection of 25pM with the same sample-to-answer time (15min 30s) using the two differently sized beads for the two detection methods. In both cases a sample volume of only 10µl is required. The demonstrated automation, low sample-to-answer time and portability of both detection instruments as well as integration of the assay on a low-cost disc are important steps for the implementation of these as portable tools in an out-of-lab setting. Copyright © 2016 Elsevier B.V. All rights reserved.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
Analysis and sizing of Mars aerobrake structure
NASA Technical Reports Server (NTRS)
Raju, I. S.; Craft, W. J.
1993-01-01
A cone-sphere aeroshell structure for aerobraking into Martian atmosphere is studied. Using this structural configuration, a space frame load-bearing structure is proposed. To generate this structure efficiently and to perform a variety of studies of several configurations, a mesh generator that utilizes only a few configurational parameters is developed. A finite element analysis program that analyzes space frame structures was developed. A sizing algorithm that arrives at a minimum mass configuration was developed and integrated into the finite element analysis program. A typical 135-ft-diam aerobrake configuration was analyzed and sized. The minimum mass obtained in this study using high modulus graphite/epoxy composite material members is compared with the masses obtained from two other aerobrake structures using lightweight erectable tetrahedral truss and part-spherical truss configurations. Excellent agreement for the minimum mass was obtained with the three different aerobrake structures. Also, the minimum mass using the present structure was obtained when the supports were not at the base but at about 75 percent of the base diameter.
The Power of Neuroimaging Biomarkers for Screening Frontotemporal Dementia
McMillan, Corey T.; Avants, Brian B.; Cook, Philip; Ungar, Lyle; Trojanowski, John Q.; Grossman, Murray
2014-01-01
Frontotemporal dementia (FTD) is a clinically and pathologically heterogeneous neurodegenerative disease that can result from either frontotemporal lobar degeneration (FTLD) or Alzheimer’s disease (AD) pathology. It is critical to establish statistically powerful biomarkers that can achieve substantial cost-savings and increase feasibility of clinical trials. We assessed three broad categories of neuroimaging methods to screen underlying FTLD and AD pathology in a clinical FTD series: global measures (e.g., ventricular volume), anatomical volumes of interest (VOIs) (e.g., hippocampus) using a standard atlas, and data-driven VOIs using Eigenanatomy. We evaluated clinical FTD patients (N=93) with cerebrospinal fluid, gray matter (GM) MRI, and diffusion tensor imaging (DTI) to assess whether they had underlying FTLD or AD pathology. Linear regression was performed to identify the optimal VOIs for each method in a training dataset and then we evaluated classification sensitivity and specificity in an independent test cohort. Power was evaluated by calculating minimum sample sizes (mSS) required in the test classification analyses for each model. The data-driven VOI analysis using a multimodal combination of GM MRI and DTI achieved the greatest classification accuracy (89% sensitive; 89% specific) and required a lower minimum sample size (N=26) relative to anatomical VOI and global measures. We conclude that a data-driven VOI approach employing Eigenanatomy provides more accurate classification, benefits from increased statistical power in unseen datasets, and therefore provides a robust method for screening underlying pathology in FTD patients for entry into clinical trials. PMID:24687814
76 FR 56120 - Atlantic Highly Migratory Species; North and South Atlantic Swordfish Quotas
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... Contracting Parties. Contracting Parties may restrict fishermen to a minimum size of 25 kg live weight OR 125... restrict fishermen to a minimum size of 15 kg live weight OR 119 cm LJFL with no tolerance. In 2009, NMFS... quota, among other things. Per the ATCA, the United States is obligated to implement ICCAT-approved...
Cheng, Jiyi; Gu, Chenglin; Zhang, Dapeng; Wang, Dien; Chen, Shih-Chi
2016-04-01
In this Letter, we present an ultrafast nonmechanical axial scanning method for two-photon excitation (TPE) microscopy based on binary holography using a digital micromirror device (DMD), achieving a scanning rate of 4.2 kHz, scanning range of ∼180 μm, and scanning resolution (minimum step size) of ∼270 nm. Axial scanning is achieved by projecting the femtosecond laser to a DMD programmed with binary holograms of spherical wavefronts of increasing/decreasing radii. To guide the scanner design, we have derived the parametric relationships between the DMD parameters (i.e., aperture and pixel size), and the axial scanning characteristics, including (1) maximum optical power, (2) minimum step size, and (3) scan range. To verify the results, the DMD scanner is integrated with a custom-built TPE microscope that operates at 60 frames per second. In the experiment, we scanned a pollen sample via both the DMD scanner and a precision z-stage. The results show the DMD scanner generates images of equal quality throughout the scanning range. The overall efficiency of the TPE system was measured to be ∼3%. With the high scanning rate, the DMD scanner may find important applications in random-access imaging or high-speed volumetric imaging that enables visualization of highly dynamic biological processes in 3D with submillisecond temporal resolution.
Experimental Effects on IR Reflectance Spectra: Particle Size and Morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiswenger, Toya N.; Myers, Tanya L.; Brauer, Carolyn S.
For geologic and extraterrestrial samples it is known that both particle size and morphology can have strong effects on the species’ infrared reflectance spectra. Due to such effects, the reflectance spectra cannot be predicted from the absorption coefficients alone. This is because reflectance is both a surface as well as a bulk phenomenon, incorporating both dispersion as well as absorption effects. The same spectral features can even be observed as either a maximum or minimum. The complex effects depend on particle size and preparation, as well as the relative amplitudes of the optical constants n and k, i.e. the real and imaginary components of the complex refractive index. While somewhat oversimplified, upward-going amplitude in the reflectance spectrum usually results from surface scattering, i.e. rays that have been reflected from the surface without penetration, whereas downward-going peaks are due to either absorption or volume scattering, i.e. rays that have penetrated or refracted into the sample interior and are not reflected. While the effects are well known, we report seminal measurements of reflectance along with quantified particle size of the samples, the sizing obtained from optical microscopy measurements. The size measurements are correlated with the reflectance spectra in the 1.3 – 16 micron range for various bulk materials that have a combination of strong and weak absorption bands in order to understand the effects on the spectral features as a function of the mean grain size of the sample. We report results for both sodium sulfate Na2SO4 as well as ammonium sulfate (NH4)2SO4; the optical constants have been measured for (NH4)2SO4. To go a step further from the laboratory to the field, we explore our understanding of particle size effects on reflectance spectra in the field using standoff detection. This has helped identify weaknesses and strengths in detection using standoff distances of up to 160 meters away from the target.
The studies have shown that particle size has an enormous influence on the measured reflectance spectra of such materials; successful identification requires sufficient, representative reflectance data to include the particle sizes of interest.
Christiansen, Heather M; Hussey, Nigel E; Wintner, Sabine P; Cliff, Geremy; Dudley, Sheldon F J; Fisk, Aaron T
2014-03-15
Bulk stable isotope analysis (SIA) provides an important tool for the study of animal ecology. Elasmobranch vertebral centra can be serially sampled to obtain an isotopic history of an individual over ontogeny. The measured total δ13C value, however, may be misinterpreted due to the inclusion of the 13C-rich inorganic portion. Hydrochloric acid (HCl) is commonly used to remove the inorganic portion of hydroxyapatite structures before undertaking SIA, but more recently ethylenediaminetetraacetic acid (EDTA) has been recommended for elasmobranch vertebrae. These acid treatments may introduce uncertainty on measured δ13C and δ15N values above instrument precision, and the effect of small sample size remains untested for elasmobranch vertebrae. Using a non-dilution program on an isotope ratio mass spectrometer, the minimum sample weight of vertebrae required to obtain accurate isotopic values was determined for three shark species: white (Carcharodon carcharias), tiger (Galeocerdo cuvier), and sand tiger (Carcharias taurus). To examine whether acid treatment completely removes the inorganic component of the vertebrae or whether the technique introduces its own uncertainty on measured δ13C and δ15N values, vertebrae samples were analyzed untreated and following EDTA treatment. The minimum sample weight required for accurate stable isotope values and the percentage sample yield following EDTA treatment varied within and among species. After EDTA treatment, white shark vertebrae were all enriched in 13C and depleted in 15N, tiger shark vertebrae showed both enrichment and depletion of 13C and 15N, and sand tiger shark vertebrae were all depleted in 13C and 15N. EDTA treatment of elasmobranch vertebrae produces unpredictable effects (i.e. non-linear and non-correctable) among species in both the percentage sample yield and the measured δ13C and δ15N values.
Prior to initiating a large-scale study, we strongly recommend investigating (i) the minimum weight of vertebral material required to obtain consistent isotopic values and (ii) the effects of EDTA treatment, specific to the study species and the isotope ratio mass spectrometer employed. Copyright © 2014 John Wiley & Sons, Ltd.
Transition to collective oscillations in finite Kuramoto ensembles
NASA Astrophysics Data System (ADS)
Peter, Franziska; Pikovsky, Arkady
2018-03-01
We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.
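A minimal finite-N Kuramoto simulation that returns the modulus of the complex order parameter discussed above. The mean-field form of the coupling is mathematically equivalent to the all-to-all sum; the ensemble size, coupling strength, and frequency distribution below are illustrative, not the paper's settings.

```python
import cmath
import math
import random

def kuramoto_order_parameter(n=50, k=4.0, dt=0.05, steps=2000, seed=7):
    """Euler-integrate a finite Kuramoto ensemble using the mean-field
    form  dtheta_i/dt = omega_i + K * r * sin(psi - theta_i),
    where r * exp(i*psi) = (1/n) * sum_j exp(i*theta_j) is the complex
    order parameter; return its final modulus r."""
    random.seed(seed)
    omega = [random.gauss(0.0, 0.5) for _ in range(n)]            # natural frequencies
    theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + k * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    z = sum(cmath.exp(1j * t) for t in theta) / n
    return abs(z)
```

Tracking the minimum of r over time (rather than its final value), across many frequency samples, is the diagnostic the paper uses to decide whether the collective mode is well defined.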
NASA Technical Reports Server (NTRS)
Heinemann, K.
1985-01-01
The interaction of 100 and 200 keV electron beams with amorphous alumina, titania, and aluminum nitride substrates and nanometer-size palladium particulate deposits was investigated for the two extreme cases of (1) large-area electron-beam flash-heating and (2) small-area high-intensity electron-beam irradiation. The former simulates a short-term heating effect with minimum electron irradiation exposure, the latter simulates high-dosage irradiation with minimum heating effect. All alumina and titania samples responded to the flash-heating treatment with significant recrystallization. However, the size, crystal structure, shape, and orientation of the grains depended on the type and thickness of the films and the thickness of the Pd deposit. High-dosage electron irradiation also readily crystallized the alumina substrate films but did not affect the titania films. The alumina recrystallization products were usually either all in the alpha phase, or they were a mixture of small grains in a number of low-temperature phases including gamma-, delta-, kappa-, beta-, and theta-alumina. Palladium deposits reacted heavily with the alumina substrates during either treatment, but were affected very little when supported on titania. Both treatments had the same, less prominent localized crystallization effect on aluminum nitride films.
Reliability of risk-adjusted outcomes for profiling hospital surgical quality.
Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B
2014-05-01
Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. 
Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.
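A minimal signal-to-noise sketch of the reliability quantity the study estimates. It assumes a simple binomial noise model and a hypothetical between-hospital variance tau²; the article itself uses hierarchical modeling of registry data, so this is illustrative only.

```python
def outcome_reliability(between_var, event_rate, caseload):
    """Signal-to-noise reliability of a hospital's observed event rate:
    rho = tau^2 / (tau^2 + p * (1 - p) / n),
    where tau^2 (between_var) is the true between-hospital variance,
    p the event rate, and n the annual caseload. Ranges from 0
    (completely unreliable) to 1 (perfectly reliable)."""
    noise = event_rate * (1.0 - event_rate) / caseload
    return between_var / (between_var + noise)
```

The form makes the article's two drivers explicit: reliability rises with caseload n and with how far the event rate sits from the low-frequency extremes, which is why rare outcomes such as mortality profile hospitals so poorly.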
Dental size variation in the Atapuerca-SH Middle Pleistocene hominids.
Bermúdez de Castro, J M; Sarmiento, S; Cunha, E; Rosas, A; Bastir, M
2001-09-01
The Middle Pleistocene Atapuerca-Sima de los Huesos (SH) site in Spain has yielded the largest sample of fossil hominids so far found from a single site and belonging to the same biological population. The SH dental sample includes a total of 452 permanent and deciduous teeth, representing a minimum of 27 individuals. We present a study of the dental size variation in these hominids, based on the analysis of the mandibular permanent dentition: lateral incisors, n=29; canines, n=27; third premolars, n=30; fourth premolars, n=34; first molars, n=38; second molars, n=38. We have obtained the buccolingual diameter and the crown area (measured on occlusal photographs) of these teeth, and used the bootstrap method to assess the amount of variation in the SH sample compared with the variation of a modern human sample from the Museu Antropologico of the Universidade of Coimbra (Portugal). The SH hominids have, in general terms, a dental size variation higher than that of the modern human sample. The analysis is especially conclusive for the canines. Furthermore, we have estimated the degree of sexual dimorphism of the SH sample by obtaining male and female dental subsamples by means of sexing the large sample of SH mandibular specimens. We obtained the index of sexual dimorphism (ISD=male mean/female mean) and the values were compared with those obtained from the sexed modern human sample from Coimbra, and with data found in the literature concerning several recent human populations. In all tooth classes the ISD of the SH hominids was higher than that of modern humans, but the differences were generally modest, except for the canines, thus suggesting that canine size sexual dimorphism in Homo heidelbergensis was probably greater than that of modern humans. Since the approach of sexing fossil specimens has some obvious limitations, these results should be assessed with caution. 
Additional data from SH and other European Middle Pleistocene sites would be necessary to test this hypothesis. Copyright 2001 Academic Press.
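A bootstrap comparison of size variation like the one described above can be sketched in a few lines of Python. The measurements, sample sizes, and the use of the coefficient of variation (SD/mean) as the variation statistic are illustrative assumptions, not the authors' exact procedure:

```python
import random
import statistics

def isd(male, female):
    """Index of sexual dimorphism: male mean / female mean."""
    return statistics.mean(male) / statistics.mean(female)

def bootstrap_cv(sample, n_boot=2000, seed=0):
    """Bootstrap distribution of the coefficient of variation (SD/mean),
    a scale-free measure of size variation that can be compared between
    a fossil sample and a modern reference sample."""
    rng = random.Random(seed)
    n = len(sample)
    cvs = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in range(n)]
        cvs.append(statistics.pstdev(resample) / statistics.mean(resample))
    return cvs
```

Comparing the bootstrap CV distribution of a fossil sample against that of a modern reference sample indicates whether the fossil sample is more variable than expected for a single population.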
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Dolphin and Wahoo Fishery Off the Atlantic States § 622.275 Size limits. All size limits in this section are minimum size...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range... the apples determined by the smallest opening through which it will pass. Application of Standards ...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range... the apples determined by the smallest opening through which it will pass. Application of Standards ...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Apples for Processing Size § 51.344 Size. (a) The minimum and maximum sizes or range... the apples determined by the smallest opening through which it will pass. Application of Standards ...
Three Essays In and Tests of Theoretical Urban Economics
NASA Astrophysics Data System (ADS)
Zhao, Weihua
This dissertation consists of three essays on urban economics. The three essays are related to urban spatial structure change, energy consumption, greenhouse gas emissions, and housing redevelopment. Chapter 1 answers the question: Does the classic Standard Urban Model (SUM) still describe the growth of cities? Chapter 2 derives the implications of telework for urban spatial structure, energy consumption, and greenhouse gas emissions. Chapter 3 investigates the long-run effects of minimum lot size zoning on neighborhood redevelopment. Chapter 1 identifies a new implication of the classic SUM, the "unitary elasticity property" (UEP): the sum of the elasticity of central density and the elasticity of land area with respect to population change is approximately equal to unity. When this implication of the SUM is tested, it fits US cities fairly well. Further analysis demonstrates that topographic barriers and the age of the housing stock are the key factors explaining deviations from the UEP. Chapter 2 develops a numerical urban simulation model with households that are able to telework to investigate the urban form, congestion, energy consumption, and greenhouse gas emission implications of telework. Simulation results suggest that by reducing transportation costs, telework causes sprawl, with associated longer commutes and consumption of larger homes, both of which increase energy consumption. Overall effects depend on who captures the gains from telework (workers versus firms), urban land use regulation such as height limits or greenbelts, and the fraction of workers participating in telework. The net effects of telework on energy use and GHG emissions are generally negligible. Chapter 3 applies dynamic programming to investigate the long-run effects of minimum lot size zoning on neighborhood redevelopment.
With numerical simulation, comparative dynamic results show that minimum lot size zoning can delay initial land conversion and slow demolition and housing redevelopment. Initially, minimum lot size zoning is not binding. However, as the city grows, it becomes binding and can effectively distort housing supply. It can lower both the floor area ratio and residential density, and reduce aggregate housing supply. Overall, minimum lot size zoning can stabilize the paths of structure/land ratios, housing service levels, structure density, and housing prices. In addition, minimum lot size zoning gives developers more incentive to maintain buildings, slows structure deterioration, and raises the minimum level of housing services provided over the life cycle of development.
Li, Peng; Redden, David T.
2014-01-01
The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently inflates type I error rates in hypothesis testing. This limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analysis of CRTs with few clusters even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and it is robust to moderate variation in cluster sizes. However, in cases with large variation in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters needed using the t test with the KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because it requires fewer assumptions and is robust to misspecification of the covariance structure. PMID:25345738
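The kind of cluster-count calculation the paper formalizes can be illustrated with a generic design-effect formula. This normal-approximation sketch is a hedged stand-in, not the authors' KC-corrected t-test formula:

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
    """Approximate number of clusters per arm for a two-arm CRT with a
    binary outcome, inflating the variance by the design effect
    1 + (m - 1) * icc. Generic z-based sketch, not the paper's formula."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    deff = 1 + (m - 1) * icc            # design effect for cluster size m
    var = (p1 * (1 - p1) + p2 * (1 - p2)) * deff / m
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)
```

The paper's t-based KC-corrected formula would yield somewhat larger cluster counts than this z approximation when the number of clusters is small.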
Tracey, Amanda J; Aarssen, Lonnie W
2014-01-01
The selection consequences of competition in plants have been traditionally interpreted based on a “size-advantage” hypothesis – that is, under intense crowding/competition from neighbors, natural selection generally favors capacity for a relatively large plant body size. However, this conflicts with abundant data, showing that resident species body size distributions are usually strongly right-skewed at virtually all scales within vegetation. Using surveys within sample plots and a neighbor-removal experiment, we tested: (1) whether resident species that have a larger maximum potential body size (MAX) generally have more successful local individual recruitment, and thus greater local abundance/density (as predicted by the traditional size-advantage hypothesis); and (2) whether there is a general between-species trade-off relationship between MAX and capacity to produce offspring when body size is severely suppressed by crowding/competition – that is, whether resident species with a larger MAX generally also need to reach a larger minimum reproductive threshold size (MIN) before they can reproduce at all. The results showed that MIN had a positive relationship with MAX across resident species, and local density – as well as local density of just reproductive individuals – was generally greater for species with smaller MIN (and hence smaller MAX). In addition, the cleared neighborhoods of larger target species (which had relatively large MIN) generally had – in the following growing season – a lower ratio of conspecific recruitment within these neighborhoods relative to recruitment of other (i.e., smaller) species (which had generally smaller MIN). 
These data are consistent with an alternative hypothesis based on a ‘reproductive-economy-advantage’ – that is, superior fitness under competition in plants generally requires not larger potential body size, but rather superior capacity to recruit offspring that are in turn capable of producing grand-offspring – and hence transmitting genes to future generations – despite intense and persistent (cross-generational) crowding/competition from near neighbors. Selection for the latter is expected to favor relatively small minimum reproductive threshold size and hence – as a tradeoff – relatively small (not large) potential body size. PMID:24772274
Entropic determination of the phase transition in a coevolving opinion-formation model.
Burgos, E; Hernández, Laura; Ceva, H; Perazzo, R P J
2015-03-01
We study an opinion-formation model by means of a coevolving complex network in which the vertices represent individuals, characterized by their evolving opinions, and the edges represent the interactions among them. The network adapts to the spreading of opinions in two ways: connected agents not only interact and eventually change their thinking, but an agent may also rewire one of its links to an agent holding the same opinion as its own. The dynamics, based on a global majority rule, depends on an external parameter that controls the plasticity of the network. We show how the information entropy associated with the distribution of group sizes allows us to locate the phase transition between a phase of full consensus and another in which different opinions coexist. We also determine the minimum size of the most informative sampling. At the transition, the distribution of the sizes of groups holding the same opinion is scale free.
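The entropy criterion can be illustrated with a minimal sketch. Here groups are simplified to agents sharing the same opinion label, whereas the paper's groups are connected network components; the function is only meant to show how the entropy vanishes at full consensus:

```python
from collections import Counter
from math import log

def opinion_entropy(opinions):
    """Shannon entropy of the distribution of opinion-group sizes.
    Zero at full consensus; positive when several opinions coexist."""
    counts = Counter(opinions).values()
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts)
```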
Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use
Arthur, Steve M.; Schwartz, Charles C.
1999-01-01
We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly-selected subsets of the GPS data to examine the effects of sample size on accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km² (x̄ = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km² (x̄ = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km² (x̄ = 224) for radiotracking data and 16-130 km² (x̄ = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area <1%/additional location) and precise (CV < 50%). Although the radiotracking data appeared unbiased, except for the relationship between area and sample size, these data failed to indicate some areas that likely were important to bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples.
Investigators who use home range estimates in statistical tests should account for the variability of those estimates. GPS-equipped collars can facilitate collection of larger samples of unbiased data and improve the accuracy and precision of home range estimates.
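The minimum convex polygon estimator used above reduces to computing the area of the convex hull of the location fixes. A self-contained sketch (planar coordinates assumed, e.g. UTM; units of area follow the input units):

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Area of the minimum convex polygon via the shoelace formula."""
    hull = convex_hull(points)
    s = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2
```

The asymptotic behavior reported above can be reproduced by evaluating `mcp_area` on random subsets of increasing size and tracking the change in area per added location.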
Bee (Hymenoptera: Apoidea) Diversity and Sampling Methodology in a Midwestern USA Deciduous Forest.
McCravy, Kenneth W; Ruholl, Jared D
2017-08-04
Forests provide potentially important bee habitat, but little research has been done on forest bee diversity and the relative effectiveness of bee sampling methods in this environment. Bee diversity and sampling methodology were studied in an upland oak-hickory forest in Illinois, USA, using elevated and ground-level pan traps, Malaise traps, and vane traps. In total, 854 bees and 55 bee species were collected. Elevated pan traps collected the greatest number of bees (473), but ground-level pan traps collected greater species diversity (based on Simpson's diversity index) than did elevated pan traps. Elevated and ground-level pan traps collected the greatest bee species richness, with 43 and 39 species, respectively. An estimated sample size increase of over 18-fold would be required to approach minimum asymptotic richness using ground-level pan traps. Among pan trap colors/elevations, elevated yellow pan traps collected the greatest number of bees (266) but the lowest diversity. Malaise traps were relatively ineffective, collecting only 17 bees. Vane traps collected relatively low species richness (14 species), and Chao1 and abundance coverage estimators suggested that minimum asymptotic species richness was approached for that method. Bee species composition differed significantly among elevated pan traps, ground-level pan traps, and vane traps. Indicator species were significantly associated with each of these trap types, as well as with particular pan trap colors/elevations. These results indicate that Midwestern deciduous forests provide important bee habitat and that the performance of common bee sampling methods varies substantially in this environment.
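The two diversity statistics mentioned above, Simpson's diversity index and the Chao1 minimum asymptotic richness estimator, have compact standard forms. A sketch from per-species abundance counts (standard textbook formulas, not code from the study):

```python
def simpson_diversity(counts):
    """Simpson's diversity index, 1 - sum(p_i^2), from abundance counts."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def chao1(counts):
    """Chao1 minimum asymptotic species richness estimator:
    S_obs + f1^2 / (2 * f2), with a bias-corrected form when f2 = 0."""
    s_obs = len(counts)
    f1 = sum(1 for c in counts if c == 1)  # singleton species
    f2 = sum(1 for c in counts if c == 2)  # doubleton species
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 ** 2 / (2 * f2)
```

When Chao1 is close to the observed richness, as reported for vane traps above, the sample has few singletons and the species list is approaching completeness for that method.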
LQAS usefulness in an emergency department.
de la Orden, Susana Granado; Rodríguez-Rieiro, Cristina; Sánchez-Gómez, Amaya; García, Ana Chacón; Hernández-Fernández, Tomás; Revilla, Angel Abad; Escribano, Dolores Vigil; Pérez, Paz Rodríguez
2008-01-01
This paper aims to explore the applicability and usefulness of lot quality assurance sampling (LQAS) for evaluating quality indicators in a hospital emergency department (ED) and to determine the degree of compliance with quality standards under this sampling method. Descriptive observational research in the Hospital General Universitario Gregorio Marañón (HGUGM) ED. Patients older than 15 years, diagnosed with dyspnoea, chest pain, urinary tract colic or bronchial asthma attending the HGUGM ED from December 2005 to May 2006, and patients admitted during 2005 with exacerbation of chronic obstructive pulmonary disease or acute meningitis were included in the study. Sample sizes were calculated using LQAS. Different quality indicators, one for each process, were selected. The upper (acceptable quality level, AQL) and lower (rejectable quality level, RQL) thresholds were established with risks alpha = 5 per cent and beta = 20 per cent, and the minimum number of observations required was calculated. It was impossible to reach the necessary sample size for bronchial asthma and urinary tract colic patients. For chest pain, acute exacerbation of chronic obstructive pulmonary disease, and acute meningitis, quality problems were detected. The lot was accepted only for the dyspnoea indicator. These results demonstrate the usefulness of LQAS for detecting quality problems in the management of health processes in one hospital's ED. The LQAS could complement traditional sampling methods.
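An LQAS plan of the kind used above is characterized by a sample size n and an acceptance number d, with the AQL and RQL thresholds fixing the alpha and beta risks. A single-stage sketch of the operating risks via the binomial distribution (illustrative; the study's exact plans are not reproduced):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def lqas_risks(n, d, aql, rql):
    """Operating risks of an LQAS plan (sample n, accept if defectives <= d):
    alpha = P(reject | true defect rate at AQL),
    beta  = P(accept | true defect rate at RQL)."""
    alpha = 1 - binom_cdf(d, n, aql)
    beta = binom_cdf(d, n, rql)
    return alpha, beta
```

Choosing the smallest n and d for which alpha ≤ 0.05 and beta ≤ 0.20 recovers the minimum number of observations the study computes per indicator.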
Riccioni, Giulia; Landi, Monica; Ferrara, Giorgia; Milano, Ilaria; Cariani, Alessia; Zane, Lorenzo; Sella, Massimo; Barbujani, Guido; Tinti, Fausto
2010-01-01
Fishery genetics have greatly changed our understanding of population dynamics and structuring in marine fish. In this study, we show that the Atlantic Bluefin tuna (ABFT, Thunnus thynnus), an oceanic predatory species exhibiting highly migratory behavior, large population size, and high potential for dispersal during early life stages, displays significant genetic differences over space and time, both at the fine and large scales of variation. We compared microsatellite variation of contemporary (n = 256) and historical (n = 99) biological samples of ABFTs of the central-western Mediterranean Sea, the latter dating back to the early 20th century. Measures of genetic differentiation and a general heterozygote deficit suggest that differences exist among population samples, both now and 96–80 years ago. Thus, ABFTs do not represent a single panmictic population in the Mediterranean Sea. Statistics designed to infer changes in population size, both from current and past genetic variation, suggest that some Mediterranean ABFT populations, although still not severely reduced in their genetic potential, might have suffered from demographic declines. The short-term estimates of effective population size are straddled on the minimum threshold (effective population size = 500) indicated to maintain genetic diversity and evolutionary potential across several generations in natural populations. PMID:20080643
Strategies for Improving Power in School-Randomized Studies of Professional Development.
Kelcey, Ben; Phelps, Geoffrey
2013-12-01
Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when and the extent to which there is evidence that covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power. Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment. The analyses suggested that teachers' knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies. © The Author(s) 2014.
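The minimum detectable effect size calculations described above follow the standard two-level cluster-randomized formula, in which covariance adjustment enters through explained-variance terms R². A normal-approximation sketch (the multiplier ignores the small-sample t correction; parameter names are illustrative):

```python
from math import sqrt
from statistics import NormalDist

def mdes_cluster(J, n, icc, r2_between=0.0, r2_within=0.0,
                 P=0.5, alpha=0.05, power=0.8):
    """MDES for a two-level cluster-randomized design: J schools of n
    teachers each, proportion P assigned to treatment; covariate
    adjustment reduces between- and within-school variance via R^2."""
    m = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    var = (icc * (1 - r2_between) / (P * (1 - P) * J)
           + (1 - icc) * (1 - r2_within) / (P * (1 - P) * J * n))
    return m * sqrt(var)
```

The paper's central point falls out directly: a pretest with a large between-school R² shrinks the first variance term, which dominates when the ICC is substantial, and hence shrinks the MDES for a fixed number of schools.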
Dou, Haiyang; Lee, Yong-Ju; Jung, Euo Chang; Lee, Byung-Chul; Lee, Seungho
2013-08-23
In field-flow fractionation (FFF), there is a 'steric transition' phenomenon in which the sample elution mode changes from the normal mode to the steric/hyperlayer mode. Accurate analysis by FFF requires an understanding of the steric transition, particularly when the sample has a broad size distribution, for which the combined effect of different modes may become complicated to interpret. In this study, the steric transition in asymmetrical flow FFF (AF4) was studied using polystyrene (PS) latex beads. The retention ratio (R) gradually decreases as particle size increases (normal mode) and reaches a minimum (Ri) at a diameter of around 0.5 μm, after which R increases with increasing diameter (steric/hyperlayer mode). It was found that the size-based selectivity (Sd) tends to increase as the channel thickness (w) increases. The retention behavior of cyclo-1,3,5-trimethylene-2,4,6-trinitramine (commonly called 'research department explosive' (RDX)) particles in AF4 was investigated by varying experimental parameters including w and flow rates. AF4 showed good reproducibility in size determination of RDX particles, with a relative standard deviation of 4.1%. The reliability of the separation obtained by AF4 was evaluated by transmission electron microscopy (TEM). Copyright © 2013 Elsevier B.V. All rights reserved.
Veale, David; Miles, Sarah; Bramley, Sally; Muir, Gordon; Hodsoll, John
2015-06-01
Objective: To systematically review and create nomograms of flaccid and erect penile size measurements. Key eligibility criteria: measurement of penis size by a health professional using a standard procedure; a minimum of 50 participants per sample. Exclusion criteria: samples with a congenital or acquired penile abnormality, previous surgery, complaint of small penis size or erectile dysfunction. Synthesis methods: calculation of a weighted mean and pooled standard deviation (SD) and simulation of 20,000 observations from the normal distribution to generate nomograms of penis size. Results: Nomograms for flaccid pendulous [n = 10,704, mean (SD) 9.16 (1.57) cm] and stretched length [n = 14,160, mean (SD) 13.24 (1.89) cm], erect length [n = 692, mean (SD) 13.12 (1.66) cm], flaccid circumference [n = 9407, mean (SD) 9.31 (0.90) cm], and erect circumference [n = 381, mean (SD) 11.66 (1.10) cm] were constructed. The most consistent and strongest significant correlation was between flaccid stretched or erect length and height, which ranged from r = 0.2 to 0.6. Limitations: relatively few erect measurements were conducted in a clinical setting, and the greatest variability between studies was seen with flaccid stretched length. Conclusions: Penis size nomograms may be useful in clinical and therapeutic settings to counsel men and for academic research. © 2014 The Authors. BJU International © 2014 BJU International.
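The synthesis procedure, a weighted mean, a pooled SD, and simulation of 20,000 normal observations, can be sketched as follows. The pooled-variance step here ignores between-study mean differences, a simplification of the paper's method:

```python
import random

def pooled(studies):
    """Sample-size-weighted mean and pooled SD across (mean, sd, n) tuples.
    Pooling uses only within-study variances (a simplification)."""
    total_n = sum(n for _, _, n in studies)
    mean = sum(m * n for m, _, n in studies) / total_n
    var = sum(sd ** 2 * (n - 1) for _, sd, n in studies) / (total_n - len(studies))
    return mean, var ** 0.5

def nomogram(mean, sd, n_sim=20000, seed=1):
    """Simulate normal observations and return selected percentiles."""
    rng = random.Random(seed)
    draws = sorted(rng.gauss(mean, sd) for _ in range(n_sim))
    return {p: draws[int(p / 100 * n_sim)] for p in (5, 25, 50, 75, 95)}
```

Feeding the pooled mean and SD for, say, stretched length into `nomogram` yields the percentile curves that make up the published charts.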
2010-01-01
Background Breeding programs are usually reluctant to evaluate and use germplasm accessions other than the elite materials belonging to their advanced populations. The concept of core collections has been proposed to facilitate the access of potential users to samples of small sizes, representative of the genetic variability contained within the gene pool of a specific crop. The eventual large size of a core collection perpetuates the problem it was originally proposed to solve. The present study suggests that, in addition to the classic core collection concept, thematic core collections should also be developed for a specific crop, composed of a limited number of accessions, with a manageable size. Results The thematic core collection obtained meets the minimum requirements for a core sample: maintenance of at least 80% of the allelic richness of the thematic collection with approximately 15% of its size. The method was compared with other methodologies based on the M strategy, and also with a core collection generated by random sampling. Higher proportions of retained alleles (in a core collection of equal size) or similar proportions of retained alleles (in a core collection of smaller size) were detected in the two methods based on the M strategy compared to the proposed methodology. Core sub-collections constructed by different methods were compared regarding the increase or maintenance of phenotypic diversity. No change in phenotypic diversity, as measured by the trait "Weight of 100 Seeds", was detected for the tested sampling methods. Effects on linkage disequilibrium between unlinked microsatellite loci, due to sampling, are discussed. Conclusions Building a thematic core collection was here defined by prior selection of accessions that are diverse for the trait of interest, followed by pairwise genetic distances estimated by DNA polymorphism analysis at molecular marker loci.
The resulting thematic core collection potentially reflects the maximum allele richness with the smallest sample size from a larger thematic collection. As an example, we used the development of a thematic core collection for drought tolerance in rice. It is expected that such thematic collections increase the use of germplasm by breeding programs and facilitate the study of the traits under consideration. The definition of a core collection to study drought resistance is a valuable contribution towards the understanding of the genetic control and the physiological mechanisms involved in water use efficiency in plants. PMID:20576152
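Allele-retention-driven selection such as the M strategy mentioned above can be approximated by a greedy set-cover heuristic: repeatedly add the accession contributing the most not-yet-covered alleles. This is an illustrative analogue, not the software used in the study:

```python
def greedy_core(genotypes, target_fraction=0.15):
    """Greedy analogue of M-strategy core selection.
    `genotypes` maps accession name -> set of observed alleles; stop once
    the core reaches target_fraction of the full collection."""
    target = max(1, round(len(genotypes) * target_fraction))
    remaining = dict(genotypes)
    covered, core = set(), []
    while len(core) < target and remaining:
        # pick the accession adding the most new alleles to the core
        best = max(remaining, key=lambda acc: len(remaining[acc] - covered))
        core.append(best)
        covered |= remaining.pop(best)
    return core, covered
```

Comparing `len(covered)` against the total allele count gives the retained-allele proportion used above to judge core quality (e.g., the 80%-of-richness criterion).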
Moerbeek, Mirjam
2018-01-01
Background: This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. One type of health professional delivers one treatment, and the other type delivers two treatments; hence, this design is a combination of a nested and a crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods: The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at the lowest cost. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically, and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results: Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions.
Conclusion: This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807
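Power for one pairwise comparison in a design like this can be sketched by inflating the outcome variance with the design effect 1 + (n - 1)ρ for patients nested within health professionals. This simplified normal-approximation sketch ignores the crossed aspect of the design (one professional delivering two treatments):

```python
from math import sqrt
from statistics import NormalDist

def pairwise_power(delta, k1, n1, k2, n2, icc, sigma2=1.0, alpha=0.05):
    """Approximate power for comparing two conditions, each delivered by
    k_j health professionals treating n_j patients; delta is the mean
    difference in outcome units, sigma2 the total outcome variance."""
    se = sqrt(sigma2 * (1 + (n1 - 1) * icc) / (k1 * n1)
              + sigma2 * (1 + (n2 - 1) * icc) / (k2 * n2))
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_a - delta / se)
```

Evaluating this for each of the three pairwise comparisons, and increasing the professional and patient counts until every comparison reaches the desired power, mirrors the numerical search the article describes.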
7 CFR 51.2836 - Size classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters Maximum...
7 CFR 51.2836 - Size classifications.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Size classifications. 51.2836 Section 51.2836...-Granex-Grano and Creole Types) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum...
7 CFR 51.2836 - Size classifications.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Size classifications. 51.2836 Section 51.2836...-Granex-Grano and Creole Types) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum...
7 CFR 51.2836 - Size classifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters Maximum...
7 CFR 51.2836 - Size classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.2836 Section 51.2836...) Size Classifications § 51.2836 Size classifications. The size of onions may be specified in accordance with one of the following classifications. Size designation Minimum diameter Inches Millimeters Maximum...
Vanamail, P; Subramanian, S; Srividya, A; Ravi, R; Krishnamoorthy, K; Das, P K
2006-08-01
Lot quality assurance sampling (LQAS) with a two-stage sampling plan was applied for rapid monitoring of coverage after every round of mass drug administration (MDA). A Primary Health Centre (PHC) consisting of 29 villages in Thiruvannamalai district, Tamil Nadu, was selected as the study area. Two threshold levels of coverage were used: threshold A (maximum: 60%; minimum: 40%) and threshold B (maximum: 80%; minimum: 60%). Based on these thresholds, one sampling plan each for A and B was derived, with the necessary sample size and the number of allowable defectives (i.e., individuals who have not received the drug). Using data generated through simple random sampling of individuals (SRSI) of 1,750 individuals in the study area, LQAS was validated with the above two sampling plans for its diagnostic and field applicability. Simultaneously, a household survey (SRSH) was conducted for validation and cost-effectiveness analysis. Based on the SRSH survey, the estimated coverage was 93.5% (CI: 91.7-95.3%). LQAS with threshold A revealed that by sampling a maximum of 14 individuals and allowing four defectives, the coverage was ≥60% in >90% of villages at the first stage. Similarly, with threshold B, by sampling a maximum of nine individuals and allowing four defectives, the coverage was ≥80% in >90% of villages at the first stage. These analyses suggest that the sampling plan (14, 4, 52, 25) of threshold A may be adopted in MDA to assess whether a minimum coverage of 60% has been achieved. However, to achieve the goal of elimination, the sampling plan (9, 4, 42, 29) of threshold B can identify villages in which the coverage is <80% so that remedial measures can be taken. Cost-effectiveness analysis showed that both options of LQAS are more cost-effective than SRSH for detecting a village with a given level of coverage. The cost per village was US$76.18 under SRSH. The cost of LQAS was US$65.81 and US$55.63 per village for thresholds A and B, respectively.
The total financial cost of classifying a village correctly at the given threshold level could thus be reduced by 14% (threshold A) and 26% (threshold B) relative to the conventional SRSH method.
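The first-stage decision rule behind a plan such as (14, 4, 52, 25) can be sketched as a sequential check: sample individuals one at a time and reject the village as soon as the allowable number of non-recipients is exceeded. The function below is an illustrative reading of such a plan, not the exact field protocol:

```python
def classify_village(received, n_max, d_max):
    """First-stage LQAS decision for MDA coverage monitoring: sample
    individuals sequentially; reject as soon as more than d_max sampled
    individuals have NOT received the drug, otherwise accept after n_max.
    Returns the decision and the number of individuals actually sampled."""
    defectives = 0
    for i, got_drug in enumerate(received[:n_max], start=1):
        if not got_drug:
            defectives += 1
            if defectives > d_max:
                return "below threshold", i
    return "meets threshold", min(len(received), n_max)
```

Because sampling stops early in clearly failing villages, the expected number of interviews per village is below n_max, which is one source of the cost advantage over household surveys reported above.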
Effects of Laser Energies on Wear and Tensile Properties of Biomimetic 7075 Aluminum Alloy
NASA Astrophysics Data System (ADS)
Yuan, Yuhuan; Zhang, Peng; Zhao, Guoping; Gao, Yang; Tao, Lixi; Chen, Heng; Zhang, Jianlong; Zhou, Hong
2018-03-01
Inspired by the non-smooth surfaces of certain animals, biomimetic coupling units of various sizes, microstructures, and hardnesses were prepared on the surface of 7075 aluminum alloy. Experimental studies were then conducted to investigate the wear and tensile properties at various laser energy inputs. The results demonstrated that the non-smooth surface with biomimetic coupling units had a positive effect on both the wear resistance and the tensile properties of 7075 aluminum alloy. In addition, the sample with the unit fabricated at a laser energy of 420.1 J/cm² exhibited the most significant improvement in wear and tensile properties, owing to its minimum grain size and highest microhardness. The weight loss of this sample was one-third that of the untreated one, and its yield strength, ultimate tensile strength, and elongation improved by 20%, 20%, and 34%, respectively. The mechanisms of the improvements in wear and tensile properties were also analyzed.
A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts
2015-04-30
fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC , as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values
Rakhshan, Vahid
2013-10-01
No meta-analyses or systematic reviews have evaluated the numerous potential biasing factors contributing to the controversial results on congenitally missing teeth (CMT). We aimed to perform a comprehensive meta-analysis and systematic review on this subject. A thorough search was performed from September 2012 to April 2013 to find the available literature on CMT prevalence. Besides qualitatively discussing the literature, the meta-sample homogeneity, publication bias, and the effects of sample type, sample size, minimum and maximum ages of included subjects, gender imbalance, and scientific credit of the publishing journals on the reported CMT prevalence were statistically analyzed using the Q-test, Egger regression, Spearman coefficient, Kruskal-Wallis test, Welch t test (α=0.05), and Mann-Whitney U test (α=0.016, α=0.007). A total of 111 reports were collected. Metadata were heterogeneous (P<0.001). There was no significant publication bias (Egger regression P=0.073). Prevalence rates differed across population types (Kruskal-Wallis P=0.001). Studies on orthodontic patients might report slightly (about 1%) higher prevalence (P=0.009, corrected α=0.016). Non-orthodontic dental patients showed a significant 2% decline (Mann-Whitney U, P=0.007). Enrolling more males might significantly reduce the observed prevalence (Spearman ρ=-0.407, P=0.001). Studies with higher minimum subject ages consistently reported slightly lower CMT prevalence. This decline reached about -1.6% around ages 10 to 13 and was significant for ages 10 to 12 (Welch t test P<0.05). There appears to be no limit on the maximum age (Welch t test P>0.2). Sample sizes correlated negatively with CMT prevalence (ρ=-0.250, P=0.009). It could not be verified whether higher CMT rates have a better chance of being published (ρ=0.132, P=0.177). The definition of CMT should be unified, and samples should be sex-balanced.
Enrolling orthodontic and dental patients in similar proportions might be preferable to sampling from either group alone. Sampling children over 12 years seems advantageous. Two or more observers should examine larger samples to reduce the false-negative error associated with such samples.
A gold nanoparticle-based immunochromatographic assay: the influence of nanoparticulate size.
Lou, Sha; Ye, Jia-ying; Li, Ke-qiang; Wu, Aiguo
2012-03-07
Four gold nanoparticle sizes (14 nm, 16 nm, 35 nm and 38 nm) were prepared and conjugated to an antibody for a gold nanoparticle-based immunochromatographic assay, which has many applications in both basic research and clinical diagnosis. This study focuses on the conjugation efficiency of the antibody with the different sized gold nanoparticles. The effects of factors such as pH and antibody concentration were quantitatively assessed by spectroscopic methods after adding 1 wt% NaCl, which induces aggregation of unprotected gold nanoparticles. Different sized gold nanoparticles were found to have different conjugation efficiencies at different pH values and antibody concentrations. Among the four sizes, the 16 nm gold nanoparticles required the lowest antibody concentration to avoid aggregation, but were less sensitive in detecting the real sample than the 38 nm gold nanoparticles. Consequently, each size of gold nanoparticle should be labeled with antibody at its optimal pH and antibody concentration. These findings will aid the application of antibody-labeled gold nanoparticles in fields such as clinical diagnosis and environmental analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbold, E. B.; Walton, O.; Homel, M. A.
2015-10-26
This document serves as a final report for a small effort in which several improvements were added to the LLNL code GEODYN-L to develop Discrete Element Method (DEM) algorithms coupled to Lagrangian Finite Element (FE) solvers for investigating powder-bed formation problems in additive manufacturing. The results from these simulations will be assessed for inclusion as the initial conditions for Direct Metal Laser Sintering (DMLS) simulations performed with ALE3D. The algorithms were written and run on parallel computing platforms at LLNL. The total funding level was 3-4 weeks of an FTE, split between two staff scientists and one post-doc. The DEM simulations emulated, as far as was feasible, the physical process of depositing a new layer of powder over a bed of existing powder. The simulations used truncated size distributions spanning realistic size ranges, with a distribution profile consistent with a realistic sample set. A minimum simulation sample size on the order of 40 particles square by 10 particles deep was used in these scoping studies to evaluate potential size-segregation variation with distance displaced in front of a screed blade. A reasonable method for evaluating the problem was developed and validated, and several simulations were performed to show the viability of the approach. Future investigations will focus on simulations of varying powder particle sizing and screed geometries.
Minimum length from quantum mechanics and classical general relativity.
Calmet, Xavier; Graesser, Michael; Hsu, Stephen D H
2004-11-19
We derive fundamental limits on measurements of position, arising from quantum mechanics and classical general relativity. First, we show that any primitive probe or target used in an experiment must be larger than the Planck length lP. This suggests a Planck-size minimum ball of uncertainty in any measurement. Next, we study interferometers (such as LIGO) whose precision is much finer than the size of any individual components and hence are not obviously limited by the minimum ball. Nevertheless, we deduce a fundamental limit on their accuracy of order lP. Our results imply a device independent limit on possible position measurements.
Thompson, William L.; Miller, Amy E.; Mortenson, Dorothy C.; Woodward, Andrea
2011-01-01
Monitoring natural resources in Alaskan national parks is challenging because of their remoteness, limited accessibility, and high sampling costs. We describe an iterative, three-phase process for developing sampling designs, based on our efforts to establish a vegetation monitoring program in southwest Alaska. In the first phase, we defined a sampling frame based on land ownership and specific vegetated habitats within the park boundaries, and used Path Distance analysis tools to create a GIS layer delineating the portions of each park that could feasibly be accessed for ground sampling. In the second phase, we used simulations based on landcover maps to identify the size and configuration of the ground sampling units (single plots or grids of plots) and to refine the areas to be potentially sampled. In the third phase, we used a second set of simulations to estimate the sample size and sampling frequency required to have a reasonable chance of detecting a minimum trend in vegetation cover for a specified time period and level of statistical confidence. Results of the first set of simulations indicated that a spatially balanced random sample of single plots from the most common landcover types yielded the most efficient sampling scheme. Results of the second set of simulations were compared with field data and indicated that we should be able to detect at least a 25% change in vegetation attributes over 31 years by sampling 8 or more plots per sampling year, every five years, in focal landcover types. This approach would be especially useful in situations where ground sampling is restricted by access.
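The third-phase question — how many plots, sampled how often, to detect a minimum change — can be approximated with a small simulation. A minimal sketch under assumed numbers (mean cover 40%, plot-to-plot SD 8, a 25% change between two sampling occasions, two-sample z test); the published analysis used landcover-map-based simulations, not this toy:

```python
import random
from math import sqrt

def detection_power(n_plots, mean0, rel_change, sd, sims=4000, seed=3):
    """Fraction of simulated surveys in which a change in mean cover between
    two sampling occasions is detected (two-sample z test, alpha = 0.05)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        first = [rng.gauss(mean0, sd) for _ in range(n_plots)]
        last = [rng.gauss(mean0 * (1.0 + rel_change), sd) for _ in range(n_plots)]
        z = (sum(last)/n_plots - sum(first)/n_plots) / (sd * sqrt(2.0/n_plots))
        hits += abs(z) > 1.96
    return hits / sims

# 8 plots per occasion, 25% change in mean cover (40 -> 50), SD 8 (assumed).
print(detection_power(8, 40.0, 0.25, 8.0))
```

Re-running with different plot counts and SDs shows how quickly power erodes as between-plot variability grows, which is why the simulations had to be repeated per landcover type.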
Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains
NASA Astrophysics Data System (ADS)
Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.
2013-12-01
Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include indirect methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The resulting biomass estimates can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The sampling design comprised four 300-m transects, with clip harvest plots spaced every 50 m and LAI sub-transects every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and measured 0.1 m by 2 m.
We conducted regression analyses with LAI and clip harvest data to determine whether LAI can be used as a suitable proxy for aboveground standing biomass. We also compared optimal sample sizes derived from LAI data and from clip harvest data for two clip harvest areas (0.1 m by 1 m vs. 0.1 m by 2 m). Sample sizes were calculated to estimate the mean to within a standardized level of uncertainty that will guide sampling effort across all vegetation types (i.e. within ±10% with 95% confidence). Finally, we employed a semivariogram approach to determine optimal sample size and spacing.
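The "±10% with 95% confidence" criterion translates into a standard sample size formula under a normal approximation. A sketch in which the coefficient-of-variation values are illustrative, not NEON's:

```python
from math import ceil

def required_sample_size(cv, rel_error=0.10, z=1.96):
    """Smallest n such that the sample mean falls within +/- rel_error of
    the true mean with ~95% confidence; cv = sd / mean (normal approx.)."""
    return ceil((z * cv / rel_error) ** 2)

print(required_sample_size(0.50))  # biomass CV of 50% (illustrative)
print(required_sample_size(0.25))  # a less variable attribute
```

Sample size grows with the square of the CV, which is why the comparison of clip-harvest plot sizes matters: the plot size with the lower between-plot CV needs far fewer samples to meet the same uncertainty target.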
Burgess, George H.; Bruce, Barry D.; Cailliet, Gregor M.; Goldman, Kenneth J.; Grubbs, R. Dean; Lowe, Christopher G.; MacNeil, M. Aaron; Mollet, Henry F.; Weng, Kevin C.; O'Sullivan, John B.
2014-01-01
White sharks are highly migratory and segregate by sex, age and size. Unlike marine mammals, they neither surface to breathe nor frequent haul-out sites, hindering generation of abundance data required to estimate population size. A recent tag-recapture study used photographic identifications of white sharks at two aggregation sites to estimate abundance in “central California” at 219 mature and sub-adult individuals. They concluded this represented approximately one-half of the total abundance of mature and sub-adult sharks in the entire eastern North Pacific Ocean (ENP). This low estimate generated great concern within the conservation community, prompting petitions for governmental endangered species designations. We critically examine that study and find violations of model assumptions that, when considered in total, lead to population underestimates. We also use a Bayesian mixture model to demonstrate that the inclusion of transient sharks, characteristic of white shark aggregation sites, would substantially increase abundance estimates for the adults and sub-adults in the surveyed sub-population. Using a dataset obtained from the same sampling locations and widely accepted demographic methodology, our analysis indicates a minimum all-life stages population size of >2000 individuals in the California subpopulation is required to account for the number and size range of individual sharks observed at the two sampled sites. Even accounting for methodological and conceptual biases, an extrapolation of these data to estimate the white shark population size throughout the ENP is inappropriate. The true ENP white shark population size is likely several-fold greater as both our study and the original published estimate exclude non-aggregating sharks and those that independently aggregate at other important ENP sites. 
Accurately estimating the central California and ENP white shark population size requires methodologies that account for biases introduced by sampling a limited number of sites and that account for all life history stages across the species' range of habitats. PMID:24932483
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown (H-B) empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in the H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to an acceptable level of uncertainty in the estimates. A comparison of our results with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates from small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements.
Accurate and fast multiple-testing correction in eQTL studies.
Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm
2015-06-04
In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For multiple-testing correction, a permutation test is widely used. Because of the growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly accounts for the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying small-sample correction techniques, our method achieves high accuracy in both small and large studies. We show that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
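The core trick — replacing permutations with draws from a multivariate normal whose correlation matrix reflects LD — can be illustrated for the simplest case of two variants. A toy sketch, not the authors' implementation (which handles many variants and adds a small-sample correction); the LD r values are made up:

```python
import random
from math import erf, sqrt

def z_to_p(z):
    """Two-sided p value of a standard normal z score."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

def gene_level_p(p_min_obs, ld_r, n_draws=20000, seed=1):
    """P(min p over two variants in LD <= p_min_obs) under the null,
    estimated by sampling correlated z scores instead of permuting."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        z1 = rng.gauss(0.0, 1.0)
        z2 = ld_r * z1 + sqrt(1.0 - ld_r**2) * rng.gauss(0.0, 1.0)
        hits += min(z_to_p(z1), z_to_p(z2)) <= p_min_obs
    return hits / n_draws

print(gene_level_p(0.05, 0.99))  # near-perfect LD: behaves like one test
print(gene_level_p(0.05, 0.0))   # independent variants: ~1 - 0.95**2
```

The two extremes show why LD matters: naive Bonferroni over correlated variants would be far too conservative, while ignoring multiplicity would be anti-conservative. Sampling from the MVN interpolates correctly between them, and its cost does not depend on the study's sample size.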
Quantitative Research on the Minimum Wage
ERIC Educational Resources Information Center
Goldfarb, Robert S.
1975-01-01
The article reviews recent research examining the impact of minimum wage requirements on the size and distribution of teenage employment and earnings. The studies measure income distribution, employment levels and effect on unemployment. (MW)
Feldspathic Lunar Meteorite Graves Nunataks 06157, a Magnesian Piece of the Lunar Highlands Crust
NASA Technical Reports Server (NTRS)
Zeigler, Ryan A.; Korotev, R. L.
2012-01-01
To date, 49 feldspathic lunar meteorites (FLMs) have been recovered, likely representing a minimum of 35 different sample locations in the lunar highlands. The compositional variability among FLMs far exceeds the variability observed among highland samples in the Apollo and Luna sample suites. Here we will discuss in detail one of the compositional end members of the FLM suite, Graves Nunataks (GRA) 06157, which was collected by the 2006-2007 ANSMET field team. At 0.79 g, GRA 06157 is the smallest lunar meteorite so far recovered. Despite its small size, its highly feldspathic and highly magnesian composition are intriguing. Although preliminary bulk compositions have been reported, thus far no petrographic descriptions are in the literature. Here we expand upon the bulk compositional data, including major-element compositions, and provide a detailed petrographic description of GRA 06157.
[Table fragment: genotyping sample submission requirements — minimum number of experimental samples, DNA volume (µl), genomic DNA concentration (ng/µl), and low-input DNA volume (µl). An additional cost applies for the low-input option, and whole-genome-amplified (WGA) samples carry a higher non-random missing data rate.]
The determination of specific forms of aluminum in natural water
Barnes, R.B.
1975-01-01
A procedure for the analysis and pretreatment of natural-water samples to determine very low concentrations of Al is described which distinguishes the rapidly reacting equilibrium species from the metastable or slowly reacting macro ions and colloidal suspended material. Aluminum is complexed with 8-hydroxyquinoline (oxine), the pH is adjusted to 8.3 to minimize interferences, and the aluminum oxinate is extracted with methyl isobutyl ketone (MIBK) prior to analysis by atomic absorption. To determine equilibrium species only, the contact time between sample and 8-hydroxyquinoline is minimized. The Al may be extracted at the sample site with a minimum of equipment and the MIBK extract stored for several weeks prior to atomic absorption analysis. Data obtained from analyses of 39 natural groundwater samples indicate that filtration through a 0.1-µm pore size filter is not an adequate means of removing all insoluble and metastable Al species present, and extraction of Al immediately after collection is necessary if only dissolved and readily reactive species are to be determined. An average of 63% of the Al present in natural waters that had been filtered through 0.1-µm pore size filters was in the form of monomeric ions. The total Al concentration, which includes all forms that passed through a 0.1-µm pore size filter, ranged from 2 to 70 µg/l. The concentration of Al in the form of monomeric ions ranged from below detection to 57 µg/l. Most of the natural water samples used in this study were collected from thermal springs and oil wells. © 1975.
Code of Federal Regulations, 2012 CFR
2012-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2011 CFR
2011-07-01
... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...
Code of Federal Regulations, 2014 CFR
2014-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Code of Federal Regulations, 2013 CFR
2013-07-01
... parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test..., appendix A-4). Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour minimum sample... (1 hour minimum sample time per run) Performance test (Method 6 or 6c of appendix A of this part) a...
Estimation of the simple correlation coefficient.
Shieh, Gwowen
2010-11-01
This article investigates some unfamiliar properties of the Pearson product-moment correlation coefficient for estimating the simple correlation coefficient. Although Pearson's r is biased, except in limited situations, and a minimum variance unbiased estimator has been proposed in the literature, researchers routinely employ the sample correlation coefficient in practical applications because of its simplicity and popularity. To support this practice, this study examines the mean squared errors of r and several prominent alternative formulas. The results reveal specific situations in which the sample correlation coefficient performs better than the unbiased and nearly unbiased estimators, supporting the recommendation of r as an effect size index for the strength of linear association between two variables. Related issues in estimating the squared simple correlation coefficient are also considered.
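The bias in question is easy to reproduce by simulation. A minimal sketch comparing raw r with the first-order Olkin-Pratt-style adjustment r(1 + (1 − r²)/(2(n − 3))); the true correlation ρ = 0.5 and the sample size n = 10 are arbitrary choices for illustration:

```python
import random

def pearson_r(xs, ys):
    """Sample Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    sxy = sum((x - mx)*(y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx)**2 for x in xs)
    syy = sum((y - my)**2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(2)
rho, n = 0.5, 10  # true correlation, small sample (assumed values)
raw, adjusted = [], []
for _ in range(20000):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    ys = [rho*x + (1.0 - rho**2)**0.5 * random.gauss(0.0, 1.0) for x in xs]
    r = pearson_r(xs, ys)
    raw.append(r)
    adjusted.append(r * (1.0 + (1.0 - r*r) / (2.0*(n - 3))))

print(sum(raw)/len(raw), sum(adjusted)/len(adjusted))  # raw r underestimates rho
```

For this configuration the average of the raw r falls below the true ρ, and the adjusted estimator lands closer; whether the adjustment also wins in mean squared error is exactly the kind of question the article examines.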
Yu, Jing; Zhu, Yi Feng; Dai, Mei Xia; Lin, Xia; Mao, Shuo Qian
2017-05-18
Zooplankton samples were collected seasonally at 10 stations using plankton nets No. 1 (505 µm), No. 2 (160 µm) and No. 3 (77 µm), and the corresponding abundance data were obtained. Based on individual zooplankton biovolume, size groups were classified to test the spatiotemporal changes of both Sheldon and normalized biovolume size spectra in the thermal discharge waters near the Guohua Power Plant, so as to explore the effects of temperature increase on zooplankton size spectra. The results showed that individual zooplankton biovolume ranged from 0.00012 to 127.0 mm³·ind⁻¹, which could be divided into 21 size groups with corresponding logarithmic ranges from -13.06 to 6.99. According to the Sheldon size spectra, the predominant taxa forming the main peaks in different months were copepodite larvae, Centropages mcmurrichi, Calanus sinicus, fish larvae, Sagitta bedoti, Sagitta nagae and Pleurobrachia globosa, while minor peaks mostly consisted of smaller individuals such as larvae, Cyclops and Paracalanus aculeatus. In the different warming sections, copepodite larvae, fish eggs and Cyclops were largely unaffected by the temperature increase, whereas macrozooplankton such as S. bedoti, S. nagae, P. globosa, C. sinicus and Beroe cucumis showed an obvious tendency to avoid the outfall of the power plant. Based on the normalized size spectra, the intercepts from low to high occurred in November, February, May and August, respectively. The minimum slope was found in February, and similarly larger slopes were observed in May and August. These results indicated that the proportion of small zooplankton was highest in February, while the proportions of meso- and macro-zooplankton were relatively high in May and August. Among the sections, the slope was smallest in the 0.2 km section and increased with distance from the outfall.
The results clearly demonstrated that the closer to the outfall of the power plant, the smaller the zooplankton. Overall, the average intercept of the normalized size spectrum in Xiangshan Bay was 4.68 and the slope was -0.655.
7 CFR 51.3198 - Size classifications.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...
7 CFR 51.3198 - Size classifications.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...
7 CFR 51.3198 - Size classifications.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Size classifications. 51.3198 Section 51.3198... STANDARDS) United States Standards for Grades of Bermuda-Granex-Grano Type Onions Size Classifications § 51.3198 Size classifications. Size shall be specified in connection with the grade in terms of minimum...
Optimal design of porous structures for the fastest liquid absorption.
Shou, Dahua; Ye, Lin; Fan, Jintu; Fu, Kunkun
2014-01-14
Porous materials engineered for rapid liquid absorption are useful in many applications, including oil recovery, spacecraft life-support systems, moisture management fabrics, medical wound dressings, and microfluidic devices. Dynamic absorption in capillary tubes and porous media is driven by the capillary pressure, which is inversely proportional to the pore size. On the other hand, the permeability of porous materials scales with the square of the pore size. The dynamic competition between these two superimposed mechanisms for liquid absorption through a heterogeneous porous structure may lead to an overall minimum absorption time. In this work, we explore liquid absorption in two different heterogeneous porous structures [three-dimensional (3D) circular tubes and porous layers], each composed of two sections with variations in radius/porosity and height. The absorption time to fill the voids of the porous constructs is expressed as a function of the radius/porosity and height of the local sections, and the absorption process does not follow the classic Washburn law. For a given height and void volume, two-section structures with a negative gradient of radius/porosity along the absorption direction are shown to have faster absorption than control samples with uniform radius/porosity. In particular, optimal structural parameters, including radius/porosity and height, are found that yield the minimum absorption time. Liquid absorption in the optimized porous structure is up to 38% faster than in a control sample. The results can be used a priori for the design of porous structures with excellent liquid management properties in various fields.
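The competition described here can be made concrete for a two-section circular tube. Under standard assumptions (capillary pressure set by the radius of the section holding the meniscus, Poiseuille resistance of the filled sections in series), the fill time has a closed form; the derivation and the specific radii below are illustrative, not the paper's cases:

```python
def fill_time(r1, r2, h=1.0, mu=1.0, gamma=1.0, cos_theta=1.0):
    """Time to fill a two-section circular tube (radius r1 then r2, each of
    height h) by capillary rise, with Lucas-Washburn dynamics in section 1
    and series Poiseuille resistance while the meniscus is in section 2.
    All quantities in consistent (dimensionless) units."""
    t1 = 2.0 * mu * h * h / (gamma * cos_theta * r1)
    t2 = (4.0 * mu * (r2**3) * h * h / (gamma * cos_theta)
          * (1.0 / r1**4 + 0.5 / r2**4))
    return t1 + t2

# Same void volume (r1^2 + r2^2 fixed): a negative radius gradient wins.
uniform = fill_time(1.0, 1.0)
graded = fill_time(1.2, (2.0 - 1.2**2) ** 0.5)   # r1 > r2
print(uniform, graded)
```

With r1 = r2 the expression collapses to the classic Lucas-Washburn time for a uniform tube of height 2h (a useful sanity check), and a tube that narrows along the absorption direction at fixed r1² + r2² fills noticeably faster, echoing the paper's negative-gradient result.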
Marques, J M C; Pais, A A C C; Abreu, P E
2012-02-05
The efficiency of the so-called big-bang method for the optimization of atomic clusters is analysed in detail for Morse pair potentials with different ranges; here, we have used Morse potentials with four different ranges, from long-ranged (ρ = 3) to short-ranged (ρ = 14) interactions. Specifically, we study the efficacy of the method in discovering low-energy structures, including the putative global minimum, as a function of the potential range and the cluster size. A new global minimum structure for the long-ranged (ρ = 3) Morse potential at cluster size n = 240 is reported. The present results are useful for assessing the maximum cluster size, for each type of interaction, at which the global minimum can be discovered with a limited number of big-bang trials. Copyright © 2011 Wiley Periodicals, Inc.
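The big-bang idea — start from a randomly "compressed" cluster and let a local optimizer relax it as it "explodes" — can be sketched in a few lines. A toy version with plain steepest descent standing in for the paper's local optimizer, ε = r_e = 1 assumed, and a trimer small enough that the global minimum (an equilateral triangle, energy −3ε) is always found:

```python
import random
from math import exp, sqrt

RHO = 6.0  # Morse range parameter (epsilon = r_e = 1 assumed)

def energy_and_grad(pos):
    """Total Morse energy sum over pairs of u*(u - 2), u = exp(RHO*(1 - r)),
    and its gradient with respect to the atomic coordinates."""
    n, e = len(pos), 0.0
    grad = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = [pos[i][k] - pos[j][k] for k in range(3)]
            r = sqrt(sum(c * c for c in d))
            u = exp(RHO * (1.0 - r))
            e += u * (u - 2.0)
            dvdr = -2.0 * RHO * u * (u - 1.0)
            for k in range(3):
                g = dvdr * d[k] / r
                grad[i][k] += g
                grad[j][k] -= g
    return e, grad

def big_bang_trial(n_atoms, seed, iters=10000):
    """One trial: atoms packed into a tiny box, then relaxed by
    steepest descent with a capped step length."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 0.3) for _ in range(3)] for _ in range(n_atoms)]
    for _ in range(iters):
        e, grad = energy_and_grad(pos)
        gnorm = sqrt(sum(c * c for row in grad for c in row))
        if gnorm < 1e-8:
            break
        step = min(0.002, 0.05 / gnorm)  # cap the initial 'explosion'
        for i in range(n_atoms):
            for k in range(3):
                pos[i][k] -= step * grad[i][k]
    return energy_and_grad(pos)[0]

best = min(big_bang_trial(3, s) for s in range(3))
print(best)  # approaches -3 for the Morse trimer
```

For larger clusters the energy landscape develops many minima, and the fraction of big-bang trials that land in the global one falls off, which is the size/range dependence the paper quantifies.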
NASA Astrophysics Data System (ADS)
Wu, Ming; Cheng, Zhou; Wu, Jianfeng; Wu, Jichun
2017-06-01
Representative elementary volume (REV) is important for determining the properties of porous media and of migrating contaminants, especially dense nonaqueous phase liquids (DNAPLs), in the subsurface environment. In this study, an experiment on the long-term migration of a commonly used DNAPL, perchloroethylene (PCE), was performed in a two-dimensional (2D) sandbox in which several system variables, including porosity, PCE saturation (S_oil) and PCE-water interfacial area (A_OW), were accurately quantified by light transmission techniques over the entire PCE migration process. The REVs for these system variables were estimated using a relative gradient error criterion (ε_gi), and results indicate that the frequency of the minimum porosity-REV size closely follows a Gaussian distribution in the range of 2.0 mm to 8.0 mm. As the PCE infiltration process proceeds, the frequency and cumulative frequency of both the minimum S_oil-REV and minimum A_OW-REV sizes change from irregular and random to regular and smooth. In the redistribution process, the cumulative frequency of the minimum S_oil-REV size shows a linear positive correlation, while the frequency of the minimum A_OW-REV size tends to a Gaussian distribution in the range of 2.0-7.0 mm with a secondary peak at 13.0-14.0 mm. This study facilitates the quantification of REVs for material and fluid properties in a rapid, handy and economical manner, which helps improve our understanding of porous media and DNAPL properties at the micro scale, as well as the accuracy of DNAPL contamination modeling at the field scale.
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than under tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ~50 mg SOC/cm² or 5 Mg SOC/ha, which is ~10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ~2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
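The MDD concept described above can be sketched with a simple normal approximation (the study's exact calculation uses t distributions and its measured ANOVA error variance; the function name, defaults, and the values below are illustrative assumptions, not the paper's numbers):

```python
from statistics import NormalDist

def minimum_detectable_difference(s, n, alpha=0.05, power=0.80):
    """Two-sample MDD (normal approximation): the smallest true
    difference between two treatment means detectable at significance
    level alpha with the given power, for standard deviation s and
    n samples per group."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * s * (2.0 / n) ** 0.5

# MDD shrinks only as 1/sqrt(n): halving the MDD needs ~4x the samples.
print(minimum_detectable_difference(s=10.0, n=20))
```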
Stehly, G.R.; Gingerich, W.H.
1999-01-01
A preliminary evaluation of efficacy and minimum toxic concentration of AQUI-S™, a fish anaesthetic/sedative, was conducted in two size classes of six fish species important to US public aquaculture (bluegill, channel catfish, lake trout, rainbow trout, walleye and yellow perch). In addition, efficacy and minimum toxic concentration were determined in juvenile-young adult (fish aged 1 year or older) rainbow trout acclimated to water at 7 °C, 12 °C and 17 °C. Testing concentrations were based on determinations made with range-finding studies for both efficacy and minimum toxic concentration. Most of the tested juvenile-young adult fish species were induced in 3 min or less at a nominal AQUI-S™ concentration of 20 mg L⁻¹. In juvenile-young adult fish, the minimum toxic concentration was at least 2.5 times the selected efficacious concentration. Three of five species of fry-fingerlings (1.25-12.5 cm in length and <1 year old) were induced in ≤4.1 min at a nominal concentration of 20 mg L⁻¹ AQUI-S™, with the other two species requiring nominal concentrations of 25 and 35 mg L⁻¹ for similar induction times. Recovery times were ≤7.3 min for all species in the two size classes. In fry-fingerlings, the minimum toxic concentration was at least 1.4 times the selected efficacious concentration. There appeared to be little relationship between fish size and concentrations or times to induction, recovery times, or minimum toxic concentration. The times required for induction and recovery increased in rainbow trout as the acclimation temperature was reduced.
NASA Astrophysics Data System (ADS)
Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine
2018-05-01
The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
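The margin-of-error criterion for choosing the minimum number of realizations can be sketched as follows (a normal-approximation sketch; the function name, defaults, and the relative-error target are illustrative assumptions, not the paper's exact procedure):

```python
from math import ceil
from statistics import NormalDist

def min_realizations(s, rel_error, mean, confidence=0.95):
    """Minimum number of RVE realizations so that the confidence-
    interval half-width on the mean property stays within rel_error
    of the mean (normal approximation; s = sample std deviation)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    e = rel_error * mean          # allowed absolute margin of error
    return ceil((z * s / e) ** 2)

# A tighter precision target drives the required realization count up fast.
print(min_realizations(s=0.05, rel_error=0.01, mean=1.0))
```

Higher-variance microstructures (e.g., clustered or high-VF fillers) enter through a larger s, which is consistent with the paper's finding that they need more realizations at a given volume size.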
Continuous Time Level Crossing Sampling ADC for Bio-Potential Recording Systems
Tang, Wei; Osman, Ahmad; Kim, Dongsoo; Goldstein, Brian; Huang, Chenxi; Martini, Berin; Pieribone, Vincent A.
2013-01-01
In this paper we present a fixed-window level-crossing sampling analog-to-digital converter (ADC) for bio-potential recording sensors. This is the first proposed and fully implemented fixed-window level-crossing ADC without local DACs and clocks. The circuit is designed to reduce data size, power, and silicon area in future wireless neurophysiological sensor systems. We built a testing system to measure bio-potential signals and used it to evaluate the performance of the circuit. The bio-potential amplifier offers a gain of 53 dB within a bandwidth of 200 Hz-20 kHz. The input-referred rms noise is 2.8 µV. In the asynchronous level-crossing ADC, the minimum delta resolution is 4 mV. The input signal frequency of the ADC is up to 5 kHz. The system was fabricated using the AMI 0.5 µm CMOS process. The chip size is 1.5 mm by 1.5 mm. The power consumption of the 4-channel system from a 3.3 V supply is 118.8 µW in the static state and 501.6 µW at a 240 kS/s sampling rate. The conversion efficiency is 1.6 nJ/conversion. PMID:24163640
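The fixed-window level-crossing principle can be illustrated in software (a behavioral sketch only, not the chip's analog circuit; the signal values and delta threshold below are arbitrary):

```python
def level_crossing_sample(signal, delta):
    """Fixed-window level-crossing sampling: emit (index, value) only
    when the input leaves the +/-delta window around the last emitted
    sample, then re-center the window. Output is data-driven, with no
    clock, which is what reduces data size for slow bio-potentials."""
    samples = [(0, signal[0])]
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= delta:
            samples.append((i, x))
            last = x
    return samples

# A slowly varying signal yields far fewer samples than uniform sampling.
sig = [0.0, 0.001, 0.002, 0.006, 0.007, 0.012, 0.013]
print(level_crossing_sample(sig, delta=0.004))
```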
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
NASA Astrophysics Data System (ADS)
Wu, A. M.; Nater, E. A.; Dalzell, B. J.; Perry, C. H.
2014-12-01
The USDA Forest Service's Forest Inventory and Analysis (FIA) program is a national effort assessing current forest resources to ensure sustainable management practices, to assist planning activities, and to report critical status and trends. For example, estimates of carbon stocks and stock change in FIA are reported as the official United States submission to the United Nations Framework Convention on Climate Change. While the main effort in FIA has been focused on aboveground biomass, soil is a critical component of this system. FIA sampled forest soils in the early 2000s and remeasurement is now underway. However, soil sampling is repeated on a 10-year interval (or longer), and it is uncertain what magnitude of change in soil organic carbon (SOC) may be detectable with the current sampling protocol. We aim to identify the sensitivity and variability of SOC in the FIA database, and to determine the amount of SOC change that can be detected with the current sampling scheme. For this analysis, we attempt to answer the following questions: 1) What is the sensitivity (power) of SOC data in the current FIA database? 2) How does the minimum detectable change in forest SOC respond to changes in sampling intervals and/or sample point density? Soil samples in the FIA database represent 0-10 cm and 10-20 cm depth increments with a 10-year sampling interval. We are investigating the variability of SOC and its change over time for composite soil data in each FIA region (Pacific Northwest, Interior West, Northern, and Southern). To guide future sampling efforts, we are employing statistical power analysis to examine the minimum detectable change in SOC storage. We are also investigating the sensitivity of SOC storage changes under various scenarios of sample size and/or sample frequency. This research will inform the design of future FIA soil sampling schemes and improve the information available to international policy makers, university and industry partners, and the public.
Electromagnetic and Microwave-Absorbing Properties of Plate-Like Nd-Ce-Fe Powder
NASA Astrophysics Data System (ADS)
Qiao, Ziqiang; Pan, Shunkang; Xiong, Jilei; Cheng, Lichun; Lin, Peihao; Luo, Jialiang
2017-01-01
Plate-like CexNd2-xFe17 (x = 0.0, 0.1, 0.2, 0.3, 0.4) powders have been synthesized by an arc melting and high-energy ball milling method. The structure of the Nd-Ce-Fe powders was investigated by x-ray diffraction analysis. Their morphology and particle size distribution were evaluated by scanning electron microscopy and laser particle analysis. The saturation magnetization and electromagnetic parameters of the powders were characterized using vibrating-sample magnetometry and vector network analysis, respectively. The results reveal that the CexNd2-xFe17 powders consisted of a Nd2Fe17 single phase at all Ce contents. The particle size and saturation magnetization decreased with increasing Ce content. The resonant frequencies of ε″ and μ″ moved towards lower frequency with increasing Ce concentration. The minimum reflection loss value decreased as the Ce content was increased. The minimum reflection loss and absorption peak frequency of Ce0.2Nd1.8Fe17 with a coating thickness of 1.8 mm were -22.5 dB and 7 GHz, respectively. Increasing the values of the complex permittivity and permeability could result in materials with good microwave absorption properties.
Spatiotemporal variability of suspended sediment particle size in a mixed-land-use watershed.
Kellner, Elliott; Hubbart, Jason A
2018-02-15
Given existing knowledge gaps, there is a need for research that quantitatively characterizes spatiotemporal variation of suspended sediment particle size distribution (PSD) in contemporary watersheds. A five-year study was conducted in a representative watershed of the central United States utilizing a nested-scale experimental watershed study design, comprising five gauging sites partitioning the catchment into five sub-watersheds. Streamwater grab samples were collected four times per week, at each gauging site, for the duration of the study period (Oct. 2009-Feb. 2014). Samples were analyzed using laser particle diffraction. Significantly different (p<0.05) suspended sediment PSDs were observed at monitoring sites throughout the course of the study. For example, results indicated greater proportions of silt at site #5 (65%), relative to other sites (41, 32, 29, and 43%, for sites #1-#4, respectively). Likewise, results showed greater proportions of sand at sites #2 and #3 (66 and 68%, respectively), relative to other sites (57, 55, and 34%, for sites #1, #4, and #5, respectively). PSD spatial variability was not fully explained by hydroclimate or sub-watershed land use/land cover characteristics. Rather, results were strengthened by consideration of surficial geology (e.g. supply-controlled spatial variation of particle size). PSD displayed consistent seasonality during the study, characterized by peaks in the proportion of sand (and aggregates) during the winter (i.e. 70-90%) and minima during the summer (i.e. 12-38%), and peaks in the proportion of silt particles in the summer (i.e. 61-88%) and minima in the winter (i.e. 10-23%). Likely explanations of results include seasonal streamflow differences. Results comprise distinct observations of spatiotemporal variation of PSD, thereby improving understanding of lotic suspended sediment regimes and advancing future management practices in mixed-land-use watersheds. Copyright © 2017 Elsevier B.V. All rights reserved.
46 CFR 76.10-90 - Installations contracted for prior to May 26, 1965.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Not over | Minimum number of pumps | Minimum hose and hydrant size (inches) | Nozzle orifice size (inches) | Length of hose (feet)
Over 100, not over 4,000 | 2 | 1 1/2 ¹ | 5/8 ¹ | 50
Over 4,000 | 3 | 1 1/2 ¹ | 5/8 ¹ | 50
¹ May use 50 feet of 2 1/2-inch hose with 7/8-inch nozzles for exterior stations. May use 75 feet of 1 1/2-inch hose with 5/8-inch...
ZERODUR - bending strength: review of achievements
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2017-08-01
Increased demand for using the glass ceramic ZERODUR® under high mechanical loads called for strength data based on larger statistical samples. Design calculations for a failure probability target below 1:100 000 cannot be made reliable with parameters derived from 20-specimen samples. The data now available for a variety of surface conditions, ground with different grain sizes and acid etched for full micro-crack removal, allow stresses four to ten times higher than before. The large sample revealed that breakage stresses of ground surfaces follow the three-parameter Weibull distribution instead of the two-parameter version. This is reasonable considering that the micro cracks of such surfaces have a maximum depth, which is reflected in the existence of a threshold breakage stress below which breakage probability is zero. This minimum strength allows calculating minimum lifetimes. Fatigue under load can be taken into account by using the stress corrosion coefficient for the actual environmental humidity. For fully etched surfaces Weibull statistics fails: the precondition of the Weibull distribution, the existence of one unique failure mechanism, is no longer given. ZERODUR® with fully etched surfaces, free from damage introduced after etching, easily endures 100 MPa tensile stress. The possibility of using ZERODUR® for combined high-precision and high-stress applications was confirmed by the successful launch and continuing operation of LISA Pathfinder, the precursor experiment for the eLISA gravitational wave antenna satellite array.
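The three-parameter Weibull distribution mentioned above differs from the two-parameter version by a threshold stress below which failure probability is exactly zero; a minimal sketch (the parameter values are illustrative, not ZERODUR® data):

```python
from math import exp

def weibull3_failure_prob(stress, threshold, scale, modulus):
    """Three-parameter Weibull failure probability: zero below the
    threshold stress (reflecting a finite maximum micro-crack depth),
    rising toward 1 as stress grows above it."""
    if stress <= threshold:
        return 0.0
    return 1.0 - exp(-(((stress - threshold) / scale) ** modulus))

# Below the threshold the predicted breakage probability is exactly zero,
# which is what permits a guaranteed minimum strength and lifetime.
print(weibull3_failure_prob(40.0, threshold=50.0, scale=30.0, modulus=5.0))
```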
Tunable microwave absorbing nano-material for X-band applications
NASA Astrophysics Data System (ADS)
Sadiq, Imran; Naseem, Shahzad; Ashiq, Muhammad Naeem; Khan, M. A.; Niaz, Shanawer; Rana, M. U.
2016-03-01
The effect of rare earth element substitution in Sr1.96RE0.04Co2Fe27.80Mn0.2O46 (RE = Ce, Gd, Nd, La and Sm) X-type hexagonal ferrites prepared by the sol-gel auto-combustion method was studied. The XRD and FTIR analyses show the single phase of the prepared material. The lattice constants a (Å) and c (Å) vary with the additives. The particle size measured by the Scherrer formula for all the samples lies in the range of 54-100 nm and is confirmed by TEM analysis. The average grain size measured by SEM analysis lies in the range of 0.672-1.01 μm for all the samples. The Gd-substituted ferrite has the highest coercivity (526.06 G) among all the samples, which could make it a good material for longitudinal recording media. The results also indicate that the Gd-substituted sample has a maximum reflection loss of -25.2 dB at 11.878 GHz and exhibits the best microwave absorption properties among all the substituted samples. Furthermore, the minimum of the reflection loss shifts towards lower and higher frequencies with the substitution of rare earth elements, which confirms that the microwave absorption properties can be tuned by substituting rare earth elements into pure ferrites. The peak value of the attenuation constant at higher frequency agrees well with the reflection loss data.
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance growing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23
NASA Technical Reports Server (NTRS)
Willson, Robert M.; Hathaway, David H.
2004-01-01
On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (±4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (±4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
Schumacher, E L; Owens, B D; Uyeno, T A; Clark, A J; Reece, J S
2017-08-01
This study tests for interspecific evidence of Heincke's law among hagfishes and advances the field of research on body size and depth of occurrence in fishes by including a phylogenetic correction and by examining depth in four ways: maximum depth, minimum depth, mean depth of recorded specimens and the average of maximum and minimum depths of occurrence. Results yield no evidence for Heincke's law in hagfishes, no phylogenetic signal for the depth at which species occur, but moderate to weak phylogenetic signal for body size, suggesting that phylogeny may play a role in determining body size in this group. © 2017 The Fisheries Society of the British Isles.
Smith, W.P.; Wiedenfeld, D.A.; Hanel, P.B.; Twedt, D.J.; Ford, R.P.; Cooper, R.J.; Smith, Winston Paul
1993-01-01
To quantify the efficacy of point count sampling in bottomland hardwood forests, we examined the influence of point count duration on corresponding estimates of the number of individuals and species recorded. To accomplish this we conducted a total of 82 point counts, 7 May-16 May 1992, distributed among three habitats (Wet, Mesic, Dry) in each of three regions within the lower Mississippi Alluvial Valley (MAV). Each point count consisted of recording the number of individual birds (all species) seen or heard during the initial three minutes and per each minute thereafter, for a period totaling ten minutes. In addition, we included 384 point counts recorded during an 8-week period in each of 3 years (1985-1987) among 56 randomly selected forest patches within the bottomlands of western Tennessee. Each of these point counts consisted of recording the number of individuals (excluding migrating species) during each of four 5-minute intervals, for a period totaling 20 minutes. To estimate minimum sample size, we determined sampling variation at each level (region, habitat, and locality) with the 82 point counts from the lower MAV and applied the procedures of Neter and Wasserman (1974:493; Applied linear statistical models). Neither the cumulative number of individuals nor the number of species per sampling interval attained an asymptote after 10 or 20 minutes of sampling. For western Tennessee bottomlands, total individual and species counts relative to point count duration were similar among years and comparable to the pattern observed throughout the lower MAV. Across the MAV, we recorded a total of 1,621 birds distributed among 52 species, with the majority (872/1,621) representing 8 species. More birds were recorded within 25-50 m than in either of the other distance categories. There was significant variation in numbers of individuals and species among point counts.
For both, significant differences between region and patch (nested within region) occurred; neither habitat nor the interaction between habitat and region was significant. For α = 0.05 and β = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total birds (MSE = 9.28) and species (MSE = 3.79), respectively; 25 percent of the mean could be achieved with 5 counts per factor level. Corresponding sample sizes required to detect differences for rarer species (e.g., Wood Thrush) were 500; for common species (e.g., Northern Cardinal) this same level of precision could be achieved with 100 counts.
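The dependence of minimum sample size on the desired detectable difference can be sketched with a two-mean normal approximation (a rough stand-in for, not a reproduction of, the exact Neter-Wasserman procedure used in the study; the function name and the example difference are illustrative):

```python
from math import ceil, sqrt
from statistics import NormalDist

def counts_per_level(mse, diff, alpha=0.05, beta=0.10):
    """Approximate point counts needed per factor level to detect a
    mean difference `diff` when the error variance is `mse`, at
    significance alpha and power 1 - beta (normal approximation)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(1 - beta)) * sqrt(mse) / diff) ** 2)

# Required sample size grows with the inverse square of the difference,
# which is why rare-species comparisons demanded hundreds of counts.
print(counts_per_level(mse=9.28, diff=3.0))
```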
1985-12-01
statistics, each of the α levels fall. The mirror image of this is to work with the percentiles, or the 1 - α levels. These then become the minimum... To be valid, the power would have to be close to the α-levels, and that is the case. The powers are not exactly equal to the α-levels, but that is... Information available increases with sample size. When α-levels are analyzed, for α = .01, the only reasonable power is 33... against the
Yi, Honghong; Hao, Jiming; Duan, Lei; Li, Xinghua; Guo, Xingming
2006-09-01
In this investigation, the collection efficiency of particulate emission control devices (PECDs), particulate matter (PM) emissions, and PM size distribution were determined experimentally at the inlet and outlet of PECDs at five coal-fired power plants. Different boilers, coals, and PECDs are used in these power plants. In situ measurement was performed with an electrical low-pressure impactor and a sampling system consisting of an isokinetic sampler probe, a precut cyclone, and a two-stage dilution system with a sample line to the instruments. The size distribution was measured over a range from 0.03 to 10 microm. Before and after all of the PECDs, the particle number size distributions display a bimodal distribution. The PM2.5 fraction emitted to the atmosphere includes a significant amount of mass from the coarse particle mode. The controlled and uncontrolled emission factors of total PM, inhalable PM (PM10), and fine PM (PM2.5) were obtained. Electrostatic precipitator (ESP) and baghouse total collection efficiencies are 96.38-99.89% and 99.94%, respectively. The minimum collection efficiency of both the ESP and the baghouse appears in the particle size range of 0.1-1 microm; in this size range, ESP and baghouse collection efficiencies are 85.79-98.6% and 99.54%, respectively. Real-time measurement shows that the mass and number concentration of PM10 are greatly affected by the operating conditions of the PECDs. The number of emitted particles increases with increasing boiler load because of higher combustion temperature. During test runs, the data reproducibility was satisfactory.
The Effect of Minimum Wages on Youth Employment in Canada: A Panel Study.
ERIC Educational Resources Information Center
Yuen, Terence
2003-01-01
Canadian panel data 1988-90 were used to compare estimates of minimum-wage effects based on a low-wage/high-worker sample and a low-wage-only sample. Minimum-wage effect for the latter is nearly zero. Different results for low-wage subgroups suggest a significant effect for those with longer low-wage histories. (Contains 26 references.) (SK)
High-speed adaptive contact-mode atomic force microscopy imaging with near-minimum-force
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Juan; Zou, Qingze, E-mail: qzzou@rci.rutgers.edu
In this paper, an adaptive contact-mode imaging approach is proposed to replace traditional contact-mode imaging by addressing the major concerns in both the speed and the force exerted on the sample. The speed of traditional contact-mode imaging is largely limited by the need to maintain precision tracking of the sample topography over the entire imaged sample surface, while large image distortion and excessive probe-sample interaction force occur during high-speed imaging. In this work, first, the image distortion caused by the topography tracking error is accounted for in the topography quantification. Second, the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line topography is integrated into the topography feedback loop to enhance the sample topography tracking. The proposed approach is demonstrated and evaluated through imaging a calibration sample of square pitches at both high speeds (e.g., scan rates of 75 Hz and 130 Hz) and large sizes (e.g., scan sizes of 30 μm and 80 μm). The experimental results show that compared to traditional constant-force contact-mode imaging, the imaging speed can be increased by over 30-fold (with the scanning speed at 13 mm/s), and the probe-sample interaction force can be reduced by more than 15% while maintaining the same image quality.
NASA Astrophysics Data System (ADS)
Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter
1987-03-01
The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs. Shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses but not the number of overflows determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one, the power spectrum needs a three times higher pulse rate for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantage of the present system is the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be done with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
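The off-line correlation computed from the stored primary data can be sketched as a direct multiply-and-accumulate over lags (a simplified linear-lag estimator; dedicated hardware correlators and modern instruments typically use normalized multi-tau schemes instead):

```python
def autocorrelation(counts, max_lag):
    """Unnormalized photon-count autocorrelation G(tau), computed in
    software from a stored record of per-sample-time counts, as done
    after acquisition rather than on-line."""
    n = len(counts)
    return [sum(counts[i] * counts[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

# A constant count rate yields a flat correlation function.
print(autocorrelation([2, 2, 2, 2, 2, 2], max_lag=3))
```

Because the full primary record is kept (up to 49 000 points here), the same data can also feed a power spectrum, histograms, or stability checks, which is the flexibility the authors trade against on-line correlators.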
Antibiotic loading and release studies of LSMO nanoparticles embedded in an acrylic polymer
NASA Astrophysics Data System (ADS)
Biswas, Sonali; Keshri, Sunita; Goswami, Sudipta; Isaac, Jinu; Ganguly, Swastika; Perov, Nikolai
2016-12-01
In this paper, we present the drug loading and release studies of lanthanum strontium manganite (LSMO) nanoparticles (NPs). The LSMO NPs, grown using the sol-gel method, were embedded in an acrylic interpenetrating polymer network to make the sample applicable for biomedical purposes. Scanning electron microscopy showed that these NPs were well dispersed in the polymer. The grain size of these NPs lies in the range of 25-45 nm, as confirmed by transmission electron microscopy. Measurements of DC magnetization and hysteresis loops reveal that the basic magnetic behaviour of the LSMO NPs remained almost unaltered even after embedding in the polymer, but with a lower saturation magnetization. The drug loading and release studies of the grown sample were carried out using an antibiotic, ciprofloxacin. In minimum inhibitory concentration tests, the drug-loaded sample exhibited high activity against different strains of bacteria, comparable to pure ciprofloxacin.
Measurement of the thermal expansion of melt-textured YBCO using optical fibre grating sensors
NASA Astrophysics Data System (ADS)
Zeisberger, M.; Latka, I.; Ecke, W.; Habisreuther, T.; Litzkendorf, D.; Gawalek, W.
2005-02-01
In this paper we present measurements of the thermal expansion of melt-textured YBaCuO in the temperature range 30-300 K by means of optical fibre sensors. The sample, which had a size of 38 × 38 × 18 mm3, was prepared by our standard melt-texturing process using SmBaCuO seeds. One fibre containing three Bragg gratings which act as strain sensors was glued to the sample surface with two sensors parallel to the ab-plane and one sensor parallel to the c-axis. The sample was cooled down to a minimum temperature of 30 K in a vacuum chamber using a closed cycle refrigerator. In the temperature range we used, the thermal expansion coefficients are in the range of (3-9) × 10-6 K-1 (ab-direction) and (5-13) × 10-6 K-1 (c-direction).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pombet, Denis; Desnoyers, Yvon; Charters, Grant
2013-07-01
The TruPro® process enables the collection of a significant number of samples for characterizing radiological materials. This innovative alternative technique was trialled for the ANDRA quality-control inspection of cemented packages, and proved quicker and more prolific than the current methodology. Using classical statistics and geostatistics approaches, the physical and radiological characteristics of two hulls containing wastes (sludges or concentrates) immobilized in a hydraulic binder are assessed in this paper. The waste homogeneity is also evaluated against the ANDRA criterion. Sensitivity to sample size (the support effect), the presence of extreme values, the acceptable deviation rate, and the minimum number of data are discussed. The final objectives are to check the homogeneity of the two characterized radwaste packages and to validate and reinforce this alternative characterization methodology. (authors)
Atomistic modeling of dropwise condensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikarwar, B. S., E-mail: bssikarwar@amity.edu; Singh, P. L.; Muralidhar, K.
The basic aim of atomistic modeling of water condensation is to determine the size of the stable cluster and connect phenomena occurring at the atomic scale to the macroscale. In this paper, a population balance model is described in terms of rate equations to obtain the number density distribution of the resulting clusters. The residence time is taken to be large enough that all adatoms in the vapor phase have sufficient time to lose their latent heat and condense. The simulation assumes clusters of a given size to be formed from clusters of smaller sizes, but not by the disintegration of larger clusters. The largest stable cluster size in the number density distribution is taken to be representative of the minimum drop radius formed in a dropwise condensation process. A numerical confirmation of this result against predictions based on a thermodynamic model has been obtained. Results show that the number density distribution is sensitive to the surface diffusion coefficient and the rate of vapor flux impinging on the substrate. The minimum drop radius increases with the diffusion coefficient and the impinging vapor flux; however, the dependence is weak. The minimum drop radius predicted from thermodynamic considerations matches the prediction of the cluster model, though the former does not account for the effect of surface properties on nucleation. For a chemically passive surface, the diffusion coefficient and the residence time depend on the surface texture via the coefficient of friction. Thus, physical texturing provides a means of changing, within limits, the minimum drop radius. The study reveals that surface texturing at the scale of the minimum drop radius does not provide controllability of macro-scale dropwise condensation at large timescales, once a dynamic steady state is reached.
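The rate-equation structure described above can be sketched numerically. The following is a minimal monomer-attachment population balance under the stated assumption (clusters grow by capturing smaller units, never disintegrate); the flux, rate coefficient, and cluster-size cutoff are illustrative values, not the authors' parameters.

```python
# Minimal monomer-attachment population balance: size-k clusters grow only
# by capturing single adatoms (no disintegration). All rates are illustrative.
K = 20            # largest cluster size tracked
F = 1.0           # impinging monomer flux (arbitrary units)
a = 0.1           # attachment rate coefficient (size-independent here)
dt, steps = 1e-3, 50000

n = [0.0] * (K + 1)   # n[k] = number density of size-k clusters; n[0] unused

for _ in range(steps):
    n1 = n[1]
    dn = [0.0] * (K + 1)
    # monomers: gained from flux, lost to dimer formation and attachments
    dn[1] = F - 2 * a * n1 * n1 - a * n1 * sum(n[2:K])
    dn[2] = a * n1 * n1 - a * n1 * n[2]          # dimers form from two monomers
    for k in range(3, K + 1):                    # growth chain up to size K
        dn[k] = a * n1 * n[k - 1] - (a * n1 * n[k] if k < K else 0.0)
    for k in range(1, K + 1):
        n[k] += dt * dn[k]

# n now tracks how the cluster-size distribution fills in over time; the
# largest well-populated size plays the role of the stable (minimum-drop) cluster
```

A real implementation would use size-dependent attachment rates and a stiff ODE integrator; this Euler loop only shows the bookkeeping of the rate equations.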
The effect of rock fabric on P-wave velocity distribution in amphibolites
NASA Astrophysics Data System (ADS)
Vajdová, V.; Přikryl, R.; Pros, Z.; Klíma, K.
1999-07-01
This study presents a contribution to the laboratory investigation of the elastic properties and rock fabric of amphibolites. P-wave velocity was determined on four spherical samples prepared from a shallow borehole core. The measurements were conducted in 132 directions under various hydrostatic pressures (up to 400 MPa). The rock fabric was investigated by image analysis of thin sections, which enabled precise determination of the grain size, modal composition, and shape parameters of the rock-forming minerals. Laboratory measurement of P-waves revealed pseudo-orthorhombic symmetry of the rock fabric in the amphibolites studied. This symmetry reflects the rocks' macro- and microfabric. The maximum P-wave velocity corresponds to the macroscopically visible stretching lineation; the minimum P-wave velocity is oriented perpendicular to the foliation plane. The average grain size is the main microstructural factor controlling the mean P-wave velocity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smitherman, C; Chen, B; Samei, E
2014-06-15
Purpose: This work involved comprehensive modeling of task-based CT performance across a wide range of protocols. The approach was used for optimization and consistency of dose and image quality within a large multi-vendor clinical facility. Methods: 150 adult protocols from the Duke University Medical Center were grouped into sub-protocols with similar acquisition characteristics. A size-based image quality phantom (Duke Mercury Phantom) was imaged using these sub-protocols over a range of clinically relevant doses on two CT manufacturer platforms (Siemens, GE). The images were analyzed to extract task-based image quality metrics such as the Task Transfer Function (TTF), Noise Power Spectrum, and Az based on designer nodule task functions. The data were analyzed in terms of the detectability of a lesion of given size/contrast as a function of dose, patient size, and protocol. A graphical user interface (GUI) was developed to predict the image quality and dose needed to achieve a minimum level of detectability. Results: Image quality trends with variations in dose, patient size, and lesion contrast/size were evaluated, and the calculated data behaved as predicted. The GUI proved effective in predicting the Az values representing radiologist confidence for a targeted lesion, patient size, and dose. As an example, an abdomen-pelvis exam on the GE scanner, with a task size/contrast of 5-mm/50-HU and an Az of 0.9, requires a dose of 4.0, 8.9, and 16.9 mGy for patient diameters of 25, 30, and 35 cm, respectively. For a constant patient diameter of 30 cm, the minimum detected lesion size at those dose levels would be 8.4, 5, and 3.9 mm, respectively. Conclusion: The designed CT protocol optimization platform can be used to evaluate minimum detectability across dose levels and patient diameters. The method can be used to improve individual protocols as well as protocol consistency across CT scanners.
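The dose look-up such a tool performs can be illustrated with the example numbers quoted in the abstract. The piecewise log-linear (exponential) interpolation between calibration points is an assumption made for this sketch, not the tool's documented method.

```python
import math

# Calibration points from the abstract's GE abdomen-pelvis example:
# dose required for Az = 0.9 at a 5-mm/50-HU task, by patient diameter.
diam = [25.0, 30.0, 35.0]        # patient diameter, cm
dose = [4.0, 8.9, 16.9]          # required dose, mGy

def dose_for_diameter(d):
    """Piecewise log-linear interpolation (dose grows ~exponentially with size)."""
    for (d0, d1), (q0, q1) in zip(zip(diam, diam[1:]), zip(dose, dose[1:])):
        if d0 <= d <= d1:
            t = (d - d0) / (d1 - d0)
            return math.exp((1 - t) * math.log(q0) + t * math.log(q1))
    raise ValueError("diameter outside calibrated range")

mid = dose_for_diameter(27.5)    # geometric mean of bracketing doses, ~5.97 mGy
```

The same table could be interpolated linearly instead; the log-linear form simply reflects the roughly exponential growth of required dose with patient diameter visible in the three calibration points.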
7 CFR 51.1416 - Optional determinations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... throughout the lot. (a) Edible kernel content. A minimum sample of at least 500 grams of in-shell pecans shall be used for determination of edible kernel content. After the sample is weighed and shelled... determine edible kernel content for the lot. (b) Poorly developed kernel content. A minimum sample of at...
Code of Federal Regulations, 2013 CFR
2013-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...
Code of Federal Regulations, 2014 CFR
2014-10-01
..., DEPARTMENT OF COMMERCE FISHERIES OF THE CARIBBEAN, GULF OF MEXICO, AND SOUTH ATLANTIC Shrimp Fishery of the Gulf of Mexico § 622.56 Size limits. Shrimp not in compliance with the applicable size limit as... shrimp harvested in the Gulf EEZ are subject to the minimum-size landing and possession limits of...
Code of Federal Regulations, 2011 CFR
2011-01-01
... cherry tomatoes and Pyriforme type tomatoes commonly referred to as pear shaped tomatoes, and other... Standards for Fresh Tomatoes 1 Size § 51.1859 Size. (a) The size of tomatoes packed in any standard type... measurement for minimum diameter shall be the largest diameter of the tomato measured at right angles to a...
Code of Federal Regulations, 2010 CFR
2010-01-01
... cherry tomatoes and Pyriforme type tomatoes commonly referred to as pear shaped tomatoes, and other... Standards for Fresh Tomatoes 1 Size § 51.1859 Size. (a) The size of tomatoes packed in any standard type... measurement for minimum diameter shall be the largest diameter of the tomato measured at right angles to a...
Aad, G.; Abbott, B.; Abdallah, J.; ...
2015-10-01
The paper presents studies of Bose–Einstein Correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range pT > 100 MeV and |η| < 2.5 in proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. Finally, the dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.
Similarities and differences in dream content at the cross-cultural, gender, and individual levels.
William Domhoff, G; Schneider, Adam
2008-12-01
The similarities and differences in dream content at the cross-cultural, gender, and individual levels provide one starting point for carrying out studies that attempt to discover correspondences between dream content and various types of waking cognition. Hobson and Kahn's (Hobson, J. A., & Kahn, D. (2007). Dream content: Individual and generic aspects. Consciousness and Cognition, 16, 850-858.) conclusion that dream content may be more generic than most researchers realize, and that individual differences are less salient than usually thought, provides the occasion for a review of findings based on the Hall and Van de Castle (Hall, C., & Van de Castle, R. (1966). The content analysis of dreams. New York: Appleton-Century-Crofts.) coding system for the study of dream content. Then new findings based on a computationally intensive randomization strategy are presented to show the minimum sample sizes needed to detect gender and individual differences in dream content. Generally speaking, sample sizes of 100-125 dream reports are needed because most dream elements appear in less than 50% of dream reports and the magnitude of the differences usually is not large.
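The randomization strategy behind that sample-size finding can be sketched as follows: simulate two groups whose dream reports contain an element at different rates, test each simulated study with a permutation test, and see how the detection rate changes with the number of reports per group. The element rates (40% vs. 25%) and test settings are hypothetical.

```python
import random

random.seed(1)

def detect_rate(p1, p2, n, trials=200, perms=100, alpha=0.05):
    """Share of simulated studies whose permutation test detects the difference."""
    hits = 0
    for _ in range(trials):
        # each report either contains the element (1) or not (0)
        a = [1 if random.random() < p1 else 0 for _ in range(n)]
        b = [1 if random.random() < p2 else 0 for _ in range(n)]
        obs = abs(sum(a) - sum(b)) / n
        pool = a + b
        extreme = 0
        for _ in range(perms):              # randomization null distribution
            random.shuffle(pool)
            if abs(sum(pool[:n]) - sum(pool[n:])) / n >= obs:
                extreme += 1
        if extreme / perms < alpha:
            hits += 1
    return hits / trials

power_small = detect_rate(0.40, 0.25, 30)   # 30 reports per group
power_large = detect_rate(0.40, 0.25, 125)  # ~125 reports per group
```

With element rates in this range, detection is unreliable at 30 reports per group and much better at 125, consistent with the abstract's 100-125 report recommendation.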
Gebler, J.B.
2004-01-01
The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values for the distributions of aquatic invertebrate communities, and ramifications of natural variability for the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and of the effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but was generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reaches may not be adequate to represent a stream segment, depending on the effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
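A simplified version of the permutation-style subsampling: draw k of the reaches at random, compute the CV and a t-based MDD, and repeat. Under the idealized assumption that reaches are exchangeable, the MDD shrinks roughly as 1/√k; the study's point is that patchy distributions can break this expectation. The per-reach metric values and the critical constant below are hypothetical.

```python
import math
import random
import statistics

random.seed(7)
metric = [142, 118, 95, 160, 87, 133, 101, 149, 122]  # hypothetical per-reach values

def cv_and_mdd(values, k, reps=1000, crit=2.0):
    """Mean CV (%) and t-based MDD (% of mean) over random k-reach subsamples.
    crit stands in roughly for t_alpha + t_beta at alpha = 0.05, power = 0.80."""
    cvs, mdds = [], []
    for _ in range(reps):
        sub = random.sample(values, k)
        m, s = statistics.mean(sub), statistics.stdev(sub)
        cvs.append(100 * s / m)
        # MDD for a two-sample comparison with k reaches per group
        mdds.append(100 * crit * math.sqrt(2 * s * s / k) / m)
    return statistics.mean(cvs), statistics.mean(mdds)

cv3, mdd3 = cv_and_mdd(metric, 3)   # 3 reaches per sample
cv6, mdd6 = cv_and_mdd(metric, 6)   # 6 reaches per sample
```

In this exchangeable-reach sketch more reaches give a smaller MDD; the study's observation that MDDs sometimes grew with sample size reflects real patchiness that this toy resampling deliberately lacks.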
NASA Astrophysics Data System (ADS)
Won, An-Na; Song, Hae-Eun; Yang, Young-Kwon; Park, Jin-Chul; Hwang, Jung-Ha
2017-07-01
After the outbreak of the MERS (Middle East Respiratory Syndrome) epidemic, issues were raised regarding the response capabilities of medical institutions, including the lack of isolation rooms at hospitals. Since then, the government of Korea has been revising regulations to enforce medical laws in order to expand the operation of isolation rooms and to strengthen standards regarding their mandatory installation at hospitals. Among general and tertiary hospitals in Korea, a total of 159 are estimated to be required to install isolation rooms to meet minimum standards. For the purpose of contributing to future hospital construction plans, this study conducted a questionnaire survey of experts and analysed the environment and devices necessary in isolation rooms, to determine their appropriate minimum size for treating patients. The results of the analysis are as follows: First, isolation rooms at hospitals are required to have a minimum 3,300 mm minor axis and a minimum 5,000 mm major axis for the isolation room itself, and a minimum 1,800 mm minor axis for the antechamber where personal protective equipment is donned and removed. Second, the 15 m²-or-larger standard for the floor area of isolation rooms will have to be reviewed, and standards for the minimum width of isolation rooms will have to be established.
Quantifying environmental limiting factors on tree cover using geospatial data.
Greenberg, Jonathan A; Santos, Maria J; Dobrowski, Solomon Z; Vanderbilt, Vern C; Ustin, Susan L
2015-01-01
Environmental limiting factors (ELFs) are the thresholds that determine the maximum or minimum biological response for a given suite of environmental conditions. We asked the following questions: 1) Can we detect ELFs on percent tree cover across the eastern slopes of the Lake Tahoe Basin, NV? 2) How are the ELFs distributed spatially? 3) To what extent are unmeasured environmental factors limiting tree cover? ELFs are difficult to quantify because they require large sample sizes. We addressed this by using geospatial data over a relatively large spatial extent, where the wall-to-wall sampling ensures the inclusion of the rare data points that define the minimum or maximum response to environmental factors. We tested mean temperature, minimum temperature, potential evapotranspiration (PET) and PET minus precipitation (PET-P) as potential limiting factors on percent tree cover. We found that the study area showed system-wide limitations on tree cover, and each of the factors showed evidence of being limiting. However, only 1.2% of the total area appeared to be limited by the four environmental factors, suggesting that other, unmeasured factors limit much of the tree cover in the study area. Where sites were near their theoretical maximum, non-forest sites (tree cover < 25%) were primarily limited by cold mean temperatures, open-canopy forest sites (tree cover between 25% and 60%) were primarily limited by evaporative demand, and closed-canopy forests were not limited by any particular environmental factor. The detection of ELFs is necessary in order to fully understand the breadth of limitations that species experience within their geographic ranges.
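The envelope idea behind ELF detection can be sketched with synthetic data: bin an environmental variable and take a high quantile of tree cover within each bin, approximating the maximum cover that factor permits. Everything below is illustrative; it is not the study's geospatial layers or its exact method.

```python
import random

random.seed(5)

def envelope(env, cover, nbins=10, q=0.99):
    """High-quantile response per bin of the environmental variable:
    an approximation of the limiting-factor envelope."""
    lo, hi = min(env), max(env)
    bins = [[] for _ in range(nbins)]
    for e, c in zip(env, cover):
        i = min(int((e - lo) / (hi - lo) * nbins), nbins - 1)
        bins[i].append(c)
    out = []
    for b in bins:
        b.sort()
        out.append(b[int(q * (len(b) - 1))] if b else None)
    return out

# synthetic landscape: tree cover capped by a temperature-dependent ceiling,
# with unmeasured factors holding most pixels below that ceiling
env = [random.uniform(-5, 15) for _ in range(5000)]          # e.g. mean temp, °C
cover = [random.uniform(0, min(100, max(5, (t + 5) * 10))) for t in env]
env_curve = envelope(env, cover)
```

The rising envelope recovers the imposed temperature ceiling even though most points lie well below it, which is exactly why wall-to-wall sampling of rare near-maximum points matters for ELF detection.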
Multiscale sampling of plant diversity: Effects of minimum mapping unit size
Stohlgren, T.J.; Chong, G.W.; Kalkhan, M.A.; Schell, L.D.
1997-01-01
Only a small portion of any landscape can be sampled for vascular plant diversity because of constraints of cost (salaries, travel time between sites, etc.). Often, the investigator decides to reduce the cost of creating a vegetation map by increasing the minimum mapping unit (MMU), and/or by reducing the number of vegetation classes to be considered. Questions arise about what information is sacrificed when map resolution is decreased. We compared plant diversity patterns from vegetation maps made with 100-ha, 50-ha, 2-ha, and 0.02-ha MMUs in a 754-ha study area in Rocky Mountain National Park, Colorado, United States, using four 0.025-ha and 21 0.1-ha multiscale vegetation plots. We developed and tested species-log(area) curves, correcting the curves for within-vegetation type heterogeneity with Jaccard's coefficients. Total species richness in the study area was estimated from vegetation maps at each resolution (MMU), based on the corrected species-area curves, total area of the vegetation type, and species overlap among vegetation types. With the 0.02-ha MMU, six vegetation types were recovered, resulting in an estimated 552 species (95% CI = 520-583 species) in the 754-ha study area (330 plant species were observed in the 25 plots). With the 2-ha MMU, five vegetation types were recognized, resulting in an estimated 473 species for the study area. With the 50-ha MMU, 439 plant species were estimated for the four vegetation types recognized in the study area. With the 100-ha MMU, only three vegetation types were recognized, resulting in an estimated 341 plant species for the study area. Locally rare species and keystone ecosystems (areas of high or unique plant diversity) were missed at the 2-ha, 50-ha, and 100-ha scales. 
To evaluate the effects of minimum mapping unit size requires: (1) an initial stratification of homogeneous, heterogeneous, and rare habitat types; and (2) an evaluation of within-type and between-type heterogeneity generated by environmental gradients and other factors. We suggest that at least some portions of vegetation maps created at a coarser level of resolution be validated at a higher level of resolution.
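The species-log(area) extrapolation step above can be sketched as a least-squares fit of S = a + b·log10(A). The plot areas, species counts, and target area below are hypothetical, and the sketch omits the Jaccard heterogeneity correction the authors apply.

```python
import math

# Hypothetical nested-plot data: cumulative species richness at four areas
area_ha = [0.001, 0.01, 0.1, 1.0]
species = [8, 15, 24, 31]

# least-squares fit of S = a + b * log10(A)
x = [math.log10(v) for v in area_ha]
n = len(x)
xbar, ybar = sum(x) / n, sum(species) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, species)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def predict(total_area_ha):
    """Extrapolated richness for the full area of a vegetation type."""
    return a + b * math.log10(total_area_ha)

est = predict(120.0)   # richness estimate for an illustrative 120-ha type
```

Summing such per-type estimates, minus the species overlap among types, gives the study-area richness figures the abstract reports for each MMU.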
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altabet, Y. Elia; Debenedetti, Pablo G., E-mail: pdebene@princeton.edu; Stillinger, Frank H.
In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρ_S. The tensile limit at ρ_S is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρ_S is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.
Zhang, Qing; Zhu, Liang; Feng, Hanhua; Ang, Simon; Chau, Fook Siong; Liu, Wen-Tso
2006-01-18
This paper reports the development of a microfluidic device for the rapid detection of viable and nonviable microbial cells through dual labeling by fluorescence in situ hybridization (FISH) and a quantum dot (QD)-labeled immunofluorescent assay (IFA). The coin-sized device consists of a microchannel and filtering pillars (gap = 1-2 μm) and was demonstrated to effectively trap and concentrate microbial cells (i.e. Giardia lamblia). After sample injection, FISH probe solution and QD-labeled antibody solution were sequentially pumped into the device to accelerate the fluorescent labeling reactions at optimized flow rates (1 and 20 μL/min, respectively). With 2 min of washing for each assay, the whole process could be finished within 30 min, with minimal consumption of labeling reagents and superior fluorescent signal intensity. The choice of QD 525 for the IFA resulted in a bright and stable fluorescent signal, with minimal interference with the Cy3 signal from the FISH detection.
Crucial effect of melt homogenization on the fragility of non-stoichiometric chalcogenides
NASA Astrophysics Data System (ADS)
Ravindren, Sriram; Gunasekera, K.; Tucker, Z.; Diebold, A.; Boolchand, P.; Micoulaut, M.
2014-04-01
The kinetics of homogenization of binary AsxSe100-x melts in the As concentration range 0% < x < 50% are followed in Fourier transform (FT)-Raman profiling experiments, which show that 2-g melts in the middle concentration range 20% < x < 30% take nearly two weeks to homogenize when starting materials are reacted at 700 °C. In glasses of proven homogeneity, we find molar volumes to vary non-monotonically with composition, and the fragility index M displays a broad global minimum in the 20% < x < 30% range of x, wherein M < 20. We show that properly homogenized samples have a lower measured fragility than larger under-reacted melts. The enthalpy of relaxation at Tg, ΔHnr(x), shows a minimum in the 27% < x < 37% range. The super-strong nature of melt compositions in the 20% < x < 30% range suppresses melt diffusion at high temperatures, leading to the slow kinetics of melt homogenization.
Memory performance in abstinent 3,4-methylenedioxymethamphetamine (MDMA, "ecstasy") users.
Groth-Marnat, Gary; Howchar, Hennedy; Marsh, Ali
2007-02-01
Research with animals and humans has suggested that acute and subacute use of 3,4-methylenedioxymethamphetamine (MDMA "ecstasy") may lead to memory impairment. However, research is limited by (1) low power due to small sample sizes, (2) the possible confound of polydrug use, and (3) the failure to consider intelligence as a covariate. The present study compared the memory performance on the Wechsler Memory Scale-III of 26 abstinent (2-wk. minimum) recreational MDMA users with 26 abstinent (2-wk. minimum) recreational polydrug users. Despite significantly greater polydrug use amongst these MDMA users, no significant group differences in memory were observed. Regression of total lifetime amount of MDMA use also did not predict memory performance after accounting for intelligence. In addition, the length of time since abstinence (at least 2 wk.) was not associated with an increase in memory performance. Greater total lifetime cocaine use, rather than total lifetime MDMA use, was significantly associated with greater decrements in General Memory and Delayed Verbal Memory performance.
Generalizing boundaries for triangular designs, and efficacy estimation at extended follow-ups.
Allison, Annabel; Edwards, Tansy; Omollo, Raymond; Alves, Fabiana; Magirr, Dominic; E Alexander, Neal D
2015-11-16
Visceral leishmaniasis (VL) is a parasitic disease transmitted by sandflies and is fatal if left untreated. Phase II trials of new treatment regimens for VL are primarily carried out to evaluate safety and efficacy, while pharmacokinetic data are also important to inform future combination treatment regimens. The efficacy of VL treatments is evaluated at two time points: initial cure, when treatment is completed, and definitive cure, commonly 6 months post end of treatment, to allow for slow response to treatment and detection of relapses. This paper investigates a generalization of the triangular design to impose a minimum sample size for pharmacokinetic or other analyses, and methods to estimate efficacy at extended follow-up, accounting for the sequential design and for changes in cure status during extended follow-up. We provide R functions that generalize the triangular design to impose a minimum sample size before allowing stopping for efficacy. For estimation of efficacy at a second, extended follow-up time, the performance of a shrinkage estimator (SHE), a probability tree estimator (PTE) and the maximum likelihood estimator (MLE) was assessed by simulation. The SHE and PTE are viable approaches to estimating efficacy at extended follow-up, although the SHE performed better than the PTE: the bias and root mean square error were lower and the coverage probabilities higher. Generalization of the triangular design is simple to implement for adaptations to meet requirements for pharmacokinetic analyses. Using the simple MLE approach to estimate efficacy at extended follow-up will lead to biased results, generally over-estimating treatment success. The SHE is recommended in trials of two or more treatments. The PTE is an acceptable alternative for one-arm trials or where use of the SHE is not possible due to computational complexity. NCT01067443, February 2010.
NASA Astrophysics Data System (ADS)
Muthaiah, V. M. Suntharavel; Mula, Suhrit
2018-03-01
The present work investigates the microstructural stability during spark plasma sintering (SPS) of Fe-Cr-Y alloys, and their mechanical properties and corrosion behavior, for possible applications in nuclear power plants and the petrochemical industry. SPS was carried out on the Fe-7Cr-1Y and Fe-15Cr-1Y alloys at 800 °C, 900 °C, and 1000 °C owing to their superior thermal stability, as reported in Muthaiah et al. [Mater Charact 114:43-53, 2016]. Microstructural analysis by TEM and electron backscatter diffraction confirmed that the sintered samples exhibited a dual grain-size distribution, with >50 pct of grains below 200 nm and the remainder in the range 200 nm to 2 µm. The best combination of hardness, wear resistance, and corrosion behavior was achieved for the samples sintered at 1000 °C. The high hardness (9.6 GPa), minimum coefficient of friction (0.25), extremely low wear volume (0.00277 × 10⁻² mm³), and low corrosion rate (3.43 mpy) are discussed in the light of solid solution strengthening, grain size strengthening, grain boundary segregation, excellent densification due to diffusion bonding, and precipitation hardening due to the uniformly distributed nanosize Fe17Y2 phase in the alloy matrix. SEM analysis of the worn surfaces and corroded features corroborated the wear resistance and corrosion behavior of the corresponding samples.
2011-01-01
To obtain approval for the use of vertebrate animals in research, an investigator must assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal. How does an investigator make that assurance? A power analysis is most accurate when the outcome is known before the study, which it rarely is. A 'pilot study' is appropriate only when the number of animals used is a tiny fraction of the number that will be invested in the main study, because the data from the pilot animals cannot legitimately be used again in the main study without increasing the rate of type I errors (false discovery). Traditional significance testing requires the investigator to determine the final sample size before any data are collected and then to delay analysis until all of the data are final. An investigator often learns at that point either that the sample size was larger than necessary or too small to achieve significance. Subjects cannot be added at this point in the study without increasing type I errors. In addition, journal reviewers may require more replications in quantitative studies than are truly necessary. Sequential stopping rules used with traditional significance tests allow incremental accumulation of data on a biomedical research problem so that significance, replicability, and use of a minimal number of animals can be assured without increasing type I errors. PMID:21838970
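The type I error inflation that motivates properly designed sequential stopping rules can be demonstrated with a small simulation. This sketches the *problem* (naive peeking at a two-sample z-test after every few subjects), not the corrected stopping rules themselves.

```python
import math
import random
import statistics

random.seed(3)

def zstat(sample_a, sample_b):
    """Two-sample z statistic (population-variance approximation)."""
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(statistics.pvariance(sample_a) / na +
                   statistics.pvariance(sample_b) / nb)
    return abs(statistics.mean(sample_a) - statistics.mean(sample_b)) / se

def naive_sequential(nmax=100, step=10, zcrit=1.96):
    """Add `step` subjects per group at a time; stop at the first z > zcrit.
    Both groups are drawn from the SAME null distribution."""
    a, b = [], []
    while len(a) < nmax:
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]
        if zstat(a, b) > zcrit:
            return True        # "significant" purely by chance
    return False

false_pos = sum(naive_sequential() for _ in range(1000)) / 1000
# well above the nominal 0.05, because of the repeated looks at the data
```

Published sequential stopping rules keep the overall error rate at the nominal level by tightening the per-look threshold; the point of the sketch is only that reusing an unadjusted 1.96 cutoff at every interim look does not.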
Shen, Shaobo; Rao, Ruirui; Wang, Jincao
2013-01-01
The effects of ore particle size on selectively bioleaching phosphorus (P) from high-phosphorus iron ore were studied. The average contents of P and Fe in the iron ore were 1.06 and 47.90% (w/w), respectively. The particle sizes of the ores used ranged from 58 to 3350 μm. It was found that indigenous sulfur-oxidizing bacteria from municipal wastewater could grow well in slurries of solid high-phosphorus iron ore and municipal wastewater. The minimum bioleaching pH reached in the current work was 0.33. The P content in the bioleached iron ore decreased slightly with decreasing particle size, while the removal percentage of Fe decreased appreciably with decreasing particle size. The optimal particle size fraction was 58-75 μm: in this case the P content in the bioleached iron ore reached a minimum of 0.16% (w/w), the removal percentage of P attained a maximum of 86.7%, the removal percentage of Fe dropped to a minimum of 1.3%, and the Fe content in the bioleached iron ore reached a maximum of 56.4% (w/w). The iron ores thus obtained were suitable for use in the iron-making process. The removal percentage of ore solids decreased with decreasing particle size in the 106-3350 μm range. Possible reasons for these observations are explored. It was inferred that the particle sizes of the iron ore used in this work have no significant effect on the viability of the sulfur-oxidizing bacteria.
50 CFR 622.48 - Adjustment of management measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... biomass achieved by fishing at MSY (BMSY) (or proxy), maximum fishing mortality threshold (MFMT), minimum... biomass achieved by fishing at MSY (BMSY), minimum stock size threshold (MSST), and maximum fishing.... MSY, OY, and TAC. (f) South Atlantic snapper-grouper and wreckfish. Biomass levels, age-structured...
Preparation of nanosize polyaniline and its utilization for microwave absorber.
Abbas, S M; Dixit, A K; Chatterjee, R; Goel, T C
2007-06-01
Polyaniline powder in nanosize has been synthesized by a chemical oxidative route. XRD, FTIR, and TEM were used to characterize the polyaniline powder. The crystallite size was estimated from the XRD profile and confirmed by TEM to be in the range of 15 to 20 nm. Composite absorbers were prepared by mixing different ratios of polyaniline into a procured polyurethane (PU) binder. The complex permittivity (ε′ - jε″) and complex permeability (μ′ - jμ″) were measured in the X-band (8.2-12.4 GHz) using an Agilent network analyzer (model PNA E8364B) and its software module 85071 (version 'E'). The measured values of these parameters were used to determine the reflection loss at different frequencies and sample thicknesses, based on a model of a single-layered plane-wave absorber backed by a perfect conductor. An optimized polyaniline/PU ratio of 3:1 gave a minimum reflection loss of -30 dB (99.9% power absorption) at the central frequency of 10 GHz and a bandwidth (full width at half minimum) of 4.2 GHz over the whole X-band in a sample thickness of 3.0 mm. The prepared composites can be fruitfully utilized for suppression of electromagnetic interference (EMI) and reduction of radar signatures (stealth technology).
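The single-layer, conductor-backed reflection-loss model referred to above is the standard transmission-line formula. A sketch follows, with stand-in material parameters rather than the measured polyaniline/PU data.

```python
import cmath
import math

C = 3e8  # speed of light, m/s

def reflection_loss_db(eps_r, mu_r, freq_hz, d_m):
    """RL = 20 log10 |(Z_in - 1)/(Z_in + 1)| for a single lossy layer of
    thickness d backed by a perfect conductor (Z_in normalized to free space):
    Z_in = sqrt(mu_r/eps_r) * tanh(j * 2*pi*f*d/c * sqrt(mu_r*eps_r))."""
    gamma = 1j * 2 * math.pi * freq_hz * d_m / C * cmath.sqrt(mu_r * eps_r)
    z_in = cmath.sqrt(mu_r / eps_r) * cmath.tanh(gamma)
    return 20 * math.log10(abs((z_in - 1) / (z_in + 1)))

# illustrative lossy material (engineering convention eps' - j*eps'')
# at 10 GHz and 3.0 mm thickness -- NOT the measured composite values
rl = reflection_loss_db(complex(8, -2), complex(1.1, -0.3), 10e9, 3.0e-3)
```

Sweeping `freq_hz` over 8.2-12.4 GHz with the measured ε and μ would reproduce the kind of X-band reflection-loss curve, matching-dip frequency, and bandwidth the abstract reports.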
Doshi, Urmi; Hamelberg, Donald
2012-11-13
In enhanced sampling techniques, the precision of the reweighted ensemble properties is often decreased due to large variation in statistical weights and reduction in the effective sampling size. To abate this reweighting problem, here, we propose a general accelerated molecular dynamics (aMD) approach in which only the rotatable dihedrals are subjected to aMD (RaMD), unlike the typical implementation wherein all dihedrals are boosted (all-aMD). Nonrotatable and improper dihedrals are marginally important to conformational changes or the different rotameric states. Not accelerating them avoids the sharp increases in the potential energies due to small deviations from their minimum energy conformations and leads to improvement in the precision of RaMD. We present benchmark studies on two model dipeptides, Ace-Ala-Nme and Ace-Trp-Nme, simulated with normal MD, all-aMD, and RaMD. We carry out a systematic comparison between the performances of both forms of aMD using a theory that allows quantitative estimation of the effective number of sampled points and the associated uncertainty. Our results indicate that, for the same level of acceleration and simulation length, as used in all-aMD, RaMD results in significantly less loss in the effective sample size and, hence, increased accuracy in the sampling of φ-ψ space. RaMD yields an accuracy comparable to that of all-aMD, from simulation lengths 5 to 1000 times shorter, depending on the peptide and the acceleration level. Such improvement in speed and accuracy over all-aMD is highly remarkable, suggesting RaMD as a promising method for sampling larger biomolecules.
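The loss in effective sample size under reweighting can be illustrated with the Kish estimator n_eff = (Σw)²/Σw² (one common choice, not necessarily the exact estimator used in the paper), with weights exp(ΔV/kT) computed from assumed boost-energy distributions:

```python
import numpy as np

def effective_sample_size(boost_energies, kT=0.596):
    """Kish effective sample size of a reweighted ensemble.
    Weights are exp(dV/kT) for boost energies dV in kcal/mol;
    kT defaults to roughly 300 K in kcal/mol."""
    w = np.exp(np.asarray(boost_energies) / kT)
    return w.sum() ** 2 / (w @ w)

rng = np.random.default_rng(0)
# A larger spread in boost energy yields fewer effective samples,
# which is the reweighting problem RaMD is designed to abate.
small_spread = effective_sample_size(rng.normal(2.0, 0.5, 10_000))
large_spread = effective_sample_size(rng.normal(2.0, 3.0, 10_000))
print(small_spread > large_spread)  # True
```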
78 FR 3923 - Sunshine Act Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-17
... impact of tick sizes on small and middle capitalization companies, the economic consequences (including the costs and benefits) of increasing or decreasing minimum tick sizes, and whether other policy... the second panel will address the impact of tick sizes on the securities market in general, including...
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
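The 4PM item response function extends the three-parameter logistic with an upper asymptote d (slipping) in addition to the lower asymptote c (guessing); a minimal sketch, with assumed item parameter values:

```python
import math

def irf_4pm(theta, a, b, c, d):
    """Item response function of the four-parameter model (4PM):
    P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))."""
    return c + (d - c) / (1 + math.exp(-a * (theta - b)))

# Illustrative (assumed) item parameters: discrimination a, difficulty b,
# guessing c, slipping asymptote d
p = irf_4pm(theta=0.0, a=1.5, b=0.0, c=0.1, d=0.95)
print(round(p, 3))  # at theta = b, P is midway between asymptotes: 0.525
```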
Local sample thickness determination via scanning transmission electron microscopy defocus series.
Beyer, A; Straubinger, R; Belz, J; Volz, K
2016-05-01
The usable aperture sizes in (scanning) transmission electron microscopy ((S)TEM) have significantly increased in the past decade due to the introduction of aberration correction. With the consequent increase in convergence angle, the depth of focus has decreased severely, and optical sectioning in STEM has become feasible. Here we apply STEM defocus series to derive the local sample thickness of a TEM sample. To this end, experimental as well as simulated defocus series of thin Si foils were acquired. The systematic blurring of high resolution high angle annular dark field images is quantified by evaluating the standard deviation of the image intensity for each image of a defocus series. The derived dependencies exhibit a pronounced maximum at the optimum defocus and drop to a background value for higher or lower values. The full width at half maximum (FWHM) of the curve is equal to the sample thickness above a minimum thickness given by the size of the used aperture and the chromatic aberration of the microscope. The thicknesses obtained from experimental defocus series applying the proposed method are in good agreement with the values derived from other established methods. The key advantages of this method compared to others are its high spatial resolution and that it does not involve any time-consuming simulations. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
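The thickness estimate rests on reading the FWHM off the standard-deviation-versus-defocus curve above its background; a generic FWHM routine (a sketch, not the authors' code), checked on a Gaussian test curve:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked curve y(x)
    above its background, with linear interpolation at the half level."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    background = min(y[0], y[-1])
    half = background + (y.max() - background) / 2.0
    above = np.flatnonzero(y >= half)
    i, j = above[0], above[-1]
    # interpolate the half-level crossings on either side of the peak
    left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return right - left

# Gaussian test curve over a background: FWHM should be ~2.355 * sigma
defocus = np.linspace(-50, 50, 1001)   # nm, illustrative
curve = 1.0 + 4.0 * np.exp(-defocus**2 / (2 * 10.0**2))
print(round(fwhm(defocus, curve), 1))
```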
Rate-Compatible LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel
2009-01-01
A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications in which there are requirements for rate-compatible codes that have fixed input block sizes; these are codes in which only the numbers of parity bits are allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation
Gauging the Nearness and Size of Cycle Minimum
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.
1997-01-01
By definition, the conventional onset for the start of a sunspot cycle is the time when the smoothed sunspot number (i.e., the 12-month moving average) has decreased to its minimum value (called minimum amplitude) prior to the rise to its maximum value (called maximum amplitude) for the given sunspot cycle. On the basis of the modern era sunspot cycles 10-22 and on the presumption that cycle 22 is a short-period cycle having a cycle length of 120 to 126 months (the observed range of short-period modern era cycles), conventional onset for cycle 23 should not occur until sometime between September 1996 and March 1997, certainly between June 1996 and June 1997, based on the 95-percent confidence level deduced from the mean and standard deviation of period for the sample of six short-period modern era cycles. Also, because the first occurrence of a new-cycle, high-latitude (greater than or equal to 25 degrees) spot has always preceded conventional onset of the new cycle by at least 3 months (for the data-available interval of cycles 12-22), conventional onset for cycle 23 is not expected until about August 1996 or later, based on the first occurrence of a new cycle 23 high-latitude spot during the decline of old cycle 22 in May 1996. Although much excitement for an earlier-occurring minimum (about March 1996) for cycle 23 was voiced earlier this year, the present study shows that this exuberance is unfounded. The decline of cycle 22 continues to favor cycle 23 minimum sometime during the latter portion of 1996 to the early portion of 1997.
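The smoothed sunspot number referenced above is conventionally computed as a 13-month centered mean with half-weighted endpoints (equivalent to averaging two consecutive 12-month means); a minimal sketch:

```python
def smoothed_sunspot(monthly, i):
    """Conventional 13-month smoothed monthly sunspot number centered on
    month i: endpoint months weighted 1/2, sum divided by 12."""
    window = monthly[i - 6 : i + 7]  # 13 monthly values
    return (0.5 * window[0] + sum(window[1:12]) + 0.5 * window[12]) / 12

# A constant series smooths to itself
series = [100.0] * 25
print(smoothed_sunspot(series, 12))  # 100.0
```

Cycle minimum is then the month at which this smoothed series bottoms out before the rise to the next maximum.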
Neandertal talus bones from El Sidrón site (Asturias, Spain): A 3D geometric morphometrics analysis.
Rosas, Antonio; Ferrando, Anabel; Bastir, Markus; García-Tabernero, Antonio; Estalrrich, Almudena; Huguet, Rosa; García-Martínez, Daniel; Pastor, Juan Francisco; de la Rasilla, Marco
2017-10-01
The El Sidrón tali sample is assessed in an evolutionary framework. We aim to explore the relationship between Neandertal talus morphology and body size/shape. We test hypothesis 1, that talar Neandertal traits are influenced by body size, and hypothesis 2, that shape variables independent of body size correspond to inherited primitive features. We quantify 35 landmarks through 3D geometric morphometrics techniques to describe H. neanderthalensis-H. sapiens shape variation, using mean shape comparisons, principal component, phenetic cluster, and minimum spanning tree analyses, and partial least squares and regression of talus shape on body variables. Shape variation correlated with body size is compared to Neandertal-Modern Human (MH) evolutionary shape variation. The Neandertal sample is compared to early hominins. The Neandertal talus presents trochlear hypertrophy, greater equality of the trochlear rims, a shorter neck, a more expanded head, curvature and an anterior location of the medial malleolar facet, an expanded and projected lateral malleolar facet, and a laterally expanded posterior calcaneal facet compared to MH. The Neandertal talocrural joint morphology is influenced by body size. The other Neandertal talus traits do not co-vary with it or do not follow the same co-variation pattern as MH. Besides, the trochlear hypertrophy, the trochlear rim equality, and the short neck could be inherited primitive features; the medial malleolar facet morphology could be an inherited primitive feature or a secondarily primitive trait; and the posterior calcaneal facet would be an autapomorphic feature of the Neandertal lineage. © 2017 Wiley Periodicals, Inc.
Role of microextraction sampling procedures in forensic toxicology.
Barroso, Mário; Moreno, Ivo; da Fonseca, Beatriz; Queiroz, João António; Gallardo, Eugenia
2012-07-01
The last two decades have provided analysts with more sensitive technology, enabling scientists from all analytical fields to see what they were not able to see just a few years ago. This increased sensitivity has allowed drug detection at very low concentrations and testing in unconventional samples (e.g., hair, oral fluid, and sweat), which, despite their low analyte concentrations, has also led to a reduction in sample size. Along with this reduction, and as a result of the use of excessive amounts of potentially toxic organic solvents (with the subsequent environmental pollution and costs associated with their proper disposal), there has been a growing tendency to use miniaturized sampling techniques. These sampling procedures reduce organic solvent consumption to a minimum and at the same time provide a rapid, simple, and cost-effective approach. In addition, it is possible to achieve at least some degree of automation when using these techniques, which enhances sample throughput. These miniaturized sample preparation techniques may be roughly categorized into solid-phase and liquid-phase microextraction, depending on the nature of the analyte. This paper reviews recently published literature on the use of microextraction sampling procedures, with a special focus on the field of forensic toxicology.
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Grades of Apples Methods of Sampling and Calculation of Percentages § 51.308 Methods of sampling and... weigh ten pounds or less, or in any container where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated...
40 CFR 63.1385 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable emission limits: (1) Method 1 (40 CFR part 60, appendix A) for the selection of the sampling port location and number of sampling ports; (2) Method 2 (40 CFR part 60, appendix A) for volumetric flow rate.... Each run shall consist of a minimum run time of 2 hours and a minimum sample volume of 60 dry standard...
Menu Plans: Maximum Nutrition for Minimum Cost.
ERIC Educational Resources Information Center
Texas Child Care, 1995
1995-01-01
Suggests that menu planning is the key to getting maximum nutrition in day care meals and snacks for minimum cost. Explores United States Department of Agriculture food pyramid guidelines for children and tips for planning menus and grocery shopping. Includes suggested meal patterns and portion sizes. (HTH)
Minimum Areas for Elementary School Building Facilities.
ERIC Educational Resources Information Center
Pennsylvania State Dept. of Public Instruction, Harrisburg.
Minimum area space requirements in square footage for elementary school building facilities are presented, including facilities for instructional use, general use, and service use. Library, cafeteria, kitchen, storage, and multipurpose rooms should be sized for the projected enrollment of the building in accordance with the projection under the…
48 CFR 8.1102 - Presolicitation requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...
48 CFR 8.1102 - Presolicitation requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...
48 CFR 8.1102 - Presolicitation requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...
48 CFR 8.1102 - Presolicitation requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...
48 CFR 8.1102 - Presolicitation requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... that— (1) The vehicles requested are of maximum fuel efficiency and minimum body size, engine size, and... automobiles (sedans and station wagons) larger than Type IA, IB, or II (small, subcompact, or compact) are...
Multiple sensitive estimation and optimal sample size allocation in the item sum technique.
Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz
2018-01-01
For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
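Under stratified sampling, one of the designs the article particularizes to, the minimum-variance (Neyman) allocation assigns sample sizes proportional to N_h S_h; a sketch with assumed strata (illustrative sizes and standard deviations, not the article's survey data):

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Neyman (minimum-variance) allocation of n_total sample units:
    n_h proportional to N_h * S_h for stratum size N_h and SD S_h."""
    products = [N * S for N, S in zip(stratum_sizes, stratum_sds)]
    total = sum(products)
    return [n_total * p / total for p in products]

# Illustrative (assumed) strata
alloc = neyman_allocation(100, stratum_sizes=[500, 300, 200], stratum_sds=[2.0, 1.0, 1.0])
print([round(a) for a in alloc])  # [67, 20, 13]
```

The more variable first stratum receives a disproportionately large share of the sample, which is exactly the efficiency gain over proportional allocation.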
Ascent velocity and dynamics of the Fiumicino mud eruption, Rome, Italy
NASA Astrophysics Data System (ADS)
Vona, A.; Giordano, G.; De Benedetti, A. A.; D'Ambrosio, R.; Romano, C.; Manga, M.
2015-08-01
In August 2013 drilling triggered the eruption of mud near the international airport of Fiumicino (Rome, Italy). We monitored the evolution of the eruption and collected samples for laboratory characterization of physicochemical and rheological properties. Over time, muds show a progressive dilution with water; the rheology is typical of pseudoplastic fluids, with a small yield stress that decreases as mud density decreases. The eruption, while not naturally triggered, shares several similarities with natural mud volcanoes, including mud componentry, grain-size distribution, gas discharge, and mud rheology. We use the size of large ballistic fragments ejected from the vent along with mud rheology to compute a minimum ascent velocity of the mud. Computed values are consistent with in situ measurements of gas phase velocities, confirming that the stratigraphic record of mud eruptions can be quantitatively used to infer eruption history and ascent rates and hence to assess (or reassess) mud eruption hazards.
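A minimum ascent velocity can be estimated by balancing drag against the submerged weight of a spherical fragment (a sketch only; the drag coefficient, densities, and clast size below are assumed illustrative values, not those of the Fiumicino study):

```python
import math

def min_ascent_velocity(diameter_m, rho_particle, rho_fluid, cd=1.0, g=9.81):
    """Minimum fluid ascent velocity able to suspend a spherical fragment:
    drag (0.5 * cd * rho_f * v^2 * A) balancing submerged weight."""
    return math.sqrt(4 * g * diameter_m * (rho_particle - rho_fluid) / (3 * cd * rho_fluid))

# Illustrative 10 cm clast of 2500 kg/m3 rock carried by 1300 kg/m3 mud
v = min_ascent_velocity(0.10, rho_particle=2500, rho_fluid=1300)
print(round(v, 2))  # ~1.1 m/s
```

Larger ejected ballistics imply higher minimum velocities, which is how the stratigraphic record of fragment sizes constrains ascent rates.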
Modeling chain folding in protein-constrained circular DNA.
Martino, J A; Olson, W K
1998-01-01
An efficient method for sampling equilibrium configurations of DNA chains binding one or more DNA-bending proteins is presented. The technique is applied to obtain the tertiary structures of minimal bending energy for a selection of dinucleosomal minichromosomes that differ in degree of protein-DNA interaction, protein spacing along the DNA chain contour, and ring size. The protein-bound portions of the DNA chains are represented by tight, left-handed supercoils of fixed geometry. The protein-free regions are modeled individually as elastic rods. For each random spatial arrangement of the two nucleosomes assumed during a stochastic search for the global minimum, the paths of the flexible connecting DNA segments are determined through a numerical solution of the equations of equilibrium for torsionally relaxed elastic rods. The minimal energy forms reveal how protein binding and spacing and plasmid size differentially affect folding and offer new insights into experimental minichromosome systems. PMID:9591675
Leeman, Mats; Choi, Jaeyeong; Hansson, Sebastian; Storm, Matilda Ulmius; Nilsson, Lars
2018-05-29
The analysis of aggregates of therapeutic proteins is crucial in order to ensure efficacy and patient safety. Typically, the analysis is performed in the finished formulation to ensure that aggregates are not present. An important question is, however, what happens to therapeutic proteins, with regard to oligomerization and aggregation, after they have been administrated (i.e., in the blood). In this paper, the separation of whole blood, plasma, and serum is shown using asymmetric flow field-flow fractionation (AF4) with a minimum of sample pre-treatment. Furthermore, the analysis and size characterization of a fluorescent antibody in blood plasma using AF4 are demonstrated. The results show the suitability and strength of AF4 for blood analysis and open new important routes for the analysis and characterization of therapeutic proteins in the blood.
A multilevel analysis of aggressive behaviors among nursing home residents.
Cassie, Kimberly M
2012-01-01
Individual and organizational characteristics associated with aggressive behavior among nursing home residents were examined among a sample of 5,494 residents from 23 facilities using the Minimum Data Set 2.0 and the Organizational Social Context scale. On admission, some individual level variables (age, sex, depression, activities of daily life [ADL] impairments, and cognitive impairments) and no organizational level variables were associated with aggressive behaviors. Over time, aggressive behaviors were linked with some individual characteristics (age, sex, and ADL impairments) and several organizational level variables (stressful climates, less rigid cultures, more resistant cultures, geographic location, facility size and staffing patterns). Findings suggest multi-faceted change strategies are needed.
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher- order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
Probability of detection of defects in coatings with electronic shearography
NASA Astrophysics Data System (ADS)
Maddux, Gary A.; Horton, Charles M.; Lansing, Matthew D.; Gnacek, William J.; Newton, Patrick L.
1994-07-01
The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was to be the paint system currently utilized on the external casings of the NASA Space Transportation System (STS) Revised Solid Rocket Motor (RSRM) boosters. The population of samples was to be large enough to determine the minimum defect size for a 90-percent POD at 95-percent confidence on these coatings. Also, the best methods to excite coatings on aerospace components to induce deformations for measurement by electronic shearography were to be determined.
Probability of detection of defects in coatings with electronic shearography
NASA Technical Reports Server (NTRS)
Maddux, Gary A.; Horton, Charles M.; Lansing, Matthew D.; Gnacek, William J.; Newton, Patrick L.
1994-01-01
The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was to be the paint system currently utilized on the external casings of the NASA Space Transportation System (STS) Revised Solid Rocket Motor (RSRM) boosters. The population of samples was to be large enough to determine the minimum defect size for a 90-percent POD at 95-percent confidence on these coatings. Also, the best methods to excite coatings on aerospace components to induce deformations for measurement by electronic shearography were to be determined.
Probability of detection of defects in coatings with electronic shearography
NASA Technical Reports Server (NTRS)
Russell, S. S.; Lansing, M. D.; Horton, C. M.; Gnacek, W. J.
1995-01-01
The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was to be the paint system currently utilized on the external casings of the NASA space transportation system reusable solid rocket motor boosters. The population of samples was to be large enough to determine the minimum defect size for a 90-percent POD at 95-percent confidence on these coatings. Also, the best methods to excite coatings on aerospace components to induce deformations for measurement by electronic shearography were to be determined.
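A common zero-miss binomial rule for such "90/95" POD demonstrations requires n consecutive detections with pod^n ≤ 1 − confidence; this sketch computes that n (the studies above may well have used a different statistical design, so this is only an illustration of where the sample-size requirement comes from):

```python
import math

def min_hits_for_pod(pod=0.90, confidence=0.95):
    """Smallest n such that n detections in n trials demonstrates the
    stated POD at the stated confidence (zero-miss binomial rule):
    we need pod**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(pod))

print(min_hits_for_pod())  # 29 consecutive detections for 90/95 POD
```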
Electrocontact material based on silver dispersion-strengthened by nickel, titanium, and zinc oxides
NASA Astrophysics Data System (ADS)
Zeer, G. M.; Zelenkova, E. G.; Belousov, O. V.; Beletskii, V. V.; Nikolaev, S. V.; Ledyaeva, O. N.
2017-09-01
Samples of a composite electrocontact material based on silver strengthened by the dispersed phases of zinc and titanium oxides have been investigated by the electron microscopy and energy dispersive X-ray spectroscopy. A uniform distribution of the oxide phases containing 2 wt % zinc oxide in the initial charge has been revealed. The increase in the amount of zinc oxide leads to an increase of the size of the oxide phases. It has been shown that at the zinc oxide content of 2 wt %, the minimum wear is observed in the process of electroerosion tests; at 3 wt %, an overheating and welding of the contacts are observed.
Minimum variance geographic sampling
NASA Technical Reports Server (NTRS)
Terrell, G. R. (Principal Investigator)
1980-01-01
Resource inventories require samples with geographical scatter, sometimes not as widely spaced as would be hoped. A simple model of correlation over distances is used to create a minimum variance unbiased estimate of population means. The fitting procedure is illustrated with data used to estimate Missouri corn acreage.
Determination of the manning coefficient from measured bed roughness in natural channels
Limerinos, John Thomas
1970-01-01
This report presents the results of a study to test the hypothesis that basic values of the Manning roughness coefficient of stream channels may be related to (1) some characteristic size of the streambed particles and to (2) the distribution of particle size. These two elements involving particle size can be combined into a single element by weighting characteristic particle sizes. The investigation was confined to channels with coarse bed material to avoid the complication of bed-form roughness that is associated with alluvial channels composed of fine bed material. Fifty current-meter measurements of discharge and appropriate field surveys were made at 11 sites on California streams for the purpose of computing the roughness coefficient, n, by the Manning formula. The test sites were selected to give a wide range in average size of bed material, and the discharge measurements and surveys were made at such times as to provide data covering a suitable range in stream depth. The sites selected were relatively free of the extraneous flow-retarding effects associated with irregular channel conformation and streambank vegetation. The characteristic bed-particle sizes used in the analyses were the 16-, 50-, and 84-percentile sizes as obtained from a cumulative frequency distribution of the diameters of randomly sampled surficial bed material. Separate distributions were computed for the minimum and intermediate values of the three diameters of a particle. The minimum diameters of the streambed particles were used in the study because a particle at rest on the bed invariably has its minimum diameter in the vertical position; this diameter is, therefore, the most representative measure of roughness height. The intermediate diameter was also studied because this is the diameter most easily measurable, either by sieve analysis or by photographic techniques, and because it is the diameter that had been used in previous studies by other investigators.
No significant difference in reliability was found between the results obtained using minimum diameters and those obtained using intermediate diameters. In analyzing the field data, the roughness parameter, n/R^(1/6) (where R is hydraulic radius), was related to relative smoothness, R/d (where d is a characteristic, or weighted characteristic, particle size). The parameter n/R^(1/6), rather than n, was used because it is directly proportional to the square root of the Darcy-Weisbach friction factor, f, which is more widely used in theoretical studies of hydraulic friction. If the transformation of n/R^(1/6) to √f is made, the relations obtained in this study are of a form that is identical with that of the theoretical friction equation obtained by several investigators and that derived from field data by Leopold and Wolman (1957). The constants in the equation vary, of course, with the characteristic particle size used. The relations best fitting the field data for this study were obtained by using either a characteristic particle diameter equal to the 84-percentile size (d84, the size equal to, or exceeding, that of 84 percent of the streambed particles), or a diameter obtained by weighting three characteristic particle sizes (dw, the size obtained by assigning a weight of 0.1 to d16, a weight of 0.3 to d50, and a weight of 0.6 to d84). The use of d84 alone gave slightly better results than the use of dw, and, in addition, the use of d84 alone is attractive from a standpoint of simplicity. It is difficult, however, to rationalize the use of d84 alone because of the implication that the distribution of sizes is irrelevant, and it matters not at all whether 84 percent of the bed material is sand or whether it is large cobbles, as long as 16 percent of the material is of greater size. Consequently, the author recommends the use of dw rather than d84, although there was no unanimity of opinion on this recommendation among his colleagues who reviewed this paper.
The reader is free to
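The weighted characteristic size recommended in the study (weights 0.1, 0.3, and 0.6 on d16, d50, and d84) and the roughness parameter n/R^(1/6) can be computed directly; the particle sizes below are illustrative assumptions, not values from the report:

```python
def weighted_particle_size(d16, d50, d84):
    """Weighted characteristic particle size d_w using the study's
    recommended weights: 0.1*d16 + 0.3*d50 + 0.6*d84."""
    return 0.1 * d16 + 0.3 * d50 + 0.6 * d84

def roughness_parameter(n, hydraulic_radius):
    """Roughness parameter n / R^(1/6), which is proportional to the
    square root of the Darcy-Weisbach friction factor f."""
    return n / hydraulic_radius ** (1 / 6)

# Illustrative (assumed) percentile sizes in meters
dw = weighted_particle_size(d16=0.05, d50=0.12, d84=0.30)
print(round(dw, 3))  # 0.221
```

Relative smoothness is then R/dw, the abscissa against which n/R^(1/6) was regressed.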
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Pipe sizes and discharge rates for enclosed ventilation... Systems Fixed Carbon Dioxide Fire Extinguishing Systems § 108.437 Pipe sizes and discharge rates for enclosed ventilation systems for rotating electrical equipment. (a) The minimum pipe size for the initial...
NASA Astrophysics Data System (ADS)
Ahmed, Naveed; Alahmari, Abdulrahman M.; Darwish, Saied; Naveed, Madiha
2016-12-01
Micro-channels are considered an integral part of several engineering devices such as micro-channel heat exchangers, micro-coolers, micro-pulsating heat pipes, and micro-channels used in gas turbine blades for aerospace applications. In such applications, a fluid flow is required to pass through certain micro-passages such as micro-grooves and micro-channels. The fluid flow characteristics (flow rate, turbulence, pressure drop, and fluid dynamics) are mainly established by the size and accuracy of the micro-passages; variations (oversizing and undersizing) in a micro-passage's geometry directly affect the fluid flow characteristics. In this study, micro-channels of several sizes were fabricated in a well-known aerospace nickel alloy (Inconel 718) through laser beam micro-milling. The variations in geometrical characteristics of the different-sized micro-channels were studied under the influence of different parameters of an Nd:YAG laser. To minimize variation in the machined geometry of each micro-channel size, multi-objective optimization of the laser parameters was carried out using the response surface methodology approach. The objective was to achieve the targeted top widths and depths of the micro-channels with a minimum degree of taper on the sidewalls. The optimized sets of laser parameters proposed for each micro-channel size can be used to fabricate micro-channels in Inconel 718 with minimal geometrical variation.
Fujiwara, Masami
2007-09-01
Viability status of populations is a commonly used measure for decision-making in the management of populations. One of the challenges faced by managers is the need to consistently allocate management effort among populations. This allocation should in part be based on comparison of extinction risks among populations. Unfortunately, common criteria that use minimum viable population size or count-based population viability analysis (PVA) often do not provide results that are comparable among populations, primarily because they lack consistency in determining population size measures and threshold levels of population size (e.g., minimum viable population size and quasi-extinction threshold). Here I introduce a new index called the "extinction-effective population index," which accounts for differential effects of demographic stochasticity among organisms with different life-history strategies and among individuals in different life stages. This index is expected to become a new way of determining minimum viable population size criteria and also complement the count-based PVA. The index accounts for the difference in life-history strategies of organisms, which are modeled using matrix population models. The extinction-effective population index, sensitivity, and elasticity are demonstrated in three species of Pacific salmonids. The interpretation of the index is also provided by comparing them with existing demographic indices. Finally, a measure of life-history-specific effect of demographic stochasticity is derived.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared the modified Newton method (NW), the expectation-maximization algorithm (EM), and minimum Cramér-von Mises distance (MD), used to estimate the parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
AN EXPERIMENTAL ASSESSMENT OF MINIMUM MAPPING UNIT SIZE
Land-cover (LC) maps derived from remotely sensed data are often presented using a minimum mapping unit (MMU). The choice of a MMU that is appropriate for the projected use of a classification is important. The objective of this experiment was to determine the optimal MMU of a L...
Minimum-Impact Camping in the Front Woods.
ERIC Educational Resources Information Center
Schatz, Curt
1994-01-01
Minimum-impact camping techniques that can be applied to resident camp programs include controlling group size and behavior, designing camp sites, moving groups frequently, proper use of fires, proper disposal of food and human wastes, use of biodegradable soaps, and encouraging staff and camper awareness of impacts on the environment. (LP)
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
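The MSSN idea can be illustrated with the generic normal-approximation sample-size formula. This shows only the basic shape of such a calculation; the paper's MSSN formulas for the genotype test and the LTT under a pleiotropic model are substantially more involved and are not reproduced here.

```python
import math
from statistics import NormalDist

def min_sample_size(effect_size, alpha=0.05, power=0.80):
    """Generic normal-approximation minimum sample size:
    N >= ((z_{1-alpha/2} + z_{power}) / d)**2 for a standardized effect d.

    A sketch of the shape of an MSSN calculation, not the paper's formulas.
    """
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1.0 - alpha / 2.0)  # two-sided critical value
    z_beta = nd.inv_cdf(power)               # quantile for the target power
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)
```

As the paper's findings illustrate for thresholded phenotypes, smaller effective effect sizes (e.g., from more inclusive thresholds) drive the required sample size up sharply, since N grows as the inverse square of the effect.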
On the Relationship Between Spotless Days and the Sunspot Cycle: A Supplement
NASA Technical Reports Server (NTRS)
Wilson, Robert M.; Hathaway, David H.
2006-01-01
This study provides supplemental material to an earlier study concerning the relationship between spotless days and the sunspot cycle. Our previous study, Technical Publication (TP)-2005-213608, determined the timing and size of sunspot minimum and maximum for the new sunspot cycle, relative to the occurrence of the first spotless day during the declining phase of the old sunspot cycle and the last spotless day during the rising portion of the new cycle. Because the number of spotless days (NSD) rapidly increases as the cycle nears sunspot minimum and rapidly decreases thereafter, the size and timing of sunspot minimum and maximum might be more accurately determined using a higher threshold for comparison, rather than using the first and last spotless day occurrences. It is this aspect that is investigated more thoroughly in this TP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, T.A.
1990-01-01
A study undertaken on an Eocene age coal bed in southeast Kalimantan, Indonesia determined that there was a relationship between megascopically determined coal types and kinds and sizes of organic components. The study also concluded that the most efficient way to characterize the seam was from collection of two 3 cm blocks from each layer or bench defined by megascopic character and that a maximum of 125 point counts was needed on each block. Microscopic examination of uncrushed block samples showed the coal to be composed of plant parts and tissues set in a matrix of both fine-grained and amorphous material. The particulate matrix is composed of cell wall and liptinite fragments, resins, spores, algae, and fungal material. The amorphous matrix consists of unstructured (at 400x) huminite and liptinite. Size measurements showed that each particulate component possessed its own size distribution which approached normality when transformed to a log{sub 2} or phi scale. Degradation of the plant material during peat accumulation probably controlled grain size in the coal types. This notion is further supported by the increased concentration of decay resistant resin and cell fillings in the nonbanded and dull coal types. In the sampling design experiment, two blocks from each layer and two layers from each coal type were collected. On each block, 2 to 4 traverses totaling 500 point counts per block were performed to test the minimum number of points needed to characterize a block. A hierarchical analysis of variance showed that most of the petrographic variation occurred between coal types. The results from these analyses also indicated that, within a coal type, sampling should concentrate on the layer level and that only 250 point counts, split between two blocks, were needed to characterize a layer.
Statistical considerations for grain-size analyses of tills
Jacobs, A.M.
1971-01-01
Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. ?? 1971 Plenum Publishing Corporation.
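Because the critical segment of the hyperbola is nearly flat, the test reduces to comparing the subsample mean against the standard-population mean. A simplified sketch of such a comparison follows; using the standard population's sd directly and a small table of critical t values is an assumption for illustration, not the article's exact equation.

```python
import math

# Two-sided critical t values at alpha = 0.05, indexed by degrees of freedom.
T_CRIT_05 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
             6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def subsamples_consistent(subsample_values, pop_mean, pop_sd):
    """Accept a new sample if its subsample mean lies within a t-based band
    around the standard-population mean.

    This is a simplification of the article's graphical test (which plots
    subsample sd vs. difference in means against a hyperbola); the standard
    population's sd is used in place of one estimated from the subsamples.
    """
    k = len(subsample_values)
    mean = sum(subsample_values) / k
    half_width = T_CRIT_05[k - 1] * pop_sd / math.sqrt(k)
    return abs(mean - pop_mean) <= half_width
```

For example, four subsamples of a sand percentage averaging 40.5% would be accepted against a standard population with mean 40% and sd 3%, while subsamples averaging 50.5% would be flagged for re-measurement.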
Collector Size or Range Independence of SNR in Fixed-Focus Remote Raman Spectrometry.
Hirschfeld, T
1974-07-01
When sensitivity allows, remote Raman spectrometers can be operated at a fixed focus with purely electronic (easily multiplexable) range gating. To keep the background small, the system etendue must be minimized. For a maximum range larger than the hyperfocal one, this is done by focusing the system at roughly twice the minimum range at which etendue matching is still required. Under these conditions the etendue varies as the fourth power of the collector diameter, causing the background shot noise to vary as its square. As the signal also varies with the same power, and background noise is usually limiting in this type of instrument, the SNR becomes independent of the collector size. Below this minimum etendue-matched range, the transmission at the limiting aperture grows with the square of the range, canceling the inverse-square loss of signal with range. The SNR is thus range-independent below the minimum etendue-matched range and collector-size-independent above it, with the location of the transition being determined by the system etendue and collector diameter. The range of validity of these outrageous statements is discussed.
Practical aspects of photovoltaic technology, applications and cost (revised)
NASA Technical Reports Server (NTRS)
Rosenblum, L.
1985-01-01
The purpose of this text is to provide the reader with the background, understanding, and computational tools needed to master the practical aspects of photovoltaic (PV) technology, application, and cost. The focus is on stand-alone, silicon solar cell, flat-plate systems in the range of 1 to 25 kWh/day output. Technology topics covered include operation and performance of each of the major system components (e.g., modules, array, battery, regulators, controls, and instrumentation), safety, installation, operation and maintenance, and electrical loads. Application experience and trends are presented. Indices of electrical service performance (reliability, availability, and voltage control) are discussed, and the known service performance of central-station electric grid, diesel-generator, and PV stand-alone systems is compared. PV system sizing methods are reviewed and compared, and a procedure for rapid sizing is described and illustrated through several sample cases. The rapid sizing procedure yields an array and battery size corresponding to a minimum-cost system for a given load requirement, insolation condition, and desired level of service performance. PV system capital cost and levelized energy cost are derived as functions of service performance and insolation. Estimates of future trends in PV system costs are made.
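The report's rapid sizing procedure is not detailed in this abstract, but a back-of-the-envelope version of stand-alone PV array and battery sizing can be sketched as follows. The derate factor, autonomy days, and depth-of-discharge figures are illustrative assumptions, not values from the text, and the actual procedure couples the sizes to a service-performance target.

```python
def rapid_pv_size(load_kwh_day, insolation_kwh_m2_day, system_eff=0.7,
                  autonomy_days=3, depth_of_discharge=0.6):
    """Back-of-the-envelope stand-alone PV sizing (illustrative sketch).

    Returns (array_kw_peak, battery_kwh). The efficiency, autonomy, and
    depth-of-discharge defaults are assumptions for illustration only.
    """
    # Array rating: daily load / (peak-sun-hours x system derate).
    array_kwp = load_kwh_day / (insolation_kwh_m2_day * system_eff)
    # Battery: ride through `autonomy_days` sunless days, limited by the
    # allowable depth of discharge.
    battery_kwh = load_kwh_day * autonomy_days / depth_of_discharge
    return array_kwp, battery_kwh
```

For a 10 kWh/day load under 5 peak sun hours, this yields roughly a 2.9 kWp array and a 50 kWh battery bank.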
Paula Menéndez, Lumila
2018-02-01
The aim of this study is to analyze the association between cranial variation and climate in order to discuss their role during the diversification of southern South American populations. The specific objectives are: (1) to explore the spatial pattern of cranial variation with regard to the climatic diversity of the region, and (2) to evaluate the differential impact that climatic factors may have had on the shape and size of the cranial structures studied. The variation in shape and size of 361 crania was studied by registering 62 3D landmarks that capture shape and size variation in the face, cranial vault, and base. Mean, minimum, and maximum annual temperature, mean annual precipitation, diet, and altitude were matched for each population sample. A PCA and spatial statistical techniques, including kriging, regression, and multimodel inference, were employed. Facial skeleton size presents a latitudinal pattern that is partially associated with temperature diversity. Diet and altitude are the variables that mainly explain skull shape variation, although mean annual temperature also plays a role. The association between climatic factors and cranial variation is low to moderate: mean annual temperature explains almost 40% of the shape variation of the entire skull, facial skeleton, and cranial vault, while annual precipitation and minimum annual temperature contribute to the morphological variation only when considered together with maximum annual temperature. The cranial base is the structure least associated with climatic diversity. These results suggest that climatic factors may have had a partial impact on facial and vault shape, and therefore contributed moderately to the diversification of southern South American populations, while diet and altitude might have had a stronger impact. Cranial variation in the southern cone has thus been shaped by both random and nonrandom factors. In particular, the influence of climate on skull shape has probably been the result of directional selection. This study supports the view that, although the cranial vault is the cranial structure most closely associated with mean annual temperature, the impact of the climate signature on morphology decreases when populations from extreme cold environments are excluded from the analysis. Additionally, it shows that the extent of the geographical scales analyzed, as well as differential sampling, may lead to different results regarding the role of ecological factors and evolutionary processes on cranial morphology. © 2017 Wiley Periodicals, Inc.
Code of Federal Regulations, 2014 CFR
2014-10-01
... intact. (a) Yellowtail snapper. The minimum size limit for yellowtail snapper is 12 inches (30.5 cm), TL... inches (20.3 cm), fork length. [78 FR 22952, Apr. 17, 2013, as amended at 78 FR 45896, July 30, 2013] ...
Code of Federal Regulations, 2013 CFR
2013-10-01
... intact. (a) Yellowtail snapper. The minimum size limit for yellowtail snapper is 12 inches (30.5 cm), TL... inches (20.3 cm), fork length. [78 FR 22952, Apr. 17, 2013, as amended at 78 FR 45896, July 30, 2013] ...
A cavitation transition in the energy landscape of simple cohesive liquids and glasses
NASA Astrophysics Data System (ADS)
Altabet, Y. Elia; Stillinger, Frank H.; Debenedetti, Pablo G.
2016-12-01
In particle systems with cohesive interactions, the pressure-density relationship of the mechanically stable inherent structures sampled along a liquid isotherm (i.e., the equation of state of an energy landscape) will display a minimum at the Sastry density ρS. The tensile limit at ρS is due to cavitation that occurs upon energy minimization, and previous characterizations of this behavior suggested that ρS is a spinodal-like limit that separates all homogeneous and fractured inherent structures. Here, we revisit the phenomenology of Sastry behavior and find that it is subject to considerable finite-size effects, and the development of the inherent structure equation of state with system size is consistent with the finite-size rounding of an athermal phase transition. What appears to be a continuous spinodal-like point at finite system sizes becomes discontinuous in the thermodynamic limit, indicating behavior akin to a phase transition. We also study cavitation in glassy packings subjected to athermal expansion. Many individual expansion trajectories averaged together produce a smooth equation of state, which we find also exhibits features of finite-size rounding, and the examples studied in this work give rise to a larger limiting tension than for the corresponding landscape equation of state.
NASA Astrophysics Data System (ADS)
Underhill, P. R.; Krause, T. W.
2017-02-01
Recent work has shown that the detectability of corner cracks in bolt-holes is compromised when rounding of the corners arises, as might occur during bolt-hole removal. Probability of Detection (POD) studies normally require a large number of samples of both fatigue cracks and electric-discharge-machined notches. In the particular instance of rounded bolt-hole corners, the generation of such a large set of samples representing the full spectrum of potential rounding would be prohibitive. In this paper, Finite Element Method (FEM) modeling is used to supplement the study of the detection of cracks forming at the rounded corners of bolt-holes. FEM models show that rounding of the corner of the bolt-hole reduces the size of the response to a corner crack to a greater extent than can be accounted for by loss of crack area. This reduced sensitivity can be ascribed to a lower concentration of eddy currents at the rounded corner surface and greater lift-off of the pick-up coils relative to that of a straight-edge corner. A rounding radius of 0.4 mm (0.016 inch) showed a 20% reduction in the strength of the crack signal. Assuming linearity of the crack signal with crack size, this would suggest an increase in the minimum detectable size by 25%.
New polymorphs of 9-nitro-camptothecin prepared using a supercritical anti-solvent process.
Huang, Yinxia; Wang, Hongdi; Liu, Guijin; Jiang, Yanbin
2015-12-30
Recrystallization and micronization of 9-nitro-camptothecin (9-NC) were investigated using supercritical anti-solvent (SAS) technology in this study. Five operating factors, i.e., the type of organic solvent, the concentration of 9-NC in the solution, the flow rate of the 9-NC solution, the precipitation pressure, and the temperature, were optimized using a selected OA16(4^5) orthogonal array design, and a series of characterizations was performed for all samples. The results showed that the processed 9-NC particles exhibited smaller particle size and narrower particle size distribution compared with the 9-NC raw material (Form I), and the optimum micronization conditions for preparing 9-NC with minimum particle size were determined by variance analysis, in which the solvent plays the most important role in the formation and transformation of polymorphs. Three new polymorphic forms (Forms II, III and IV) of 9-NC, which present different physicochemical properties, were generated after the SAS process. The structures of the 9-NC crystals, which were consistent with the experiments, were predicted from the experimental XRD data by the direct-space approach using the Reflex module of Materials Studio. Meanwhile, the optimal sample (Form III) proved to have higher cytotoxicity against the cancer cells, suggesting that the therapeutic efficacy of 9-NC is polymorph-dependent. Copyright © 2015 Elsevier B.V. All rights reserved.
G-NEST: A gene neighborhood scoring tool to identify co-conserved, co-expressed genes
USDA-ARS?s Scientific Manuscript database
In previous studies, gene neighborhoods--spatial clusters of co-expressed genes in the genome--have been defined using arbitrary rules such as requiring adjacency, a minimum number of genes, a fixed window size, or a minimum expression level. In the current study, we developed a Gene Neighborhood Sc...
Characterization of silicon-gate CMOS/SOS integrated circuits processed with ion implantation
NASA Technical Reports Server (NTRS)
Woo, D. S.
1982-01-01
The procedures used to generate MEBES masks and produce test wafers from the 10X Mann 1600 Pattern Generator Tape, using existing CAD utility programs and the MEBES machine in the RCA Solid State Technology Center, are described. The test vehicle used is the MSFC-designed SC102 Solar House Timing Circuit. When transforming the Mann 1600 tapes into MEBES tapes, extreme care is required to obtain accurate minimum linewidths when working with two different coding systems, because the minimum grid sizes may differ between the two systems: 0.025 mil for the MSFC Mann 1600 and 0.02 mil for MEBES. Some snapping to the next grid is therefore inevitable, and the results of this snapping effect are significant when submicron lines are present. However, no problem was noticed in the SC102 circuit because its minimum linewidth is 0.3 mil (7.6 microns). MEBES masks were fabricated and wafers were processed using the silicon-gate CMOS/SOS and aluminum-gate COS/MOS processes.
NASA Astrophysics Data System (ADS)
Alakent, Burak; Camurdan, Mehmet C.; Doruker, Pemra
2005-10-01
Time series models, which are constructed from the projections of the molecular-dynamics (MD) runs on principal components (modes), are used to mimic the dynamics of two proteins: tendamistat and immunity protein of colicin E7 (ImmE7). Four independent MD runs of tendamistat and three independent runs of ImmE7 protein in vacuum are used to investigate the energy landscapes of these proteins. It is found that mean-square displacements of residues along the modes in different time scales can be mimicked by time series models, which are utilized in dividing protein dynamics into different regimes with respect to the dominating motion type. The first two regimes constitute the dominance of intraminimum motions during the first 5 ps and the random walk motion in a hierarchically higher-level energy minimum, which comprise the initial time period of the trajectories up to 20-40 ps for tendamistat and 80-120 ps for ImmE7. These are also the time ranges within which the linear nonstationary time series are completely satisfactory in explaining protein dynamics. Encountering energy barriers enclosing higher-level energy minima constrains the random walk motion of the proteins, and pseudorelaxation processes at different levels of minima are detected in tendamistat, depending on the sampling window size. Correlation (relaxation) times of 30-40 ps and 150-200 ps are detected for two energy envelopes of successive levels for tendamistat, which gives an overall idea about the hierarchical structure of the energy landscape. However, it should be stressed that correlation times of the modes are highly variable with respect to conformational subspaces and sampling window sizes, indicating the absence of an actual relaxation.
The random-walk step sizes and the time length of the second regime are used to illuminate an important difference between the dynamics of the two proteins, which cannot be clarified by the investigation of relaxation times alone: ImmE7 has lower-energy barriers enclosing the higher-level energy minimum, preventing the protein to relax and letting it move in a random-walk fashion for a longer period of time.
Campos, C P; Freitas, C E C
2014-02-01
We evaluated the stock of peacock bass Cichla monoculus caught by a small-scale fishing fleet in Lago Grande at Manacapuru. The database was constructed from monthly samplings of 200 fish between February 2007 and January 2008. We measured the total length (cm) and total weight (g) of each fish. We employed previously estimated growth parameters to run a yield-per-recruit model and analyse scenarios varying the age at first catch (Tc), natural mortality (M), and fishing mortality (F). Our model indicated overfishing, because the fishing effort applied to catch peacock bass in Lago Grande at Manacapuru is greater than that associated with the maximum sustainable yield. In addition, the actual size at first catch is almost half of the estimated value. Although there are difficulties in enforcing a minimum catch size, our results show that an increase in the size at first catch to at least 25 cm would be a good strategy for management of this fishery.
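A yield-per-recruit analysis of the kind described can be sketched with the classical Beverton-Holt formulation. The growth and mortality defaults below are illustrative, not the study's estimates for Cichla monoculus, and the infinite-maximum-age form is used for simplicity.

```python
import math

def yield_per_recruit(F, M, tc, tr=0.0, Winf=1.0, K=0.3, t0=-0.1):
    """Classical Beverton-Holt yield per recruit (infinite max-age form).

    F, M        : fishing and natural mortality rates (per year)
    tc, tr      : age at first capture and age at recruitment (years)
    Winf, K, t0 : von Bertalanffy growth parameters; the defaults here are
                  illustrative assumptions, not the study's estimates.
    """
    total = 0.0
    # Expanding (1 - exp(-K(t - t0)))**3 gives coefficients 1, -3, 3, -1.
    for n, omega in enumerate((1.0, -3.0, 3.0, -1.0)):
        total += omega * math.exp(-n * K * (tc - t0)) / (F + M + n * K)
    return F * math.exp(-M * (tc - tr)) * Winf * total
```

With the illustrative parameters here, raising the age (and hence size) at first capture under heavy fishing pressure increases the yield per recruit, which is the logic behind the study's recommendation to increase the minimum size of the first catch.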
Hwang, Dong-Chir; Li, He-Yi; Tsai, Chieh-Fu; Chen, Chun-Wan; Chen, Jen-Kun
2016-01-01
This study was conducted to investigate the protection provided by disposable filtering half-facepiece respirators of different grades against particles between 0.093 and 1.61 μm. A personal sampling system was used to assess the protection of the respirators in a particle-size-selective manner. The results show that about 10.9% of FFP2 respirators and 28.2% of FFP3 respirators demonstrated assigned protection factors (APFs) below 10 and 20, respectively, the levels assigned for these respirators by the British Standard. On average, the protection factors of FFP respirators were 11.5 to 15.9 times greater than those of surgical masks. The minimum protection factors (PFs) were observed for particles between 0.263 and 0.384 μm. No significant difference in PF results was found among FFP respirator categories or particle sizes. A strong association between fit factors and protection factors was found. The study indicates that FFP respirators may not achieve the expected protection level and that the APFs may need to be revised for these classes of respirators. PMID:27195721
Warach, Steven; Al-Rawi, Yasir; Furlan, Anthony J; Fiebach, Jochen B; Wintermark, Max; Lindstén, Annika; Smyej, Jamal; Bharucha, David B; Pedraza, Salvador; Rowley, Howard A
2012-09-01
The DIAS-2 study was the only large, randomized, intravenous thrombolytic trial that selected patients based on the presence of an ischemic penumbra. However, DIAS-2 did not confirm the positive findings of the smaller DEDAS and DIAS trials, which also used penumbral selection. Therefore, a reevaluation of the penumbral selection strategy is warranted. In post hoc analyses we assessed the relationships of magnetic resonance imaging-measured lesion volumes with clinical measures in DIAS-2, and the relationships of the presence and size of the diffusion-perfusion mismatch with the clinical effect of desmoteplase in DIAS-2 and in pooled data from DIAS, DEDAS, and DIAS-2. In DIAS-2, lesion volumes correlated with the National Institutes of Health Stroke Scale (NIHSS) score at both baseline and final time points (P<0.0001), and lesion growth was inversely related to good clinical outcome (P=0.004). In the pooled analysis, desmoteplase was associated with a 47% clinical response rate (n=143) vs 34% with placebo (n=73; P=0.08). For both the pooled sample and DIAS-2, increasing the minimum baseline mismatch volume (MMV) for inclusion increased the desmoteplase effect size. The odds ratio for good clinical response between desmoteplase and placebo treatment was 2.83 (95% confidence interval, 1.16-6.94; P=0.023) for MMV >60 mL. Increasing the minimum NIHSS score for inclusion did not affect treatment effect size. Pooled across all desmoteplase trials, desmoteplase appears beneficial in patients with large MMV and ineffective in patients with small MMV. These results support a modified diffusion-perfusion mismatch hypothesis for patient selection in later time-window thrombolytic trials. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifiers: NCT00638781, NCT00638248, NCT00111852.
Grillo, Federica; Valle, Luca; Ferone, Diego; Albertelli, Manuela; Brisigotti, Maria Pia; Cittadini, Giuseppe; Vanoli, Alessandro; Fiocca, Roberto; Mastracci, Luca
2017-09-01
Ki-67 heterogeneity can impact gastroenteropancreatic neuroendocrine tumor grade assignment, especially when tissue is scarce. This work is aimed at devising adequacy criteria for grade assessment in biopsy specimens. To analyze the impact of biopsy size on reliability, 360 virtual biopsies of different thicknesses and lengths were constructed. Furthermore, to estimate the mean amount of non-neoplastic tissue present in biopsies, 28 real biopsies were collected, the non-neoplastic components (fibrosis and inflammation) quantified, and the effective area of neoplastic tissue calculated for each biopsy. Heterogeneity of Ki-67 distribution, G2 tumors, and biopsy size all play an important role in reducing the reliability of biopsy samples in Ki-67-based grade assignment. In particular, in G2 cases, 59.9% of virtual biopsies downgraded the tumor, and the smaller the biopsy, the more frequently downgrading occurred. In real biopsies the presence of non-neoplastic tissue reduced the available total area by a mean of 20%. By coupling the results from these two different approaches, we show that both biopsy size and the non-neoplastic component must be taken into account for biopsy adequacy. In particular, we can speculate that if the minimum biopsy area necessary to confidently (80% concordance) grade gastroenteropancreatic neuroendocrine tumors on virtual biopsies ranges between 15 and 30 mm², and if real biopsies are on average composed of only 80% neoplastic tissue, then biopsies with a surface area of no less than 12 mm² should be obtained; using 18G needles, this corresponds to a minimum total length of 15 mm.
Beneficiation studies on the Hasan Celebi magnetite deposit, Turkey
Pressler, Jean W.; Akar, Ali
1972-01-01
Bench-scale and semicontinuous tests were performed on surface, trench, and diamond drill core samples from the Hasan Celebi low-grade magnetite deposit to determine the optimum beneficiation procedures utilizing wet magnetic separation techniques. Composite core samples typically contain about 27 percent recoverable magnetite and require crushing and grinding through 1 mm in size to insure satisfactory separation of the gangue from the magnetite. Regrinding and cleaning the magnetite concentrate to 80 percent minus 150-mesh is necessary to obtain an optimum of 66 percent iron. Semicontinuous pilot-plant testing with the wet magnetic drum using the recycled middling technique indicates that as much as 83 percent of the acid-soluble iron can be recovered into a concentrate containing 66 percent iron, with minimum deleterious elements. This represents 27 weight percent of the original ore. Further tests will continue when the Maden Tetkik ve Arama Enstitusu (MTA) receives 24 tons of bulk sample from an exploratory drift and cross-cut now being driven through a section of the major reserve area.
The Bose-Einstein correlations in CDFII experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lovás, Lubomír
We present the results of a study of p$\bar{p}$ collisions at √s = 1.96 TeV collected by the CDF-II experiment at the Tevatron collider. The Bose-Einstein correlations of the π±π± two-boson system have been studied in minimum-bias high-multiplicity events. The analysis was carried out on a sample of 173,761 events. The two-pion correlations have been retrieved, and the final results were corrected for Coulomb interactions. Two different reference samples were compared and discussed. A significant two-pion correlation enhancement near the origin is observed. This enhancement has been used to evaluate the radius of the two-pion emitter source. We have used the TOF detector to distinguish between π and K mesons, and the C2(Q) function parameters have also been retrieved for the sample containing only tagged π mesons. A comparison between four different parametrizations, based on two different theoretical approaches to the C2(Q) function, is given.
Hayabusa2 Sampler: Collection of Asteroidal Surface Material
NASA Astrophysics Data System (ADS)
Sawada, Hirotaka; Okazaki, Ryuji; Tachibana, Shogo; Sakamoto, Kanako; Takano, Yoshinori; Okamoto, Chisato; Yano, Hajime; Miura, Yayoi; Abe, Masanao; Hasegawa, Sunao; Noguchi, Takaaki
2017-07-01
Japan Aerospace Exploration Agency (JAXA) launched the asteroid exploration probe "Hayabusa2" on December 3rd, 2014, following the first Hayabusa mission. With technological and scientific improvements over the Hayabusa probe, the mission will visit the C-type asteroid 162173 Ryugu (1999 JU3) and sample its surface materials, which are likely to differ from those of the S-type asteroid Itokawa and to contain more pristine materials, including organic matter and/or hydrated minerals. We developed the Hayabusa2 sampler to collect a minimum of 100 mg of surface samples, including several mm-sized particles, at three surface locations without severe terrestrial contamination. The basic configuration of the sampler design is largely the same as that of the first Hayabusa (Yano et al. in Science, 312(5778):1350-1353, 2006), with several minor but important modifications, based on lessons learned from Hayabusa, to fulfill the scientific requirements and raise the scientific value of the returned samples.
Kobayashi, Chigusa; Jung, Jaewoon; Matsunaga, Yasuhiro; Mori, Takaharu; Ando, Tadashi; Tamura, Koichi; Kamiya, Motoshi; Sugita, Yuji
2017-09-30
GENeralized-Ensemble SImulation System (GENESIS) is a software package for molecular dynamics (MD) simulation of biological systems. It is designed to overcome limitations in system size and accessible time scale by adopting highly parallelized schemes and enhanced conformational sampling algorithms. In this new version, GENESIS 1.1, new functions and advanced algorithms have been added. The all-atom and coarse-grained potential energy functions used in the AMBER and GROMACS packages are now available in addition to the CHARMM energy functions. The performance of MD simulations has been greatly improved by further optimization, multiple-time-step integration, and hybrid (CPU + GPU) computing. The string method and replica-exchange umbrella sampling with flexible collective-variable choice are used for finding the minimum free-energy pathway and obtaining free-energy profiles for conformational changes of a macromolecule. These new features increase the usefulness and power of GENESIS for modeling and simulation in biological research. © 2017 Wiley Periodicals, Inc.
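The enhanced-sampling methods mentioned in the abstract build on the replica-exchange idea: neighboring replicas periodically attempt to swap states under a Metropolis criterion. The sketch below shows the generic temperature replica-exchange acceptance test only, not GENESIS's implementation:

```python
import math
import random

def exchange_accept(beta_i, beta_j, E_i, E_j, rng=random.random):
    # Metropolis criterion for swapping two neighboring replicas in a
    # temperature replica-exchange MD run: accept the swap with probability
    # min(1, exp((beta_i - beta_j) * (E_i - E_j))), where beta = 1/(kB*T).
    # Generic textbook form, assumed here for illustration.
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)
```

Swaps that move the lower-energy configuration to the colder replica are always accepted; the reverse moves are accepted stochastically, which lets trapped conformations escape via the hotter replicas.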
The distribution of galaxies within the 'Great Wall'
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1992-01-01
The galaxy distribution within the 'Great Wall', the most striking feature in the first three 'slices' of the CfA redshift survey extension, is examined. The Great Wall is extracted from the sample and analyzed by counting galaxies in cells. The 'local' two-point correlation function within the Great Wall is computed, and the local correlation length is estimated at 15/h Mpc, about 3 times larger than the correlation length for the entire sample. The redshift distribution of galaxies in the pencil-beam survey by Broadhurst et al. (1990) shows peaks separated by large 'voids', at least out to a redshift of about 0.3. The peaks might represent the intersections of their roughly 5/h Mpc pencil beams with structures similar to the Great Wall. Under this hypothesis, sampling of the Great Wall shows that l ≈ 12/h Mpc is the minimum projected beam size required to detect all the 'walls' at redshifts between the peak of the selection function and the effective depth of the survey.
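The correlation length quoted in the abstract is defined through the standard power-law form of the two-point correlation function, ξ(r) = (r/r0)^(-γ), where ξ(r0) = 1 by construction. A minimal sketch, using the quoted r0 = 15/h Mpc and a conventional slope γ = 1.8 (the slope is an assumption for illustration, not a value from the paper):

```python
def xi(r, r0=15.0, gamma=1.8):
    # Power-law two-point correlation function xi(r) = (r / r0) ** (-gamma).
    # r and r0 in units of Mpc/h; r0 = 15 is the local correlation length
    # quoted for the Great Wall, gamma = 1.8 is an assumed conventional slope.
    return (r / r0) ** (-gamma)

# xi(r0) = 1 by definition: pairs at the correlation length are found twice
# as often as in a random (unclustered) distribution; xi > 1 below r0.
```

A correlation length 3 times that of the full sample thus means galaxy pairs inside the Great Wall remain strongly clustered out to much larger separations than in the survey as a whole.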
Aseptic minimum volume vitrification technique for porcine parthenogenetically activated blastocyst.
Lin, Lin; Yu, Yutao; Zhang, Xiuqing; Yang, Huanming; Bolund, Lars; Callesen, Henrik; Vajta, Gábor
2011-01-01
Minimum volume vitrification may provide extremely high cooling and warming rates if the sample and the surrounding medium contact directly with the liquid nitrogen and the warming medium, respectively. However, this direct contact may result in microbial contamination. In this work, an earlier aseptic technique was applied to minimum volume vitrification. After equilibration, samples were loaded on a plastic film, immersed rapidly into factory-derived, filter-sterilized liquid nitrogen, and sealed into sterile, pre-cooled straws. At warming, the straw was cut, the filmstrip was immersed into a 39 °C warming medium, and the sample was stepwise rehydrated. Cryosurvival rates of porcine blastocysts produced by parthenogenetic activation did not differ from those of control blastocysts vitrified with the Cryotop. This approach can be used for minimum volume vitrification methods and may be suitable to overcome the biological dangers and legal restrictions that hamper the application of open vitrification techniques.
Effect of sample preparation method on quantification of polymorphs using PXRD.
Alam, Shahnwaz; Patel, Sarsvatkumar; Bansal, Arvind Kumar
2010-01-01
The purpose of this study was to improve the sensitivity and accuracy of quantitative analysis of polymorphic mixtures. Various techniques, such as hand grinding and mixing (in a mortar and pestle), air-jet milling, and ball milling, were used for particle micronization and mixing to prepare binary mixtures. Using these techniques, mixtures of form I and form II of clopidogrel bisulphate were prepared in various proportions, from 0 to 5% w/w of form I in form II, and subjected to X-ray powder diffraction analysis. To obtain good resolution in minimum time, step time and step size were varied to optimize the scan rate. Among the six combinations, a step size of 0.05 degrees with a step time of 5 s identified the maximum number of characteristic peaks of form I in form II. Data obtained from samples prepared by grinding and mixing in a ball mill showed good analytical sensitivity and accuracy compared to the other methods. The powder X-ray diffraction method was reproducible and precise, with an LOD of 0.29% and an LOQ of 0.91%. Validation results showed excellent correlation between actual and predicted concentrations, with R² > 0.9999.
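The LOD and LOQ figures in such PXRD validations are conventionally derived from the calibration line via LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the standard deviation of the response and S the calibration slope. A minimal sketch of that calculation (the data values are hypothetical, not from this study):

```python
import statistics

def calibration_slope(x, y):
    # Ordinary least-squares slope S of the calibration line y = a + S*x,
    # e.g. peak intensity y versus % w/w of form I in form II.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def lod_loq(sigma, slope):
    # Conventional detection/quantification limits from calibration data:
    # LOD = 3.3 * sigma / S, LOQ = 10 * sigma / S.
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical calibration: concentrations (% w/w) vs. normalized intensity.
conc = [0.0, 1.0, 2.0, 3.0]
resp = [0.0, 2.0, 4.0, 6.0]
S = calibration_slope(conc, resp)
lod, loq = lod_loq(sigma=0.2, slope=S)
```

With a steeper calibration slope or a smaller response scatter, both limits tighten, which is why the milling method that maximized peak resolution also gave the best sensitivity.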
Aad, G; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Abreu, R; Abulaiti, Y; Acharya, B S; Adamczyk, L; Adams, D L; Adelman, J; Adomeit, S; Adye, T; Agatonovic-Jovin, T; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahmadov, F; Aielli, G; Akerstedt, H; Åkesson, T P A; Akimoto, G; Akimov, A V; Alberghi, G L; Albert, J; Albrand, S; Alconada Verzini, M J; Aleksa, M; Aleksandrov, I N; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Alimonti, G; Alio, L; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Almond, J; Aloisio, A; Alonso, A; Alonso, F; Alpigiani, C; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amaral Coutinho, Y; Amelung, C; Amidei, D; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Amundsen, G; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Angelozzi, I; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A V; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Araque, J P; Arce, A T H; Arguin, J-F; Argyropoulos, S; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Arnold, H; Arratia, M; Arslan, O; Artamonov, A; Artoni, G; Asai, S; Asbah, N; Ashkenazi, A; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Atkinson, M; Atlay, N B; Auerbach, B; Augsten, K; Aurousseau, M; Avolio, G; Azuelos, G; Azuma, Y; Baak, M A; Baas, A E; Bacci, C; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagiacchi, P; Bagnaia, P; Bai, Y; Bain, T; Baines, J T; Baker, O K; Balek, P; Balli, F; Banas, E; Banerjee, Sw; Bannoura, A A E; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barberio, E L; Barberis, D; Barbero, M; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Barnovska, Z; Baroncelli, A; Barone, G; Barr, A J; 
Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartos, P; Bartsch, V; Bassalat, A; Basye, A; Bates, R L; Batley, J R; Battaglia, M; Battistin, M; Bauer, F; Bawa, H S; Beattie, M D; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becot, C; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behr, J K; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellerive, A; Bellomo, M; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Beringer, J; Bernard, C; Bernat, P; Bernius, C; Bernlochner, F U; Berry, T; Berta, P; Bertella, C; Bertoli, G; Bertolucci, F; Bertsche, C; Bertsche, D; Besana, M I; Besjes, G J; Bessidskaia Bylund, O; Bessner, M; Besson, N; Betancourt, C; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilbao De Mendizabal, J; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Black, C W; Black, J E; Black, K M; Blackburn, D; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Bock, C; Boddy, C R; Boehler, M; Boek, T T; Bogaerts, J A; Bogdanchikov, A G; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Boldyrev, A S; Bomben, M; Bona, M; Boonekamp, M; Borisov, A; Borissov, G; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Boudreau, J; Bouffard, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozic, I; Bracinik, J; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; 
Brendlinger, K; Brennan, A J; Brenner, R; Bressler, S; Bristow, K; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brosamer, J; Brost, E; Brown, J; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Buehrer, F; Bugge, L; Bugge, M K; Bulekov, O; Bundock, A C; Burckhart, H; Burdin, S; Burghgrave, B; Burke, S; Burmeister, I; Busato, E; Büscher, D; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Butt, A I; Buttar, C M; Butterworth, J M; Butti, P; Buttinger, W; Buzatu, A; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calandri, A; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Calvet, D; Calvet, S; Camacho Toro, R; Camarda, S; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Campoverde, A; Canale, V; Canepa, A; Cano Bret, M; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Casolino, M; Castaneda-Miranda, E; Castelli, A; Castillo Gimenez, V; Castro, N F; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caudron, J; Cavaliere, V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerio, B C; Cerny, K; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cerv, M; Cervelli, A; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chang, P; Chapleau, B; Chapman, J D; Charfeddine, D; Charlton, D G; Chau, C C; Chavez Barajas, C A; Cheatham, S; Chegwidden, A; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, K; Chen, L; Chen, S; Chen, X; Chen, Y; Chen, Y; Cheng, H C; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; 
Cheu, E; Chevalier, L; Chiarella, V; Chiefari, G; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Chouridou, S; Chow, B K B; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Chwastowski, J J; Chytka, L; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirkovic, P; Citron, Z H; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Cole, B; Cole, S; Colijn, A P; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Connell, S H; Connelly, I A; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Corso-Radu, A; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Cowan, G; Cox, B E; Cranmer, K; Cree, G; Crépé-Renaudin, S; Crescioli, F; Cribbs, W A; Crispin Ortuzar, M; Cristinziani, M; Croft, V; Crosetti, G; Cuciuc, C-M; Cuhadar Donszelmann, T; Cummings, J; Curatolo, M; Cuthbert, C; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dale, O; Dallaire, F; Dallapiccola, C; Dam, M; Daniells, A C; Dano Hoffmann, M; Dao, V; Darbo, G; Darmora, S; Dassoulas, J; Dattagupta, A; Davey, W; David, C; Davidek, T; Davies, E; Davies, M; Davignon, O; Davison, A R; Davison, P; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; De Groot, N; de Jong, P; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Deigaard, I; Del Peso, J; Del Prete, T; Deliot, F; Delitzsch, C M; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, 
L; Dell'Orso, M; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demilly, A; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deterre, C; Deviveiros, P O; Dewhurst, A; Dhaliwal, S; Di Ciaccio, A; Di Ciaccio, L; Di Domenico, A; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Di Valentino, D; Dias, F A; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dimitrievska, A; Dingfelder, J; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; Djuvsland, J I; do Vale, M A B; Do Valle Wemans, A; Dobos, D; Doglioni, C; Doherty, T; Dohmae, T; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donati, S; Dondero, P; Donini, J; Dopke, J; Doria, A; Dova, M T; Doyle, A T; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Ducu, O A; Duda, D; Dudarev, A; Dudziak, F; Duflot, L; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Durglishvili, A; Dwuznik, M; Dyndal, M; Ebke, J; Edson, W; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Endner, O C; Endo, M; Engelmann, R; Erdmann, J; Ereditato, A; Eriksson, D; Ernis, G; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Esposito, B; Etienvre, A I; Etzion, E; Evans, H; Ezhilov, A; Fabbri, L; Facini, G; Fakhrutdinov, R M; Falciano, S; Falla, R J; Faltova, J; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feigl, S; Feligioni, L; Feng, C; Feng, E J; Feng, H; Fenyuk, A B; Fernandez Perez, S; Ferrag, S; Ferrando, J; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; 
Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filipuzzi, M; Filthaut, F; Fincke-Keeler, M; Finelli, K D; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, A; Fischer, J; Fisher, W C; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Formica, A; Forti, A; Fortin, D; Fournier, D; Fox, H; Fracchia, S; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Franconi, L; Franklin, M; Franz, S; Fraternali, M; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gabrielli, A; Gabrielli, A; Gadatsch, S; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Galster, G; Gan, K K; Gao, J; Gao, Y S; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gershon, A; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Giannetti, P; Gianotti, F; Gibbard, B; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gilles, G; Gingrich, D M; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giorgi, F M; Giraud, P F; Giugni, D; Giuliani, C; Giulini, M; Gjelsten, B K; Gkaitatzis, S; Gkialas, I; Gladilin, L K; Glasman, C; Glatzer, J; Glaysher, P C F; Glazov, A; Glonti, G L; Goblirsch-Kolb, M; Goddard, J R; Godlewski, J; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez-Sevilla, S; Goossens, L; Gorbounov, 
P A; Gordon, H A; Gorelov, I; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Grabas, H M X; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramling, J; Gramstad, E; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Graziani, E; Grebenyuk, O G; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Grossi, G C; Groth-Jensen, J; Grout, Z J; Guan, L; Guenther, J; Guescini, F; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gumpert, C; Guo, J; Gupta, S; Gutierrez, P; Gutierrez Ortiz, N G; Gutschow, C; Guttman, N; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haber, C; Hadavand, H K; Haddad, N; Haefner, P; Hageböck, S; Hajduk, Z; Hakobyan, H; Haleem, M; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Hamity, G N; Hamnett, P G; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Hanke, P; Hanna, R; Hansen, J B; Hansen, J D; Hansen, P H; Hara, K; Hard, A S; Harenberg, T; Hariri, F; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Harrison, P F; Hartjes, F; Hasegawa, M; Hasegawa, S; Hasegawa, Y; Hasib, A; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heim, T; Heinemann, B; Heinrich, L; Hejbal, J; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, J; Henderson, R C W; Heng, Y; Hengler, C; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Herbert, G H; Hernández Jiménez, Y; Herrberg-Schubert, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, E; Hill, J C; Hiller, K 
H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; Hoeferkamp, M R; Hoenig, F; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holmes, T R; Hong, T M; Hooft van Huysduynen, L; Hopkins, W H; Horii, Y; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, C; Hsu, P J; Hsu, S-C; Hu, D; Hu, X; Huang, Y; Hubacek, Z; Hubaut, F; Huegging, F; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibragimov, I; Iconomidou-Fayard, L; Ideal, E; Iengo, P; Igonkina, O; Iizawa, T; Ikegami, Y; Ikematsu, K; Ikeno, M; Ilchenko, Y; Iliadis, D; Ilic, N; Inamaru, Y; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Iturbe Ponce, J M; Iuppa, R; Ivarsson, J; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, M; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Janus, M; Jarlskog, G; Javadov, N; Javůrek, T; Jeanty, L; Jejelava, J; Jeng, G-Y; Jennens, D; Jenni, P; Jentzsch, J; Jeske, C; Jézéquel, S; Ji, H; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinaru, A; Jinnouchi, O; Joergensen, M D; Johansson, K E; Johansson, P; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Jongmanns, J; Jorge, P M; Joshi, K D; Jovicevic, J; Ju, X; Jung, C A; Jungst, R M; Jussel, P; Juste Rozas, A; Kaci, M; Kaczmarska, A; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalderon, C W; Kama, S; Kamenshchikov, A; Kanaya, N; Kaneda, M; Kaneti, S; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karakostas, K; Karastathis, N; Kareem, M J; Karnevskiy, M; Karpov, S N; Karpova, Z M; Karthik, K; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; Kastanas, 
A; Kataoka, Y; Katre, A; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Kehoe, R; Keller, J S; Kempster, J J; Keoshkerian, H; Kepka, O; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H Y; Kim, H; Kim, S H; Kimura, N; Kind, O M; King, B T; King, M; King, R S B; King, S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kiss, F; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klioutchnikova, T; Klok, P F; Kluge, E-E; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Kobayashi, D; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Kondrashova, N; Köneke, K; König, A C; König, S; Kono, T; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasnopevtsev, D; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Kretz, M; Kretzschmar, J; Kreutzfeldt, K; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, A; Kruse, M C; Kruskal, M; Kubota, T; Kucuk, H; Kuday, S; Kuehn, S; Kugel, A; Kuhl, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; Kurashige, H; Kurochkin, Y A; Kurumida, R; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; La Rosa, A; La Rotonda, L; Lacasta, 
C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laier, H; Lambourne, L; Lammers, S; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lankford, A J; Lanni, F; Lantzsch, K; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Lasagni Manghi, F; Lassnig, M; Laurelli, P; Lavrijsen, W; Law, A T; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, C A; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, G; Lefebvre, M; Legger, F; Leggett, C; Lehan, A; Lehmacher, M; Lehmann Miotto, G; Lei, X; Leight, W A; Leisos, A; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Leney, K J C; Lenz, T; Lenzi, B; Leone, R; Leone, S; Leonidopoulos, C; Leontsinis, S; Leroy, C; Lester, C G; Lester, C M; Levchenko, M; Levêque, J; Levin, D; Levinson, L J; Levy, M; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, L; Li, L; Li, S; Li, Y; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Lin, S C; Lin, T H; Linde, F; Lindquist, B E; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, B; Liu, D; Liu, J B; Liu, K; Liu, L; Liu, M; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, B A; Long, J D; Long, R E; Lopes, L; Lopez Mateos, D; Lopez Paredes, B; Lopez Paz, I; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Lou, X; Lounis, A; Love, J; Love, P A; Lowe, A J; Lu, N; Lubatti, H J; Luci, C; Lucotte, A; Luehring, F; Lukas, W; Luminari, L; Lundberg, O; Lund-Jensen, B; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Machado Miguens, J; Macina, D; Madaffari, D; Madar, R; Maddocks, H J; Mader, 
W F; Madsen, A; Maeno, T; Maeno Kataoka, M; Maevskiy, A; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Maiani, C; Maidantchik, C; Maier, A A; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V M; Malyukov, S; Mamuzic, J; Mandelli, B; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, L; March, L; Marchand, J F; Marchiori, G; Marcisovsky, M; Marino, C P; Marjanovic, M; Marques, C N; Marroquim, F; Marsden, S P; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massa, L; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Mättig, P; Mattmann, J; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazzaferro, L; Mc Goldrick, G; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; McMahon, S J; McPherson, R A; Mechnich, J; Medinnis, M; Meehan, S; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Mergelmeyer, S; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Milic, A; Miller, D W; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitani, T; Mitrevski, J; Mitsou, V A; Mitsui, S; Miucci, A; Miyagawa, P S; Mjörnmark, J U; Moa, T; Mochizuki, K; 
Mohapatra, S; Mohr, W; Molander, S; Moles-Valls, R; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Morange, N; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Moritz, S; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Moser, H G; Mosidze, M; Moss, J; Motohashi, K; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Muanza, S; Mudd, R D; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Mueller, T; Muenstermann, D; Munwes, Y; Murillo Quijada, J A; Murray, W J; Musheghyan, H; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Narayan, R; Nattermann, T; Naumann, T; Navarro, G; Nayyar, R; Neal, H A; Nechaeva, P Yu; Neep, T J; Nef, P D; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nellist, C; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neves, R M; Nevski, P; Newman, P R; Nguyen, D H; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novgorodova, O; Nowak, S; Nozaki, M; Nozka, L; Ntekas, K; Nunes Hanninger, G; Nunnemann, T; Nurse, E; Nuti, F; O'Brien, B J; O'grady, F; O'Neil, D C; O'Shea, V; Oakham, F G; Oberlack, H; Obermann, T; Ocariz, J; Ochi, A; Ochoa, I; Oda, S; Odaka, S; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohman, H; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira Damazio, D; Oliver Garcia, E; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Otero Y Garzon, G; Otono, H; 
Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Oussoren, K P; Ouyang, Q; Ovcharova, A; Owen, M; Ozcan, V E; Ozturk, N; Pachal, K; Pacheco Pages, A; Padilla Aranda, C; Pagáčová, M; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Palestini, S; Palka, M; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Paolozzi, L; Papadopoulou, Th D; Papageorgiou, K; Paramonov, A; Paredes Hernandez, D; Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, L E; Pedersen, M; Pedraza Lopez, S; Pedro, R; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penwell, J; Perepelitsa, D V; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrella, S; Perrino, R; Peschke, R; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Pettersson, N E; Pezoa, R; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pires, S; Pitt, M; Pizio, C; Plazak, L; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Pollard, C S; Polychronakos, V; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospisil, S; Potamianos, K; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, 
J; Przybycien, M; Przysiezniak, H; Ptacek, E; Puddu, D; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Qian, J; Qin, G; Qin, Y; Quadt, A; Quarrie, D R; Quayle, W B; Queitsch-Maitland, M; Quilty, D; Qureshi, A; Radeka, V; Radescu, V; Radhakrishnan, S K; Radloff, P; Rados, P; Ragusa, F; Rahal, G; Rajagopalan, S; Rammensee, M; Randle-Conde, A S; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Readioff, N P; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Rehnisch, L; Reisin, H; Relich, M; Rembser, C; Ren, H; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Rezanova, O L; Reznicek, P; Rezvani, R; Richter, R; Ridel, M; Rieck, P; Rieger, J; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Ritsch, E; Riu, I; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Roda, C; Rodrigues, L; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romero Adam, E; Rompotis, N; Ronzani, M; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, M; Rose, P; Rosendahl, P L; Rosenthal, O; Rossetti, V; Rossi, E; Rossi, L P; Rosten, R; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sacerdoti, S; Saddique, A; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Sakurai, Y; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Sales De Bruin, P H; Salihagic, D; Salnikov, A; Salt, J; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sandbach, R L; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santoni, C; Santonico, R; Santos, H; Santoyo Castillo, I; Sapp, K; Sapronov, A; Saraiva, J G; 
Sarkisyan-Grinbaum, E; Sarrazin, B; Sartisohn, G; Sasaki, O; Sasaki, Y; Sauvage, G; Sauvan, E; Savard, P; Savu, D O; Sawyer, C; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scanlon, T; Scannicchio, D A; Scarcella, M; Scarfone, V; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaefer, R; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schoenrock, B D; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schramm, S; Schreyer, M; Schroeder, C; Schuh, N; Schultens, M J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwanenberger, C; Schwartzman, A; Schwarz, T A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Scuri, F; Scutti, F; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellers, G; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfiligoj, T; Sforza, F; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shang, R; Shank, J T; Shapiro, M; Shatalov, P B; Shaw, K; Shehu, C Y; Sherwood, P; Shi, L; Shimizu, S; Shimmin, C O; Shimojima, M; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Shushkevich, S; Sicho, P; Sidiropoulou, O; Sidorov, D; Sidoti, A; Siegert, F; Sijacki, Dj; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skottowe, H P; Skovpen, K Yu; Skubic, P; Slater, M; 
Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snyder, S; Sobie, R; Socher, F; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solodkov, A A; Soloshenko, A; Solovyanov, O V; Solovyev, V; Sommer, P; Song, H Y; Soni, N; Sood, A; Sopczak, A; Sopko, B; Sopko, V; Sorin, V; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A M; South, D; Spagnolo, S; Spanò, F; Spearman, W R; Spettel, F; Spighi, R; Spigo, G; Spiller, L A; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Staerz, S; Stahlman, J; Stamen, R; Stamm, S; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Stavina, P; Steinberg, P; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoicea, G; Stolte, P; Stonjek, S; Stradling, A R; Straessner, A; Stramaglia, M E; Strandberg, J; Strandberg, S; Strandlie, A; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Stroynowski, R; Strubig, A; Stucci, S A; Stugu, B; Styles, N A; Su, D; Su, J; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, S; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Taccini, C; Tackmann, K; Taenzer, J; Taffard, A; Tafirout, R; Taiblum, N; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A A; Tam, J Y C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tannenwald, B B; Tannoury, N; Tapprogge, S; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tashiro, T; Tassi, E; Tavares Delgado, A; Tayalati, Y; Taylor, F E; Taylor, G N; Taylor, W; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, 
P; Temming, K K; Ten Kate, H; Teng, P K; Teoh, J J; Terada, S; Terashi, K; Terron, J; Terzo, S; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thomas, J P; Thomas-Wilsker, J; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, R J; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tikhomirov, V O; Tikhonov, Yu A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tolley, E; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsirintanis, N; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsuno, S; Tsybychev, D; Tudorache, A; Tudorache, V; Tuna, A N; Tupputi, S A; Turchikhin, S; Turecek, D; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Unverdorben, C; Urbaniec, D; Urquijo, P; Usai, G; Usanova, A; Vacavant, L; Vacek, V; Vachon, B; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Den Wollenberg, W; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; van Woerden, M C; Vanadia, M; Vandelli, W; Vanguri, R; Vaniachine, A; Vannucci, F; Vardanyan, G; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vazeille, F; Vazquez Schroeder, T; Veatch, J; Veloso, F; Velz, T; Veneziano, S; Ventura, A; 
Ventura, D; Venturi, M; Venturi, N; Venturini, A; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Viazlo, O; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Vigne, R; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinogradov, V B; Virzi, J; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vogel, M; Vokac, P; Volpi, G; Volpi, M; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorobev, K; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, P; Wagner, W; Wahlberg, H; Wahrmund, S; Wakabayashi, J; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, C; Wang, F; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Wanotayaroj, C; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, B M; Webb, S; Weber, M S; Weber, S W; Webster, J S; Weidberg, A R; Weigell, P; Weinert, B; Weingarten, J; Weiser, C; Weits, H; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Wessels, M; Wetter, J; Whalen, K; White, A; White, M J; White, R; White, S; Whiteson, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilkens, H G; Will, J Z; Williams, H H; Williams, S; Willis, C; Willocq, S; Wilson, A; Wilson, J A; Wingerter-Seez, I; Winklmeier, F; Winter, B T; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wright, M; Wu, M; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wyatt, T R; Wynne, B M; Xella, S; Xiao, M; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yakabe, R; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, 
K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yanush, S; Yao, L; Yao, W-M; Yasu, Y; Yatsenko, E; Yau Wong, K H; Ye, J; Ye, S; Yeletskikh, I; Yen, A L; Yildirim, E; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D R; Yu, J; Yu, J M; Yu, J; Yuan, L; Yurkewicz, A; Yusuff, I; Zabinski, B; Zaidan, R; Zaitsev, A M; Zaman, A; Zambito, S; Zanello, L; Zanzi, D; Zeitnitz, C; Zeman, M; Zemla, A; Zengel, K; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, F; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, L; Zhou, N; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhukov, K; Zibell, A; Zieminska, D; Zimine, N I; Zimmermann, C; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zurzolo, G; Zutshi, V; Zwalinski, L
The paper presents studies of Bose-Einstein correlations (BEC) for pairs of like-sign charged particles measured in the kinematic range p_T > 100 MeV and |η| < 2.5 in proton-proton collisions at centre-of-mass energies of 0.9 and 7 TeV with the ATLAS detector at the CERN Large Hadron Collider. The integrated luminosities are approximately 7 μb⁻¹, 190 μb⁻¹ and 12.4 nb⁻¹ for the 0.9 TeV, 7 TeV minimum-bias and 7 TeV high-multiplicity data samples, respectively. The multiplicity dependence of the BEC parameters characterizing the correlation strength and the correlation source size is investigated for charged-particle multiplicities of up to 240. A saturation effect in the multiplicity dependence of the correlation source size parameter is observed using the high-multiplicity 7 TeV data sample. The dependence of the BEC parameters on the average transverse momentum of the particle pair is also investigated.
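The correlation strength and source size enter through the fitted two-particle correlation function. A common parameterization in BEC analyses is the exponential form, with a strength parameter (conventionally λ) and a size parameter (conventionally R); the sketch below uses this standard form with illustrative values, not the measured ATLAS results:

```python
import math

def bec_correlation(Q, lam, R, C0=1.0):
    """Exponential Bose-Einstein correlation function C2(Q).

    Q   : four-momentum difference of the pair (GeV)
    lam : correlation-strength parameter (lambda)
    R   : source-size parameter in GeV^-1 (multiply by
          hbar*c ~ 0.1973 GeV*fm to convert to femtometres)
    """
    return C0 * (1.0 + lam * math.exp(-R * Q))

# Illustrative values only: the enhancement is maximal at Q = 0
# and decays on a scale set by 1/R.
c2_at_zero = bec_correlation(0.0, lam=0.7, R=10.0)  # R = 10 GeV^-1 ~ 2 fm
```

Fits of this function to the measured two-particle Q spectrum yield the λ and R values whose multiplicity and transverse-momentum dependence the paper studies.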
42 CFR 84.205 - Facepiece test; minimum requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... respirator will be fitted to the faces of persons having varying facial shapes and sizes. (b) Where the applicant specifies a facepiece size or sizes for the respirator together with the approximate measurement..., pumping with a tire pump into a 28-liter (1 cubic-foot) container. (4) Each wearer shall not detect the...
Minimum financial outlays for purchasing alcohol brands in the U.S.
Albers, Alison Burke; DeJong, William; Naimi, Timothy S; Siegel, Michael; Shoaff, Jessica Ruhlman; Jernigan, David H
2013-01-01
Low alcohol prices are a potent risk factor for excessive drinking, underage drinking, and adverse alcohol-attributable outcomes. Presently, there is little reported information on alcohol prices in the U.S., in particular as it relates to the costs of potentially beneficial amounts of alcohol. To determine the minimum financial outlay necessary to purchase individual brands of alcohol using online alcohol price data from January through March 2012. The smallest container size and the minimum price at which that size beverage could be purchased in the U.S. in 2012 were determined for 898 brands of alcohol, across 17 different alcoholic beverage types. The analyses were conducted in March 2012. The majority of alcoholic beverage categories contain brands that can be purchased in the U.S. for very low minimum financial outlays. In the U.S., a wide variety of alcohol brands, across many types of alcohol, are available at very low prices. Given that both alcohol use and abuse are responsive to price, particularly among adolescents, the prevalence of low alcohol prices is concerning. Surveillance of alcohol prices and minimum pricing policies should be considered in the U.S. as part of a public health strategy to reduce excessive alcohol consumption and related harms. Copyright © 2013 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
Heterogeneous porous structures for the fastest liquid absorption
NASA Astrophysics Data System (ADS)
Shou, Dahua; Ye, Lin; Fan, Jintu
2013-08-01
Engineered porous materials that absorb liquids rapidly under global constraints (e.g. volume, surface area, or material cost) are useful in many applications, including moisture-management fabrics, medical wound dressings, paper-based analytical devices, and liquid-molding composites. Absorption in capillary tubes and porous media is driven by the surface tension of the liquid, which generates a capillary pressure inversely proportional to the pore size. By contrast, the permeability of a porous material scales with the square of the pore size. The interplay of these two mechanisms opens the possibility of a porous structure with the fastest absorption. In this work, we explore the flow behaviours that yield the fastest absorption in heterogeneous porous architectures, from two-portion tubes to two-layer porous media. The time to fill the voids in these porous materials is expressed in terms of pore size, height and porosity. It is shown that, for a given height and void volume, two-component porous structures with a negative gradient of pore size/porosity along the imbibition direction have a faster absorption rate than control samples with uniform pore size/porosity. In particular, optimal structural parameters, including pore size, height and porosity, are found that minimize the absorption time. The results serve as a priori knowledge for the design of porous structures with excellent water-absorption and moisture-management properties in various fields.
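The trade-off the abstract describes, suction scaling as 1/pore size versus permeability scaling as pore size squared, can be sketched with a Washburn-type model of a two-layer stack. The tube-bundle permeability, the front-suction rule, and the series-resistance assumption below are illustrative simplifications, not the paper's exact formulation:

```python
import math

def fill_time_two_layer(r1, L1, r2, L2, phi1=0.5, phi2=0.5,
                        gamma=0.072, mu=1.0e-3, theta=0.0):
    """Washburn-type fill time of a two-layer porous stack (a sketch,
    not the paper's exact model).  Assumes: tube-bundle permeability
    K_i = phi_i * r_i**2 / 8; capillary suction at the wetting front
    P_c = 2*gamma*cos(theta)/r of the layer being filled; and series
    viscous resistance over the already-filled path.
    Layer 1 (pore radius r1, thickness L1; SI units) is at the inlet."""
    cos_t = math.cos(theta)
    K1 = phi1 * r1 ** 2 / 8.0
    K2 = phi2 * r2 ** 2 / 8.0
    pc1 = 2.0 * gamma * cos_t / r1
    pc2 = 2.0 * gamma * cos_t / r2
    # classic Washburn result for filling layer 1
    t1 = mu * phi1 * L1 ** 2 / (2.0 * K1 * pc1)
    # filling layer 2 drags liquid through the already-filled layer 1
    t2 = mu * phi2 * (L1 * L2 / K1 + L2 ** 2 / (2.0 * K2)) / pc2
    return t1 + t2

# Larger pores at the inlet (pore size decreasing along the flow) fill
# faster than the reverse ordering, for the same height and void volume:
fast = fill_time_two_layer(50e-6, 5e-3, 10e-6, 5e-3)
slow = fill_time_two_layer(10e-6, 5e-3, 50e-6, 5e-3)
```

Under these assumptions the ordering with the negative pore-size gradient is roughly an order of magnitude faster, consistent with the qualitative conclusion of the abstract.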
NASA Astrophysics Data System (ADS)
MacMahon, Heber; Vyborny, Carl; Powell, Gregory; Doi, Kunio; Metz, Charles E.
1984-08-01
In digital radiography, the pixel size used determines the potential spatial resolution of the system. The need for spatial resolution varies with the subject matter imaged. In many areas, including the chest, the minimum spatial-resolution requirements have not been determined. Sarcoidosis is a disease that frequently causes subtle interstitial infiltrates in the lungs. As the initial step in an investigation designed to determine the minimum pixel size required in digital chest radiographic systems, we have studied digitized images with 1 mm pixels from patients with early pulmonary sarcoidosis. The results of this preliminary study suggest that neither mild interstitial pulmonary infiltrates nor other abnormalities, such as pneumothoraces, can be detected reliably in 1 mm pixel digital images.
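The link between pixel size and the highest spatial frequency a digital system can represent follows from the Nyquist criterion; a minimal sketch:

```python
def nyquist_lp_per_mm(pixel_mm):
    """Highest spatial frequency (line pairs per mm) representable at a
    given pixel pitch, from the Nyquist criterion f = 1 / (2 * pitch)."""
    return 1.0 / (2.0 * pixel_mm)

coarse = nyquist_lp_per_mm(1.0)   # 1 mm pixels  -> 0.5 lp/mm
fine = nyquist_lp_per_mm(0.1)     # 0.1 mm pixels -> 5.0 lp/mm
```

A 1 mm pixel thus caps resolution at 0.5 lp/mm, roughly an order of magnitude below conventional film-screen chest radiography, which is consistent with the study's finding that subtle infiltrates are missed at this pixel size.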
Assessment of codes, by-laws and regulations relating to air wells in building design
NASA Astrophysics Data System (ADS)
Fadzil, Sharifah Fairuz Syed; Karamazaman, Nazli
2017-10-01
Codes and by-laws concerning air well design (for buildings and lavatories) in Malaysia have been established in the Malaysian Uniform Building By-Laws, UBBL number 40 (1) and (2), since the 1980s. The wells fulfill ventilation and daylighting requirements. The minimum well area according to building storey height is compared between the UBBL and Singapore's well requirements from the Building and Construction Authority (BCA). A visual and graphical representation (with schematic building and well diagrams drawn to scale) of the minimum well sizes and dimensions is given. It can be seen that if the minimum required well size is used for buildings above 8 storeys, the result is a thin well that is not proportionate to the building height. A proposed set of dimensions for use in the UBBL is graphed and, when translated to graphics (three-dimensional buildings drawn to scale), produces a much better well proportion.
Cabrini, M; Cerino, F; de Olazabal, A; Di Poi, E; Fabbro, C; Fornasaro, D; Goruppi, A; Flander-Putrle, V; Francé, J; Gollasch, S; Hure, M; Lipej, L; Lučić, D; Magaletti, E; Mozetič, P; Tinta, T; Tornambè, A; Turk, V; Uhan, J; David, M
2018-02-14
Ballast water discharges may cause negative impacts to aquatic ecosystems, human health and economic activities by the introduction of potentially harmful species. Fifty untreated ballast water tanks, ten in each port, were sampled in four Adriatic Italian ports and one Slovenian port. Salinity, temperature and fluorescence were measured on board. Faecal indicator bacteria (FIB), phyto- and zooplankton were qualitatively and quantitatively determined to identify the species assemblage arriving in ballast water. FIB exceeded the convention standard limits in 12% of the sampled tanks. Vibrio cholerae was not detected. The number of viable organisms in the size groups (minimum dimension) <50 and ≥10 μm and ≥50 μm resulted above the abundances required from the Ballast Water Management Convention in 55 and 86% of the samples, respectively. This is not surprising as unmanaged ballast waters were sampled. Some potentially toxic and non-indigenous species were observed in both phyto- and zooplankton assemblages. Copyright © 2018 Elsevier Ltd. All rights reserved.
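The abundance limits the abstract compares against are those of the IMO Ballast Water Management Convention's D-2 standard. A minimal compliance check for the two organism-size groups (the D-2 indicator-microbe limits, e.g. for Vibrio cholerae, are omitted here):

```python
def d2_compliant(large_per_m3, small_per_ml):
    """Check viable-organism densities against the IMO Ballast Water
    Management Convention D-2 standard: fewer than 10 viable organisms
    per cubic metre in the >=50 um (minimum dimension) group, and fewer
    than 10 viable organisms per millilitre in the >=10 and <50 um group.
    Indicator-microbe limits are not checked in this sketch."""
    return large_per_m3 < 10 and small_per_ml < 10

# Unmanaged tanks, as in the study, typically fail one or both limits:
ok = d2_compliant(5, 9)        # both groups below the limits
fail = d2_compliant(250, 40)   # both groups above the limits
```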
Some design issues of strata-matched non-randomized studies with survival outcomes.
Mazumdar, Madhu; Tu, Donsheng; Zhou, Xi Kathy
2006-12-15
Non-randomized studies for the evaluation of a medical intervention are useful for quantitative hypothesis generation before the initiation of a randomized trial and also when randomized clinical trials are difficult to conduct. A strata-matched non-randomized design is often utilized where subjects treated by a test intervention are matched to a fixed number of subjects treated by a standard intervention within covariate based strata. In this paper, we consider the issue of sample size calculation for this design. Based on the asymptotic formula for the power of a stratified log-rank test, we derive a formula to calculate the minimum number of subjects in the test intervention group that is required to detect a given relative risk between the test and standard interventions. When this minimum number of subjects in the test intervention group is available, an equation is also derived to find the multiple that determines the number of subjects in the standard intervention group within each stratum. The methodology developed is applied to two illustrative examples in gastric cancer and sarcoma.
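The paper's calculation rests on the asymptotic power of a stratified log-rank test. As an illustration of the same style of calculation, the sketch below uses Schoenfeld's well-known (unstratified) approximation, not the authors' exact formula; the relative risk, matching multiple, and event probability in the example are hypothetical:

```python
from math import log
from statistics import NormalDist

def events_required(hazard_ratio, ratio=1.0, alpha=0.05, power=0.8):
    """Schoenfeld's approximation to the number of events a log-rank
    test needs to detect a given hazard ratio with the stated power.
    `ratio` = standard-arm subjects per test-arm subject (the matching
    multiple).  Illustrative sketch, not the paper's stratified formula."""
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(power)
    p = 1.0 / (1.0 + ratio)                  # fraction in the test arm
    return (z_a + z_b) ** 2 / (p * (1.0 - p) * log(hazard_ratio) ** 2)

def subjects_required(hazard_ratio, event_prob, ratio=1.0, **kw):
    """Total subjects = required events / overall event probability."""
    return events_required(hazard_ratio, ratio=ratio, **kw) / event_prob

# e.g. relative risk 0.67, 2:1 matching, 60% of subjects expected to
# have an event (all values hypothetical):
n_total = subjects_required(0.67, 0.6, ratio=2.0)
```

Note how unbalanced matching (ratio > 1) shrinks p(1 - p) and so inflates the required number of events, which is the trade-off the paper's multiple-determining equation addresses.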
Liu, Shaorong; Elkin, Christopher; Kapur, Hitesh
2003-11-01
We describe a microfabricated hybrid device consisting of a chip containing multiple twin-T injectors attached to an array of capillaries that serve as the separation channels. A new fabrication process was employed to create two differently sized round channels in a chip. The twin-T injectors were formed by the smaller round channels, which match the bore of the separation capillaries, and the separation capillaries were coupled to the injectors through the larger round channels, which match the outer diameter of the capillaries. This allows for a minimum dead volume and provides a robust chip/capillary interface. The hybrid design retains the full advantages of the unique chip injection scheme for DNA sequencing, such as sample stacking, sample purification, and a uniform signal-intensity profile, while employing long straight capillaries for the separations. In essence, the separation channel length is optimized for both speed and resolution, since it is unconstrained by chip size. To demonstrate the reliability and practicality of this hybrid device, we sequenced over 1000 real-world samples from Human Chromosome 5 and Ciona intestinalis, prepared at the Joint Genome Institute. We achieved an average Phred20 read length of 675 bases in about 70 min with a success rate of 91%. For similar samples on a MegaBACE 1000, the average Phred20 read length is about 550-600 bases in a 120-min separation, with a success rate of about 80-90%.
Emerging technologies in medical applications of minimum volume vitrification
Zhang, Xiaohui; Catalano, Paolo N; Gurkan, Umut Atakan; Khimji, Imran; Demirci, Utkan
2011-01-01
Cell/tissue biopreservation has broad public health and socio-economic impact affecting millions of lives. Cryopreservation technologies provide an efficient way to preserve cells and tissues targeting the clinic for applications including reproductive medicine and organ transplantation. Among these technologies, vitrification has displayed significant improvement in post-thaw cell viability and function by eliminating harmful effects of ice crystal formation compared to the traditional slow freezing methods. However, high cryoprotectant agent concentrations are required, which induces toxicity and osmotic stress to cells and tissues. It has been shown that vitrification using small sample volumes (i.e., <1 μl) significantly increases cooling rates and hence reduces the required cryoprotectant agent levels. Recently, emerging nano- and micro-scale technologies have shown potential to manipulate picoliter to nanoliter sample sizes. Therefore, the synergistic integration of nanoscale technologies with cryogenics has the potential to improve biopreservation methods. PMID:21955080
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
7 CFR 51.308 - Methods of sampling and calculation of percentages.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., CERTIFICATION, AND STANDARDS) United States Standards for Grades of Apples Methods of Sampling and Calculation... where the minimum diameter of the smallest apple does not vary more than 1/2 inch from the minimum diameter of the largest apple, percentages shall be calculated on the basis of count. (b) In all other...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2013 CFR
2013-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2012 CFR
2012-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2010 CFR
2010-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
29 CFR 1926.752 - Site layout, site-specific erection plan and construction sequence.
Code of Federal Regulations, 2014 CFR
2014-07-01
... standard test method of field-cured samples, either 75 percent of the intended minimum compressive design... the basis of an appropriate ASTM standard test method of field-cured samples, either 75 percent of the intended minimum compressive design strength or sufficient strength to support the loads imposed during...
Variation of radiation level and radionuclide enrichment in high background area.
Shetty, P K; Narayana, Y
2010-12-01
Significantly high radiation levels and radionuclide concentrations along the Quilon beach area of coastal Kerala have been reported by several investigators. A detailed gamma radiation survey was carried out using a portable scintillometer. Detailed studies of radionuclide concentrations in different environmental matrices of the high-background areas were undertaken in the coastal areas of Karunagapalli, Kayankulam, Chavara, Neendakara and Kollam to study the distribution and enrichment of the radionuclides in the region. The absorbed gamma dose rates in air in the high-background area are in the range 43-17,400 nGy h⁻¹. The gamma radiation level is found to be maximum at a distance of 20 m from the sea waterline on all beaches. Soil samples collected from different locations were analysed for primordial radionuclides by gamma spectrometry. The activity of the primordial radionuclides was determined for different size fractions of soil to study the enrichment pattern. The highest activities of (232)Th and (226)Ra were found to be enriched in the 125-63 μm size fraction. Preferential accumulation of (40)K was found in the <63 μm fraction. The minimum (232)Th activity was 30.2 Bq kg⁻¹, found in the 1000-500 μm particle-size fraction at Kollam, and the maximum activity of 3250.4 Bq kg⁻¹ was observed in grains of size 125-63 μm at Neendakara. The lowest (226)Ra activity observed was 33.9 Bq kg⁻¹ at Neendakara in grains of size 1000-500 μm, and the highest activity observed was 482.6 Bq kg⁻¹ in grains of size 125-63 μm at Neendakara. The highest (40)K activity found was 1923 Bq kg⁻¹ in grains of size <63 μm for a sample collected from Neendakara. A good correlation was observed between the computed dose and the measured dose in air. The correlation between (232)Th and (226)Ra was also moderately high. The results of these investigations are presented and discussed in this paper. Copyright © 2010 Elsevier Ltd. All rights reserved.
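The "computed dose" mentioned in the abstract is conventionally obtained from soil activity concentrations using the UNSCEAR (2000) conversion coefficients; a sketch using the peak activities quoted above (combining activities measured in different size fractions this way is illustrative only, not the authors' calculation):

```python
def absorbed_dose_rate(c_ra, c_th, c_k):
    """Outdoor absorbed gamma dose rate in air (nGy/h) from soil
    activity concentrations (Bq/kg) of (226)Ra, (232)Th and (40)K,
    using the widely used UNSCEAR (2000) conversion coefficients."""
    return 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k

# Peak activities quoted in the abstract (different size fractions):
d = absorbed_dose_rate(482.6, 3250.4, 1923.0)  # a few thousand nGy/h
```

The resulting dose rate of a few μGy/h sits well inside the 43-17,400 nGy h⁻¹ range the survey reports, consistent with the good computed-versus-measured correlation noted by the authors.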
van Lettow, Monique; Tweya, Hannock; Rosenberg, Nora E; Trapence, Clement; Kayoyo, Virginia; Kasende, Florence; Kaunda, Blessings; Hosseinipour, Mina C; Eliya, Michael; Cataldo, Fabian; Gugsa, Salem; Phiri, Sam
2017-07-11
Malawi introduced an ambitious public health program known as "Option B+" which provides all HIV-infected pregnant and breastfeeding women with lifelong combination antiretroviral therapy, regardless of WHO clinical stage or CD4 cell count. The PMTCT Uptake and REtention (PURE) study aimed at evaluating the effect of peer-support on care-seeking and retention in care. PURE Malawi was a three-arm cluster randomized controlled trial that compared facility-based and community-based models of peer support to standard of care under Option B+ strategy. Each arm was expected to enroll a minimum of 360 women with a total minimum sample size of 1080 participants. 21 sites (clusters) were selected for inclusion in the study. This paper describes the site selection, recruitment, enrollment process and baseline characteristics of study sites and women enrolled in the trial. Study implementation was managed by 3 partner organizations; each responsible for 7 study sites. The trial was conducted in the South East, South West, and Central West zones of Malawi, the zones where the implementing partners operate. Study sites included 2 district hospitals, 2 mission hospitals, 2 rural hospitals, 13 health centers and 1 private clinic. Enrollment occurred from November 2013 to November 2014, over a median period of 31 weeks (range 17-51) by site. A total of 1269 HIV-infected pregnant (1094) and breastfeeding (175) women, who were eligible to initiate ART under Option B+, were enrolled. Each site reached or surpassed the minimum sample size. Comparing the number of women enrolled versus antenatal cohort reports, sites recruited a median of 90% (IQR 75-100) of eligible reported women. In the majority of sites the ratio of pregnant and lactating women enrolled in the study was similar to the ratio of reported pregnant and lactating women starting ART in the same sites. The median age of all women was 27 (IQR 22-31) years. 
All women had ≥20 months of possible follow-up time; 96% had ≥2 years (24-32 months). The PURE Malawi study showed that three implementing partner organizations could successfully recruit a complex cohort of pregnant and lactating women across three geographical zones in Malawi within a reasonable timeline. This study is registered at clinicaltrials.gov, ID number NCT02005835. Registered 4 December 2013.
Slicing cluster mass functions with a Bayesian razor
NASA Astrophysics Data System (ADS)
Sealfon, C. D.
2010-08-01
We apply a Bayesian "razor" to forecast Bayes factors between different parameterizations of the galaxy cluster mass function. To demonstrate this approach, we calculate the minimum size N-body simulation needed for strong evidence favoring a two-parameter mass function over one-parameter mass functions and vice versa, as a function of the minimum cluster mass.
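The kind of model comparison described above can be sketched numerically. The snippet below uses the BIC approximation to the Bayes factor, an illustrative stand-in for the paper's razor calculation; the log-likelihood values, parameter counts and data size are hypothetical.

```python
import math

def bic(log_likelihood_max, k_params, n_data):
    # Bayesian Information Criterion: -2 ln L_max + k ln n
    return -2.0 * log_likelihood_max + k_params * math.log(n_data)

def approx_log_bayes_factor(logL1, k1, logL2, k2, n):
    # ln B12 ≈ -(BIC1 - BIC2) / 2 ; positive values favour model 1
    return -(bic(logL1, k1, n) - bic(logL2, k2, n)) / 2.0

# Hypothetical numbers: a two-parameter fit gains 5 units of maximum
# log-likelihood over a one-parameter fit on n = 1000 clusters.
lnB = approx_log_bayes_factor(-500.0, 2, -505.0, 1, 1000)
# "Strong" evidence on the Jeffreys scale corresponds to ln B > ~3,
# so this hypothetical data set would not yet justify the extra parameter.
```

The approximation shows the trade-off at the heart of the razor: extra parameters must buy enough likelihood to overcome the ln n complexity penalty.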
A call for transparent reporting to optimize the predictive value of preclinical research
Landis, Story C.; Amara, Susan G.; Asadullah, Khusru; Austin, Chris P.; Blumenstein, Robi; Bradley, Eileen W.; Crystal, Ronald G.; Darnell, Robert B.; Ferrante, Robert J.; Fillit, Howard; Finkelstein, Robert; Fisher, Marc; Gendelman, Howard E.; Golub, Robert M.; Goudreau, John L.; Gross, Robert A.; Gubitz, Amelie K.; Hesterlee, Sharon E.; Howells, David W.; Huguenard, John; Kelner, Katrina; Koroshetz, Walter; Krainc, Dimitri; Lazic, Stanley E.; Levine, Michael S.; Macleod, Malcolm R.; McCall, John M.; Moxley, Richard T.; Narasimhan, Kalyani; Noble, Linda J.; Perrin, Steve; Porter, John D.; Steward, Oswald; Unger, Ellis; Utz, Ursula; Silberberg, Shai D.
2012-01-01
The US National Institute of Neurological Disorders and Stroke convened major stakeholders in June 2012 to discuss how to improve the methodological reporting of animal studies in grant applications and publications. The main workshop recommendation is that at a minimum studies should report on sample-size estimation, whether and how animals were randomized, whether investigators were blind to the treatment, and the handling of data. We recognize that achieving a meaningful improvement in the quality of reporting will require a concerted effort by investigators, reviewers, funding agencies and journal editors. Requiring better reporting of animal studies will raise awareness of the importance of rigorous study design to accelerate scientific progress. PMID:23060188
The bivariate regression model and its application
NASA Astrophysics Data System (ADS)
Pratikno, B.; Sulistia, L.; Saniyah
2018-03-01
The paper studied a bivariate regression model (BRM) and its application. The maximum power and minimum size are used to choose the eligible tests using non-sample prior information (NSPI). In a simulation study on real data, we used Wilks' lambda to determine the best model of the BRM. The results showed that the power of the pre-test test (PTT) on the NSPI is a significant choice among the unrestricted test (UT) and restricted test (RT), and that the best model of the BRM is Y(1) = -894 + 46X and Y(2) = 78 + 0.2X, with a significant Wilks' lambda of 0.88 < 0.90 (Wilks' table).
Two-sample binary phase 2 trials with low type I error and low sample size
Litwin, Samuel; Basickes, Stanley; Ross, Eric A.
2017-01-01
Summary: We address the design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the experimental arm, E, sufficiently exceeds C, the number among controls. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
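The combined rejection rule described above (E ≥ m together with E − C > r) lends itself to exact enumeration under the null, since both arms are binomial. The sketch below computes the exact type I error for hypothetical arm sizes and thresholds; the specific numbers are illustrative, not the paper's designs.

```python
from math import comb

def binom_pmf(n, p, k):
    # Binomial probability mass function
    return comb(n, k) * p**k * (1 - p)**(n - k)

def type_I_error(nE, nC, p0, m, r):
    # Exact null rejection probability for the combined rule
    # "reject if E >= m and E - C > r", with E ~ Bin(nE, p0)
    # and C ~ Bin(nC, p0) independent.
    total = 0.0
    for e in range(nE + 1):
        pe = binom_pmf(nE, p0, e)
        for c in range(nC + 1):
            if e >= m and e - c > r:
                total += pe * binom_pmf(nC, p0, c)
    return total

# Hypothetical design: 40 experimental vs 20 control patients
# (2:1 randomization), null success rate p0 = 0.2.
alpha = type_I_error(40, 20, 0.2, m=12, r=5)
```

Sweeping m and r over a grid of candidate values is then enough to locate designs with the desired local type I error.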
Considering aspects of the 3Rs principles within experimental animal biology.
Sneddon, Lynne U; Halsey, Lewis G; Bury, Nic R
2017-09-01
The 3Rs - Replacement, Reduction and Refinement - are embedded into the legislation and guidelines governing the ethics of animal use in experiments. Here, we consider the advantages of adopting key aspects of the 3Rs into experimental biology, represented mainly by the fields of animal behaviour, neurobiology, physiology, toxicology and biomechanics. Replacing protected animals with less sentient forms or species, cells, tissues or computer modelling approaches has been broadly successful. However, many studies investigate specific models that exhibit a particular adaptation, or a species that is a target for conservation, such that their replacement is inappropriate. Regardless of the species used, refining procedures to ensure the health and well-being of animals prior to and during experiments is crucial for the integrity of the results and legitimacy of the science. Although the concepts of health and welfare are developed for model organisms, relatively little is known regarding non-traditional species that may be more ecologically relevant. Studies should reduce the number of experimental animals by employing the minimum suitable sample size. This is often calculated using power analyses, which are associated with making statistical inferences based on the P-value, yet P-values often leave scientists on shaky ground. We endorse focusing on effect sizes accompanied by confidence intervals as a more appropriate means of interpreting data; in turn, sample size could be calculated based on effect size precision. Ultimately, the appropriate employment of the 3Rs principles in experimental biology empowers scientists in justifying their research, and results in higher-quality science. © 2017. Published by The Company of Biologists Ltd.
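The suggestion to size samples by effect-size precision rather than P-value power can be sketched as follows. The half-width formula is a standard large-sample approximation for the confidence interval of Cohen's d with equal groups; the assumed effect size and target precision are illustrative choices, not values from the article.

```python
import math

def d_half_width(n_per_group, d=0.5, z=1.96):
    # Approximate 95% CI half-width for Cohen's d, equal group sizes:
    # SE(d) ≈ sqrt((n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)))
    n1 = n2 = n_per_group
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return z * se

def n_for_precision(target_half_width, d=0.5):
    # Smallest per-group n whose approximate CI half-width
    # meets the desired precision target.
    n = 2
    while d_half_width(n, d) > target_half_width:
        n += 1
    return n

# Illustrative target: estimate a presumed d = 0.5 to within ±0.4
n_needed = n_for_precision(0.4)  # -> 50 animals per group under these assumptions
```

Tightening the target half-width, rather than raising power against an arbitrary null, directly controls how precisely the effect will be estimated.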
Rakhshan, Hamid
2016-01-01
Summary Background and purpose: Dental aplasia (or hypodontia) is a frequent and challenging anomaly and thus of interest to many dental fields. Although the number of missing teeth (NMT) in each person is a major clinical determinant of treatment need, there is no meta-analysis on this subject. Therefore, we aimed to investigate the relevant literature, including epidemiological studies and research on dental/orthodontic patients. Methods: Among 50 reports, the effects of ethnicities, regions, sample sizes/types, subjects’ minimum ages, journals’ scientific credit, publication year, and gender composition of samples on the number of missing permanent teeth (except the third molars) per person were statistically analysed (α = 0.05, 0.025, 0.01). Limitations: The inclusion of small studies and second-hand information might reduce the reliability. Nevertheless, these strategies increased the meta-sample size and favoured the generalisability. Moreover, data weighting was carried out to account for the effect of study sizes/precisions. Results: The NMT per affected person was 1.675 [95% confidence interval (CI) = 1.621–1.728], 1.987 (95% CI = 1.949–2.024), and 1.893 (95% CI = 1.864–1.923), in randomly selected subjects, dental/orthodontic patients, and both groups combined, respectively. The effects of ethnicities (P > 0.9), continents (P > 0.3), and time (adjusting for the population type, P = 0.7) were not significant. Dental/orthodontic patients exhibited a significantly greater NMT compared to randomly selected subjects (P < 0.012). Larger samples (P = 0.000) and enrolling younger individuals (P = 0.000) might inflate the observed NMT per person. Conclusions: Time, ethnic backgrounds, and continents seem unlikely to be influencing factors. Subjects younger than 13 years should be excluded. Larger samples should be investigated by more observers. PMID:25840586
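A minimal sketch of the data weighting such a meta-analysis relies on is fixed-effect inverse-variance pooling, where each study's mean is weighted by the reciprocal of its squared standard error. The study means and standard errors below are hypothetical, not values from the meta-analysis.

```python
import math

def pooled_mean_ci(means, ses, z=1.96):
    # Fixed-effect inverse-variance pooling: weight each study
    # by 1/SE^2, so more precise studies dominate the estimate.
    weights = [1.0 / se**2 for se in ses]
    wsum = sum(weights)
    pooled = sum(w * m for w, m in zip(weights, means)) / wsum
    se_pooled = math.sqrt(1.0 / wsum)
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

# Hypothetical studies reporting mean NMT per affected person
mean, (lo, hi) = pooled_mean_ci([1.7, 2.0, 1.9], [0.05, 0.04, 0.06])
```

A random-effects variant would add a between-study variance term to each weight; the fixed-effect form shown is the simplest illustration of size/precision weighting.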
Mineralogy of SNC Meteorite EET79001 by Simultaneous Fitting of Moessbauer Backscatter Spectra
NASA Technical Reports Server (NTRS)
Morris, Richard V.; Agresti, D. G.
2010-01-01
We have acquired Mössbauer spectra for SNC meteorite EET79001 with a MIMOS II backscatter Mössbauer spectrometer [1] similar to those now operating on Mars as part of the Mars Exploration Rover (MER) missions. We are working to compare the Fe mineralogical composition of martian meteorites with in-situ measurements on Mars. Our samples were hand picked from the >1 mm size fraction of saw fines on the basis of lithology, color, and grain size (Table 1). The chips were individually analyzed at approx. 300 K by placing them on a piece of plastic that was in turn supported by the contact ring of the instrument (oriented vertically). Tungsten foil was used to mask certain areas from analysis. As shown in Figure 1, a variety of spectra was obtained, each resulting from different relative contributions of the Fe-bearing minerals present in the sample. Because the nine samples can reasonably be regarded as mixtures of the same Fe-bearing phases in variable proportions, the nine spectra were fit simultaneously (simfit) with a common model, adjusting parameters to a single minimum chi-squared convergence criterion [2]. The starting point for the fitting model and values of hyperfine parameters was the work of Solberg and Burns [3], who identified olivine, pyroxene, and ferrous glass as major, and ilmenite and a ferric phase as minor (<5%), Fe-bearing phases in EET79001.
NASA Astrophysics Data System (ADS)
ten Veldhuis, Marie-Claire; Schleiss, Marc
2017-04-01
Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
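The inter-amount-time idea, sampling the time needed to accumulate each fixed flow amount rather than the amount per fixed time, can be sketched as follows. Linear interpolation of the crossing time within a time step is a simplifying assumption, not the paper's exact procedure.

```python
def inter_amount_times(flows, dt, amount):
    # Return the sequence of times needed to accumulate each
    # successive fixed `amount` from a flow series (one flow value
    # per interval of length dt). Thresholds at amount, 2*amount, ...
    # are located by linear interpolation inside each step.
    times = []
    cum_start = 0.0
    next_thr = amount
    t_prev_cross = 0.0
    for step, q in enumerate(flows):
        cum_end = cum_start + q * dt
        while q > 0 and cum_end >= next_thr:
            frac = (next_thr - cum_start) / (q * dt)  # fraction of step
            t_cross = (step + frac) * dt
            times.append(t_cross - t_prev_cross)
            t_prev_cross = t_cross
            next_thr += amount
        cum_start = cum_end
    return times

# Flashy flow: the doubled flow in the third step halves the
# inter-amount times, so high-flow periods get sampled more densely.
iats = inter_amount_times([1, 1, 2], dt=1.0, amount=1.0)
```

Note how the sampling rate adapts: low-flow periods contribute few, long inter-amount times, while flood peaks contribute many short ones, which is what reweights the resulting distribution.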
NASA Astrophysics Data System (ADS)
Lee, Mun Hyung; Park, Joo Hyun
2018-06-01
The effect of nitrogen content on the formation of an equiaxed solidification structure of Fe-16Cr steel was investigated. Moreover, two different kinds of refractory materials, i.e., alumina and magnesia, were employed to control the type of oxide inclusion. The characteristics of TiN(-oxide) inclusions were quantitatively analyzed in both molten steel and solidified samples. When the melting was carried out in the alumina refractory, the grain size continuously decreased with increasing nitrogen content. However, a minimum grain size was observed at a specific nitrogen content (approx. 150 ppm) when the steel was melted in the magnesia refractory. Most of the single TiN particles had a cuboidal shape and fine irregularly shaped particles were located along the grain boundary due to the microsegregation of Ti at the grain boundary during solidification. The type of TiN-oxide hybrid inclusion was strongly affected by the refractory material: Al2O3-TiN and MgAl2O4-TiN hybrid-type inclusions were obtained in the alumina and magnesia refractory experiments, respectively. The formation of oxide inclusions was well predicted by thermochemical computations and it was commonly found that oxide particles were initially formed, followed by the nucleation and growth of TiN. When the nitrogen content increased, the number density of TiN linearly increased in the alumina refractory experiments. However, in the magnesia refractory experiments the number density of TiN exhibited a maximum at about [N] = 150 ppm, at which the minimum grain size was obtained. Therefore, the larger the number density of TiN, the smaller the primary grain size after solidification. The number density of TiN in the steel melted in the magnesia refractory was greater than that in the steel melted in the alumina refractory at given Ti and N contents, owing to the lower planar lattice disregistry of the MgAl2O4-TiN interface compared with that of the Al2O3-TiN interface.
When Δ T TiN (= difference between the TiN precipitation temperature and the liquidus of the steel) was 20 K to 40 K, the number density of effective TiN was maximized and thus, the grain size was minimized after solidification. Finally, although most of the TiN particles were smaller than 1 μm in the molten steel samples irrespective of the nitrogen content, TiN particles larger than 10 μm were observed in the solidified samples when the nitrogen content was greater than 150 ppm. The growth of TiN particles during melting and solidification was well predicted by the combinatorial simulation of the `Ostwald ripening model' based on the Lifshitz-Slyozov-Wagner theory in conjunction with the `Diffusion controlled model' using Ohnaka's microsegregation equation.
van Lieshout, Jan; Grol, Richard; Campbell, Stephen; Falcoff, Hector; Capell, Eva Frigola; Glehr, Mathias; Goldfracht, Margalit; Kumpusalo, Esko; Künzi, Beat; Ludt, Sabine; Petek, Davorina; Vanderstighelen, Veerle; Wensing, Michel
2012-10-05
Primary care has an important role in cardiovascular risk management (CVRM), and a minimum practice size may be needed for efficient delivery of CVRM. We examined CVRM in patients with coronary heart disease (CHD) in primary care and explored the impact of practice size. In an observational study in 8 countries we sampled CHD patients in primary care practices and collected data from electronic patient records. Practice samples were stratified according to practice size and urbanisation; patients were selected using coded diagnoses when available. CVRM was measured on the basis of internationally validated quality indicators. In the analyses, practice size was defined in terms of the number of patients registered at or visiting the practice. We performed multilevel regression analyses controlling for patient age and sex. We included 181 practices (63% of the number targeted). Two countries included a convenience sample of practices. Data from 2960 CHD patients were available. Some countries used methods supplemental to coded diagnoses or other inclusion methods, introducing potential inclusion bias. We found substantial variation on all CVRM indicators across practices and countries. We computed aggregated practice scores as the percentage of patients with a positive outcome. Rates of risk factor recording varied from 55% for physical activity as the mean practice score across all practices (sd 32%) to 94% (sd 10%) for blood pressure. Rates for reaching treatment targets for systolic blood pressure, diastolic blood pressure and LDL cholesterol were 46% (sd 21%), 86% (sd 12%) and 48% (sd 22%) respectively. Rates for providing recommended cholesterol lowering and antiplatelet drugs were around 80%, and 70% received influenza vaccination. Practice size was not associated with indicator scores, with one exception: in Slovenia larger practices performed better. Variation was more related to differences between practices than between countries.
CVRM measured by quality indicators showed wide variation within and between countries, leaving room for improvement in all countries involved. Few associations between performance scores and practice size were found.
A Probabilistic Asteroid Impact Risk Model
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2016-01-01
Asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data has little effect on the metrics of interest.
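The Monte Carlo structure described above, sampling impact scenarios from uncertain parameter distributions and aggregating the outcomes into a distribution, can be sketched in miniature. The distributions and the damage-scaling law below are purely illustrative assumptions, not the PAIR model's actual inputs.

```python
import math
import random

def sample_impact_consequences(n_trials=100_000, seed=1):
    # Toy Monte Carlo sketch: draw an uncertain impactor diameter and
    # a local population density per scenario, then score the affected
    # population with a crude area-times-density damage model.
    random.seed(seed)
    affected = []
    for _ in range(n_trials):
        diameter_m = random.lognormvariate(3.5, 1.0)   # metres (assumed dist.)
        pop_density = random.expovariate(1 / 50.0)     # people/km^2 (assumed)
        damage_radius_km = 0.05 * diameter_m           # illustrative scaling
        area_km2 = math.pi * damage_radius_km**2
        affected.append(area_km2 * pop_density)
    return affected

results = sample_impact_consequences()
# Aggregating the scenarios gives a risk distribution; a "minimum size
# of concern" can then be read off against a chosen risk tolerance.
```

The real model replaces each illustrative piece with published impact frequencies, entry/ablation physics, and gridded population data, but the sample-score-aggregate loop is the same.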
The provision of clearances accuracy in piston - cylinder mating
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.
2017-08-01
The paper aims to improve the quality of pumping equipment in the oil and gas industry. The main purpose of the study is to stabilize maximum values of productivity and durability of the pumping equipment through selective assembly of the cylinder-piston kinematic mating according to an optimization criterion. It is shown that the minimum clearance in the piston-cylinder mating is formed by the maximum material dimensions. It is proved that the maximum material dimensions follow their own distribution laws within the tolerance limits for the diameters of the cylinder's internal mirror and the outer cylindrical surface of the piston. Their dispersion zones should therefore be divided into size groups with a group tolerance equal to half the tolerance for the minimum clearance. Techniques for measuring the material dimensions - the smallest cylinder diameter and the largest piston diameter according to the envelope condition - are developed for sorting them into size groups. Reliable control of dimensional precision ensures optimal minimum clearances of the piston-cylinder mating in all size groups of the pumping equipment, as needed for increasing equipment productivity and durability during production, operation and repair.
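The selective-assembly scheme, sorting measured material dimensions into size groups of width equal to half the minimum-clearance tolerance and mating parts from matching groups, can be sketched as follows. Dimensions are in integer micrometres; the mate-within-the-same-group rule is an assumed simplification of the selection procedure.

```python
from collections import defaultdict

def size_group(dimension, low_limit, group_tol):
    # Size-group index of a measured material dimension; groups
    # partition the dispersion zone into bands of width group_tol
    # (half the tolerance for the minimum clearance).
    return int((dimension - low_limit) // group_tol)

def match_pairs(cylinders, pistons, low_cyl, low_pis, group_tol):
    # Selective-assembly sketch: mate a cylinder and a piston only
    # when both fall into the same size-group index, which bounds
    # the clearance variation within each assembled pair.
    groups = defaultdict(lambda: ([], []))
    for d in cylinders:
        groups[size_group(d, low_cyl, group_tol)][0].append(d)
    for d in pistons:
        groups[size_group(d, low_pis, group_tol)][1].append(d)
    pairs = []
    for _, (cs, ps) in sorted(groups.items()):
        for c, p in zip(sorted(cs), sorted(ps)):
            pairs.append((c, p))
    return pairs

# Hypothetical measurements (µm): bore and piston diameters,
# lower tolerance limits 100000 and 99950 µm, 10 µm group tolerance.
pairs = match_pairs([100000, 100008, 100016], [99950, 99957, 99969],
                    low_cyl=100000, low_pis=99950, group_tol=10)
```

Integer micrometres are used deliberately: flooring a floating-point difference at a band boundary could misassign a part to a neighbouring group.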
NASA Astrophysics Data System (ADS)
Chatterjee, Arijit Kumar; Sarkar, Raj Kumar; Prasun Chattopadhyay, Asoke; Aich, Pulakesh; Chakraborty, Ruchira; Basu, Tarakdas
2012-03-01
A method for preparation of copper nanoparticles (Cu-NPs) was developed by simple reduction of CuCl2 in the presence of gelatin as a stabilizer and without applying stringent conditions like purging with nitrogen. The NPs were characterized by spectrophotometry, dynamic light scattering, x-ray diffraction, transmission electron microscopy, atomic force microscopy and x-ray photoelectron spectroscopy. The particles were about 50-60 nm in size and highly stable. The antibacterial activity of this Cu-NP on Gram-negative Escherichia coli was demonstrated by the methods of agar plating, flow cytometry and phase contrast microscopy. The minimum inhibitory concentration (3.0 µg ml^-1), minimum bactericidal concentration (7.5 µg ml^-1) and susceptibility constant (0.92) showed that this Cu-NP is highly effective against E. coli at a much lower concentration than that reported previously. Treatment with Cu-NPs made E. coli cells filamentous. The higher the concentration of Cu-NPs, the greater the population of filamentous cells; average filament size varied from 7 to 20 µm compared to the normal cell size of ~2.5 µm. Both filamentation and killing of cells by Cu-NPs (7.5 µg ml^-1) also occurred in an E. coli strain resistant to multiple antibiotics. Moreover, an antibacterial effect of Cu-NPs was also observed in Gram-positive Bacillus subtilis and Staphylococcus aureus, for which the values of minimum inhibitory concentration and minimum bactericidal concentration were close to that for E. coli.
50 CFR 648.143 - Minimum sizes.
Code of Federal Regulations, 2010 CFR
2010-10-01
... retain black sea bass in or from U.S. waters of the western Atlantic Ocean from 35′ 15.3 N. Lat., the... size for black sea bass is 12.5 inches (31.75 cm) TL for all vessels that do not qualify for a...
50 CFR 648.127 - Framework adjustments to management measures.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., FMP Monitoring Committee composition and process, description and identification of essential fish... additions to management measures must come from one or more of the following categories: Minimum fish size, maximum fish size, gear restrictions, gear restricted areas, gear requirements or prohibitions, permitting...
Skrbinšek, Tomaž; Jelenčič, Maja; Waits, Lisette; Kos, Ivan; Jerina, Klemen; Trontelj, Peter
2012-02-01
The effective population size (N(e)) could be the ideal parameter for monitoring populations of conservation concern as it conveniently summarizes both the evolutionary potential of the population and its sensitivity to genetic stochasticity. However, tracing its change through time is difficult in natural populations. We applied four new methods for estimating N(e) from a single sample of genotypes to trace temporal change in N(e) for bears in the Northern Dinaric Mountains. We genotyped 510 bears using 20 microsatellite loci and determined their age. The samples were organized into cohorts with regard to the year when the animals were born and yearly samples with age categories for every year when they were alive. We used the Estimator by Parentage Assignment (EPA) to directly estimate both N(e) and generation interval for each yearly sample. For cohorts, we estimated the effective number of breeders (N(b)) using linkage disequilibrium, sibship assignment and approximate Bayesian computation methods and extrapolated these estimates to N(e) using the generation interval. The N(e) estimate by EPA is 276 (183-350 95% CI), meeting the inbreeding-avoidance criterion of N(e) > 50 but short of the long-term minimum viable population goal of N(e) > 500. The results obtained by the other methods are highly consistent with this result, and all indicate a rapid increase in N(e) probably in the late 1990s and early 2000s. The new single-sample approaches to the estimation of N(e) provide efficient means for including N(e) in monitoring frameworks and will be of great importance for future management and conservation. © 2012 Blackwell Publishing Ltd.
Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.
Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo
2014-08-01
Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10^0-10^1, 10^1-10^2, 10^2-10^3 CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs 12% using ISO 6579. The results were used to estimate, by Monte Carlo simulation, the lowest number of sample units that need to be tested to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 to obtain this confidence level, while the minimum number of pools is reduced to 8 pools of 9 eggs each when real-time PCR is applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
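A back-of-envelope version of the sampling-plan calculation, assuming independent pools with a fixed per-pool detection probability, is sketched below. This simplified model does not reproduce the paper's Monte Carlo figures (16 vs 8 pools), because the full simulation models pool sizes and contamination levels explicitly, but it illustrates how higher analytical sensitivity shrinks the required number of pools.

```python
import math

def min_pools(per_pool_detection_prob, confidence=0.95):
    # Smallest n with P(at least one positive pool) >= confidence:
    # (1 - s)^n <= 1 - confidence  =>  n >= ln(1-conf) / ln(1-s).
    # Assumes independent, identically contaminated pools.
    return math.ceil(math.log(1 - confidence)
                     / math.log(1 - per_pool_detection_prob))

# Per-pool detection rates at the lowest contamination level:
# 12% for ISO 6579 vs 40% for real-time PCR (from the study).
n_iso = min_pools(0.12)  # -> 24 pools under this simplified model
n_pcr = min_pools(0.40)  # -> 6 pools under this simplified model
```

Even this crude geometric argument shows the roughly four-fold reduction in testing effort that the more sensitive method buys.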
NASA Astrophysics Data System (ADS)
Mahmoudi, Soulmaz; Gholizadeh, Ahmad
2018-06-01
In this work, Y3-xSrxFe5-xZrxO12 (0.0 ≤ x ≤ 0.7) samples were synthesized by the citrate precursor method at 1050 °C. The structural and magnetic properties of Y3-xSrxFe5-xZrxO12 were studied using X-ray diffraction, scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy and vibrating sample magnetometry. XRD analysis using the X'Pert package shows a pure garnet phase with cubic structure (space group Ia-3d); the impurity phase SrZrO3 is observed when x exceeds 0.6. Rietveld refinement using the Fullprof program shows lattice volume expansion with increasing degree of Sr/Zr substitution. The crystallite sizes remain constant in the range x = 0.0-0.5 and then increase. The different morphologies observed in SEM micrographs of the samples can be related to different values of the microstrain in the samples. The hysteresis loops of the samples reveal superparamagnetic behaviour. Also, the drop in coercivity with increasing substitution mainly originates from a reduction in the magneto-elastic anisotropy energy. The values of the saturation magnetization (MS) vary non-monotonically with increasing Sr/Zr substitution, reaching a maximum of 26.14 emu/g for the sample with x = 0.1 and a minimum of 17.64 emu/g for x = 0.0 and x = 0.2. The variation of MS in these samples results from a superposition of three factors: reduction of Fe3+ in the a-site, change in the FeT-O-FeO angle, and magnetic core size.
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
40 CFR Table 1 to Subpart III of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method Cadmium 0.004 milligrams per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of part 60). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance...
40 CFR Table 1 to Subpart Eeee of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... determining compliance using this method 1. Cadmium 18 micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour...
NASA Astrophysics Data System (ADS)
Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa
2015-03-01
Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physicochemical properties (solubility, charge, size etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. the peptides oxytocin and vasopressin, the monoamines adrenaline and serotonin, and the amino acid GABA, can be simultaneously identified/measured in small samples using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient as manual sample preparation steps and one-time-use equipment are kept to a minimum. Zwitterionic HILIC stationary phases were used for both on-line solid phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes, producing sharp and symmetric signals and allowing precise quantification of small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), allowing hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of trials, however, report their results as the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is often unsatisfactory in practice. Inspired by this, we propose a new estimation method that incorporates the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios. For the first two scenarios, our method greatly improves on existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and offer suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods for estimating the sample mean and standard deviation and propose new estimation methods to improve on the existing literature.
We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
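The scenario where a trial reports only the minimum, median, maximum and sample size can be sketched as follows. This is a minimal reading of the Wan et al. formulas for that scenario, using only the Python standard library; treat the exact expressions as an assumption based on the abstract rather than a verified transcription of the paper.

```python
from statistics import NormalDist

def estimate_mean_sd(a, m, b, n):
    """Estimate the sample mean and SD from the minimum (a), median (m),
    maximum (b) and sample size (n), following the first-scenario
    formulas proposed by Wan et al. (2014)."""
    mean = (a + 2 * m + b) / 4
    # Wan et al. replace Hozo's fixed range/4 rule with a denominator
    # that grows with n, via the expected range of n normal draws.
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    sd = (b - a) / xi
    return mean, sd

mean, sd = estimate_mean_sd(a=0, m=5, b=10, n=25)
```

For n = 25 the denominator is about 3.93, so the SD estimate is range/3.93 rather than Hozo's range/4; the sample-size dependence is exactly the improvement the abstract describes.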
Elahi, Fanny M; Marx, Gabe; Cobigo, Yann; Staffaroni, Adam M; Kornak, John; Tosun, Duygu; Boxer, Adam L; Kramer, Joel H; Miller, Bruce L; Rosen, Howard J
2017-01-01
Degradation of white matter microstructure has been demonstrated in frontotemporal lobar degeneration (FTLD) and Alzheimer's disease (AD). In preparation for clinical trials, ongoing studies are investigating the utility of longitudinal brain imaging for quantification of disease progression. To date, only one study has examined sample size calculations based on longitudinal changes in white matter integrity in FTLD. Our aim was to quantify longitudinal changes in white matter microstructural integrity in the three canonical subtypes of frontotemporal dementia (FTD) and in AD using diffusion tensor imaging (DTI). Sixty patients with clinical diagnoses of FTD, including 27 with behavioral variant frontotemporal dementia (bvFTD), 14 with non-fluent variant primary progressive aphasia (nfvPPA), and 19 with semantic variant PPA (svPPA), as well as 19 patients with AD and 69 healthy controls, were studied. We used a voxel-wise approach to calculate the annual rate of change in fractional anisotropy (FA) and mean diffusivity (MD) in each group using two time points approximately one year apart. Mean rates of change in FA and MD in 48 atlas-based regions of interest, as well as global measures of cognitive function, were used to calculate sample sizes for clinical trials (80% power, alpha of 5%). All FTD groups showed statistically significant baseline and longitudinal white matter degeneration, with predominant involvement of frontal tracts in the bvFTD group, frontal and temporal tracts in the PPA groups, and posterior tracts in the AD group. Longitudinal change in MD yielded a larger number of regions with sample sizes below 100 participants per therapeutic arm in comparison with FA. SvPPA had the smallest sample size based on change in MD in the fornix (n = 41 participants per study arm to detect a 40% effect of drug), and nfvPPA and AD had their smallest sample sizes based on rate of change in MD within the left superior longitudinal fasciculus (n = 49 for nfvPPA, and n = 23 for AD).
BvFTD generally showed the largest sample size estimates (minimum n = 140 based on MD in the corpus callosum). The corpus callosum appeared to be the best region for a potential study that would include all FTD subtypes. Change in global measure of functional status (CDR box score) yielded the smallest sample size for bvFTD (n = 71), but clinical measures were inferior to white matter change for the other groups. All three of the canonical subtypes of FTD are associated with significant change in white matter integrity over one year. These changes are consistent enough that drug effects in future clinical trials could be detected with relatively small numbers of participants. While there are some differences in regions of change across groups, the genu of the corpus callosum is a region that could be used to track progression in studies that include all subtypes.
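Per-arm sample sizes like those quoted above follow from the standard two-sample power calculation. The sketch below uses the normal approximation with illustrative numbers; the variable names and example values are assumptions, not the study's measured rates of change.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(annual_change, drug_effect, sd_change, alpha=0.05, power=0.80):
    """Participants per arm needed to detect a given proportional slowing
    of annual change (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf
    delta = drug_effect * annual_change        # detectable difference in means
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                * (sd_change / delta) ** 2)

# Illustrative: a 40% slowing of a change whose SD equals its mean
print(n_per_arm(annual_change=1.0, drug_effect=0.4, sd_change=1.0))
```

The formula makes the abstract's pattern explicit: regions where the change is large relative to its between-subject variability (small sd_change/delta) yield the small sample sizes reported for the fornix and superior longitudinal fasciculus.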
Jian Yang; Hong S. He; Brian R. Sturtevant; Brian R. Miranda; Eric J. Gustafson
2008-01-01
We compared four fire spread simulation methods (completely random, dynamic percolation, size-based minimum travel time algorithm, and duration-based minimum travel time algorithm) and two fire occurrence simulation methods (Poisson fire frequency model and hierarchical fire frequency model) using a two-way factorial design. We examined these treatment effects on...
Voltage scheduling for low power/energy
NASA Astrophysics Data System (ADS)
Manzak, Ali
2001-07-01
Power considerations have become an increasingly dominant factor in the design of both portable and desk-top systems. An effective way to reduce power consumption is to lower the supply voltage, since power is quadratically related to voltage. This dissertation considers the problem of lowering the supply voltage at (i) the system level and at (ii) the behavioral level. At the system level, the voltage of the variable voltage processor is dynamically changed with the work load. Processors with limited-size buffers as well as those with very large buffers are considered. Given the task arrival times, deadline times, execution times, periods and switching activities, task scheduling algorithms that minimize energy or peak power are developed for the processors equipped with very large buffers. A relation between the operating voltages of the tasks for minimum energy/power is determined using the Lagrange multiplier method, and an iterative algorithm that utilizes this relation is developed. Experimental results show that the voltage assignment obtained by the proposed algorithm is very close to that of the optimal energy assignment (0.1% error) and the optimal peak power assignment (1% error). Next, on-line and off-line minimum energy task scheduling algorithms are developed for processors with limited-size buffers. These algorithms have polynomial time complexity and present optimal (off-line) and close-to-optimal (on-line) solutions. A procedure to calculate the minimum buffer size given information about the size of the task (maximum, minimum), execution time (best case, worst case) and deadlines is also presented. At the behavioral level, resources operating at multiple voltages are used to minimize power while maintaining the throughput.
Such a scheme has the advantage of allowing modules on the critical paths to be assigned to the highest voltage levels (thus meeting the required timing constraints) while allowing modules on non-critical paths to be assigned to lower voltage levels (thus reducing the power consumption). A polynomial time resource and latency constrained scheduling algorithm is developed to distribute the available slack among the nodes such that power consumption is minimum. The algorithm is iterative and utilizes the slack based on the Lagrange multiplier method.
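The quadratic voltage-power relation is what makes voltage scheduling pay off: because energy is convex in speed, the Lagrange condition equalizes marginal energy cost across tasks, and under a single hard deadline that reduces to running everything at one common speed. A toy sketch with hypothetical numbers, ignoring leakage and voltage-switching overheads:

```python
# Toy DVS model: with frequency scaled along with voltage, a task of
# c cycles run at speed s costs roughly c * s**2 energy units and takes
# c / s seconds. (Hypothetical constants, not the dissertation's model.)
def energy(cycles, speeds):
    return sum(c * s ** 2 for c, s in zip(cycles, speeds))

def makespan(cycles, speeds):
    return sum(c / s for c, s in zip(cycles, speeds))

cycles, deadline = [4e6, 6e6], 2.0
common = [sum(cycles) / deadline] * 2   # one speed, deadline met exactly
uneven = [8e6, 4e6]                     # also meets the deadline exactly

assert makespan(cycles, common) <= deadline + 1e-9
assert makespan(cycles, uneven) <= deadline + 1e-9
# Convexity of s**2 means the common speed never loses:
print(energy(cycles, common) < energy(cycles, uneven))  # True
```

With buffers, arrival times and per-task deadlines the optimum is no longer a single speed, which is what the scheduling algorithms above solve for.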
NASA Astrophysics Data System (ADS)
Castilla, G.
2004-09-01
Landcover maps typically represent the territory as a mosaic of contiguous units (polygons) that are assumed to correspond to geographic entities (e.g. lakes, forests or villages). They may also be viewed as representing a particular level of a landscape hierarchy where each polygon is a holon: an object made of subobjects and part of a superobject. The focal level portrayed in the map is distinguished from other levels by the average size of the objects compounding it. Moreover, the focal level is bounded by the minimum size that objects of this level are supposed to have. Based on this framework, we have developed a segmentation method that defines a partition on a multiband image such that i) the mean size of segments is close to the one specified; ii) each segment exceeds the required minimum size; and iii) the internal homogeneity of segments is maximal given the size constraints. This paper briefly describes the method, focusing on its region merging stage. The most distinctive feature of the latter is that while the merging sequence is ordered by increasing dissimilarity, as in conventional methods, there is no need to define a threshold on the dissimilarity measure between adjacent segments.
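The merging stage can be illustrated with a 1-D toy. This is purely a sketch under stated assumptions (mean-difference dissimilarity, line-neighbour adjacency); the actual method works on 2-D multiband images with an adjacency graph, but the key idea survives: merging stops when the size constraints are met, so no dissimilarity threshold is ever needed.

```python
def merge_regions(values, min_size, target_mean_size):
    """Greedy size-constrained region merging on a 1-D signal: repeatedly
    merge the most similar adjacent pair until every region reaches
    min_size and the mean region size reaches the target."""
    regions = [[v] for v in values]                 # one region per pixel

    def dissim(a, b):                               # |difference of means|
        return abs(sum(a) / len(a) - sum(b) / len(b))

    while len(regions) > 1:
        if (all(len(r) >= min_size for r in regions)
                and len(values) / len(regions) >= target_mean_size):
            break
        i = min(range(len(regions) - 1),
                key=lambda k: dissim(regions[k], regions[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

print(merge_regions([1, 1, 1, 9, 9, 9], min_size=2, target_mean_size=3))
# -> [[1, 1, 1], [9, 9, 9]]
```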
Preparation and antibacterial properties of titanium-doped ZnO from different zinc salts
2014-01-01
To investigate the relationship between the microstructures and antibacterial properties of titanium-doped ZnO powders and to probe their antibacterial mechanism, titanium-doped ZnO powders with different shapes and sizes were prepared from different zinc salts by an alcohothermal method. The ZnO powders were characterized by X-ray powder diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), ultraviolet-visible spectroscopy (UV-vis), scanning electron microscopy (SEM), transmission electron microscopy (TEM), and selected area electron diffraction (SAED), and the antibacterial activities of the titanium-doped ZnO powders against Escherichia coli and Staphylococcus aureus were evaluated. Furthermore, the tested strains were characterized by SEM, and the electrical conductance variation trend of the bacterial suspension was characterized. The results indicate that the morphologies of the powders differ due to preparation from different zinc salts. The XRD results show that the samples synthesized from zinc acetate, zinc nitrate, and zinc chloride are zincite ZnO, while the sample synthesized from zinc sulfate is a mixture of ZnO, ZnTiO3, and ZnSO4·3Zn(OH)2 crystal. UV-vis spectra show that the absorption edges of the titanium-doped ZnO powders prepared from zinc acetate, zinc nitrate, and zinc chloride are red-shifted to more than 400 nm. The antibacterial activity of the titanium-doped ZnO powders synthesized from zinc chloride is optimal, and their minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) are lower than 0.25 g L−1. Likewise, when the bacteria are treated with ZnO powders synthesized from zinc chloride, the bacterial cells are damaged most seriously, and the electrical conductance increment of the bacterial suspension is slightly higher. It can be inferred that the antibacterial properties of the titanium-doped ZnO powders are related to the microstructure, particle size, and the crystal phases present.
The powders can damage the cell walls; thus, the electrolyte is leaked from cells. PMID:24572014
Minimum-sized ideal reactor for continuous alcohol fermentation using immobilized microorganism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamane, T.; Shimizu, S.
Recently, alcohol fermentation has gained considerable attention with the aim of lowering its production cost in the production processes of both fuel ethanol and alcoholic beverages. The over-all cost is a summation of costs of various subsystems such as raw material (sugar, starch, and cellulosic substances) treatment, fermentation process, and alcohol separation from water solutions; lowering the cost of the fermentation processes is very important in lowering the total cost. Several new techniques have been developed for economic continuous ethanol production: use of a continuous wine fermentor with no mechanical stirring, cell recycle combined with continuous removal of ethanol under vacuum, a technique involving a bed of yeast admixed with an inert carrier, and use of immobilized yeast reactors in packed-bed column and in a three-stage double conical fluidized-bed bioreactor. All these techniques lead to increases, more or less, in reactor productivity, which in turn result in the reduction of the reactor size for a given production rate and a particular conversion. Since an improvement in the fermentation process often leads to a reduction of fermentor size and hence a lowering of the initial construction cost, it is important to theoretically arrive at a solution to what is the minimum-size setup of ideal reactors from the viewpoint of liquid backmixing. In this short communication, the minimum-sized ideal reactor for continuous alcohol fermentation using immobilized cells will be specifically discussed on the basis of a mathematical model. The solution will serve for designing an optimal bioreactor. (Refs. 26).
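The backmixing argument can be illustrated with textbook ideal-reactor design equations. This is a sketch only: first-order substrate consumption is assumed here for clarity, whereas fermentation kinetics are autocatalytic, which is exactly why the paper's minimum-size configuration of ideal reactors is a nontrivial question.

```python
k = 1.0  # hypothetical first-order rate constant, 1/h

def rate(X):
    return k * (1.0 - X)            # r(X) for first-order consumption

def tau_cstr(X):
    return X / rate(X)              # CSTR design equation: tau = X / r(X)

def tau_pfr(X, steps=10000):
    # PFR design equation: tau = integral_0^X dX' / r(X'), midpoint rule
    h = X / steps
    return sum(h / rate((i + 0.5) * h) for i in range(steps))

X = 0.9
# With a rate that falls monotonically with conversion, the unmixed PFR
# always needs the shorter residence time (smaller reactor volume).
print(tau_pfr(X) < tau_cstr(X))  # True
```

With autocatalytic growth kinetics the rate first rises then falls with conversion, so a CSTR followed by a plug-flow section can beat either reactor alone; that is the kind of minimum-size result the communication derives.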
NASA Astrophysics Data System (ADS)
Golobokova, Liudmila; Polkin, Victor
2014-05-01
The recent kickoff of Northern Sea Route development has drawn serious attention to the state of the Arctic environment. Ongoing climatic and environmental changes are felt particularly keenly in polar areas. Monitoring of the air environment allows prognostic assessments, which are required for planning actions to prevent hazardous environmental impacts. In August-September 2013, an expeditionary voyage of RV «Professor Khlustin» along the Northern Sea Route took place. En-route aerosol sampling was done over the surface of the Bering, Chukchi and East Siberian seas (as far as the town of Pevek). The purpose of sampling was to assess the spatio-temporal variability of optical, microphysical and chemical characteristics of aerosol particles of the surface layer within different areas adjacent to the Northern Sea Route. The aerosol tests used an automated mobile unit consisting of a photoelectric particle counter AZ-10, an aethalometer MDA-02, an aspirator on an NBM-1.2 pump chassis, and an impactor. This set of equipment allows measurements of number concentration, the dispersed composition of aerosols within sizes d = 0.3-10 μm, the mass concentration of submicron aerosol, and aerosol sampling onto filters. Filter-based aerosol sampling followed the method accepted by the EMEP and EANET monitoring networks. The impactor channel was upgraded to separate particles bigger than 1 μm in size, with the fine fraction settling on the filter. Back 5-day and 10-day trajectories of air mass transfer at heights of 10, 1500 and 3500 m were analyzed. The heights were selected on the grounds that 3500 m characterizes the air mass trend in the lower troposphere, 1500 m is the upper border of the atmospheric boundary layer, and the sampling was done in the surface layer at less than 10 m.
Minimum values of the aforementioned microphysical characteristics are more characteristic of higher latitudes, where there are no man-made sources of aerosols and the natural sources are weaker owing to the low temperatures endemic to the Arctic Ocean areas. To assess the chemical composition of the air masses, samples of the water-soluble fraction of the atmospheric aerosol underwent chemical analysis. The sum of the main ions in the aerosol composition varied from 0.23 to 16.2 μg/m3. Minimum ion concentrations were found in aerosol sampled over the Chukchi Sea surface in calm conditions. The chemical composition of the Bering and Chukchi sea aerosol was dominated by impurities of marine origin coming from the ocean with the air masses. Increased ion sums were observed in the Pevek area (East Siberian Sea), where the aerosol chemical composition was influenced by air masses coming from the shore. Maximum concentrations of the aforementioned components were seen in aerosol sampled during stormy weather, when stronger wind raised sea-origin particles into the air. Ingestion of spray onto the filter was eliminated by covering the sample catcher with a special protective hood. This survey indicates a favourable state of the atmosphere in the eastern section of the Russian Arctic during the summer-autumn season of 2013. The work was done under the financial support of project 23 of the Programs of fundamental research of the RAS Presidium, and Partnership Integration Project SB RAS 25.
Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error
Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong
2013-01-01
A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the suitable channel is selected. To further improve accuracy, a combination approach is presented using both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments demonstrate its effectiveness for wildfire segmentation. PMID:23878526
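For readers unfamiliar with the minimum-error principle, the 1-D version (in the spirit of Kittler and Illingworth) conveys the idea; the θ-division above applies the same criterion to a 2-D colour histogram split by a line at angle θ. A hedged sketch, not the paper's algorithm:

```python
import math

def min_error_threshold(hist):
    """1-D minimum-error threshold: model each side of the cut as a
    Gaussian and pick the cut minimizing the classification-error
    criterion J(t)."""
    total = sum(hist)
    best_t, best_J = None, float("inf")
    for t in range(1, len(hist) - 1):
        lo, hi = hist[:t], hist[t:]
        P1, P2 = sum(lo) / total, sum(hi) / total
        if P1 == 0 or P2 == 0:
            continue
        m1 = sum(i * h for i, h in enumerate(lo)) / sum(lo)
        m2 = sum((t + i) * h for i, h in enumerate(hi)) / sum(hi)
        v1 = sum(h * (i - m1) ** 2 for i, h in enumerate(lo)) / sum(lo)
        v2 = sum(h * (t + i - m2) ** 2 for i, h in enumerate(hi)) / sum(hi)
        if v1 <= 0 or v2 <= 0:
            continue
        J = (1 + 2 * (P1 * math.log(math.sqrt(v1)) + P2 * math.log(math.sqrt(v2)))
               - 2 * (P1 * math.log(P1) + P2 * math.log(P2)))
        if J < best_J:
            best_t, best_J = t, J
    return best_t

# A bimodal histogram should be cut between its two modes
print(min_error_threshold([1, 4, 8, 4, 1, 0, 1, 4, 8, 4, 1]))
```

In the paper's 2-D setting, the sample-trained angle θ replaces this exhaustive scan over cut positions.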
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harikrishnan, R.; Hareland, G.; Warpinski, N.R.
This paper evaluates the correlation between values of minimum principal in situ stress derived from two different models which use data obtained from triaxial core tests and coefficient-of-earth-at-rest correlations. Both models use triaxial laboratory tests with different confining pressures. The first method uses a verified fit to the Mohr failure envelope as a function of average rock grain size, which was obtained from detailed microscopic analyses. The second method uses the Mohr-Coulomb failure criterion. Both approaches give an angle of internal friction, which is used to calculate the coefficient of earth at rest, which in turn gives the minimum principal in situ stress. The minimum principal in situ stress is then compared to actual field mini-frac test data, which accurately determine the minimum principal in situ stress and are used to verify the accuracy of the correlations. The cores and the mini-frac stress tests were obtained from two wells: the Gas Research Institute's (GRI's) Staged Field Experiment (SFE) no. 1 well through the Travis Peak Formation in the East Texas Basin, and the Department of Energy's (DOE's) Multiwell Experiment (MWX) wells located west-southwest of the town of Rifle, Colorado, near the Rulison gas field. Results from this study indicate that the calculated minimum principal in situ stress values obtained by utilizing the rock failure envelope as a function of average rock grain size are in better agreement with the measured stress values (from mini-frac tests) than those obtained utilizing the Mohr-Coulomb failure criterion.
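One standard route from an internal friction angle to a horizontal stress estimate runs through Jaky's empirical relation for the coefficient of earth at rest and a simple effective-stress correction. This is a sketch under common assumptions; the paper's grain-size-based failure envelope is more elaborate than Jaky's one-line formula.

```python
import math

def k0_jaky(phi_deg):
    """Jaky's relation: K0 = 1 - sin(phi), phi = angle of internal friction."""
    return 1.0 - math.sin(math.radians(phi_deg))

def sigma_h_min(sigma_v, pore_p, phi_deg):
    """Minimum horizontal in situ stress from vertical stress and pore
    pressure via the coefficient of earth at rest (uniaxial-strain form)."""
    return k0_jaky(phi_deg) * (sigma_v - pore_p) + pore_p

# e.g. phi = 30 deg gives K0 = 0.5, so with sigma_v = 60 MPa and
# pore pressure 30 MPa the estimate is 0.5 * 30 + 30 = 45 MPa
```

The point of the comparison above is that how phi is extracted from the triaxial data (grain-size-dependent envelope vs Mohr-Coulomb) changes K0 and hence the stress estimate that gets checked against the mini-frac measurements.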
NASA Astrophysics Data System (ADS)
Baasch, B.; Müller, H.; von Dobeneck, T.
2018-07-01
In this work, we present a new methodology to predict grain-size distributions from geophysical data. Specifically, electric conductivity and magnetic susceptibility of seafloor sediments recovered from electromagnetic profiling data are used to predict grain-size distributions along shelf-wide survey lines. Field data from the NW Iberian shelf are investigated and reveal a strong relation between the electromagnetic properties and grain-size distribution. The here presented workflow combines unsupervised and supervised machine-learning techniques. Non-negative matrix factorization is used to determine grain-size end-members from sediment surface samples. Four end-members were found, which well represent the variety of sediments in the study area. A radial basis function network modified for prediction of compositional data is then used to estimate the abundances of these end-members from the electromagnetic properties. The end-members together with their predicted abundances are finally back transformed to grain-size distributions. A minimum spatial variation constraint is implemented in the training of the network to avoid overfitting and to respect the spatial distribution of sediment patterns. The predicted models are tested via leave-one-out cross-validation revealing high prediction accuracy with coefficients of determination (R2) between 0.76 and 0.89. The predicted grain-size distributions represent the well-known sediment facies and patterns on the NW Iberian shelf and provide new insights into their distribution, transition and dynamics. This study suggests that electromagnetic benthic profiling in combination with machine learning techniques is a powerful tool to estimate grain-size distribution of marine sediments.
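The unmixing step (end-members from surface samples) can be sketched with plain multiplicative-update NMF in the Lee-Seung style. This is purely illustrative with synthetic data: the paper adds compositional and minimum-spatial-variation constraints and uses an RBF network for the prediction step, none of which appear here.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(X, k, iters=500, eps=1e-9):
    """Factor X ~= W @ H with W, H >= 0 via multiplicative updates.
    Rows of H play the role of grain-size end-members; rows of W are
    their abundances per sample."""
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update end-members
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update abundances
    return W, H

# Synthetic check: mixtures of two disjoint end-members are recoverable.
E = np.array([[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]])
A = rng.random((30, 2))
X = A @ E
W, H = nmf(X, 2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The nonnegativity of both factors is what makes the rows of H interpretable as physical grain-size distributions rather than signed basis vectors.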
Hampton, Paul M
2018-02-01
As body size increases, some predators eliminate small prey from their diet, exhibiting an ontogenetic shift toward larger prey. In contrast, some predators show a telescoping pattern of prey size in which both large and small prey are consumed with increasing predator size. To explore a functional explanation for the two feeding patterns, I examined feeding effort as both handling time and number of upper jaw movements during ingestion of fish of consistent size. I used a range of body sizes from two snake species that exhibit ontogenetic shifts in prey size (Nerodia fasciata and N. rhombifer) and a species that exhibits telescoping prey size with increased body size (Thamnophis proximus). For the two Nerodia species, individuals with small or large heads exhibited greater difficulty in feeding effort compared to snakes of intermediate size. However, for T. proximus, measures of feeding effort were negatively correlated with head length and snout-vent length (SVL). These data indicate that ontogenetic shifters of prey size develop trophic morphology large enough that feeding effort increases for disproportionately small prey. I also compared changes in body size among the two diet strategies for active foraging snake species using data gleaned from the literature to determine if increased change in body size, and thereby feeding morphology, is observable in snakes regardless of prey type or foraging habitat. Of the 30 species sampled from the literature, snakes that exhibit ontogenetic shifts in prey size have a greater magnitude of change in SVL than species that have telescoping prey size patterns. Based upon the results of the two data sets above, I conclude that ontogenetic shifts away from small prey occur in snakes due, in part, to growth of body size and feeding structures beyond what is efficient for handling small prey. Copyright © 2017. Published by Elsevier GmbH.
ERIC Educational Resources Information Center
West, Lloyd Wilbert
An investigation was designed to ascertain the effects of cultural background on selected intelligence tests and to identify instruments which validly measure intellectual ability with a minimum of cultural bias. A battery of tests, selected for factor analytic study, was administered and replicated at four grade levels to a sample of Metis and…
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...
40 CFR Table 1 to Subpart Cccc of... - Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part). Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B of appendix A of this...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2010 CFR
2010-07-01
... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2012 CFR
2012-07-01
... part) Hydrogen chloride 62 parts per million by dry volume 3-run average (1 hour minimum sample time...) Sulfur dioxide 20 parts per million by dry volume 3-run average (1 hour minimum sample time per run...-8) or ASTM D6784-02 (Reapproved 2008).c Opacity 10 percent Three 1-hour blocks consisting of ten 6...
40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations
Code of Federal Regulations, 2011 CFR
2011-07-01
... this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample... per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method... appendix A of this part) Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour...
NASA Astrophysics Data System (ADS)
Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.
2018-06-01
Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. 
While significant positive trends in total column ozone could be identified at all latitudes by ~2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ~2050, and in the tropics and high latitudes deep into the second half of the 21st century.
The legibility of prescription medication labelling in Canada
Ahrens, Kristina; Krishnamoorthy, Abinaya; Gold, Deborah; Rojas-Fernandez, Carlos H.
2014-01-01
Introduction: The legibility of medication labelling is a concern for all Canadians, because poor or illegible labelling may lead to miscommunication of medication information and poor patient outcomes. There are currently few guidelines and no regulations regarding print standards on medication labels. This study analyzed sample prescription labels from Ontario, Canada, and compared them with print legibility guidelines (both generic and specific to medication labels). Methods: Cluster sampling was used to randomly select a total of 45 pharmacies in the tri-cities of Kitchener, Waterloo and Cambridge. Pharmacies were asked to supply a regular label with a hypothetical prescription. The print characteristics of patient-critical information were compared against the recommendations for prescription labels by pharmaceutical and health organizations and for print accessibility by nongovernmental organizations. Results: More than 90% of labels followed the guidelines for font style, contrast, print colour and nonglossy paper. However, only 44% of the medication instructions met the minimum guideline of 12-point print size, and none of the drug or patient names met this standard. Only 5% of the labels were judged to make the best use of space, and 51% used left alignment. None of the instructions were in sentence case, as is recommended. Discussion: We found discrepancies between guidelines and current labels in print size, justification, spacing and methods of emphasis. Conclusion: Improvements in pharmacy labelling are possible without moving to new technologies or changing the size of labels and would be expected to enhance patient outcomes. PMID:24847371
7 CFR 984.11 - Merchantable walnuts.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing... means all inshell walnuts meeting the minimum grade and size regulations effective pursuant to § 984.50... size regulations effective pursuant to § 984.50. [27 FR 9094, Sept. 13, 1962, as amended at 39 FR 35328...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-25
... Change Amending Rule 7.31(h)(5) To Reduce the Minimum Order Entry Size of a Mid-Point Passive Liquidity... order entry size of a Mid-Point Passive Liquidity Order (``MPL Order'') from 100 shares to one share...
Gritti, Fabrice; Guiochon, Georges
2015-03-06
Previous data have shown that Titan-C18 columns could deliver a minimum reduced plate height as small as 1.7. Additionally, the reduction of the mesopore size after C18 derivatization, and the subsequent restriction of sample diffusivity across the Titan-C18 particles, were found responsible for the unusually small value of the experimental optimum reduced velocity (5 versus 10 for conventional particles) and for the large values of the average reduced solid-liquid mass transfer resistance coefficients (0.032 versus 0.016) measured for a series of seven n-alkanophenones. The improvements in column efficiency made by increasing the average mesopore size of the Titan silica from 80 to 120 Å are investigated from a quantitative viewpoint, based on accurate measurements of the reduced coefficients (longitudinal diffusion, trans-particle mass transfer resistance, and eddy diffusion) and of the intra-particle diffusivity and the pore and surface diffusion for the same series of n-alkanophenone compounds. The experimental results reveal an increase (from 0% to 30%) of the longitudinal diffusion coefficients for the same sample concentration distribution (from 0.25 to 4) between the particle volume and the external volume of the column, a 40% increase of the intra-particle diffusivity for the same sample distribution (from 1 to 7) between the particle skeleton volume and the bulk phase, and a 15-30% decrease of the solid-liquid mass transfer coefficient for the n-alkanophenone compounds. Pore and surface diffusion are increased by 60% and 20%, respectively. The eddy dispersion term and the maximum column efficiency (295,000 plates/m) remain virtually unchanged. The rate of increase of the total plate height with increasing chromatographic speed is reduced by 20%, and it is mostly controlled (75% and 70% for the 80 and 120 Å pore sizes) by the flow rate dependence of the eddy dispersion term. Copyright © 2015 Elsevier B.V. All rights reserved.
Minimum distance classification in remote sensing
NASA Technical Reports Server (NTRS)
Wacker, A. G.; Landgrebe, D. A.
1972-01-01
The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier, with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
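The parametric minimum distance approach compared above is, in its simplest form, a nearest-class-mean rule. A minimal sketch, with invented two-band "pixel" data and class names purely for illustration (not the study's data or features):

```python
import numpy as np

def fit_class_means(X, y):
    """Parametric minimum-distance training: store one mean vector per class."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, means

def predict_min_distance(X, classes, means):
    """Assign each sample to the class whose mean is nearest (Euclidean distance)."""
    # pairwise distances, shape (n_samples, n_classes)
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy two-band reflectance "pixels" for two hypothetical crop classes
rng = np.random.default_rng(0)
wheat = rng.normal([0.2, 0.6], 0.05, size=(50, 2))
corn = rng.normal([0.5, 0.3], 0.05, size=(50, 2))
X = np.vstack([wheat, corn])
y = np.array(["wheat"] * 50 + ["corn"] * 50)

classes, means = fit_class_means(X, y)
pred = predict_min_distance(X, classes, means)
print("training accuracy:", (pred == y).mean())
```

A maximum likelihood classifier would instead model each class with a full Gaussian (mean and covariance); the nearest-mean rule is the special case of equal, spherical covariances.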
Todd, Helena; Mirawdeli, Avin; Costelloe, Sarah; Cavenagh, Penny; Davis, Stephen; Howell, Peter
2014-12-01
Riley stated that the minimum speech sample length necessary to compute his stuttering severity estimates was 200 syllables. This was investigated. Procedures supplied for the assessment of readers and non-readers were examined to see whether they give equivalent scores. Recordings of spontaneous speech samples from 23 young children (aged between 2 years 8 months and 6 years 3 months) and 31 older children (aged between 10 years 0 months and 14 years 7 months) were made. Riley's severity estimates were scored on extracts of different lengths. The older children provided spontaneous and read samples, which were scored for severity according to reader and non-reader procedures. Analysis of variance supported the use of 200-syllable-long samples as the minimum necessary for obtaining severity scores. There was no significant difference in SSI-3 scores for the older children when the reader and non-reader procedures were used. Samples that are 200-syllables long are the minimum that is appropriate for obtaining stable Riley's severity scores. The procedural variants provide similar severity scores.
A Transmission Electron Microscope Study of Experimentally Shocked Pregraphitic Carbon
NASA Technical Reports Server (NTRS)
Rietmeijer, Frans J. M.
1995-01-01
A transmission electron microscope study of experimental shock metamorphism in natural pre-graphitic carbon simulates the response of the most common natural carbons to increased shock pressure. The d-spacings of this carbon are insensitive to the shock pressure and have no apparent diagnostic value, but progressive comminution occurs in response to increased shock pressure up to 59.6 GPa. The function P = 869.1 × (size_min)^(-0.83) describes the relationship between the minimum root-mean-square subgrain size, size_min (nm), and the shock pressure, P (GPa). While a subgrain texture of natural pregraphitic carbons carries little information when pre-shock textures are unknown, this texture may go unnoticed as a shock metamorphic feature.
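Taking the reported fit P = 869.1 × (size_min)^(-0.83) at face value, the relation can be inverted to estimate the minimum subgrain size reached at a given shock pressure. A quick sketch (variable names are ours):

```python
def min_subgrain_size_nm(pressure_gpa):
    """Invert P = 869.1 * size_min**(-0.83) to get size_min (nm) from P (GPa)."""
    return (869.1 / pressure_gpa) ** (1.0 / 0.83)

# e.g. at the maximum reported shock pressure of 59.6 GPa
print(round(min_subgrain_size_nm(59.6), 1))  # ~25 nm
```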
USSR Report International Affairs.
1986-09-02
... minimum interest rate (price of a loan), proportion of the value of a contract to be covered by an easy loan (minimum size of payments in cash) ... including the blockade imposed on export financing. The latter was started in July 1980 by swiftly increasing the interest rates on foreign trade loans ...
Variability of space climate and its extremes with successive solar cycles
NASA Astrophysics Data System (ADS)
Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas
2016-04-01
Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of the observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series over each time interval for which the threshold is exceeded. The distribution of burst sizes is two-component, with a crossover in behaviour at thresholds ≈ 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has an exponential functional form. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables changes between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently the return times of events of a given size, are changing. [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277. [2] Chapman, S. C., D. A. Stainforth, and N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. Royal Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287.
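The burst construction described in this abstract (sum of the excess over a threshold within each contiguous exceedance run) can be sketched directly; the data and threshold below are synthetic, for illustration only:

```python
import numpy as np

def burst_sizes(series, threshold):
    """Return the size of each burst: the summed excess over `threshold`
    within each contiguous run of samples exceeding the threshold."""
    sizes, current = [], 0.0
    for v in series:
        if v > threshold:
            current += v - threshold  # accumulate excess within the run
        elif current > 0:
            sizes.append(current)     # run ended: close the burst
            current = 0.0
    if current > 0:
        sizes.append(current)         # close a burst running to the end
    return np.array(sizes)

# Synthetic "AE-like" samples (nT) with a 1000 nT threshold
x = np.array([100.0, 1200.0, 1500.0, 900.0, 1100.0, 800.0])
print(burst_sizes(x, 1000.0))  # two bursts: 700 (=200+500) and 100
```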
[Trial of eye drops recognizer for visually disabled persons].
Okamoto, Norio; Suzuki, Katsuhiko; Mimura, Osamu
2009-01-01
We developed a device to enable the visually disabled to differentiate eye drops and their doses. The new instrument is composed of a voice generator and a two-dimensional bar-code reader (LS9208). We designed voice outputs for the visually disabled stating when (number of times) and where (right, left, or both) to administer eye drops. We then determined the minimum bar-code size that can be recognized. After attaching bar-codes of the appropriate size to the lateral or bottom surface of the eye drop containers, the readability of the bar-codes was compared. The minimum discriminable bar-code size was 6 mm high × 8.5 mm long. Bar-codes on the bottom surface could be recognized more easily than bar-codes on the side. Our newly developed device using bar-codes enables visually disabled persons to differentiate eye drops and their doses.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
Design of landfill daily cells.
Panagiotakopoulos, D; Dokas, I
2001-08-01
The objective of this paper is to study the behaviour of the landfill soil-to-refuse (S/R) ratio when size, geometry and operating parameters of the daily cell vary over realistic ranges. A simple procedure is presented (1) for calculating the cell parameters values which minimise the S/R ratio and (2) for studying the sensitivity of this minimum S/R ratio to variations in cell size, final refuse density, working face length, lift height and cover thickness. In countries where daily soil cover is required, savings in landfill space could be realised following this procedure. The sensitivity of minimum S/R to variations in cell dimensions decreases with cell size. Working face length and lift height affect the S/R ratio significantly. This procedure also offers the engineer an additional tool for comparing one large daily cell with two or more smaller ones, at two different working faces within the same landfill.
Techno-economic assessment of pellets produced from steam pretreated biomass feedstock
Shahrukh, Hassan; Oyedun, Adetoyese Olajire; Kumar, Amit; ...
2016-03-10
Minimum production cost and optimum plant size are determined for pellet plants for three types of biomass feedstock: forest residue, agricultural residue, and energy crops. The life cycle cost from harvesting to the delivery of the pellets to the co-firing facility is evaluated. The cost varies from 95 to 105 t^-1 for regular pellets and 146-156 t^-1 for steam pretreated pellets. The difference in the cost of producing regular and steam pretreated pellets per unit energy is in the range of 2-3 GJ^-1. The economic optimum plant size (i.e., the size at which pellet production cost is minimum) is found to be 190 kt for regular pellet production and 250 kt for steam pretreated pellets. Furthermore, sensitivity and uncertainty analyses were carried out to identify sensitive parameters and effects of model error.
STS mission duration enhancement study: (orbiter habitability)
NASA Technical Reports Server (NTRS)
Carlson, A. D.
1979-01-01
Habitability improvements for early flights that could be implemented with minimum impact were investigated. These included: (1) launching the water dispenser in the on-orbit position instead of in a locker; (2) the sleep pallet concept; and (3) suction cup foot restraints. Past studies that used volumetric terms and requirements for crew size versus mission duration were reviewed, and common definitions of key habitability terms were established. An accurately dimensioned drawing of the orbiter mid-deck, locating all of the known major elements, was developed. Finally, it was established that orbiter duration and crew size can be increased with minimum modification and impact to the crew module. Preliminary concepts of the aft mid-deck, external versions of expanded tunnel adapters (ETA), and interior concepts of ETA-3 were developed, and comparison charts showing the various factors of volume, weight, duration, size, impact to orbiter, and number of sleep stations were generated.
Ensemble Learning Method for Hidden Markov Models
2014-12-01
Ensemble HMM landmine detector: mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil ... approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum ... propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity.
Preparation and bactericide activity of gallic acid stabilized gold nanoparticles
NASA Astrophysics Data System (ADS)
Moreno-Álvarez, S. A.; Martínez-Castañón, G. A.; Niño-Martínez, N.; Reyes-Macías, J. F.; Patiño-Marín, N.; Loyola-Rodríguez, J. P.; Ruiz, Facundo
2010-10-01
In this work, gold nanoparticles with three different sizes (13.7, 39.4, and 76.7 nm) were prepared using a simple aqueous method with gallic acid as the reducing and stabilizing agent; the different sizes were obtained by varying experimental parameters such as the pH of the reaction and the amount of gallic acid. The prepared nanoparticles were characterized using X-ray diffraction, transmission electron microscopy, dynamic light scattering, and UV-Vis spectroscopy. Samples were identified as elemental gold and present spherical morphology, a narrow size distribution, and good stabilization according to TEM and DLS results. The antibacterial activity of the gallic acid stabilized gold nanoparticles against S. mutans (the etiologic agent of dental caries) was assessed using a microdilution method, obtaining minimum inhibitory concentrations of 12.31, 12.31, and 49.25 μg/mL for the 13.7, 39.4, and 76.7 nm gold nanoparticles, respectively. The antibacterial assay showed that the gold nanoparticles prepared in this work exert a bactericide activity through a synergistic action with gallic acid. The MICs found for these nanoparticles are much lower than those reported for mixtures of gold nanoparticles and antibiotics.
Alter, S. Elizabeth; Newsome, Seth D.; Palumbi, Stephen R.
2012-01-01
Commercial whaling decimated many whale populations, including the eastern Pacific gray whale, but little is known about how population dynamics or ecology differed prior to these removals. Of particular interest is the possibility of a large population decline prior to whaling, as such a decline could explain the ∼5-fold difference between genetic estimates of prior abundance and estimates based on historical records. We analyzed genetic (mitochondrial control region) and isotopic information from modern and prehistoric gray whales using serial coalescent simulations and Bayesian skyline analyses to test for a pre-whaling decline and to examine prehistoric genetic diversity, population dynamics and ecology. Simulations demonstrate that significant genetic differences observed between ancient and modern samples could be caused by a large, recent population bottleneck, roughly concurrent with commercial whaling. Stable isotopes show minimal differences between modern and ancient gray whale foraging ecology. Using rejection-based Approximate Bayesian Computation, we estimate the size of the population bottleneck at its minimum abundance and the pre-bottleneck abundance. Our results agree with previous genetic studies suggesting the historical size of the eastern gray whale population was roughly three to five times its current size. PMID:22590499
NASA Astrophysics Data System (ADS)
Tarling, G. A.; Stowasser, G.; Ward, P.; Poulton, A. J.; Zhou, M.; Venables, H. J.; McGill, R. A. R.; Murphy, E. J.
2012-01-01
The biomass size structure of pelagic communities provides a system-level perspective that can be instructive when considering trophic interactions. Such perspectives can become even more powerful when combined with taxonomic information and stable isotope analysis. Here we apply these approaches to the pelagic community of the Scotia Sea (Southern Ocean) and consider the structure and development of trophic interactions over different years and seasons. Samples were collected from three open-ocean cruises during the austral spring 2006, summer 2008 and autumn 2009. Three main sampling techniques were employed: sampling bottles for microplankton (0-50 m), vertically hauled fine-meshed nets for mesozooplankton (0-400 m) and coarse-meshed trawls for macrozooplankton and nekton (0-1000 m). All samples were identified to the lowest practicable taxonomic level and their abundance, individual body weight and biomass (in terms of carbon) estimated. Slopes of the normalised biomass spectrum versus size showed a significant but not substantial difference between cruises and were between -1.09 and -1.06. These slopes were shallower than expected for a community at equilibrium and indicated an accumulation of biomass in the larger size classes (10^1-10^5 mg C ind^-1). A secondary structure of biomass domes was also apparent, with the domes being 2.5-3 log10 intervals apart in spring and summer and 2 log10 intervals apart in autumn. The recruitment of the copepod-consuming macrozooplankton Euphausia triacantha and Themisto gaudichaudii into an additional biomass dome was responsible for the decrease in the inter-dome interval in autumn. Predator-to-prey mass ratios estimated from stable isotope analysis reached a minimum in autumn, while the estimated trophic level of myctophid fish was highest in that season. This reflected greater amounts of internal recycling and increased numbers of trophic levels in autumn compared to earlier times of the year.
The accumulation of biomass in larger size classes throughout the year in the Scotia Sea may reflect the prevalence of species that store energy and have multiyear life-cycles.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Apples Definitions § 51.320 Diameter. When measuring for minimum size, “diameter” means the greatest dimension of the apple measured at right angles to a line from stem to blossom end. When measuring for maximum size, “diameter” means the smallest dimension of the apple determined by...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Standards for Grades of Apples Definitions § 51.320 Diameter. When measuring for minimum size, “diameter” means the greatest dimension of the apple measured at right angles to a line from stem to blossom end. When measuring for maximum size, “diameter” means the smallest dimension of the apple determined by...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Standards for Grades of Apples Definitions § 51.320 Diameter. When measuring for minimum size, “diameter” means the greatest dimension of the apple measured at right angles to a line from stem to blossom end. When measuring for maximum size, “diameter” means the smallest dimension of the apple determined by...
Sex Differences in Wisconsin Schizotypy Scales—A Meta-analysis
Miettunen, Jouko; Jääskeläinen, Erika
2010-01-01
Previous single studies have found inconsistent results on sex differences in positive schizotypy, with women mainly scoring higher than men, whereas studies of negative schizotypy have often found that men score higher than women. However, the overall effect has not been quantified. In this study, meta-analytic methods were used to estimate sex differences in the Wisconsin Schizotypy Scales, developed to measure schizotypal traits and psychosis proneness. We also studied the effect of sample characteristics on possible differences. Studies on healthy populations were extensively collected; the required minimum sample size was 50. According to the results, men scored higher on the scales of negative schizotypy, i.e., the Physical Anhedonia Scale (n = 23 studies, effect size Cohen d = 0.59, z test P < .001) and the Social Anhedonia Scale (n = 14, d = 0.44, P < .001). Differences were virtually nonexistent on the measures of positive schizotypy, i.e., the Magical Ideation Scale (n = 29, d = −0.01, P = .74) and the Perceptual Aberration Scale (n = 22, d = −0.08, P = .05). The sex difference was larger in studies with nonstudent and older samples on the Perceptual Aberration Scale (d = −0.19 vs d = −0.03, P < .05). This study was the first to pool studies on sex differences in these scales. The gender differences in social anhedonia, both in nonclinical samples and in schizophrenia, may relate to a broader aspect of social and interpersonal deficits. These results should be taken into account in studies using these instruments. PMID:18644855
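The effect size pooled in such meta-analyses is Cohen's d for two independent groups, computed from group means and the pooled standard deviation. A minimal sketch (the scores below are invented for illustration, not from the meta-analysis):

```python
import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Hypothetical anhedonia scale scores: men (group 1) vs. women (group 2)
d = cohens_d(m1=14.0, s1=6.0, n1=80, m2=10.5, s2=6.0, n2=90)
print(round(d, 2))  # → 0.58
```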
Continuous standalone controllable aerosol/cloud droplet dryer for atmospheric sampling
NASA Astrophysics Data System (ADS)
Sjogren, S.; Frank, G. P.; Berghof, M. I. A.; Martinsson, B. G.
2012-08-01
We describe a general-purpose dryer designed for continuous sampling of atmospheric aerosol where a specified relative humidity (RH) of the sample flow (lower than the atmospheric humidity) is required; measuring the properties of dried aerosol is often prescribed, for instance for monitoring networks. The specific purpose of our dryer is to dry highly charged cloud droplets (maximum diameter approximately 25 μm) with minimum losses both from the droplet size distribution entering the dryer and from the residual dry particle size distribution exiting the dryer. This is achieved by using a straight vertical downwards path from the aerosol inlet mounted above the dryer, and removing humidity to a dry closed-loop airflow on the other side of a semi-permeable GORE-TEX membrane (total area 0.134 m^2). The water vapour transfer coefficient, k, was measured to be 4.6 × 10^-7 kg m^-2 s^-1 %RH^-1 in the laboratory and is used for design purposes. A net water vapour transfer rate of up to 1.2 × 10^-6 kg s^-1 was achieved in the field. This corresponds to drying a 5.7 L min^-1 (0.35 m^3 h^-1) aerosol sample flow from 100% RH to 27% RH at 293 K (with a drying air total flow of 8.7 L min^-1). The system was used outdoors from 9 May until 20 October 2010 on the mountain Brocken (51.80° N, 10.67° E, 1142 m a.s.l.) in the Harz region in central Germany. A sample air relative humidity of less than 30% was obtained for 72% of the time period. The total availability of the measurement system was > 94% during these five months.
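The reported numbers can be cross-checked with the simple membrane transfer model implied by the coefficient k (mass flow = k × area × RH difference). A rough sketch; treating the RH difference as a single effective mean value across the membrane is our simplifying assumption:

```python
k = 4.6e-7    # measured water vapour transfer coefficient, kg m^-2 s^-1 per %RH
area = 0.134  # GORE-TEX membrane area, m^2

def vapour_flux(delta_rh_percent):
    """Water vapour mass flow (kg/s) for a given mean RH difference across the membrane."""
    return k * area * delta_rh_percent

# Mean RH difference needed to sustain the peak field transfer rate of 1.2e-6 kg/s
delta = 1.2e-6 / (k * area)
print(round(delta, 1))  # roughly a 19-20 %RH mean difference across the membrane
```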
Electric property measurement of free-standing SrTiO3 nanoparticles assembled by dielectrophoresis
NASA Astrophysics Data System (ADS)
Budiman, Faisal; Kotooka, Takumi; Horibe, Yoichi; Eguchi, Masanori; Tanaka, Hirofumi
2018-06-01
Free-standing strontium titanate (SrTiO3/STO) nanoparticles (NPs) were synthesized by the sol–gel method. X-ray diffractometry revealed that the required minimum annealing temperature to synthesize pure and highly crystalline STO NPs was 500 °C. Moreover, morphological observation by field emission scanning electron microscopy showed that the STO NPs have a spherical structure and their size depended on annealing condition. Electrical properties were measured using a low-temperature probing system. Here, an electrode was fabricated by electron beam lithography and the synthesized STO NPs were aligned at the electrodes by dielectrophoresis. The conductance of a sample was proportional to temperature. Two conduction mechanisms originating from hopping and tunneling appeared in the Arrhenius plot.
Annual plants change in size over a century of observations.
Leger, Elizabeth A
2013-07-01
Studies have documented changes in animal body sizes over the last century, but very little is known about changes in plant sizes, even though reduced plant productivity is potentially responsible for declines in size of other organisms. Here, I ask whether warming trends in the Great Basin have affected plant size by measuring specimens preserved on herbarium sheets collected between 1893 and 2011. I asked how maximum and minimum temperatures, precipitation, and the Pacific Decadal Oscillation (PDO) in the year of collection affected plant height, leaf size, and flower number, and asked whether changes in climate resulted in decreasing sizes for seven annual forbs. Species had contrasting responses to climate factors, and would not necessarily be expected to respond in parallel to climatic shifts. There were generally positive relationships between plant size and increased minimum and maximum temperatures, which would have been predicted to lead to small increases in plant sizes over the observation period. While one species increased in size and flower number over the observation period, five of the seven species decreased in plant height, four of these decreased in leaf size, and one species also decreased in flower production. One species showed no change. The mechanisms behind these size changes are unknown, and the limited data available on these species (germination timing, area of occupancy, relative abundance) did not explain why some species shrank while others grew or did not change in size over time. These results show that multiple annual forbs are decreasing in size, but that even within the same functional group, species may have contrasting responses to similar environmental stimuli. Changes in plant size could have cascading effects on other members of these communities, and differential responses to directional change may change the composition of plant communities over time. © 2013 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Wiedensohler, A.; Birmili, W.; Nowak, A.; Sonntag, A.; Weinhold, K.; Merkel, M.; Wehner, B.; Tuch, T.; Pfeifer, S.; Fiebig, M.; Fjäraa, A. M.; Asmi, E.; Sellegri, K.; Depuy, R.; Venzac, H.; Villani, P.; Laj, P.; Aalto, P.; Ogren, J. A.; Swietlicki, E.; Roldin, P.; Williams, P.; Quincey, P.; Hüglin, C.; Fierz-Schmidhauser, R.; Gysel, M.; Weingartner, E.; Riccobono, F.; Santos, S.; Grüning, C.; Faloon, K.; Beddows, D.; Harrison, R. M.; Monahan, C.; Jennings, S. G.; O'Dowd, C. D.; Marinoni, A.; Horn, H.-G.; Keck, L.; Jiang, J.; Scheckman, J.; McMurry, P. H.; Deng, Z.; Zhao, C. S.; Moerman, M.; Henzing, B.; de Leeuw, G.
2010-12-01
Particle mobility size spectrometers, often referred to as DMPS (Differential Mobility Particle Sizers) or SMPS (Scanning Mobility Particle Sizers), have found wide application in atmospheric aerosol research. However, the comparability of measurements conducted world-wide is hampered by a lack of generally accepted technical standards with respect to the instrumental set-up, measurement mode, data evaluation and quality control. This article results from several instrument intercomparison workshops conducted within the European infrastructure project EUSAAR (European Supersites for Atmospheric Aerosol Research). Under controlled laboratory conditions, the number size distributions from 20 to 200 nm determined by mobility size spectrometers of different design are within an uncertainty range of ±10% after correcting for internal particle losses, while below and above this size range the discrepancies increase. Instruments of identical design agreed within ±3% in the peak number concentration when all settings were made carefully. Technical standards were developed for a minimum requirement of mobility size spectrometry for atmospheric aerosol measurements. Technical recommendations are given for atmospheric measurements, including continuous monitoring of flow rates, temperature, pressure, and relative humidity for the sheath and sample air in the differential mobility analyser. In cooperation with EMEP (European Monitoring and Evaluation Program), a new uniform data structure was introduced for saving and disseminating the data within EMEP. This structure contains three levels: raw data, processed data, and final particle size distributions. Importantly, we recommend reporting raw measurements, including all relevant instrument parameters, as well as complete documentation of all data transformation and correction steps.
These technical and data structure standards aim to enhance the quality of long-term size distribution measurements, their comparability between different networks and sites, and their transparency and traceability back to raw data.
Two-sample binary phase 2 trials with low type I error and low sample size.
Litwin, Samuel; Basickes, Stanley; Ross, Eric A
2017-04-30
We address the design of two-stage clinical trials comparing experimental and control patients. Our end point is success or failure, however measured, with the null hypothesis that the chance of success in both arms is p0 and the alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules reject the null hypothesis when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ⩾ m, with two-sample rules of the form E − C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than the 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. Copyright © 2017 John Wiley & Sons, Ltd.
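The combined rejection rule described above (reject only if E ⩾ m and E − C > r) can be evaluated exactly with binomial sums. The sketch below is my own illustration of that calculation for a single-stage analogue; the sample sizes, p0, and thresholds are hypothetical, not the paper's designs.

```python
# Exact type I error of a combined one-sample/two-sample rejection rule
# (illustrative sketch; n_e, n_c, p0, m, r below are hypothetical).
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def type_i_error(n_e, n_c, p0, m, r):
    """P(reject) = P(E >= m and E - C > r) when both arms have success prob p0."""
    total = 0.0
    for e in range(m, n_e + 1):
        pe = binom_pmf(e, n_e, p0)
        # E - C > r  <=>  C <= e - r - 1
        pc = sum(binom_pmf(c, n_c, p0) for c in range(0, min(e - r, n_c + 1)))
        total += pe * pc
    return total

# Hypothetical example with 2:1 randomization: 24 experimental patients,
# 12 controls, p0 = 0.20, combined rule E >= 9 and E - C > 4.
alpha = type_i_error(24, 12, 0.20, m=9, r=4)
```

Scanning m and r over a grid of candidate values, and repeating the same sum with p1 in the experimental arm for power, reproduces the kind of design search the abstract describes.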
Acoustic Enrichment of Extracellular Vesicles from Biological Fluids.
Ku, Anson; Lim, Hooi Ching; Evander, Mikael; Lilja, Hans; Laurell, Thomas; Scheding, Stefan; Ceder, Yvonne
2018-06-11
Extracellular vesicles (EVs) have emerged as a rich source of biomarkers providing diagnostic and prognostic information in diseases such as cancer. Large-scale investigations into the contents of EVs in clinical cohorts are warranted, but a major obstacle is the lack of a rapid, reproducible, efficient, and low-cost methodology to enrich EVs. Here, we demonstrate the applicability of an automated acoustic-based technique to enrich EVs, termed acoustic trapping. Using this technology, we have successfully enriched EVs from cell culture conditioned media and from the urine and blood plasma of healthy volunteers. The acoustically trapped samples contained EVs ranging from exosomes to microvesicles in size and contained detectable levels of intravesicular microRNAs. Importantly, this method showed high reproducibility and yielded sufficient quantities of vesicles for downstream analysis. The enrichment could be obtained from a sample volume of 300 μL or less, equivalent to 30 min of enrichment time, depending on the sensitivity of the downstream analysis. Taken together, acoustic trapping provides a rapid, automated, low-volume-compatible, and robust method to enrich EVs from biofluids. Thus, it may serve as a novel tool for EV enrichment from large numbers of samples in a clinical setting with minimal sample preparation.
Liu, Fang
2016-01-01
In both clinical development and post-marketing of a new therapy or treatment, the incidence of an adverse event (AE) is always a concern. When sample sizes are small, large-sample inferential approaches to an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small-sample studies and the rare-AE case, (1) the confidence level that the incidence proportion of a particular AE, p, is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement, and the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
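A minimal sketch of the kind of calculation the abstract describes, under assumptions of my own: a Beta(a, b) prior (uniform by default) on the AE proportion p, so that x AEs among n patients give a Beta(a + x, b + n − x) posterior. The numbers in the example are invented, and this is not necessarily the author's exact formulation.

```python
# Posterior confidence that an AE incidence proportion exceeds a threshold,
# with a conjugate Beta prior (assumed here; uniform by default).
from math import lgamma, exp, log

def beta_cdf(t, a, b, steps=2000):
    """P(p <= t) for p ~ Beta(a, b), via Simpson's rule (assumes a, b >= 1)."""
    ln_norm = lgamma(a + b) - lgamma(a) - lgamma(b)
    def pdf(p):
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the logs at the endpoints
        return exp(ln_norm + (a - 1.0) * log(p) + (b - 1.0) * log(1.0 - p))
    n = steps + (steps % 2)                    # Simpson needs an even panel count
    h = t / n
    s = pdf(0.0) + pdf(t)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * pdf(i * h)
    return s * h / 3.0

def prob_exceeds(threshold, x, n, a=1.0, b=1.0):
    """Posterior confidence that p > threshold after x AEs in n patients."""
    return 1.0 - beta_cdf(threshold, a + x, b + n - x)

# e.g. 2 AEs among 30 patients: how confident are we that p exceeds 5%?
conf = prob_exceeds(0.05, x=2, n=30)
```

Item (3) of the abstract then amounts to stepping x upward until `prob_exceeds` first crosses the desired confidence level.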
Selenium isotope ratios as indicators of selenium sources and oxyanion reduction
Johnson, T.M.; Herbel, M.J.; Bullen, T.D.; Zawislanski, P.T.
1999-01-01
Selenium stable isotope ratio measurements should serve as indicators of sources and biogeochemical transformations of Se. We report measurements of Se isotope fractionation during selenate reduction, selenite sorption, oxidation of reduced Se in soils, and Se volatilization by algae and soil samples. These results, combined with previous work with Se isotopes, indicate that reduction of soluble oxyanions is the dominant cause of Se isotope fractionation. Accordingly, Se isotope ratios should be useful as indicators of oxyanion reduction, which can transform mobile species to forms that are less mobile and less bioavailable. Additional investigations of Se isotope fractionation are needed to confirm this preliminary assessment. We have developed a new method for measurement of natural Se isotope ratio variation which requires less than 500 ng Se per analysis and yields ±0.2‰ precision on 80Se/76Se. A double isotope spike technique corrects for isotopic fractionation during sample preparation and mass spectrometry. The small minimum sample size is important, as Se concentrations are often below 1 ppm in solids and 1 μg/L in fluids. The Se purification process is rapid and compatible with various sample matrices, including acidic rock or sediment digests.
Selenium isotope ratios as indicators of selenium sources and oxyanion reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, T.M.; Herbel, M.J.; Bullen, T.D.
1999-09-01
Selenium stable isotope ratio measurements should serve as indicators of sources and biogeochemical transformations of Se. The authors report measurements of Se isotope fractionation during selenate reduction, selenite sorption, oxidation of reduced Se in soils, and Se volatilization by algae and soil samples. These results, combined with previous work with Se isotopes, indicate that reduction of soluble oxyanions is the dominant cause of Se isotope fractionation. Accordingly, Se isotope ratios should be useful as indicators of oxyanion reduction, which can transform mobile species to forms that are less mobile and less bioavailable. Additional investigations of Se isotope fractionation are needed to confirm this preliminary assessment. The authors have developed a new method for measurement of natural Se isotope ratio variation which requires less than 500 ng Se per analysis and yields ±0.2‰ precision on 80Se/76Se. A double isotope spike technique corrects for isotopic fractionation during sample preparation and mass spectrometry. The small minimum sample size is important, as Se concentrations are often below 1 ppm in solids and 1 μg/L in fluids. The Se purification process is rapid and compatible with various sample matrices, including acidic rock or sediment digests.
Estimated Mid-Infrared (200-2000 cm-1) Optical Constants of Some Silica Polymorphs
NASA Astrophysics Data System (ADS)
Glotch, Timothy; Rossman, G. R.; Michalski, J. R.
2006-09-01
We use Lorentz-Lorenz dispersion analysis to model the mid-infrared (200-2000 cm-1) optical constants of opal-A, opal-CT, and tridymite. These minerals, which are all polymorphs of silica (SiO2), are potentially important in the analysis of thermal emission spectra acquired by the Mars Global Surveyor Thermal Emission Spectrometer (MGS-TES) and Mars Exploration Rover Mini-TES instruments in orbit and on the surface of Mars, as well as emission spectra of planetary disks and of dust and debris clouds in young solar systems acquired by telescopes. Mineral samples were crushed, washed, and sieved, and emissivity spectra of the >100 μm size fraction were acquired at Arizona State University's emissivity spectroscopy laboratory; the spectra and optical constants are therefore representative of all crystal orientations. Ideally, emissivity or reflectance measurements of single polished crystals or of fine powders pressed into compact disks are used to determine mid-infrared optical constants, since such surfaces eliminate or minimize multiple reflections, providing a specular surface. Our measurements, however, likely produce a reasonable approximation of specular emissivity or reflectance, as the minimum particle size is greater than the maximum wavelength of light measured. Future work will include measurement of pressed disks of powdered samples in emission and reflection and, when possible, of small single crystals under an IR reflectance microscope, which will allow us to assess the variability of spectra and optical constants under different sample preparation and measurement conditions.
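The classical dispersion analysis behind this kind of work sums Lorentz oscillators into a complex dielectric function and takes its square root to get n and k. The sketch below shows that machinery only; the single oscillator's strength, center, and width are hypothetical placeholders, not the fitted values for opal-A, opal-CT, or tridymite.

```python
# Lorentz-oscillator model for mid-IR optical constants (illustrative sketch;
# the oscillator parameters are made-up placeholders, not fitted silica values).
import cmath

def dielectric(nu, eps_inf, oscillators):
    """eps(nu) for oscillators given as (strength S, center nu0, width gamma), cm^-1."""
    eps = complex(eps_inf, 0.0)
    for s, nu0, gamma in oscillators:
        eps += s * nu0**2 / (nu0**2 - nu**2 - 1j * gamma * nu)
    return eps

def optical_constants(nu, eps_inf, oscillators):
    """Return (n, k) from the principal square root of eps(nu)."""
    m = cmath.sqrt(dielectric(nu, eps_inf, oscillators))
    return m.real, m.imag

def normal_reflectance(nu, eps_inf, oscillators):
    """Fresnel reflectance at normal incidence from the complex index."""
    m = cmath.sqrt(dielectric(nu, eps_inf, oscillators))
    return abs((m - 1.0) / (m + 1.0))**2

# one Si-O-stretch-like oscillator near 1100 cm^-1 (illustrative numbers)
osc = [(0.7, 1100.0, 40.0)]
n, k = optical_constants(900.0, 2.4, osc)
```

Fitting consists of adjusting the oscillator parameters until the modeled reflectance (or emissivity, via Kirchhoff's law) matches the measured spectrum.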
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this, we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. 
We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to become involved in recovery planning in order to improve access to quantitative data. PMID:26479531
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimization of probability of detection (POD) demonstration experiments that use the point estimate method. The optimization provides an acceptable value of the probability of passing the demonstration (PPD) and an acceptable value of the probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures; it uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with a minimum 90% probability of detection at 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, the 90% probability flaw size, needed to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, and the difference between the median or average of the 29 flaw sizes and α90 is expressed likewise. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes is always larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet requirements on the minimum required PPD, the maximum allowable POF, the flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing the flaw sizes in the point estimate demonstration flaw set.
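The binomial backbone of the 29-flaw rule can be sketched in a few lines. The logistic POD curve and its parameters below are hypothetical illustrations of my own, not NASA's qualification data; only the 0.9^29 < 0.05 relation comes directly from the binomial point-estimate logic.

```python
# Sketch of the 29-of-29 point-estimate demonstration logic.
# The logistic POD curve (a50, slope) is a hypothetical assumption.
from math import exp

def pod(a, a50=1.0, slope=8.0):
    """Hypothetical logistic POD curve: detection probability vs flaw size a."""
    return 1.0 / (1.0 + exp(-slope * (a - a50)))

def prob_pass_demo(flaw_sizes):
    """PPD under a 29/29 rule: probability that every flaw in the set is detected."""
    p = 1.0
    for a in flaw_sizes:
        p *= pod(a)
    return p

# The 90/95 backbone: if each flaw has exactly 90% POD, passing happens with
# probability 0.9**29 ~ 0.047 < 5%, so a full 29/29 result demonstrates
# 90% POD at 95% confidence.
ppd_at_a90 = 0.90 ** 29
```

Optimizing the flaw set then means shrinking the flaw sizes (lowering each `pod(a)`) while keeping `prob_pass_demo` above the required PPD.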
NASA Technical Reports Server (NTRS)
Corbett, Lee B.; Bierman, Paul R.; Graly, Joseph A.; Neumann, Thomas A.; Rood, Dylan H.
2013-01-01
High-latitude landscape evolution processes have the potential to preserve old, relict surfaces through burial by cold-based, nonerosive glacial ice. To investigate landscape history and age in the high Arctic, we analyzed in situ cosmogenic 10Be and 26Al in 33 rocks from Upernavik, northwest Greenland. We sampled adjacent bedrock-boulder pairs along a 100 km transect at elevations up to 1000 m above sea level. Bedrock samples gave significantly older apparent exposure ages than corresponding boulder samples, and minimum limiting ages increased with elevation. Two-isotope (26Al/10Be) calculations on 20 of the 33 samples yielded minimum limiting exposure durations up to 112 k.y., minimum limiting burial durations up to 900 k.y., and minimum limiting total histories up to 990 k.y. The prevalence of 10Be and 26Al inherited from previous periods of exposure, especially in bedrock samples at high elevation, indicates that these areas record long and complex surface exposure histories, including significant periods of burial with little subglacial erosion. The long total histories suggest that these high-elevation surfaces were largely preserved beneath cold-based, nonerosive ice or snowfields for at least the latter half of the Quaternary. Because of high concentrations of inherited nuclides, only the six youngest boulder samples appear to record the timing of ice retreat. These six samples suggest deglaciation of the Upernavik coast at 11.3 +/- 0.5 ka (average +/- 1 standard deviation). There is no difference in deglaciation age along the 100 km sample transect, indicating that the ice-marginal position retreated rapidly, at rates of approximately 120 m/yr.
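The two-isotope burial logic rests on 26Al decaying faster than 10Be, so the 26Al/10Be ratio falls during burial. A back-of-the-envelope sketch, with rounded literature-style half-lives and a surface production ratio that are my assumptions, not the paper's calibration:

```python
# Minimum burial duration from a lowered 26Al/10Be ratio, assuming no
# post-burial production. Half-lives and R0 are rounded assumed values.
import math

L10 = math.log(2) / 1.387e6   # 10Be decay constant, 1/yr (assumed half-life)
L26 = math.log(2) / 7.05e5    # 26Al decay constant, 1/yr (assumed half-life)
R0 = 6.75                     # assumed surface-production 26Al/10Be ratio

def min_burial_duration(ratio_measured):
    """Years of burial needed to lower the ratio from R0 to the measured value."""
    # R(t) = R0 * exp(-(L26 - L10) * t)  =>  t = ln(R0 / R) / (L26 - L10)
    return math.log(R0 / ratio_measured) / (L26 - L10)
```

A measured ratio of about 5, for example, implies burial on the order of several hundred thousand years, the same scale as the minimum burial durations quoted in the abstract.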
A probabilistic asteroid impact risk model: assessment of sub-300 m impacts
NASA Astrophysics Data System (ADS)
Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.
2017-06-01
A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
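The Monte Carlo structure described above can be sketched as a sampling loop over uncertain impactor properties. Everything below is a toy placeholder of my own: the input distributions, the cube-root damage-radius scaling, and the aggregate metric are illustrative assumptions, not the published PAIR model.

```python
# Toy Monte Carlo sketch of a PAIR-style risk loop (structure only; the
# distributions and the damage scaling are placeholder assumptions).
import math
import random

def sample_scenario(rng):
    """Draw one hypothetical impactor and compute a placeholder damage radius."""
    diameter = rng.uniform(20.0, 300.0)              # m
    density = rng.uniform(1500.0, 3500.0)            # kg/m^3
    velocity = rng.uniform(11_000.0, 30_000.0)       # m/s
    mass = density * (math.pi / 6.0) * diameter**3   # sphere mass
    energy_mt = 0.5 * mass * velocity**2 / 4.184e15  # kinetic energy, Mt TNT
    damage_radius_km = 2.0 * energy_mt ** (1.0 / 3.0)  # toy cube-root yield scaling
    return diameter, damage_radius_km

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(10_000)]
# a simple aggregate: fraction of sampled impacts with damage radius > 10 km
frac_damaging = sum(1 for _, r in scenarios if r > 10.0) / len(scenarios)
```

In the real model each scenario would also include atmospheric entry, fragmentation, local population exposure, and strike probability before aggregation; the point here is only the shape of the sampling-and-aggregation loop.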
Henin, Simon; Fein, Dovid; Smouha, Eric; Parra, Lucas C
2016-01-01
Tinnitus correlates with elevated hearing thresholds and reduced cochlear compression. We hypothesized that reduced peripheral input leads to elevated neuronal gain resulting in the perception of a phantom sound. The purpose of this pilot study was to test whether compensating for this peripheral deficit could reduce the tinnitus percept acutely using customized auditory stimulation. To further enhance the effects of auditory stimulation, this intervention was paired with high-definition transcranial direct current stimulation (HD-tDCS). A randomized, sham-controlled, single-blind study was conducted in a clinical setting on adult participants with chronic tinnitus (n = 14). Compensatory auditory stimulation (CAS) and HD-tDCS were administered either individually or in combination in order to assess the effects of both interventions on tinnitus perception. CAS consisted of sound exposure typical of daily living (a 20-minute sound-track of a TV show), which was adapted with compressive gain to compensate for deficits in each subject's individual audiogram. Minimum masking levels and a visual analog scale were used to assess the strength of the tinnitus percept immediately before and after the treatment intervention. CAS reduced minimum masking levels, and visual analog scale scores trended towards improvement. Effects of HD-tDCS could not be resolved with the current sample size. The results of this pilot study suggest that providing tailored auditory stimulation with frequency-specific gain and compression may alleviate tinnitus in a clinical population. Further experimentation with longer interventions is warranted in order to optimize effect sizes.
Foreshocks during the nucleation of stick-slip instability
McLaskey, Gregory C.; Kilgore, Brian D.
2013-01-01
We report on laboratory experiments which investigate interactions between aseismic slip, stress changes, and seismicity on a critically stressed fault during the nucleation of stick-slip instability. We monitor quasi-static and dynamic changes in local shear stress and fault slip with arrays of gages deployed along a simulated strike-slip fault (2 m long and 0.4 m deep) in a saw cut sample of Sierra White granite. With 14 piezoelectric sensors, we simultaneously monitor seismic signals produced during the nucleation phase and subsequent dynamic rupture. We observe localized aseismic fault slip in an approximately meter-sized zone in the center of the fault, while the ends of the fault remain locked. Clusters of high-frequency foreshocks (Mw ~ −6.5 to −5.0) can occur in this slowly slipping zone 5–50 ms prior to the initiation of dynamic rupture; their occurrence appears to be dependent on the rate at which local shear stress is applied to the fault. The meter-sized nucleation zone is generally consistent with theoretical estimates, but source radii of the foreshocks (2 to 70 mm) are 1 to 2 orders of magnitude smaller than the theoretical minimum length scale over which earthquake nucleation can occur. We propose that frictional stability and the transition between seismic and aseismic slip are modulated by local stressing rate and that fault sections, which would typically slip aseismically, may radiate seismic waves if they are rapidly stressed. Fault behavior of this type may provide physical insight into the mechanics of foreshocks, tremor, repeating earthquake sequences, and a minimum earthquake source dimension.
The Minimum Wage and the Employment of Teenagers. Recent Research.
ERIC Educational Resources Information Center
Fallick, Bruce; Currie, Janet
A study used individual-level data from the National Longitudinal Survey of Youth to examine the effects of changes in the federal minimum wage on teenage employment. Individuals in the sample were classified as either likely or unlikely to be affected by these increases in the federal minimum wage on the basis of their wage rates and industry of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter
2016-07-26
This paper proposes a new strategy to achieve voltage regulation in distributed power systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. Reactive power control alone is not always an optimal solution for voltage regulation, because the R/X ratio is large in distribution systems. In this paper, the minimum size and the best place of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI) based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
Asteroid size distributions for the main belt and for asteroid families
NASA Astrophysics Data System (ADS)
Kazantzev, A.; Kazantzeva, L.
2017-12-01
The asteroid-size distribution for the Eos family was constructed using the WISE database, which contains the albedo p and the size D of over 80,000 asteroids. The b parameter of the power-law dependence has a minimum at intermediate (average) asteroid sizes of the family. A similar dependence b(D) exists for the whole asteroid belt. An assumption is made that the formation mechanisms of the asteroid belt as a whole and of separate families may be similar.
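The b exponent of a cumulative size distribution N(>D) ∝ D^−b can be estimated from a diameter sample with the Pareto maximum-likelihood estimator. The sketch below runs on synthetic diameters with a known exponent; it is an illustration of the estimator, not the WISE catalogue analysis itself.

```python
# MLE for the power-law exponent b of a size distribution N(>D) ~ D^-b,
# demonstrated on synthetic Pareto data (not the WISE catalogue).
import math
import random

def fit_power_law_b(diameters, d_min):
    """Pareto-tail maximum-likelihood estimate of b, using sizes >= d_min."""
    tail = [d for d in diameters if d >= d_min]
    return len(tail) / sum(math.log(d / d_min) for d in tail)

# synthetic Pareto-distributed diameters with a known exponent b = 2
rng = random.Random(0)
true_b, d_min = 2.0, 1.0
sample = [d_min * (1.0 - rng.random()) ** (-1.0 / true_b) for _ in range(50_000)]
b_hat = fit_power_law_b(sample, d_min)
```

A b(D) curve like the one the abstract describes comes from repeating this fit over sliding diameter windows and plotting the local exponent against the window's central size.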
Wang, Rong
2015-01-01
In real-world applications, face images vary with illumination, facial expression, and pose, so more training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine the two kinds of samples into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
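A minimal sketch of the idea: horizontally mirror each training face, add the mirrors as virtual samples, and train a ridge-regularized least-squares (MSE) classifier on the augmented set. The tiny 2x2 "images", the ridge term, and the one-hot target coding are my own illustrative assumptions, not the paper's exact setup.

```python
# Mirror-face augmentation + least-squares (MSE) classification sketch.
# The 2x2 "faces" and the small ridge term are illustrative assumptions.
def mirror(img, w, h):
    """Horizontally flip an image stored row-major as a flat list."""
    return [img[r * w + (w - 1 - c)] for r in range(h) for c in range(w)]

def solve(A, B):
    """Solve A X = B (A: n x n, B: n x m) by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + brow[:] for row, brow in zip(A, B)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        d = M[i][i]
        M[i] = [v / d for v in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0.0:
                f = M[r][i]
                M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return [row[n:] for row in M]

def msec_train(samples, labels, n_classes, lam=1e-3):
    """Fit W minimizing ||XW - Y||^2 + lam*||W||^2 with one-hot targets Y."""
    X = [list(s) + [1.0] for s in samples]   # append a bias feature
    Y = [[1.0 if lab == c else 0.0 for c in range(n_classes)] for lab in labels]
    d = len(X[0])
    XtX = [[sum(x[i] * x[j] for x in X) + (lam if i == j else 0.0)
            for j in range(d)] for i in range(d)]
    XtY = [[sum(x[i] * y[c] for x, y in zip(X, Y)) for c in range(n_classes)]
           for i in range(d)]
    return solve(XtX, XtY)

def msec_predict(W, s):
    x = list(s) + [1.0]
    scores = [sum(x[i] * W[i][c] for i in range(len(x))) for c in range(len(W[0]))]
    return max(range(len(scores)), key=scores.__getitem__)

# two tiny 2x2 "faces": class 0 bright on top, class 1 bright on bottom
w, h = 2, 2
train = [[0.9, 0.8, 0.1, 0.2], [0.1, 0.2, 0.9, 0.8]]
labels = [0, 1]
# augment with mirror faces as virtual training samples
aug = train + [mirror(s, w, h) for s in train]
aug_labels = labels + labels
W = msec_train(aug, aug_labels, 2)
pred = msec_predict(W, [0.85, 0.75, 0.15, 0.25])
```

The augmentation doubles the effective training set at zero acquisition cost, which is exactly the lever the abstract pulls against the small-sample problem.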
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building up a TG-71-based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements, its calculation accuracy, and the measurement time it requires. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20, and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15, and 20 cm, up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors, and effective SSDs, which were then converted to air-gap factors for SSDs of 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements per electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those obtained with the complete data set: the PDD difference is within 1 mm, and the calculated MU difference is less than 1.5%. Conclusion: The data set measured for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on the various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
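The gap-filling step, approximating unmeasured points with linear or polynomial fits, can be sketched with a small least-squares polynomial fit. The cutout sides and output-factor values below are invented for illustration; they are not the paper's measured data.

```python
# Least-squares polynomial gap-filling for unmeasured dosimetric points
# (illustrative sketch; the cutout-factor values are made-up numbers).
def polyfit(xs, ys, deg):
    """Coefficients (c0 + c1*x + ...) via normal equations + Gaussian elimination."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for i in range(n):                       # forward elimination with pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def polyval(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

# measured cutout factors (cutout side in cm -> relative output), invented values
sides = [2.0, 3.0, 4.0, 6.0, 10.0]
factor = [0.92, 0.96, 0.98, 0.995, 1.0]
coef = polyfit(sides, factor, 2)
missing = polyval(coef, 5.0)   # estimate for an unmeasured 5 cm cutout
```

Each dosimetric quantity in the minimum data set would get its own fit (linear or low-order polynomial, per the abstract) over the retained measurement points.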
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-06
... Index, Equity and Currency Options) to extend a pilot program that eliminates minimum value sizes for... FLEX Options, FLEX currency options are also traded on the Exchange. These flexible index, equity, and currency options provide investors the ability to customize basic option features including size...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-07
... Index, Equity and Currency Options) to extend a pilot program that eliminates minimum value sizes for... FLEX Options, FLEX currency options are also traded on the Exchange. These flexible index, equity, and currency options provide investors the ability to customize basic option features including size...
Crown size relationships for black willow in the Lower Mississippi Alluvial Valley
Jamie L. Schuler; Bradley Woods; Joshua Adams; Ray Souter
2015-01-01
Growing space requirements derived from maximum and minimum crown sizes have been identified for many southern hardwood species. These requirements help managers assess stocking levels, schedule intermediate treatments, and even assist in determining planting densities. Throughout the Mississippi Alluvial Valley, black willow (Salix nigra Marsh.) stands are common...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-15
... the minimum quotation sizes (or ``tier sizes'') for OTC equity securities to, among other things... depth (dollar value of shares) at the inside. Amendment No. 2 specified, among other things, that: (1... which requires, among other things, that FINRA rules must be designed to prevent fraudulent and...