An aftereffect of adaptation to mean size
Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David
2013-01-01
The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system along a single dimension, like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is an adaptable, and therefore explicitly encoded, dimension of visual scenes. PMID:24348083
A generalized memory test algorithm
NASA Technical Reports Server (NTRS)
Milner, E. J.
1982-01-01
A general algorithm for testing digital computer memory is presented. The test checks that (1) every bit can be cleared and set in each memory word, and (2) bits are not erroneously cleared and/or set elsewhere in memory at the same time. The algorithm can be applied to any size memory block and any size memory word. It is concise and efficient, requiring very few cycles through memory. For example, a test of 16-bit-word-size memory requires only 384 cycles through memory. Approximately 15 seconds were required to test a 32K block of such memory, using a microcomputer having a cycle time of 133 nanoseconds.
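The abstract describes the algorithm only at a high level and the cycle-optimized NTRS procedure is not reproduced here. As a rough illustration of the two checks it names (every bit of every word can be set and cleared, and no other word is disturbed), here is a minimal Python sketch over a simulated memory block; the word size, block size, and exhaustive re-scan are illustrative assumptions, not the report's method.

```python
# Minimal sketch of the two checks named in the abstract, on a simulated block.
# Deliberately naive (it re-scans the whole block after each word); it does not
# reproduce the report's cycle-efficient algorithm.

def test_memory_block(memory, word_bits=16):
    """Return True if every bit of every word can be set and cleared
    without disturbing any other word in the block."""
    for address in range(len(memory)):
        snapshot = list(memory)                     # to detect disturbance elsewhere
        for bit in range(word_bits):
            memory[address] = 1 << bit              # set the bit
            if memory[address] != 1 << bit:
                return False
            memory[address] = 0                     # clear the bit
            if memory[address] != 0:
                return False
        if any(memory[a] != snapshot[a] for a in range(len(memory)) if a != address):
            return False                            # another word was corrupted
    return True

if __name__ == "__main__":
    block = [0] * 1024                              # simulated 1K block of 16-bit words
    print("memory OK:", test_memory_block(block, word_bits=16))
```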
Analysis of Duplicated Multiple-Samples Rank Data Using the Mack-Skillings Test.
Carabante, Kennet Mariano; Alonso-Marenco, Jose Ramon; Chokumnoyporn, Napapan; Sriwattana, Sujinda; Prinyawiwatkul, Witoon
2016-07-01
Appropriate analysis for duplicated multiple-samples rank data is needed. This study compared analysis of duplicated rank preference data using the Friedman versus Mack-Skillings tests. Panelists (n = 125) ranked 2 orange juice sets twice: a different-samples set (100%, 70%, vs. 40% juice) and a similar-samples set (100%, 95%, vs. 90%). These 2 sample sets were designed to give contrasting differences in preference. For each sample set, rank sum data were obtained from (1) averaged rank data of each panelist from the 2 replications (n = 125), (2) rank data of all panelists from each of the 2 separate replications (n = 125 each), (3) joint rank data of all panelists from the 2 replications (n = 125), and (4) rank data of all panelists pooled from the 2 replications (n = 250); rank data (1), (2), and (4) were separately analyzed by the Friedman test, while those from (3) were analyzed by the Mack-Skillings test. The effect of sample size (n = 10 to 125) was evaluated. For the similar-samples set, higher variations in rank data from the 2 replications were observed; therefore, results of the main effects were more inconsistent among methods and sample sizes. Regardless of analysis method, the larger the sample size, the higher the χ² value and the lower the P-value (testing H0: all samples are not different). Analyzing rank data (2) separately by replication yielded inconsistent conclusions across sample sizes; hence this method is not recommended. The Mack-Skillings test was more sensitive than the Friedman test. Furthermore, it takes into account within-panelist variations and is more appropriate for analyzing duplicated rank data. © 2016 Institute of Food Technologists®
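SciPy does not ship a Mack-Skillings implementation, so the hedged sketch below only illustrates how the averaged (method 1) and pooled (method 4) rank layouts described above would be fed to a Friedman test; the rank data are synthetic, not the study's.

```python
# Sketch: Friedman test on duplicated 3-sample rank data (synthetic ranks).
# SciPy has no Mack-Skillings test, so only the averaged (method 1) and
# pooled (method 4) layouts described above are illustrated here.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_panelists = 125
# two replications of ranks (1 = most preferred) for 3 juice samples
rep1 = np.array([rng.permutation([1, 2, 3]) for _ in range(n_panelists)])
rep2 = np.array([rng.permutation([1, 2, 3]) for _ in range(n_panelists)])

# method 1: average each panelist's ranks over the two replications
avg = (rep1 + rep2) / 2.0
print("averaged:", friedmanchisquare(avg[:, 0], avg[:, 1], avg[:, 2]))

# method 4: pool the two replications as 250 rankings
pooled = np.vstack([rep1, rep2])
print("pooled  :", friedmanchisquare(pooled[:, 0], pooled[:, 1], pooled[:, 2]))
```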
NASA Astrophysics Data System (ADS)
Chmela, Jiří; Harding, Michael E.
2018-06-01
Optimised auxiliary basis sets for lanthanide atoms (Ce to Lu) for four basis sets of the Karlsruhe error-balanced segmented contracted def2 series (SVP, TZVP, TZVPP and QZVPP) are reported. These auxiliary basis sets enable the use of the resolution-of-the-identity (RI) approximation in post-Hartree-Fock methods such as second-order perturbation theory (MP2) and coupled cluster (CC) theory. The auxiliary basis sets are tested on an enlarged set of about a hundred molecules, where the test criterion is the size of the RI error in MP2 calculations. Our tests also show that the same auxiliary basis sets can be used together with different effective core potentials. With these auxiliary basis sets, calculations of MP2 and CC quality can now be performed efficiently on medium-sized molecules containing lanthanides.
Size and Strength: Do We Need Both to Measure Vocabulary Knowledge?
ERIC Educational Resources Information Center
Laufer, B.; Elder, C.; Hill, K.; Congdon, P.
2004-01-01
This article describes the development and validation of a test of vocabulary size and strength. The first part of the article sets out the theoretical rationale for the test, and describes how the size and strength constructs have been conceptualized and operationalized. The second part of the article focusses on the process of test validation,…
16 CFR § 1633.4 - Prototype testing requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
.../foundation length and width, not depth (e.g., twin, queen, king); (2) Ticking, unless the ticking of the... § 1633.3(b). (c) All tests must be conducted on specimens that are no smaller than a twin size, unless the largest size mattress set produced is smaller than a twin size, in which case the largest size...
16 CFR 1633.4 - Prototype testing requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
.../foundation length and width, not depth (e.g., twin, queen, king); (2) Ticking, unless the ticking of the... § 1633.3(b). (c) All tests must be conducted on specimens that are no smaller than a twin size, unless the largest size mattress set produced is smaller than a twin size, in which case the largest size...
16 CFR 1633.4 - Prototype testing requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
.../foundation length and width, not depth (e.g., twin, queen, king); (2) Ticking, unless the ticking of the... § 1633.3(b). (c) All tests must be conducted on specimens that are no smaller than a twin size, unless the largest size mattress set produced is smaller than a twin size, in which case the largest size...
16 CFR 1633.4 - Prototype testing requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
.../foundation length and width, not depth (e.g., twin, queen, king); (2) Ticking, unless the ticking of the... § 1633.3(b). (c) All tests must be conducted on specimens that are no smaller than a twin size, unless the largest size mattress set produced is smaller than a twin size, in which case the largest size...
Sung, Kyongje
2008-12-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.
Hydraulic Performance of Set-Back Curb Inlets
DOT National Transportation Integrated Search
1998-06-01
The objective of this study was to develop hydraulic design charts for the location and sizing of set-back curb inlets. An extensive program of hydraulic model testing was conducted to evaluate the performance of various inlet opening sizes. The grad...
Raychaudhuri, Soumya; Korn, Joshua M.; McCarroll, Steven A.; Altshuler, David; Sklar, Pamela; Purcell, Shaun; Daly, Mark J.
2010-01-01
Investigators have linked rare copy number variants (CNVs) to neuropsychiatric diseases, such as schizophrenia. One hypothesis is that CNV events cause disease by affecting genes with specific brain functions. Under these circumstances, we expect that CNV events in cases should impact brain-function genes more frequently than those events in controls. Previous publications have applied “pathway” analyses to genes within neuropsychiatric case CNVs to show enrichment for brain functions. While such analyses have been suggestive, they often have not rigorously compared the rates of CNVs impacting genes with brain function in cases to controls, and therefore do not address important confounders such as the large size of brain genes and overall differences in rates and sizes of CNVs. To demonstrate the potential impact of confounders, we genotyped rare CNV events in 2,415 unaffected controls with Affymetrix 6.0; we then applied standard pathway analyses using four sets of brain-function genes and observed an apparently highly significant enrichment for each set. The enrichment is simply driven by the large size of brain-function genes. Instead, we propose a case-control statistical test, cnv-enrichment-test, to compare the rate of CNVs impacting specific gene sets in cases versus controls. With simulations, we demonstrate that cnv-enrichment-test is robust to case-control differences in CNV size, CNV rate, and systematic differences in gene size. Finally, we apply cnv-enrichment-test to rare CNV events published by the International Schizophrenia Consortium (ISC). This approach reveals nominal evidence of case-association in the neuronal-activity and learning gene sets, but not in the other two examined gene sets. The neuronal-activity genes have been associated in a separate set of schizophrenia cases and controls; however, testing in independent samples is necessary to definitively confirm this association. Our method is implemented in the PLINK software package. PMID:20838587
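The published cnv-enrichment-test (a PLINK implementation that models CNV size and rate) is not reproduced here; the sketch below only illustrates the basic case-versus-control framing with a naive permutation test on synthetic carrier indicators, without the confounder adjustments the abstract emphasizes.

```python
# Naive permutation sketch of a case-control comparison of the rate of CNVs
# hitting a gene set. This is NOT the published cnv-enrichment-test (which
# adjusts for CNV size and rate); it only illustrates the case-vs-control framing.
import numpy as np

def enrichment_p(case_hits, control_hits, n_perm=10000, seed=1):
    """case_hits/control_hits: 1 if a subject carries a CNV overlapping the
    gene set, else 0. Returns a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    labels = np.r_[np.ones(len(case_hits)), np.zeros(len(control_hits))]
    hits = np.r_[case_hits, control_hits]
    observed = hits[labels == 1].mean() - hits[labels == 0].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(labels)
        count += (hits[perm == 1].mean() - hits[perm == 0].mean()) >= observed
    return (count + 1) / (n_perm + 1)

# toy data: 300 cases, 2415 controls with slightly different carrier rates
rng = np.random.default_rng(2)
cases = rng.binomial(1, 0.06, 300)
controls = rng.binomial(1, 0.04, 2415)
print("permutation p =", enrichment_p(cases, controls))
```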
Working memory for visual features and conjunctions in schizophrenia.
Gold, James M; Wilk, Christopher M; McMahon, Robert P; Buchanan, Robert W; Luck, Steven J
2003-02-01
The visual working memory (WM) storage capacity of patients with schizophrenia was investigated using a change detection paradigm. Participants were presented with 2, 3, 4, or 6 colored bars with testing of both single feature (color, orientation) and feature conjunction conditions. Patients performed significantly worse than controls at all set sizes but demonstrated normal feature binding. Unlike controls, patient WM capacity declined at set size 6 relative to set size 4. Impairments with subcapacity arrays suggest a deficit in task set maintenance: Greater impairment for supercapacity set sizes suggests a deficit in the ability to selectively encode information for WM storage. Thus, the WM impairment in schizophrenia appears to be a consequence of attentional deficits rather than a reduction in storage capacity.
The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory
ERIC Educational Resources Information Center
Sahin, Alper; Anil, Duygu
2017-01-01
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…
Bernstein, Joshua G. W.; Summers, Van; Iyer, Nandini; Brungart, Douglas S.
2012-01-01
Adaptive signal-to-noise ratio (SNR) tracking is often used to measure speech reception in noise. Because SNR varies with performance using this method, data interpretation can be confounded when measuring an SNR-dependent effect such as the fluctuating-masker benefit (FMB) (the intelligibility improvement afforded by brief dips in the masker level). One way to overcome this confound, and allow FMB comparisons across listener groups with different stationary-noise performance, is to adjust the response set size to equalize performance across groups at a fixed SNR. However, this technique is only valid under the assumption that changes in set size have the same effect on percentage-correct performance for different masker types. This assumption was tested by measuring nonsense-syllable identification for normal-hearing listeners as a function of SNR, set size and masker (stationary noise, 4- and 32-Hz modulated noise and an interfering talker). Set-size adjustment had the same impact on performance scores for all maskers, confirming the independence of FMB (at matched SNRs) and set size. These results, along with those of a second experiment evaluating an adaptive set-size algorithm to adjust performance levels, establish set size as an efficient and effective tool to adjust baseline performance when comparing effects of masker fluctuations between listener groups. PMID:23039460
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.
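As a minimal illustration of the repeated design/test setting discussed above (resampling design and test sets from one fixed bag of samples), here is a hedged sklearn sketch; it reproduces the situation in which the small-test-set bias arises but does not implement the paper's RIDT bias correction.

```python
# Minimal sketch of repeated design/test resampling from a fixed bag of samples.
# It only sets up the situation in which the small-test-set bias arises; the
# RIDT bias-correction of the paper is not implemented here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=60, n_features=20, random_state=0)
errors = []
for seed in range(50):                      # 50 repeated design/test cycles
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=10, stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    errors.append(1.0 - clf.score(X_te, y_te))

errors = np.array(errors)
print("mean error estimate:", errors.mean())
# naive variance across design sets; biased for small test sets (here 10 samples)
print("variance across design sets:", errors.var(ddof=1))
```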
Configuration and Sizing of a Test Fixture for Panels Under Combined Loads
NASA Technical Reports Server (NTRS)
Lovejoy, Andrew E.
2006-01-01
Future air and space structures are expected to utilize composite panels that are subjected to combined mechanical loads, such as bi-axial compression/tension, shear and pressure. Therefore, the ability to accurately predict the buckling and strength failures of such panels is important. While computational analysis can provide tremendous insight into panel response, experimental results are necessary to verify predicted performances of these panels to judge the accuracy of computational methods. However, application of combined loads is an extremely difficult task due to the complex test fixtures and set-up required. Presented herein is a comparison of several test set-ups capable of testing panels under combined loads. Configurations compared include a D-box, a segmented cylinder and a single panel set-up. The study primarily focuses on the preliminary sizing of a single panel test configuration capable of testing flat panels under combined in-plane mechanical loads. This single panel set-up appears to be best suited to the testing of both strength critical and buckling critical panels. Required actuator loads and strokes are provided for various square, flat panels.
Molecular Transporters for Desalination Applications
2014-08-02
Collaborative and commercially available state-of-the-art test ... zeolite-template-based synthesis. II. Summary of key results and challenges. For the ... size setting CNT diameter. The tightest distribution of SWCNTs reported (Lu group, Duke Univ.) was achieved by loading catalyst into zeolite, with the ... pore size nominally acting to set the size of catalyst on the surface. However, nanoparticles and CNTs grow on the surface of the zeolite, thus ...
Ranking metrics in gene set enrichment analysis: do they matter?
Zyla, Joanna; Marczyk, Michal; Weiner, January; Polanska, Joanna
2017-05-12
There exist many methods for describing the complex relation between changes of gene expression in molecular pathways or gene ontologies under different experimental conditions. Among them, Gene Set Enrichment Analysis seems to be one of the most commonly used (over 10,000 citations). An important parameter, which could affect the final result, is the choice of a metric for the ranking of genes. Applying a default ranking metric may lead to poor results. In this work, 28 benchmark data sets were used to evaluate the sensitivity and false positive rate of gene set analysis for 16 different ranking metrics, including new proposals. Furthermore, the robustness of the chosen methods to sample size was tested. Using the k-means clustering algorithm, a group of four metrics with the highest performance in terms of overall sensitivity, overall false positive rate and computational load was established, i.e. the absolute value of the Moderated Welch Test statistic, Minimum Significant Difference, the absolute value of the Signal-To-Noise ratio and the Baumgartner-Weiss-Schindler test statistic. In the case of false positive rate estimation, all selected ranking metrics were robust with respect to sample size. In the case of sensitivity, the absolute value of the Moderated Welch Test statistic and the absolute value of the Signal-To-Noise ratio gave stable results, while Baumgartner-Weiss-Schindler and Minimum Significant Difference showed better results for larger sample sizes. Finally, the Gene Set Enrichment Analysis method with all tested ranking metrics was parallelised and implemented in MATLAB, and is available at https://github.com/ZAEDPolSl/MrGSEA . Choosing a ranking metric in Gene Set Enrichment Analysis has a critical impact on the results of pathway enrichment analysis. The absolute value of the Moderated Welch Test has the best overall sensitivity and Minimum Significant Difference has the best overall specificity of gene set analysis. When the number of non-normally distributed genes is high, using the Baumgartner-Weiss-Schindler test statistic gives better outcomes. Also, it finds more enriched pathways than other tested metrics, which may induce new biological discoveries.
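As a concrete example of one of the ranking metrics named above, the sketch below computes an absolute signal-to-noise ratio per gene, |mean1 - mean2| / (sd1 + sd2), on toy expression data; this is only one of the 16 metrics compared, and the exact definitions used in the paper may differ in detail.

```python
# Sketch of one GSEA ranking metric: the absolute signal-to-noise ratio per gene.
# Toy data only; not the benchmark data sets or the full metric comparison.
import numpy as np

def abs_signal_to_noise(expr, groups):
    """expr: genes x samples matrix; groups: boolean array (True = group 1)."""
    g1, g2 = expr[:, groups], expr[:, ~groups]
    num = np.abs(g1.mean(axis=1) - g2.mean(axis=1))
    den = g1.std(axis=1, ddof=1) + g2.std(axis=1, ddof=1)
    return num / den

rng = np.random.default_rng(0)
expr = rng.normal(size=(1000, 20))                 # 1000 genes, 20 samples
groups = np.array([True] * 10 + [False] * 10)
ranking = np.argsort(-abs_signal_to_noise(expr, groups))   # best-ranked genes first
print("top 5 genes by |S2N|:", ranking[:5])
```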
Determinants of Awareness, Consideration, and Choice Set Size in University Choice.
ERIC Educational Resources Information Center
Dawes, Philip L.; Brown, Jennifer
2002-01-01
Developed and tested a model of students' university "brand" choice using five individual-level variables (ethnic group, age, gender, number of parents going to university, and academic ability) and one situational variable (duration of search) to explain variation in the sizes of awareness, consideration, and choice decision sets. (EV)
Samson, Shazwani; Basri, Mahiran; Fard Masoumi, Hamid Reza; Abdul Malek, Emilia; Abedi Karjiban, Roghayeh
2016-01-01
A predictive model of a virgin coconut oil (VCO) nanoemulsion system for the topical delivery of copper peptide (an anti-aging compound) was developed using an artificial neural network (ANN) to investigate the factors that influence particle size. Four independent variables, the amounts of VCO, Tween 80:Pluronic F68 (T80:PF68), xanthan gum and water, were the inputs, whereas particle size was taken as the response for the trained network. Genetic algorithms (GA) were used to model the data, which were divided into training sets, testing sets and validation sets. The model obtained indicated the high quality performance of the neural network and its capability to identify the critical composition factors for the VCO nanoemulsion. The main factor controlling the particle size was found to be xanthan gum (28.56%), followed by T80:PF68 (26.9%), VCO (22.8%) and water (21.74%). The formulation containing copper peptide was then successfully prepared using optimum conditions, and particle sizes of 120.7 nm were obtained. The final formulation exhibited a zeta potential lower than -25 mV and showed good physical stability towards a centrifugation test, a freeze-thaw cycle test and storage at 25°C and 45°C. PMID:27383135
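The study trained its network with a genetic algorithm on measured formulations, none of which is reproduced here. The sketch below only illustrates the general layout (four composition inputs, particle size as the response, training/testing/validation splits) using a standard sklearn MLP on synthetic data.

```python
# Generic sketch of an ANN particle-size model on synthetic composition data.
# The study used GA-based training; a plain sklearn MLP is used here only to
# illustrate the train/test/validation layout described in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# columns: VCO, T80:PF68, xanthan gum, water (arbitrary synthetic values)
X = rng.uniform(0, 1, size=(200, 4))
y = 100 + 60 * X[:, 2] + 40 * X[:, 1] + rng.normal(0, 5, 200)   # toy particle size (nm)

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=1)
X_test, X_val, y_test, y_val = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=1)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
model.fit(X_train, y_train)
print("test R^2      :", model.score(X_test, y_test))
print("validation R^2:", model.score(X_val, y_val))
```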
Tests for informative cluster size using a novel balanced bootstrap scheme.
Nevalainen, Jaakko; Oja, Hannu; Datta, Somnath
2017-07-20
Clustered data are often encountered in biomedical studies, and to date, a number of approaches have been proposed to analyze such data. However, the phenomenon of informative cluster size (ICS) is a challenging problem, and its presence has an impact on the choice of a correct analysis methodology. For example, Dutta and Datta (2015, Biometrics) presented a number of marginal distributions that could be tested. Depending on the nature and degree of informativeness of the cluster size, these marginal distributions may differ, as do the choices of the appropriate test. In particular, they applied their new test to a periodontal data set where the plausibility of the informativeness was mentioned, but no formal test for the same was conducted. We propose bootstrap tests for testing the presence of ICS. A balanced bootstrap method is developed to successfully estimate the null distribution by merging the re-sampled observations with closely matching counterparts. Relying on the assumption of exchangeability within clusters, the proposed procedure performs well in simulations even with a small number of clusters, at different distributions and against different alternative hypotheses, thus making it an omnibus test. We also explain how to extend the ICS test to a regression setting, thereby enhancing its practical utility. The methodologies are illustrated using the periodontal data set mentioned earlier. Copyright © 2017 John Wiley & Sons, Ltd.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-15
... proposed rules provide a clear set of guidelines for small businesses to understand and a bright-line test..., bright-line test for SBIR and STTR applicants to apply when determining eligibility with respect to size... owns 33% or more of the company) in order to create a bright-line test for applicants; (2) find...
Automated spot defect characterization in a field portable night vision goggle test set
NASA Astrophysics Data System (ADS)
Scopatz, Stephen; Ozten, Metehan; Aubry, Gilles; Arquetoux, Guillaume
2018-05-01
This paper discusses a new capability developed for, and results from, a field portable test set for Gen 2 and Gen 3 Image Intensifier (I2) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a Knife Edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain, while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is inherently a more repeatable process that ensures consistent spot detection independent of the operator. Results across several NVGs are presented.
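The paper's actual machine-vision pipeline and ring-zone scoring are not described in the abstract; the hedged OpenCV sketch below only illustrates the generic idea of sizing and locating dark spots via thresholding and connected components, on a synthetic image rather than real intensifier output.

```python
# Generic spot-sizing sketch: threshold a (synthetic) image and report the size
# and location of each dark spot via connected components. Not the test set's
# actual algorithm or scoring.
import numpy as np
import cv2

# synthetic 8-bit image: bright background with two dark spots
img = np.full((200, 200), 200, dtype=np.uint8)
cv2.circle(img, (60, 60), 4, 40, -1)
cv2.circle(img, (150, 120), 9, 40, -1)

# dark spots become foreground after inverse thresholding
_, binary = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

for i in range(1, n):                               # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    cx, cy = centroids[i]
    print(f"spot {i}: area={area} px, centre=({cx:.0f}, {cy:.0f})")
```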
Cameron, E Leslie; Tai, Joanna C; Eckstein, Miguel P; Carrasco, Marisa
2004-01-01
Adding distracters to a display impairs performance on visual tasks (i.e. the set-size effect). While keeping the display characteristics constant, we investigated this effect in three tasks: 2 target identification, yes-no detection with 2 targets, and 8-alternative localization. A Signal Detection Theory (SDT) model, tailored for each task, accounts for the set-size effects observed in identification and localization tasks, and slightly under-predicts the set-size effect in a detection task. Given that sensitivity varies as a function of spatial frequency (SF), we measured performance in each of these three tasks in neutral and peripheral precue conditions for each of six spatial frequencies (0.5-12 cpd). For all spatial frequencies tested, performance on the three tasks decreased as set size increased in the neutral precue condition, and the peripheral precue reduced the effect. Larger set-size effects were observed at low SFs in the identification and localization tasks. This effect can be described using the SDT model, but was not predicted by it. For each of these tasks we also established the extent to which covert attention modulates performance across a range of set sizes. A peripheral precue substantially diminished the set-size effect and improved performance, even at set size 1. These results provide support for distracter exclusion, and suggest that signal enhancement may also be a mechanism by which covert attention can impose its effect.
Ensemble coding remains accurate under object and spatial visual working memory load.
Epstein, Michael L; Emmanouil, Tatiana A
2017-10-01
A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.
Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies.
David, Maria Pamela C; Concepcion, Gisela P; Padlan, Eduardo A
2010-02-08
All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprised of 103 AM and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered. The development of this type of classifier has significant applications in evaluating engineered antibodies, and may be adapted for evaluating engineered proteins in general.
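The two classifiers compared above operate on encoded immunoglobulin sequences, which are not available here. As a hedged stand-in, the sketch below shows the leave-one-out evaluation itself, applied to a Gaussian naive Bayes model and a decision tree on synthetic feature vectors.

```python
# Sketch of leave-one-out (LOO) accuracy for a naive Bayes classifier and a
# decision tree, on synthetic feature vectors rather than encoded sequences.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=143, n_features=30, random_state=0)
loo = LeaveOneOut()

for name, clf in [("naive Bayes", GaussianNB()),
                  ("decision tree", DecisionTreeClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=loo).mean()
    print(f"{name:13s} LOO accuracy: {acc:.3f}")
```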
Hypervelocity Capability of the HYPULSE Shock-Expansion Tunnel for Scramjet Testing
NASA Technical Reports Server (NTRS)
Foelsche, Robert O.; Rogers, R. Clayton; Tsai, Ching-Yi; Bakos, Robert J.; Shih, Ann T.
2001-01-01
New hypervelocity capabilities for scramjet testing have recently been demonstrated in the HYPULSE Shock-Expansion Tunnel (SET). With NASA's continuing interests in scramjet testing at hypervelocity conditions (Mach 12 and above), a SET nozzle was designed and added to the HYPULSE facility. Results of tests conducted to establish SET operational conditions and facility nozzle calibration are presented and discussed for a Mach 15 (M15) flight enthalpy. The measurements and detailed computational fluid dynamics (CFD) calculations show the nozzle delivers a test gas with a sufficiently wide core size to be suitable for free-jet testing of scramjet engine models of similar scale as those tested in conventional low Mach number blow-down test facilities.
The effects of delay duration on visual working memory for orientation.
Shin, Hongsup; Zou, Qijia; Ma, Wei Ji
2017-12-01
We used a delayed-estimation paradigm to characterize the joint effects of set size (one, two, four, or six) and delay duration (1, 2, 3, or 6 s) on visual working memory for orientation. We conducted two experiments: one with delay durations blocked, another with delay durations interleaved. As dependent variables, we examined four model-free metrics of dispersion as well as precision estimates in four simple models. We tested for effects of delay time using analyses of variance, linear regressions, and nested model comparisons. We found significant effects of set size and delay duration on both model-free and model-based measures of dispersion. However, the effect of delay duration was much weaker than that of set size, dependent on the analysis method, and apparent in only a minority of subjects. The highest forgetting slope found in either experiment at any set size was a modest 1.14°/s. As secondary results, we found a low rate of nontarget reports, and significant estimation biases towards oblique orientations (but no dependence of their magnitude on either set size or delay duration). Relative stability of working memory even at higher set sizes is consistent with earlier results for motion direction and spatial frequency. We compare with a recent study that performed a very similar experiment.
The influence of perceptual load on age differences in selective attention.
Maylor, E A; Lavie, N
1998-12-01
The effect of perceptual load on age differences in visual selective attention was examined in 2 studies. In Experiment 1, younger and older adults made speeded choice responses indicating which of 2 target letters was present in a relevant set of letters in the center of the display while they attempted to ignore an irrelevant distractor in the periphery. The perceptual load of relevant processing was manipulated by varying the central set size. When the relevant set size was small, the adverse effect of an incompatible distractor was much greater for the older participants than for the younger ones. However, with larger relevant set sizes, this was no longer the case, with the distractor effect decreasing for older participants at lower levels of perceptual load than for younger ones. In Experiment 2, older adults were tested with the empty locations in the central set either unmarked (as in Experiment 1) or marked by small circles to form a group of 6 items irrespective of set size; the 2 conditions did not differ markedly, ruling out an explanation based entirely on perceptual grouping.
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
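The patented directed-DOE validation logic (optimal class widths, case numbers, validation instructions) is not reproduced here. For orientation only, the sketch below shows the kind of hit/miss input data the abstract describes and a basic logistic POD curve fitted against flaw size; the functional form and the data are assumptions.

```python
# Basic hit/miss probability-of-detection (POD) curve: logistic regression of
# detection outcomes against flaw size. Only the input-data side of the problem;
# the directed-DOE validation steps are not implemented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
size = rng.uniform(0.5, 5.0, 200)                      # flaw size, arbitrary units
p_true = 1 / (1 + np.exp(-(2.5 * (size - 2.0))))       # assumed true detectability
hit = rng.binomial(1, p_true)                          # observed hit/miss data

model = LogisticRegression().fit(size.reshape(-1, 1), hit)
grid = np.linspace(0.5, 5.0, 10).reshape(-1, 1)
pod = model.predict_proba(grid)[:, 1]
for s, p in zip(grid.ravel(), pod):
    print(f"size {s:.1f}: POD = {p:.2f}")
```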
NASA Astrophysics Data System (ADS)
Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes
2004-12-01
In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
Is overestimation of body size associated with neuropsychological weaknesses in anorexia nervosa?
Øverås, Maria; Kapstad, Hilde; Brunborg, Cathrine; Landrø, Nils Inge; Rø, Øyvind
2017-03-01
Recent research indicates some evidence of neuropsychological weaknesses in visuospatial memory, central coherence and set-shifting in adults with anorexia nervosa (AN). The growing interest in the neuropsychological functioning of patients with AN is based upon the assumption that neuropsychological weaknesses contribute to the clinical features of the illness. However, due to a paucity of research on the connection between neuropsychological difficulties and the clinical features of AN, this link remains hypothetical. The main objective of this study was to explore the association between specific areas of neuropsychological functioning and body size estimation in patients with AN and healthy controls. The sample consisted of 36 women diagnosed with AN and 34 healthy female controls. Participants were administered the Continuous Visual Memory Test and the recall trials of the Rey Complex Figure Test to assess visual memory. Central coherence was assessed using the copy trial of the Rey Complex Figure Test, and the Wisconsin Card Sorting Test was used to assess set-shifting. Body size estimation was assessed with a computerized morphing programme. The analyses showed no significant correlations between any of the neuropsychological measures and body size estimation. The results suggest that there is no association between these areas of neuropsychological difficulties and body size estimation among patients with AN. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.
Child t-shirt size data set from 3D body scanner anthropometric measurements and a questionnaire.
Pierola, A; Epifanio, I; Alemany, S
2017-04-01
A dataset from a fit assessment study in children is presented. Anthropometric measurements of 113 children were obtained using a 3D body scanner. Children tested a t-shirt in different sizes, with a different model for boys and girls, and the fit was assessed by an expert. This expert labeled the fit as 0 (correct), -1 (if the garment was small for that child), or 1 (if the garment was large for that child) in an ordered factor called Size-fit. Moreover, the fit was numerically assessed from 1 (very poor fit) to 10 (perfect fit) in a variable called Expert evaluation. This data set contains the differences between the reference mannequin of the evaluated size and the child's anthropometric measurements for 27 variables. Besides these variables, the data set also includes the gender, the size evaluated, and the size recommended by the expert, including whether an intermediate, but nonexistent, size between two consecutive sizes would have been the right size. In total, there are 232 observations. The analysis of these data can be found in Pierola et al. (2016) [2].
Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo
2010-05-01
The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as the test endpoint. Control size increase after 48 h incubation at 20 °C must meet an acceptability criterion of 218 µm. In order to avoid false positives, minimums of 32‰ salinity, pH 7 and 2 mg/L oxygen, and a maximum of 40 µg/L NH3 (NOEC), are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 °C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
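The degree-day correction mentioned for in situ testing is a simple normalisation; a toy numerical sketch, with assumed temperature and exposure values that are not from the paper, is:

```python
# Degree-day normalisation of larval size increase for in situ SET exposures,
# using the 12 °C developmental threshold mentioned above (toy numbers).
temperature_c = 18.0       # assumed mean incubation temperature
exposure_days = 2.0        # assumed exposure duration
size_increase_um = 180.0   # assumed observed growth

degree_days = (temperature_c - 12.0) * exposure_days
growth_per_degree_day = size_increase_um / degree_days
print(f"{growth_per_degree_day:.1f} µm per degree-day")
```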
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
NASA Technical Reports Server (NTRS)
Crutcher, H. L.; Falls, L. W.
1976-01-01
Sets of experimentally determined or routinely observed data provide information about the past, present and, hopefully, future sets of similarly produced data. An infinite set of statistical models exists which may be used to describe the data sets. The normal distribution is one model. If it serves at all, it serves well. If a data set, or a transformation of the set, representative of a larger population can be described by the normal distribution, then valid statistical inferences can be drawn. There are several tests which may be applied to a data set to determine whether the univariate normal model adequately describes the set. The chi-square test based on Pearson's work in the late nineteenth and early twentieth centuries is often used. Like all tests, it has some weaknesses which are discussed in elementary texts. Extension of the chi-square test to the multivariate normal model is provided. Tables and graphs permit easier application of the test in the higher dimensions. Several examples, using recorded data, illustrate the procedures. Tests of maximum absolute differences, mean sum of squares of residuals, runs and changes of sign are included in these tests. Dimensions one through five with selected sample sizes 11 to 101 are used to illustrate the statistical tests developed.
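For the univariate case discussed above, a Pearson chi-square goodness-of-fit test of normality can be sketched as below (equal-probability binning, two estimated parameters); the multivariate extension, tables and graphs of the report are not reproduced, and the bin count is an arbitrary choice.

```python
# Sketch of a Pearson chi-square goodness-of-fit test of univariate normality:
# bin the data into equal-probability bins under a fitted normal, compare
# observed and expected counts, and use k - 3 degrees of freedom.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=101)
mu, sd = x.mean(), x.std(ddof=1)

k = 8                                               # number of equal-probability bins
interior = norm.ppf(np.arange(1, k) / k, loc=mu, scale=sd)   # interior bin edges
observed = np.bincount(np.searchsorted(interior, x), minlength=k)
expected = np.full(k, len(x) / k)

stat = ((observed - expected) ** 2 / expected).sum()
p_value = chi2.sf(stat, df=k - 3)                   # 2 parameters estimated from data
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
```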
Kursawe, Michael A; Zimmer, Hubert D
2015-06-01
We investigated the impact of perceptual processing demands on visual working memory of coloured complex random polygons during change detection. Processing load was assessed by pupil size (Exp. 1) and additionally by slow wave potentials (Exp. 2). Task difficulty was manipulated by presenting different set sizes (1, 2, 4 items) and by making different features (colour, shape, or both) task-relevant. Memory performance in the colour condition was better than in the shape and both conditions, which did not differ. Pupil dilation and the posterior N1 increased with set size independent of the type of feature. In contrast, slow waves and a posterior P2 component showed set size effects, but only if shape was task-relevant. In the colour condition slow waves did not vary with set size. We suggest that pupil size and N1 indicate different states of attentional effort corresponding to the number of presented items. In contrast, slow waves reflect processes related to encoding and maintenance strategies. The observation that their potentials vary with the type of feature (simple colour versus complex shape) indicates that perceptual complexity already influences encoding and storage, and not only the comparison of targets with memory entries at the moment of testing. Copyright © 2015 Elsevier B.V. All rights reserved.
How to test validity in orthodontic research: a mixed dentition analysis example.
Donatelli, Richard E; Lee, Shin-Jae
2015-02-01
The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
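As a hedged illustration of the comparison above, the sketch contrasts a single holdout split (the traditional simple validation) with leave-one-out cross-validation for a small regression problem; the data are synthetic stand-ins, not mixed dentition measurements.

```python
# Sketch comparing a single train/test split with leave-one-out cross-validation
# for a small regression data set (synthetic stand-in for mixed dentition data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(0, 0.5, 60)

# traditional simple validation: one holdout split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
simple_mae = mean_absolute_error(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))

# leave-one-out cross-validation over the whole sample
loo_mae = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error").mean()
print(f"simple validation MAE: {simple_mae:.3f}")
print(f"leave-one-out MAE    : {loo_mae:.3f}")
```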
Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness
NASA Technical Reports Server (NTRS)
Staats, Matt; Whalen, Michael W.; Heimdahl, Mats P. E.; Rajan, Ajitha
2010-01-01
In black-box testing, the tester creates a set of tests to exercise a system under test without regard to the internal structure of the system. Generally, no objective metric is used to measure the adequacy of black-box tests. In recent work, we have proposed three requirements coverage metrics, allowing testers to objectively measure the adequacy of a black-box test suite with respect to a set of requirements formalized as Linear Temporal Logic (LTL) properties. In this report, we evaluate the effectiveness of these coverage metrics with respect to fault finding. Specifically, we conduct an empirical study to investigate two questions: (1) do test suites satisfying a requirements coverage metric provide better fault finding than randomly generated test suites of approximately the same size?, and (2) do test suites satisfying a more rigorous requirements coverage metric provide better fault finding than test suites satisfying a less rigorous requirements coverage metric? Our results indicate (1) only one coverage metric proposed -- Unique First Cause (UFC) coverage -- is sufficiently rigorous to ensure test suites satisfying the metric outperform randomly generated test suites of similar size and (2) that test suites satisfying more rigorous coverage metrics provide better fault finding than test suites satisfying less rigorous coverage metrics.
Measuring and Specifying Combinatorial Coverage of Test Input Configurations
Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu
2015-01-01
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
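As a minimal illustration of (static) combinatorial coverage, the sketch below computes the fraction of all pairwise parameter-value combinations covered by a test set; the paper's formal metrics and the branch-coverage connection are not reproduced, and the domains and tests are made up.

```python
# Sketch of simple t-way (here pairwise) combinatorial coverage: the fraction of
# all parameter-value pairs that appear in at least one test. Illustrative only.
from itertools import combinations

def pairwise_coverage(tests, domains):
    """tests: list of tuples (one value per parameter); domains: list of value lists."""
    covered, total = set(), 0
    for (i, di), (j, dj) in combinations(enumerate(domains), 2):
        total += len(di) * len(dj)
        for t in tests:
            covered.add((i, t[i], j, t[j]))
    return len(covered) / total

domains = [[0, 1], [0, 1], [0, 1, 2]]               # 3 parameters (made-up domains)
tests = [(0, 0, 0), (1, 1, 1), (0, 1, 2), (1, 0, 2)]
print(f"pairwise coverage: {pairwise_coverage(tests, domains):.2f}")
```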
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model is comprised of quantitative structure-activity relationships, QSARs, based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizability, atomic radical superdelocalizability, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals. A QSAR for these chemicals was developed. This post-test-set modified model correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article. 12 refs., 3 tabs.
Size and emotion averaging: costs of dividing attention after all.
Brand, John; Oriet, Chris; Tottenham, Laurie Sykes
2012-03-01
Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.
Retest effects in working memory capacity tests: A meta-analysis.
Scharfen, Jana; Jansen, Katrin; Holling, Heinz
2018-06-15
The repeated administration of working memory capacity tests is common in clinical and research settings. For cognitive ability tests and different neuropsychological tests, meta-analyses have shown that they are prone to retest effects, which have to be accounted for when interpreting retest scores. Using a multilevel approach, this meta-analysis aims at showing the reproducibility of retest effects in working memory capacity tests for up to seven test administrations, and examines the impact of the length of the test-retest interval, test modality, equivalence of test forms and participant age on the size of retest effects. Furthermore, it is assessed whether the size of retest effects depends on the test paradigm. An extensive literature search revealed 234 effect sizes from 95 samples and 68 studies, in which healthy participants between 12 and 70 years repeatedly performed a working memory capacity test. Results yield a weighted average of g = 0.28 for retest effects from the first to the second test administration, and a significant increase in effect sizes was observed up to the fourth test administration. The length of the test-retest interval and publication year were found to moderate the size of retest effects. Retest effects differed between the paradigms of working memory capacity tests. These findings call for the development and use of appropriate experimental or statistical methods to address retest effects in working memory capacity tests.
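The meta-analysis itself used a multilevel model over 234 effect sizes; as a much simpler, hedged illustration of how retest gains such as g = 0.28 are pooled, here is a fixed-effect inverse-variance weighting sketch on toy effect sizes.

```python
# Minimal fixed-effect meta-analysis sketch: inverse-variance weighted mean of
# standardized retest gains (Hedges' g). Toy values; the study used a multilevel
# model, which is not reproduced here.
import numpy as np

g = np.array([0.35, 0.22, 0.41, 0.18, 0.30])        # toy effect sizes
v = np.array([0.02, 0.05, 0.03, 0.04, 0.02])        # their sampling variances

w = 1.0 / v
g_bar = np.sum(w * g) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"weighted mean g = {g_bar:.2f} "
      f"(95% CI {g_bar - 1.96 * se:.2f} to {g_bar + 1.96 * se:.2f})")
```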
Evaluation of rules to distinguish unique female grizzly bears with cubs in Yellowstone
Schwartz, C.C.; Haroldson, M.A.; Cherry, S.; Keating, K.A.
2008-01-01
The United States Fish and Wildlife Service uses counts of unduplicated female grizzly bears (Ursus arctos) with cubs-of-the-year to establish limits of sustainable mortality in the Greater Yellowstone Ecosystem, USA. Sightings are clustered into observations of unique bears based on an empirically derived rule set. The method has never been tested or verified. To evaluate the rule set, we used data from radiocollared females obtained during 1975-2004 to simulate populations under varying densities, distributions, and sighting frequencies. We tested individual rules and rule-set performance, using custom software to apply the rule set and cluster sightings. Results indicated most rules were violated to some degree, and rule-based clustering consistently underestimated the minimum number of females and total population size derived from a nonparametric estimator (Chao2). We conclude that the current rule set returns conservative estimates, but with minor improvements, counts of unduplicated females-with-cubs can serve as a reasonable index of population size useful for establishing annual mortality limits. For the Yellowstone population, the index is more practical and cost-effective than capture-mark-recapture using either DNA hair snagging or aerial surveys with radiomarked bears. The method has useful application in other ecosystems, but we recommend rules used to distinguish unique females be adapted to local conditions and tested.
Set size manipulations reveal the boundary conditions of perceptual ensemble learning.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-11-01
Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Aizawa, Kazumi; Iso, Tatsuo; Nadasdy, Paul
2017-01-01
Testing learners' English proficiency is central to university English classes in Japan. This study developed and implemented a set of parallel online receptive aural and visual vocabulary tests that would predict learners' English proficiency. The tests shared the same target words and choices--the main difference was the presentation of the…
Load Transmission Through Artificial Hip Joints due to Stress Wave Loading
NASA Astrophysics Data System (ADS)
Tanabe, Y.; Uchiyama, T.; Yamaoka, H.; Ohashi, H.
Since wear of the polyethylene (ultra-high-molecular-weight polyethylene, or UHMWPE) acetabular cup is considered to be the main cause of loosening of the artificial hip joint, cross-linked UHMWPE with high wear resistance has been developed. This paper deals with impact load transmission through the complex of an artificial hip joint consisting of a UHMWPE acetabular cup (or liner), a metallic femoral head, and a stem. Impact compressive tests on the complex were performed using a split-Hopkinson pressure bar apparatus. To investigate the effects of liner material (conventional or cross-linked UHMWPE), liner size and setting angle, and test temperature on force transmission, the impact load transmission ratio (ILTR) was experimentally determined. The ILTR decreased with increasing setting angle, independent of liner material, liner size, and test temperature. The ILTR values at 37°C were larger than those at 24°C and 60°C. The ILTR also appeared to be affected by the type of material as well as the size of the liner.
The production of calibration specimens for impact testing of subsize Charpy specimens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexander, D.J.; Corwin, W.R.; Owings, T.D.
1994-09-01
Calibration specimens have been manufactured for checking the performance of a pendulum impact testing machine that has been configured for testing subsize specimens, both half-size (5.0 × 5.0 × 25.4 mm) and third-size (3.33 × 3.33 × 25.4 mm). Specimens were fabricated from quenched-and-tempered 4340 steel heat treated to produce different microstructures that would result in either high or low absorbed energy levels on testing. A large group of both half- and third-size specimens were tested at −40°C. The results of the tests were analyzed for average value and standard deviation, and these values were used to establish calibration limits for the Charpy impact machine when testing subsize specimens. These average values plus or minus two standard deviations were set as the acceptable limits for the average of five tests for calibration of the impact testing machine.
Set size and culture influence children's attention to number.
Cantrell, Lisa; Kuwabara, Megumi; Smith, Linda B
2015-03-01
Much research evidences a system in adults and young children for approximately representing quantity. Here we provide evidence that the bias to attend to discrete quantity versus other dimensions may be mediated by set size and culture. Preschool-age English-speaking children in the United States and Japanese-speaking children in Japan were tested in a match-to-sample task where number was pitted against cumulative surface area in both large and small numerical set comparisons. Results showed that children from both cultures were biased to attend to the number of items for small sets. Large set responses also showed a general attention to number when ratio difficulty was easy. However, relative to the responses for small sets, attention to number decreased for both groups; moreover, both U.S. and Japanese children showed a significant bias to attend to total amount for difficult numerical ratio distances, although Japanese children shifted attention to total area at relatively smaller set sizes than U.S. children. These results add to our growing understanding of how quantity is represented and how such representation is influenced by context--both cultural and perceptual. Copyright © 2014 Elsevier Inc. All rights reserved.
The influence of particle size and curing conditions on testing mineral trioxide aggregate cement.
Ha, William Nguyen; Kahler, Bill; Walsh, Laurence James
2016-12-01
Objectives: To assess the effects of curing conditions (dry versus submerged curing) and particle size on the compressive strength (CS) and flexural strength (FS) of set MTA cement. Materials and methods: Two different Portland cements were created, P1 and P2, with P1 < P2 in particle size. These were then used to create two experimental MTA products, M1 and M2, with M1 < M2 in particle size. Particle size analysis was performed according to ISO 13320. The particle size at the 90th percentile (i.e., the larger particles) was P1: 15.2 μm, P2: 29.1 μm, M1: 16.5 μm, and M2: 37.1 μm. M2 was cured exposed to air, or submerged in fluids of pH 5.0, 7.2 (PBS), or 7.5 for 1 week. CS and FS of the set cement were determined using modified ISO 9917-1 and ISO 4049 methods, respectively. P1, P2, M1 and M2 were cured in PBS at physiological pH (7.2) and likewise tested for CS and FS. Results: Curing under dry conditions gave a significantly lower CS than curing in PBS. There was a trend toward lower FS for dry versus wet curing, but this did not reach statistical significance. Cements with smaller particle sizes showed greater CS and FS at 1 day than those with larger particle sizes; however, this advantage was lost over the following 1-3 weeks. Conclusions: Experiments that test the properties of MTA should cure the MTA under wet conditions and at physiological pH.
Crans, Gerald G; Shuster, Jonathan J
2008-08-15
The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, i.e., Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha* = alpha + epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using alpha* instead of alpha) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd.
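The size calculation described here can be illustrated with a small Python sketch: enumerate every possible pair of outcomes for two groups of n, flag the tables that Fisher's exact test rejects at the nominal level, and maximize the null rejection probability over the common success probability. This is only a schematic of the idea (a coarse grid and modest n), not the authors' algorithm.

```python
import numpy as np
from scipy.stats import fisher_exact, binom

def fet_size(n, alpha=0.05, p_grid=np.linspace(0.01, 0.99, 99)):
    """Actual size of the two-sided Fisher's exact test for two groups of n subjects each."""
    # Rejection indicator for every possible outcome (x1 successes in group 1, x2 in group 2).
    reject = np.zeros((n + 1, n + 1), dtype=bool)
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            _, pval = fisher_exact([[x1, n - x1], [x2, n - x2]])
            reject[x1, x2] = pval <= alpha
    # Null rejection probability at common success probability p, maximized over the grid.
    sizes = []
    for p in p_grid:
        pmf = binom.pmf(np.arange(n + 1), n, p)
        sizes.append(np.sum(np.outer(pmf, pmf)[reject]))
    return max(sizes)

print(fet_size(15))   # typically well below the nominal 0.05, illustrating the conservativeness
```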
Jou, Jerwen
2014-10-01
Subjects performed Sternberg-type memory recognition tasks (Sternberg paradigm) in four experiments. Category-instance names were used as learning and testing materials. Sternberg's original experiments demonstrated a linear relation between reaction time (RT) and memory-set size (MSS). A few later studies found no relation, and other studies found a nonlinear relation (logarithmic) between the two variables. These deviations were used as evidence undermining Sternberg's serial scan theory. This study identified two confounding variables in the fixed-set procedure of the paradigm (where multiple probes are presented at test for a learned memory set) that could generate an MSS RT function that was either flat or logarithmic rather than linearly increasing. These two confounding variables were task-switching cost and repetition priming. The former factor worked against smaller memory sets and in favour of larger sets, whereas the latter factor worked in the opposite way. Results demonstrated that a null or a logarithmic RT-to-MSS relation could be the artefact of the combined effects of these two variables. The Sternberg paradigm has been used widely in memory research, and a thorough understanding of the subtle methodological pitfalls is crucial. It is suggested that a varied-set procedure (where only one probe is presented at test for a learned memory set) is a more contamination-free procedure for measuring the MSS effects, and that if a fixed-set procedure is used, it is worthwhile examining the RT function of the very first trials across the MSSs, which are presumably relatively free of contamination by the subsequent trials.
Reponen, Tiina; Lee, Shu-An; Grinshpun, Sergey A; Johnson, Erik; McKay, Roy
2011-04-01
This study investigated particle-size-selective protection factors (PFs) of four models of N95 filtering facepiece respirators (FFRs) that passed and failed fit testing. Particle size ranges were representative of individual viruses and bacteria (aerodynamic diameter d(a) = 0.04-1.3 μm). Standard respirator fit testing was followed by particle-size-selective measurement of PFs while subjects wore N95 FFRs in a test chamber. PF values obtained for all subjects were then compared to those obtained for the subjects who passed the fit testing. Overall fit test passing rate for all four models of FFRs was 67%. Of these, 29% had PFs <10 (the Occupational Safety and Health Administration Assigned Protection Factor designated for this type of respirator). When only subjects that passed fit testing were included, PFs improved with 9% having values <10. On average, the PFs were 1.4 times (29.5/21.5) higher when only data for those who passed fit testing were included. The minimum PFs were consistently observed in the particle size range of 0.08-0.2 μm. Overall PFs increased when subjects passed fit testing. The results support the value of fit testing but also show for the first time that PFs are dependent on particle size regardless of fit testing status.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
Serial reconstruction of order and serial recall in verbal short-term memory.
Quinlan, Philip T; Roodenrys, Steven; Miller, Leonie M
2017-10-01
We carried out a series of experiments on verbal short-term memory for lists of words. In the first experiment, participants were tested via immediate serial recall, and word frequency and list set size were manipulated. With closed lists, the same set of items was repeatedly sampled, and with open lists, no item was presented more than once. In serial recall, effects of word frequency and set size were found. When a serial reconstruction-of-order task was used, in a second experiment, robust effects of word frequency emerged, but set size failed to show an effect. The effects of word frequency in order reconstruction were further examined in two final experiments. The data from these experiments revealed that the effects of word frequency are robust and apparently are not exclusively indicative of output processes. In light of these findings, we propose a multiple-mechanisms account in which word frequency can influence both retrieval and preretrieval processes.
McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.
2017-01-01
Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957
Dellicour, Simon; Flot, Jean-François
2015-11-01
Most single-locus molecular approaches to species delimitation available to date have been designed and tested on data sets comprising at least tens of species, whereas the opposite case (species-poor data sets for which the hypothesis that all individuals are conspecific cannot be rejected beforehand) has rarely been the focus of such attempts. Here we compare the performance of barcode gap detection, haplowebs and generalized mixed Yule-coalescent (GMYC) models to delineate chimpanzees and bonobos using nuclear sequence markers, then apply these single-locus species delimitation methods to data sets of one, three, or six species simulated under a wide range of population sizes, speciation rates, mutation rates and sampling efforts. Our results show that barcode gap detection and GMYC models are unable to delineate species properly in data sets composed of one or two species, two situations in which haplowebs outperform them. For data sets composed of three or six species, bGMYC and haplowebs outperform the single-threshold and multiple-threshold versions of GMYC, whereas a clear barcode gap is only observed when population sizes and speciation rates are both small. The latter conditions represent a "sweet spot" for molecular taxonomy where all the single-locus approaches tested work well; however, the performance of these methods decreases strongly when population sizes and speciation rates are high, suggesting that multilocus approaches may be necessary to tackle such cases. © The Author(s) 2015. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Enhanced ID Pit Sizing Using Multivariate Regression Algorithm
NASA Astrophysics Data System (ADS)
Krzywosz, Kenji
2007-03-01
EPRI is funding a program to enhance and improve the reliability of inside diameter (ID) pit sizing for balance-of-plant heat exchangers, such as condensers and component cooling water heat exchangers. More traditional approaches to ID pit sizing involve the use of frequency-specific amplitude or phase angles. The enhanced multivariate regression algorithm for ID pit depth sizing incorporates three simultaneous input parameters: frequency, amplitude, and phase angle. A set of calibration data consisting of machined pits of various rounded and elongated shapes and depths was acquired in the frequency range of 100 kHz to 1 MHz for stainless steel tubing having a nominal wall thickness of 0.028 inch. To add noise to the acquired data set, each test sample was rotated and test data acquired at the 3, 6, 9, and 12 o'clock positions. The ID pit depths were estimated using second-order and fourth-order regression functions relying on normalized amplitude and phase angle information from multiple frequencies. Due to the unique damage morphology associated with microbiologically influenced ID pits, it was necessary to modify the elongated-calibration-standard-based algorithms by relying on the algorithm developed solely from the destructive sectioning results. This paper presents the use of the transformed multivariate regression algorithm to estimate ID pit depths and compares the results with the traditional univariate phase angle analysis. Both estimates were then compared with the destructive sectioning results.
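A minimal sketch of the kind of multivariate regression described here, assuming hypothetical calibration data: normalized amplitude and phase angle at three frequencies are combined in a second-order polynomial fit to predict pit depth. The data values, frequencies, and model order are placeholders, not the EPRI algorithm.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical calibration set: columns alternate normalized amplitude (0-1) and phase
# angle (degrees) at three test frequencies; the response is the known machined pit depth
# expressed as percent through-wall.
rng = np.random.default_rng(0)
n = 40
X = rng.uniform([0.1, 10, 0.1, 10, 0.1, 10], [1.0, 170, 1.0, 170, 1.0, 170], size=(n, 6))
depth = 20 + 60 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 3, n)   # synthetic ground truth

# Second-order multivariate regression; a fourth-order fit is built the same way (degree=4).
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, depth)
print("predicted depths (% TW):", model.predict(X[:3]).round(1))
```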
Performance analysis of SA-3 missile second stage
NASA Technical Reports Server (NTRS)
Helmy, A. M.
1981-01-01
One SA-3 missile was disassembled. The constituents of the second stage were thoroughly investigated for geometrical details. The second stage slotted composite propellant grain was subjected to mechanical properties testing, physicochemical analyses, and burning rate measurements at different conditions. To determine the propellant performance parameters, the slotted composite propellant grain was machined into a set of small-size tubular grains. These grains were fired in a small-size rocket motor with a set of interchangeable nozzles with different throat diameters. The firings were carried out at three different conditions. The data from test motor firings, physicochemical properties of the propellant, burning rate measurement results, and geometrical details of the second stage motor were used as input data in a computer program to compute the internal ballistic characteristics of the second stage.
40 CFR 86.1111-87 - Test procedures for PCA testing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... in paragraph (a) of § 86.133. (v) The manufacturer may substitute slave tires for the drive wheel... same size as the drive wheel tires. (vi) The cold start exhaust emission test described in § 86.137... well as the likelihood that similar settings will occur on in-use heavy-duty engines or light-duty...
Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P
2014-03-01
Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify if additional other cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys while performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant as was expected if monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Sample size determination for equivalence assessment with multiple endpoints.
Sun, Anna; Dong, Xiaoyu; Tsong, Yi
2014-01-01
Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest of the sample sizes required for the individual endpoints. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size between the naive method without correlation adjustment and the correlation-adjusted method, and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
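The effect of correlation on the power to pass both endpoints can be illustrated by simulation. The sketch below, a simplification of the paper's exact power function, runs a paired (one-sample) TOST on log-scale differences for two correlated endpoints and estimates the probability that both pass; the margins, variances, correlations and sample sizes are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def tost_pass(d, margin=np.log(1.25), alpha=0.05):
    """One-sample TOST on paired log-differences d: both one-sided tests must reject."""
    n, m, se = len(d), d.mean(), d.std(ddof=1) / np.sqrt(len(d))
    t_crit = stats.t.ppf(1 - alpha, n - 1)
    return (m + margin) / se > t_crit and (margin - m) / se > t_crit

def power_both(n, rho, sd=0.25, true_diff=(0.0, 0.0), nsim=2000, seed=1):
    """Probability that TOST passes on both correlated endpoints (e.g. AUC and Cmax)."""
    rng = np.random.default_rng(seed)
    cov = sd**2 * np.array([[1, rho], [rho, 1]])
    hits = 0
    for _ in range(nsim):
        d = rng.multivariate_normal(true_diff, cov, size=n)
        hits += tost_pass(d[:, 0]) and tost_pass(d[:, 1])
    return hits / nsim

for n in (16, 24, 32):   # stronger correlation -> higher joint power for the same n
    print(n, power_both(n, rho=0.8), power_both(n, rho=0.0))
```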
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
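For the one-parameter exponential family case mentioned here, the idea can be illustrated numerically for a normal mean with known variance: among point alternatives, the uniformly most powerful Bayesian alternative is the one that maximizes the probability, under the null, that the Bayes factor exceeds the evidence threshold gamma. The grid search below is a sketch that recovers the known closed-form answer; the sample size, sigma, and gamma are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

# Sketch for H0: mu = 0 vs a point alternative mu1 > 0, with xbar the mean of n N(mu, sigma^2)
# observations and sigma known.
n, sigma, gamma = 25, 1.0, 10.0      # gamma is the Bayes factor threshold
se = sigma / np.sqrt(n)

def prob_bf_exceeds(mu1):
    # BF(xbar) = exp(n*(2*xbar*mu1 - mu1**2) / (2*sigma**2)) > gamma
    # is equivalent to xbar > sigma**2*log(gamma)/(n*mu1) + mu1/2; evaluate that under H0.
    cutoff = sigma**2 * np.log(gamma) / (n * mu1) + mu1 / 2
    return norm.sf(cutoff / se)

grid = np.linspace(0.01, 2.0, 2000)
best = grid[np.argmax([prob_bf_exceeds(m) for m in grid])]
print("UMPBT alternative ~", round(best, 3),
      "| analytic value:", round(sigma * np.sqrt(2 * np.log(gamma) / n), 3))
print("implied rejection region: z >", round(np.sqrt(2 * np.log(gamma)), 3))
```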
Xenon monitoring and the Comprehensive Nuclear-Test-Ban Treaty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowyer, Theodore W.
How do you monitor (verify) a CTBT? It is a difficult challenge to monitor the entire world for nuclear tests, regardless of size. Nuclear tests 'normally' occur underground, above ground or underwater. Setting aside very small tests (let's limit our thinking to 1 kiloton or more), nuclear tests shake the ground, emit large amounts of radioactivity, and make loud noises if in the atmosphere (or hydroacoustic waves if underwater).
2001-09-01
testing is performed between two machines connected by either a 100 Mbps Ethernet connection or a 56K modem connection. This testing is performed... and defined as follows: • The available bandwidth is set at two different levels (100 Mbps Ethernet and 56K modem). • The packet size is set... modem connection. These two connections represent the target 100 Mbps high end and 56K bps low end of anticipated client connections in web-based
Methods for converging correlation energies within the dielectric matrix formalism
NASA Astrophysics Data System (ADS)
Dixit, Anant; Claudot, Julien; Gould, Tim; Lebègue, Sébastien; Rocca, Dario
2018-03-01
Within the dielectric matrix formalism, the random-phase approximation (RPA) and analogous methods that include exchange effects are promising approaches to overcome some of the limitations of traditional density functional theory approximations. The RPA-type methods however have a significantly higher computational cost, and, similarly to correlated quantum-chemical methods, are characterized by a slow basis set convergence. In this work we analyzed two different schemes to converge the correlation energy, one based on a more traditional complete basis set extrapolation and one that converges energy differences by accounting for the size-consistency property. These two approaches have been systematically tested on the A24 test set, for six points on the potential-energy surface of the methane-formaldehyde complex, and for reaction energies involving the breaking and formation of covalent bonds. While both methods converge to similar results at similar rates, the computation of size-consistent energy differences has the advantage of not relying on the choice of a specific extrapolation model.
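As an illustration of the first scheme (complete basis set extrapolation), the sketch below applies the commonly used two-point 1/X^3 extrapolation to hypothetical RPA correlation energies; the actual extrapolation model and basis sets used in the paper may differ.

```python
def cbs_extrapolate(e_small, x_small, e_large, x_large):
    """Two-point 1/X^3 extrapolation of a correlation energy to the complete basis set limit."""
    return (e_large * x_large**3 - e_small * x_small**3) / (x_large**3 - x_small**3)

# Hypothetical RPA correlation energies (hartree) with cardinal numbers X = 3 and X = 4
# (e.g., aug-cc-pVTZ and aug-cc-pVQZ).
e_tz, e_qz = -0.3050, -0.3122
print(f"E_CBS ~ {cbs_extrapolate(e_tz, 3, e_qz, 4):.4f} hartree")
```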
Internal Temperature Control For Vibration Testers
NASA Technical Reports Server (NTRS)
Dean, Richard J.
1996-01-01
Vibration test fixtures with internal thermal-transfer capabilities developed. Made of aluminum for rapid thermal transfer. Small size gives rapid response to changing temperatures, with better thermal control. Setup is quicker, and internal ducting facilitates access to parts being tested. In addition, internal flows are smaller, so less energy is consumed in maintaining desired temperature settings.
Valeix, Marion; Loveridge, Andrew J; MacDonald, David W
2012-11-01
Empirical tests of the resource dispersion hypothesis (RDH), a theory to explain group living based on resource heterogeneity, have been complicated by the fact that resource patch dispersion and richness have proved difficult to define and measure in natural systems. Here, we studied the ecology of African lions Panthera leo in Hwange National Park, Zimbabwe, where waterholes are prey hotspots, and where dispersion of water sources and abundance of prey at these water sources are quantifiable. We combined a 10-year data set from GPS-collared lions, for which information on group composition was available, with concurrent data on herbivore abundance at waterholes. The distance between two neighboring waterholes was a strong determinant of lion home range size, which provides strong support for the RDH prediction that territory size increases as resource patches are more dispersed in the landscape. The mean number of herbivore herds using a waterhole, a good proxy of patch richness, determined the maximum lion group biomass an area can support. This finding suggests that patch richness sets a maximum ceiling on lion group size. This study demonstrates that landscape ecology is a major driver of ranging behavior and suggests that aspects of resource dispersion limit group sizes.
Usability-driven pruning of large ontologies: the case of SNOMED CT.
López-García, Pablo; Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan
2012-06-01
To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Graph-traversal heuristics provided high coverage (71-96% of terms in the test sets of discharge summaries) at the expense of subset size (17-51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24-55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available.
Adaptive Set-Based Methods for Association Testing
Su, Yu-Chen; Gauderman, W. James; Kiros, Berhane; Lewinger, Juan Pablo
2017-01-01
With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
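A simplified permutation-based ARTP can be sketched as follows: compute the product (here, the sum of negative logs) of the k smallest SNP p-values for several candidate truncation points k, take the best-combined evidence as the minimum per-k permutation p-value, and judge that minimum against the same quantity computed in each permutation. The genotypes, phenotype, and per-SNP trend tests below are synthetic placeholders, and this is a schematic of the method rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, m = 500, 50                                          # subjects, SNPs in the set (hypothetical)
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
y = 0.15 * G[:, :5].sum(axis=1) + rng.normal(size=n)    # first 5 SNPs carry small effects

def snp_pvalues(y, G):
    """Per-SNP p-values from simple linear trend (correlation) tests."""
    return np.array([stats.pearsonr(G[:, j], y)[1] for j in range(G.shape[1])])

def artp_stats(p, ks):
    """Negative log-products of the k smallest p-values for each truncation point k."""
    s = np.sort(p)
    return np.array([-np.log(s[:k]).sum() for k in ks])

ks, B = [1, 5, 10, 20], 500
obs = artp_stats(snp_pvalues(y, G), ks)
null = np.array([artp_stats(snp_pvalues(rng.permutation(y), G), ks) for _ in range(B)])

# Per-truncation-point permutation p-values for the observed data and for each permutation.
p_obs_k = (null >= obs).mean(axis=0)
ranks = null.argsort(axis=0).argsort(axis=0)
p_null_k = 1 - ranks / B                                # larger statistic -> smaller p-value
# The adaptive statistic is the minimum over k; compare it with its own permutation distribution.
p_artp = (p_null_k.min(axis=1) <= p_obs_k.min()).mean()
print("ARTP p-value ~", round(p_artp, 3))
```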
Quality-control issues on high-resolution diagnostic monitors.
Parr, L F; Anderson, A L; Glennon, B K; Fetherston, P
2001-06-01
Previous literature indicates a need for more data collection in the area of quality control of high-resolution diagnostic monitors. Throughout acceptance testing, which began in June 2000, stability of monitor calibration was analyzed. Although image quality on all monitors was found to be acceptable upon initial acceptance testing using VeriLUM software by Image Smiths, Inc (Germantown, MD), it was determined to be unacceptable during the clinical phase of acceptance testing. High-resolution monitors were evaluated for quality assurance on a weekly basis from installation through acceptance testing and beyond. During clinical utilization determination (CUD), monitor calibration was identified as a problem and the manufacturer returned and recalibrated all workstations. From that time through final acceptance testing, high-resolution monitor calibration and monitor failure rate remained a problem. The monitor vendor then returned to the site to address these areas. Monitor defocus was still noticeable, and calibration checks were increased to three times per week. White and black level drift on medium-resolution monitors had been attributed to raster size settings. Measurements of white and black level at several different size settings were taken to determine the effect of size on white and black level settings. Black level remained steady with size change. White level appeared to increase by 2.0 cd/m2 for every 0.1-inch decrease in horizontal raster size. This was determined not to be the cause of the observed brightness drift. Frequency of calibration/testing is an issue in a clinical environment. The increased frequency required at our site cannot be sustained. The medical physics division cannot provide dedicated personnel to conduct the quality-assurance testing on all monitors at this interval due to other physics commitments throughout the hospital. Monitor access is also an issue due to radiologists' need to read images. Some workstations are in use 7 AM to 11 PM daily. An appropriate monitor calibration frequency must be established during acceptance testing to ensure unacceptable drift is not masked by excessive calibration frequency. Standards for acceptable black level and white level drift also need to be determined. The monitor vendor and hospital staff agree that, currently, very small printed text is an acceptable method of determining monitor blur; however, a better method of determining monitor blur is being pursued. Although monitors may show acceptable quality during initial acceptance testing, they need to show sustained quality during the clinical acceptance-testing phase. Defocus, black level, and white level are image quality concerns that need to be evaluated during the clinical phase of acceptance testing. Image quality deficiencies can have a negative impact on patient care and raise serious medical-legal concerns. The attention to quality control required of the hospital staff needs to be realistic and not have a significant impact on radiology workflow.
Msimanga, Huggins Z; Ollis, Robert J
2010-06-01
Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify acetaminophen-containing medicines using their attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectra. Four formulations of Tylenol (Arthritis Pain Relief, Extra Strength Pain Relief, 8 Hour Pain Relief, and Extra Strength Pain Relief Rapid Release) along with 98% pure acetaminophen were selected for this study because of the similarity of their spectral features, with correlation coefficients ranging from 0.9857 to 0.9988. Before acquiring spectra for the predictor matrix, the effects of sample particle size (determined by sieve size opening), force gauge setting of the ATR accessory, sample reloading, and between-tablet variation on spectral precision were examined. Spectra were baseline corrected and normalized to unity before multivariate analysis. Analysis of variance (ANOVA) was used to study spectral precision. The large particles (35 mesh) showed large variance between spectra, while fine particles (120 mesh) indicated good spectral precision based on the F-test. Force gauge setting did not significantly affect precision. Sample reloading using the fine particle size and a constant force gauge setting of 50 units also did not compromise precision. Based on these observations, data acquisition for the predictor matrix was carried out with the fine particles (sieve size opening of 120 mesh) at a constant force gauge setting of 50 units. After removing outliers, PCA successfully classified the five samples in the first and second components, accounting for 45.0% and 24.5% of the variances, respectively. The four-component PLS-DA model (R² = 0.925 and Q² = 0.906) gave good test spectra predictions, with an overall average of 0.961 ± 7.1% RSD versus the expected 1.0 prediction for the 20 test spectra used.
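A minimal sketch of this chemometric workflow (PCA for exploratory class separation, PLS-DA as PLS regression on dummy-coded class labels) using scikit-learn; the spectra below are random placeholders standing in for the baseline-corrected, normalized ATR-FT-IR predictor matrix.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Hypothetical predictor matrix: rows are preprocessed spectra (600 points); y holds the
# five class labels (four formulations plus pure acetaminophen), 20 spectra per class.
rng = np.random.default_rng(3)
y = np.repeat(np.arange(5), 20)
X = rng.normal(size=(100, 600)) * 0.02 + y[:, None] * 0.05

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)                      # PC1/PC2 scores used to inspect class separation
print("explained variance:", pca.explained_variance_ratio_.round(3))

# PLS-DA: PLS regression against dummy-coded class membership, classify by the largest response.
Y_dummy = np.eye(5)[y]
plsda = PLSRegression(n_components=4).fit(X, Y_dummy)
pred = plsda.predict(X).argmax(axis=1)
print("training classification accuracy:", (pred == y).mean())
```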
K-S Test for Goodness of Fit and Waiting Times for Fatal Plane Accidents
ERIC Educational Resources Information Center
Gwanyama, Philip Wagala
2005-01-01
The Kolmogorov-Smirnov (K-S) test for goodness of fit was developed by Kolmogorov in 1933 [1] and Smirnov in 1939 [2]. Its procedures are suitable for testing the goodness of fit of a data set for most probability distributions regardless of sample size [3-5]. These procedures, modified for the exponential distribution by Lilliefors [5] and…
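A small sketch of the procedure for an exponential fit: the classical K-S test applies when the distribution is fully specified in advance, while the Lilliefors variant is needed when the rate is estimated from the same data. The waiting times below are invented values, not the accident data discussed in the article.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import lilliefors

# Hypothetical waiting times (days) between fatal accidents.
waits = np.array([12, 3, 45, 27, 8, 60, 19, 5, 33, 14, 71, 22, 9, 40, 17], dtype=float)

# Classical K-S test against an exponential distribution whose mean (scale) is specified
# in advance rather than estimated from these data.
print(stats.kstest(waits, "expon", args=(0, 25.0)))

# When the mean is estimated from the same sample, the Lilliefors-corrected version applies.
stat, pval = lilliefors(waits, dist="exp")
print("Lilliefors:", round(stat, 3), round(pval, 3))
```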
Byrne, A W; Graham, J; Brown, C; Donaghy, A; Guelbenzu-Gonzalo, M; McNair, J; Skuce, R A; Allen, A; McDowell, S W
2018-06-01
Correctly identifying bovine tuberculosis (bTB) in cattle remains a significant problem in endemic countries. We hypothesized that animal characteristics (sex, age, breed), histories (herd effects, testing, movement) and potential exposure to other pathogens (co-infection; BVDV, liver fluke and Mycobacterium avium reactors) could significantly impact the immune responsiveness detected at skin testing and the variation in post-mortem pathology (confirmation) in bTB-exposed cattle. Three model suites were developed using a retrospective observational data set of 5,698 cattle culled during herd breakdowns in Northern Ireland. A linear regression model suggested that antemortem tuberculin reaction size (difference in purified protein derivative avium [PPDa] and bovine [PPDb] reactions) was significantly positively associated with post-mortem maximum lesion size and the number of lesions found. This indicated that reaction size could be considered a predictor of both the extent (number of lesions/tissues) and the pathological progression of infection (maximum lesion size). Tuberculin reaction size was related to age class, and younger animals (<2.85 years) displayed larger reaction sizes than older animals. Tuberculin reaction size was also associated with breed and animal movement and increased with the time between the penultimate and disclosing tests. A negative binomial random-effects model indicated a significant increase in lesion counts for animals with M. avium reactions (PPDb-PPDa < 0) relative to non-reactors (PPDb-PPDa = 0). Lesion counts were significantly increased in animals with previous positive severe interpretation skin-test results. Animals with increased movement histories, young animals and non-dairy breed animals also had significantly increased lesion counts. Animals from herds that had BVDV-positive cattle had significantly lower lesion counts than animals from herds without evidence of BVDV infection. Restricting the data set to only animals with a bTB visible lesion at slaughter (n = 2471), an ordinal regression model indicated that liver fluke-infected animals disclosed smaller lesions, relative to liver fluke-negative animals, and larger lesions were disclosed in animals with increased movement histories. © 2018 Blackwell Verlag GmbH.
Wang, Zhuoyu; Dendukuri, Nandini; Pai, Madhukar; Joseph, Lawrence
2017-11-01
When planning a study to estimate disease prevalence to a pre-specified precision, it is of interest to minimize total testing cost. This is particularly challenging in the absence of a perfect reference test for the disease because different combinations of imperfect tests need to be considered. We illustrate the problem and a solution by designing a study to estimate the prevalence of childhood tuberculosis in a hospital setting. All possible combinations of 3 commonly used tuberculosis tests, including chest X-ray, tuberculin skin test, and a sputum-based test, either culture or Xpert, are considered. For each of the 11 possible test combinations, 3 Bayesian sample size criteria, including average coverage criterion, average length criterion and modified worst outcome criterion, are used to determine the required sample size and total testing cost, taking into consideration prior knowledge about the accuracy of the tests. In some cases, the required sample sizes and total testing costs were both reduced when more tests were used, whereas, in other examples, lower costs are achieved with fewer tests. Total testing cost should be formally considered when designing a prevalence study.
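The average length criterion mentioned here can be sketched for the simplified case of a single imperfect test with assumed known sensitivity and specificity: draw a prevalence from the prior, simulate test results, compute the posterior on a grid, and average the credible interval length over many such draws, increasing n until a target precision is met. The prior, test accuracies, and candidate sample sizes are placeholders, and the paper's criteria handle combinations of several tests and their costs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
grid = np.linspace(0.001, 0.999, 999)          # prevalence grid for the posterior
se_test, sp_test = 0.85, 0.98                  # assumed known sensitivity and specificity
prior = stats.beta(2, 8).pdf(grid)             # hypothetical prior: prevalence around 20%
prior /= prior.sum()

def avg_ci_length(n, nsim=300, cred=0.95):
    """Average length criterion: mean credible-interval length over prior-predictive data sets."""
    lengths = []
    for _ in range(nsim):
        prev = rng.choice(grid, p=prior)
        p_pos = prev * se_test + (1 - prev) * (1 - sp_test)        # apparent positivity rate
        k = rng.binomial(n, p_pos)
        like = stats.binom.pmf(k, n, grid * se_test + (1 - grid) * (1 - sp_test))
        post = like * prior
        post /= post.sum()
        cdf = np.cumsum(post)
        lo = grid[np.searchsorted(cdf, (1 - cred) / 2)]
        hi = grid[min(np.searchsorted(cdf, 1 - (1 - cred) / 2), grid.size - 1)]
        lengths.append(hi - lo)
    return float(np.mean(lengths))

for n in (100, 200, 400, 800):                 # choose the smallest n meeting the target length
    print(n, round(avg_ci_length(n), 3))
```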
Criteria for a State-of-the-Art Vision Test System
1985-05-01
tests are enumerated for possible inclusion in a battery of candidate vision tests to be statistically examined for validity as predictors of aircrew...derived subset thereof) of vision tests may be given to a series of individuals, and statistical tests may be used to determine which visual functions...no target. Statistical analysis of the responses would set a threshold level, which would define the smallest size - (most distant target) or least
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2018-03-01
Like other NDE methods, eddy current surface crack detectability is determined using a probability of detection (POD) demonstration. The POD demonstration involves eddy current testing of surface crack specimens with known crack sizes. The reliably detectable flaw size, denoted by a90/95, is determined by statistical analysis of the POD test data. The surface crack specimens shall be made from a similar material with electrical conductivity close to the part conductivity. A calibration standard with electro-discharge machined (EDM) notches is typically used in eddy current testing for surface crack detection. The calibration standard conductivity shall be within +/- 15% of the part conductivity. This condition also applies to the POD demonstration crack set. Here, a case is considered where the conductivity of the crack specimens available for POD testing differs by more than 15% from that of the part to be inspected. Therefore, a direct POD demonstration of the reliably detectable flaw size is not applicable, and additional testing is necessary to use the demonstrated POD test data. An approach is provided to estimate the reliably detectable flaw size in eddy current testing of a part made from material A using POD crack specimens made from material B with different conductivity. The approach uses additional test data obtained on EDM notch specimens made from materials A and B. EDM notch test data from the two materials are used to create a transfer function between the demonstrated a90/95 size on crack specimens made of material B and the estimated a90/95 size for the part made of material A. Two methods are given. For method A, the a90/95 crack size for material B is given and POD data are available; the objective is to determine the a90/95 crack size for material A using the same relative decision threshold that was used for material B. For method B, the target crack size a90/95 for material A is known; the objective is to determine the decision threshold for inspecting material A.
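A heavily simplified sketch of the overall idea: fit a log-logistic POD curve to hit/miss data from material-B crack specimens to get a90 (the 95% confidence bound that turns it into a90/95 is omitted), then use EDM notch responses measured in both materials to map that size to an equivalent size in material A. All data values, the interpolation-based transfer, and the plain logistic fit are assumptions for illustration, not the methods defined in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.interpolate import interp1d

rng = np.random.default_rng(5)

# --- Hit/miss POD data from material-B crack specimens (all values hypothetical) ---
a_B = rng.uniform(0.3, 3.0, 120)                                   # crack depths, mm
p_true = 1 / (1 + np.exp(-(np.log(a_B) - np.log(1.0)) / 0.25))     # underlying POD curve
hits = rng.binomial(1, p_true)

fit = sm.Logit(hits, sm.add_constant(np.log(a_B))).fit(disp=0)     # log-logistic POD model
b0, b1 = fit.params
a90_B = np.exp((np.log(0.9 / 0.1) - b0) / b1)                      # depth where fitted POD = 0.90

# --- EDM notch responses measured in both materials at the same notch depths ---
notch_depth = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])             # mm
signal_B = np.array([1.1, 2.3, 3.4, 4.4, 5.6, 6.7])                # normalized eddy current amplitudes
signal_A = np.array([0.8, 1.7, 2.6, 3.5, 4.3, 5.2])

# Transfer: find the depth in material A giving the same response as an a90_B-deep flaw in B.
resp = interp1d(notch_depth, signal_B)(a90_B)
a90_A_est = float(interp1d(signal_A, notch_depth)(resp))
print(round(float(a90_B), 2), "mm in B ->", round(a90_A_est, 2), "mm equivalent in A")
```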
Wood crib fire free burning test in ISO room
NASA Astrophysics Data System (ADS)
Qiang, Xu; Griffin, Greg; Bradbury, Glenn; Dowling, Vince
2006-04-01
As part of research on the application potential of a water mist fire suppression system for fire fighting in train luggage carriages, a series of experiments was conducted in an ISO room on wood crib fires with and without water mist actuation. The results of the free burn tests without water mist suppression are used as a reference in evaluating the efficiency of the water mist suppression system. As part of the free burn tests, several tests were done under the hood of the ISO room to calibrate the size of the crib fire; these tests can also be used in analyzing the wall effect in room fire hazard. In these free burning experiments, wood cribs of four sizes were tested under the hood. The temperature of the crib fire, the heat flux around the fire, and the gas concentrations in the hood of the ISO room were measured, and two thermal imaging systems were used to obtain the temperature distribution and the typical shape of the free burning flames. From the experiments, the radiation intensity at specific positions around the fire, the effective heat of combustion, the mass loss, the oxygen consumption rate for different sizes of fire, the typical structure of the flame, and the self-extinguishment time were obtained for each crib size.
NASA Technical Reports Server (NTRS)
Johnson, Paul E.; Smith, Milton O.; Adams, John B.
1992-01-01
Algorithms were developed, based on Hapke's (1981) equations, for remote determinations of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although more sophisticated models exist, the present algorithms are particularly suited for remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
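A minimal sketch of the abundance-retrieval step treated as a linear unmixing problem solved with nonnegative least squares; the end-member spectra are random placeholders, and the Hapke-model conversion of reflectance to single-scattering albedo (and the treatment of particle size and geometry) is omitted.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical end-member spectra (columns of E) and a mixed spectrum built from them.
wavelengths = 50
E = np.abs(np.random.default_rng(6).normal(0.4, 0.15, size=(wavelengths, 3)))
true_f = np.array([0.6, 0.3, 0.1])
mixed = E @ true_f

# Nonnegative least squares recovers the fractional abundances of the end-members.
f, residual = nnls(E, mixed)
print("estimated abundances:", (f / f.sum()).round(2))
```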
Testing for qualitative heterogeneity: An application to composite endpoints in survival analysis.
Oulhaj, Abderrahim; El Ghouch, Anouar; Holman, Rury R
2017-01-01
Composite endpoints are frequently used in clinical outcome trials to provide more endpoints, thereby increasing statistical power. A key requirement for a composite endpoint to be meaningful is the absence of so-called qualitative heterogeneity, to ensure a valid overall interpretation of any treatment effect identified. Qualitative heterogeneity occurs when individual components of a composite endpoint exhibit differences in the direction of a treatment effect. In this paper, we develop a general statistical method to test for qualitative heterogeneity, that is, to test whether a given set of parameters share the same sign. This method is based on the intersection-union principle and, provided that the sample size is large, is valid whatever the model used for parameter estimation. We propose two versions of our testing procedure, one based on random sampling from a Gaussian distribution and another based on bootstrapping. Our work covers both the case of completely observed data and the case where some observations are censored, which is an important issue in many clinical trials. We evaluated the size and power of our proposed tests by carrying out extensive Monte Carlo simulations in the case of multivariate time-to-event data. The simulations were designed under a variety of conditions on dimensionality, censoring rate, sample size and correlation structure. Our testing procedure showed very good performance in terms of statistical power and type I error. The proposed test was applied to a data set from a single-center, randomized, double-blind controlled trial in the area of Alzheimer's disease.
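An intersection-union style check for a common sign can be sketched with bootstrap replicates of the component-wise treatment effects: "all positive" is accepted only if every component individually rejects non-positivity (a max rule), and likewise for "all negative". The replicates below are simulated placeholders, and this is a schematic of the principle rather than the authors' exact procedure, which also includes the Gaussian-sampling variant and censoring-aware estimation.

```python
import numpy as np

def same_sign_test(boot_estimates):
    """
    Intersection-union style test that a set of parameters share one sign.
    boot_estimates: (B, d) array of bootstrap replicates of the d parameter estimates.
    Returns an approximate p-value for H0: 'the signs are not all the same'.
    """
    # One-sided bootstrap p-values against theta_j <= 0 and against theta_j >= 0.
    p_pos = (boot_estimates <= 0).mean(axis=0)   # evidence against positivity of component j
    p_neg = (boot_estimates >= 0).mean(axis=0)   # evidence against negativity of component j
    # 'All positive' rejects only if every component rejects (max rule); likewise 'all negative'.
    return min(p_pos.max(), p_neg.max())

# Hypothetical bootstrap replicates of three component-wise log hazard ratios.
rng = np.random.default_rng(7)
boot = rng.normal(loc=[-0.3, -0.25, -0.4], scale=0.1, size=(2000, 3))
print(same_sign_test(boot))    # small p-value: consistent with a common (negative) sign
```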
Large size GEM for Super Bigbite Spectrometer (SBS) polarimeter for Hall A 12GeV program at JLab
Gnanvo, Kondo; Liyanage, Nilanga; Nelyubin, Vladimir; ...
2015-05-01
We report on the R&D effort in the design and construction of a large size GEM chamber for the proton polarimeter of the Super Bigbite Spectrometer (SBS) in Hall A at Thomas Jefferson National Laboratory (JLab). The SBS polarimeter trackers consist of two sets of four large chambers of size 200 cm x 60 cm. Each chamber is a vertical stack of four GEM modules with an active area of 60 cm x 50 cm. We have built and tested several GEM modules, and we describe in this paper the design and construction of the final GEM as well as preliminary results on performance from tests carried out in our detector lab and with test beams at Fermilab.
Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond
Wiens, Stefan; Nilsson, Mats E.
2016-01-01
Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weighs several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes? PMID:29805179
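The mechanics described here, weighting cell means and testing the combination with a t statistic whose effect size is simply a difference between means, can be sketched as follows; the 2x2 factorial data and contrast weights are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 factorial cell data (equal n) and a contrast of central interest:
# the simple effect of factor A at level 1 of factor B.
cells = {
    "A1B1": np.array([5.1, 6.0, 5.5, 6.2, 5.8]),
    "A2B1": np.array([6.9, 7.4, 7.0, 7.8, 7.2]),
    "A1B2": np.array([5.0, 5.6, 5.9, 5.4, 5.7]),
    "A2B2": np.array([5.2, 5.5, 6.0, 5.6, 5.3]),
}
weights = {"A1B1": -1, "A2B1": 1, "A1B2": 0, "A2B2": 0}

means = {k: v.mean() for k, v in cells.items()}
n = {k: len(v) for k, v in cells.items()}
df_err = sum(n.values()) - len(cells)
ms_err = sum(((v - v.mean()) ** 2).sum() for v in cells.values()) / df_err   # pooled error

contrast = sum(weights[k] * means[k] for k in cells)            # effect size = mean difference
se = np.sqrt(ms_err * sum(weights[k] ** 2 / n[k] for k in cells))
t = contrast / se
ci = contrast + np.array([-1, 1]) * stats.t.ppf(0.975, df_err) * se
print(f"contrast = {contrast:.2f}, t({df_err}) = {t:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```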
Adaptive Set-Based Methods for Association Testing.
Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo
2016-02-01
With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
Park, Bo Youn; Kim, Sujin; Cho, Yang Seok
2018-02-01
The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.
The grain-size lineup: A test of a novel eyewitness identification procedure.
Horry, Ruth; Brewer, Neil; Weber, Nathan
2016-04-01
When making a memorial judgment, respondents can regulate their accuracy by adjusting the precision, or grain size, of their responses. In many circumstances, coarse-grained responses are less informative, but more likely to be accurate, than fine-grained responses. This study describes a novel eyewitness identification procedure, the grain-size lineup, in which participants eliminated any number of individuals from the lineup, creating a choice set of variable size. A decision was considered to be fine-grained if no more than 1 individual was left in the choice set or coarse-grained if more than 1 individual was left in the choice set. Participants (N = 384) watched 2 high-quality or low-quality videotaped mock crimes and then completed 4 standard simultaneous lineups or 4 grain-size lineups (2 target-present and 2 target-absent). There was some evidence of strategic regulation of grain size, as the most difficult lineup was associated with a greater proportion of coarse-grained responses than the other lineups. However, the grain-size lineup did not outperform the standard simultaneous lineup. Fine-grained suspect identifications were no more diagnostic than suspect identifications from standard lineups, whereas coarse-grained suspect identifications carried little probative value. Participants were generally reluctant to provide coarse-grained responses, which may have hampered the utility of the procedure. For a grain-size approach to be useful, participants may need to be trained or instructed to use the coarse-grained option effectively. (c) 2016 APA, all rights reserved).
The Formation of Chondrules: Petrologic Tests of the Shock Wave Model
NASA Technical Reports Server (NTRS)
Connolly, H. C., Jr.; Love, S. G.
1998-01-01
Chondrules are mm-sized spheroidal igneous components of chondritic meteorites. They consist of olivine and orthopyroxene set in a glassy mesostasis with varying minor amounts of metals, sulfides, oxides, and carbon phases.
Clinical relevance is associated with allergen-specific wheal size in skin prick testing
Haahtela, T; Burbach, G J; Bachert, C; Bindslev-Jensen, C; Bonini, S; Bousquet, J; Bousquet-Rouanet, L; Bousquet, P J; Bresciani, M; Bruno, A; Canonica, G W; Darsow, U; Demoly, P; Durham, S R; Fokkens, W J; Giavi, S; Gjomarkaj, M; Gramiccioni, C; Kowalski, M L; Losonczy, G; Orosz, M; Papadopoulos, N G; Stingl, G; Todo-Bom, A; von Mutius, E; Köhli, A; Wöhrl, S; Järvenpää, S; Kautiainen, H; Petman, L; Selroos, O; Zuberbier, T; Heinzerling, L M
2014-01-01
Background Within a large prospective study, the Global Asthma and Allergy European Network (GA2LEN) has collected skin prick test (SPT) data throughout Europe to make recommendations for SPT in clinical settings. Objective To improve clinical interpretation of SPT results for inhalant allergens by providing quantitative decision points. Methods The GA2LEN SPT study with 3068 valid data sets was used to investigate the relationship between SPT results and patient-reported clinical relevance for each of the 18 inhalant allergens as well as SPT wheal size and physician-diagnosed allergy (rhinitis, asthma, atopic dermatitis, food allergy). The effects of age, gender, and geographical area on SPT results were assessed. For each allergen, the wheal size in mm with an 80% positive predictive value (PPV) for being clinically relevant was calculated. Results Depending on the allergen, from 40% (blatella) to 87–89% (grass, mites) of the positive SPT reactions (wheal size ≥ 3 mm) were associated with patient-reported clinical symptoms when exposed to the respective allergen. The risk of allergic symptoms increased significantly with larger wheal sizes for 17 of the 18 allergens tested. Children with positive SPT reactions had a smaller risk of sensitizations being clinically relevant compared with adults. The 80% PPV varied from 3 to 10 mm depending on the allergen. Conclusion These ‘reading keys’ for 18 inhalant allergens can help interpret SPT results with respect to their clinical significance. A SPT form with the standard allergens including mm decision points for each allergen is offered for clinical use. PMID:24283409
Scale out databases for CERN use cases
NASA Astrophysics Data System (ADS)
Baranowski, Zbigniew; Grzybek, Maciej; Canali, Luca; Lanza Garcia, Daniel; Surdy, Kacper
2015-12-01
Data generation rates are expected to grow very fast for some database workloads going into LHC run 2 and beyond. In particular this is expected for data coming from controls, logging and monitoring systems. Storing, administering and accessing big data sets in a relational database system can quickly become a very hard technical challenge, as the size of the active data set and the number of concurrent users increase. Scale-out database technologies are a rapidly developing set of solutions for deploying and managing very large data warehouses on commodity hardware and with open source software. In this paper we will describe the architecture and tests on database systems based on Hadoop and the Cloudera Impala engine. We will discuss the results of our tests, including tests of data loading and integration with existing data sources and in particular with relational databases. We will report on query performance tests done with various data sets of interest at CERN, notably data from the accelerator log database.
Questionnaire-based assessment of executive functioning: Psychometrics.
Castellanos, Irina; Kronenberger, William G; Pisoni, David B
2018-01-01
The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.
Computerized tomography calibrator
NASA Technical Reports Server (NTRS)
Engel, Herbert P. (Inventor)
1991-01-01
A set of interchangeable pieces comprising a computerized tomography calibrator, and a method of use thereof, permits focusing of a computerized tomographic (CT) system. The interchangeable pieces include a plurality of nestable, generally planar mother rings, adapted for the receipt of planar inserts of predetermined sizes, and of predetermined material densities. The inserts further define openings therein for receipt of plural sub-inserts. All pieces are of known sizes and densities, permitting the assembling of different configurations of materials of known sizes and combinations of densities, for calibration (i.e., focusing) of a computerized tomographic system through variation of operating variables thereof. Rather than serving as a phantom, which is intended to be representative of a particular workpiece to be tested, the set of interchangeable pieces permits simple and easy standardized calibration of a CT system. The calibrator and its related method of use further includes use of air or of particular fluids for filling various openings, as part of a selected configuration of the set of pieces.
Data-poor management of African lion hunting using a relative index of abundance.
Edwards, Charles T T; Bunnefeld, Nils; Balme, Guy A; Milner-Gulland, E J
2014-01-07
Sustainable management of terrestrial hunting requires managers to set quotas restricting offtake. This often takes place in the absence of reliable information on the population size, and as a consequence, quotas are set in an arbitrary fashion, leading to population decline and revenue loss. In this investigation, we show how an indirect measure of abundance can be used to set quotas in a sustainable manner, even in the absence of information on population size. Focusing on lion hunting in Africa, we developed a simple algorithm to convert changes in the number of safari days required to kill a lion into a quota for the following year. This was tested against a simulation model of population dynamics, accounting for uncertainties in demography, observation, and implementation. Results showed it to reliably set sustainable quotas despite these uncertainties, providing a robust foundation for the conservation of hunted species.
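A minimal sketch of how an effort-based abundance index might be converted into the next year's quota is given below; the proportional update rule, the responsiveness parameter, and the bounds are illustrative assumptions, not the authors' published algorithm.

```python
def next_quota(current_quota, cpue_now, cpue_prev, responsiveness=0.5,
               min_quota=0, max_quota=None):
    """Set next year's quota from the trend in an abundance index.

    cpue is a catch-per-unit-effort style index, e.g. lions killed per safari
    day (the inverse of the days-per-lion measure used in the paper). The
    update rule and its parameters are illustrative assumptions.
    """
    if cpue_prev <= 0:
        return current_quota                        # no usable signal; hold the quota
    trend = (cpue_now - cpue_prev) / cpue_prev      # relative change in the index
    quota = current_quota * (1.0 + responsiveness * trend)
    quota = max(min_quota, quota)
    if max_quota is not None:
        quota = min(max_quota, quota)
    return int(round(quota))

# Example: effort rose from 20 to 25 safari days per lion (index fell), so cut the quota.
print(next_quota(current_quota=10, cpue_now=1 / 25, cpue_prev=1 / 20))  # -> 9
```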
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model when mapping a specific land cover, by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by the outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window size scales. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), for mapping homogeneous specific land cover.
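The parameter-tuning step can be sketched with scikit-learn's OneClassSVM, which is closely related to SVDD with an RBF kernel and stands in here for the paper's SVDD implementation; the nu and gamma grids play the roles of the tradeoff coefficient C and kernel width s, and the arrays are synthetic placeholders rather than real spectra.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_target = rng.normal(0.0, 1.0, (200, 6))   # target-class training "spectra"
val_target   = rng.normal(0.0, 1.0, (50, 6))    # validation: target pixels
val_outlier  = rng.normal(2.5, 1.0, (150, 6))   # validation: neighbouring outlier pixels

best = None
for nu in (0.01, 0.05, 0.1, 0.2):               # plays the role of the tradeoff C
    for gamma in (0.01, 0.1, 1.0, 10.0):        # plays the role of the kernel width s
        clf = OneClassSVM(nu=nu, gamma=gamma).fit(train_target)
        acc = np.mean(np.concatenate([clf.predict(val_target) == 1,
                                      clf.predict(val_outlier) == -1]))
        if best is None or acc > best[0]:
            best = (acc, nu, gamma)
print("validation accuracy %.2f with nu=%s gamma=%s" % best)
```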
Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James
2018-06-01
Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable. Copyright © 2018 Elsevier Inc. All rights reserved.
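A proportional-odds model of this kind can be fit with statsmodels' OrderedModel, as in the hedged sketch below; the file name, column names, predictor coding, and the 70/30 split are assumptions, not the study's actual data handling.

```python
# Sketch of a proportional-odds (ordinal logistic) model predicting an ordered
# implant size from demographics; data file and columns are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("tka_cases.csv")                               # hypothetical file
df["size_code"] = pd.Categorical(df["tibia_size"], ordered=True).codes

train = df.sample(frac=0.7, random_state=1)
valid = df.drop(train.index)
X_cols = ["female", "age", "weight", "height"]                  # 'female' assumed 0/1

res = OrderedModel(train["size_code"], train[X_cols], distr="logit").fit(
    method="bfgs", disp=False)

probs = np.asarray(res.predict(valid[X_cols]))                  # one column per size
pred = probs.argmax(axis=1)
within_one = np.mean(np.abs(pred - valid["size_code"].values) <= 1)
print(f"predicted within one size: {within_one:.1%}")
```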
Usability-driven pruning of large ontologies: the case of SNOMED CT
Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan
2012-01-01
Objectives To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217
NASA Astrophysics Data System (ADS)
Nanni, Ambra; Marigo, Paola; Groenewegen, Martin A. T.; Aringer, Berhard; Girardi, Léo; Pastorelli, Giada; Bressan, Alessandro; Bladh, Sara
2016-07-01
We present our recent investigation aimed at constraining the typical size and optical properties of carbon dust grains in circumstellar envelopes (CSEs) of carbon-rich stars (C-stars) in the Small Magellanic Cloud (SMC). We applied our recent dust growth model, coupled with a radiative transfer code, to the dusty CSEs of C-stars along the TP-AGB phase, for which we computed spectra and colors. We then compared our modeled colors in the Near and Mid Infrared (NIR and MIR) bands with the observed ones, testing different assumptions in our dust scheme and employing different optical constants data sets for carbon dust. We constrained the optical properties of carbon dust by identifying the combinations of typical grain size and optical constants data set which simultaneously reproduce several colors in the NIR and MIR wavelengths. In particular, the different choices of optical properties and grain size lead to differences in the NIR and MIR colors greater than two magnitudes in some cases. We concluded that the complete set of selected NIR and MIR colors is best reproduced by small grains, with sizes between 0.06 and 0.1 μm, rather than by large grains of 0.2-0.4 μm. The inability of large grains to reproduce NIR and MIR colors is found to be independent of the adopted optical data set, and the deviations between models and observations tend to increase for increasing grain sizes. We also find a possible trend of the typical grain size with mass-loss and/or carbon excess in the CSEs of these stars. The work presented is preparatory to future studies aimed at calibrating the TP-AGB phase through resolved stellar populations in the framework of the STARKEY project.
A Generally Robust Approach for Testing Hypotheses and Setting Confidence Intervals for Effect Sizes
ERIC Educational Resources Information Center
Keselman, H. J.; Algina, James; Lix, Lisa M.; Wilcox, Rand R.; Deering, Kathleen N.
2008-01-01
Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of…
The Golden Rule Agreement is Psychometrically Defensible.
ERIC Educational Resources Information Center
Gonzalez-Tamayo, Eulogio
The agreement between the Educational Testing Service (ETS) and the Golden Rule Insurance Company of Illinois is interpreted as setting the general principles on which items must be selected to be included in a licensure test. These principles put a limit to the difficulty level of any item, and they also limit the size of the difference in…
Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan
2017-02-01
To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
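The resampling idea can be reproduced on synthetic data: draw subsamples of increasing size from a large cohort and watch the spread of sensitivity estimates shrink. The prevalence, sensitivity, and false-positive rate below are invented for illustration and are not the study's cohorts.

```python
# Illustrative resampling of a synthetic cohort to show how contingency-table
# statistics fluctuate with sample size.
import numpy as np

rng = np.random.default_rng(0)
N = 8000
outcome = rng.random(N) < 0.30                      # "true" condition, prevalence 0.30
test_pos = np.where(outcome, rng.random(N) < 0.75,  # sensitivity 0.75
                             rng.random(N) < 0.20)  # false-positive rate 0.20

def sens_spec(idx):
    o, t = outcome[idx], test_pos[idx]
    sens = (t & o).sum() / max(o.sum(), 1)
    spec = (~t & ~o).sum() / max((~o).sum(), 1)
    return sens, spec

for n in (100, 200, 400, 800, 1600):
    draws = np.array([sens_spec(rng.choice(N, n, replace=False))
                      for _ in range(100)])
    lo, hi = np.percentile(draws[:, 0], [2.5, 97.5])
    print(f"n={n:5d}  sensitivity range (2.5th-97.5th pct): {lo:.2f}-{hi:.2f}")
```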
ERIC Educational Resources Information Center
Yin, Robert K.; Schmidt, R. James; Besag, Frank
2006-01-01
The study of federal education initiatives that takes place over multiple years in multiple settings often calls for aggregating and comparing data-in particular, student achievement data-across a broad set of schools, districts, and states. The need to track the trends over time is complicated by the fact that the data from the different schools,…
An evaluation of grease type ball bearing lubricants operating in various environments
NASA Technical Reports Server (NTRS)
Mcmurtrey, E. L.
1981-01-01
Because many future spacecraft or space stations will require mechanisms to operate for long periods of time in environments which are adverse to most bearing lubricants, a series of tests is continuing to evaluate 38 grease type lubricants in R-4 size bearings in five different environments for a 1 year period. Four repetitions of each test are made to provide statistical samples. These tests were used to select four lubricants for 5 year tests in selected environments with five repetitions of each test for statistical samples. At the present time, 100 test sets are completed and 22 test sets are underway. Three 5 year tests were started in (1) continuous operation and (2) start-stop operation, with both in vacuum at ambient temperatures, and (3) continuous operation at 93.3 C. In the 1 year tests the best results to date in all environments were obtained with a high viscosity index perfluoroalkylpolyether (PFPE) grease.
Petito Boyce, Catherine; Sax, Sonja N; Cohen, Joel M
2017-08-01
Inhalation plays an important role in exposures to lead in airborne particulate matter in occupational settings, and particle size determines where and how much of airborne lead is deposited in the respiratory tract and how much is subsequently absorbed into the body. Although some occupational airborne lead particle size data have been published, limited information is available reflecting current workplace conditions in the U.S. To address this data gap, the Battery Council International (BCI) conducted workplace monitoring studies at nine lead acid battery manufacturing facilities (BMFs) and five secondary smelter facilities (SSFs) across the U.S. This article presents the results of the BCI studies focusing on the particle size distributions calculated from Personal Marple Impactor sampling data and particle deposition estimates in each of the three major respiratory tract regions derived using the Multiple-Path Particle Dosimetry model. The BCI data showed the presence of predominantly larger-sized particles in the work environments evaluated, with average mass median aerodynamic diameters (MMADs) ranging from 21-32 µm for the three BMF job categories and from 15-25 µm for the five SSF job categories tested. The BCI data also indicated that the percentage of lead mass measured at the sampled facilities in the submicron range (i.e., <1 µm, a particle size range associated with enhanced absorption of associated lead) was generally small. The estimated average percentages of lead mass in the submicron range for the tested job categories ranged from 0.8-3.3% at the BMFs and from 0.44-6.1% at the SSFs. Variability was observed in the particle size distributions across job categories and facilities, and sensitivity analyses were conducted to explore this variability. The BCI results were compared with results reported in the scientific literature. Screening-level analyses were also conducted to explore the overall degree of lead absorption potentially associated with the observed particle size distributions and to identify key issues associated with applying such data to set occupational exposure limits for lead.
Luomajoki, Hannu; Kool, Jan; de Bruin, Eling D; Airaksinen, Olavi
2008-01-01
Background To determine whether there is a difference between patients with low back pain and healthy controls in a test battery score for movement control of the lumbar spine. Methods This was a case control study, carried out in five outpatient physiotherapy practices in the German-speaking part of Switzerland. Twelve physiotherapists tested the ability of 210 subjects (108 patients with non-specific low back pain and 102 control subjects without back pain) to control their movements in the lumbar spine using a set of six tests. We observed the number of positive tests out of six (mean, standard deviation and 95% confidence interval of the mean). The significance of the differences between the groups was calculated with the Mann-Whitney U test, and p was set at <0.05. The effect size (d) between the groups was calculated, and d>0.8 was considered a large difference. Results On average, patients with low back pain had 2.21 (95% CI 1.94–2.48) positive tests and the healthy controls 0.75 (95% CI 0.55–0.95). The effect size was d = 1.18 (p < 0.001). There was a significant difference between acute and chronic (p < 0.01), as well as between subacute and chronic patient groups (p < 0.03), but not between acute and subacute patient groups (p > 0.7). Conclusion This is the first study demonstrating a significant difference between patients with low back pain and subjects without back pain regarding their ability to actively control the movements of the low back. The effect size between patients with low back pain and healthy controls in movement control is large. PMID:19108735
Learning and liking an artificial musical system: Effects of set size and repeated exposure
Loui, Psyche; Wessel, David
2009-01-01
We report an investigation of humans' musical learning ability using a novel musical system. We designed an artificial musical system based on the Bohlen-Pierce scale, a scale very different from Western music. Melodies were composed from chord progressions in the new scale by applying the rules of a finite-state grammar. After exposing participants to sets of melodies, we conducted listening tests to assess learning, including recognition tests, generalization tests, and subjective preference ratings. In Experiment 1, participants were presented with 15 melodies 27 times each. Forced choice results showed that participants were able to recognize previously encountered melodies and generalize their knowledge to new melodies, suggesting internalization of the musical grammar. Preference ratings showed no differentiation among familiar, new, and ungrammatical melodies. In Experiment 2, participants were given 10 melodies 40 times each. Results showed superior recognition but unsuccessful generalization. Additionally, preference ratings were significantly higher for familiar melodies. Results from the two experiments suggest that humans can internalize the grammatical structure of a new musical system following exposure to a sufficiently large set size of melodies, but musical preference results from repeated exposure to a small number of items. This dissociation between grammar learning and preference will be further discussed. PMID:20151034
Comparison of cavitation bubbles evolution in viscous media
NASA Astrophysics Data System (ADS)
Jasikova, Darina; Schovanec, Petr; Kotek, Michal; Kopecky, Vaclav
2018-06-01
Many types of liquids spanning a wide range of viscosities have been tested to form a single cavitation bubble. The purpose of these experiments was to observe the behaviour of cavitation bubbles in media with different absorbances. Most such methods are based on a spark that induces the superheat limit of the liquid; here we used an arrangement of the laser-induced breakdown (LIB) method. We describe the cavitation settings that affect bubble size in media with different absorbances. We visualized the cavitation bubble with a 60 kHz high-speed camera, using a shadowgraphy setup for the bubble visualization. We observed the time development and extinction of the bubble in various media; the bubble in silicone oil was extremely small, owing to the absorbance of the silicone oil.
Optimal placement and sizing of wind / solar based DG sources in distribution system
NASA Astrophysics Data System (ADS)
Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng
2017-06-01
Proper placement and sizing of Distributed Generation (DG) in a distribution system can yield the maximum potential benefits. This paper proposes a quantum particle swarm optimization (QPSO) based placement and sizing approach for wind turbine generation units (WTGU) and photovoltaic (PV) arrays, aimed at real power loss reduction and voltage stability improvement of the distribution system. Performance models of the wind and solar generation systems are described and classified into PQ, PQ(V), and PI type models in the power flow. Because the placement of WTGU and PV based DGs in a distribution system is geographically restrictive, the admissible area and the DG capacity limits of each bus in that area need to be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter for estimation of test accuracy and prevalence, assumes conditionally independent tests, constant accuracy across populations, and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and estimating prevalence based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
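The binomial-versus-finite-population contrast can be illustrated with a toy prevalence-only calculation; a perfect test is assumed for brevity, unlike the paper's models, which also estimate sensitivity and specificity without a gold standard, and the herd numbers are invented.

```python
import numpy as np
from scipy import stats

N, n, k = 150, 100, 30          # herd size, sample size, positives observed

# Binomial model: uniform prior on prevalence, grid posterior.
p_grid = np.linspace(0.001, 0.999, 999)
post_binom = stats.binom.pmf(k, n, p_grid)
post_binom /= post_binom.sum()
mean_b = np.sum(post_binom * p_grid)
sd_binom = np.sqrt(np.sum(post_binom * (p_grid - mean_b) ** 2))

# Finite-population model: uniform prior on K, the number of positives in the herd.
K_grid = np.arange(k, N - (n - k) + 1)                  # K consistent with the sample
post_hyper = stats.hypergeom.pmf(k, N, K_grid, n)       # sampling without replacement
post_hyper /= post_hyper.sum()
prev = K_grid / N
mean_h = np.sum(post_hyper * prev)
sd_hyper = np.sqrt(np.sum(post_hyper * (prev - mean_h) ** 2))

print(f"posterior SD of prevalence: binomial {sd_binom:.3f}, hypergeometric {sd_hyper:.3f}")
```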
NASA Astrophysics Data System (ADS)
Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua
2018-06-01
Neural network-based first-principles method for predicting heat of formation (HOF) was previously demonstrated to be able to achieve chemical accuracy in a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with the increase in molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small molecule representatives. A model that can capture the fundamental physics would be able to predict large and complex molecules from information extracted only from a small molecules database. To this end, a size-independent, multi-step multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. And in particular, the calculation errors for larger molecules are drastically reduced to the same magnitudes as those of the smaller molecules. Specifically, the method is based on a 164-molecule database that consists of molecules made of hydrogen and carbon elements. 4 molecular descriptors were selected to encode molecule's characteristics, among which raw HOF calculated from B3LYP and the molecular size are also included. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.
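A stripped-down regression-plus-neural-network correction can be sketched with scikit-learn; the descriptor names, data file, and network size below are assumptions, and the pipeline is far simpler than the multi-step method described above.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("hydrocarbons.csv")                        # hypothetical molecule table
X = df[["hof_b3lyp", "n_atoms", "n_double_bonds", "zpe"]].values   # assumed descriptors
y = df["hof_experimental"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 1: multivariable linear regression captures the size-dependent bias.
lin = LinearRegression().fit(X_tr, y_tr)
resid_tr = y_tr - lin.predict(X_tr)

# Step 2: a small neural network learns the remaining non-linear residual.
net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000,
                   random_state=0).fit(X_tr, resid_tr)

pred = lin.predict(X_te) + net.predict(X_te)
mad_raw = np.mean(np.abs(y_te - X_te[:, 0]))                # error of raw B3LYP HOF
mad_corr = np.mean(np.abs(y_te - pred))                     # error after correction
print(f"MAD raw: {mad_raw:.2f}  MAD corrected: {mad_corr:.2f} (kcal/mol)")
```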
French, Helen P; Fitzpatrick, Martina; FitzGerald, Oliver
2011-12-01
To compare the responsiveness of two self-report measures and three physical performance measures of function following physiotherapy for osteoarthritis of the knee. Single centre study in acute hospital setting. Patients referred for physiotherapy with osteoarthritis of the knee were recruited. The Western Ontario and McMaster Universities (WOMAC), Lequesne Algofunctional Index (LAI), timed-up-and-go test (TUGT), timed-stand test (TST) and six-minute walk test (6MWT) were administered at first and final physiotherapy visits. Wilcoxon Signed Rank tests were used to determine the effect of physiotherapy on each outcome. Responsiveness was calculated using effect size, standardised response mean and a median-based measure of responsiveness due to some outlying data. Thirty-nine patients with a mean age of 65.3 (standard deviation 6.9) years were investigated before and after a course of exercise-based physiotherapy. There was a significant improvement in all outcomes except the WOMAC scores. All measures demonstrated small effect sizes for all statistics (<0.50), except the 6MWT which was in the moderate range for one of the indices (standardised response mean 0.54). The LAI was more responsive than the WOMAC total score and the WOMAC physical function subscale for all responsiveness statistics, whilst the 6MWT was more responsive than the TST and the TUGT. The median-based effect size index produced the smallest effect sizes for all measures (0.1 to 0.43). These results can be used to guide decision making about which physical function outcome measures should be used to evaluate effectiveness of rehabilitation of people with osteoarthritis of the knee at group level in a clinical setting. Copyright © 2010 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
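For reference, the responsiveness indices named above can be computed from paired pre/post scores as below; the median-based index shown (median change divided by the IQR of change) is one plausible form and may differ from the exact formula used in the study, and the data are simulated.

```python
import numpy as np

def responsiveness(pre, post):
    change = post - pre
    es  = change.mean() / pre.std(ddof=1)       # effect size (baseline SD; one definition)
    srm = change.mean() / change.std(ddof=1)    # standardised response mean
    iqr = np.subtract(*np.percentile(change, [75, 25]))
    med = np.median(change) / iqr if iqr else np.nan   # assumed median-based index
    return es, srm, med

rng = np.random.default_rng(1)
pre = rng.normal(400, 80, 39)                   # e.g. 6MWT distances (made up)
post = pre + rng.normal(25, 60, 39)
print(responsiveness(pre, post))
```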
SkData: data sets and algorithm evaluation protocols in Python
NASA Astrophysics Data System (ADS)
Bergstra, James; Pinto, Nicolas; Cox, David D.
2015-01-01
Machine learning benchmark data sets come in all shapes and sizes, whereas classification algorithms assume sanitized input, such as (x, y) pairs with vector-valued input x and integer class label y. Researchers and practitioners know all too well how tedious it can be to get from the URL of a new data set to a NumPy ndarray suitable for e.g. pandas or sklearn. The SkData library handles that work for a growing number of benchmark data sets (small and large) so that one-off in-house scripts for downloading and parsing data sets can be replaced with library code that is reliable, community-tested, and documented. The SkData library also introduces an open-ended formalization of training and testing protocols that facilitates direct comparison with published research. This paper describes the usage and architecture of the SkData library.
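The kind of one-off script such a library is meant to replace looks roughly like the following; this is not SkData's actual API, just the download-sanitize-evaluate pattern described above, with a hypothetical URL.

```python
import io, urllib.request
import numpy as np
from sklearn.linear_model import LogisticRegression

URL = "https://example.org/benchmark.csv"          # hypothetical data set location

def fetch_xy(url=URL):
    raw = urllib.request.urlopen(url).read().decode()
    data = np.genfromtxt(io.StringIO(raw), delimiter=",")
    return data[:, :-1], data[:, -1].astype(int)   # sanitized (x, y) pairs

def evaluate(model, x, y, train_frac=0.8, seed=0):
    # A fixed train/test protocol so results are comparable across models.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(train_frac * len(y))
    tr, te = idx[:cut], idx[cut:]
    model.fit(x[tr], y[tr])
    return model.score(x[te], y[te])

x, y = fetch_xy()
print(evaluate(LogisticRegression(max_iter=1000), x, y))
```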
Impact of Steel Fiber Size and Shape on the Mechanical Properties of Ultra-High Performance Concrete
2015-08-01
Appendix B: DTT Specimens Pretest/Posttest; Appendix C: … …type and then subsequently compared to that material's UCS. Figure 6 shows the splitting tensile testing set-up, both pretest and posttest. Figures …9 show the actual test configuration, pretest and posttest, respectively. This configuration utilized a chucking mechanism suited to the shape of…
Is Preoperative Biochemical Testing for Pheochromocytoma Necessary for All Adrenal Incidentalomas?
Jun, Joo Hyun; Ahn, Hyun Joo; Lee, Sangmin M.; Kim, Jie Ae; Park, Byung Kwan; Kim, Jee Soo; Kim, Jung Han
2015-01-01
Abstract This study examined whether imaging phenotypes obtained from computed tomography (CT) can replace biochemical tests to exclude pheochromocytoma among adrenal incidentalomas (AIs) in the preoperative setting. We retrospectively reviewed the medical records of all patients (n = 251) who were admitted for operations and underwent adrenal-protocol CT for an incidentally discovered adrenal mass from January 2011 to December 2012. Various imaging phenotypes were assessed for their screening power for pheochromocytoma. Final diagnosis was confirmed by biopsy, biochemical tests, and follow-up CT. Pheochromocytomas showed similar imaging phenotypes as malignancies, but were significantly different from adenomas. Unenhanced attenuation values ≤10 Hounsfield units (HU) showed the highest specificity (97%) for excluding pheochromocytoma as a single phenotype. A combination of size ≤3 cm, unenhanced attenuation values ≤ 10 HU, and absence of suspicious morphology showed 100% specificity for excluding pheochromocytoma. Routine noncontrast CT can be used as a screening tool for pheochromocytoma by combining 3 imaging phenotypes: size ≤3 cm, unenhanced attenuation values ≤10 HU, and absence of suspicious morphology, and may substitute for biochemical testing in the preoperative setting. PMID:26559265
Malacarne, D; Pesenti, R; Paolucci, M; Parodi, S
1993-01-01
For a database of 826 chemicals tested for carcinogenicity, we fragmented the structural formula of the chemicals into all possible contiguous-atom fragments with size between two and eight (nonhydrogen) atoms. The fragmentation was obtained using a new software program based on graph theory. We used 80% of the chemicals as a training set and 20% as a test set. The two sets were obtained by random sorting. From the training sets, an average (8 computer runs with independently sorted chemicals) of 315 different fragments were significantly (p < 0.125) associated with carcinogenicity or lack thereof. Even using this relatively low level of statistical significance, 23% of the molecules of the test sets lacked significant fragments. For 77% of the molecules of the test sets, we used the presence of significant fragments to predict carcinogenicity. The average level of accuracy of the predictions in the test sets was 67.5%. Chemicals containing only positive fragments were predicted with an accuracy of 78.7%. The level of accuracy was around 60% for chemicals characterized by contradictory fragments or only negative fragments. In a parallel manner, we performed eight paired runs in which carcinogenicity was attributed randomly to the molecules of the training sets. The fragments generated by these pseudo-training sets were devoid of any predictivity in the corresponding test sets. Using an independent software program, we confirmed (for the complex biological endpoint of carcinogenicity) the validity of a structure-activity relationship approach of the type proposed by Klopman and Rosenkranz with their CASE program. PMID:8275991
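Enumerating contiguous-atom fragments of a molecular graph can be sketched with networkx as below; the brute-force subset search is only practical for small molecules, and the canonical labelling is far cruder than what a real SAR system such as CASE would use.

```python
# Toy enumeration of connected induced subgraphs of 2-8 heavy atoms.
from itertools import combinations
import networkx as nx

def fragments(mol, min_atoms=2, max_atoms=8):
    found = set()
    for k in range(min_atoms, min(max_atoms, mol.number_of_nodes()) + 1):
        for nodes in combinations(mol.nodes, k):
            sub = mol.subgraph(nodes)
            if nx.is_connected(sub):
                # Crude label: sorted element symbols plus bond count.
                label = ("".join(sorted(mol.nodes[n]["elem"] for n in nodes)),
                         sub.number_of_edges())
                found.add(label)
    return found

# Acetic acid heavy-atom skeleton: C-C(=O)-O
mol = nx.Graph()
mol.add_nodes_from([(0, {"elem": "C"}), (1, {"elem": "C"}),
                    (2, {"elem": "O"}), (3, {"elem": "O"})])
mol.add_edges_from([(0, 1), (1, 2), (1, 3)])
print(fragments(mol))
```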
Full-field digital mammography image data storage reduction using a crop tool.
Kang, Bong Joo; Kim, Sung Hun; An, Yeong Yi; Choi, Byung Gil
2015-05-01
The storage requirements for full-field digital mammography (FFDM) in a picture archiving and communication system are significant, so methods to reduce the data set size are needed. A FFDM crop tool for this purpose was designed, implemented, and tested. A total of 1,651 screening mammography cases with bilateral FFDMs were included in this study. The images were cropped using a DICOM editor while maintaining image quality. The cases were evaluated according to the breast volume (1/4, 2/4, 3/4, and 4/4) in the craniocaudal view. The image sizes between the cropped image group and the uncropped image group were compared. The overall image quality and reader's preference were independently evaluated by the consensus of two radiologists. Digital storage requirements for sets of four uncropped to cropped FFDM images were reduced by 3.8 to 82.9 %. The mean reduction rates according to the 1/4-4/4 breast volumes were 74.7, 61.1, 38, and 24 %, indicating that the lower the breast volume, the smaller the size of the cropped data set. The total image data set size was reduced from 87 to 36.7 GB, or a 57.7 % reduction. The overall image quality and the reader's preference for the cropped images were higher than those of the uncropped images. FFDM mammography data storage requirements can be significantly reduced using a crop tool.
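A minimal crop-and-save sketch with pydicom is shown below; the threshold-based bounding box is a stand-in for whatever the clinical crop tool does, and the code assumes uncompressed, single-frame pixel data with hypothetical file names.

```python
import numpy as np
import pydicom

def crop_to_breast(in_path, out_path, threshold=50, margin=20):
    ds = pydicom.dcmread(in_path)
    img = ds.pixel_array
    rows, cols = np.where(img > threshold)          # pixels above assumed background level
    r0, r1 = max(rows.min() - margin, 0), min(rows.max() + margin, img.shape[0])
    c0, c1 = max(cols.min() - margin, 0), min(cols.max() + margin, img.shape[1])
    cropped = np.ascontiguousarray(img[r0:r1, c0:c1])

    ds.Rows, ds.Columns = int(cropped.shape[0]), int(cropped.shape[1])
    ds.PixelData = cropped.tobytes()                # valid only for uncompressed data
    ds.save_as(out_path)
    return img.nbytes, cropped.nbytes

orig, new = crop_to_breast("ffdm_rcc.dcm", "ffdm_rcc_cropped.dcm")
print(f"stored pixel data reduced by {100 * (1 - new / orig):.1f}%")
```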
Baron, Danielle M; Ramirez, Alejandro J; Bulitko, Vadim; Madan, Christopher R; Greiner, Ariel; Hurd, Peter L; Spetch, Marcia L
2015-01-01
Visiting multiple locations and returning to the start via the shortest route, referred to as the traveling salesman (or salesperson) problem (TSP), is a valuable skill for both humans and non-humans. In the current study, pigeons were trained with increasing set sizes of up to six goals, with each set size presented in three distinct configurations, until consistency in route selection emerged. After training at each set size, the pigeons were tested with two novel configurations. All pigeons acquired routes that were significantly more efficient (i.e., shorter in length) than expected by chance selection of the goals. On average, the pigeons also selected routes that were more efficient than expected based on a local nearest-neighbor strategy and were as efficient as the average route generated by a crossing-avoidance strategy. Analysis of the routes taken indicated that they conformed to both a nearest-neighbor and a crossing-avoidance strategy significantly more often than expected by chance. Both the time taken to visit all goals and the actual distance traveled decreased from the first to the last trials of training in each set size. On the first trial with novel configurations, average efficiency was higher than chance, but was not higher than expected from a nearest-neighbor or crossing-avoidance strategy. These results indicate that pigeons can learn to select efficient routes on a TSP problem.
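The two route strategies discussed above can be made concrete for a small goal set: a nearest-neighbour tour compared against the brute-force shortest closed tour. The coordinates are arbitrary illustrative values, not the configurations used with the pigeons.

```python
from itertools import permutations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, order):
    # Closed tour: return to the start after the last goal.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points, start=0):
    order, left = [start], set(range(len(points))) - {start}
    while left:
        nxt = min(left, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        left.remove(nxt)
    return order

goals = [(0, 0), (2, 1), (5, 0), (4, 4), (1, 3), (3, 2)]   # six goals, as in training
nn = nearest_neighbour(goals)
best = min(permutations(range(1, len(goals))),
           key=lambda p: tour_length(goals, (0,) + p))
print(tour_length(goals, nn), tour_length(goals, (0,) + best))
```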
Girard, Todd A; Wilkins, Leanne K; Lyons, Kathleen M; Yang, Lixia; Christensen, Bruce K
2018-05-31
Introduction Working-memory (WM) is a core cognitive deficit among individuals with Schizophrenia Spectrum Disorders (SSD). However, the underlying cognitive mechanisms of this deficit are less known. This study applies a modified version of the Corsi Block Test to investigate the role of proactive interference in visuospatial WM (VSWM) impairment in SSD. Methods Healthy and SSD participants completed a modified version of the Corsi Block Test involving both high (typical ascending set size from 4 to 7 items) and low (descending set size from 7 to 4 items) proactive interference conditions. Results The results confirmed that the SSD group performed worse overall relative to a healthy comparison group. More importantly, the SSD group demonstrated greater VSWM scores under low (Descending) versus high (Ascending) proactive interference; this pattern is opposite to that of healthy participants. Conclusions This differential pattern of performance supports that proactive interference associated with the traditional administration format contributes to VSWM impairment in SSD. Further research investigating associated neurocognitive mechanisms and the contribution of proactive interference across other domains of cognition in SSD is warranted.
Effect of Particle Size Distribution on Wall Heat Flux in Pulverized-Coal Furnaces and Boilers
NASA Astrophysics Data System (ADS)
Lu, Jun
A mathematical model of combustion and heat transfer within a cylindrical enclosure firing pulverized coal has been developed and tested against two sets of measured data (one is 1993 WSU/DECO Pilot test data, the other one is the International Flame Research Foundation 1964 Test (Beer, 1964)) and one independent code FURN3D from the Argonne National Laboratory (Ahluwalia and IM, 1992). The model called PILC assumes that the system is a sequence of many well-stirred reactors. A char burnout model combining diffusion to the particle surface, pore diffusion, and surface reaction is employed for predicting the char reaction, heat release, and evolution of char. The ash formation model included relates the ash particle size distribution to the particle size distribution of pulverized coal. The optical constants of char and ash particles are calculated from dispersion relations derived from reflectivity, transmissivity and extinction measurements. The Mie theory is applied to determine the extinction and scattering coefficients. The radiation heat transfer is modeled using the virtual zone method, which leads to a set of simultaneous nonlinear algebraic equations for the temperature field within the furnace and on its walls. This enables the heat fluxes to be evaluated. In comparisons with the experimental data and one independent code, the model is successful in predicting gas temperature, wall temperature, and wall radiative flux. When the coal with greater fineness is burnt, the particle size of pulverized coal has a consistent influence on combustion performance: the temperature peak was higher and nearer to burner, the radiation flux to combustor wall increased, and also the absorption and scattering coefficients of the combustion products increased. The effect of coal particle size distribution on absorption and scattering coefficients and wall heat flux is significant. But there is only a small effect on gas temperature and fuel fraction burned; it is speculated that this may be a characteristic special to the test combustor used.
Metcalfe, Kristian; Vaughan, Gregory; Vaz, Sandrine; Smith, Robert J
2015-12-01
Marine protected areas (MPAs) are the cornerstone of most marine conservation strategies, but the effectiveness of each one partly depends on its size and distance to other MPAs in a network. Despite this, current recommendations on ideal MPA size and spacing vary widely, and data are lacking on how these constraints might influence the overall spatial characteristics, socio-economic impacts, and connectivity of the resultant MPA networks. To address this problem, we tested the impact of applying different MPA size constraints in English waters. We used the Marxan spatial prioritization software to identify a network of MPAs that met conservation feature targets, whilst minimizing impacts on fisheries; modified the Marxan outputs with the MinPatch software to ensure each MPA met a minimum size; and used existing data on the dispersal distances of a range of species found in English waters to investigate the likely impacts of such spatial constraints on the region's biodiversity. Increasing MPA size had little effect on total network area or the location of priority areas, but as MPA size increased, fishing opportunity cost to stakeholders increased. In addition, as MPA size increased, the number of closely connected sets of MPAs in networks and the average distance between neighboring MPAs decreased, which consequently increased the proportion of the planning region that was isolated from all MPAs. These results suggest networks containing large MPAs would be more viable for the majority of the region's species that have small dispersal distances, but dispersal between MPA sets and spill-over of individuals into unprotected areas would be reduced. These findings highlight the importance of testing the impact of applying different MPA size constraints because there are clear trade-offs that result from the interaction of size, number, and distribution of MPAs in a network. © 2015 Society for Conservation Biology.
Shape Comparison Between 0.4–2.0 and 20–60 µm Cement Particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzer, L.; Flatt, R; Erdogan, S
Portland cement powder, ground from much larger clinker particles, has a particle size distribution from about 0.1 to 100 µm. An important question is then: does particle shape depend on particle size? For the same cement, X-ray computed tomography has been used to examine the 3-D shape of particles in the 20-60 µm sieve range, and focused ion beam nanotomography has been used to examine the 3-D shape of cement particles found in the 0.4-2.0 µm sieve range. By comparing various kinds of computed particle shape data for each size class, the conclusion is made that, within experimental uncertainty, both size classes are prolate, but the smaller size class particles, 0.4-2.0 µm, tend to be somewhat more prolate than the 20-60 µm size class. The practical effect of this shape difference on the set-point was assessed using the Virtual Cement and Concrete Testing Laboratory to simulate the hydration of five cement powders. Results indicate that nonspherical aspect ratio is more important in determining the set-point than are the actual shape details.
Kalderstam, Jonas; Edén, Patrik; Bendahl, Pär-Ola; Strand, Carina; Fernö, Mårten; Ohlsson, Mattias
2013-06-01
The concordance index (c-index) is the standard way of evaluating the performance of prognostic models in the presence of censored data. Constructing prognostic models using artificial neural networks (ANNs) is commonly done by training on error functions which are modified versions of the c-index. Our objective was to demonstrate the capability of training directly on the c-index and to evaluate our approach compared to the Cox proportional hazards model. We constructed a prognostic model using an ensemble of ANNs which were trained using a genetic algorithm. The individual networks were trained on a non-linear artificial data set divided into a training and test set both of size 2000, where 50% of the data was censored. The ANNs were also trained on a data set consisting of 4042 patients treated for breast cancer spread over five different medical studies, 2/3 used for training and 1/3 used as a test set. A Cox model was also constructed on the same data in both cases. The two models' c-indices on the test sets were then compared. The ranking performance of the models is additionally presented visually using modified scatter plots. Cross validation on the cancer training set did not indicate any non-linear effects between the covariates. An ensemble of 30 ANNs with one hidden neuron was therefore used. The ANN model had almost the same c-index score as the Cox model (c-index=0.70 and 0.71, respectively) on the cancer test set. Both models identified similarly sized low risk groups with at most 10% false positives, 49 for the ANN model and 60 for the Cox model, but repeated bootstrap runs indicate that the difference was not significant. A significant difference could however be seen when applied on the non-linear synthetic data set. In that case the ANN ensemble managed to achieve a c-index score of 0.90 whereas the Cox model failed to distinguish itself from the random case (c-index=0.49). We have found empirical evidence that ensembles of ANN models can be optimized directly on the c-index. Comparison with a Cox model indicates that near identical performance is achieved on a real cancer data set while on a non-linear data set the ANN model is clearly superior. Copyright © 2013 Elsevier B.V. All rights reserved.
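The evaluation step rests on Harrell's concordance index for censored data, which can be computed with the lifelines utility as below; the times, censoring indicators, and scores are random placeholders, so the expected c-index is about 0.5.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 2000
times = rng.exponential(10, n)                 # follow-up times (synthetic)
observed = rng.random(n) < 0.5                 # about 50% censoring, as in the test set
risk_scores = rng.normal(size=n)               # e.g. ensemble-ANN or Cox risk output

# The utility expects scores where higher means longer survival,
# so a risk score is negated before being passed in.
cindex = concordance_index(times, -risk_scores, observed)
print(f"c-index: {cindex:.2f}")                # ~0.5 for random scores
```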
Determination of Slake Durability Index (Sdi) Values on Different Shape of Laminated Marl Samples
NASA Astrophysics Data System (ADS)
Ankara, Hüseyin; Çiçek, Fatma; Talha Deniz, İsmail; Uçak, Emre; Yerel Kandemir, Süheyla
2016-10-01
The slake durability index (SDI) test is widely used to determine the disintegration characteristic of the weak and clay-bearing rocks in geo-engineering problems. However, due to the different shapes of sample pieces, such as, irregular shapes displayed mechanical breakages in the slaking process, the SDI test has some limitations that affect the index values. In addition, shape and surface roughness of laminated marl samples have a severe influence on the SDI. In this study, a new sample preparation method called Pasha Method was used to prepare spherical specimens from the laminated marl collected from Seyitomer collar (SLI). Moreover the SDI tests were performed on equal size and weight specimens: three sets with different shapes were used. The three different sets were prepared as the test samples which had sphere shape, parallel to the layers in irregular shape, and vertical to the layers in irregular shape. Index values were determined for the three different sets subjected to the SDI test for 4 cycles. The index values at the end of fourth cycle were found to be 98.43, 98.39 and 97.20 %, respectively. As seen, the index values of the sphere sample set were found to be higher than irregular sample sets.
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) were analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the initial mixing percentages initially known. Given the experimental nature of the work and dry mixing of materials, geochemical conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K, Si) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated value that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size affects are consistent across sources and tracer properties which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
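The core mixing-model step, estimating source proportions from tracer concentrations by constrained least squares, can be sketched as follows; the tracer values are placeholders rather than the experiment's geochemistry, and the sum-to-one constraint is imposed with a heavily weighted extra row.

```python
import numpy as np
from scipy.optimize import lsq_linear

sources = np.array([[120., 35., 2.1],       # source A: tracer concentrations (e.g. Sr, Rb, Ti)
                    [ 90., 55., 1.4],       # source B
                    [200., 20., 3.0]])      # source C
mixture = np.array([127., 38., 2.07])       # measured mixture (built from 0.5/0.3/0.2)

# Columns of A are sources, rows are tracers; the extra row enforces sum(x) = 1.
A = np.vstack([sources.T, 1e3 * np.ones(sources.shape[0])])
b = np.append(mixture, 1e3)
res = lsq_linear(A, b, bounds=(0, 1))
props = res.x / res.x.sum()
print(np.round(props, 3))                   # expected ~ [0.5, 0.3, 0.2] by construction
```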
Pearce, Michael; Hee, Siew Wan; Madan, Jason; Posch, Martin; Day, Simon; Miller, Frank; Zohar, Sarah; Stallard, Nigel
2018-02-08
Most confirmatory randomised controlled clinical trials (RCTs) are designed with specified power, usually 80% or 90%, for a hypothesis test conducted at a given significance level, usually 2.5% for a one-sided test. Approval of the experimental treatment by regulatory agencies is then based on the result of such a significance test, together with other information, to balance the risk of adverse events against the benefit of the treatment to future patients. In the setting of a rare disease, recruiting sufficient patients to achieve conventional error rates for clinically reasonable effect sizes may be infeasible, suggesting that the decision-making process should reflect the size of the target population. We considered the use of a decision-theoretic value of information (VOI) method to obtain the optimal sample size and significance level for confirmatory RCTs in a range of settings. We assume the decision maker represents society. For simplicity, we assume the primary endpoint to be normally distributed with unknown mean following some normal prior distribution representing information on the anticipated effectiveness of the therapy available before the trial. The method is illustrated by an application in an RCT in haemophilia A. We explicitly specify the utility in terms of improvement in primary outcome and compare this with the costs of treating patients, both financial and in terms of potential harm, during the trial and in the future. The optimal sample size for the clinical trial decreases as the size of the population decreases. For non-zero cost of treating future patients, either monetary or in terms of potential harmful effects, stronger evidence is required for approval as the population size increases, though this is not the case if the costs of treating future patients are ignored. Decision-theoretic VOI methods offer a flexible approach with both type I error rate and power (or equivalently trial sample size) depending on the size of the future population for whom the treatment under investigation is intended. This might be particularly suitable for small populations when there is considerable information about the patient population.
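A minimal Monte Carlo sketch of the value-of-information idea is given below. The prior, cost terms, population size and search grid are illustrative assumptions, not the paper's utility specification; the point is only that the expected net gain can be maximized jointly over sample size and critical value.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_net_gain(n, z_crit, pop_size, mu0=0.2, tau=0.3, sigma=1.0,
                      cost_trial_patient=0.05, cost_future_patient=0.02,
                      n_sim=20_000):
    theta = rng.normal(mu0, tau, n_sim)            # true effect drawn from the prior
    se = sigma * np.sqrt(4.0 / n)                  # SE of a two-arm estimate, total size n
    theta_hat = rng.normal(theta, se)              # trial estimate of the effect
    approve = theta_hat / se > z_crit              # one-sided significance test
    gain = pop_size * (theta - cost_future_patient) * approve
    return gain.mean() - cost_trial_patient * n

# Joint grid search over sample size and critical value for a small population.
grid = [(n, z) for n in range(50, 1001, 50) for z in np.arange(1.0, 3.01, 0.25)]
best_n, best_z = max(grid, key=lambda nz: expected_net_gain(*nz, pop_size=5_000))
print(best_n, best_z)
```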
The perceptual processing capacity of summary statistics between and within feature dimensions
Attarha, Mouna; Moore, Cathleen M.
2015-01-01
The simultaneous–sequential method was used to test the processing capacity of statistical summary representations both within and between feature dimensions. Sixteen gratings varied with respect to their size and orientation. In Experiment 1, the gratings were equally divided into four separate smaller sets, one of which had a mean size that was larger or smaller than that of the other three sets, and one of which had a mean orientation that was tilted more leftward or rightward. The task was to report the mean size and orientation of the oddball sets. The task therefore required four summary representations for size and another four for orientation. The sets were presented at the same time in the simultaneous condition or across two temporal frames in the sequential condition. Experiment 1 showed evidence of a sequential advantage, suggesting that the system may be limited with respect to establishing multiple within-feature summaries. Experiment 2 eliminated the possibility that some aspect of the task, other than averaging, was contributing to this observed limitation. In Experiment 3, the same 16 gratings appeared as one large superset, and therefore the task only required one summary representation for size and another one for orientation. Equal simultaneous–sequential performance indicated that between-feature summaries are capacity free. These findings challenge the view that within-feature summaries drive a global sense of visual continuity across areas of the peripheral visual field, and suggest a shift in focus to seeking an understanding of how between-feature summaries in one area of the environment control behavior. PMID:26360153
Park, Hyung-Bum; Han, Ji-Eun; Hyun, Joo-Seok
2015-05-01
An expressionless face is often perceived as rude whereas a smiling face is considered hospitable. Repetitive exposure to such perceptions may have developed a stereotype of categorizing an expressionless face as expressing negative emotion. To test this idea, we displayed a search array where the target was an expressionless face and the distractors were either smiling or frowning faces. We manipulated set size. Search reaction times were delayed with frowning distractors. Delays became more evident as the set size increased. We also devised a short-term comparison task where participants compared two sequential sets of expressionless, smiling, and frowning faces. Detection of an expression change across the sets was highly inaccurate when the change was made between a frowning and an expressionless face. These results indicate that subjects confused the emotions expressed by frowning and expressionless faces, suggesting that it is difficult to distinguish expressionless faces from frowning faces. Copyright © 2015 Elsevier B.V. All rights reserved.
Parallel digital forensics infrastructure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, Lorie M.; Duggan, David Patrick
2009-10-01
This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the resulting parallel digital forensics (PDF) infrastructure.
The effect of size and competition on tree growth rate in old-growth coniferous forests
Das, Adrian
2012-01-01
Tree growth and competition play central roles in forest dynamics. Yet models of competition often neglect important variation in species-specific responses. Furthermore, functions used to model changes in growth rate with size do not always allow for potential complexity. Using a large data set from old-growth forests in California, models were parameterized relating growth rate to tree size and competition for four common species. Several functions relating growth rate to size were tested. Competition models included parameters for tree size, competitor size, and competitor distance. Competitive strength was allowed to vary by species. The best ranked models (using Akaike’s information criterion) explained between 18% and 40% of the variance in growth rate, with each species showing a strong response to competition. Models indicated that relationships between competition and growth varied substantially among species. The results also suggested that the relationship between growth rate and tree size can be complex and that how we model it can affect not only our ability to detect that complexity but also whether we obtain misleading results. In this case, for three of four species, the best model captured an apparent and unexpected decline in potential growth rate for the smallest trees in the data set.
Lamont, Scott; Brunero, Scott
2018-05-19
Workplace violence prevalence has attracted significant attention within the international nursing literature. Little attention to non-mental health settings and a lack of evaluation rigor have been identified within review literature. To examine the effects of a workplace violence training program in relation to risk assessment and management practices, de-escalation skills, breakaway techniques, and confidence levels, within an acute hospital setting. A quasi-experimental study of nurses using pretest-posttest measurements of educational objectives and confidence levels, with a two-week follow-up. A 440-bed metropolitan tertiary referral hospital in Sydney, Australia. Nurses working in specialties identified as 'high risk' for violence. A pre-post-test design was used with participants attending a one-day workshop. The workshop evaluation comprised the use of two validated questionnaires: the Continuing Professional Development Reaction questionnaire, and the Confidence in Coping with Patient Aggression Instrument. Descriptive and inferential statistics were calculated. The paired t-test was used to assess the statistical significance of changes in the clinical behaviour intention and confidence scores from pre- to post-intervention. Cohen's d effect sizes were calculated to determine the extent of the significant results. Seventy-eight participants completed both pre- and post-workshop evaluation questionnaires. Statistically significant increases in behaviour intention scores were found in fourteen of the fifteen constructs relating to the three broad workshop objectives, and confidence ratings, with medium to large effect sizes observed in some constructs. A significant increase in overall confidence in coping with patient aggression was also found post-test, with a large effect size. Positive results were observed from the workplace violence training. Training needs to be complemented by a multi-faceted organisational approach which includes governance, quality and review processes. Copyright © 2018 Elsevier Ltd. All rights reserved.
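For readers unfamiliar with the analysis described above, the sketch below shows how a paired t-test and a paired-data Cohen's d can be computed in Python; the pre/post scores are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Invented pre/post behaviour-intention scores for eight participants.
pre  = np.array([3.1, 2.8, 3.5, 3.0, 2.6, 3.3, 2.9, 3.2])
post = np.array([4.0, 3.6, 4.1, 3.8, 3.2, 4.2, 3.5, 3.9])

t_stat, p_value = stats.ttest_rel(post, pre)      # paired t-test
diff = post - pre
d_z = diff.mean() / diff.std(ddof=1)              # one common paired-data effect size
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d_z = {d_z:.2f}")
```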
Ease of Access to List Items in Short-Term Memory Depends on the Order of the Recognition Probes
ERIC Educational Resources Information Center
Lange, Elke B.; Cerella, John; Verhaeghen, Paul
2011-01-01
We report data from 4 experiments using a recognition design with multiple probes to be matched to specific study positions. Items could be accessed rapidly, independent of set size, when the test order matched the study order (forward condition). When the order of testing was random, backward, or in a prelearned irregular sequence (reordered…
Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson
2008-01-01
Purpose To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting London Vision Clinic, London, United Kingdom. Methods A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036) attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
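The repeatability statistics reported above can be computed from repeated scan sets along the following lines; the sketch assumes the common definition of the repeatability coefficient as 2.77 times the within-set standard deviation, and the measurements are invented.

```python
import numpy as np

# Two invented sets of ten repeated width measurements (mm) of one test object.
scan_sets = np.array([
    [10.78, 10.81, 10.80, 10.79, 10.82, 10.80, 10.81, 10.79, 10.80, 10.80],
    [10.80, 10.79, 10.81, 10.80, 10.78, 10.82, 10.80, 10.81, 10.79, 10.80],
])

within_sd = np.sqrt(scan_sets.var(axis=1, ddof=1).mean())  # pooled within-set SD
repeatability = 2.77 * within_sd                           # 1.96 * sqrt(2) * within-set SD
cv_percent = 100.0 * within_sd / scan_sets.mean()
print(f"repeatability = {repeatability:.3f} mm, CV = {cv_percent:.2f} %")
```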
Phase II Trials for Heterogeneous Patient Populations with a Time-to-Event Endpoint.
Jung, Sin-Ho
2017-07-01
In this paper, we consider a single-arm phase II trial with a time-to-event end-point. We assume that the study population has multiple subpopulations with different prognosis, but the study treatment is expected to be similarly efficacious across the subpopulations. We review a stratified one-sample log-rank test and present its sample size calculation method under some practical design settings. Our sample size method requires specification of the prevalence of subpopulations. We observe that the power of the resulting sample size is not very sensitive to misspecification of the prevalence.
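The stratified one-sample log-rank statistic is not written out in the abstract; a rough sketch of one standard construction is given below, assuming an exponential null hazard in each stratum and combining observed-minus-expected events across strata. The follow-up data and hazards are illustrative assumptions.

```python
import numpy as np

def stratified_one_sample_logrank(strata):
    """strata: list of (follow_up_times, event_indicators, null_hazard) per subpopulation."""
    o_minus_e, e_total = 0.0, 0.0
    for times, events, lam0 in strata:
        observed = np.sum(events)
        expected = np.sum(lam0 * np.asarray(times))   # cumulative null hazard over follow-up
        o_minus_e += observed - expected
        e_total += expected
    return o_minus_e / np.sqrt(e_total)               # approximately N(0, 1) under H0

good_prognosis = ([1.2, 2.5, 0.8, 3.0], [1, 0, 1, 0], 0.10)
poor_prognosis = ([0.5, 1.1, 0.9, 1.8], [1, 1, 0, 1], 0.35)
print(stratified_one_sample_logrank([good_prognosis, poor_prognosis]))
```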
New International Program to Asses the Reliability of Emerging Nondestructive Techniques (PARENT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokofiev, Iouri; Cumblidge, Stephen E.; Csontos, Aladar A.
2013-01-25
The Nuclear Regulatory Commission (NRC) established the Program to Assess the Reliability of Emerging Nondestructive Techniques (PARENT) to follow on from the successful Program for the Inspection of Nickel alloy Components (PINC). The goal of the PARENT is to conduct a confirmatory assessment of the reliability of nondestructive evaluation (NDE) techniques for detecting and sizing primary water stress corrosion cracks (PWSCC) and applying the lessons learned from PINC to a series of round-robin tests. These open and blind round-robin tests will comprise a new set of typical pressure boundary components including dissimilar metal welds (DMWs) and bottom-mounted instrumentation penetrations. Openmore » round-robin tests will engage research and industry teams worldwide to investigate and demonstrate the reliability of emerging NDE techniques to detect and size flaws with a wide range of lengths, depths, orientations, and locations. Blind round-robin tests will utilize various testing organizations, whose inspectors and procedures are certified by the standards for the nuclear industry in their respective countries, to investigate the ability of established NDE techniques to detect and size flaws whose characteristics range from relatively easy to very difficult for detection and sizing. Blind and open round-robin testing started in late 2011 and early 2012, respectively. This paper will present the work scope with reports on progress, NDE methods evaluated, and project timeline for PARENT.« less
Parallel effects of memory set activation and search on timing and working memory capacity.
Schweickert, Richard; Fortin, Claudette; Xi, Zhuangzhuang; Viau-Quesnel, Charles
2014-01-01
Accurately estimating a time interval is required in everyday activities such as driving or cooking. Estimating time is relatively easy, provided a person attends to it. But a brief shift of attention to another task usually interferes with timing. Most processes carried out concurrently with timing interfere with it. Curiously, some do not. Literature on a few processes suggests a general proposition, the Timing and Complex-Span Hypothesis: A process interferes with concurrent timing if and only if process performance is related to complex span. Complex-span is the number of items correctly recalled in order, when each item presented for study is followed by a brief activity. Literature on task switching, visual search, memory search, word generation and mental time travel supports the hypothesis. Previous work found that another process, activation of a memory set in long term memory, is not related to complex-span. If the Timing and Complex-Span Hypothesis is true, activation should not interfere with concurrent timing in dual-task conditions. We tested such activation in single-task memory search task conditions and in dual-task conditions where memory search was executed with concurrent timing. In Experiment 1, activating a memory set increased reaction time, with no significant effect on time production. In Experiment 2, set size and memory set activation were manipulated. Activation and set size had a puzzling interaction for time productions, perhaps due to difficult conditions, leading us to use a related but easier task in Experiment 3. In Experiment 3 increasing set size lengthened time production, but memory activation had no significant effect. Results here and in previous literature on the whole support the Timing and Complex-Span Hypotheses. Results also support a sequential organization of activation and search of memory. This organization predicts activation and set size have additive effects on reaction time and multiplicative effects on percent correct, which was found.
Peng, Jiangjun; Leung, Yee; Leung, Kwong-Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J.
2018-01-01
It has recently been claimed that the outstanding performance of machine-learning scoring functions (SFs) is exclusively due to the presence of training complexes with highly similar proteins to those in the test set. Here, we revisit this question using 24 similarity-based training sets, a widely used test set, and four SFs. Three of these SFs employ machine learning instead of the classical linear regression approach of the fourth SF (X-Score which has the best test set performance out of 16 classical SFs). We have found that random forest (RF)-based RF-Score-v3 outperforms X-Score even when 68% of the most similar proteins are removed from the training set. In addition, unlike X-Score, RF-Score-v3 is able to keep learning with an increasing training set size, becoming substantially more predictive than X-Score when the full 1105 complexes are used for training. These results show that machine-learning SFs owe a substantial part of their performance to training on complexes with dissimilar proteins to those in the test set, against what has been previously concluded using the same data. Given that a growing amount of structural and interaction data will be available from academic and industrial sources, this performance gap between machine-learning SFs and classical SFs is expected to enlarge in the future. PMID:29538331
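The contrast described above, between a nonlinear learner that keeps improving with more training data and a linear model that saturates, can be reproduced on synthetic data along the following lines. This is not RF-Score or X-Score (which use specific structural descriptors); it only illustrates the training-set-size comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1500, 20))                       # synthetic "descriptors"
y = X[:, 0] - 0.5 * X[:, 1] + np.sin(2 * X[:, 2]) + 0.3 * rng.normal(size=1500)
X_test, y_test = X[-400:], y[-400:]                   # held-out "test set"

for n_train in (100, 300, 700, 1100):
    X_tr, y_tr = X[:n_train], y[:n_train]
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    lin = LinearRegression().fit(X_tr, y_tr)
    print(n_train,
          round(r2_score(y_test, rf.predict(X_test)), 3),   # keeps improving
          round(r2_score(y_test, lin.predict(X_test)), 3))  # saturates early
```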
Naugle, Alecia Larew; Barlow, Kristina E; Eblen, Denise R; Teter, Vanessa; Umholtz, Robert
2006-11-01
The U.S. Food Safety and Inspection Service (FSIS) tests sets of samples of selected raw meat and poultry products for Salmonella to ensure that federally inspected establishments meet performance standards defined in the pathogen reduction-hazard analysis and critical control point system (PR-HACCP) final rule. In the present report, sample set results are described and associations between set failure and set and establishment characteristics are identified for 4,607 sample sets collected from 1998 through 2003. Sample sets were obtained from seven product classes: broiler chicken carcasses (n = 1,010), cow and bull carcasses (n = 240), market hog carcasses (n = 560), steer and heifer carcasses (n = 123), ground beef (n = 2,527), ground chicken (n = 31), and ground turkey (n = 116). Of these 4,607 sample sets, 92% (4,255) were collected as part of random testing efforts (A sets), and 93% (4,166) passed. However, the percentage of positive samples relative to the maximum number of positive results allowable in a set increased over time for broilers but decreased or stayed the same for the other product classes. Three factors associated with set failure were identified: establishment size, product class, and year. Set failures were more likely early in the testing program (relative to 2003). Small and very small establishments were more likely to fail than large ones. Set failure was less likely in ground beef than in other product classes. Despite an overall decline in set failures through 2003, these results highlight the need for continued vigilance to reduce Salmonella contamination in broiler chicken and continued implementation of programs designed to assist small and very small establishments with PR-HACCP compliance issues.
Classification of urine sediment based on convolution neural network
NASA Astrophysics Data System (ADS)
Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian
2018-04-01
By designing a new convolutional neural network framework, this paper relaxes the constraints of conventional convolutional neural networks, which require large training samples of identical size. The input images are shifted and cropped to generate sub-images of the same size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected at random such that every subset contains the same number of elements but no two subsets are identical. These subsets are used as inputs to the convolutional neural network. Passing them through the convolution layers, pooling layers, fully connected layer and output layer yields the classification loss rates for the test set and the training set. In a classification experiment with red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy was 97% or higher.
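A minimal PyTorch sketch of the idea described above, cropping fixed-size patches from variable-size micrographs and classifying them with a small convolutional network that uses dropout, is shown below. The layer sizes, patch size and three-class output are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SedimentCNN(nn.Module):
    """Small conv/pool/fully-connected classifier with dropout (illustrative)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                  # x: (batch, 1, 64, 64) patches
        return self.classifier(self.features(x))

def random_crops(image, size=64, n=8):
    """Crop n same-size patches from a larger single-channel image tensor."""
    _, height, width = image.shape
    ys = torch.randint(0, height - size + 1, (n,))
    xs = torch.randint(0, width - size + 1, (n,))
    return torch.stack([image[:, y:y + size, x:x + size] for y, x in zip(ys, xs)])

patches = random_crops(torch.rand(1, 120, 160))       # one simulated microscope field
logits = SedimentCNN()(patches)
print(logits.shape)                                    # (8, 3): 8 patches, 3 classes
```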
Mental chronometry with simple linear regression.
Chen, J Y
1997-10-01
Typically, mental chronometry is performed by introducing an independent variable postulated to selectively affect some stage of a presumed multistage process. However, the effect could be a global one that spreads proportionally over all stages of the process. Currently, there is no method to test this possibility, although simple linear regression might serve the purpose. In the present study, the regression approach was tested with tasks (memory scanning and mental rotation) that, according to the dominant theories, involve a selective effect, and with a task (word superiority effect) that involves a global effect. The results indicate that (1) the manipulation of the size of a memory set or of angular disparity affects the intercept of the regression function that relates the times for memory scanning with different set sizes or for mental rotation with different angular disparities, and (2) the manipulation of context affects the slope of the regression function that relates the times for detecting a target character under word and nonword conditions. These results ratify the regression approach as a useful method for doing mental chronometry.
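One simple way to implement the regression logic is sketched below: regress the response times from one condition on those from a baseline condition across levels of the manipulated variable, then read an additive (selective) effect off the intercept and a proportional (global) effect off the slope. The response times are invented.

```python
import numpy as np
from scipy import stats

# Invented mean RTs (ms) across five levels of a manipulated variable (e.g. set size).
rt_baseline    = np.array([450.0, 510.0, 575.0, 640.0, 700.0])
rt_manipulated = np.array([520.0, 585.0, 645.0, 705.0, 770.0])

fit = stats.linregress(rt_baseline, rt_manipulated)
# slope near 1 with a positive intercept -> additive (selective) effect;
# slope above 1 with intercept near 0    -> proportional (global) effect.
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.1f} ms")
```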
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs and particle count by a count of imaged particles. The number of particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test if image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy of particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area of 4 pixels² results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
An evaluation of grease-type ball bearing lubricants operation in various environments
NASA Technical Reports Server (NTRS)
Mcmurtrey, E. L.
1983-01-01
Because many future spacecraft or space stations will require mechanisms to operate for long periods of time in environments which are adverse to most bearing lubricants, a series of tests is continuing to evaluate 38 grease-type lubricants in R-4 size bearings in five different environments for a 1-year period. Four repetitions of each test are made to provide statistical samples. These tests have also been used to select four lubricants for 5-year tests in selected environments, with five repetitions of each test for statistical samples. At the present time, 142 test sets have been completed and 30 test sets are underway. The three 5-year tests in (1) continuous operation and (2) start-stop operation, both in vacuum at ambient temperature, and (3) continuous vacuum operation at 93.3 °C are now completed. To date, in both the 1-year and 5-year tests, the best results in all environments have been obtained with a high viscosity index perfluoroalkylpolyether (PFPE) grease.
Efficiency of parallel direct optimization
NASA Technical Reports Server (NTRS)
Janies, D. A.; Wheeler, W. C.
2001-01-01
Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.
Ensemble representations: effects of set size and item heterogeneity on average size perception.
Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W
2013-02-01
Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
Picone, Marco; Bergamin, Martina; Losso, Chiara; Delaney, Eugenia; Arizzi Novelli, Alessandra; Ghirardini, Annamaria Volpi
2016-01-01
Within the framework of a Weight of Evidence (WoE) approach, a set of four toxicity bioassays involving the amphipod Corophium volutator (10 d lethality test on whole sediment), the sea urchin Paracentrotus lividus (fertilization and embryo toxicity tests on elutriate) and the pacific oyster Crassostrea gigas (embryo toxicity test on elutriate) was applied to sediments from 10 sampling sites of the Venice Lagoon (Italy). Sediments were collected during three campaigns carried out in May 2004 (spring campaign), October 2004 (autumn campaign) and February 2005 (winter campaign). Toxicity tests were performed on all sediment samples. Sediment grain-size and chemistry were measured during spring and autumn campaigns. This research investigated (i) the ability of toxicity tests in discriminating among sites with different contamination level, (ii) the occurrence of a gradient of effect among sampling sites, (iii) the possible correlation among toxicity tests, sediment chemistry, grain size and organic carbon, and (iv) the possible occurrence of toxicity seasonal variability. Sediment contamination levels were from low to moderate. No acute toxicity toward amphipods was observed, while sea urchin fertilization was affected only in few sites in just a single campaign. Short-term effects on larval development of sea urchin and oyster evidenced a clear spatial trend among sites, with increasing effects along the axis connecting the sea-inlets with the industrial area. The set of bioassays allowed the identification of a spatial gradient of effect, with decreasing toxicity from the industrial area toward the sea-inlets. Multivariate data analysis showed that the malformations of oyster embryos were significantly correlated to the industrial contamination (metals, polynuclear aromatic hydrocarbons, hexachlorobenzene and polychlorinated biphenyls), while sea urchin development to sediment concentrations of As, Cr and organic carbon. Both embryo toxicity tests were significantly affected by high ammonia concentrations found in the elutriates extracted from some mudflat and industrial sediments. No significant temporal variation of the toxicity was observed within the experimental period. Amendments to the set of bioassays, with inclusion of chronic tests, can certainly provide more reliability and consistency to the characterization of the (possible) toxic effects. Copyright © 2015 Elsevier Inc. All rights reserved.
The widespread misuse of effect sizes.
Dankel, Scott J; Mouser, J Grant; Mattocks, Kevin T; Counts, Brittany R; Jessee, Matthew B; Buckner, Samuel L; Loprinzi, Paul D; Loenneke, Jeremy P
2017-05-01
Studies comparing multiple groups (i.e., experimental and control) often examine the efficacy of an intervention by calculating within group effect sizes using Cohen's d. This method is inappropriate and largely impacted by the pre-test variability as opposed to the variability in the intervention itself. Furthermore, the percentage change is often analyzed, but this is highly impacted by the baseline values and can be potentially misleading. Thus, the objective of this study was to illustrate the common misuse of the effect size and percent change measures. Here we provide a realistic sample data set comparing two resistance training groups with the same pre-test to post-test change. Statistical tests that are commonly performed within the literature were computed. Analyzing the within group effect size favors the control group, while the percent change favors the experimental group. The most appropriate way to present the data would be to plot the individual responses or, for larger samples, provide the mean change and 95% confidence intervals of the mean change. This details the magnitude and variability within the response to the intervention itself in units that are easily interpretable. This manuscript demonstrates the common misuse of the effect size and details the importance for investigators to always report raw values, even when alternative statistics are performed. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
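The pitfall described above can be made concrete with a small simulation: both groups improve by the same amount, but the group with larger pre-test variability gets a much smaller within-group Cohen's d, whereas the mean change with its 95% confidence interval tells the same story for both. The data are simulated, not the paper's example.

```python
import numpy as np
from scipy import stats

def within_group_d(pre, post):
    """The (mis)used within-group effect size: change divided by pre-test SD."""
    return (post.mean() - pre.mean()) / pre.std(ddof=1)

def mean_change_ci(pre, post):
    """Mean change with a 95% confidence interval (the recommended summary)."""
    diff = post - pre
    half = stats.t.ppf(0.975, len(diff) - 1) * stats.sem(diff)
    return diff.mean(), (diff.mean() - half, diff.mean() + half)

rng = np.random.default_rng(2)
pre_ctrl = rng.normal(100, 5, 20);  post_ctrl = pre_ctrl + 10 + rng.normal(0, 2, 20)
pre_expt = rng.normal(100, 15, 20); post_expt = pre_expt + 10 + rng.normal(0, 2, 20)

print(within_group_d(pre_ctrl, post_ctrl), within_group_d(pre_expt, post_expt))
print(mean_change_ci(pre_ctrl, post_ctrl), mean_change_ci(pre_expt, post_expt))
```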
Comparative Performance Analysis of Different Fingerprint Biometric Scanners for Patient Matching.
Kasiiti, Noah; Wawira, Judy; Purkayastha, Saptarshi; Were, Martin C
2017-01-01
Unique patient identification within health services is an operational challenge in healthcare settings. Use of key identifiers, such as patient names, hospital identification numbers, national ID, and birth date, is often inadequate for ensuring unique patient identification. In addition, approximate string comparator algorithms, such as distance-based algorithms, have proven suboptimal for improving patient matching, especially in low-resource settings. Biometric approaches may improve unique patient identification. However, before implementing the technology in a given setting, such as health care, the right scanners should be rigorously tested to identify an optimal package for the implementation. This study aimed to investigate the effects of factors such as resolution, template size, and scan capture area on the matching performance of different fingerprint scanners for use within health care settings. The performance of eight different scanners was analyzed using the demo application distributed as part of the Neurotech Verifinger SDK 6.0.
Nosofsky, Robert M; Cox, Gregory E; Cao, Rui; Shiffrin, Richard M
2014-11-01
Experiments were conducted to test a modern exemplar-familiarity model on its ability to account for both short-term and long-term probe recognition within the same memory-search paradigm. Also, making connections to the literature on attention and visual search, the model was used to interpret differences in probe-recognition performance across diverse conditions that manipulated relations between targets and foils across trials. Subjects saw lists of 1 to 16 items followed by a single item-recognition probe. In a varied-mapping condition, targets and foils could switch roles across trials; in a consistent-mapping condition, targets and foils never switched roles; and in an all-new condition, on each trial a completely new set of items formed the memory set. In the varied-mapping and all-new conditions, mean correct response times (RTs) and error proportions were curvilinear increasing functions of memory set size, with the RT results closely resembling ones from hybrid visual-memory search experiments reported by Wolfe (2012). In the consistent-mapping condition, new-probe RTs were invariant with set size, whereas old-probe RTs increased slightly with increasing study-test lag. With appropriate choice of psychologically interpretable free parameters, the model accounted well for the complete set of results. The work provides support for the hypothesis that a common set of processes involving exemplar-based familiarity may govern long-term and short-term probe recognition across wide varieties of memory-search conditions. PsycINFO Database Record © 2014 APA, all rights reserved.
Palkovits, Stefan; Hirnschall, Nino; Georgiev, Stefan; Leisser, Christoph; Findl, Oliver
2018-02-01
To evaluate the test-retest reproducibility of a novel microperimeter with fundus image tracking (MP3, Nidek Co, Japan) in healthy subjects and patients with macular disease. Ten healthy subjects and 20 patients suffering from a range of macular diseases were included. After training measurements, two additional microperimetry measurements were scheduled. Test-retest reproducibility was assessed for mean retinal sensitivity, pointwise sensitivity, and deep scotoma size using the coefficient of repeatability and Bland-Altman diagrams. In addition, in a subgroup of patients microperimetry was compared with conventional perimetry. Average differences in mean retinal sensitivity between the two study measurements were 0.26 ± 1.7 dB (median 0 dB; interquartile range [IQR] -1 to 1) for the healthy group and 0.36 ± 2.5 dB (median 0 dB; IQR -1 to 2) for the macular patient group. Coefficients of repeatability for mean retinal sensitivity and pointwise retinal sensitivity were 1.2 and 3.3 dB for the healthy subjects and 1.6 and 5.0 dB for the macular disease patients, respectively. Absolute agreement in deep scotoma size between both study days was found in 79.9% of the test loci. The microperimeter MP3 shows an adequate test-retest reproducibility for mean retinal sensitivity, pointwise retinal sensitivity, and deep scotoma size in healthy subjects and patients suffering from macular disease. Furthermore, the reproducibility of microperimetry is higher than that of conventional perimetry. Reproducibility is an important measure for each diagnostic device. Especially in a clinical setting, high reproducibility is the basis for achieving reliable results with a given device. Therefore, assessment of reproducibility is of eminent importance for interpreting the findings of future studies.
Evidence of a Conserved Molecular Response to Selection for Increased Brain Size in Primates
Harrison, Peter W.; Caravas, Jason A.; Raghanti, Mary Ann; Phillips, Kimberley A.; Mundy, Nicholas I.
2017-01-01
The adaptive significance of human brain evolution has been frequently studied through comparisons with other primates. However, the evolution of increased brain size is not restricted to the human lineage but is a general characteristic of primate evolution. Whether or not these independent episodes of increased brain size share a common genetic basis is unclear. We sequenced and de novo assembled the transcriptome from the neocortical tissue of the most highly encephalized nonhuman primate, the tufted capuchin monkey (Cebus apella). Using this novel data set, we conducted a genome-wide analysis of orthologous brain-expressed protein coding genes to identify evidence of conserved gene–phenotype associations and species-specific adaptations during three independent episodes of brain size increase. We identify a greater number of genes associated with either total brain mass or relative brain size across these six species than show species-specific accelerated rates of evolution in individual large-brained lineages. We test the robustness of these associations in an expanded data set of 13 species, through permutation tests and by analyzing how genome-wide patterns of substitution co-vary with brain size. Many of the genes targeted by selection during brain expansion have glutamatergic functions or roles in cell cycle dynamics. We also identify accelerated evolution in a number of individual capuchin genes whose human orthologs are associated with human neuropsychiatric disorders. These findings demonstrate the value of phenotypically informed genome analyses, and suggest at least some aspects of human brain evolution have occurred through conserved gene–phenotype associations. Understanding these commonalities is essential for distinguishing human-specific selection events from general trends in brain evolution. PMID:28391320
Comparing Pattern Recognition Feature Sets for Sorting Triples in the FIRST Database
NASA Astrophysics Data System (ADS)
Proctor, D. D.
2006-07-01
Pattern recognition techniques have been used with increasing success for coping with the tremendous amounts of data being generated by automated surveys. Usually this process involves the construction of training sets: typical examples of data with known classifications. Given a feature set, along with the training set, statistical methods can be employed to generate a classifier. The classifier is then applied to process the remaining data. Feature set selection, however, is still an issue. This paper presents techniques developed for accommodating data for which a substantive portion of the training set cannot be classified unambiguously, a typical case for low-resolution data. Significance tests on the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees are introduced as a method of evaluating the relative quality of feature sets. The technique is applied to comparing feature sets for sorting a particular radio galaxy morphology, bent-doubles, from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) database. Also examined are alternative functional forms for feature sets. Associated standard deviations provide the means to evaluate the effect of the number of folds, the number of classifiers per fold, and the sample size on the resulting classifications. The technique may also be applied to situations in which accurate classifications are available but the feature set is clearly inadequate, and it is nonetheless desirable to make the best of the available information.
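The sketch below illustrates the spirit of the feature-set comparison: train a tree ensemble on each candidate feature set, collect the sort-ordered vote (class-probability) distribution on held-out data, and test whether the two distributions differ. The data, the random-forest ensemble and the Kolmogorov-Smirnov comparison are illustrative stand-ins for the paper's specific significance test.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           random_state=0)
feature_set_a, feature_set_b = list(range(0, 6)), list(range(6, 12))

def sorted_votes(cols):
    """Sort-ordered fraction of trees voting 'class 1' on held-out samples."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[:1000, cols], y[:1000])
    votes = rf.predict_proba(X[1000:, cols])[:, 1]
    return np.sort(votes)

votes_a, votes_b = sorted_votes(feature_set_a), sorted_votes(feature_set_b)
print(stats.ks_2samp(votes_a, votes_b))   # do the two vote distributions differ?
```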
NASA Technical Reports Server (NTRS)
Klemin, Alexander; Warner, Edward P; Denkinger, George M
1918-01-01
Part 1 gives details of models tested and methods of testing of the Eiffel 36 wing alone and the JN2 aircraft. Characteristics and performance curves for standard JN are included. Part 2 presents a statistical analysis of the following: lift and drag contributed by body and chassis tested without wings; lift and drag contributed by tail, tested without wings; the effect on lift and drift of interference between the wings of a biplane combination; lift and drag contributed by the addition of body, chassis, and tail to a biplane combination; total parasite resistance; effect of varying size of tail, keeping angle of setting constant; effect of varying length of body and size of tail at the same time, keeping constant moment of tail surface about the center of gravity; forces on the tail and the effects of downwash; effect of size and setting of tail on statical longitudinal stability; effects of length of body on stability; the effects of the various elements of an airplane on longitudinal stability and the placing of the force vectors. Part 3 presents the fundamental principles of dynamical stability; computations of resistance derivatives; solution of the stability equation; dynamical stability of the Curtiss JN2; tabulation of resistance derivatives; discussion of the resistance derivatives; formation and solution of stability equations; physical conceptions of the resistance derivatives; elements contributing to damping and an investigation of low speed conditions. Part 4 includes a summary of the results of the statistical investigation and a summary of the results for dynamic stability.
Assessment of Fencing on the Orion Heatshield
NASA Technical Reports Server (NTRS)
Alunni, Antonella I.; Gokcen, Tahir
2016-01-01
This paper presents recent experimental results from arc-jet tests of the Orion heatshield that were conducted at NASA Ames Research Center. Test conditions that simulated a set of heating profiles in time representative of the Orion flight environments were used to observe their effect on Orion's block architecture in terms of differential recession or fencing. Surface recession of arc-jet models was characterized during and after testing to derive fencing profiles used for the baseline sizing of the heatshield. Arc-jet test data show that the block architecture produces varying degrees of fencing.
Zieliński, Tomasz G
2015-04-01
This paper proposes and discusses an approach for the design and quality inspection of morphology dedicated to sound absorbing foams, using a relatively simple technique for the random generation of periodic microstructures representative of open-cell foams with spherical pores. The design is controlled by a few parameters, namely the total open porosity, the average pore size, and the standard deviation of pore size. These design parameters are set exactly and independently; however, setting the standard deviation of pore sizes requires a certain number of pores in the representative volume element (RVE), and this number is a parameter of the procedure. Another pore structure parameter, the average size of the windows linking the pores, can only be affected indirectly: it is weakly controlled by the maximal pore-penetration factor and, moreover, depends on the porosity and pore size. The proposed methodology for testing microstructure designs of sound absorbing porous media applies multi-scale modeling in which important transport parameters, responsible for sound propagation in a porous medium, are calculated from the microstructure using the generated RVE in order to estimate the sound velocity and absorption of the designed material.
Asadi, Abbas; Ramírez-Campillo, Rodrigo
2016-01-01
The aim of this study was to compare the effects of 6-week cluster versus traditional plyometric training sets on jumping ability, sprint and agility performance. Thirteen college students were assigned to a cluster sets group (N=6) or traditional sets group (N=7). Both training groups completed the same training program. The traditional group completed five sets of 20 repetitions with 2min of rest between sets each session, while the cluster group completed five sets of 20 [2×10] repetitions with 30/90-s rest each session. Subjects were evaluated for countermovement jump (CMJ), standing long jump (SLJ), t test, 20-m and 40-m sprint test performance before and after the intervention. Both groups had similar improvements (P<0.05) in CMJ, SLJ, t test, 20-m, and 40-m sprint. However, the magnitude of improvement in CMJ, SLJ and t test was greater for the cluster group (effect size [ES]=1.24, 0.81 and 1.38, respectively) compared to the traditional group (ES=0.84, 0.60 and 0.55). Conversely, the magnitude of improvement in 20-m and 40-m sprint test was greater for the traditional group (ES=1.59 and 0.96, respectively) compared to the cluster group (ES=0.94 and 0.75, respectively). Although both plyometric training methods improved lower body maximal-intensity exercise performance, the traditional sets methods resulted in greater adaptations in sprint performance, while the cluster sets method resulted in greater jump and agility adaptations. Copyright © 2016 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
Sequential CFAR detectors using a dead-zone limiter
NASA Astrophysics Data System (ADS)
Tantaratana, Sawasd
1990-09-01
The performances of some proposed sequential constant-false-alarm-rate (CFAR) detectors are evaluated. The observations are passed through a dead-zone limiter, the output of which is -1, 0, or +1, depending on whether the input is less than -c, between -c and c, or greater than c, where c is a constant. The test statistic is the sum of the outputs. The test is performed on a reduced set of data (those with absolute value larger than c), with the test statistic being the sum of the signs of the reduced set of data. Both constant and linear boundaries are considered. Numerical results show a significant reduction of the average number of observations needed to achieve the same false alarm and detection probabilities as a fixed-sample-size CFAR detector using the same kind of test statistic.
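A minimal sketch of the detector described above: each observation passes through a dead-zone limiter, the ±1 outputs are summed, and the running sum is compared with constant decision boundaries. The threshold c and the boundaries are illustrative values, not the paper's designs.

```python
import numpy as np

def dead_zone(x, c):
    """Return -1, 0 or +1 depending on whether x < -c, |x| <= c, or x > c."""
    return np.where(x > c, 1, np.where(x < -c, -1, 0))

def sequential_test(samples, c=0.5, upper=6, lower=-6, max_n=200):
    """Sequential sign test with constant boundaries (illustrative parameters)."""
    s = 0
    for n, x in enumerate(samples[:max_n], start=1):
        s += dead_zone(x, c)
        if s >= upper:
            return "signal present", n
        if s <= lower:
            return "noise only", n
    return "undecided", max_n

rng = np.random.default_rng(3)
print(sequential_test(rng.normal(0.8, 1.0, 500)))   # signal-plus-noise samples
print(sequential_test(rng.normal(0.0, 1.0, 500)))   # noise-only samples
```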
Nichols, James D.; Pollock, Kenneth H.; Hines, James E.
1984-01-01
The robust design of Pollock (1982) was used to estimate parameters of a Maryland M. pennsylvanicus population. Closed model tests provided strong evidence of heterogeneity of capture probability, and model M_h (Otis et al., 1978) was selected as the most appropriate model for estimating population size. The Jolly-Seber model goodness-of-fit test indicated rejection of the model for this data set, and the M_h estimates of population size were all higher than the Jolly-Seber estimates. Both of these results are consistent with the evidence of heterogeneous capture probabilities. The authors thus used M_h estimates of population size, Jolly-Seber estimates of survival rate, and estimates of birth-immigration based on a combination of the population size and survival rate estimates. Advantages of the robust design estimates for certain inference procedures are discussed, and the design is recommended for future small mammal capture-recapture studies directed at estimation.
Gungor, Anil; Houser, Steven M; Aquino, Benjamin F; Akbar, Imran; Moinuddin, Rizwan; Mamikoglu, Bulent; Corey, Jacquelynne P
2004-01-01
Among the many methods of allergy diagnosis are intradermal testing (IDT) and skin-prick testing (SPT). The usefulness of IDT has been called into question by some authors, while others believe that studies demonstrating that SPT was superior might have been subject to bias. We conducted a study to compare the validity of SPT and IDT--specifically, the skin endpoint titration (SET) type of IDT--in diagnosing allergic rhinitis. We performed nasal provocation testing on 62 patients to establish an unbiased screening criterion for study entry. Acoustic rhinometric measurements of the nasal responses revealed that 34 patients tested positive and 28 negative. All patients were subsequently tested by SET and SPT. We found that SPT was more sensitive (85.3 vs 79.4%) and more specific (78.6 vs 67.9%) than SET as a screening procedure. The positive predictive value of SPT was greater than that of SET (82.9 vs 75.0%), as was the negative predictive value (81.5 vs 73.0%). None of these differences was statistically significant; because of the relatively small sample size, our study was powered to show only equivalency. The results of our study suggest that the information obtained by the SET method of IDT is comparable to that obtained by SPT in terms of sensitivity, specificity, and overall performance and that both SET and SPT correlate well with nasal provocation testing for ragweed. Therefore, the decision as to which to use can be based on other factors, such as the practitioner's training, the desire for quantitative results, the desire for rapid results, and the type of treatment (i.e., immunotherapy or pharmacotherapy) that is likely to be chosen on the basis of test results.
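The screening metrics quoted above follow directly from a 2x2 table; the sketch below computes them, using counts reconstructed to be consistent with the SPT figures reported in the abstract (34 provocation-positive and 28 provocation-negative patients).

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# 29/34 provocation-positive patients test positive; 22/28 negatives test negative.
print(screening_metrics(tp=29, fp=6, fn=5, tn=22))
# -> sensitivity 0.853, specificity 0.786, PPV 0.829, NPV 0.815
```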
NASA Technical Reports Server (NTRS)
Jeng, Frank F.; Lafuse, Sharon; Smith, Frederick D.; Lu, Sao-Dung; Knox, James C.; Campbell, Melissa L.; Scull, Timothy D.; Green, Steve
2010-01-01
A tool has been developed by the Sabatier Team for analyzing/optimizing the CO2 removal assembly, the CO2 compressor size and its operation logic, water generation from the Sabatier reactor, utilization of CO2 from crew metabolic output, and H2 from the oxygen generation assembly. Tests had been conducted using a CDRA/simulation compressor set-up at MSFC in 2003. Analysis of the test data has validated the CO2 desorption rate profile, CO2 compressor performance, CO2 recovery, and CO2 vacuum venting during CDRA desorption. Optimization of the compressor size and compressor operation logic for an integrated closed air revitalization system is being conducted by the Sabatier Team.
Statistical characterization of a large geochemical database and effect of sample size
Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.
2005-01-01
The authors investigated statistical distributions for concentrations of chemical elements from the National Geochemical Survey (NGS) database of the U.S. Geological Survey. At the time of this study, the NGS data set encompassed 48,544 stream sediment and soil samples from the conterminous United States analyzed by ICP-AES following a 4-acid near-total digestion. This report includes 27 elements: Al, Ca, Fe, K, Mg, Na, P, Ti, Ba, Ce, Co, Cr, Cu, Ga, La, Li, Mn, Nb, Nd, Ni, Pb, Sc, Sr, Th, V, Y and Zn. The goal and challenge for the statistical overview was to delineate chemical distributions in a complex, heterogeneous data set spanning a large geographic range (the conterminous United States), and many different geological provinces and rock types. After declustering to create a uniform spatial sample distribution with 16,511 samples, histograms and quantile-quantile (Q-Q) plots were employed to delineate subpopulations that have coherent chemical and mineral affinities. Probability groupings are discerned by changes in slope (kinks) on the plots. Major rock-forming elements, e.g., Al, Ca, K and Na, tend to display linear segments on normal Q-Q plots. These segments can commonly be linked to petrologic or mineralogical associations. For example, linear segments on K and Na plots reflect dilution of clay minerals by quartz sand (low in K and Na). Minor and trace element relationships are best displayed on lognormal Q-Q plots. These sensitively reflect discrete relationships in subpopulations within the wide range of the data. For example, small but distinctly log-linear subpopulations for Pb, Cu, Zn and Ag are interpreted to represent ore-grade enrichment of naturally occurring minerals such as sulfides. None of the 27 chemical elements could pass the test for either normal or lognormal distribution on the declustered data set. Part of the reason relates to the presence of mixtures of subpopulations and outliers. Random samples of the data set with successively smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.
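The sample-size effect noted above, where formal normality tests reject almost any real distribution once n is large, is easy to illustrate; the sketch below applies a normality test to the log of random subsamples of a simulated, mildly non-lognormal "concentration" variable and shows how the p-value typically shrinks as n grows. The simulated variable is an assumption, not NGS data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Roughly lognormal "concentrations" contaminated with additive noise.
concentrations = np.exp(rng.normal(2.0, 0.5, 16_511)) + rng.normal(0, 1, 16_511)

for n in (100, 300, 1_000, 5_000, 16_000):
    sample = rng.choice(concentrations, size=n, replace=False)
    stat, p = stats.normaltest(np.log(np.clip(sample, 1e-6, None)))  # test log-normality
    print(f"n = {n:>6}: p = {p:.4f}")   # tends to fall below 0.05 as n increases
```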
A support vector machine based test for incongruence between sets of trees in tree space
2012-01-01
Background The increased use of multi-locus data sets for phylogenetic reconstruction has increased the need to determine whether a set of gene trees significantly deviates from the phylogenetic patterns of other genes. Such unusual gene trees may have been influenced by other evolutionary processes such as selection, gene duplication, or horizontal gene transfer. Results Motivated by this problem, we propose a nonparametric goodness-of-fit test for two empirical distributions of gene trees, and we developed the software GeneOut to estimate a p-value for the test. Our approach maps trees into a multi-dimensional vector space and then applies support vector machines (SVMs) to measure the separation between two sets of pre-defined trees. We use a permutation test to assess the significance of the SVM separation. To demonstrate the performance of GeneOut, we applied it to the comparison of gene trees simulated within different species trees across a range of species tree depths. Applied directly to sets of simulated gene trees with large sample sizes, GeneOut was able to detect very small differences between two sets of gene trees generated under different species trees. Our statistical test can also include tree reconstruction into its test framework through a variety of phylogenetic optimality criteria. When applied to DNA sequence data simulated from different sets of gene trees, results in the form of receiver operating characteristic (ROC) curves indicated that GeneOut performed well in the detection of differences between sets of trees with different distributions in a multi-dimensional space. Furthermore, it controlled false positive and false negative rates very well, indicating a high degree of accuracy. Conclusions The non-parametric nature of our statistical test provides fast and efficient analyses, and makes it an applicable test for any scenario where evolutionary or other factors can lead to trees with different multi-dimensional distributions. The software GeneOut is freely available under the GNU public license. PMID:22909268
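A stripped-down sketch of the GeneOut-style procedure is shown below: trees are represented as vectors (here random stand-ins for real tree-space coordinates), an SVM measures the separation between the two sets, and a permutation test assesses whether that separation is larger than expected by chance. The vectorization step and separation measure are simplified assumptions, not the software's implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
set_a = rng.normal(0.0, 1.0, size=(40, 10))     # vectorized gene trees, set A (stand-in)
set_b = rng.normal(0.4, 1.0, size=(40, 10))     # vectorized gene trees, set B (stand-in)
X = np.vstack([set_a, set_b])
y = np.r_[np.zeros(40), np.ones(40)]

def separation(X, y):
    """Training separability of a linear SVM as a simple separation measure."""
    return SVC(kernel="linear").fit(X, y).score(X, y)

observed = separation(X, y)
perms = [separation(X, rng.permutation(y)) for _ in range(499)]
p_value = (1 + sum(p >= observed for p in perms)) / (1 + len(perms))
print(observed, p_value)
```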
NASA Astrophysics Data System (ADS)
Nanni, Ambra; Marigo, Paola; Groenewegen, Martin A. T.; Aringer, Bernhard; Girardi, Léo; Pastorelli, Giada; Bressan, Alessandro; Bladh, Sara
2016-10-01
We present a new approach aimed at constraining the typical size and optical properties of carbon dust grains in circumstellar envelopes (CSEs) of carbon-rich stars (C-stars) in the Small Magellanic Cloud (SMC). To achieve this goal, we apply our recent dust growth description, coupled with a radiative transfer code to the CSEs of C-stars evolving along the thermally pulsing asymptotic giant branch, for which we compute spectra and colours. Then, we compare our modelled colours in the near- and mid-infrared (NIR and MIR) bands with the observed ones, testing different assumptions in our dust scheme and employing several data sets of optical constants for carbon dust available in the literature. Different assumptions adopted in our dust scheme change the typical size of the carbon grains produced. We constrain carbon dust properties by selecting the combination of grain size and optical constants which best reproduce several colours in the NIR and MIR at the same time. The different choices of optical properties and grain size lead to differences in the NIR and MIR colours greater than 2 mag in some cases. We conclude that the complete set of observed NIR and MIR colours are best reproduced by small grains, with sizes between ˜0.035 and ˜0.12 μm, rather than by large grains between ˜0.2 and 0.7 μm. The inability of large grains to reproduce NIR and MIR colours seems independent of the adopted optical data set. We also find a possible trend of the grain size with mass-loss and/or carbon excess in the CSEs of these stars.
Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-11-01
Tc-methylene diphosphonate (Tc-MDP) bone scintigraphy images have limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty four bone scan images were processed using mask sizes 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical details in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had input image, and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences in image quality processed with 5 and 7-pixel mask at a 5% cut-off. A statistically significant difference was found between the image quality processed with 5 and 7-pixel mask at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the John-Sen Lee filter was found to be 7×7 pixels, which yielded Tc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
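The filter in question is a local-statistics (Lee-type) filter, which pulls each pixel toward its local mean by an amount that depends on how much of the local variance looks like signal rather than noise. The sketch below shows how such a filter with the 7×7 mask found optimal above might be applied to a low-count image; the Poisson noise-variance assumption and the use of uniform local windows are illustrative choices, not the exact implementation evaluated in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, mask_size=7, noise_var=None):
    """Local-statistics (Lee-type) smoothing over a square mask."""
    img = image.astype(float)
    local_mean = uniform_filter(img, size=mask_size)
    local_sq_mean = uniform_filter(img**2, size=mask_size)
    local_var = np.maximum(local_sq_mean - local_mean**2, 0.0)
    if noise_var is None:
        # Poisson-like counting noise: variance ~ local mean counts (an assumption).
        noise_var = local_mean
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / np.maximum(local_var, 1e-12)
    # Edges (high signal variance) keep their original values; flat regions are smoothed.
    return local_mean + gain * (img - local_mean)

# Example: smooth a synthetic low-count image with the 7x7 mask.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=5.0, size=(256, 256)).astype(float)
smoothed = lee_filter(counts, mask_size=7)
```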
NASA Astrophysics Data System (ADS)
Mardirossian, Narbe; Head-Gordon, Martin
2015-02-01
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
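The simplest of these procedures, single-factor regression against the regional model prediction, amounts to refitting a one-variable regression on the local calibration storms and then using it to adjust new regional predictions. The sketch below uses hypothetical variable names (`P_regional` for regional-model predictions, `y_local` for observed local loads) and a linear-space fit for brevity; it is illustrative only, not the published MAP code, and such fits are often done in log space for loads.

```python
import numpy as np

# Hypothetical local calibration data: observed storm loads and the
# corresponding regional-regression predictions for the same storms.
P_regional = np.array([1.2, 3.4, 0.8, 2.1, 5.0, 1.7])   # regional model predictions
y_local = np.array([0.9, 2.8, 1.1, 1.6, 4.2, 1.5])       # observed local loads

# Single-factor adjustment: regress observed local loads on the regional prediction.
slope, intercept = np.polyfit(P_regional, y_local, deg=1)

def adjusted_prediction(p_new):
    """Adjusted storm-load prediction at an unmonitored local site."""
    return intercept + slope * p_new

print(adjusted_prediction(2.5))
```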
Studying the Relative Strengths of Environmental Factors that Influence Echinoderm Body Size Trends
NASA Astrophysics Data System (ADS)
Low, A.; Randhawa, S.; Heim, N. A.; Payne, J.
2013-12-01
Body size is often a useful metric in observing how a clade responds to environmental changes. Previous research has uncovered how environmental factors such as carbon dioxide and oxygen levels influence body size evolution. However, we wanted to look into how these natural factors interact and which factors seem to have a stronger relative influence on echinoderm body size. We analyzed carbon dioxide levels, a proxy for paleotemperature, oxygen levels, and sea level. Our research process involved measuring and calculating the volume of Phanerozoic echinoderm fossils recorded in the Treatise on Invertebrate Paleontology, plotting their mean volumes over various natural factors, and using statistical tools such as correlation tests and the PaleoTS statistical analysis software to compare the relative strengths of these factors. Furthermore, we divided our data into the following three subsets to uncover more specific relationships: 1) A set that included all data of the phylum Echinodermata 2) A set that focused on the two classes with the most recorded data, Echinoidea and Crinoidea 3) A set that focused on the crinoid specimens that originated in the Paleozoic and in the post-Paleozoic. In the first subset, echinoderms had the strongest correlation with carbon dioxide, a proxy for temperature, and possessed a weaker correlation with oxygen. In the second subset, we discovered that the echinoid data also possessed a strong correlation with carbon dioxide and a weaker correlation with oxygen. For crinoids, we found that the class as a whole showed no strong correlation with any measured environmental factors. However, when we divided the crinoids based on age, we found that both Paleozoic and post-Paleozoic crinoids individually correlated strongly with sea level. However, some uncertainty with this correlation arose as the comparison of the environmental correlate models suggested that an unbiased random walk was the best fit for the data. This stands as a sharp contrast to the strong evidence provided by the appropriate graphs and correlation tests that indicate strong, dominant relationships between body size and environmental factors. Thus, though further research is necessary to settle such uncertainty, we were able to identify, observe, and compare the diversity in body size responses to environmental factors within echinoderms.
A cryogenic tensile testing apparatus for micro-samples cooled by miniature pulse tube cryocooler
NASA Astrophysics Data System (ADS)
Chen, L. B.; Liu, S. X.; Gu, K. X.; Zhou, Y.; Wang, J. J.
2015-12-01
This paper introduces a cryogenic tensile testing apparatus for micro-samples cooled by a miniature pulse tube cryocooler. Tensile tests are widely used to measure the mechanical properties of materials, but most cryogenic tensile testing apparatus are designed for standard-size samples, so tensile testing of non-standard samples, and micro-samples in particular, cannot be conducted. The usual approach to cooling specimens for tensile testing is to use liquid nitrogen or liquid helium, which is inconvenient: it is difficult to hold the specimen temperature precisely at an arbitrary set point, and in some situations liquid nitrogen, and especially liquid helium, is not readily available. To overcome these limitations, a cryogenic tensile testing apparatus cooled by a high-frequency pulse tube cryocooler has been designed, built and tested. The operating temperature of the developed apparatus covers 20 K to room temperature with a control precision of ±10 mK. The apparatus configuration, methods of operation and cooling performance are described in this paper.
Testlet-Based Multidimensional Adaptive Testing.
Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen
2016-01-01
Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.
Placebo can enhance creativity.
Rozenkrantz, Liron; Mayo, Avraham E; Ilan, Tomer; Hart, Yuval; Noy, Lior; Alon, Uri
2017-01-01
The placebo effect is usually studied in clinical settings for decreasing negative symptoms such as pain, depression and anxiety. There is interest in exploring the placebo effect also outside the clinic, for enhancing positive aspects of performance or cognition. Several studies indicate that placebo can enhance cognitive abilities including memory, implicit learning and general knowledge. Here, we ask whether placebo can enhance creativity, an important aspect of human cognition. Subjects were randomly assigned to a control group who smelled and rated an odorant (n = 45), and a placebo group who were treated identically but were also told that the odorant increases creativity and reduces inhibitions (n = 45). Subjects completed a recently developed automated test for creativity, the creative foraging game (CFG), and a randomly chosen subset (n = 57) also completed two manual standardized creativity tests, the alternate uses test (AUT) and the Torrance test (TTCT). In all three tests, participants were asked to create as many original solutions and were scored for originality, flexibility and fluency. The placebo group showed higher originality than the control group both in the CFG (p<0.04, effect size = 0.5) and in the AUT (p<0.05, effect size = 0.4), but not in the Torrance test. The placebo group also found more shapes outside of the standard categories found by a set of 100 CFG players in a previous study, a feature termed out-of-the-boxness (p<0.01, effect size = 0.6). The findings indicate that placebo can enhance the originality aspect of creativity. This strengthens the view that placebo can be used not only to reduce negative clinical symptoms, but also to enhance positive aspects of cognition. Furthermore, we find that the impact of placebo on creativity can be tested by CFG, which can quantify multiple aspects of creative search without need for manual coding. This approach opens the way to explore the behavioral and neural mechanisms by which placebo might amplify creativity.
Placebo can enhance creativity
Rozenkrantz, Liron; Mayo, Avraham E.; Ilan, Tomer; Hart, Yuval
2017-01-01
Background The placebo effect is usually studied in clinical settings for decreasing negative symptoms such as pain, depression and anxiety. There is interest in exploring the placebo effect also outside the clinic, for enhancing positive aspects of performance or cognition. Several studies indicate that placebo can enhance cognitive abilities including memory, implicit learning and general knowledge. Here, we ask whether placebo can enhance creativity, an important aspect of human cognition. Methods Subjects were randomly assigned to a control group who smelled and rated an odorant (n = 45), and a placebo group who were treated identically but were also told that the odorant increases creativity and reduces inhibitions (n = 45). Subjects completed a recently developed automated test for creativity, the creative foraging game (CFG), and a randomly chosen subset (n = 57) also completed two manual standardized creativity tests, the alternate uses test (AUT) and the Torrance test (TTCT). In all three tests, participants were asked to create as many original solutions and were scored for originality, flexibility and fluency. Results The placebo group showed higher originality than the control group both in the CFG (p<0.04, effect size = 0.5) and in the AUT (p<0.05, effect size = 0.4), but not in the Torrance test. The placebo group also found more shapes outside of the standard categories found by a set of 100 CFG players in a previous study, a feature termed out-of-the-boxness (p<0.01, effect size = 0.6). Conclusions The findings indicate that placebo can enhance the originality aspect of creativity. This strengthens the view that placebo can be used not only to reduce negative clinical symptoms, but also to enhance positive aspects of cognition. Furthermore, we find that the impact of placebo on creativity can be tested by CFG, which can quantify multiple aspects of creative search without need for manual coding. This approach opens the way to explore the behavioral and neural mechanisms by which placebo might amplify creativity. PMID:28892513
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Gros, Eric
2018-03-01
Deep Learning (DL) has been successfully applied in numerous fields fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers with RELU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of inclusion of network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5,10,20,40,50 and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study depending on the training size and network layout used. Dropout layers improved the model accuracy for all training sizes.
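A minimal PyTorch sketch of the kind of network described (6 convolutional layers with ReLU, max-pooling after every second convolution, 2 fully connected layers, softmax over 6 anatomy classes) is shown below. The filter counts, kernel sizes, and 256×256 single-channel input are assumptions for illustration; the paper does not specify them in this abstract.

```python
import torch
import torch.nn as nn

class AnatomyClassifier(nn.Module):
    """Illustrative 6-conv / 2-FC network for classifying an axial CT slice
    into one of six anatomy regions (channel counts are assumed)."""
    def __init__(self, n_classes=6):
        super().__init__()
        chans = [1, 16, 16, 32, 32, 64, 64]
        layers = []
        for i in range(6):
            layers += [nn.Conv2d(chans[i], chans[i + 1], kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
            if i % 2 == 1:                      # max-pool after every second conv
                layers.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, 256),       # assumes 256x256 input, 3 poolings
            nn.ReLU(inplace=True),
            nn.Linear(256, n_classes))          # softmax applied below / in the loss

    def forward(self, x):
        return self.classifier(self.features(x))

model = AnatomyClassifier()
logits = model(torch.randn(2, 1, 256, 256))     # batch of 2 single-channel slices
probs = torch.softmax(logits, dim=1)
```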
A Statistical Analysis Plan to Support the Joint Forward Area Air Defense Test.
1984-08-02
by establishing a specific significance level prior to performing the statistical test (traditionally α levels are set at .01 or .05). What is often...undesirable increase in β. For constant α levels, the power (1 - β) of a statistical test can be increased by increasing the sample size of the test. [Ref...] [Flowchart fragment: perform k-sample comparison (ANOVA) test on MOP "A" factor; do differences exist among MOP "A" levels?]
Effects of sediment supply on surface textures of gravel‐bed rivers
Buffington, John M.; Montgomery, David R.
1999-01-01
Using previously published data from flume studies, we test a new approach for quantifying the effects of sediment supply (i.e., bed material supply) on surface grain size of equilibrium gravel channels. Textural response to sediment supply is evaluated relative to a theoretical prediction of competent median grain size (D50′). We find that surface median grain size (D50) varies inversely with sediment supply rate and systematically approaches the competent value (D50′) at low equilibrium transport rates. Furthermore, equilibrium transport rate is a power function of the difference between applied and critical shear stresses and is therefore a power function of the difference between competent and observed median grain sizes (D50′ and D50). Consequently, we propose that the difference between predicted and observed median grain sizes can be used to determine sediment supply rate in equilibrium channels. Our analysis framework collapses data from different studies toward a single relationship between sediment supply rate and surface grain size. While the approach appears promising, we caution that it has been tested only on a limited set of laboratory data and a narrow range of channel conditions.
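The quantitative link described above can be written compactly. The exponents m and n below are generic placeholders (the abstract does not report fitted values), so this is a schematic form rather than the authors' fitted relation:

$$ q_s \;\propto\; (\tau - \tau_c)^{m} \qquad\text{and, equivalently,}\qquad q_s \;\propto\; \left(D_{50}' - D_{50}\right)^{n}, $$

where $q_s$ is the equilibrium bed-material transport (supply) rate, $\tau$ and $\tau_c$ are the applied and critical shear stresses, and $D_{50}'$ and $D_{50}$ are the competent and observed surface median grain sizes. Inverting the second relation is what allows sediment supply to be inferred from the difference between predicted and observed median grain sizes.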
Immediate Judgments of Learning are Insensitive to Implicit Interference Effects at Retrieval
Eakin, Deborah K.; Hertzog, Christopher
2013-01-01
We conducted three experiments to determine whether metamemory predictions at encoding, immediate judgments of learning (IJOLs) are sensitive to implicit interference effects that will occur at retrieval. Implicit interference was manipulated by varying the association set size of the cue (Exps. 1 & 2) or the target (Exp. 3). The typical finding is that memory is worse for large-set-size cues and targets, but only when the target is studied alone and later prompted with a related cue (extralist). When the pairs are studied together (intralist), recall is the same regardless of set size; set-size effects are eliminated. Metamemory predictions at retrieval, such as delayed JOLs (DJOLs) and feeling of knowing (FOK) judgments accurately reflect implicit interference effects (e.g., Eakin & Hertzog, 2006). In Experiment 1, we contrasted cue-set-size effects on IJOLs, DJOLs, and FOKs. After wrangling with an interesting methodological conundrum related to set size effects (Exp. 2), we found that whereas DJOLs and FOKs accurately predicted set size effects on retrieval, a comparison between IJOLs and no-cue IJOLs demonstrated that immediate judgments did not vary with set size. In Experiment 3, we confirmed this finding by manipulating target set size. Again, IJOLs did not vary with set size whereas DJOLs and FOKs did. The findings provide further evidence for the inferential view regarding the source of metamemory predictions, as well as indicate that inferences are based on different sources depending on when in the memory process predictions are made. PMID:21915761
DOT National Transportation Integrated Search
1970-01-01
Distribution Characteristics of Materials: Ten bituminous distributors and ten chip spreading operations were investigated the former by cotton pad, cup, and trough tests; the latter by measuring the distance covered by a truckload and by placing pan...
Measuring droplet size of agricultural spray nozzles - Measurement distance and airspeed effects
USDA-ARS?s Scientific Manuscript database
With a number of new spray testing laboratories going into operation within the U.S. and each gearing up to measure spray atomization from agricultural spray nozzles using laser diffraction, establishing and following a set of scientific standard procedures is crucial to long term data generation an...
Hydrocode predictions of collisional outcomes: Effects of target size
NASA Technical Reports Server (NTRS)
Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.
1991-01-01
Traditionally, laboratory impact experiments, designed to simulate asteroid collisions, attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to using targets considerably smaller than the modelled objects. It is therefore necessary to develop some methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained using a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.
Testing and extension of a sea lamprey feeding model
Cochran, Philip A.; Swink, William D.; Kinziger, Andrew P.
1999-01-01
A previous model of feeding by sea lamprey Petromyzon marinus predicted energy intake and growth by lampreys as a function of lamprey size, host size, and duration of feeding attachments, but it was applicable only to lampreys feeding at 10°C and it was tested against only a single small data set of limited scope. We extended the model to other temperatures and tested it against an extensive data set (more than 700 feeding bouts) accumulated during experiments with captive sea lampreys. Model predictions of instantaneous growth were highly correlated with observed growth, and a partitioning of mean squared error between model predictions and observed results showed that 88.5% of the variance was due to random variation rather than to systematic errors. However, deviations between observed and predicted values varied substantially, especially for short feeding bouts. Predicted and observed growth trajectories of individual lampreys during multiple feeding bouts during the summer tended to correspond closely, but predicted growth was generally much higher than observed growth late in the year. This suggests the possibility that large overwintering lampreys reduce their feeding rates while attached to hosts. Seasonal or size-related shifts in the fate of consumed energy may provide an alternative explanation. The lamprey feeding model offers great flexibility in assessing growth of captive lampreys within various experimental protocols (e.g., different host species or thermal regimes) because it controls for individual differences in feeding history.
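The "partitioning of mean squared error" mentioned above can be illustrated with a standard decomposition of MSE into a mean-bias term, a term reflecting differences in spread, and a term driven by imperfect correlation (the last being the "random" part). The sketch below is illustrative of that general idea and is not necessarily the exact partition used by the authors.

```python
import numpy as np

def mse_partition(pred, obs):
    """Decompose MSE(pred, obs) using the identity
    MSE = (mp - mo)^2 + (sp - so)^2 + 2*(1 - r)*sp*so  (population moments)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mp, mo = pred.mean(), obs.mean()
    sp, so = pred.std(), obs.std()
    r = np.corrcoef(pred, obs)[0, 1]
    bias_sq = (mp - mo) ** 2          # systematic offset
    spread = (sp - so) ** 2           # systematic difference in variability
    random = 2.0 * (1.0 - r) * sp * so  # unexplained (random) component
    mse = np.mean((pred - obs) ** 2)
    return {"mse": mse, "bias^2": bias_sq, "spread": spread,
            "random": random, "random_fraction": random / mse}

# Example with synthetic growth predictions vs. observations.
rng = np.random.default_rng(0)
obs = rng.normal(0.05, 0.02, 700)
pred = obs + rng.normal(0.0, 0.01, 700)   # mostly random error
print(mse_partition(pred, obs))
```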
Blue treatment enhances cyclic fatigue resistance of vortex nickel-titanium rotary files.
Plotino, Gianluca; Grande, Nicola M; Cotti, Elisabetta; Testarelli, Luca; Gambarini, Gianluca
2014-09-01
The aim of the present study was to evaluate the difference in cyclic fatigue resistance between Vortex Blue (Dentsply Tulsa Dental, Tulsa, OK) and Profile Vortex nickel-titanium (Dentsply Tulsa Dental) rotary instruments. Two groups of nickel-titanium endodontic instruments, ProFile Vortex and Vortex Blue, consisting of identical instruments in tip size and taper (15/.04, 20/.06, 25/.04, 25/.06, 30/.06, 35/.06, and 40/.04) were tested. Ten instruments from each system and size were tested for cyclic fatigue resistance, resulting in a total of 140 new instruments. All instruments were rotated in a simulated root canal with a 60° angle of curvature and a 5-mm radius of curvature of a specific cyclic fatigue testing device until fracture occurred. The number of cycles to failure and the length of the fractured tip were recorded for each instrument in each group. The mean values and standard deviation were calculated, and data were subjected to 1-way analysis of variance and a Bonferroni t test. Significance was set at the 95% confidence level. When comparing the same size of the 2 different instruments, a statistically significant difference (P < .05) was noted between all sizes of Vortex Blue and Profile Vortex instruments except for tip size 15 and .04 taper (P = 1.000). No statistically significant difference (P > .05) was noted among all groups tested in terms of fragment length. Vortex Blue showed a significant increase in cyclic fatigue resistance when compared with the same sizes of ProFile Vortex. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
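The analysis pattern described (repeated random 80/40 splits, a LASSO-penalized logistic model fit on each training set, and sensitivity/specificity computed on the corresponding test set) can be sketched as follows with scikit-learn. The synthetic feature matrix, variable names, and regularization strength are assumptions; this is not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))        # stand-in for clinical + metabolite features
y = rng.integers(0, 2, size=120)      # 1 = abstinent (responder), 0 = any drinking

sens, spec = [], []
for seed in range(5):                 # five independent train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=80, test_size=40, random_state=seed, stratify=y)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # LASSO penalty
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))

print(f"mean sensitivity = {np.mean(sens):.2f}, mean specificity = {np.mean(spec):.2f}")
```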
Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne
2017-02-28
This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
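For readers unfamiliar with the model class: a zero-inflated Poisson outcome combines a logistic "structural zero" component (e.g., abstinence) with a Poisson count component (e.g., quantity consumed), and a group effect can appear in either component. The sketch below simulates and fits such a model with statsmodels; the covariate setup is a minimal illustration under assumed parameter values and does not reproduce the study's time-varying effect model.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)             # group indicator (two study groups)

# Simulate: the probability of a structural zero (abstinence) and the Poisson
# mean (quantity) both depend on group membership.
p_zero = 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * group)))
lam = np.exp(0.7 + 0.4 * group)
y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

X = sm.add_constant(group)                     # design for the Poisson (count) component
X_infl = sm.add_constant(group)                # design for the logistic (zero) component
fit = ZeroInflatedPoisson(y, X, exog_infl=X_infl, inflation="logit").fit(disp=0)
print(fit.summary())
```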
Immediate-type hypersensitivity reactions and hypnosis: problems in methodology.
Laidlaw, T M; Richardson, D H; Booth, R J; Large, R G
1994-08-01
Hypnosis has been used to ameliorate skin test reactivity in studies dating back to the 1930s. This study, using modern methodology and statistical analyses, set out to test the hypothesis that reactions to histamine can be decreased by hypnotic suggestion. Five subjects, all asthmatic and untrained in hypnosis, were given three hypnotic sessions in which they were asked to control their reactions to histamine administered to the forearm skin by the Pepys technique. These sessions were compared with three non-hypnotic sessions. Flare sizes, but not wheal sizes, were significantly reduced after the hypnosis sessions compared with sessions without hypnosis. Skin temperature was correlated with the size of the reactions. The day on which the sessions took place contributed a significant amount of the remaining unexplained variance, raising questions about what could cause these day-to-day changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Jonathon; Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720; Neaton, Jeffrey B.
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
Program Flow Analyzer. Volume 3
1984-08-01
metrics are defined using these basic terms. Of interest is another measure for the size of the program, called the volume: V = N × log2(n). The unit of...correlated to actual data and most useful for test. The formula describing difficulty may be expressed as: D = (n1 × N2)/(2 × n2) = 1/L. Difficulty, then, is the...linearly independent program paths through any program graph. A maximal set of these linearly independent paths, called a "basis set," can always be found
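The Halstead-style size metrics referenced here (volume from the operator/operand counts, difficulty, and its reciprocal, the program level) can be computed directly from the four basic counts. A small sketch with hypothetical counts is shown below.

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Halstead metrics from distinct operators (n1), distinct operands (n2),
    total operators (N1), and total operands (N2)."""
    n = n1 + n2                            # program vocabulary
    N = N1 + N2                            # program length
    volume = N * math.log2(n)              # V = N * log2(n)
    difficulty = (n1 / 2.0) * (N2 / n2)    # D = (n1 * N2) / (2 * n2)
    level = 1.0 / difficulty               # L = 1 / D
    effort = difficulty * volume
    return {"vocabulary": n, "length": N, "volume": volume,
            "difficulty": difficulty, "level": level, "effort": effort}

# Hypothetical counts for a small routine.
print(halstead_metrics(n1=12, n2=7, N1=27, N2=15))
```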
Effect of spray application technique on spray deposition in greenhouse strawberries and tomatoes.
Braekman, Pascal; Foque, Dieter; Messens, Winy; Van Labeke, Marie-Christine; Pieters, Jan G; Nuyttens, David
2010-02-01
Increasingly, Flemish greenhouse growers are using spray booms instead of spray guns to apply plant protection products. Although the advantages of spray booms are well known, growers still have many questions concerning nozzle choice and settings. Spray deposition using a vertical spray boom in tomatoes and strawberries was compared with reference spray equipment. Five different settings of nozzle type, size and pressure were tested with the spray boom. In general, the standard vertical spray boom performed better than the reference spray equipment in strawberries (spray gun) and in tomatoes (air-assisted sprayer). Nozzle type and settings significantly affected spray deposition and crop penetration. Highest overall deposits in strawberries were achieved using air-inclusion or extended-range nozzles. In tomatoes, the extended-range nozzles and the twin air-inclusion nozzles performed best. Using smaller-size extended-range nozzles above the recommended pressure range resulted in lower deposits, especially inside the crop canopy. The use of a vertical spray boom is a promising technique for applying plant protection products in a safe and efficient way in tomatoes and strawberries, and nozzle choice and setting should be carefully considered.
Shelmerdine, Susan C; Simcock, Ian C; Hutchinson, John Ciaran; Aughwane, Rosalind; Melbourne, Andrew; Nikitichev, Daniil I; Ong, Ju-Ling; Borghi, Alessandro; Cole, Garrard; Kingham, Emilia; Calder, Alistair D; Capelli, Claudio; Akhtar, Aadam; Cook, Andrew C; Schievano, Silvia; David, Anna; Ourselin, Sebastian; Sebire, Neil J; Arthurs, Owen J
2018-06-14
Microfocus CT (micro-CT) is an imaging method that provides three-dimensional digital data sets with comparable resolution to light microscopy. Although it has traditionally been used for non-destructive testing in engineering, aerospace industries and in preclinical animal studies, new applications are rapidly becoming available in the clinical setting including post-mortem fetal imaging and pathological specimen analysis. Printing three-dimensional models from imaging data sets for educational purposes is well established in the medical literature, but typically using low resolution (0.7 mm voxel size) data acquired from CT or MR examinations. With higher resolution imaging (voxel sizes below 1 micron, <0.001 mm) at micro-CT, smaller structures can be better characterised, and data sets post-processed to create accurate anatomical models for review and handling. In this review, we provide examples of how three-dimensional printing of micro-CT imaged specimens can provide insight into craniofacial surgical applications, developmental cardiac anatomy, placental imaging, archaeological remains and high-resolution bone imaging. We conclude with other potential future usages of this emerging technique.
Models for the hotspot distribution
NASA Technical Reports Server (NTRS)
Jurdy, Donna M.; Stefanick, Michael
1990-01-01
Published hotspot catalogs all show a hemispheric concentration beyond what can be expected by chance. Cumulative distributions about the center of concentration are described by a power law with a fractal dimension closer to 1 than 2. Random sets of the corresponding sizes do not show this effect. A simple shift of the random sets away from a point would produce distributions similar to those of hotspot sets. The possible relation of the hotspots to the locations of ridges and subduction zones is tested using large sets of randomly-generated points to estimate areas within given distances of the plate boundaries. The probability of finding the observed number of hotspots within 10 deg of the ridges is about what is expected.
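The Monte Carlo step described, estimating the spherical-surface area (and hence the chance probability) within a given angular distance of the plate boundaries by generating large sets of uniformly random points, can be sketched as follows. The boundary points used here are synthetic placeholders; digitized ridge and trench coordinates would be substituted in a real calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """Uniformly distributed points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Synthetic stand-in for digitized plate-boundary locations (unit vectors).
boundary = random_unit_vectors(500)

def fraction_within(points, boundary, max_angle_deg):
    """Fraction of points lying within max_angle_deg of any boundary point."""
    cos_thresh = np.cos(np.radians(max_angle_deg))
    cos_dist = points @ boundary.T          # cosine of angular distance = dot product
    return np.mean(cos_dist.max(axis=1) >= cos_thresh)

# Estimate the surface fraction within 10 degrees of the boundaries; this is the
# chance probability that a single random "hotspot" falls that close.
points = random_unit_vectors(10_000)
print(fraction_within(points, boundary, 10.0))
```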
Strain controlled cyclic tests on miniaturized specimens
NASA Astrophysics Data System (ADS)
Procházka, R.; Džugan, J.
2017-02-01
The paper deals with strain-controlled cyclic tests, using non-contact strain measurement based on digital image correlation, on proportionally scaled-down versions of conventional specimens. The cyclic behaviour of 34CrNiMo6 high-strength steel was investigated on miniaturized round specimens with a diameter of 2 mm and compared with specimens conforming to the ASTM E606 standard. The cycle asymmetry coefficient was R = -1. In future work this approach is intended for lifetime assessment of in-service components, since it allows a group of mechanical tests to be carried out on a limited amount of experimental material. Attention was paid to confirming the suitability of the proposed miniaturized geometry, test set-up and procedure. The test results obtained enabled the construction of Manson-Coffin curves and the assessment of fatigue parameters. The purpose of this study is to present the differences between cyclic curves and cyclic parameters evaluated on conventional and miniaturized specimens.
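For orientation, Manson-Coffin (strain-life) curves of the kind mentioned above are commonly fitted with the combined elastic-plastic strain-life relation below; the symbols are the standard fatigue parameters and are given here for context, not as values reported in the paper:

$$ \frac{\Delta\varepsilon}{2} \;=\; \frac{\sigma_f'}{E}\,(2N_f)^{b} \;+\; \varepsilon_f'\,(2N_f)^{c}, $$

where $\Delta\varepsilon/2$ is the strain amplitude, $2N_f$ the number of reversals to failure, $E$ Young's modulus, $\sigma_f'$ and $b$ the fatigue strength coefficient and exponent, and $\varepsilon_f'$ and $c$ the fatigue ductility coefficient and exponent.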
Impact of pulse duration on Ho:YAG laser lithotripsy: fragmentation and dusting performance.
Bader, Markus J; Pongratz, Thomas; Khoder, Wael; Stief, Christian G; Herrmann, Thomas; Nagele, Udo; Sroka, Ronald
2015-04-01
In vitro investigations of Ho:YAG laser-induced stone fragmentation were performed to identify potential impacts of different pulse durations on stone fragmentation characteristics. A Ho:YAG laser system (Swiss LaserClast, EMS S.A., Nyon, Switzerland) with selectable long or short pulse mode was tested with regard to its fragmentation and laser hardware compatibility properties. The pulse duration depends on the specific laser parameters. Fragmentation tests (hand-held, hands-free, single-pulse-induced crater) on artificial BEGO stones were performed under reproducible experimental conditions (fibre sizes: 365 and 200 µm; laser settings: 10 W through combinations of 0.5, 1, 2 J/pulse and 20, 10, 5 Hz, respectively). Differences in fragmentation rates between the two pulse duration regimes were detected with statistical significance for defined settings. Hand-held and motivated Ho:YAG laser-assisted fragmentation of BEGO stones showed no significant difference between short pulse mode and long pulse mode, neither in fragmentation rates nor in number of fragments and fragment sizes. Similarly, the results of the hands-free fragmentation tests (with and without anti-repulsion device) showed no statistical differences between long pulse and short pulse modes. The study showed that fragmentation rates for long and short pulse durations at identical power settings remain at a comparable level. Longer holmium laser pulse duration reduces stone pushback. Therefore, longer laser pulses may result in better clinical outcome of laser lithotripsy and more convenient handling during clinical use without compromising fragmentation effectiveness.
Experimental Spin Testing of Integrally Damped Composite Plates
NASA Technical Reports Server (NTRS)
Kosmatka, John
1998-01-01
The experimental behavior of spinning laminated composite pretwisted plates (turbo-fan blade-like) with small (less than 10% by volume) integral viscoelastic damping patches was investigated at NASA-Lewis Research Center. Ten different plate sets were experimentally spin tested and the resulting data was analyzed. The first four plate sets investigated tailoring patch locations and definitions to damp specific modes on spinning flat graphite/epoxy plates as a function of rotational speed. The remaining six plate sets investigated damping patch size and location on specific modes of pretwisted (30 degrees) graphite/epoxy plates. The results reveal that: (1) a significant amount of damping can be added using a small amount of damping material, (2) the damped plates experienced no failures up to the tested 28,000 g's and 750,000 cycles, (3) centrifugal loads caused an increase in bending frequencies and corresponding reductions in bending damping levels that are proportional to the bending stiffness increase, and (4) the centrifugal loads caused a decrease in torsion natural frequency and increase in damping levels of pretwisted composite plates.
Fabrication and evaluation of cold/formed/weldbrazed beta-titanium skin-stiffened compression panels
NASA Technical Reports Server (NTRS)
Royster, D. M.; Bales, T. T.; Davis, R. C.; Wiant, H. R.
1983-01-01
The room temperature and elevated temperature buckling behavior of cold formed beta titanium hat shaped stiffeners joined by weld brazing to alpha-beta titanium skins was determined. A preliminary set of single stiffener compression panels were used to develop a data base for material and panel properties. These panels were tested at room temperature and 316 C (600 F). A final set of multistiffener compression panels were fabricated for room temperature tests by the process developed in making the single stiffener panels. The overall geometrical dimensions for the multistiffener panels were determined by the structural sizing computer code PASCO. The data presented from the panel tests include load shortening curves, local buckling strengths, and failure loads. Experimental buckling loads are compared with the buckling loads predicted by the PASCO code. Material property data obtained from tests of ASTM standard dogbone specimens are also presented.
Dilution: atheoretical burden or just load? A reply to Tsal and Benoni (2010).
Lavie, Nilli; Torralbo, Ana
2010-12-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere "dilution") for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load.
Attentional priorities and access to short-term memory: parietal interactions.
Gillebert, Céline R; Dyrholm, Mads; Vangkilde, Signe; Kyllingsbæk, Søren; Peeters, Ronald; Vandenberghe, Rik
2012-09-01
The intraparietal sulcus (IPS) has been implicated in selective attention as well as visual short-term memory (VSTM). To contrast mechanisms of target selection, distracter filtering, and access to VSTM, we combined behavioral testing, computational modeling and functional magnetic resonance imaging. Sixteen healthy subjects participated in a change detection task in which we manipulated both target and distracter set sizes. We directly compared the IPS response as a function of the number of targets and distracters in the display and in VSTM. When distracters were not present, the posterior and middle segments of IPS showed the predicted asymptotic activity increase with an increasing target set size. When distracters were added to a single target, activity also increased as predicted. However, the addition of distracters to multiple targets suppressed both middle and posterior IPS activities, thereby displaying a significant interaction between the two factors. The interaction between target and distracter set size in IPS could not be accounted for by a simple explanation in terms of number of items accessing VSTM. Instead, it led us to a model where items accessing VSTM receive differential weights depending on their behavioral relevance, and secondly, a suppressive effect originates during the selection phase when multiple targets and multiple distracters are simultaneously present. The reverse interaction between target and distracter set size was significant in the right temporoparietal junction (TPJ), where activity was highest for a single target compared to any other condition. Our study reconciles the role of middle IPS in attentional selection and biased competition with its role in VSTM access. Copyright © 2012 Elsevier Inc. All rights reserved.
Lavie, Nilli; Torralbo, Ana
2010-01-01
Load theory of attention proposes that distractor processing is reduced in tasks with high perceptual load that exhaust attentional capacity within task-relevant processing. In contrast, tasks of low perceptual load leave spare capacity that spills over, resulting in the perception of task-irrelevant, potentially distracting stimuli. Tsal and Benoni (2010) find that distractor response competition effects can be reduced under conditions with a high search set size but low perceptual load (due to a singleton color target). They claim that the usual effect of search set size on distractor processing is not due to attentional load but instead attribute this to lower level visual interference. Here, we propose an account for their findings within load theory. We argue that in tasks of low perceptual load but high set size, an irrelevant distractor competes with the search nontargets for remaining capacity. Thus, distractor processing is reduced under conditions in which the search nontargets receive the spillover of capacity instead of the irrelevant distractor. We report a new experiment testing this prediction. Our new results demonstrate that, when peripheral distractor processing is reduced, it is the search nontargets nearest to the target that are perceived instead. Our findings provide new evidence for the spare capacity spillover hypothesis made by load theory and rule out accounts in terms of lower level visual interference (or mere “dilution”) for cases of reduced distractor processing under low load in displays of high set size. We also discuss additional evidence that discounts the viability of Tsal and Benoni's dilution account as an alternative to perceptual load. PMID:21133554
Studies of lead tungstate crystals for the ALICE electromagnetic calorimeter PHOS
NASA Astrophysics Data System (ADS)
Ippolitov, M.; Beloglovsky, S.; Bogolubsky, M.; Burachas, S.; Erin, S.; Klovning, A.; Kuriakin, A.; Lebedev, V.; Lobanov, M.; Maeland, O.; Manko, V.; Nikulin, S.; Nyanin, A.; Odland, O. H.; Punin, V.; Sadovsky, S.; Samoilenko, V.; Sibiriak, Yu.; Skaali, B.; Tsvetkov, A.; Vinogradov, Yu.; Vasiliev, A.
2002-06-01
Full-size (22×22×180 mm³) ALICE crystals were delivered by the "North Crystals" company, Apatity, Russia. These crystals were tested with test benches specially built for measurements of the crystals' optical transmission and light yield. Beam-test results of different sets of 3×3 matrices with Hamamatsu APD light readout are presented. Data were taken at electron momenta from 600 MeV/c up to 10 GeV/c. Energy resolution and linearity curves are measured. The tests were carried out at the CERN PS and SPS secondary beam-lines.
Dual representation of item positions in verbal short-term memory: Evidence for two access modes.
Lange, Elke B; Verhaeghen, Paul; Cerella, John
Memory sets of N = 1~5 digits were exposed sequentially from left-to-right across the screen, followed by N recognition probes. Probes had to be compared to memory list items on identity only (Sternberg task) or conditional on list position. Positions were probed randomly or in left-to-right order. Search functions related probe response times to set size. Random probing led to ramped, "Sternbergian" functions whose intercepts were elevated by the location requirement. Sequential probing led to flat search functions-fast responses unaffected by set size. These results suggested that items in STM could be accessed either by a slow search-on-identity followed by recovery of an associated location tag, or in a single step by following item-to-item links in study order. It is argued that this dual coding of location information occurs spontaneously at study, and that either code can be utilised at retrieval depending on test demands.
Code of Federal Regulations, 2013 CFR
2013-10-01
... with the following: (a) The pipe must be made of steel of the carbon, low alloy-high strength, or alloy... sets forth the chemical requirements for the pipe steel and mechanical tests for the pipe to provide... made, the specified minimum yield strength or grade, and the pipe size. The marking must be applied in...
Code of Federal Regulations, 2012 CFR
2012-10-01
... with the following: (a) The pipe must be made of steel of the carbon, low alloy-high strength, or alloy... sets forth the chemical requirements for the pipe steel and mechanical tests for the pipe to provide... made, the specified minimum yield strength or grade, and the pipe size. The marking must be applied in...
Code of Federal Regulations, 2014 CFR
2014-10-01
... with the following: (a) The pipe must be made of steel of the carbon, low alloy-high strength, or alloy... sets forth the chemical requirements for the pipe steel and mechanical tests for the pipe to provide... made, the specified minimum yield strength or grade, and the pipe size. The marking must be applied in...
Code of Federal Regulations, 2011 CFR
2011-10-01
... with the following: (a) The pipe must be made of steel of the carbon, low alloy-high strength, or alloy... sets forth the chemical requirements for the pipe steel and mechanical tests for the pipe to provide... made, the specified minimum yield strength or grade, and the pipe size. The marking must be applied in...
Code of Federal Regulations, 2010 CFR
2010-10-01
... with the following: (a) The pipe must be made of steel of the carbon, low alloy-high strength, or alloy... sets forth the chemical requirements for the pipe steel and mechanical tests for the pipe to provide... made, the specified minimum yield strength or grade, and the pipe size. The marking must be applied in...
Gender Differences in Subjective Well-Being: Comparing Societies with Respect to Gender Equality
ERIC Educational Resources Information Center
Tesch-Romer, Clemens; Motel-Klingebiel, Andreas; Tomasik, Martin J.
2008-01-01
These analyses explore the relationship between gender inequality and subjective well-being. The hypothesis was tested as to whether societal gender inequality is related to the size of gender differences in subjective well-being in various societies. Results come from comparative data sets (World Values Survey, involving 57 countries; OASIS…
The contribution of stimulus frequency and recency to set-size effects.
van 't Wout, Félice
2018-06-01
Hick's law describes the increase in choice reaction time (RT) with the number of stimulus-response (S-R) mappings. However, in choice RT experiments, set-size is typically confounded with stimulus recency and frequency: With a smaller set-size, each stimulus occurs on average more frequently and more recently than with a larger set-size. To determine to what extent stimulus recency and frequency contribute to the set-size effect, stimulus set-size was manipulated independently of stimulus recency and frequency, by keeping recency and frequency constant for a subset of the stimuli. Although this substantially reduced the set-size effect (by approximately two-thirds for these stimuli), it did not eliminate it. Thus, the time required to retrieve an S-R mapping from memory is (at least in part) determined by the number of alternatives. In contrast, a recent task switching study (Van 't Wout et al. in Journal of Experimental Psychology: Learning, Memory & Cognition., 41, 363-376, 2015) using the same manipulation found that the time required to retrieve a task-set from memory is not influenced by the number of alternatives per se. Hence, this experiment further supports a distinction between two levels of representation in task-set control: The level of task-sets, and the level of S-R mappings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720
2015-02-21
A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Mardirossian, Narbe; Head-Gordon, Martin
2015-02-20
We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. This raw data comes from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.
Nosewitness Identification: Effects of Lineup Size and Retention Interval.
Alho, Laura; Soares, Sandra C; Costa, Liliana P; Pinto, Elisa; Ferreira, Jacqueline H T; Sorjonen, Kimmo; Silva, Carlos F; Olsson, Mats J
2016-01-01
Although canine identification of body odor (BO) has been widely used as forensic evidence, the concept of nosewitness identification by human observers was only recently put to the test. The results indicated that BOs associated with male characters in authentic crime videos could later be identified in BO lineup tests well above chance. To further evaluate nosewitness memory, we assessed the effects of lineup size (Experiment 1) and retention interval (Experiment 2), using a forced-choice memory test. The results showed that nosewitness identification works for all lineup sizes (3, 5, and 8 BOs), but that larger lineups compromise identification performance in similarity to observations from eye- and earwitness studies. Also in line with previous eye- and earwitness studies, but in disagreement with some studies on odor memory, Experiment 2 showed significant forgetting between shorter retention intervals (15 min) and longer retention intervals (1-week) using lineups of five BOs. Altogether this study shows that identification of BO in a forensic setting is possible and has limits and characteristics in line with witness identification through other sensory modalities.
Nosewitness Identification: Effects of Lineup Size and Retention Interval
Alho, Laura; Soares, Sandra C.; Costa, Liliana P.; Pinto, Elisa; Ferreira, Jacqueline H. T.; Sorjonen, Kimmo; Silva, Carlos F.; Olsson, Mats J.
2016-01-01
Although canine identification of body odor (BO) has been widely used as forensic evidence, the concept of nosewitness identification by human observers was only recently put to the test. The results indicated that BOs associated with male characters in authentic crime videos could later be identified in BO lineup tests well above chance. To further evaluate nosewitness memory, we assessed the effects of lineup size (Experiment 1) and retention interval (Experiment 2), using a forced-choice memory test. The results showed that nosewitness identification works for all lineup sizes (3, 5, and 8 BOs), but that larger lineups compromise identification performance in similarity to observations from eye- and earwitness studies. Also in line with previous eye- and earwitness studies, but in disagreement with some studies on odor memory, Experiment 2 showed significant forgetting between shorter retention intervals (15 min) and longer retention intervals (1-week) using lineups of five BOs. Altogether this study shows that identification of BO in a forensic setting is possible and has limits and characteristics in line with witness identification through other sensory modalities. PMID:27303317
In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging
Ibrahim, Mohd Firdaus; Ahmad Sa’ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon
2016-01-01
The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system was able to identify the irregularity of mango shape and estimate its mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and perimeter were extracted from each image. Fourier descriptors and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mangoes. The volume from the water displacement method was compared with the volume estimated by image processing using a paired t-test and the Bland-Altman method. The difference between the two measurements was not significant (P > 0.05). The average correct shape classification was 98% for a training set composed of 180 mangoes. The result was validated with another testing set consisting of 140 mangoes, which gave a success rate of 92%. The same set was used for evaluating the performance of the mass estimation. The average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass. PMID:27801799
In-Line Sorting of Harumanis Mango Based on External Quality Using Visible Imaging.
Ibrahim, Mohd Firdaus; Ahmad Sa'ad, Fathinul Syahir; Zakaria, Ammar; Md Shakaff, Ali Yeon
2016-10-27
The conventional method of grading Harumanis mango is time-consuming, costly and affected by human bias. In this research, an in-line system was developed to classify Harumanis mango using computer vision. The system was able to identify the irregularity of mango shape and estimate its mass. A group of images of mangoes of different sizes and shapes was used as the database set. Important features such as length, height, centroid and perimeter were extracted from each image. Fourier descriptors and size-shape parameters were used to describe the mango shape, while the disk method was used to estimate the mass of the mango. Four features were selected by stepwise discriminant analysis, which was effective in sorting regular and misshapen mangoes. The volume from the water displacement method was compared with the volume estimated by image processing using a paired t-test and the Bland-Altman method. The difference between the two measurements was not significant (P > 0.05). The average correct shape classification was 98% for a training set composed of 180 mangoes. The result was validated with another testing set consisting of 140 mangoes, which gave a success rate of 92%. The same set was used for evaluating the performance of the mass estimation. The average success rate of classification for grading based on mass was 94%. The results indicate that the in-line sorting system using machine vision has great potential for automatic fruit sorting according to shape and mass.
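The Bland-Altman comparison used above plots the difference between the two volume measurements against their mean and checks whether the differences stay within the limits of agreement (mean difference ± 1.96 SD). A minimal sketch of that calculation, with made-up volumes standing in for the water-displacement and image-based estimates:

```python
import numpy as np

# Placeholder paired volume estimates for the same mangoes (cm^3).
vol_water = np.array([310.0, 402.5, 288.0, 355.2, 421.8])   # water displacement
vol_image = np.array([305.1, 409.0, 292.3, 350.0, 430.4])   # image-based disk method

diff = vol_image - vol_water
bias = diff.mean()                     # mean difference between the two methods
half_width = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

print(f"bias = {bias:.2f} cm^3, "
      f"limits of agreement = [{bias - half_width:.2f}, {bias + half_width:.2f}] cm^3")
```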
Mechanical limits to maximum weapon size in a giant rhinoceros beetle.
McCullough, Erin L
2014-07-07
The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
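As defined in the abstract, the safety factor for a horn of length L is simply the ratio between its breaking force and the peak force it experiences in a typical fight:

```latex
\mathrm{SF}(L) \;=\; \frac{F_{\mathrm{break}}(L)}{F_{\mathrm{fight}}(L)}
```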
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
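The counterpoise correction referred to here is the standard Boys-Bernardi scheme, in which both monomer energies are evaluated in the full dimer basis so that basis set superposition error largely cancels in the binding energy (a general definition, not a detail specific to this study):

```latex
E_{\mathrm{bind}}^{\mathrm{CP}} \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB}
```

Here the superscript denotes the basis set used and the subscript the system evaluated in it.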
NASA Astrophysics Data System (ADS)
Jux, Maximilian; Finke, Benedikt; Mahrholz, Thorsten; Sinapius, Michael; Kwade, Arno; Schilde, Carsten
2017-04-01
Several Al(OH)O (boehmite) dispersions in an epoxy resin are produced in a kneader to study the mechanistic correlation between nanoparticle size and the mechanical properties of the prepared nanocomposites. The agglomerate size is set by a targeted variation in solid content and temperature during dispersion, resulting in a different level of stress intensity and thus a different final agglomerate size during the process. The suspension viscosity was used to estimate the stress energy in laminar shear flow. Agglomerate size measurements are performed via dynamic light scattering to ensure the quality of the produced dispersions. Furthermore, various nanocomposite samples are prepared for three-point bending, tension, and fracture toughness tests. The screening of the size effect is carried out with at least seven samples per agglomerate size and test method. The variation of solid content is found to be a reliable method to adjust the agglomerate size between 138 and 354 nm during dispersion. The size effect on Young's modulus and the critical stress intensity is only marginal. Nevertheless, there is a statistically relevant trend showing a linear increase with decreasing agglomerate size. In contrast, the size effect is more pronounced for the sample's strain and stress at failure. Unlike microscaled agglomerates or particles, which lead to embrittlement of the composite material, nanoscaled agglomerates or particles allow the composite elongation to remain nearly at the level of the base material. The observed effect is valid for agglomerate sizes between 138 and 354 nm and a particle mass fraction of 10 wt%.
Cian, C; Esquivié, D; Barraud, P A; Raphel, C
1995-01-01
The visual angle subtended by the frame seems to be an important determinant of the contribution of orientation contrast and illusion of self-tilt (ie vection) to the rod-and-frame effect. Indeed, the visuovestibular factor (which produces vection) seems to be predominant in large displays and the contrast effect in small displays. To determine how these two phenomena are combined to account for the rod-and-frame effect, independent estimates of the magnitude of each component in relation to the angular size subtended by the display were examined. Thirty-five observers were exposed to three sets of experimental situations: the body-adjustment test (illusion of self-tilt only), the tilt illusion (contrast only) and the rod-and-frame test, each display subtending 7, 12, 28, and 45 deg of visual angle. Results showed that errors recorded in the three situations increased linearly with angular size. Whatever the size of the frame, both mechanisms, the contrast effect (tilt illusion) and the illusory effect on self-orientation (body-adjustment test), are always present. However, rod-and-frame errors grew at a faster rate than the other two effects as the size of the stimuli became larger. Neither independent phenomenon alone, nor their combined effect, could fully account for the rod-and-frame effect at any angular size of the apparatus.
Hyperspectral data discrimination methods
NASA Astrophysics Data System (ADS)
Casasent, David P.; Chen, Xuewen
2000-12-01
Hyperspectral data provides spectral response information that provides detailed chemical, moisture, and other description of constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potentials to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or cost smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
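One classical rule for choosing a randomization rate that minimizes the variance of the difference in sample means is Neyman allocation, which assigns patients in proportion to the outcome standard deviations of the two arms. The sketch below shows only that generic rule as an illustration; it is not the specific optimal-randomization algorithm derived in the paper.

```python
def neyman_allocation(sd_arm1: float, sd_arm2: float) -> float:
    """Probability of assigning the next patient to arm 1 such that the
    variance of the difference in sample means is (approximately) minimized."""
    return sd_arm1 / (sd_arm1 + sd_arm2)

# Example: arm 1 is twice as variable as arm 2, so it receives about 2/3 of patients.
print(neyman_allocation(2.0, 1.0))  # 0.666...
```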
NASA Technical Reports Server (NTRS)
Ormsbee, A. I.; Bragg, M. B.; Maughmer, M. D.
1981-01-01
A set of relationships used to scale small-sized dispersion studies to full-size results is experimentally verified and, with some qualifications, basic deposition patterns are presented. In the process of validating these scaling laws, the basic experimental techniques used in conducting such studies, both with and without an operational propeller, were developed. The procedures that evolved are outlined in some detail. The envelope of test conditions that can be accommodated in the Langley Vortex Research Facility, which was developed theoretically, is verified using a series of vortex trajectory experiments that help to define the limitations due to wall interference effects for models of different sizes.
Photoacoustic absorption spectroscopy of single optically trapped aerosol droplets
NASA Astrophysics Data System (ADS)
Covert, Paul A.; Cremer, Johannes W.; Signorell, Ruth
2017-08-01
Photoacoustics have been widely used for the study of aerosol optical properties. To date, these studies have been performed on particle ensembles, with minimal ability to control for particle size. Here, we present our single-particle photoacoustic spectrometer. The sensitivity and stability of the instrument are discussed, along with results from two experiments that illustrate the unique capabilities of this instrument. In the first experiment, we present a measurement of the particle-size dependence of the photoacoustic response. Our results confirm previous models of aerosol photoacoustics that had yet to be experimentally tested. The second set of results reveals a size dependence of photochemical processes within aerosols that results from the nanofocusing of light within individual droplets.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Reynolds, Joseph P.; Vos, Wouter K.; Hogervorst, Maarten A.; Fanning, Jonathan D.
2011-05-01
The TTP (Targeting Task Performance) metric, developed at NVESD, is the current standard US Army model to predict EO/IR Target Acquisition performance. This model, however, does not have a corresponding lab or field test to empirically assess the performance of a camera system. The TOD (Triangle Orientation Discrimination) method, developed at TNO in The Netherlands, provides such a measurement. In this study, we make a direct comparison between TOD performance for a range of sensors and the extensive historical US observer performance database built to develop and calibrate the TTP metric. The US perception data were collected from military personnel performing an identification task on a standard 12-target, 12-aspect tactical vehicle image set that was processed through simulated sensors for which the most fundamental sensor parameters such as blur, sampling, spatial and temporal noise were varied. In the present study, we measured TOD sensor performance using exactly the same sensors processing a set of TOD triangle test patterns. The study shows that good overall agreement is obtained when the ratio between target characteristic size and TOD test pattern size at threshold equals 6.3. Note that this number is purely based on empirical data without any intermediate modeling. The calibration of the TOD to the TTP is highly beneficial to the sensor modeling and testing community for a variety of reasons. These include: i) a connection between requirement specification and acceptance testing, and ii) a very efficient method to quickly validate or extend the TTP range prediction model to new systems and tasks.
TEGS-CN: A Statistical Method for Pathway Analysis of Genome-wide Copy Number Profile.
Huang, Yen-Tsung; Hsu, Thomas; Christiani, David C
2014-01-01
The effects of copy number alterations make up a significant part of the tumor genome profile, but pathway analyses of these alterations are still not well established. We proposed a novel method to analyze multiple copy numbers of genes within a pathway, termed Test for the Effect of a Gene Set with Copy Number data (TEGS-CN). TEGS-CN was adapted from TEGS, a method that we previously developed for gene expression data using a variance component score test. With additional development, we extend the method to analyze DNA copy number data, accounting for different gene sizes and thus various numbers of copy number probes per gene. The test statistic follows a mixture of χ² distributions that can be obtained using permutation with a scaled χ² approximation. We conducted simulation studies to evaluate the size and the power of TEGS-CN and to compare its performance with TEGS. We analyzed genome-wide copy number data from 264 patients with non-small-cell lung cancer. With the Molecular Signatures Database (MSigDB) pathway database, the genome-wide copy number data can be classified into 1814 biological pathways or gene sets. We investigated associations of the copy number profile of the 1814 gene sets with pack-years of cigarette smoking. Our analysis revealed five pathways with significant P values after Bonferroni adjustment (<2.8 × 10^-5), including the PTEN pathway (7.8 × 10^-7), the gene set up-regulated under heat shock (3.6 × 10^-6), the gene sets involved in the immune profile for rejection of kidney transplantation (9.2 × 10^-6) and for transcriptional control of leukocytes (2.2 × 10^-5), and the ganglioside biosynthesis pathway (2.7 × 10^-5). In conclusion, we present a new method for pathway analyses of copy number data, and causal mechanisms of the five pathways require further study.
Barnard, P.L.; Rubin, D.M.; Harney, J.; Mustain, N.
2007-01-01
This extensive field test of an autocorrelation technique for determining grain size from digital images was conducted using a digital bed-sediment camera, or 'beachball' camera. Using 205 sediment samples and >1200 images from a variety of beaches on the west coast of the US, grain size ranging from sand to granules was measured from field samples using both the autocorrelation technique developed by Rubin [Rubin, D.M., 2004. A simple autocorrelation algorithm for determining grain size from digital images of sediment. Journal of Sedimentary Research, 74(1): 160-165.] and traditional methods (i.e. settling tube analysis, sieving, and point counts). To test the accuracy of the digital-image grain size algorithm, we compared results with manual point counts of an extensive image data set in the Santa Barbara littoral cell. Grain sizes calculated using the autocorrelation algorithm were highly correlated with the point counts of the same images (r² = 0.93; n = 79) and had an error of only 1%. Comparisons of calculated grain sizes and grain sizes measured from grab samples demonstrated that the autocorrelation technique works well on high-energy dissipative beaches with well-sorted sediment such as in the Pacific Northwest (r² ≈ 0.92; n = 115). On less dissipative, more poorly sorted beaches such as Ocean Beach in San Francisco, results were not as good (r² ≈ 0.70; n = 67; within 3% accuracy). Because the algorithm works well compared with point counts of the same image, the poorer correlation with grab samples must be a result of actual spatial and vertical variability of sediment in the field; closer agreement between grain size in the images and grain size of grab samples can be achieved by increasing the sampling volume of the images (taking more images, distributed over a volume comparable to that of a grab sample). In all field tests the autocorrelation method was able to predict the mean and median grain size with ≈96% accuracy, which is more than adequate for the majority of sedimentological applications, especially considering that the autocorrelation technique is estimated to be at least 100 times faster than traditional methods.
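Rubin's autocorrelation technique rests on the observation that images of coarser sediment remain correlated over larger pixel offsets, so the decay of the autocorrelation curve can be calibrated against samples of known grain size. A heavily simplified sketch of the correlation step only (the image is a random placeholder and the calibration against sieved samples is omitted):

```python
import numpy as np

def horizontal_autocorrelation(image: np.ndarray, max_offset: int) -> np.ndarray:
    """Mean normalized autocorrelation of a grayscale image at horizontal
    offsets 1..max_offset; coarser grains decorrelate more slowly."""
    img = (image - image.mean()) / image.std()
    return np.array([np.mean(img[:, :-k] * img[:, k:]) for k in range(1, max_offset + 1)])

# Placeholder image standing in for a 'beachball' camera photograph.
rng = np.random.default_rng(1)
photo = rng.normal(size=(256, 256))
print(horizontal_autocorrelation(photo, 10))
```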
The impact of image-size manipulation and sugar content on children's cereal consumption.
Neyens, E; Aerts, G; Smits, T
2015-12-01
Previous studies have demonstrated that portion sizes and food energy-density influence children's eating behavior. However, the potential effects of front-of-pack image-sizes of serving suggestions and sugar content have not been tested. Using a mixed experimental design among young children, this study examines the effects of image-size manipulation and sugar content on cereal and milk consumption. Children poured and consumed significantly more cereal and drank significantly more milk when exposed to a larger sized image of serving suggestion as compared to a smaller image-size. Sugar content showed no main effects. Nevertheless, cereal consumption only differed significantly between small and large image-sizes when sugar content was low. An advantage of this study was the mundane setting in which the data were collected: a school's dining room instead of an artificial lab. Future studies should include a control condition, with children eating by themselves to reflect an even more natural context. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Perneczky, L.; Rauwolf, M.; Ingerle, D.; Eichert, D.; Brigidi, F.; Jark, W.; Bjeoumikhova, S.; Pepponi, G.; Wobrauschek, P.; Streli, C.; Turyanskaya, A.
2018-07-01
The confocal μXRF spectrometer of Atominstitut (ATI) was transported and set up at the X-ray Fluorescence beamline at Elettra - Sincrotrone Trieste. It was successfully adjusted to the incoming beam (9.2 keV). Test measurements on a free-standing Cu wire were performed to determine the size of the focused micro-beam (non-confocal mode, 56 × 35 μm²) and the size of the confocal volume (confocal mode, 41 × 24 × 34 μm³) for the Cu-Kα emission. In order to test the setup's capabilities, two areas on different human bone samples were measured in confocal scanning mode. For one of the samples the comparison with a previous μXRF measurement, obtained with a low-power X-ray tube in the lab, is presented.
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies have minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve a very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set the stage for a new paradigm of statistical tests.
Testlet-Based Multidimensional Adaptive Testing
Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen
2016-01-01
Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT is compared to non-adaptive testing for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) in a simulation study considering three ability dimensions with a simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of MAT with the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range. PMID:27917132
Comparison of disease prevalence in two populations in the presence of misclassification.
Tang, Man-Lai; Qiu, Shi-Fang; Poon, Wai-Yin
2012-11-01
Comparing disease prevalence in two groups is an important topic in medical research, and prevalence rates are obtained by classifying subjects according to whether they have the disease. Both high-cost infallible gold-standard classifiers and low-cost fallible classifiers can be used to classify subjects. However, statistical analysis that is based on data sets with misclassifications leads to biased results. As a compromise between the two classification approaches, partially validated sets are often used in which all individuals are classified by fallible classifiers, and some of the individuals are validated by the accurate gold-standard classifiers. In this article, we develop several reliable test procedures and approximate sample size formulas for disease prevalence studies based on the difference between two disease prevalence rates with two independent partially validated series. Empirical studies show that (i) the Score test produces close-to-nominal level and is preferred in practice; and (ii) the sample size formula based on the Score test is also fairly accurate in terms of the empirical power and type I error rate, and is hence recommended. A real example from an aplastic anemia study is used to illustrate the proposed methodologies. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sabri, M; Melara, R D; Algom, D
2001-06-01
In 6 experiments probing selective attention through Stroop classification, 4 factors of context were manipulated: (a) psychophysical context, the distinctiveness of values along the color and word dimensions; (b) set size context, the number of stimulus values tested; (c) production context, the mode used to respond; and (d) covariate context, the correlation between the dimensions. The psychophysical and production contexts mainly caused an asymmetry in selective attention failure between colors and words, whereas the set size and covariate contexts contributed primarily to the average or global magnitudes of attentional disruption across dimensions. The results suggest that (a) Stroop dimensions are perceptually separable, (b) J.R. Stroop's (1935) classic findings arose from his particular combination of contexts, and (c) stimulus uncertainty and dimensional imbalance are the primary sources of task and congruity effects in the Stroop paradigm.
Comparison between Pludix and impact/optical disdrometers during rainfall measurement campaigns
NASA Astrophysics Data System (ADS)
Caracciolo, Clelia; Prodi, Franco; Uijlenhoet, Remko
2006-11-01
The performances of two pairs of disdrometers based on different measuring principles are compared: a classical Joss-Waldvogel disdrometer and a recently developed device called the Pludix, tested in Ferrara, Italy; and the Pludix and the two-dimensional video disdrometer (2DVD), tested in Cabauw, The Netherlands. First, the measuring principles of the different instruments are presented and compared. Secondly, the performances of the two pairs of disdrometers are analysed by comparing their rain amounts with nearby tipping bucket rain gauges and the inferred drop size distributions. The most important rainfall integral parameters (e.g. rain rate and radar reflectivity) and drop size distribution parameters are also analysed and compared. The data set for Ferrara comprises 13 rainfall events, with a total of 20 mm of rainfall and a maximum rain rate of 4 mm h⁻¹. The data set for Cabauw consists of 9 events, with 25-50 mm of rainfall and a maximum rain rate of 20-40 mm h⁻¹. The Pludix tends to underestimate slightly the bulk rainfall variables in less intense events, whereas it tends to overestimate with respect to the other instruments in heavier events. The correspondence of the inferred drop size distributions with those measured by the other disdrometers is reasonable, particularly with the Joss-Waldvogel disdrometer. Considering that the Pludix is still in a calibration and testing phase, the reported results are encouraging. A new signal inversion algorithm, which will allow the detection of rain drops throughout the entire diameter interval between 0.3 and 7.0 mm, is under development.
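The bulk quantities being compared are moments of the drop size distribution N(D); leaving out unit-conversion constants, rain rate and radar reflectivity are conventionally written as

```latex
R \;\propto\; \int_0^{\infty} D^{3}\, v(D)\, N(D)\, \mathrm{d}D,
\qquad
Z \;=\; \int_0^{\infty} D^{6}\, N(D)\, \mathrm{d}D
```

where D is the drop diameter and v(D) the drop fall speed.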
Evaluating DFT for Transition Metals and Binaries: Developing the V/DM-17 Test Set
NASA Astrophysics Data System (ADS)
Decolvenaere, Elizabeth; Mattsson, Ann
We have developed the V/DM-17 test set to evaluate the experimental accuracy of DFT calculations of transition metals. When simulation and experiment disagree, the disconnect in length scales and temperatures makes determining "who is right" difficult. However, methods to evaluate the experimental accuracy of functionals in the context of solid-state materials science, especially for transition metals, are lacking. As DFT undergoes a shift from a descriptive to a predictive tool, these issues of verification are becoming increasingly important. With undertakings like the Materials Project leading the way in high-throughput predictions and discoveries, the development of a one-size-fits-most approach to verification is critical. Our test set evaluates 26 transition metal elements and 80 transition metal alloys across three physical observables: lattice constants, elastic coefficients, and formation energies of alloys. Whether or not the formation energy can be reproduced measures whether the relevant physics is captured in a calculation. This is an especially important question in transition metals, where active d-electrons can thwart commonly used techniques. In testing the V/DM-17 test set, we offer new views into the performance of existing functionals. Sandia National Labs is a multi-mission laboratory managed and operated by Sandia Corp., a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
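The alloy formation energies in the test set follow the usual convention of referencing the alloy total energy to the composition-weighted energies of the pure elements:

```latex
E_{\mathrm{form}} \;=\; E_{\mathrm{alloy}} \;-\; \sum_{i} x_i\, E_{i}
```

with x_i the atomic fractions and E_i the per-atom energies of the elemental ground states.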
Ranked set sampling: cost and optimal set size.
Nahhas, Ramzi W; Wolfe, Douglas A; Chen, Haiying
2002-12-01
McIntyre (1952, Australian Journal of Agricultural Research 3, 385-390) introduced ranked set sampling (RSS) as a method for improving estimation of a population mean in settings where sampling and ranking of units from the population are inexpensive when compared with actual measurement of the units. Two of the major factors in the usefulness of RSS are the set size and the relative costs of the various operations of sampling, ranking, and measurement. In this article, we consider ranking error models and cost models that enable us to assess the effect of different cost structures on the optimal set size for RSS. For reasonable cost structures, we find that the optimal RSS set sizes are generally larger than had been anticipated previously. These results will provide a useful tool for determining whether RSS is likely to lead to an improvement over simple random sampling in a given setting and, if so, what RSS set size is best to use in this case.
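For readers unfamiliar with the procedure, a balanced RSS cycle with set size k draws k sets of k units, ranks each set cheaply (e.g., by visual judgment), and actually measures only the i-th ranked unit of the i-th set. A minimal sketch of one such cycle in Python, assuming a toy population and error-free ranking:

```python
import random

def rss_cycle(population, k, rng=random):
    """One balanced ranked-set-sampling cycle of set size k: returns the k
    measured values (the i-th order statistic of the i-th judgment set)."""
    measured = []
    for i in range(k):
        judgment_set = sorted(rng.sample(population, k))  # cheap ranking step
        measured.append(judgment_set[i])                  # costly measurement of one unit
    return measured

population = [random.gauss(50, 10) for _ in range(1000)]
sample = rss_cycle(population, k=4)
print(sample, sum(sample) / len(sample))  # RSS estimate of the population mean
```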
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
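The LVQ-based reduction replaces the full set of training vectors with a much smaller set of labeled prototypes. A minimal sketch of the classic LVQ1 update rule, with random placeholders standing in for the SSME sensor vectors (a generic illustration, not the authors' exact compression scheme):

```python
import numpy as np

def lvq1_fit(X, y, prototypes, proto_labels, lr=0.05, epochs=10):
    """Classic LVQ1: pull the nearest prototype toward a sample of the same
    class and push it away otherwise; returns the adapted prototypes."""
    P = prototypes.copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = np.argmin(np.linalg.norm(P - x, axis=1))  # nearest prototype
            step = lr * (x - P[j])
            P[j] += step if proto_labels[j] == label else -step
    return P

# Placeholder data standing in for SSME sensor training vectors.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)
prototypes = rng.normal(size=(10, 8))      # compressed stand-in for the training set
proto_labels = np.arange(10) % 2
compressed = lvq1_fit(X, y, prototypes, proto_labels)
```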
Clutch size declines with elevation in tropical birds
Boyce, A.J.; Freeman, Benjamin G.; Mitchell, Adam E.; Martin, Thomas E.
2015-01-01
Clutch size commonly decreases with increasing elevation among temperate-zone and subtropical songbird species. Tropical songbirds typically lay small clutches, thus the ability to evolve even smaller clutch sizes at higher elevations is unclear and untested. We conducted a comparative phylogenetic analysis using data gathered from the literature to test whether clutch size varied with elevation among forest passerines from three tropical biogeographic regions—the Venezuelan Andes and adjacent lowlands, Malaysian Borneo, and New Guinea. We found a significant negative effect of elevation on variation in clutch size among species. We found the same pattern using field data sampled across elevational gradients in Venezuela and Malaysian Borneo. Field data were not available for New Guinea. Both sets of results demonstrate that tropical montane species across disparate biogeographic realms lay smaller clutches than closely related low-elevation species. The environmental sources of selection underlying this pattern remain uncertain and merit further investigation.
Reliably detectable flaw size for NDE methods that use calibration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine the reliably detectable flaw size.
Reliably Detectable Flaw Size for NDE Methods that Use Calibration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known-size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from artificial flaws used in the calibration process to determine the reliably detectable flaw size.
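A common way to turn hit/miss calibration data into a reliably detectable flaw size is to fit a logistic POD curve against (log) flaw size and read off the size at which POD reaches the required level (e.g., 90%). The sketch below illustrates only that generic step; it is not the full MIL-HDBK-1823/mh1823 procedure, which also provides confidence bounds, and the flaw sizes and outcomes are made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical hit/miss calibration data: flaw sizes (mm) and detection outcomes.
sizes = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
hits = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

model = LogisticRegression().fit(np.log(sizes).reshape(-1, 1), hits)

# Smallest size on a grid where the fitted POD curve reaches 90%.
grid = np.linspace(np.log(0.1), np.log(3.0), 500)
pod = model.predict_proba(grid.reshape(-1, 1))[:, 1]
idx = np.argmax(pod >= 0.9)
a90 = np.exp(grid[idx]) if pod[idx] >= 0.9 else None
print("a90 (mm):", a90)
```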
Development and psychometric evaluation of the breast size satisfaction scale.
Pahlevan Sharif, Saeed
2017-10-09
Purpose: The purpose of this paper is to develop and evaluate psychometrically an instrument named the Breast Size Satisfaction Scale (BSSS) to assess breast size satisfaction. Design/methodology/approach: The present scale was developed using a set of 16 computer-generated 3D images of breasts to overcome some of the limitations of existing instruments. The images were presented to participants and they were asked to select the figure that most accurately depicted their actual breast size and the figure that most closely represented their ideal breast size. Breast size satisfaction was computed by subtracting the absolute value of the difference between ideal and actual perceived size from 16, such that higher values indicate greater breast size satisfaction. Findings: Study 1 (n=65 female undergraduate students) showed good test-retest reliability and Study 2 (n=1,000 Iranian women, aged 18 years and above) provided support for convergent validity using a nomological network approach. Originality/value: The BSSS demonstrated good psychometric properties and thus can be used in future studies to assess breast size satisfaction among women.
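The scoring rule described above translates directly into code; a one-function sketch:

```python
def bsss_score(actual_figure: int, ideal_figure: int) -> int:
    """Breast size satisfaction as defined in the abstract: 16 minus the
    absolute discrepancy between the ideal and perceived actual figures (1-16)."""
    return 16 - abs(ideal_figure - actual_figure)

print(bsss_score(actual_figure=9, ideal_figure=6))  # 13: moderate satisfaction
```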
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de
2014-06-14
Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations which allow us to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium-sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.
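The resolution-of-the-identity step mentioned here is the standard factorization of the four-index electron-repulsion integrals through an auxiliary basis {P}, which is what allows the integral transformations to be cast as (sparse) matrix multiplications (a textbook form, not a detail specific to this implementation):

```latex
(ia|jb) \;\approx\; \sum_{P,Q} (ia|P)\,\bigl[\mathbf{V}^{-1}\bigr]_{PQ}\,(Q|jb),
\qquad V_{PQ} = (P|Q).
```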
Adjemian, Jennifer C Z; Girvetz, Evan H; Beckett, Laurel; Foley, Janet E
2006-01-01
More than 20 species of fleas in California are implicated as potential vectors of Yersinia pestis. Extremely limited spatial data exist for plague vectors, a key component to understanding where the greatest risks for human, domestic animal, and wildlife health exist. This study increases the spatial data available for 13 potential plague vectors by using the ecological niche modeling system Genetic Algorithm for Rule-Set Production (GARP) to predict their respective distributions. Because the available sample sizes in our data set varied greatly from one species to another, we also performed an analysis of the robustness of GARP by using the data available for the flea Oropsylla montana (Baker) to quantify the effects that sample size and the chosen explanatory variables have on the final species distribution map. GARP effectively modeled the distributions of 13 vector species. Furthermore, our analyses show that all of these modeled ranges are robust, with a sample size of six fleas or greater not significantly impacting the percentage of the in-state area where the flea was predicted to be found, or the testing accuracy of the model. The results of this study will help guide the sampling efforts of future studies focusing on plague vectors.
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
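A heavily simplified sketch of how classifier evidence can be turned into a pixel-wise probability map: the patch features, the linear SVM, and the logistic squashing below are generic stand-ins, not the exact TAM extension described in the abstract.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder patch features and target/non-target labels for training.
rng = np.random.default_rng(3)
X_train = rng.normal(size=(500, 32))
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)
clf = LinearSVC(max_iter=10000).fit(X_train, y_train)

# Features extracted on a 40x40 grid of locations in a search image (placeholder).
grid_features = rng.normal(size=(40 * 40, 32))
distances = clf.decision_function(grid_features)   # signed distances from the boundary
prob_map = 1.0 / (1.0 + np.exp(-distances))         # squash to (0, 1) evidence values
prob_map = prob_map.reshape(40, 40)                 # map of target-category evidence
```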
Measurement of pH in whole blood by near-infrared spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, M. Kathleen; Maynard, John D.; Robinson, M. Ries
1999-03-01
Whole blood pH has been determined in vitro by using near-infrared spectroscopy over the wavelength range of 1500 to 1785 nm with multivariate calibration modeling of the spectral data obtained from two different sample sets. In the first sample set, the pH of whole blood was varied without controlling cell size and oxygen saturation (O2 Sat) variation. The result was that the red blood cell (RBC) size and O2 Sat correlated with pH. Although the partial least-squares (PLS) multivariate calibration of these data produced a good pH prediction (cross-validation standard error of prediction (CVSEP) = 0.046, R² = 0.982), the spectral data were dominated by scattering changes due to changing RBC size that correlated with the pH changes. A second experiment was carried out where the RBC size and O2 Sat were varied orthogonally to the pH variation. A PLS calibration of the spectral data obtained from these samples produced a pH prediction with an R² of 0.954 and a cross-validated standard error of prediction of 0.064 pH units. The robustness of the PLS calibration models was tested by predicting the data obtained from the other sets. The predicted pH values obtained from both data sets yielded R² values greater than 0.9 once the data were corrected for differences in hemoglobin concentration. For example, with the use of the calibration produced from the second sample set, the pH values from the first sample set were predicted with an R² of 0.92 after the predictions were corrected for bias and slope. It is shown that spectral information specific to pH-induced chemical changes in the hemoglobin molecule is contained within the PLS loading vectors developed for both the first and second data sets. It is this pH-specific information that allows the spectra dominated by pH-correlated scattering changes to provide robust pH predictive ability in the uncorrelated data, and vice versa. © 1999 Society for Applied Spectroscopy
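A generic version of the PLS calibration step can be sketched with scikit-learn; the spectra, reference pH values, and number of latent variables below are placeholders, not the authors' data or model selection:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Placeholder NIR spectra (absorbances over 1500-1785 nm) and reference pH values.
rng = np.random.default_rng(4)
spectra = rng.normal(size=(60, 120))
ph = 7.2 + 0.2 * rng.normal(size=60)

pls = PLSRegression(n_components=8)
ph_cv = cross_val_predict(pls, spectra, ph, cv=10).ravel()   # cross-validated predictions

cvsep = np.sqrt(np.mean((ph_cv - ph) ** 2))                  # cross-validated SEP
r2 = np.corrcoef(ph_cv, ph)[0, 1] ** 2
print(f"CVSEP = {cvsep:.3f}, R^2 = {r2:.3f}")
```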
Herbst, M; Lehmhus, H; Oldenburg, B; Orlowski, C; Ohgke, H
1983-04-01
A simple experimental setup for the production and investigation of bacterially contaminated solid-state aerosols with constant concentration is described. The setup consists mainly of a fluidized-bed particle generator within a modified chamber for formaldehyde disinfection. The specific conditions for producing a defined concentration of particles and microorganisms have to be determined empirically. In a first application, the aerosol sizing of an Andersen sampler is investigated. The findings of Andersen (1) are confirmed with respect to our experimental conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji Hyun, Yoon; Byun, Thak Sang; Strizak, Joe P
2011-01-01
The mechanical properties of NBG-18 nuclear grade graphite have been characterized using small-specimen test techniques and statistical treatment of the test results. New fracture strength and toughness test techniques were developed to use subsize cylindrical specimens with glued heads and to reuse their broken halves. Three sets of subsize cylindrical specimens with different diameters of 4 mm, 8 mm, and 12 mm were tested to obtain tensile fracture strength. The longer piece of the broken halves was cracked from the side surfaces and tested under three-point bend loading to obtain fracture toughness. Both the strength and fracture toughness data were analyzed using Weibull distribution models focusing on the size effect. The mean fracture strength decreased from 22.9 MPa to 21.5 MPa as the diameter increased from 4 mm to 12 mm, and the mean strength of the 15.9 mm diameter standard specimen, 20.9 MPa, was on the extended trend line. These fracture strength data indicate that in the given diameter range the size effect is not significant and is much smaller than that predicted by the Weibull statistics-based model. Further, no noticeable size effect existed in the fracture toughness data, whose mean values were in a narrow range of 1.21-1.26 MPa. The Weibull moduli measured for the fracture strength and fracture toughness datasets were around 10. It is therefore believed that the small or negligible size effect makes it possible to use the subsize specimens, and that the new fracture toughness test method, which reuses the broken specimens, helps minimize irradiation space and radioactive waste.
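For context, the Weibull weakest-link model against which the measured size effect is compared predicts that mean strength scales with the stressed volume as a power law governed by the Weibull modulus m:

```latex
\frac{\sigma_1}{\sigma_2} \;=\; \left(\frac{V_2}{V_1}\right)^{1/m}
```

With m near 10 and (assuming the specimens scale roughly geometrically) volume growing rapidly with diameter, this rule would predict a considerably larger strength decrease across the tested sizes than the few percent observed, which is the sense in which the measured size effect is called weak.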
Using Spatial-Temporal Primitives to Improve Geographic Skills for Preservice Teachers
ERIC Educational Resources Information Center
Kaufman, Martin M.
2004-01-01
An exercise to help improve the geographic skills of preservice teachers was developed and tested during a six-year period on over 500 students. The exercise required these students to map two arrangements of roads and facilities within a small neighborhood. A set of spatial-temporal primitives (place, size, shape, distance, direction,…
ERIC Educational Resources Information Center
Millar, Susanna
1978-01-01
This research tested the hypothesis that grouping has adverse effects on the recall of tactual shapes but facilitates the recall of tactual letters on the assumption that this depends on different processes. A further question was the relation of grouping to letter recall span (set-size).
Towards General Evaluation of Intelligent Systems: Lessons Learned from Reproducing AIQ Test Results
NASA Astrophysics Data System (ADS)
Vadinský, Ondřej
2018-03-01
This paper attempts to replicate the results of evaluating several artificial agents using the Algorithmic Intelligence Quotient test originally reported by Legg and Veness. Three experiments were conducted: one using default settings, one in which the action space was varied, and one in which the observation space was varied. While the performance of freq, Q0, Qλ, and HLQλ corresponded well with the original results, the resulting values differed when using MC-AIXI. Varying the observation space seems to have no qualitative impact on the results as reported, while (contrary to the original results) varying the action space seems to have some impact. An analysis of the impact of modifying parameters of MC-AIXI on its performance in the default settings was carried out with the help of data mining techniques used to identify high-performing configurations. Overall, the Algorithmic Intelligence Quotient test seems to be reliable; however, as a general artificial intelligence evaluation method it has several limits. The test is dependent on the chosen reference machine and is also sensitive to changes to its settings. It brings out some differences among agents; however, since these are limited in size, the test setting may not yet be sufficiently complex. A demanding parameter sweep is needed to thoroughly evaluate configurable agents, which, together with the test format, further highlights the computational requirements of an agent. These and other issues are discussed in the paper along with proposals suggesting how to alleviate them. An implementation of some of the proposals is also demonstrated.
Fabrication of angleply carbon-aluminum composites
NASA Technical Reports Server (NTRS)
Novak, R. C.
1974-01-01
A study was conducted to fabricate and test angleply composites consisting of NASA-Hough carbon-base monofilament in a matrix of 2024 aluminum. The effect of fabrication variables on the tensile properties was determined, and an optimum set of conditions was established. The size of the composite panels was successfully scaled up, and the material was tested to measure tensile behavior as a function of temperature, stress-rupture and creep characteristics at two elevated temperatures, bending fatigue behavior, resistance to thermal cycling, and Izod impact response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braatz, Brett G.; Cumblidge, Stephen E.; Doctor, Steven R.
2012-12-31
The U.S. Nuclear Regulatory Commission has established the Program to Assess the Reliability of Emerging Nondestructive Techniques (PARENT) as a follow-on to the international cooperative Program for the Inspection of Nickel Alloy Components (PINC). The goal of PINC was to evaluate the capabilities of various nondestructive evaluation (NDE) techniques to detect and characterize surface-breaking primary water stress corrosion cracks in dissimilar-metal welds (DMW) in bottom-mounted instrumentation (BMI) penetrations and small-bore (≈400-mm diameter) piping components. A series of international blind round-robin tests were conducted by commercial and university inspection teams. Results from these tests showed that a combination of conventional and phased-array ultrasound techniques provided the highest performance for flaw detection and depth sizing in dissimilar metal piping welds. The effective detection of flaws in BMIs by eddy current and ultrasound shows that it may be possible to reliably inspect these components in the field. The goal of PARENT is to continue the work begun in PINC and apply the lessons learned to a series of open and blind international round-robin tests that will be conducted on a new set of piping components including large-bore (≈900-mm diameter) DMWs, small-bore DMWs, and BMIs. Open round-robin testing will engage universities and industry worldwide to investigate the reliability of emerging NDE techniques to detect and accurately size flaws having a wide range of lengths, depths, orientations, and locations. Blind round-robin testing will invite testing organizations worldwide, whose inspectors and procedures are certified by the standards for the nuclear industry in their respective countries, to investigate the ability of established NDE techniques to detect and size flaws whose characteristics range from easy to very difficult to detect and size. This paper presents highlights of PINC and reports on the plans and progress for PARENT round-robin tests.
Factors Affecting Planting Depth and Standing of Rice Seedling in Parachute Rice Transplanting
NASA Astrophysics Data System (ADS)
Astika, I. W.; Subrata, I. D. M.; Pramuhadi, G.
2018-05-01
Parachute rice transplanting is a simple and practical rice transplanting method. It can be done manually or mechanically, with various possible designs of machines or tools. This research aimed at quantitatively formulating the factors related to the planting depth and standing of rice seedlings. Parachute rice seedlings were grown at several parachute soil bulb sizes. The trays were specially designed with a 3D printer, with bulb sizes of 7, 8, 9, and 10 mm on the square sides and 15 mm in depth. At seedling ages of 8-12 days after sowing, the seedling bulbs were dropped into puddled soil. Soil hardness was set at 3 levels, measured as a hardness index using the golf ball test. The angle of dropping was set at 3 levels: 0°, 30° and 45° from the vertical axis. The height of dropping was set at 100 cm, 75 cm, and 50 cm. The relationship between bulb size, height of dropping, soil hardness, dropping angle and planting depth was formulated with an ANN. Most of the input variables did not significantly affect the planting depth, except that hard soil differed significantly from mild soil and soft soil. The dropping also resulted in various positions of the planted seedlings: vertical standing, sloped, and falling. However, at any position of the planted seedlings, the seedlings would recover themselves into a normal vertical position. With this result, the design of planting machinery, as well as the manual planting operation, can be made easier.
NASA Astrophysics Data System (ADS)
Aishah Syed Ali, Sharifah
2017-09-01
This paper considers the economic lot-sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, and straightforward formulations lead to computationally inefficient, low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.
Writers Identification Based on Multiple Windows Features Mining
NASA Astrophysics Data System (ADS)
Fadhil, Murad Saadi; Alkawaz, Mohammed Hazim; Rehman, Amjad; Saba, Tanzila
2016-03-01
Nowadays, writer identification, i.e., identifying the original writer of a script with high accuracy, is in high demand. One of the main challenges in writer identification is how to extract discriminative features from different authors' scripts for precise classification. In this paper, an adaptive division method for offline Latin script has been implemented using several window sizes. From fragments of binarized text, a set of features is extracted and classified into clusters in the form of groups or classes. Finally, the proposed approach has been tested with various parameters in terms of text division and window sizes. It is observed that selection of the right window size yields a well-positioned window division. The proposed approach is tested on the IAM standard dataset (IAM, Institut für Informatik und angewandte Mathematik, University of Bern, Bern, Switzerland), which is a constraint-free script database. Finally, the achieved results are compared with several techniques reported in the literature.
Pisano, E D; Zong, S; Hemminger, B M; DeLuca, M; Johnston, R E; Muller, K; Braeuning, M P; Pizer, S M
1998-11-01
The purpose of this project was to determine whether Contrast Limited Adaptive Histogram Equalization (CLAHE) improves detection of simulated spiculations in dense mammograms. Lines simulating the appearance of spiculations, a common marker of malignancy when visualized with masses, were embedded in dense mammograms digitized at 50 micron pixels, 12 bits deep. Film images with no CLAHE applied were compared to film images with nine different combinations of clip levels and region sizes applied. A simulated spiculation was embedded in a background of dense breast tissue, with the orientation of the spiculation varied. The key variables involved in each trial included the orientation of the spiculation, contrast level of the spiculation and the CLAHE settings applied to the image. Combining the 10 CLAHE conditions, 4 contrast levels and 4 orientations gave 160 combinations. The trials were constructed by pairing 160 combinations of key variables with 40 backgrounds. Twenty student observers were asked to detect the orientation of the spiculation in the image. There was a statistically significant improvement in detection performance for spiculations with CLAHE over unenhanced images when the region size was set at 32 with a clip level of 2, and when the region size was set at 32 with a clip level of 4. The selected CLAHE settings should be tested in the clinic with digital mammograms to determine whether detection of spiculations associated with masses detected at mammography can be improved.
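The sketch below shows one way to apply CLAHE to a mammogram-like image with OpenCV; it is an assumed illustration, and OpenCV's clipLimit and tileGridSize do not map one-to-one onto the paper's clip level and region size settings (tileGridSize counts tiles, so a 32-pixel region corresponds to roughly height//32 by width//32 tiles).

```python
# Minimal sketch (assumed): CLAHE on a 16-bit mammogram-like image with OpenCV.
# Note: OpenCV parameters are not identical to the paper's "clip level" and
# "region size"; the tile grid below approximates 32-pixel contextual regions.
import cv2
import numpy as np

img = (np.random.rand(512, 512) * 4095).astype(np.uint16)  # stand-in for a 12-bit mammogram
tiles = (img.shape[0] // 32, img.shape[1] // 32)            # ~32-pixel regions
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=tiles)
enhanced = clahe.apply(img)
print(enhanced.dtype, enhanced.shape)
```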
Bedenić, B; Boras, A
2001-01-01
The plasmid-mediated extended-spectrum beta-lactamases (ESBL) confer resistance to oxyimino-cephalosporins, such as cefotaxime, ceftazidime, and ceftriaxone, and to monobactams such as aztreonam. It is a well-known fact that ESBL-producing bacteria exhibit a pronounced inoculum effect against broad-spectrum cephalosporins like ceftazidime, cefotaxime, ceftriaxone and cefoperazone. The aim of this investigation was to determine the effect of inoculum size on the sensitivity and specificity of the double-disk synergy test (DDST), which is the test most frequently used for detection of ESBLs, in comparison with two other methods (determination of ceftazidime MIC with and without clavulanate and the inhibitor-potentiated disk-diffusion test) which are seldom used in clinical laboratories. The experiments were performed on a set of K. pneumoniae strains with previously characterized beta-lactamases comprising: 10 SHV-5 beta-lactamase producing K. pneumoniae, 20 SHV-2 + 1 SHV-2a beta-lactamase producing K. pneumoniae, 7 SHV-12 beta-lactamase producing K. pneumoniae, 39 putative SHV ESBL producing K. pneumoniae and 26 K. pneumoniae isolates highly susceptible to ceftazidime according to the Kirby-Bauer disk-diffusion method and thus considered to be ESBL negative. According to the results of this investigation, an increase in inoculum size affected the sensitivity of the DDST more significantly than that of the other two methods. The sensitivity of the DDST was lower when a higher inoculum size of 10(8) CFU/ml was applied, in contrast to the other two methods (MIC determination and inhibitor-potentiated disk-diffusion test), which retained high sensitivity regardless of the density of the bacterial suspension. On the other hand, the DDST displayed higher specificity compared to the other two methods regardless of the inoculum size. This investigation found that the DDST is a reliable method but it is important to standardize the inoculum size.
Keeping an eye on the truth? Pupil size changes associated with recognition memory.
Heaver, Becky; Hutton, Sam B
2011-05-01
During recognition memory tests participants' pupils dilate more when they view old items compared to novel items. We sought to replicate this "pupil old/new effect" and to determine its relationship to participants' responses. We compared changes in pupil size during recognition when participants were given standard recognition memory instructions, instructions to feign amnesia, and instructions to report all items as new. Participants' pupils dilated more to old items compared to new items under all three instruction conditions. This finding suggests that the increase in pupil size that occurs when participants encounter previously studied items is not under conscious control. Given that pupil size can be reliably and simply measured, the pupil old/new effect may have potential in clinical settings as a means for determining whether patients are feigning memory loss.
Leverage Between the Buffering Effect and the Bystander Effect in Social Networking.
Chiu, Yu-Ping; Chang, Shu-Chen
2015-08-01
This study examined encouraged and inhibited social feedback behaviors based on the theories of the buffering effect and the bystander effect. A system program was used to collect personal data and social feedback from a Facebook data set to test the research model. The results revealed that the buffering effect induced a positive relationship between social network size and feedback gained from friends when people's social network size was under a certain cognitive constraint. For people with a social network size that exceeds this cognitive constraint, the bystander effect may occur, in which having more friends may inhibit social feedback. In this study, two social psychological theories were applied to explain social feedback behavior on Facebook, and it was determined that social network size and social feedback exhibited no consistent linear relationship.
Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J; Dague, Taylor D
2017-03-01
The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target's physical size, and the remaining distractors shared the target's numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Melone, Kate
2016-01-01
Skills Acquired: Tensile Testing: Prepare materials and setting up the tensile tests; Collect and interpret (messy) data. Outgassing Testing: Understand TML (Total Mass Loss) and CVCM (Collected Volatile Condensable Material); Collaboration with other NASA centers. Z2 (NASA's Prototype Space Suit Development) Support: Hands on building mockups of components; Analyze data; Work with others, understanding what both parties need in order to make a run successful. LCVG (Liquid Cooling and Ventilation Garment) Flush and Purge Console: Both formal design and design review process; How to determine which components to use - flow calculations, pressure ratings, size, etc.; Hazard Analysis; How to make design tradeoffs.
Some controversial multiple testing problems in regulatory applications.
Hung, H M James; Wang, Sue-Jane
2009-01-01
Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gatekeeping strategies often impose too many logical restrictions to make sense, particularly in dealing with the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinically relevant orderings or sets can be a viable alternative for resolving these illogical problems, but it requires more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism resulting from this requirement is a group-sequential design strategy that starts with a conservative sample size planning and then utilizes an alpha spending function to possibly reach the conclusion early.
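For readers unfamiliar with alpha spending, the sketch below evaluates an O'Brien-Fleming-type Lan-DeMets spending function; it is a generic illustration of the group-sequential idea mentioned above, not a formula taken from the paper.

```python
# Minimal sketch (assumed): cumulative alpha spent at information fraction t
# under an O'Brien-Fleming-type Lan-DeMets spending function.
from scipy.stats import norm

def of_alpha_spent(t, alpha=0.05):
    """Cumulative alpha spent at information fraction t (0 < t <= 1)."""
    z = norm.ppf(1 - alpha / 2)
    return 2 * (1 - norm.cdf(z / t ** 0.5))

for t in (0.25, 0.5, 0.75, 1.0):
    print(t, round(of_alpha_spent(t), 5))  # spends little alpha early, all of it by t = 1
```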
Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong
2013-12-01
Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single marker based, testing one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equation (GEE) based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls for type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single marker based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.
Gas electron multiplier (GEM) foil test, repair and effective gain calculation
NASA Astrophysics Data System (ADS)
Tahir, Muhammad; Zubair, Muhammad; Khan, Tufail A.; Khan, Ashfaq; Malook, Asad
2018-06-01
This research focuses on gas electron multiplier (GEM) foil testing, repair, and effective gain calculation for the GEM detector. The work defines procedures for testing GEM foils for short circuits and for detecting short circuits in the foil, and studies different ways to remove the short circuits from the foils. GEM foil testing procedures are set and defined in open air and with nitrogen gas. The leakage current of the foil is measured while applying different voltages with a specified step size. Quality Control (QC) tests for the different components of GEM detectors are defined before assembly, and the effective gain of the GEM detectors is calculated using 109Cd and 55Fe radioactive sources.
Choice Set Size and Decision-Making: The Case of Medicare Part D Prescription Drug Plans
Bundorf, M. Kate; Szrek, Helena
2013-01-01
Background: The impact of choice on consumer decision-making is controversial in U.S. health policy. Objective: Our objective was to determine how choice set size influences decision-making among Medicare beneficiaries choosing prescription drug plans. Methods: We randomly assigned members of an internet-enabled panel age 65 and over to sets of prescription drug plans of varying sizes (2, 5, 10, and 16) and asked them to choose a plan. Respondents answered questions about the plan they chose, the choice set, and the decision process. We used ordered probit models to estimate the effect of choice set size on the study outcomes. Results: Both the benefits of choice, measured by whether the chosen plan is close to the ideal plan, and the costs, measured by whether the respondent found decision-making difficult, increased with choice set size. Choice set size was not associated with the probability of enrolling in any plan. Conclusions: Medicare beneficiaries face a tension between not wanting to choose from too many options and feeling happier with an outcome when they have more alternatives. Interventions that reduce cognitive costs when choice sets are large may make this program more attractive to beneficiaries. PMID:20228281
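As a hedged illustration of the ordered probit models mentioned above, the sketch below fits such a model on simulated survey-style data using statsmodels' OrderedModel (available in recent statsmodels versions); variable names, data, and settings are assumptions, not the study's specification.

```python
# Minimal sketch (assumed): ordered probit of an ordinal outcome (e.g.,
# decision difficulty) on choice set size, with simulated data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 400
set_size = rng.choice([2, 5, 10, 16], size=n)
latent = 0.05 * set_size + rng.normal(size=n)
difficulty = pd.Series(pd.cut(latent, bins=[-np.inf, 0, 1, np.inf],
                              labels=["low", "medium", "high"]))  # ordered categorical

model = OrderedModel(difficulty, pd.DataFrame({"set_size": set_size}), distr="probit")
res = model.fit(method="bfgs", disp=False)
print(res.params)  # slope on set_size plus threshold parameters
```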
Beta Testing an Oral Health Edutainment Card Game Among 12-13-Year-Old Children in Bangalore, India.
Harikiran, Arkalgud Govindraju; Vadavi, Deepti; Shruti, Tulika
2017-12-01
Card games are easy, cost effective, culturally acceptable, as well as sustainable and require minimal infrastructure over other edutainment approaches in achieving health and oral health promotion goals. Therefore, we wanted to conceptualize, develop, and beta test an innovative oral health edutainment card game for preadolescent children in Bangalore, India. An innovative oral health card game, titled "32 warriors," was conceptualized and developed to incorporate age-appropriate, medically accurate oral health information. The card game aimed at empowering children to take appropriate care of their oral health. The card game was beta tested on 45 children, aged between 12 and 13 years. Using a pre-post design, a 32-item, closed-ended questionnaire assessed children's oral health knowledge, attitude, and feedback on the game. Change in mean scores for knowledge and attitude was assessed using the Wilcoxon signed-rank test at P < 0.05. Effect size was calculated. Feedback was categorized in terms of type of response and its frequency. Statistically significant improvement was observed in group mean overall score, mean knowledge, and attitude scores, respectively (pre 14.7 ± 2.91 and post 18.6 ± 4.35, P = 0.003; 11.8 ± 2.73, 14.76 ± 4.0, P = 0.000; 2.93 ± 1.09, 3.84 ± 1.02, P = 0.000), with mean effect size 0.5. Participants reported that they enjoyed the game and learned new things about oral health. The card game is appealing to children and improves their oral health knowledge and attitude as evidenced by beta test results. We need to further explore the demand, feasibility, and cost effectiveness of introducing this game in formal settings (school based)/informal settings (family and other social settings).
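The following sketch mirrors the pre/post analysis described above (Wilcoxon signed-rank test plus a paired effect size) on illustrative scores; the numbers and the paired Cohen's d convention are assumptions, not the study data or its exact effect-size formula.

```python
# Minimal sketch (assumed): pre/post comparison with the Wilcoxon signed-rank
# test and a paired-sample effect size, on made-up scores.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([12, 15, 11, 14, 16, 13, 12, 15, 14, 13])
post = np.array([15, 17, 14, 16, 18, 15, 14, 17, 16, 15])

stat, p = wilcoxon(post, pre)
diff = post - pre
effect_size = diff.mean() / diff.std(ddof=1)  # paired Cohen's d (one common convention)
print(f"W={stat}, p={p:.4f}, d={effect_size:.2f}")
```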
Corvids Outperform Pigeons and Primates in Learning a Basic Concept.
Wright, Anthony A; Magnotti, John F; Katz, Jeffrey S; Leonard, Kevin; Vernouillet, Alizée; Kelly, Debbie M
2017-04-01
Corvids (birds of the family Corvidae) display intelligent behavior previously ascribed only to primates, but such feats are not directly comparable across species. To make direct species comparisons, we used a same/different task in the laboratory to assess abstract-concept learning in black-billed magpies (Pica hudsonia). Concept learning was tested with novel pictures after training. Concept learning improved with training-set size, and test accuracy eventually matched training accuracy (full concept learning) with a 128-picture set; this magpie performance was equivalent to that of Clark's nutcrackers (a species of corvid) and monkeys (rhesus, capuchin) and better than that of pigeons. Even with an initial 8-item picture set, both corvid species showed partial concept learning, outperforming both monkeys and pigeons. Similar corvid performance refutes the hypothesis that nutcrackers' prolific cache-location memory accounts for their superior concept learning, because magpies rely less on caching. That corvids with "primitive" neural architectures evolved to equal primates in full concept learning and even to outperform them on the initial 8-item picture test is a testament to the shared (convergent) survival importance of abstract-concept learning.
Elnaghy, A M; Elsaka, S E
2017-08-01
To assess and compare the mechanical properties of TRUShape (TRS) with several nickel-titanium rotary instruments. Cyclic fatigue, torsional resistance, flexibility and surface microhardness of TRS (size 25, 0.06 taper), ProTaper Next X2 (PTN X2, size 25, 0.06 taper), ProTaper Gold (PTG F2; size 25, 0.08 taper) and ProTaper Universal (PTU F2; size 25, 0.08 taper) instruments were evaluated. The topographical structures of the fracture surfaces of instruments were assessed using a scanning electron microscope. The cyclic fatigue resistance, torsional resistance and microhardness data were analysed using one-way analysis of variance (ANOVA) and Tukey's post hoc tests. The fragment length and bending resistance data were analysed statistically with the Kruskal-Wallis H-test and Mann-Whitney U-tests. The statistical significance level was set at P < 0.05. PTN and PTG instruments revealed significantly higher resistance to cyclic fatigue than TRS and PTU instruments (P < 0.001). PTN instruments revealed significantly higher torsional resistance compared with the other instruments (P < 0.001). The PTG instrument had significantly higher flexibility than the other tested brands (P < 0.05). However, for microhardness, the PTU had significantly higher surface microhardness values compared with other tested brands (P < 0.05). TRS instruments had lower resistance to cyclic fatigue and lower flexibility compared with PTG and PTN instruments. TRS, PTG and PTU instruments had lower resistance to torsional stress than PTN instruments. TRS and PTG instruments had comparable surface microhardness. © 2016 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Lipid globule size in total nutrient admixtures prepared in three-chamber plastic bags.
Driscoll, David F; Thoma, Andrea; Franke, Rolf; Klütsch, Karsten; Nehne, Jörg; Bistrian, Bruce R
2009-04-01
The stability of injectable lipid emulsions in three-chamber plastic (3CP) bags, applying the globule-size limits established by United States Pharmacopeia ( USP ) chapter 729, was studied. A total of five premixed total nutrient admixture (TNA) products packaged in 3CP bags from two different lipid manufacturers containing either 20% soybean oil or a mixture of soybean oil and medium-chain-triglyceride oil as injectable lipid emulsions were tested. Two low-osmolarity 3CP bags and three high-osmolarity 3CP bags were studied. All products were tested with the addition of trace elements and multivitamins. All additive conditions (with and without electrolytes) were tested in triplicate at time 0 (immediately after mixing) and at 6, 24, 30, and 48 hours after mixing; the bags were stored at 24-26 degrees C. All additives were equally distributed in each bag for comparative testing, applying both globule sizing methods outlined in USP chapter 729. Of the bags tested, all bags from one manufacturer were coarse emulsions, showing signs of significant growth in the large-diameter tail when mixed as a TNA formulation and failing the limits set by method II of USP chapter 729 from the outset and throughout the study, while the bags from the other manufacturer were fine emulsions and met these limits. Of the bags that failed, significant instability was noted in one series containing additional electrolytes. Injectable lipid emulsions provided in 3CP bags that did not meet the globule-size limits of USP chapter 729 produced coarser TNA formulations than emulsions that met the USP limits.
Accurate Classification of RNA Structures Using Topological Fingerprints
Li, Kejie; Gribskov, Michael
2016-01-01
While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
Self-regulated learning in simulation-based training: a systematic review and meta-analysis.
Brydges, Ryan; Manzone, Julian; Shanks, David; Hatala, Rose; Hamstra, Stanley J; Zendejas, Benjamin; Cook, David A
2015-04-01
Self-regulated learning (SRL) requires an active learner who has developed a set of processes for managing the achievement of learning goals. Simulation-based training is one context in which trainees can safely practise learning how to learn. The purpose of the present study was to evaluate, in the simulation-based training context, the effectiveness of interventions designed to support trainees in SRL activities. We used the social-cognitive model of SRL to guide a systematic review and meta-analysis exploring the links between instructor supervision, supports or scaffolds for SRL, and educational outcomes. We searched databases including MEDLINE and Scopus, and previous reviews, for material published until December 2011. Studies comparing simulation-based SRL interventions with another intervention for teaching health professionals were included. Reviewers worked independently and in duplicate to extract information on learners, study quality and educational outcomes. We used random-effects meta-analysis to compare the effects of supervision (instructor present or absent) and SRL educational supports (e.g. goal-setting study guides present or absent). From 11,064 articles, we included 32 studies enrolling 2482 trainees. Only eight of the 32 studies included educational supports for SRL. Compared with instructor-supervised interventions, unsupervised interventions were associated with poorer immediate post-test outcomes (pooled effect size: -0.34, p = 0.09; n = 19 studies) and negligible effects on delayed (i.e. > 1 week) retention tests (pooled effect size: 0.11, p = 0.63; n = 8 studies). Interventions including SRL supports were associated with small benefits compared with interventions without supports on both immediate post-tests (pooled effect size: 0.23, p = 0.22; n = 5 studies) and delayed retention tests (pooled effect size: 0.44, p = 0.067; n = 3 studies). Few studies in the simulation literature have designed SRL training to explicitly support trainees' capacity to self-regulate their learning. We recommend that educators and researchers shift from thinking about SRL as learning alone to thinking of SRL as comprising a shared responsibility between the trainee and the instructional designer (i.e. learning using designed supports that help prepare individuals for future learning). © 2015 John Wiley & Sons Ltd.
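Pooled effect sizes like those reported above are commonly obtained with DerSimonian-Laird random-effects pooling; the sketch below shows that computation on illustrative per-study effects and variances (an assumed example, not the review's data or necessarily its exact estimator).

```python
# Minimal sketch (assumed): DerSimonian-Laird random-effects meta-analysis.
import numpy as np

yi = np.array([0.10, 0.35, 0.25, -0.05, 0.40])   # per-study standardized mean differences
vi = np.array([0.04, 0.06, 0.05, 0.08, 0.07])    # per-study sampling variances

w = 1 / vi                                        # fixed-effect weights
y_fixed = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_fixed) ** 2)               # heterogeneity statistic
k = len(yi)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)                # between-study variance

w_star = 1 / (vi + tau2)                          # random-effects weights
pooled = np.sum(w_star * yi) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled effect = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```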
Regulation of behaviorally associated gene networks in worker honey bee ovaries
Wang, Ying; Kocher, Sarah D.; Linksvayer, Timothy A.; Grozinger, Christina M.; Page, Robert E.; Amdam, Gro V.
2012-01-01
Several lines of evidence support genetic links between ovary size and division of labor in worker honey bees. However, it is largely unknown how ovaries influence behavior. To address this question, we first performed transcriptional profiling on worker ovaries from two genotypes that differ in social behavior and ovary size. Then, we contrasted the differentially expressed ovarian genes with six sets of available brain transcriptomes. Finally, we probed behavior-related candidate gene networks in wild-type ovaries of different sizes. We found differential expression in 2151 ovarian transcripts in these artificially selected honey bee strains, corresponding to approximately 20.3% of the predicted gene set of honey bees. Differences in gene expression overlapped significantly with changes in the brain transcriptomes. Differentially expressed genes were associated with neural signal transmission (tyramine receptor, TYR) and ecdysteroid signaling; two independently tested nuclear hormone receptors (HR46 and ftz-f1) were also significantly correlated with ovary size in wild-type bees. We suggest that the correspondence between ovary and brain transcriptomes identified here indicates systemic regulatory networks among hormones (juvenile hormone and ecdysteroids), pheromones (queen mandibular pheromone), reproductive organs and nervous tissues in worker honey bees. Furthermore, robust correlations between ovary size and neural and endocrine response genes are consistent with the hypothesized roles of the ovaries in honey bee behavioral regulation. PMID:22162860
Heritability of body size in the polar bears of Western Hudson Bay.
Malenfant, René M; Davis, Corey S; Richardson, Evan S; Lunn, Nicholas J; Coltman, David W
2018-04-18
Among polar bears (Ursus maritimus), fitness is dependent on body size through males' abilities to win mates, females' abilities to provide for their young and all bears' abilities to survive increasingly longer fasting periods caused by climate change. In the Western Hudson Bay subpopulation (near Churchill, Manitoba, Canada), polar bears have declined in body size and condition, but nothing is known about the genetic underpinnings of body size variation, which may be subject to natural selection. Here, we combine a 4449-individual pedigree and an array of 5,433 single nucleotide polymorphisms (SNPs) to provide the first quantitative genetic study of polar bears. We used animal models to estimate heritability (h²) among polar bears handled between 1966 and 2011, obtaining h² estimates of 0.34-0.48 for strictly skeletal traits and 0.18 for axillary girth (which is also dependent on fatness). We genotyped 859 individuals with the SNP array to test for marker-trait association and combined p-values over genetic pathways using gene-set analysis. Variation in all traits appeared to be polygenic, but we detected one region of moderately large effect size in body length near a putative noncoding RNA in an unannotated region of the genome. Gene-set analysis suggested that variation in body length was associated with genes in the regulatory cascade of cyclin expression, which has previously been associated with body size in mice. A greater understanding of the genetic architecture of body size variation will be valuable in understanding the potential for adaptation in polar bear populations challenged by climate change. © 2018 John Wiley & Sons Ltd.
Liu, Jingxia; Colditz, Graham A
2018-05-01
There is growing interest in conducting cluster randomized trials (CRTs). For simplicity in sample size calculation, the cluster sizes are assumed to be identical across all clusters. However, equal cluster sizes are not guaranteed in practice. Therefore, the relative efficiency (RE) of unequal versus equal cluster sizes has been investigated when testing the treatment effect. One of the most important approaches to analyze a set of correlated data is the generalized estimating equation (GEE) proposed by Liang and Zeger, in which the "working correlation structure" is introduced and the association pattern depends on a vector of association parameters denoted by ρ. In this paper, we utilize GEE models to test the treatment effect in a two-group comparison for continuous, binary, or count data in CRTs. The variances of the estimator of the treatment effect are derived for the different types of outcome. RE is defined as the ratio of variance of the estimator of the treatment effect for equal to unequal cluster sizes. We discuss a commonly used structure in CRTs-exchangeable, and derive the simpler formula of RE with continuous, binary, and count outcomes. Finally, REs are investigated for several scenarios of cluster size distributions through simulation studies. We propose an adjusted sample size due to efficiency loss. Additionally, we also propose an optimal sample size estimation based on the GEE models under a fixed budget for known and unknown association parameter (ρ) in the working correlation structure within the cluster. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
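As a rough numerical illustration of relative efficiency under an exchangeable working correlation, the sketch below compares the variance of a group mean for unequal versus equal cluster sizes using the cluster information m/(1+(m-1)ρ); the cluster sizes and ρ are assumptions, and this is a simplified stand-in for the paper's derivations.

```python
# Minimal sketch (assumed): relative efficiency (RE) of unequal vs. equal
# cluster sizes for a continuous outcome under exchangeable correlation.
import numpy as np

def var_group_mean(sizes, rho, sigma2=1.0):
    # Each cluster of size m contributes information m / (1 + (m - 1) * rho).
    info = np.sum(sizes / (1 + (sizes - 1) * rho))
    return sigma2 / info

rho = 0.05
unequal = np.array([5, 10, 20, 40, 80, 5, 10, 20, 40, 80], dtype=float)  # illustrative sizes
equal = np.full_like(unequal, unequal.mean())                             # same total N

re = var_group_mean(equal, rho) / var_group_mean(unequal, rho)
print(f"RE = {re:.3f}")                    # < 1: unequal cluster sizes lose efficiency
print(f"sample size inflation = {1/re:.3f}")  # inflate n by roughly 1/RE to compensate
```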
Impact of Group Size on Classroom On-Task Behavior and Work Productivity in Children with ADHD
ERIC Educational Resources Information Center
Hart, Katie C.; Massetti, Greta M.; Fabiano, Gregory A.; Pariseau, Meaghan E.; Pelham, William E., Jr.
2011-01-01
This study sought to systematically examine the academic behavior of children with ADHD in different instructional contexts in an analogue classroom setting. A total of 33 children with ADHD participated in a reading comprehension activity followed by a testing period and were randomly assigned within days to either small-group instruction,…
ERIC Educational Resources Information Center
Unsworth, Nash
2007-01-01
Two experiments explored the possibility that individual differences in working memory capacity (WMC) partially reflect differences in the size of the search set from which items are retrieved. High- and low-WMC individuals were tested in delayed (Experiment 1) and continuous distractor (Experiment 2) free recall with varying list lengths. Across…
Microbiological testing of Skylab foods.
NASA Technical Reports Server (NTRS)
Heidelbaugh, N. D.; Mcqueen, J. L.; Rowley, D. B.; Powers, E. M.; Bourland, C. T.
1973-01-01
Review of some of the unique food microbiology problems and problem-generating circumstances the Skylab manned space flight program involves. The situations these problems arise from include: extended storage times, variations in storage temperatures, no opportunity to resupply or change foods after launch of the Skylab Workshop, first use of frozen foods in space, first use of a food-warming device in weightlessness, relatively small size of production lots requiring statistically valid sampling plans, and use of food as an accurately controlled part in a set of sophisticated life science experiments. Consideration of all of these situations produced the need for definite microbiological tests and test limits. These tests are described along with the rationale for their selection. Reported test results show good compliance with the test limits.
NASA Astrophysics Data System (ADS)
Kang, Jidong; Gianetto, James A.; Tyson, William R.
2018-03-01
Fracture toughness measurement is an integral part of structural integrity assessment of pipelines. Traditionally, a single-edge-notched bend (SE(B)) specimen with a deep crack is recommended in many existing pipeline structural integrity assessment procedures. Such a test provides high constraint and therefore conservative fracture toughness results. However, for girth welds in service, defects are usually subjected to primarily tensile loading where the constraint is usually much lower than in the three-point bend case. Moreover, there is increasing use of strain-based design of pipelines that allows applied strains above yield. Low-constraint toughness tests represent more realistic loading conditions for girth weld defects, and the corresponding increased toughness can minimize unnecessary conservatism in assessments. In this review, we present recent developments in low-constraint fracture toughness testing, specifically using single-edge-notched tension specimens, SENT or SE(T). We focus our review on the test procedure development and automation, round-robin test results and some common concerns such as the effect of crack tip, crack size monitoring techniques, and testing at low temperatures. Examples are also given of the integration of fracture toughness data from SE(T) tests into structural integrity assessment.
Differential item functioning analysis of the Vanderbilt Expertise Test for cars.
Lee, Woo-Yeol; Cho, Sun-Joo; McGugin, Rankin W; Van Gulick, Ana Beth; Gauthier, Isabel
2015-01-01
The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge.
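For reference, the three-parameter logistic (3PL) item response function selected above has a simple closed form; the sketch below evaluates it with illustrative discrimination (a), difficulty (b), and guessing (c) parameters.

```python
# Minimal sketch (assumed parameters): the 3PL item response function.
import numpy as np

def p_correct_3pl(theta, a, b, c):
    """Probability of a correct response at ability theta under the 3PL model."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(p_correct_3pl(theta, a=1.2, b=0.5, c=0.25))  # rises from ~c toward 1 as ability grows
```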
Igne, Benoît; Drennen, James K; Anderson, Carl A
2014-01-01
Changes in raw materials and process wear and tear can have significant effects on the prediction error of near-infrared calibration models. When the variability that is present during routine manufacturing is not included in the calibration, test, and validation sets, the long-term performance and robustness of the model will be limited. Nonlinearity is a major source of interference. In near-infrared spectroscopy, nonlinearity can arise from light path-length differences that can come from differences in particle size or density. The usefulness of support vector machine (SVM) regression to handle nonlinearity and improve the robustness of calibration models in scenarios where the calibration set did not include all the variability present in test was evaluated. Compared to partial least squares (PLS) regression, SVM regression was less affected by physical (particle size) and chemical (moisture) differences. The linearity of the SVM predicted values was also improved. Nevertheless, although visualization and interpretation tools have been developed to enhance the usability of SVM-based methods, work is yet to be done to provide chemometricians in the pharmaceutical industry with a regression method that can supplement PLS-based methods.
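The comparison above can be prototyped with off-the-shelf tools; the sketch below contrasts PLS regression and RBF-kernel SVR on simulated spectra with a mild nonlinearity. The data, component count, and SVR settings are assumptions, not the paper's calibration setup.

```python
# Minimal sketch (assumed): PLS vs. SVM regression on spectra-like data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 100))                 # stand-in for NIR spectra
y = X[:, :5].sum(axis=1) + 0.3 * X[:, 0] ** 2   # mild nonlinear contribution
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)

print("PLS RMSE:", mean_squared_error(y_te, pls.predict(X_te).ravel()) ** 0.5)
print("SVR RMSE:", mean_squared_error(y_te, svr.predict(X_te)) ** 0.5)
```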
Heuristics for Multiobjective Optimization of Two-Sided Assembly Line Systems
Jawahar, N.; Ponnambalam, S. G.; Sivakumar, K.; Thangadurai, V.
2014-01-01
Products such as cars, trucks, and heavy machinery are assembled by two-sided assembly line. Assembly line balancing has significant impacts on the performance and productivity of flow line manufacturing systems and is an active research area for several decades. This paper addresses the line balancing problem of a two-sided assembly line in which the tasks are to be assigned at L side or R side or any one side (addressed as E). Two objectives, minimum number of workstations and minimum unbalance time among workstations, have been considered for balancing the assembly line. There are two approaches to solve multiobjective optimization problem: first approach combines all the objectives into a single composite function or moves all but one objective to the constraint set; second approach determines the Pareto optimal solution set. This paper proposes two heuristics to evolve optimal Pareto front for the TALBP under consideration: Enumerative Heuristic Algorithm (EHA) to handle problems of small and medium size and Simulated Annealing Algorithm (SAA) for large-sized problems. The proposed approaches are illustrated with example problems and their performances are compared with a set of test problems. PMID:24790568
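As a generic illustration of the simulated-annealing idea behind the SAA, the sketch below runs a plain SA loop with swap moves on a toy task-assignment cost; it ignores precedence constraints, two-sidedness, and the Pareto bookkeeping of the actual heuristics.

```python
# Minimal sketch (assumed): a generic simulated-annealing loop minimizing
# workstation unbalance for a toy, unconstrained task-assignment problem.
import math
import random

task_times = [4, 7, 3, 5, 6, 2, 8, 5]   # illustrative task times
n_stations = 3

def cost(order):
    # Round-robin the ordered tasks onto stations and measure workload spread.
    loads = [0.0] * n_stations
    for i, t in enumerate(order):
        loads[i % n_stations] += task_times[t]
    return max(loads) - min(loads)

random.seed(0)
current = list(range(len(task_times)))
best = current[:]
T = 10.0
while T > 0.01:
    cand = current[:]
    i, j = random.sample(range(len(cand)), 2)
    cand[i], cand[j] = cand[j], cand[i]            # swap-neighbourhood move
    delta = cost(cand) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):
        current = cand
        if cost(current) < cost(best):
            best = current[:]
    T *= 0.95                                       # geometric cooling schedule
print(best, cost(best))
```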
Savelkoul, Michael; Hewstone, Miles; Scheepers, Peer; Stolle, Dietlind
2015-07-01
We test whether a larger percentage of non-Whites in neighborhoods decreases associational involvement and build on earlier research in three ways. First, we explicitly consider the ethnic composition of organizations, distinguishing involvement in bridging (with out-group members) and bonding (only in-group members) organizations. Second, we start from constrict theory and test competing sets of predictions derived from conflict and contact theories to explain these relationships. Third, we examine whether relative out-group size affects involvement in different types of voluntary organizations equally. Using data from the 2005 U.S. 'Citizenship, Involvement, Democracy' survey, the percentage of non-Whites in neighborhoods is largely unrelated with associational involvement or perceived ethnic threat. However, perceiving ethnic threat is consistently negatively related with involvement in bridging organizations. Simultaneously, a larger percentage of non-Whites fosters intergroup contact, which is negatively related with perceptions of ethnic threat and involvement in bonding leisure organizations. Our results shed more light on the relationship between the relative out-group size in neighborhoods and associational involvement as well as underlying explanations for this link. Copyright © 2015 Elsevier Inc. All rights reserved.
Falcone, John L; Middleton, Donald B
2013-01-01
The Accreditation Council for Graduate Medical Education (ACGME) sets residency performance standards for the American Board of Family Medicine Certification Examination. The aims of this study are to describe the compliance of residency programs with ACGME standards and to determine whether residency pass rates depend on program size and location. In this retrospective cohort study, residency performance from 2007 to 2011 was compared with the ACGME performance standards. Simple linear regression was performed to see whether program pass rates were dependent on program size. Regional differences in performance were compared with χ² tests, using an α level of 0.05. Of 429 total residency programs, 205 (47.8%) violated ACGME performance standards. Linear regression showed that program pass rates were positively correlated with and dependent on program size (P < .001). The median pass rate per state was 86.4% (interquartile range, 82.0-90.8). χ² tests showed that states in the West performed higher than the other 3 US Census Bureau Regions (all P < .001). Approximately half of the family medicine training programs do not meet the ACGME examination performance standards. Pass rates are associated with residency program size, and regional variation occurs. These findings have the potential to affect ACGME policy and residency program application patterns.
Awais, Muhammad; Palmerini, Luca; Bourke, Alan K.; Ihlen, Espen A. F.; Helbostad, Jorunn L.; Chiari, Lorenzo
2016-01-01
The popularity of using wearable inertial sensors for physical activity classification has dramatically increased in the last decade due to their versatility, low form factor, and low power requirements. Consequently, various systems have been developed to automatically classify daily life activities. However, the scope and implementation of such systems is limited to laboratory-based investigations. Furthermore, these systems are not directly comparable, due to the large diversity in their design (e.g., number of sensors, placement of sensors, data collection environments, data processing techniques, features set, classifiers, cross-validation methods). Hence, the aim of this study is to propose a fair and unbiased benchmark for the field-based validation of three existing systems, highlighting the gap between laboratory and real-life conditions. For this purpose, three representative state-of-the-art systems are chosen and implemented to classify the physical activities of twenty older subjects (76.4 ± 5.6 years). The performance in classifying four basic activities of daily life (sitting, standing, walking, and lying) is analyzed in controlled and free living conditions. To observe the performance of laboratory-based systems in field-based conditions, we trained the activity classification systems using data recorded in a laboratory environment and tested them in real-life conditions in the field. The findings show that the performance of all systems trained with data in the laboratory setting highly deteriorates when tested in real-life conditions, thus highlighting the need to train and test the classification systems in the real-life setting. Moreover, we tested the sensitivity of chosen systems to window size (from 1 s to 10 s) suggesting that overall accuracy decreases with increasing window size. Finally, to evaluate the impact of the number of sensors on the performance, chosen systems are modified considering only the sensing unit worn at the lower back. The results, similarly to the multi-sensor setup, indicate substantial degradation of the performance when laboratory-trained systems are tested in the real-life setting. This degradation is higher than in the multi-sensor setup. Still, the performance provided by the single-sensor approach, when trained and tested with real data, can be acceptable (with an accuracy above 80%). PMID:27973434
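A minimal version of the windowing-and-classification pipeline discussed above might look like the sketch below, which segments a simulated tri-axial accelerometer stream into fixed windows, extracts simple time-domain features, and trains a standard classifier; the sensors, features, and classifier choice are assumptions, not the three benchmarked systems.

```python
# Minimal sketch (assumed): sliding-window activity classification on
# simulated accelerometer data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

fs = 50                                   # sampling frequency (Hz)
win = fs * 2                              # 2 s windows (the study varied 1-10 s)

rng = np.random.default_rng(0)
# Ten one-minute activity bouts; signal variance depends on the activity label.
acts = rng.integers(0, 4, size=10)
labels = np.repeat(acts, fs * 60)
signal = rng.normal(scale=(0.2 + 0.3 * labels)[:, None], size=(labels.size, 3))

feats, ys = [], []
for start in range(0, len(signal) - win + 1, win):
    seg = signal[start:start + win]
    feats.append(np.r_[seg.mean(axis=0), seg.std(axis=0)])      # time-domain features
    ys.append(np.bincount(labels[start:start + win]).argmax())  # majority label per window

X_tr, X_te, y_tr, y_te = train_test_split(np.array(feats), np.array(ys), random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("window accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```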
A goal attainment pain management program for older adults with arthritis.
Davis, Gail C; White, Terri L
2008-12-01
The purpose of this study was to test a pain management intervention that integrates goal setting with older adults (age > or =65) living independently in residential settings. This preliminary testing of the Goal Attainment Pain Management Program (GAPMAP) included a sample of 17 adults (mean age 79.29 years) with self-reported pain related to arthritis. Specific study aims were to: 1) explore the use of individual goal setting; 2) determine participants' levels of goal attainment; 3) determine whether changes occurred in the pain management methods used and found to be helpful by GAPMAP participants; and 4) determine whether changes occurred in selected pain-related variables (i.e., experience of living with persistent pain, the expected outcomes of pain management, pain management barriers, and global ratings of perceived pain intensity and success of pain management). Because of the small sample size, both parametric (t test) and nonparametric (Wilcoxon signed rank test) analyses were used to examine differences from pretest to posttest. Results showed that older individuals could successfully participate in setting and attaining individual goals. Thirteen of the 17 participants (76%) met their goals at the expected level or above. Two management methods (exercise and using a heated pool, tub, or shower) were used significantly more often after the intervention, and two methods (exercise and distraction) were identified as significantly more helpful. Two pain-related variables (experience of living with persistent pain and expected outcomes of pain management) revealed significant change, and all of those tested showed overall improvement.
Single DNA imaging and length quantification through a mobile phone microscope
NASA Astrophysics Data System (ADS)
Wei, Qingshan; Luo, Wei; Chiang, Samuel; Kappel, Tara; Mejia, Crystal; Tseng, Derek; Chan, Raymond Yan L.; Yan, Eddie; Qi, Hangfei; Shabbir, Faizan; Ozkan, Haydar; Feng, Steve; Ozcan, Aydogan
2016-03-01
The development of sensitive optical microscopy methods for the detection of single DNA molecules has become an active research area which cultivates various promising applications including point-of-care (POC) genetic testing and diagnostics. Direct visualization of individual DNA molecules usually relies on sophisticated optical microscopes that are mostly available in well-equipped laboratories. For POC DNA testing/detection, there is an increasing need for the development of new single DNA imaging and sensing methods that are field-portable, cost-effective, and accessible for diagnostic applications in resource-limited or field-settings. For this aim, we developed a mobile-phone integrated fluorescence microscopy platform that allows imaging and sizing of single DNA molecules that are stretched on a chip. This handheld device contains an opto-mechanical attachment integrated onto a smartphone camera module, which creates a high signal-to-noise ratio dark-field imaging condition by using an oblique illumination/excitation configuration. Using this device, we demonstrated imaging of individual linearly stretched λ DNA molecules (48 kilobase-pair, kbp) over 2 mm2 field-of-view. We further developed a robust computational algorithm and a smartphone app that allowed the users to quickly quantify the length of each DNA fragment imaged using this mobile interface. The cellphone based device was tested by five different DNA samples (5, 10, 20, 40, and 48 kbp), and a sizing accuracy of <1 kbp was demonstrated for DNA strands longer than 10 kbp. This mobile DNA imaging and sizing platform can be very useful for various diagnostic applications including the detection of disease-specific genes and quantification of copy-number-variations at POC settings.
Amarasekera, Dilru C; Resende, Arthur F; Waisbourd, Michael; Puri, Sanjeev; Moster, Marlene R; Hark, Lisa A; Katz, L Jay; Fudemberg, Scott J; Mantravadi, Anand V
2018-01-01
This study evaluates two rapid electrophysiological glaucoma diagnostic tests that may add a functional perspective to glaucoma diagnosis. This study aimed to determine the ability of two office-based electrophysiological diagnostic tests, steady-state pattern electroretinogram and short-duration transient visual evoked potentials, to discern between glaucomatous and healthy eyes. This is a cross-sectional study in a hospital setting. Forty-one patients with glaucoma and 41 healthy volunteers participated in the study. Steady-state pattern electroretinogram and short-duration transient visual evoked potential testing was conducted in glaucomatous and healthy eyes. A 64-bar-size stimulus with both a low-contrast and high-contrast setting was used to compare steady-state pattern electroretinogram parameters in both groups. A low-contrast and high-contrast checkerboard stimulus was used to measure short-duration transient visual evoked potential parameters in both groups. Steady-state pattern electroretinogram parameters compared were MagnitudeD, MagnitudeD/Magnitude ratio, and the signal-to-noise ratio. Short-duration transient visual evoked potential parameters compared were amplitude and latency. MagnitudeD was significantly lower in glaucoma patients when using a low-contrast (P = 0.001) and high-contrast (P < 0.001) 64-bar-size steady-state pattern electroretinogram stimulus. MagnitudeD/Magnitude ratio and SNR were significantly lower in the glaucoma group when using a high-contrast 64-bar-size stimulus (P < 0.001 and P = 0.010, respectively). Short-duration transient visual evoked potential amplitude and latency were not significantly different between the two groups. Steady-state pattern electroretinogram was effectively able to discern between glaucomatous and healthy eyes. Steady-state pattern electroretinogram may thus have a role as a clinically useful electrophysiological diagnostic tool. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
NASA Astrophysics Data System (ADS)
Basner, Mathias; Mollicone, Daniel; Dinges, David F.
2011-12-01
The Psychomotor Vigilance Test (PVT) objectively assesses fatigue-related changes in alertness associated with sleep loss, extended wakefulness, circadian misalignment, and time on task. The standard 10-min PVT is often considered impractical in applied contexts. To address this limitation, we developed a modified brief 3-min version of the PVT (PVT-B). The PVT-B was validated in controlled laboratory studies with 74 healthy subjects (34 female, aged 22-45 years) that participated either in a total sleep deprivation (TSD) study involving 33 h awake (N=31 subjects) or in a partial sleep deprivation (PSD) protocol involving 5 consecutive nights of 4 h time in bed (N=43 subjects). PVT and PVT-B were performed regularly during wakefulness. Effect sizes of 5 key PVT outcomes were larger for TSD than PSD and larger for PVT than for PVT-B for all outcomes. Effect size was largest for response speed (reciprocal response time) for both the PVT-B and the PVT in both TSD and PSD. According to Cohen's criteria, effect sizes for the PVT-B were still large (TSD) or medium to large (PSD, except for fastest 10% RT). Compared to the 70% decrease in test duration, the 22.7% (range 6.9-67.8%) average decrease in effect size was deemed an acceptable trade-off between duration and sensitivity. Overall, PVT-B performance had faster response times, more false starts and fewer lapses than PVT performance (all p<0.01). After reducing the lapse threshold from 500 to 355 ms for PVT-B, mixed model ANOVAs indicated no differential sensitivity to sleep loss between PVT-B and PVT for all outcome variables (all P>0.15) but the fastest 10% response times during PSD (P<0.001), and effect sizes increased from 1.38 to 1.49 (TSD) and 0.65 to 0.76 (PSD), respectively. In conclusion, PVT-B tracked standard 10-min PVT performance throughout both TSD and PSD, and yielded medium to large effect sizes. PVT-B may be a useful tool for assessing behavioral alertness in settings where the duration of the 10-min PVT is considered impractical, although further validation in applied settings is needed.
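As an assumed illustration of how the key PVT outcomes above are scored, the sketch below counts false starts and lapses (using the 355 ms PVT-B threshold) and computes mean response speed from a small set of made-up reaction times.

```python
# Minimal sketch (assumed): scoring a block of PVT reaction times into false
# starts, lapses (>= 355 ms, the PVT-B threshold), and mean response speed.
import numpy as np

rt_ms = np.array([220, 340, 510, 95, 287, 410, 265, 355, 180, 930])  # illustrative RTs
false_starts = int(np.sum(rt_ms < 100))       # anticipations (RT < 100 ms)
valid = rt_ms[rt_ms >= 100]
lapses = int(np.sum(valid >= 355))            # slow responses
mean_speed = float(np.mean(1000.0 / valid))   # reciprocal response time (1/s)
print(false_starts, lapses, round(mean_speed, 2))
```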
Veturi, Yogasudha; Ritchie, Marylyn D
2018-01-01
Transcriptome-wide association studies (TWAS) have recently been employed as an approach that can draw upon the advantages of genome-wide association studies (GWAS) and gene expression studies to identify genes associated with complex traits. Unlike standard GWAS, summary level data suffices for TWAS and offers improved statistical power. Two popular TWAS methods include either (a) imputing the cis genetic component of gene expression from smaller sized studies (using multi-SNP prediction or MP) into much larger effective sample sizes afforded by GWAS (TWAS-MP) or (b) using summary-based Mendelian randomization (TWAS-SMR). Although these methods have been effective at detecting functional variants, it remains unclear how extensive variability in the genetic architecture of complex traits and diseases impacts TWAS results. Our goal was to investigate the different scenarios under which these methods yielded enough power to detect significant expression-trait associations. In this study, we conducted extensive simulations based on 6000 randomly chosen, unrelated Caucasian males from Geisinger's MyCode population to compare the power to detect cis expression-trait associations (within 500 kb of a gene) using the above-described approaches. To test TWAS across varying genetic backgrounds we simulated gene expression and phenotype using different quantitative trait loci per gene and cis-expression/trait heritability under genetic models that differentiate the effect of causality from that of pleiotropy. For each gene, on a training set ranging from 100 to 1000 individuals, we either (a) estimated regression coefficients with gene expression as the response using five different methods: LASSO, elastic net, Bayesian LASSO, Bayesian spike-slab, and Bayesian ridge regression or (b) performed eQTL analysis. We then sampled with replacement 50,000, 150,000, and 300,000 individuals respectively from the testing set of the remaining 5000 individuals and conducted GWAS on each set. Subsequently, we integrated the GWAS summary statistics derived from the testing set with the weights (or eQTLs) derived from the training set to identify expression-trait associations using (a) TWAS-MP (b) TWAS-SMR (c) eQTL-based GWAS, or (d) standalone GWAS. Finally, we examined the power to detect functionally relevant genes using the different approaches under the considered simulation scenarios. In general, we observed great similarities among TWAS-MP methods although the Bayesian methods resulted in improved power in comparison to LASSO and elastic net as the trait architecture grew more complex while training sample sizes and expression heritability remained small. Finally, we observed high power under causality but very low to moderate power under pleiotropy.
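The TWAS-MP workflow described above can be sketched end to end as follows: learn cis-SNP weights for expression with an elastic net in a training set, impute the genetic component of expression into a larger cohort, and regress the phenotype on the imputed expression. Everything in the sketch (sample sizes, effect sizes, elastic net settings) is simulated and assumed, not the study's pipeline.

```python
# Minimal sketch (assumed): a simulated TWAS-MP style analysis.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from scipy import stats

rng = np.random.default_rng(0)
n_train, n_gwas, n_snps = 500, 5000, 200

G_train = rng.binomial(2, 0.3, size=(n_train, n_snps)).astype(float)
true_w = np.zeros(n_snps); true_w[:5] = 0.4                  # 5 causal cis-eQTLs
expr = G_train @ true_w + rng.normal(size=n_train)            # expression, training set

G_gwas = rng.binomial(2, 0.3, size=(n_gwas, n_snps)).astype(float)
trait = (G_gwas @ true_w) * 0.2 + rng.normal(size=n_gwas)     # trait acts through expression

enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(G_train, expr)    # multi-SNP prediction weights
imputed_expr = G_gwas @ enet.coef_                            # imputed genetic expression

slope, intercept, r, p, se = stats.linregress(imputed_expr, trait)
print(f"TWAS association: z = {slope / se:.2f}, p = {p:.2e}")
```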
Durability of an inorganic polymer concrete coating
NASA Astrophysics Data System (ADS)
Wasserman, Kenneth
The objective of the research program reported in this thesis is to evaluate the durability of an inorganic polymer composite coating exposed to freeze/thaw cycling and wet-dry cycling. Freeze/thaw cycling is performed following ASTM D6944-09 Standard Practice for Resistance of Cured Coatings to Thermal Cycling and wet/dry cycling is performed following guidelines set forth in a thesis written by Ronald Garon at Rutgers University. For both sets of experiments, four coating mixture proportions were evaluated. The variables were: silica/alumina ratio, mixing protocol using high shear and normal shear mixing, curing temperatures of 70 and 120 degrees Fahrenheit and use of nano size constituent materials. The mix with highest silica/alumina ratio was designated as Mix 1 and mixes with lower ratios were designated as Mix 2 and Mix 3. Mix 4 had nano silica particles. Four prisms were used for each variable including control that had no coating. The performance of the coating was evaluated using adhesion strength measured using: ASTM D7234 Test Method for Pull-Off Strength of Coatings on Concrete Using Portable Adhesion Testers. Tests were performed after every five consecutive cycles of thermal conditioning and six consecutive cycles of wet-dry exposure. Results from the thermal cycling and wet-dry testing demonstrate that all coating formulations are durable. The minimum adhesion strength was 300 psi even though a relatively weak base concrete surface was chosen for the study. The weak surface was chosen to simulate aged concrete surfaces present in actual field conditions. Due to the inherent nature of the test procedure the variation in test results is high. However, based on the test results, high shear mixer and high temperature curing are not recommended. As expected nano size constituent materials provide better performance.
Kreissel, K; Bösl, M; Lipp, P; Franzreb, M; Hambsch, B
2012-01-01
To determine the removal efficiency of ultrafiltration (UF) membranes for nano-particles in the size range of viruses the state of the art uses challenge tests with virus-spiked water. This work focuses on bench-scale and semi-technical scale experiments. Different experimental parameters influencing the removal efficiency of the tested UF membrane modules were analyzed and evaluated for bench- and semi-technical scale experiments. Organic matter in the water matrix highly influenced the removal of the tested bacteriophages MS2 and phiX174. Less membrane fouling (low ΔTMP) led to a reduced phage reduction. Increased flux positively affected phage removal in natural waters. The tested bacteriophages MS2 and phiX174 revealed different removal properties. MS2, which is widely used as a model organism to determine virus removal efficiencies of membranes, mostly showed a better removal than phiX174 for the natural water qualities tested. It seems that MS2 is possibly a less conservative surrogate for human enteric virus removal than phiX174. In bench-scale experiments log removal values (LRV) for MS2 of 2.5-6.0 and of 2.5-4.5 for phiX174 were obtained for the examined range of parameters. Phage removal obtained with differently fabricated semi-technical modules was quite variable for comparable parameter settings, indicating that module fabrication can lead to differing results. Potting temperature and module size were identified as influencing factors. In conclusion, careful attention has to be paid to the choice of experimental settings and module potting when using bench-scale or semi-technical scale experiments for UF membrane challenge tests.
Percent area coverage through image analysis
NASA Astrophysics Data System (ADS)
Wong, Chung M.; Hong, Sung M.; Liu, De-Ling
2016-09-01
The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has conventionally been calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC was calculated through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results show that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
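To make the image-analysis route concrete, the sketch below segments particles on a witness-wafer micrograph and reports the summed particle cross-sectional area as a percentage of the imaged area. It is a minimal illustration, not the authors' processing chain; the Otsu threshold and the um_per_pixel scale are assumed inputs, and the MIL-STD-1246C coefficients are not reproduced.

```python
# Sketch of percent area coverage (PAC) from a witness-wafer micrograph.
# Illustrative only; thresholding choice and pixel scale are hypothetical inputs.
import numpy as np
from skimage import filters, measure

def pac_from_image(gray_image, um_per_pixel):
    # Segment particles as pixels darker than an automatic (Otsu) threshold.
    thresh = filters.threshold_otsu(gray_image)
    particles = gray_image < thresh
    labeled = measure.label(particles)
    # Sum particle cross-sectional areas directly from the pixel mask,
    # so irregular shapes are measured rather than approximated.
    particle_area = sum(r.area for r in measure.regionprops(labeled)) * um_per_pixel**2
    imaged_area = gray_image.size * um_per_pixel**2
    return 100.0 * particle_area / imaged_area
```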
Elnaghy, Amr; Elsaka, Shaymaa
2018-04-01
The aims of this study were to assess and compare the resistance to cyclic fatigue of XP-endo Shaper (XPS; FKG Dentaire, La Chaux-de-Fonds, Switzerland) instruments with TRUShape (TRS; Dentsply Tulsa Dental Specialties, Tulsa, OK, USA), HyFlex CM (HCM; Coltene, Cuyahoga Falls, OH, USA), Vortex Blue (VB; Dentsply Tulsa Dental Specialties), and iRace (iR; FKG Dentaire) nickel-titanium rotary instruments at body temperature. Size 30, 0.01 taper of XPS, size 30, 0.04 taper of HCM, VB, iR, and size 30, 0.06 taper of TRS instruments were immersed in saline at 37 ± 1 °C during cyclic fatigue testing. The instruments were tested with 60° angle of curvature and a 3-mm radius of curvature. The number of cycles to failure (NCF) was calculated and the length of the fractured segment was measured. Fractographic examination of the fractured surface was performed using a scanning electron microscope. The data were analyzed statistically using Kruskal-Wallis H test and Mann-Whitney U tests. Statistical significance was set at P < 0.05. XPS had a significantly greater NCF compared with the other instruments (P < 0.001). The topographic appearance of the fracture surfaces of tested instruments revealed ductile fracture of cyclic fatigue failure. XPS instruments exhibited greater cyclic fatigue resistance compared with the other tested instruments. XP-endo Shaper instruments could be used more safely in curved canals due to their higher fatigue resistance.
Using Small-Scale Randomized Controlled Trials to Evaluate the Efficacy of New Curricular Materials
Bass, Kristin M.; Stark, Louisa A.
2014-01-01
How can researchers in K–12 contexts stay true to the principles of rigorous evaluation designs within the constraints of classroom settings and limited funding? This paper explores this question by presenting a small-scale randomized controlled trial (RCT) designed to test the efficacy of curricular supplemental materials on epigenetics. The researchers asked whether the curricular materials improved students’ understanding of the content more than an alternative set of activities. The field test was conducted in a diverse public high school setting with 145 students who were randomly assigned to a treatment or comparison condition. Findings indicate that students in the treatment condition scored significantly higher on the posttest than did students in the comparison group (effect size: Cohen's d = 0.40). The paper discusses the strengths and limitations of the RCT, the contextual factors that influenced its enactment, and recommendations for others wishing to conduct small-scale rigorous evaluations in educational settings. Our intention is for this paper to serve as a case study for university science faculty members who wish to employ scientifically rigorous evaluations in K–12 settings while limiting the scope and budget of their work. PMID:25452482
Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D
2014-10-01
We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.
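One common way to realize such a correlated binary structure (a hedged sketch under assumed variance components, not necessarily the authors' published construction) is to threshold a latent Gaussian built from shared reader and case effects so the marginal agreement probability equals the target P1 = P2.

```python
# Hedged sketch: simulate binary agreement data for readers x cases x 2 modalities
# by thresholding a latent Gaussian with reader, case, and error components.
# The variance split and thresholding construction are assumptions for illustration.
import numpy as np
from scipy.stats import norm

def simulate_binary_mrmc(n_readers, n_cases, p=0.8, var_reader=0.1,
                         var_case=0.2, rng=np.random.default_rng(0)):
    var_err = 1.0 - var_reader - var_case          # total latent variance = 1
    cut = norm.ppf(p)                              # threshold giving P(agree) = p
    r = rng.normal(0, np.sqrt(var_reader), (n_readers, 1, 1))
    c = rng.normal(0, np.sqrt(var_case), (1, n_cases, 1))
    e = rng.normal(0, np.sqrt(var_err), (n_readers, n_cases, 2))
    latent = r + c + e                             # shared terms induce correlation
    return (latent < cut).astype(int)              # 1 = agree with truth

scores = simulate_binary_mrmc(5, 100)              # (5 readers, 100 cases, 2 modalities)
print(scores.mean())                               # roughly 0.8 by construction
```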
Urrestarazu, Jorge; Royo, José B.; Santesteban, Luis G.; Miranda, Carlos
2015-01-01
Fingerprinting information can be used to elucidate in a robust manner the genetic structure of germplasm collections, allowing a more rational and fine assessment of genetic resources. Bayesian model-based approaches are nowadays generally preferred for inferring genetic structure, but it is still largely unresolved how marker sets should be built in order to obtain a robust inference. The objective was to evaluate, in Pyrus germplasm collections, the influence of the SSR marker set size on the genetic structure inferred, also evaluating the influence of the criterion used to select those markers. Inferences were performed considering an increasing number of SSR markers that ranged from just two up to 25, incorporated one at a time into the analysis. The influence of the number of SSR markers used was evaluated by comparing the number of populations and the strength of the signal detected, as well as the similarity of the genotype assignments to populations between analyses. In order to test whether those results were influenced by the criterion used to select the SSRs, several choosing scenarios based on the discrimination power or the fixation index values of the SSRs were tested. Our results indicate that population structure could be inferred accurately once a certain threshold number of SSRs was reached, which depended on the underlying structure within the genotypes, whereas the method used to select the markers included in each set appeared not to be very relevant. The minimum number of SSRs required to provide robust structure inferences and adequate measurements of the differentiation, even when low differentiation levels exist within populations, proved similar to that of the complete list of markers recommended for fingerprinting. When an SSR set size similar to the minimum marker sets recommended for fingerprinting is used, only major divisions or moderate (FST > 0.05) differentiation of the germplasm is detected. PMID:26382618
Impact of the Curve Diameter and Laser Settings on Laser Fiber Fracture.
Haddad, Mattieu; Emiliani, Esteban; Rouchausse, Yann; Coste, Frederic; Doizi, Steeve; Berthe, Laurent; Butticé, Salvatore; Somani, Bhaskar; Traxer, Olivier
2017-09-01
To analyze the risk factors for laser fiber fractures when deflected to form a curve, including laser settings, size of the laser fiber, and the fiber bending diameter. Single-use 272 and 365 μm fibers (Rocamed ® , Monaco) were employed along with a holmium laser (Rocamed). Five different fiber curve diameters were tested: 9, 12, 15, 18, and 20 mm. Fragmentation and dusting settings were used at a theoretical power of 7.5 W. The laser was activated for 5 minutes and the principal judgment criterion was fiber fracture. Every test for each parameter, bending diameter, and fiber size combinations was repeated 10 times. With dusting settings, fibers broke more frequently at a curved diameter of 9 mm for both 272 and 365 μm fibers (p = 0.037 and 0.006, respectively). Using fragmentation settings, fibers broke more frequently at 12 mm for 272 μm and 15 mm for 365 μm (p = 0.007 and 0.033, respectively). Short pulse and high energy were significant risk factors for fiber fracture using the 365 μm fibers (p = 0.02), but not for the 272 μm fibers (p = 0.35). Frequency was not a risk factor for fiber rupture. Fiber diameters also seemed to be involved in the failure with a higher number of broken fibers for the 365 μm fibers, but this was not statistically significant when compared with the 272 μm fibers (p > 0.05). Small-core fibers are more resistant than large-core fibers as lower bending diameters (<9 mm) are required to break smaller fibers. In acute angles, the use of small-core fibers, at a low energy and long-pulse (dusting) setting, will reduce the risk of fiber rupture.
NASA Astrophysics Data System (ADS)
Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho
2008-03-01
To find the optimal binning method and region-of-interest (ROI) size for an automatic classification system differentiating diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular ROIs with 10-, 20-, and 30-pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find optimal binning, variable-size linear binning (LB; bin size Q: 4-30, 32, 64, 128, 144, 196, 256, 384) and non-linear binning (NLB; Q: 4-30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. To assess cross-validation of the system, a five-fold method was used, and each test was repeated twenty times. Overall accuracies for every combination of ROI and binning size were statistically compared. For small binning sizes (Q <= 10), NLB showed significantly better accuracy than LB, and K-means NLB (Q = 26) was statistically significantly better than every LB. For the 30-pixel ROI size and most binning sizes, the K-means method performed better than the other NLB and LB methods. When the optimal binning and other parameters were set, the overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO, 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We thus determined the optimal binning method and ROI size for the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
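The binning step itself is straightforward to illustrate. The sketch below contrasts equal-width linear binning with K-means non-linear binning of ROI intensities, the quantization that precedes co-occurrence feature extraction; it is an illustrative reimplementation, not the authors' code, and the texture features and SVM stage are omitted.

```python
# Sketch of linear binning (LB) vs K-means non-linear binning (NLB) of ROI
# intensities prior to texture-feature extraction. Illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def linear_binning(roi, q):
    # Equal-width bins spanning the ROI intensity range.
    edges = np.linspace(roi.min(), roi.max(), q + 1)
    return np.clip(np.digitize(roi, edges[1:-1]), 0, q - 1)

def kmeans_binning(roi, q, seed=0):
    # Bin boundaries follow the data distribution rather than equal widths.
    km = KMeans(n_clusters=q, n_init=10, random_state=seed)
    labels = km.fit_predict(roi.reshape(-1, 1))
    # Reorder labels so the bin index increases with cluster centre intensity.
    order = np.argsort(km.cluster_centers_.ravel())
    remap = np.empty(q, dtype=int)
    remap[order] = np.arange(q)
    return remap[labels].reshape(roi.shape)
```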
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal-amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
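The binomial building block behind the 90/95 criterion can be sketched directly: at a given flaw size, the one-sided Clopper-Pearson lower bound on the detection probability is compared with 0.90. This is only the elementary check, not the sequential DOEPOD procedure itself.

```python
# Minimal check of the 90/95 POD criterion at one flaw size using a
# Clopper-Pearson lower bound. Not the full DOEPOD procedure, only the
# binomial building block it relies on.
from scipy.stats import beta

def pod_lower_bound(hits, trials, confidence=0.95):
    # One-sided lower confidence bound on the detection probability.
    if hits == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

def meets_90_95(hits, trials):
    return pod_lower_bound(hits, trials) >= 0.90

# Example: 29 hits out of 29 trials gives a lower bound of about 0.902,
# the classic "29 of 29" demonstration of 90/95 POD.
print(meets_90_95(29, 29))
```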
Morphology- and ion size-induced actuation of carbon nanotube architectures
NASA Astrophysics Data System (ADS)
Geier; Mahrholz; Wierach; Sinapius
2018-04-01
Future adaptive applications require lightweight and stiff materials with high active strain but low energy consumption. A suitable combination of these properties is offered by carbon nanotube-based actuators. Papers made of carbon nanotubes (CNTs) are charged within an electrolyte, which results in an electrical field forming a double-layer of ions at their surfaces and a deflection of the papers can be detected. Until now, there is no generally accepted theory for the actuation mechanism. This study focuses on the actuation mechanism of CNT papers, which represent architectures of randomly oriented CNTs. The samples are tested electrochemically in an in-plane set-up to detect the free strain. The elastic modulus of the CNT papers is analyzed in a tensile test facility. The influence of various ion sizes of water-based electrolytes is investigated.
Cryo-comminution of plastic waste.
Gente, Vincenzo; La Marca, Floriana; Lucci, Federica; Massacci, Paolo; Pani, Eleonora
2004-01-01
Recycling of plastics is a major issue for environmental sustainability and waste management, and the development of proper technologies for plastic recycling is recognised as a priority. To achieve this aim, the technologies applied in mineral processing can be adapted to recycling systems. In particular, the improvement of comminution technologies is one of the main actions to improve the quality of recycled plastics. The aim of this work is to identify suitable comminution processes for different types of plastic waste. Laboratory comminution tests have been carried out under different conditions of temperature and sample pre-conditioning, adopting CO2 and liquid nitrogen as refrigerant agents. The temperature has been monitored by thermocouples placed in the milling chamber, and different internal mill screens have been adopted. A proper procedure has been set up in order to obtain a selective comminution and a size reduction suitable for further separation treatment. Tests have been performed on plastics coming from medical plastic waste and from a plant for spent lead battery recycling. Results from different mill devices have been compared, taking into consideration different indexes for representative size distributions. The results of the performed tests show that cryo-comminution improves the effectiveness of size reduction of plastics, promotes liberation of constituents, and increases the specific surface area of comminuted particles in comparison with a comminution process carried out at room temperature. Copyright 2004 Elsevier Ltd.
Smith, P; Kronvall, G
2015-07-01
The influence on the precision of disc diffusion data of the conditions under which the tests were performed was examined by analysing multilaboratory data sets generated after incubation at 35 °C for 18 h, at 28 °C for 24 h and 22 °C for 24 h and 48 h. Analyses of these data sets demonstrated that precision was significantly and progressively decreased as the test temperature was reduced from 35 to 22 °C. Analysis of the data obtained at 22 °C also showed the precision was inversely related to the time of incubation. Temperature and time related decreases in precision were not related to differences in the mean zone sizes of the data sets obtained under these test conditions. Analysis of the zone data obtained at 28 and 22 °C as single laboratory sets demonstrated that reductions of incubation temperature resulted in significant increases in both intralaboratory and interlaboratory variation. Increases in incubation time at 22 °C were, however, associated with statistically significant increases in interlaboratory variation but not with any significant increase in intralaboratory variation. The significance of these observations for the establishment of the acceptable limits of precision of data sets that can be used for the setting of valid epidemiological cut-off values is discussed. © 2014 John Wiley & Sons Ltd.
Pollinator limitation and the effect of breeding systems on plant reproduction in forest fragments
NASA Astrophysics Data System (ADS)
Nayak, K. Geetha; Davidar, Priya
2010-03-01
Reproduction of plants in fragmented habitats may be limited because of lower diversity or abundance of pollinators, and/or variation in local plant density. We assessed natural fruit set and pollinator limitation in ten species of woody plants in natural and restored fragments in the Pondicherry region of southern India, to see whether breeding system of plants (self-compatible and self-incompatible) affected fruit set. We tested whether the number of flowering individuals in the fragments affected the fruit set and further examined the adult and sapling densities of self-compatible (SC) and self-incompatible (SI) species. We measured the natural level of fruit set and pollinator limitation (calculated as the difference in fruit set between hand cross-pollinated and naturally pollinated flowers). Our results demonstrate that there was a higher level of pollinator limitation and hence lower levels of natural fruit set in self-incompatible species as compared to self-compatible species. However, the hand cross-pollinated flowers in SC and SI species produced similar levels of fruit set, further indicating that lower fruit set was due to pollinator limitation and not due to lack of cross-compatible individuals in the fragments. There was no significant relation between number of flowering individuals and the levels of natural fruit set, except for two species Derris ovalifolia, Ixora pavetta. In these species the natural fruit set decreased with increasing population size, again indicating pollinator limitation. The adult and sapling densities in self-compatible species were significantly higher than in self-incompatible species. These findings indicate that the low reproductive output in self-incompatible species may eventually lead to lower population sizes. Restoration of pollinator services along with plant species in fragmented habitats is important for the long-term conservation of biodiversity.
Oxygen no longer plays a major role in Body Size Evolution
NASA Astrophysics Data System (ADS)
Datta, H.; Sachson, W.; Heim, N. A.; Payne, J.
2015-12-01
When observing the long-term relationship between atmospheric oxygen and the maximum size of organisms across the Geozoic (~3.8 Ga to present), it appears that as oxygen increases, organism size grows. However, oxygen levels varied during the Phanerozoic (541 Ma to present), so we set out to test the hypothesis that oxygen levels drive patterns of marine animal body size evolution. The expected decreases in maximum size during intervals of low oxygen do not occur; instead, body size continues to increase regardless. Because a relatively low atmospheric oxygen percentage can evidently support increasing body size, our research asks whether lifestyle affects body size in marine organisms. The genera in the data set were organized based on their tiering, motility, and feeding, such as pelagic, fully motile predators. When organisms fill a certain ecological niche to take advantage of resources, they have particular life modes rather than randomly selected traits. For example, even in terrestrial environments, large animals have to feed constantly to support an expensive lifestyle that involves fairly consistent movement and the structural support necessary for that movement. Only organisms with access to high-energy food sources or large amounts of food can support themselves, and that is before they expend energy elsewhere. Organisms that expend energy frugally when active, or that have slow metabolisms relative to body size, have a more efficient lifestyle and are generally able to grow larger, while those with higher energy demands, such as predators, are limited to comparatively smaller sizes. Therefore, with respect to the fossil record and modern measurements of animals, the metabolism and lifestyle of an organism largely dictate its body size. With this further clarification of the patterns of evolution, it will be easier to observe and understand the reasons for the ecological traits of organisms today.
Large Terrain Modeling and Visualization for Planets
NASA Technical Reports Server (NTRS)
Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher
2011-01-01
Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulation is handling large multi-resolution terrain data sets, both within models and for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage to deal with these large data sets. The visualization technique uses a real-time, GPU-based, continuous level-of-detail approach that delivers multiple frames per second even for planetary-scale terrain models.
Summary of: radiation protection in dental X-ray surgeries--still rooms for improvement.
Walker, Anne
2013-03-01
To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milliGray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy.cm(2) for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding. Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.
Radiation protection in dental X-ray surgeries--still rooms for improvement.
Hart, G; Dugdale, M
2013-03-01
To illustrate the authors' experience in the provision of radiation protection adviser (RPA)/medical physics expert (MPE) services and critical examination/radiation quality assurance (QA) testing, to demonstrate any continuing variability of the compliance of X-ray sets with existing guidance and of compliance of dental practices with existing legislation. Data was collected from a series of critical examination and routine three-yearly radiation QA tests on 915 intra-oral X-ray sets and 124 panoramic sets. Data are the result of direct measurements on the sets, made using a traceably calibrated Unfors Xi meter. The testing covered the measurement of peak kilovoltage (kVp); filtration; timer accuracy and consistency; X-ray beam size; and radiation output, measured as the entrance surface dose in milliGray (mGy) for intra-oral sets and dose-area product (DAP), measured in mGy.cm(2) for panoramic sets. Physical checks, including mechanical stability, were also included as part of the testing process. The Health and Safety Executive has expressed concern about the poor standards of compliance with the regulations during inspections at dental practices. Thirty-five percent of intra-oral sets exceeded the UK adult diagnostic reference level on at least one setting, as did 61% of those with child dose settings. There is a clear advantage of digital radiography and rectangular collimation in dose terms, with the mean dose from digital sets 59% that of film-based sets and a rectangular collimator 76% that of circular collimators. The data shows the unrealised potential for dose saving in many digital sets and also marked differences in dose between sets. Provision of radiation protection advice to over 150 general dental practitioners raised a number of issues on the design of surgeries with X-ray equipment and critical examination testing. There is also considerable variation in advice given on the need (or lack of need) for room shielding. Where no radiation protection adviser (RPA) or medical physics expert (MPE) appointment has been made, there is often a very low level of compliance with legislative requirements. The active involvement of an RPA/MPE and continuing education on radiation protection issues has the potential to reduce radiation doses significantly further in many dental practices.
Xu, C L; Letcher, B H; Nislow, K H
2010-06-01
A 5 year individual-based data set was used to estimate size-specific survival rates in a wild brook trout Salvelinus fontinalis population in a stream network encompassing a mainstem and three tributaries (1.5-6 m wetted width), western Massachusetts, U.S.A. The relationships between survival in summer and temperature and flow metrics derived from continuous monitoring data were then tested. Increased summer temperatures significantly reduced summer survival rates for S. fontinalis in almost all size classes in all four sites throughout the network. In contrast, extreme low summer flows reduced survival of large fish, but only in small tributaries, and had no significant effects on fish in smaller size classes in any location. These results provide direct evidence of a link between season-specific survival and environmental factors likely to be affected by climate change and have important consequences for the management of both habitats and populations.
[Municipalities Stratification for Health Performance Evaluation].
Calvo, Maria Cristina Marino; Lacerda, Josimari Telino de; Colussi, Claudia Flemming; Schneider, Ione Jayce Ceola; Rocha, Thiago Augusto Hernandes
2016-01-01
To propose and present a stratification of Brazilian municipalities into homogeneous groups for evaluation studies of health management performance. This was a methodological study, with selected indicators that classify municipalities according to conditions that influence health management and population size; data for the year 2010 were collected from demographic and health databases, and correlation tests and factor analysis were used. Seven strata were identified - Large-sized; Medium-sized with favorable, regular or unfavorable influences; and Small-sized with favorable, regular or unfavorable influences. Municipalities with favorable influences were concentrated in strata with better purchasing power and funding, whereas municipalities with unfavorable influences were concentrated in the North and Northeast regions. The proposed classification grouped similar municipalities with regard to factors that influence health management, which allowed the identification of comparable groups of municipalities, constituting a consistent alternative for performance evaluation studies.
Enrollment in prescription drug insurance: the interaction of numeracy and choice set size.
Szrek, Helena; Bundorf, M Kate
2014-04-01
To determine how choice set size affects decision quality among individuals of different levels of numeracy choosing prescription drug plans. Members of an Internet-enabled panel age 65 and over were randomly assigned to sets of prescription drug plans varying in size from 2 to 16 plans from which they made a hypothetical choice. They answered questions about enrollment likelihood and the costs and benefits of their choice. The measure of decision quality was enrollment likelihood among those for whom enrollment was beneficial. Enrollment likelihood by numeracy and choice set size was calculated. A model of moderated mediation was analyzed to understand the role of numeracy as a moderator of the relationship between the number of plans and the quality of the enrollment decision and the roles of the costs and benefits in mediating that relationship. More numerate adults made better decisions than less numerate adults when choosing among a small number of alternatives but not when choice sets were larger. Choice set size had little effect on decision making of less numerate adults. Differences in decision making costs between more and less numerate adults helped explain the effect of choice set size on decision quality. Interventions to improve decision making in the context of Medicare Part D may differentially affect lower and higher numeracy adults. The conflicting results on choice overload in the psychology literature may be explained in part by differences amongst individuals in how they respond to choice set size.
Signal detection evidence for limited capacity in visual search
Fencsik, David E.; Flusberg, Stephen J.; Horowitz, Todd S.; Wolfe, Jeremy M.
2014-01-01
The nature of capacity limits (if any) in visual search has been a topic of controversy for decades. In 30 years of work, researchers have attempted to distinguish between two broad classes of visual search models. Attention-limited models have proposed two stages of perceptual processing: an unlimited-capacity preattentive stage, and a limited-capacity selective attention stage. Conversely, noise-limited models have proposed a single, unlimited-capacity perceptual processing stage, with decision processes influenced only by stochastic noise. Here, we use signal detection methods to test a strong prediction of attention-limited models. In standard attention-limited models, performance of some searches (feature searches) should only be limited by a preattentive stage. Other search tasks (e.g., spatial configuration search for a “2” among “5”s) should be additionally limited by an attentional bottleneck. We equated average accuracies for a feature and a spatial configuration search over set sizes of 1–8 for briefly presented stimuli. The strong prediction of attention-limited models is that, given overall equivalence in performance, accuracy should be better on the spatial configuration search than on the feature search for set size 1, and worse for set size 8. We confirm this crossover interaction and show that it is problematic for at least one class of one-stage decision models. PMID:21901574
NASA Astrophysics Data System (ADS)
Hochberger, Juergen; Bredt, Marion; Mueller, Gudrun; Hahn, Eckhart G.; Ell, Christian
1993-05-01
In the following study three different pulsed laser lithotripsy systems were compared for the fine fragmentation of identical sets of natural and synthetic gallstones in vitro. Using a pulsed coumarin dye laser (504 nm), a pulsed rhodamine 6G dye laser (595 nm), and a pulsed Alexandrite laser (755 nm), a total of 184 concrements of known chemical composition, size, and weight were disintegrated to a fragment size of
1993-06-01
IFF subsystem in size, weight, cabling requirements, and provides the same audio feedback to the gunner. The Training Set Guided Missile was the...brought to the HIS site by an instructor who in no way interfered with the test or coached them during the test. The Southwest Asia veterans were brought...and experience group [F((, 14) = 8.04, p<.05]. The High Experience Group had a higher kill rate in MrPPO than in MDPP4, whereas this was reversed for
Puechmaille, Sebastien J
2016-05-01
Inferences of population structure and more precisely the identification of genetically homogeneous groups of individuals are essential to the fields of ecology, evolutionary biology and conservation biology. Such population structure inferences are routinely investigated via the program structure implementing a Bayesian algorithm to identify groups of individuals at Hardy-Weinberg and linkage equilibrium. While the method is performing relatively well under various population models with even sampling between subpopulations, the robustness of the method to uneven sample size between subpopulations and/or hierarchical levels of population structure has not yet been tested despite being commonly encountered in empirical data sets. In this study, I used simulated and empirical microsatellite data sets to investigate the impact of uneven sample size between subpopulations and/or hierarchical levels of population structure on the detected population structure. The results demonstrated that uneven sampling often leads to wrong inferences on hierarchical structure and downward-biased estimates of the true number of subpopulations. Distinct subpopulations with reduced sampling tended to be merged together, while at the same time, individuals from extensively sampled subpopulations were generally split, despite belonging to the same panmictic population. Four new supervised methods to detect the number of clusters were developed and tested as part of this study and were found to outperform the existing methods using both evenly and unevenly sampled data sets. Additionally, a subsampling strategy aiming to reduce sampling unevenness between subpopulations is presented and tested. These results altogether demonstrate that when sampling evenness is accounted for, the detection of the correct population structure is greatly improved. © 2016 John Wiley & Sons Ltd.
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimising rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to an accuracy of 59.86% for EWB. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide linguistic rules describing how the demographic parameters drive the risk of HIV.
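As an illustration of the granularisation step, the sketch below hill-climbs over partition cut points, scoring each candidate partition with a user-supplied accuracy function. The rough-set rule induction that would supply that score is abstracted behind the accuracy_of callback, and all parameters are illustrative rather than the authors' settings.

```python
# Hedged sketch of hill climbing over rough-set partition cut points.
# `accuracy_of` stands in for rough-set rule induction and evaluation,
# which is not reproduced here.
import numpy as np

def hill_climb_partitions(values, n_bins, accuracy_of, iters=200, step=0.05,
                          rng=np.random.default_rng(0)):
    lo, hi = values.min(), values.max()
    cuts = np.sort(rng.uniform(lo, hi, n_bins - 1))     # initial random cut points
    best = accuracy_of(cuts)
    for _ in range(iters):
        # Perturb the cut points and keep the move only if accuracy improves.
        cand = np.sort(cuts + rng.normal(0, step * (hi - lo), cuts.size))
        cand = np.clip(cand, lo, hi)
        score = accuracy_of(cand)
        if score > best:
            cuts, best = cand, score
    return cuts, best
```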
Word game bingo: a behavioral treatment package for improving textual responding to sight words.
Kirby, K C; Holborn, S W; Bushby, H T
1981-01-01
The efficacy of word game bingo for the acquisition and retention of sight word reading was tested with six third-grade students identified as deficient in reading skills. The design was a modified multiple baseline in which treatment was implemented over 3 of 4 word sets and terminated on earlier sets when commencing treatment on later sets. Four sets of bingo cards were constructed on 7 x 9 cm paper divided into 25 equal-sized boxes. Sight words of each set were randomly placed into 24 of these boxes (the center box was marked "free"). Bingo winners were given tokens which were traded weekly for reinforcing activities. Noticeable improvements occurred for the word sets receiving the game treatment (sets A to C). Mean improvement from baseline to treatment was approximately 30 percentage points, and terminal levels of correct responding exceeded 90%. Several variations of the game were suggested for future research, and word game bingo was advocated as an effective behavioral technique for teachers to train sight word reading. PMID:7298541
Parker, David; Belaud-Rotureau, Marc-Antoine
2014-01-01
Break-apart fluorescence in situ hybridization (FISH) is the gold standard test for anaplastic lymphoma kinase (ALK) gene rearrangement. However, this methodology often is assumed to be expensive and potentially cost-prohibitive given the low prevalence of ALK-positive non-small cell lung cancer (NSCLC) cases. To more accurately estimate the cost of ALK testing by FISH, we developed a micro-cost model that accounts for all cost elements of the assay, including laboratory reagents, supplies, capital equipment, technical and pathologist labor, and the acquisition cost of the commercial test and associated reagent kits and controls. By applying a set of real-world base-case parameter values, we determined that the cost of a single ALK break-apart FISH test result is $278.01. Sensitivity analysis on the parameters of batch size, testing efficiency, and the cost of the commercial diagnostic testing products revealed that the cost per result is highly sensitive to batch size, but much less so to efficiency or product cost. This implies that ALK testing by FISH will be most cost effective when performed in high-volume centers. Our results indicate that testing cost may not be the primary determinant of crizotinib (Xalkori(®)) treatment cost effectiveness, and suggest that testing cost is an insufficient reason to limit the use of FISH testing for ALK rearrangement.
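The batch-size sensitivity reported above follows from simple cost arithmetic: per-batch items (controls, set-up labour) are amortized over the specimens processed together. The sketch below illustrates this with placeholder dollar figures that are not the study's base-case inputs.

```python
# Illustrative cost-per-result arithmetic for a batched FISH assay.
# All dollar figures are placeholders, not the study's base-case parameters.
def cost_per_result(batch_size, per_test_cost=150.0, per_batch_fixed=350.0):
    # Per-batch items (controls, set-up labour) are shared across the batch.
    return per_test_cost + per_batch_fixed / batch_size

for n in (1, 4, 12):
    print(n, round(cost_per_result(n), 2))
# Cost per result falls steeply as the batch grows, which is why testing is
# most cost effective in high-volume centres.
```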
NASA Technical Reports Server (NTRS)
Hughes, William O.; McNelis, Anne M.
2010-01-01
The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
Short-term Time Step Convergence in a Climate Model
Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...
2015-02-11
A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
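The observed convergence order can be estimated by regressing log(RMS error) on log(time step). The sketch below shows this fit with synthetic numbers standing in for the reported RMS temperature differences; the data values are placeholders, not the paper's results.

```python
# Estimate the observed order of convergence from (time step, RMS error) pairs
# by a least-squares fit of log(error) = p * log(dt) + c. The numbers below are
# synthetic placeholders, not the paper's data.
import numpy as np

def convergence_order(dts, rms_errors):
    p, _ = np.polyfit(np.log(dts), np.log(rms_errors), 1)
    return p

dts = np.array([1800.0, 900.0, 450.0, 225.0])
errs = np.array([0.40, 0.30, 0.23, 0.17])           # slow, sub-first-order decay
print(round(convergence_order(dts, errs), 2))        # prints about 0.41
```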
Use of simulation to compare the performance of minimization with stratified blocked randomization.
Toorawa, Robert; Adena, Michael; Donovan, Mark; Jones, Steve; Conlon, John
2009-01-01
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright (c) 2008 John Wiley & Sons, Ltd.
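For readers unfamiliar with the allocation rule being simulated, the sketch below shows a Pocock-Simon-style minimization step with a random element: the arm that minimizes total marginal imbalance is chosen with probability p_best, otherwise an arm is picked at random. Factor names and probabilities are illustrative, and the balance comparison against stratified randomization would be run around this function.

```python
# Sketch of minimization with a random element (Pocock-Simon style).
# Factor levels and the assignment probability are illustrative choices.
import random
from collections import defaultdict

def minimization_assign(patient_factors, counts, arms=("A", "B"), p_best=0.8):
    # counts[(factor, level, arm)] tracks how many patients with that factor
    # level are already on each arm.
    imbalance = {}
    for arm in arms:
        score = 0
        for factor, level in patient_factors.items():
            hypothetical = {a: counts[(factor, level, a)] + (a == arm) for a in arms}
            score += max(hypothetical.values()) - min(hypothetical.values())
        imbalance[arm] = score
    best = min(imbalance, key=imbalance.get)
    arm = best if random.random() < p_best else random.choice(arms)
    for factor, level in patient_factors.items():
        counts[(factor, level, arm)] += 1
    return arm

counts = defaultdict(int)
print(minimization_assign({"site": 3, "sex": "F"}, counts))
```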
Diesel particulate emissions from used cooking oil biodiesel.
Lapuerta, Magín; Rodríguez-Fernández, José; Agudelo, John R
2008-03-01
Two different biodiesel fuels, obtained from waste cooking oils with different previous uses, were tested in a DI diesel commercial engine either pure or in 30% and 70% v/v blends with a reference diesel fuel. Tests were performed under a set of engine operating conditions corresponding to typical road conditions. Although the engine efficiency was not significantly affected, an increase in fuel consumption with the biodiesel concentration was observed. This increase was proportional to the decrease in the heating value. The main objective of the work was to study the effect of biodiesel blends on particulate emissions, measured in terms of mass, optical effect (smoke opacity) and size distributions. A sharp decrease was observed in both smoke and particulate matter emissions as the biodiesel concentration was increased. The mean particle size was also reduced with the biodiesel concentration, but no significant increases were found in the range of the smallest particles. No important differences in emissions were found between the two tested biodiesel fuels.
Microtubule Dynamics Scale with Cell Size to Set Spindle Length and Assembly Timing.
Lacroix, Benjamin; Letort, Gaëlle; Pitayu, Laras; Sallé, Jérémy; Stefanutti, Marine; Maton, Gilliane; Ladouceur, Anne-Marie; Canman, Julie C; Maddox, Paul S; Maddox, Amy S; Minc, Nicolas; Nédélec, François; Dumont, Julien
2018-05-21
Successive cell divisions during embryonic cleavage create increasingly smaller cells, so intracellular structures must adapt accordingly. Mitotic spindle size correlates with cell size, but the mechanisms for this scaling remain unclear. Using live cell imaging, we analyzed spindle scaling during embryo cleavage in the nematode Caenorhabditis elegans and sea urchin Paracentrotus lividus. We reveal a common scaling mechanism, where the growth rate of spindle microtubules scales with cell volume, which explains spindle shortening. Spindle assembly timing is, however, constant throughout successive divisions. Analyses in silico suggest that controlling the microtubule growth rate is sufficient to scale spindle length and maintain a constant assembly timing. We tested our in silico predictions to demonstrate that modulating cell volume or microtubule growth rate in vivo induces a proportional spindle size change. Our results suggest that scalability of the microtubule growth rate when cell size varies adapts spindle length to cell volume. Copyright © 2018 Elsevier Inc. All rights reserved.
Word recognition using a lexicon constrained by first/last character decisions
NASA Astrophysics Data System (ADS)
Zhao, Sheila X.; Srihari, Sargur N.
1995-03-01
In lexicon-based recognition of machine-printed word images, the size of the lexicon can be quite extensive, and recognition performance is closely related to lexicon size: performance drops quickly as the lexicon grows. Here, we present an algorithm to improve word recognition performance by reducing the size of the given lexicon. The algorithm utilizes the information provided by the first and last characters of a word. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The candidates selected from the classifier's results define the sub-lexicon. A word shape analysis algorithm is then applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images covering a wide range of print types and qualities.
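The lexicon-reduction step can be illustrated in a few lines: keep only entries whose first and last characters fall within the character classifier's candidate lists. The candidate sets below are made up for illustration, and the subsequent word-shape ranking is not shown.

```python
# Sketch of constraining a lexicon with first/last character decisions.
# `first_candidates` and `last_candidates` would come from a character
# classifier's top-N outputs; here they are illustrative values.
def reduce_lexicon(lexicon, first_candidates, last_candidates):
    first = {c.lower() for c in first_candidates}
    last = {c.lower() for c in last_candidates}
    return [w for w in lexicon if w[0].lower() in first and w[-1].lower() in last]

lexicon = ["letter", "ladder", "litter", "banner", "linger"]
print(reduce_lexicon(lexicon, first_candidates="lL", last_candidates="rn"))
# ['letter', 'ladder', 'litter', 'linger']
```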
Sweeney, Sedona; Mosha, Jacklin F; Terris-Prestholt, Fern; Sollis, Kimberly A; Kelly, Helen; Changalucha, John; Peeling, Rosanna W
2014-08-01
To determine the costs of Rapid Syphilis Test (RSTs) as compared with rapid plasma reagin (RPR) when implemented in a Tanzanian setting, and to determine the relative impact of a quality assurance (QA) system on the cost of RST implementation. The incremental costs for RPR and RST screening programmes in existing antenatal care settings in Geita District, Tanzania were collected for 9 months in subsequent years from nine health facilities that varied in size, remoteness and scope of antenatal services. The costs per woman tested and treated were estimated for each facility. A sensitivity analysis was constructed to determine the impact of parameter and model uncertainty. In surveyed facilities, a total of 6362 women were tested with RSTs compared with 224 tested with RPR. The range of unit costs was $1.76-$3.13 per woman screened and $12.88-$32.67 per woman treated. Unit costs for the QA system came to $0.51 per woman tested, of which 50% were attributed to salaries and transport for project personnel. Our results suggest that rapid syphilis diagnostics are very inexpensive in this setting and can overcome some critical barriers to ensuring universal access to syphilis testing and treatment. The additional costs for implementation of a quality system were found to be relatively small, and could be reduced through alterations to the programme design. Given the potential for a quality system to improve quality of diagnosis and care, we recommend that QA activities be incorporated into RST roll-out. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
Twelve- to 14-Month-Old Infants Can Predict Single-Event Probability with Large Set Sizes
ERIC Educational Resources Information Center
Denison, Stephanie; Xu, Fei
2010-01-01
Previous research has revealed that infants can reason correctly about single-event probabilities with small but not large set sizes (Bonatti, 2008; Teglas "et al.", 2007). The current study asks whether infants can make predictions regarding single-event probability with large set sizes using a novel procedure. Infants completed two trials: A…
Assessment of resampling methods for causality testing: A note on the US inflation behavior
Papana, Angeliki; Kyrtsou, Catherine; Kugiumtzis, Dimitris; Diks, Cees
2017-01-01
Different resampling methods for the null hypothesis of no Granger causality are assessed in the setting of multivariate time series, taking into account that the driving-response coupling is conditioned on the other observed variables. As appropriate test statistic for this setting, the partial transfer entropy (PTE), an information and model-free measure, is used. Two resampling techniques, time-shifted surrogates and the stationary bootstrap, are combined with three independence settings (giving a total of six resampling methods), all approximating the null hypothesis of no Granger causality. In these three settings, the level of dependence is changed, while the conditioning variables remain intact. The empirical null distribution of the PTE, as the surrogate and bootstrapped time series become more independent, is examined along with the size and power of the respective tests. Additionally, we consider a seventh resampling method by contemporaneously resampling the driving and the response time series using the stationary bootstrap. Although this case does not comply with the no causality hypothesis, one can obtain an accurate sampling distribution for the mean of the test statistic since its value is zero under H0. Results indicate that as the resampling setting gets more independent, the test becomes more conservative. Finally, we conclude with a real application. More specifically, we investigate the causal links among the growth rates for the US CPI, money supply and crude oil. Based on the PTE and the seven resampling methods, we consistently find that changes in crude oil cause inflation conditioning on money supply in the post-1986 period. However this relationship cannot be explained on the basis of traditional cost-push mechanisms. PMID:28708870
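A minimal sketch of the time-shifted surrogate scheme discussed above is given below: the driver series is circularly shifted to break its coupling with the response while preserving its own dynamics, and the observed statistic is ranked against the surrogate distribution. The PTE estimator itself is abstracted behind the statistic callback; the shift range and surrogate count are illustrative choices.

```python
# Sketch of a time-shifted surrogate test for Granger-type causality.
# `statistic` stands in for the partial transfer entropy estimator, which is
# not reproduced here.
import numpy as np

def surrogate_pvalue(driver, response, conditioning, statistic, n_surr=100,
                     rng=np.random.default_rng(0)):
    t0 = statistic(driver, response, conditioning)
    n = len(driver)
    surr = []
    for _ in range(n_surr):
        shift = rng.integers(n // 10, n - n // 10)   # avoid trivially small shifts
        surr.append(statistic(np.roll(driver, shift), response, conditioning))
    # One-sided p-value: how often surrogates reach the observed statistic.
    return (1 + sum(s >= t0 for s in surr)) / (n_surr + 1)
```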
The development of methods for predicting and measuring distribution patterns of aerial sprays
NASA Technical Reports Server (NTRS)
Ormsbee, A. I.; Bragg, M. B.; Maughmer, M. D.
1979-01-01
The capability of conducting scale model experiments which involve the ejection of small particles into the wake of an aircraft close to the ground is developed. A set of relationships used to scale small-sized dispersion studies to full-size results is experimentally verified and, with some qualifications, basic deposition patterns are presented. In the process of validating these scaling laws, the basic experimental techniques used in conducting such studies, both with and without an operational propeller, were developed. The procedures that evolved are outlined. The envelope of test conditions that can be accommodated in the Langley Vortex Research Facility, developed theoretically, is verified using a series of vortex trajectory experiments that help to define the limitations due to wall interference effects for models of different sizes.
Lee, A.; McVey, J.; Faustino, P.; Lute, S.; Sweeney, N.; Pawar, V.; Khan, M.; Brorson, K.; Hussong, D.
2010-01-01
Filters rated as having a 0.2-μm pore size (0.2-μm-rated filters) are used in laboratory and manufacturing settings for diverse applications of bacterial and particle removal from process fluids, analytical test articles, and gasses. Using Hydrogenophaga pseudoflava, a diminutive bacterium with an unusual geometry (i.e., it is very thin), we evaluated passage through 0.2-μm-rated filters and the impact of filtration process parameters and bacterial challenge density. We show that consistent H. pseudoflava passage occurs through 0.2-μm-rated filters. This is in contrast to an absence of significant passage of nutritionally challenged bacteria that are of similar size (i.e., hydrodynamic diameter) but dissimilar geometry. PMID:19966023
Measuring sperm backflow following female orgasm: a new method
King, Robert; Dempsey, Maria; Valentine, Katherine A.
2016-01-01
Background: Human female orgasm is a vexed question in the field, while there is credible evidence of cryptic female choice that has many hallmarks of orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, the realm of oxytocin-mediated sperm retention mechanisms seems to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. Method: A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced, combined with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. Results: The sperm flowback (simulated) was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001, Cohen's d=3.97, effect size r=0.89, indicating a large effect. Conclusions: This method could allow females to test an aspect of sexual response that has been linked to lowered fertility in a home setting with minimal training. It needs to be replicated with a larger sample size. PMID:27799082
Measuring sperm backflow following female orgasm: a new method.
King, Robert; Dempsey, Maria; Valentine, Katherine A
2016-01-01
Human female orgasm is a vexed question in the field, while there is credible evidence of cryptic female choice that has many hallmarks of orgasm in other species. Our initial goal was to produce a proof of concept for allowing females to study an aspect of infertility in a home setting, specifically by aligning the study of human infertility and increased fertility with the study of other mammalian fertility. In the latter case, the realm of oxytocin-mediated sperm retention mechanisms seems to be at work in terms of ultimate function (differential sperm retention), while the proximate function (rapid transport or cervical tenting) remains unresolved. A repeated measures design using an easily taught technique in a natural setting was used. Participants were a small (n=6), non-representative sample of females. A sperm simulant was introduced, combined with an orgasm-producing technique using a vibrator/home massager and other easily supplied materials. The sperm flowback (simulated) was measured using a technique that can be used in a home setting. There was a significant difference in simulant retention between the orgasm (M=4.08, SD=0.17) and non-orgasm (M=3.30, SD=0.22) conditions; t(5)=7.02, p=0.001, Cohen's d=3.97, effect size r=0.89, indicating a large effect. This method could allow females to test an aspect of sexual response that has been linked to lowered fertility in a home setting with minimal training. It needs to be replicated with a larger sample size.
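For reference, the statistics quoted above (a paired t-test with Cohen's d and the equivalent effect-size r) can be computed as in the sketch below; the retention scores are hypothetical, not the study's raw data, and the paired-design convention for d is only one of several in use.

    import numpy as np
    from scipy import stats

    def paired_t_and_effect_size(cond_a, cond_b):
        """Paired t-test plus Cohen's d (paired-design convention) and the
        equivalent effect-size r for one pair of conditions per participant."""
        diff = np.asarray(cond_a, float) - np.asarray(cond_b, float)
        t, p = stats.ttest_rel(cond_a, cond_b)
        d = diff.mean() / diff.std(ddof=1)              # mean difference in SD units
        r = np.sqrt(t**2 / (t**2 + len(diff) - 1))      # r from t and its degrees of freedom
        return t, p, d, r

    # hypothetical retention scores for six participants
    orgasm  = [4.1, 4.3, 3.9, 4.0, 4.2, 4.0]
    control = [3.2, 3.5, 3.1, 3.3, 3.4, 3.3]
    print(paired_t_and_effect_size(orgasm, control))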
Assessment of burning characteristics of aircraft interior materials
NASA Technical Reports Server (NTRS)
Grand, A. F.; Valys, A. J.
1981-01-01
The performance of a series of seat cushion design constructions was compared based on their heat and smoke release characteristics. Tests were conducted in a room-size calorimeter instrumented for measuring weight loss, rate of heat release, smoke and volatile decomposition products, and cumulative energy release. Baseline data were obtained from burn tests conducted on commercial airline salvage sets as a comparison with more advanced seat designs. A toxicological assessment of smoke and fire gases involved exposing test animals and ascertaining their biological responses. Relative toxicological hazards of the combustion gases are discussed based on the animal response studies and the analysis of the combustion gases.
ERIC Educational Resources Information Center
Alharbi, Abeer A.; Stoet, Gijsbert
2017-01-01
There is no consensus among academics about whether children benefit from smaller classes. We analysed the data from the 2012 Programme for International Student Assessment (PISA) to test if smaller classes lead to higher performance. Advantages of using this data set are not only its size (478,120 15-year old students in 63 nations) and…
Interference, aging, and visuospatial working memory: the role of similarity.
Rowe, Gillian; Hasher, Lynn; Turcotte, Josée
2010-11-01
Older adults' performance on working memory (WM) span tasks is known to be negatively affected by the buildup of proactive interference (PI) across trials. PI has been reduced in verbal tasks and performance increased by presenting distinctive items across trials. In addition, reversing the order of trial presentation (i.e., starting with the longest sets first) has been shown to reduce PI in both verbal and visuospatial WM span tasks. We considered whether making each trial visually distinct would improve older adults' visuospatial WM performance, and whether combining the 2 PI-reducing manipulations, distinct trials and reversed order of presentation, would prove additive, thus providing even greater benefit. Forty-eight healthy older adults (age range = 60-77 years) completed 1 of 3 versions of a computerized Corsi block test. For 2 versions of the task, trials were either all visually similar or all visually distinct, and were presented in the standard ascending format (shortest set size first). In the third version, visually distinct trials were presented in a reverse order of presentation (longest set size first). Span scores were reliably higher in the ascending version for visually distinct compared with visually similar trials, F(1, 30) = 4.96, p = .03, η² = .14. However, combining distinct trials and a descending format proved no more beneficial than administering the descending format alone. Our findings suggest that a more accurate measurement of the visuospatial WM span scores of older adults (and possibly neuropsychological patients) might be obtained by reducing within-test interference.
Statistical evaluation of synchronous spike patterns extracted by frequent item set mining
Torre, Emiliano; Picado-Muiño, David; Denker, Michael; Borgelt, Christian; Grün, Sonja
2013-01-01
We recently proposed frequent itemset mining (FIM) as a method to perform an optimized search for patterns of synchronous spikes (item sets) in massively parallel spike trains. This search outputs the occurrence count (support) of individual patterns that are not trivially explained by the counts of any superset (closed frequent item sets). The number of patterns found by FIM makes direct statistical tests infeasible due to severe multiple testing. To overcome this issue, we proposed to test the significance not of individual patterns, but instead of their signatures, defined as the pairs of pattern size z and support c. Here, we derive in detail a statistical test for the significance of the signatures under the null hypothesis of full independence (pattern spectrum filtering, PSF) by means of surrogate data. As a result, injected spike patterns that mimic assembly activity are well detected, yielding a low false negative rate. However, this approach is prone to additionally classify patterns resulting from chance overlap of real assembly activity and background spiking as significant. These patterns represent false positives with respect to the null hypothesis of having one assembly of given signature embedded in otherwise independent spiking activity. We propose the additional method of pattern set reduction (PSR) to remove these false positives by conditional filtering. By employing stochastic simulations of parallel spike trains with correlated activity in form of injected spike synchrony in subsets of the neurons, we demonstrate for a range of parameter settings that the analysis scheme composed of FIM, PSF and PSR allows to reliably detect active assemblies in massively parallel spike trains. PMID:24167487
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNP) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
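The ranking-and-panel workflow can be sketched with scikit-learn's ordinary random forest (the regularized and guided variants evaluated in the study are not part of scikit-learn); `genotypes` is an individuals-by-SNPs matrix and `populations` holds the known origin labels, both hypothetical names.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def rank_snps_and_score(genotypes, populations, panel_sizes=(50, 100, 200), seed=0):
        """Rank SNPs by random-forest importance, then estimate self-assignment
        accuracy for panels of the top-ranked markers via cross-validation."""
        rf = RandomForestClassifier(n_estimators=500, random_state=seed)
        rf.fit(genotypes, populations)
        order = np.argsort(rf.feature_importances_)[::-1]   # most informative SNPs first
        accuracy = {}
        for k in panel_sizes:
            panel = genotypes[:, order[:k]]
            clf = RandomForestClassifier(n_estimators=500, random_state=seed)
            accuracy[k] = cross_val_score(clf, panel, populations, cv=5).mean()
        return order, accuracy

In practice, scoring the panels on individuals held out from the ranking step would give a less optimistic accuracy estimate than reusing the full data set as above.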
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
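A minimal sketch of thin-plate-spline interpolation over a nonuniformly sampled training set, using SciPy's RBFInterpolator; the two-dimensional "thumbnail features" and their linear relation to the illuminant chromaticity are made-up stand-ins for the image statistics used in the paper.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # training set: one feature vector per image (2-D here for illustration) paired
    # with the known illuminant chromaticity (r, g) of that image
    rng = np.random.default_rng(0)
    features = rng.random((200, 2))
    illum_rg = 0.3 + 0.4 * features + 0.02 * rng.standard_normal((200, 2))

    # thin-plate spline interpolant over the nonuniformly sampled feature space
    tps = RBFInterpolator(features, illum_rg, kernel="thin_plate_spline", smoothing=1e-3)

    # estimate the illuminant chromaticity of a new image from its features
    print(tps(rng.random((1, 2))))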
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper provides discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false (POF) calls, while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered to be a conservative estimate of the flaw size with minimum 90% probability and 95% confidence. This flaw size is denoted as α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, i.e., the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution. The difference between the median or average of the 29 flaw sizes and α90 is also expressed as a proportion of the standard deviation of the probability density distribution. In general, it is concluded that, if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, then the 29-flaw set can be optimized to meet the minimum required PPD, the maximum allowable POF, requirements on flaw-size tolerance about the mean flaw size, and flaw-size detectability requirements. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
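The binomial arithmetic behind the 29-flaw point-estimate demonstration can be sketched as follows; `prob_pass_demo` is a hypothetical helper, and the pass criterion (at most a given number of missed detections) is an assumption made for illustration.

    from math import comb

    def prob_pass_demo(pod, n_flaws=29, max_misses=0):
        """Probability of passing a point-estimate demonstration, i.e. at most
        `max_misses` missed detections out of `n_flaws`, given a true POD."""
        q = 1.0 - pod
        return sum(comb(n_flaws, k) * q**k * pod**(n_flaws - k)
                   for k in range(max_misses + 1))

    # the classic 29-of-29 criterion: a technique whose POD is exactly 0.90 passes
    # less than 5% of the time (0.9**29 ~= 0.047), which is why passing the
    # demonstration supports 90% POD with 95% confidence
    print(prob_pass_demo(0.90))
    # the probability of passing (PPD) grows as the true POD exceeds 0.90
    for pod in (0.95, 0.98, 0.99):
        print(pod, round(prob_pass_demo(pod), 3))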
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines.
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines, tested at two locations with three replicates each, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate that a minimum marker set of 1,000 is needed to decrease the risk of low prediction accuracy for some traits or some families.
Genomic Prediction of Seed Quality Traits Using Advanced Barley Breeding Lines
Nielsen, Nanna Hellum; Jahoor, Ahmed; Jensen, Jens Due; Orabi, Jihad; Cericola, Fabio; Edriss, Vahid; Jensen, Just
2016-01-01
Genomic selection was recently introduced in plant breeding. The objective of this study was to develop genomic prediction for important seed quality parameters in spring barley. The aim was to predict breeding values without expensive phenotyping of large sets of lines. A total of 309 advanced spring barley lines, tested at two locations with three replicates each, were phenotyped, and each line was genotyped with the Illumina iSelect 9K barley chip. The population originated from two different breeding sets, which were phenotyped in two different years. Phenotypic measurements considered were: seed size, protein content, protein yield, test weight and ergosterol content. A leave-one-out cross-validation strategy revealed high prediction accuracies ranging between 0.40 and 0.83. Prediction across breeding sets resulted in reduced accuracies compared to the leave-one-out strategy. Furthermore, predicting across full- and half-sib families resulted in reduced prediction accuracies. Additionally, predictions were performed using reduced marker sets and reduced training population sets. In conclusion, using fewer than 200 lines in the training set can result in low prediction accuracy, and the accuracy will then be highly dependent on the family structure of the selected training set. However, the results also indicate that relatively small training sets (200 lines) are sufficient for genomic prediction in commercial barley breeding. In addition, our results indicate that a minimum marker set of 1,000 is needed to decrease the risk of low prediction accuracy for some traits or some families. PMID:27783639
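The leave-one-out strategy can be sketched with ridge regression as a simple stand-in for a genomic prediction model (the study's actual model is not reproduced here); the marker matrix, marker effects and trait values below are simulated.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def loo_prediction_accuracy(markers, phenotype, alpha=1.0):
        """Leave-one-out genomic prediction accuracy: correlation between observed
        phenotypes and ridge-regression predictions from SNP markers."""
        preds = cross_val_predict(Ridge(alpha=alpha), markers, phenotype, cv=LeaveOneOut())
        return np.corrcoef(preds, phenotype)[0, 1]

    # simulated data: 300 lines scored at 1,000 markers, one seed-quality trait
    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(300, 1000)).astype(float)   # 0/1/2 allele dosages
    y = X @ (rng.standard_normal(1000) * 0.05) + rng.standard_normal(300)
    print(round(loo_prediction_accuracy(X, y), 2))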
Genome size evolution in relation to leaf strategy and metabolic rates revisited.
Beaulieu, Jeremy M; Leitch, Ilia J; Knight, Charles A
2007-03-01
It has been proposed that having too much DNA may carry physiological consequences for plants. The strong correlation between DNA content, cell size and cell division rate could lead to predictable morphological variation in plants, including a negative relationship with leaf mass per unit area (LMA). In addition, the possible increased demand for resources in species with high DNA content may have downstream effects on maximal metabolic efficiency, including decreased metabolic rates. Tests were made for genome size-dependent variation in LMA and metabolic rates (mass-based photosynthetic rate and dark respiration rate) using our own measurements and data from a plant functional trait database (Glopnet). These associations were tested using two metrics of genome size: bulk DNA amount (2C DNA) and monoploid genome size (1Cx DNA). The data were analysed using an evolutionary framework that included a regression analysis and independent contrasts using a phylogenetic tree with estimates of molecular diversification times. A contribution index for the LMA data set was also calculated to determine which divergences have the greatest influence on the relationship between genome size and LMA. A significant negative association was found between bulk DNA amount and LMA in angiosperms. This was primarily a result of influential divergences that may represent early shifts in growth form. However, divergences in bulk DNA amount were positively associated with divergences in LMA, suggesting that the relationship may be indirect and mediated through other traits directly related to genome size. There was a significant negative association between genome size and metabolic rates that was driven by a basal divergence between angiosperms and gymnosperms; no significant independent contrast results were found. Therefore, it is concluded that genome size-dependent constraints acting on metabolic efficiency may not exist within seed plants.
Lack of Set Size Effects in Spatial Updating: Evidence for Offline Updating
ERIC Educational Resources Information Center
Hodgson, Eric; Waller, David
2006-01-01
Four experiments required participants to keep track of the locations of (i.e., update) 1, 2, 3, 4, 6, 8, 10, or 15 target objects after rotating. Across all conditions, updating was unaffected by set size. Although some traditional set size effects (i.e., a linear increase of latency with memory load) were observed under some conditions, these…
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem
Wang, Jun Yi; Ngo, Michael M.; Hessl, David; Hagerman, Randi J.; Rivera, Susan M.
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer’s segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well. PMID:27213683
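The Dice coefficient used above to quantify spatial overlap is simple to compute from two binary masks, as in the sketch below; the toy masks are arbitrary.

    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Spatial overlap of two binary segmentations: 2|A ∩ B| / (|A| + |B|)."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # toy example: two slightly offset box-shaped "cerebellum" masks
    auto = np.zeros((10, 10, 10), dtype=bool); auto[2:8, 2:8, 2:8] = True
    manual = np.zeros((10, 10, 10), dtype=bool); manual[2:8, 2:8, 3:9] = True
    print(round(dice_coefficient(auto, manual), 3))   # 0.833: high but imperfect overlap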
Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi
2011-06-01
This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., a cardinal number of the category set). The hypothesis was tested in an experiment (N = 40), where the focus of attention on prescribed verbal categories was manipulated. The obtained data gave support to the hypothesis: category-focusing effects were found in three experimental tasks (regarding the category of "food", "height", and "income"). The validity of the hypothesis was discussed based on the results.
Validity of photographs for food portion estimation in a rural West African setting.
Huybregts, L; Roberfroid, D; Lachat, C; Van Camp, J; Kolsteren, P
2008-06-01
The objective was to validate food photographs for food portion size estimation of frequently consumed dishes, to be used in a 24-hour recall food consumption study of pregnant women in a rural environment in Burkina Faso. This food intake study is part of an intervention evaluating the efficacy of prenatal micronutrient supplementation on birth outcomes. Subjects were women of childbearing age (15-45 years). A food photograph album containing four photographs of food portions per food item was compiled for eight selected food items. Subjects were presented two food items each in the morning and two in the afternoon. These foods were weighed to the exact weight of a food depicted in one of the photographs and were served in the same receptacles. The next day, another fieldworker presented the food photographs to the subjects to test their ability to choose the correct photograph. The correct photograph out of the four proposed was chosen in 55% of 1028 estimations. For each food, the proportions of underestimating and overestimating participants were balanced, except for rice and couscous. On a group level, mean differences between served and estimated portion sizes were between -8.4% and 6.3%. Subjects who attended school were almost twice as likely to choose the correct photograph. The portion size served (small vs. largest sizes) had a significant influence on portion estimation ability. The results from this study indicate that in a West African rural setting, food photographs can be a valuable tool for the quantification of food portion size at the group level.
Cancer diagnostics using neural network sorting of processed images
NASA Astrophysics Data System (ADS)
Wyman, Charles L.; Schreeder, Marshall; Grundy, Walt; Kinser, Jason M.
1996-03-01
A combination of image processing with neural network sorting was conducted to demonstrate the feasibility of automated cervical smear screening. Nuclei were isolated to generate a series of data points relating to the density and size of individual nuclei. This was followed by segmentation to isolate entire cells for subsequent generation of data points to bound the size of the cytoplasm. Data points were taken on as many as ten cells per image frame and included correlation against a series of filters providing size and density readings on nuclei. Additional point data were taken on nuclei images to refine size information and on whole cells to bound the size of the cytoplasm; twenty data points per assessed cell were generated. These data point sets, designated as neural tensors, comprise the inputs for training and use of a unique neural network to sort the images and identify those indicating evidence of disease. The neural network, named the Fast Analog Associative Memory, accumulates data and establishes lookup tables for comparison against images to be assessed. Six networks were trained to differentiate normal cells from those evidencing various levels of abnormality that may lead to cancer. A blind test was conducted on 77 images to evaluate system performance. The image set included 31 positives (diseased) and 46 negatives (normal). Our system correctly identified all 31 positives and 41 of the negatives, with 5 false positives. We believe this technology can lead to more efficient automated screening of cervical smears.
Being Attractive Brings Advantages: The Case of Parrot Species in Captivity
Frynta, Daniel; Lišková, Silvie; Bültmann, Sebastian; Burda, Hynek
2010-01-01
Background Parrots are one of the most frequently kept and bred bird orders in captivity. This increases poaching and thus the potential importance of captive populations for rescue programmes managed by zoos and related institutions. Both captive breeding and poaching are selective and may be influenced by the attractiveness of particular species to humans. In this paper, we tested the hypothesis that the size of zoo populations is not only determined by conservation needs, but also by the perceived beauty of individual parrot species assessed by human observers. Methodology/Principal Findings For the purpose of data collection, we defined four sets of species (40 parrots, 367 parrots, 34 amazons, 17 macaws). Then, we asked 776 human respondents to evaluate parrot pictures of the selected species according to perceived beauty and we analyzed its association with color and morphological characters. Irrespective of the species set, we found a good agreement among the respondents. The preferred species tended to be large, colorful, and long-tailed. Conclusions/Significance We repeatedly confirmed significant, positive association between the perceived beauty and the size of worldwide zoo population. Moreover, the range size and body size appeared to be significant predictors of zoo population size. In contrast, the effects of other explanatory variables, including the IUCN (International Union for Conservation of Nature) listing, appeared insignificant. Our results may suggest that zoos preferentially keep beautiful parrots and pay less attention to conservation needs. PMID:20830206
Turssi, C P; Ferracane, J L; Vogel, K
2005-08-01
Based on the incomplete understanding of how filler features influence the wear resistance and monomer conversion of resin composites, this study sought to evaluate whether materials containing different shapes and combinations of sizes of filler particles would perform similarly in terms of three-body abrasion and degree of conversion. Twelve experimental monomodal, bimodal or trimodal composites containing either spherical or irregularly shaped fillers ranging from 100 to 1500 nm were examined. Wear testing was conducted in the OHSU wear machine (n = 6), and wear was quantified after 10^5 cycles using a profilometer. Degree of conversion (DC) was measured by FTIR spectrometry at the surface of the composites (n = 6). Data sets were analyzed using one-way ANOVA and Tukey's test at a significance level of 0.05. Filler size and geometry were found to have a significant effect on the wear resistance and DC of composites. At specific sizes and combinations, the presence of small filler particles, either spherical or irregular, may aid in enhancing the wear resistance of composites without compromising the percentage of reacted carbon double bonds.
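A minimal sketch of the one-way ANOVA followed by Tukey's test, using SciPy and statsmodels on hypothetical wear data; the group names, means and sample sizes are invented for illustration.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # hypothetical wear depths (µm) for three filler formulations, n = 6 each
    rng = np.random.default_rng(0)
    groups = {
        "spherical_100nm": rng.normal(20, 2, 6),
        "irregular_500nm": rng.normal(24, 2, 6),
        "trimodal_mix":    rng.normal(19, 2, 6),
    }

    # one-way ANOVA across formulations
    f_stat, p_value = f_oneway(*groups.values())
    print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

    # Tukey's HSD for pairwise comparisons at alpha = 0.05
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), 6)
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))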
Inferring the Mode of Selection from the Transient Response to Demographic Perturbations
NASA Astrophysics Data System (ADS)
Balick, Daniel; Do, Ron; Reich, David; Sunyaev, Shamil
2014-03-01
Despite substantial recent progress in theoretical population genetics, most models work under the assumption of a constant population size. Deviations from fixed population sizes are ubiquitous in natural populations, many of which experience population bottlenecks and re-expansions. The non-equilibrium dynamics introduced by a large perturbation in population size are generally viewed as a confounding factor. In the present work, we take advantage of the transient response to a population bottleneck to infer features of the mode of selection and the distribution of selective effects. We develop an analytic framework and a corresponding statistical test that qualitatively differentiates between alleles under additive and those under recessive or more general epistatic selection. This statistic can be used to bound the joint distribution of selective effects and dominance effects in any diploid sexual organism. We apply this technique to human population genetic data, and severely restrict the space of allowed selective coefficients in humans. Additionally, one can test a set of functionally or medically relevant alleles for the primary mode of selection, or determine the local regional variation in dominance coefficients along the genome.
The influence of distal-end heat treatment on deflection of nickel-titanium archwire.
Silva, Marcelo Faria da; Pinzan-Vercelino, Célia Regina Maia; Gurgel, Júlio de Araújo
2016-01-01
The aim of this in vitro study was to evaluate the deflection-force behavior of nickel-titanium (NiTi) orthodontic wires adjacent to the portion submitted to heat treatment. A total of 106 segments of NiTi wires (0.019 x 0.025-in) and heat-activated NiTi wires (0.016 x 0.022-in) from four commercial brands were tested. The segments were obtained from 80 archwires. For the experimental group, the distal portion of each segmented archwire was subjected to heat treatment (n = 40), while the other distal portion of the same archwire was used as a heating-free control group (n = 40). Deflection tests were performed in a temperature-controlled universal testing machine. Unpaired Student's t-tests were applied to determine if there were differences between the experimental and control groups for each commercial brand and size of wire. Statistical significance was set at p < 0.05. There were no statistically significant differences between the tested groups with the same size and brand of wire. Heat treatment applied to the distal ends of rectangular NiTi archwires does not permanently change the elastic properties of the adjacent portions.
The influence of distal-end heat treatment on deflection of nickel-titanium archwire
da Silva, Marcelo Faria; Pinzan-Vercelino, Célia Regina Maia; Gurgel, Júlio de Araújo
2016-01-01
Objective: The aim of this in vitro study was to evaluate the deflection-force behavior of nickel-titanium (NiTi) orthodontic wires adjacent to the portion submitted to heat treatment. Material and Methods: A total of 106 segments of NiTi wires (0.019 x 0.025-in) and heat-activated NiTi wires (0.016 x 0.022-in) from four commercial brands were tested. The segments were obtained from 80 archwires. For the experimental group, the distal portion of each segmented archwire was subjected to heat treatment (n = 40), while the other distal portion of the same archwire was used as a heating-free control group (n = 40). Deflection tests were performed in a temperature-controlled universal testing machine. Unpaired Student's t-tests were applied to determine if there were differences between the experimental and control groups for each commercial brand and size of wire. Statistical significance was set at p < 0.05. Results: There were no statistically significant differences between the tested groups with the same size and brand of wire. Conclusions: Heat treatment applied to the distal ends of rectangular NiTi archwires does not permanently change the elastic properties of the adjacent portions. PMID:27007766
Kim, Yu-Ri; Park, Jong-Il; Lee, Eun Jeong; Park, Sung Ha; Seong, Nak-won; Kim, Jun-Ho; Kim, Geon-Yong; Meang, Eun-Ho; Hong, Jeong-Sup; Kim, Su-Hyon; Koh, Sang-Bum; Kim, Min-Seok; Kim, Cheol-Su; Kim, Soo-Ki; Son, Sang Wook; Seo, Young Rok; Kang, Boo Hyon; Han, Beom Seok; An, Seong Soo A; Yun, Hyo-In; Kim, Meyoung-Kon
2014-01-01
Nanoparticles (NPs) are used commercially in health and fitness fields, but information about the toxicity and mechanisms underlying the toxic effects of NPs is still very limited. The aim of this study is to investigate the toxic effect(s) of 100 nm negatively (ZnOAE100[−]) or positively (ZnOAE100[+]) charged zinc oxide (ZnO) NPs administered by gavage in Sprague Dawley rats, to establish a no observed adverse effect level, and to identify target organ(s). After verification of the primary particle size, morphology, hydrodynamic size, and zeta potential of each test article, we performed a 90-day study according to Organisation for Economic Co-operation and Development test guideline 408. For the 90-day study, the high dose was set at 500 mg/kg and the middle and low doses were set at 125 mg/kg and 31.25 mg/kg, respectively. Both ZnO NPs had significant changes in hematological and blood biochemical analysis, which could correlate with anemia-related parameters, in the 500 mg/kg groups of both sexes. Histopathological examination showed significant adverse effects (by both test articles) in the stomach, pancreas, eye, and prostate gland tissues, but the particle charge did not affect the tendency or the degree of the lesions. We speculate that this inflammatory damage might result from continuous irritation caused by both test articles. Therefore, the target organs for both ZnOAE100(−) and ZnOAE100(+) are considered to be the stomach, pancreas, eye, and prostate gland. Also, the no observed adverse effect level for both test articles was identified as 31.25 mg/kg for both sexes, because the adverse effects were observed at all doses greater than 125 mg/kg. PMID:25565830
A systematic way for the cost reduction of density fitting methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kállay, Mihály, E-mail: kallay@mail.bme.hu
2014-12-28
We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
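A schematic of the SVD-based construction, assuming the three-center integrals are available as a matrix with the auxiliary-function index on the rows and the flattened orbital-pair index on the columns; the truncation threshold and the random "integrals" are placeholders, and a production implementation would work with the actual integral tensors and Coulomb metric.

    import numpy as np

    def natural_auxiliary_functions(J3c, threshold=1e-4):
        """Compress an auxiliary (density-fitting) basis via SVD.

        J3c : (n_aux, n_orb_pairs) matrix of three-center two-electron integrals.
        Returns the n_aux x n_naf transformation to the retained natural auxiliary
        functions (NAFs) and the integrals expressed in the truncated NAF basis."""
        U, s, _ = np.linalg.svd(J3c, full_matrices=False)
        keep = s > threshold * s[0]        # drop NAFs with small singular values
        W = U[:, keep]
        return W, W.T @ J3c

    # toy example: a rank-deficient random "integral" matrix gets compressed
    rng = np.random.default_rng(0)
    J3c = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 1500)) * 0.1
    W, J_naf = natural_auxiliary_functions(J3c, threshold=1e-6)
    print(J3c.shape, "->", J_naf.shape)    # far fewer effective fitting functions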
The Wilcoxon signed rank test for paired comparisons of clustered data.
Rosner, Bernard; Glynn, Robert J; Lee, Mei-Ling T
2006-03-01
The Wilcoxon signed rank test is a frequently used nonparametric test for paired data (e.g., consisting of pre- and posttreatment measurements) based on independent units of analysis. This test cannot be used for paired comparisons arising from clustered data (e.g., if paired comparisons are available for each of two eyes of an individual). To incorporate clustering, a generalization of the randomization test formulation for the signed rank test is proposed, where the unit of randomization is at the cluster level (e.g., person), while the individual paired units of analysis are at the subunit within cluster level (e.g., eye within person). An adjusted variance estimate of the signed rank test statistic is then derived, which can be used for either balanced (same number of subunits per cluster) or unbalanced (different number of subunits per cluster) data, with an exchangeable correlation structure, with or without tied values. The resulting test statistic is shown to be asymptotically normal as the number of clusters becomes large, if the cluster size is bounded. Simulation studies are performed based on simulating correlated ranked data from a signed log-normal distribution. These studies indicate appropriate type I error for data sets with ≥20 clusters and a superior power profile compared with either the ordinary signed rank test based on the average cluster difference score or the multivariate signed rank test of Puri and Sen. Finally, the methods are illustrated with two data sets, (i) an ophthalmologic data set involving a comparison of electroretinogram (ERG) data in retinitis pigmentosa (RP) patients before and after undergoing an experimental surgical procedure, and (ii) a nutritional data set based on a randomized prospective study of nutritional supplements in RP patients where vitamin E intake outside of study capsules is compared before and after randomization to monitor compliance with nutritional protocols.
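SciPy ships only the ordinary (unclustered) Wilcoxon signed rank test; the clustered generalization described above is not available there, so the sketch below applies the ordinary test to average within-cluster difference scores, which is one of the comparison approaches mentioned in the abstract. The data are hypothetical.

    import numpy as np
    from scipy.stats import wilcoxon

    # hypothetical paired ERG amplitudes: two eyes (subunits) per patient (cluster)
    rng = np.random.default_rng(0)
    pre = rng.normal(10.0, 2.0, size=(25, 2))             # 25 patients x 2 eyes
    post = pre - rng.normal(0.5, 1.0, size=(25, 2))       # modest decline after surgery

    # ordinary signed rank test on the average within-cluster difference; this
    # discards the subunit-level information the clustered statistic is built to use
    cluster_diff = (post - pre).mean(axis=1)
    stat, p = wilcoxon(cluster_diff)
    print(f"W={stat:.1f}, p={p:.4f}")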
NASA Astrophysics Data System (ADS)
Bernard, John Charles
The objective of this study was to compare the performance of five single sided auctions that could be used in restructured electric power markets across different market sizes in a multiple unit setting. Auction selection would profoundly influence an industry over $200 billion in size in the United States, and the consequences of implementing an inappropriate mechanism would be great. Experimental methods were selected to analyze the auctions. Two rounds of experiments were conducted, the first testing the sealed offer last accepted offer (LAO) and first rejected offer (FRO), and the clock English (ENG) and sealed offer English (SOE) in markets of sizes two and six. The FRO, SOE, and ENG used the same pricing rule. Second round testing was on the LAO, FRO, and the nonuniform price multiple unit Vickrey (MUV) in markets of sizes two, four, and six. Experiments lasted 23 and 75 periods for rounds 1 and 2 respectively. Analysis of variance and contrast analysis were used to examine the data. The four performance measures used were price, efficiency, profits per unit, and supply revelation. Five basic principles were also assessed: no sales at losses, all low cost capacity should be offered and sold, no high cost capacity should sell, and the market should clear. It was expected group size and auction type would affect performance. For all performance measures, group size was a significant variable, with smaller groups showing poorer performance. Auction type was significant only for the efficiency performance measure, where clock auctions outperformed the others. Clock auctions also proved superior for the first four principles. The FRO performed poorly in almost all situations, and should not be a preferred mechanism in any market. The ENG was highly efficient, but expensive for the buyer. The SOE appeared superior to the FRO and ENG. The clock improves efficiency over the FRO while less information kept prices under the ENG. The MUV was superior in revealing costs, but performed less well in other categories. While concerns existed for all the mechanisms investigated, the commonly proposed LAO was the best option for restructured electric power markets.
Coil geometry effects on scanning single-coil magnetic induction tomography
NASA Astrophysics Data System (ADS)
Feldkamp, Joe R.; Quirk, Stephen
2017-09-01
Alternative coil designs for single coil magnetic induction tomography are considered in this work, with the intention of improving upon the standard design used previously. In particular, we note that the blind spot associated with this coil type, a portion of space along its axis where eddy current generation can be very weak, has an important effect on performance. The seven designs tested here vary considerably in the size of their blind spot. To provide the most discerning test possible, we use laboratory phantoms containing feature dimensions similar to blind spot size. Furthermore, conductivity contrasts are set higher than what would occur naturally in biological systems, which has the effect of weakening eddy current generation at coil locations that straddle the border between high and low conductivity features. Image reconstruction results for the various coils show that coils with smaller blind spots give markedly better performance, though improvements in signal-to-noise ratio could alter that conclusion.
Teixeira, Ana L; Falcao, Andre O
2014-07-28
Structurally similar molecules tend to have similar properties, i.e. closer molecules in the molecular space are more likely to yield similar property values, while distant molecules are more likely to yield different values. Based on this principle, we propose the use of a new method that takes into account the high dimensionality of the molecular space, predicting chemical, physical, or biological properties based on the most similar compounds with measured properties. This methodology uses ordinary kriging coupled with three different molecular similarity approaches (based on molecular descriptors, fingerprints, and atom matching), which creates an interpolation map over the molecular space that is capable of predicting properties/activities for diverse chemical data sets. The proposed method was tested on two data sets of diverse chemical compounds collected from the literature and preprocessed. One of the data sets contained dihydrofolate reductase inhibition activity data, and the second contained molecules for which aqueous solubility was known. The overall predictive results using kriging for both data sets comply with the results obtained in the literature using typical QSPR/QSAR approaches. However, the procedure did not involve any type of descriptor selection or even minimal information about each problem, suggesting that this approach is directly applicable to a large spectrum of problems in QSAR/QSPR. Furthermore, the predictive results improve significantly with the similarity threshold between the training and testing compounds, allowing the definition of a confidence threshold of similarity and an error estimate for each case inferred. The use of kriging for interpolation over the molecular metric space is independent of the training data set size; no reparametrization is necessary when compounds are added to or removed from the set, and increasing the size of the database will consequently improve the quality of the estimations. Finally, it is shown that this model can be used for checking the consistency of measured data and for guiding an extension of the training set by determining the regions of the molecular space for which new experimental measurements could be used to maximize the model's predictive performance.
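Ordinary kriging itself is compact enough to sketch directly; the Gaussian covariance on Euclidean descriptor distances below is a stand-in for the molecular similarity measures used in the study, and all names and parameters are illustrative.

    import numpy as np

    def ordinary_kriging(train_X, train_y, query_X, length_scale=1.0, nugget=1e-8):
        """Ordinary kriging with a Gaussian covariance on pairwise distances."""
        def cov(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * length_scale ** 2))

        n = len(train_X)
        # kriging system: covariances plus a Lagrange row enforcing that the
        # weights sum to one (the unbiasedness constraint of ordinary kriging)
        K = np.empty((n + 1, n + 1))
        K[:n, :n] = cov(train_X, train_X) + nugget * np.eye(n)
        K[n, :n] = K[:n, n] = 1.0
        K[n, n] = 0.0

        preds = []
        for q in np.atleast_2d(query_X):
            rhs = np.append(cov(train_X, q[None, :]).ravel(), 1.0)
            weights = np.linalg.solve(K, rhs)[:n]
            preds.append(weights @ train_y)
        return np.array(preds)

    # toy usage: descriptor vectors and a stand-in property; training points are
    # reproduced almost exactly because kriging is an exact interpolator
    rng = np.random.default_rng(0)
    X = rng.random((100, 5))
    y = np.sin(X.sum(axis=1))
    print(ordinary_kriging(X, y, X[:3]), y[:3])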
Searching for the right word: Hybrid visual and memory search for words
Boettcher, Sage E. P.; Wolfe, Jeremy M.
2016-01-01
In “Hybrid Search” (Wolfe 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Olivia, 2008). Would hybrid search performance be similar if the targets were words or phrases where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test, confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (E.G. “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects; perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin 1968; Murdock, 1962). Surprisingly we showed no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
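The reported pattern, response times linear in visual set size and logarithmic in memory set size, corresponds to a simple descriptive model that can be fit by least squares; the coefficients and simulated mean RTs below are invented purely for illustration.

    import numpy as np

    # descriptive model: RT = a + b * (visual set size) + c * log2(memory set size)
    V = np.array([2, 4, 8, 16] * 4, dtype=float)
    M = np.repeat([2, 4, 8, 16], 4).astype(float)
    rt = 500 + 35 * V + 120 * np.log2(M) + np.random.default_rng(0).normal(0, 20, 16)

    design = np.column_stack([np.ones_like(V), V, np.log2(M)])
    a, b, c = np.linalg.lstsq(design, rt, rcond=None)[0]
    print(f"{a:.0f} ms intercept, {b:.1f} ms per visual item, {c:.0f} ms per doubling of memory load")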
Differential item functioning analysis of the Vanderbilt Expertise Test for cars
Lee, Woo-Yeol; Cho, Sun-Joo; McGugin, Rankin W.; Van Gulick, Ana Beth; Gauthier, Isabel
2015-01-01
The Vanderbilt Expertise Test for cars (VETcar) is a test of visual learning for contemporary car models. We used item response theory to assess the VETcar and in particular used differential item functioning (DIF) analysis to ask if the test functions the same way in laboratory versus online settings and for different groups based on age and gender. An exploratory factor analysis found evidence of multidimensionality in the VETcar, although a single dimension was deemed sufficient to capture the recognition ability measured by the test. We selected a unidimensional three-parameter logistic item response model to examine item characteristics and subject abilities. The VETcar had satisfactory internal consistency. A substantial number of items showed DIF at a medium effect size for test setting and for age group, whereas gender DIF was negligible. Because online subjects were on average older than those tested in the lab, we focused on the age groups to conduct a multigroup item response theory analysis. This revealed that most items on the test favored the younger group. DIF could be more the rule than the exception when measuring performance with familiar object categories, therefore posing a challenge for the measurement of either domain-general visual abilities or category-specific knowledge. PMID:26418499
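For reference, the three-parameter logistic model named above has a closed-form item response function, sketched below; the item parameters are hypothetical, and the 0.25 guessing floor simply assumes a four-alternative format for illustration.

    import numpy as np

    def three_pl(theta, a, b, c):
        """3PL item response function: probability of a correct response given
        ability theta, discrimination a, difficulty b, and guessing floor c."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # a hypothetical, fairly difficult VETcar-like item
    for theta in (-2.0, 0.0, 2.0):
        print(theta, round(three_pl(theta, a=1.2, b=0.8, c=0.25), 2))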
NASA Technical Reports Server (NTRS)
Allen, Jerry M.
2005-01-01
An experimental study has been performed to develop a large force and moment aerodynamic data set on a slender axisymmetric missile configuration having cruciform strakes and in-line control tail fins. The data include six-component balance measurements of the configuration aerodynamics and three-component measurements on all four tail fins. The test variables include angle of attack, roll angle, Mach number, model buildup, strake length, nose size, and tail fin deflection angles to provide pitch, yaw, and roll control. Test Mach numbers ranged from 0.60 to 4.63. The entire data set is presented on a CD-ROM that is attached to this paper. The CD-ROM also includes extensive plots of both the six-component configuration data and the three-component tail fin data. Selected samples of these plots are presented in this paper to illustrate the features of the data and to investigate the effects of the test variables.
Effects of Group Size on Students Mathematics Achievement in Small Group Settings
ERIC Educational Resources Information Center
Enu, Justice; Danso, Paul Amoah; Awortwe, Peter K.
2015-01-01
An ideal group size is hard to obtain in small group settings; hence there are groups with more members than others. The purpose of the study was to find out whether group size has any effects on students' mathematics achievement in small group settings. Two third year classes of the 2011/2012 academic year were selected from two schools in the…
Neural Plasticity and Neurorehabilitation Following Traumatic Brain Injury
2009-10-01
Nissl. Using the Nissl-stained sections, Dorothy Kozlowski's lab has analyzed the size of the contusions. Previous studies have shown that if... brains, staining one set with Nissl, saving the remaining sets for immunohistochemical staining. Dr. Kozlowski's lab is analyzing contusion size... serially and coronally into sets and immunohistochemically analyzed for the following: contusion size estimated as volume of remaining tissue in Nissl
Bruxvoort, Katia J; Leurent, Baptiste; Chandler, Clare I R; Ansah, Evelyn K; Baiden, Frank; Björkman, Anders; Burchett, Helen E D; Clarke, Siân E; Cundill, Bonnie; DiLiberto, Debora D; Elfving, Kristina; Goodman, Catherine; Hansen, Kristian S; Kachur, S Patrick; Lal, Sham; Lalloo, David G; Leslie, Toby; Magnussen, Pascal; Mangham-Jefferies, Lindsay; Mårtensson, Andreas; Mayan, Ismail; Mbonye, Anthony K; Msellem, Mwinyi I; Onwujekwe, Obinna E; Owusu-Agyei, Seth; Rowland, Mark W; Shakely, Delér; Staedke, Sarah G; Vestergaard, Lasse S; Webster, Jayne; Whitty, Christopher J M; Wiseman, Virginia L; Yeung, Shunmay; Schellenberg, David; Hopkins, Heidi
2017-10-01
Since 2010, the World Health Organization has been recommending that all suspected cases of malaria be confirmed with parasite-based diagnosis before treatment. These guidelines represent a paradigm shift away from presumptive antimalarial treatment of fever. Malaria rapid diagnostic tests (mRDTs) are central to implementing this policy, intended to target artemisinin-based combination therapies (ACT) to patients with confirmed malaria and to improve management of patients with nonmalarial fevers. The ACT Consortium conducted ten linked studies, eight in sub-Saharan Africa and two in Afghanistan, to evaluate the impact of mRDT introduction on case management across settings that vary in malaria endemicity and healthcare provider type. This synthesis includes 562,368 outpatient encounters (study size range 2,400-432,513). mRDTs were associated with significantly lower ACT prescription (range 8-69% versus 20-100%). Prescribing did not always adhere to malaria test results; in several settings, ACTs were prescribed to more than 30% of test-negative patients or to fewer than 80% of test-positive patients. Either an antimalarial or an antibiotic was prescribed for more than 75% of patients across most settings; lower antimalarial prescription for malaria test-negative patients was partly offset by higher antibiotic prescription. Symptomatic management with antipyretics alone was prescribed for fewer than 25% of patients across all scenarios. In community health worker and private retailer settings, mRDTs increased referral of patients to other providers. This synthesis provides an overview of shifts in case management that may be expected with mRDT introduction and highlights areas of focus to improve design and implementation of future case management programs.
Ruscello, B; Briotti, G; Tozzo, N; Partipilo, F; Taraborelli, M; Zeppetella, A; Padulo, J; D'Ottavio, S
2015-10-01
The aim of this paper was to investigate the acute effects of two different initial heart rate intensities on repeated sprint ability (RSA) performance in young soccer players. Since there are many kinds of pre-match warm-ups, we took the heart rate reached at the end of two different warm-up protocols (60 vs. 90% HRmax) as an absolute indicator of internal load and compared the respective RSA performances. The RSA tests were performed on fifteen male soccer players (age: 17.9±1.5 years) with two sets of ten shuttle sprints (15+15 m) and a 1:3 exercise-to-rest ratio, performed on different days (randomized order) with different initial HR (60 and 90% HRmax). In order to compare the different sprint performances a Fatigue Index (FI%) was computed, while blood lactate concentrations (BLa-) were measured before and after testing to compare metabolic demand. Significant differences among trials within each set (P<0.01) were found. Differences between sets were also found, especially when comparing the last five trials of each set (factorial ANOVA; P<0.01), with effect size values confirming the relevance of these differences. Although BLa- after warm-up was 36% higher in the 90% than in the 60% HRmax condition, after the RSA test the difference was considerably smaller (7%). Based on this physiological information, testing with an initial 90% HRmax reflects more realistically the metabolic background in which a soccer player operates during a real match. This background may be partially reproduced by warm-up protocols that, by duration and metabolic commitment, conveniently reproduce the physiological conditions encountered in a real game (e.g., HRmax ≈ 85-95%; BLa- > 4 mmol·L⁻¹).
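To illustrate the kind of calculation a Fatigue Index involves, here is a minimal sketch in Python; the abstract does not give the exact FI% formula used, so a common percent-decrement definition (total sprint time relative to the best sprint repeated over all trials) is assumed, and the sprint times are invented for illustration.

```python
import numpy as np

def fatigue_index(sprint_times_s):
    """Percent decrement score: how much total sprint time exceeds the ideal total
    (the best sprint repeated every trial). One common RSA fatigue index; the
    study's exact formula may differ."""
    t = np.asarray(sprint_times_s, dtype=float)
    ideal_total = t.min() * t.size
    return (t.sum() / ideal_total - 1.0) * 100.0

# Example: one set of ten 15+15 m shuttle-sprint times (seconds, illustrative values).
set_90 = [6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0]
print(f"FI% = {fatigue_index(set_90):.1f}")
```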
Bruxvoort, Katia J.; Leurent, Baptiste; Chandler, Clare I. R.; Ansah, Evelyn K.; Baiden, Frank; Björkman, Anders; Burchett, Helen E. D.; Clarke, Siân E.; Cundill, Bonnie; DiLiberto, Debora D.; Elfving, Kristina; Goodman, Catherine; Hansen, Kristian S.; Kachur, S. Patrick; Lal, Sham; Lalloo, David G.; Leslie, Toby; Magnussen, Pascal; Mangham-Jefferies, Lindsay; Mårtensson, Andreas; Mayan, Ismail; Mbonye, Anthony K.; Msellem, Mwinyi I.; Onwujekwe, Obinna E.; Owusu-Agyei, Seth; Rowland, Mark W.; Shakely, Delér; Staedke, Sarah G.; Vestergaard, Lasse S.; Webster, Jayne; Whitty, Christopher J. M.; Wiseman, Virginia L.; Yeung, Shunmay; Schellenberg, David; Hopkins, Heidi
2017-01-01
Abstract. Since 2010, the World Health Organization has been recommending that all suspected cases of malaria be confirmed with parasite-based diagnosis before treatment. These guidelines represent a paradigm shift away from presumptive antimalarial treatment of fever. Malaria rapid diagnostic tests (mRDTs) are central to implementing this policy, intended to target artemisinin-based combination therapies (ACT) to patients with confirmed malaria and to improve management of patients with nonmalarial fevers. The ACT Consortium conducted ten linked studies, eight in sub-Saharan Africa and two in Afghanistan, to evaluate the impact of mRDT introduction on case management across settings that vary in malaria endemicity and healthcare provider type. This synthesis includes 562,368 outpatient encounters (study size range 2,400–432,513). mRDTs were associated with significantly lower ACT prescription (range 8–69% versus 20–100%). Prescribing did not always adhere to malaria test results; in several settings, ACTs were prescribed to more than 30% of test-negative patients or to fewer than 80% of test-positive patients. Either an antimalarial or an antibiotic was prescribed for more than 75% of patients across most settings; lower antimalarial prescription for malaria test-negative patients was partly offset by higher antibiotic prescription. Symptomatic management with antipyretics alone was prescribed for fewer than 25% of patients across all scenarios. In community health worker and private retailer settings, mRDTs increased referral of patients to other providers. This synthesis provides an overview of shifts in case management that may be expected with mRDT introduction and highlights areas of focus to improve design and implementation of future case management programs. PMID:28820705
Publication bias and the failure of replication in experimental psychology.
Francis, Gregory
2012-12-01
Replication of empirical findings plays a fundamental role in science. Among experimental psychologists, successful replication enhances belief in a finding, while a failure to replicate is often interpreted to mean that one of the experiments is flawed. This view is wrong. Because experimental psychology uses statistics, empirical findings should appear with predictable probabilities. In a misguided effort to demonstrate successful replication of empirical findings and avoid failures to replicate, experimental psychologists sometimes report too many positive results. Rather than strengthen confidence in an effect, too much successful replication actually indicates publication bias, which invalidates entire sets of experimental findings. Researchers cannot judge the validity of a set of biased experiments because the experiment set may consist entirely of type I errors. This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication. Simulated experiments demonstrate that the publication bias test is able to discriminate biased experiment sets from unbiased experiment sets, but it is conservative about reporting bias. The test is then applied to several studies of prominent phenomena that highlight how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias. Such methods should be part of a systematic process to remove publication bias from experimental psychology and reinstate the important role of replication as a final arbiter of scientific findings.
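As a rough illustration of how such a test can be assembled (not the author's exact procedure), the sketch below estimates, for a hypothetical set of two-group experiments, the probability that every experiment would reach significance given its observed effect size; an implausibly small probability flags "too much" successful replication. The effect sizes and sample sizes are invented, and statsmodels' power routines stand in for whatever power calculation the published test uses.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical set of reported two-group experiments: observed Cohen's d and group sizes.
experiments = [
    {"d": 0.45, "n1": 30, "n2": 30},
    {"d": 0.52, "n1": 25, "n2": 25},
    {"d": 0.40, "n1": 40, "n2": 40},
    {"d": 0.48, "n1": 28, "n2": 28},
]

analysis = TTestIndPower()
powers = [
    analysis.power(effect_size=e["d"], nobs1=e["n1"], alpha=0.05,
                   ratio=e["n2"] / e["n1"], alternative="two-sided")
    for e in experiments
]

# Probability that every experiment reaches p < .05, assuming the observed
# effect sizes are the true ones.
p_all_significant = float(np.prod(powers))
print(f"estimated powers: {np.round(powers, 2)}")
print(f"P(all {len(experiments)} experiments significant) = {p_all_significant:.3f}")
# A value below a conventional criterion (e.g. 0.1) would flag excess success.
```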
BioBenchmark Toyama 2012: an evaluation of the performance of triple stores on biological data
2014-01-01
Background: Biological databases vary enormously in size and data complexity, from small databases that contain a few million Resource Description Framework (RDF) triples to large databases that contain billions of triples. In this paper, we evaluate whether RDF native stores can be used to meet the needs of a biological database provider. Prior evaluations have used synthetic data of limited database size; for example, the largest BSBM benchmark uses 1 billion synthetic e-commerce RDF triples on a single node. However, real-world biological data differ considerably from such simple synthetic data, and it is difficult to determine whether synthetic e-commerce data are representative enough of biological databases. Therefore, for this evaluation, we used five real data sets from biological databases. Results: We evaluated five triple stores, 4store, Bigdata, Mulgara, Virtuoso, and OWLIM-SE, with five biological data sets, Cell Cycle Ontology, Allie, PDBj, UniProt, and DDBJ, ranging in size from approximately 10 million to 8 billion triples. For each database, we loaded all the data into a single node and prepared the database for use in a classical data warehouse scenario. Then, we ran a series of SPARQL queries against each endpoint and recorded the execution time and the accuracy of the query response. Conclusions: Our paper shows that, with appropriate configuration, Virtuoso and OWLIM-SE can satisfy the basic requirements for loading and querying biological data of up to roughly 8 billion triples on a single node, under simultaneous access by 64 clients. OWLIM-SE performs best for databases with approximately 11 million triples; for data sets that contain 94 million and 590 million triples, OWLIM-SE and Virtuoso perform best, without an overwhelming advantage over each other; for data sets over 4 billion triples, Virtuoso works best. 4store performs well on small data sets with limited features when the number of triples is less than 100 million, but our test shows its scalability is poor; Bigdata demonstrates average performance and is a good open-source triple store for middle-sized (around 500 million triples) data sets; Mulgara shows some fragility. PMID:25089180
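A minimal sketch of the benchmarking loop described above: send one SPARQL query to an HTTP endpoint and record its execution time and result count. The endpoint URL and the query are placeholders, not those used in the study.

```python
import time
import requests  # SPARQL 1.1 protocol over plain HTTP

ENDPOINT = "http://localhost:8890/sparql"  # placeholder endpoint (e.g. a local Virtuoso)
QUERY = """
PREFIX up: <http://purl.uniprot.org/core/>
SELECT (COUNT(*) AS ?n) WHERE { ?p a up:Protein }
"""  # illustrative query, not from the benchmark suite

def timed_query(endpoint, query, timeout=600):
    """Run one SPARQL query and return (elapsed seconds, number of result rows)."""
    start = time.perf_counter()
    resp = requests.post(endpoint,
                         data={"query": query},
                         headers={"Accept": "application/sparql-results+json"},
                         timeout=timeout)
    elapsed = time.perf_counter() - start
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return elapsed, len(rows)

elapsed, n_rows = timed_query(ENDPOINT, QUERY)
print(f"{n_rows} rows in {elapsed:.2f} s")
```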
BioBenchmark Toyama 2012: an evaluation of the performance of triple stores on biological data.
Wu, Hongyan; Fujiwara, Toyofumi; Yamamoto, Yasunori; Bolleman, Jerven; Yamaguchi, Atsuko
2014-01-01
Biological databases vary enormously in size and data complexity, from small databases that contain a few million Resource Description Framework (RDF) triples to large databases that contain billions of triples. In this paper, we evaluate whether RDF native stores can be used to meet the needs of a biological database provider. Prior evaluations have used synthetic data of limited database size; for example, the largest BSBM benchmark uses 1 billion synthetic e-commerce RDF triples on a single node. However, real-world biological data differ considerably from such simple synthetic data, and it is difficult to determine whether synthetic e-commerce data are representative enough of biological databases. Therefore, for this evaluation, we used five real data sets from biological databases. We evaluated five triple stores, 4store, Bigdata, Mulgara, Virtuoso, and OWLIM-SE, with five biological data sets, Cell Cycle Ontology, Allie, PDBj, UniProt, and DDBJ, ranging in size from approximately 10 million to 8 billion triples. For each database, we loaded all the data into a single node and prepared the database for use in a classical data warehouse scenario. Then, we ran a series of SPARQL queries against each endpoint and recorded the execution time and the accuracy of the query response. Our paper shows that, with appropriate configuration, Virtuoso and OWLIM-SE can satisfy the basic requirements for loading and querying biological data of up to roughly 8 billion triples on a single node, under simultaneous access by 64 clients. OWLIM-SE performs best for databases with approximately 11 million triples; for data sets that contain 94 million and 590 million triples, OWLIM-SE and Virtuoso perform best, without an overwhelming advantage over each other; for data sets over 4 billion triples, Virtuoso works best. 4store performs well on small data sets with limited features when the number of triples is less than 100 million, but our test shows its scalability is poor; Bigdata demonstrates average performance and is a good open-source triple store for middle-sized (around 500 million triples) data sets; Mulgara shows some fragility.
A Look Inside SLAC's Battery Lab
Wei Seh, Zhi
2018-01-26
In this video, Stanford materials science and engineering graduate student Zhi Wei Seh shows how he prepares battery materials in SLAC's energy storage laboratory, assembles dime-sized prototype "coin cells" and then tests them to see how many charge-discharge cycles they can endure without losing their ability to hold a charge. Results to date have already set records: After 1,000 cycles, they retain 70 percent of their original charge.
A Look Inside SLAC's Battery Lab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei Seh, Zhi
2014-07-17
In this video, Stanford materials science and engineering graduate student Zhi Wei Seh shows how he prepares battery materials in SLAC's energy storage laboratory, assembles dime-sized prototype "coin cells" and then tests them to see how many charge-discharge cycles they can endure without losing their ability to hold a charge. Results to date have already set records: After 1,000 cycles, they retain 70 percent of their original charge.
Machine learning enhanced optical distance sensor
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, N. A.
2018-01-01
Presented for the first time is a machine learning enhanced optical distance sensor. The distance sensor is based on our previously demonstrated distance measurement technique that uses an Electronically Controlled Variable Focus Lens (ECVFL) with a laser source to illuminate a target plane with a controlled optical beam spot. This spot, with varying spot sizes, is viewed by an off-axis camera, and the spot size data are processed to compute the distance. In particular, proposed and demonstrated in this paper is the use of a regularized polynomial regression based supervised machine learning algorithm to enhance the accuracy of the operational sensor. The algorithm uses the acquired features and the corresponding labels, that is, the actual target distance values, to train a machine learning model. The model is trained over a 1000 mm (1 m) experimental target distance range. Using the machine learning algorithm produces training set and testing set distance measurement errors of <0.8 mm and <2.2 mm, respectively. The test measurement error is at least a factor of 4 improvement over our prior sensor demonstration without machine learning. Applications for the proposed sensor include industrial distance sensing scenarios where target material specific training models can be generated to realize distance measurements with <1% error.
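A minimal sketch of the regression step, under stated assumptions: hypothetical two-element feature vectors (spot size and lens drive level) are mapped to target distance with ridge-regularized polynomial regression; the paper's actual features, polynomial degree, and regularization strength are not given in this abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical training data: each row is [spot diameter px, ECVFL drive level],
# each label is the true target distance in mm over a 0-1000 mm range.
distance_mm = rng.uniform(0, 1000, size=400)
spot_px = 40 + 0.08 * distance_mm + 1e-4 * distance_mm**2 + rng.normal(0, 0.5, 400)
drive = 0.5 + 2e-3 * distance_mm + rng.normal(0, 0.01, 400)
X = np.column_stack([spot_px, drive])
y = distance_mm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Degree and alpha are illustrative choices, not values from the paper.
model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1.0))
model.fit(X_tr, y_tr)

print(f"train MAE: {mean_absolute_error(y_tr, model.predict(X_tr)):.2f} mm")
print(f"test  MAE: {mean_absolute_error(y_te, model.predict(X_te)):.2f} mm")
```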
Villa, Carlo E.; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe
2010-01-01
Basic research in cell biology and in the medical sciences makes large use of imaging tools mainly based on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal-to-noise images, with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution, as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymph nodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions about the type, extent, and variability of particle motions can be made. PMID:20808918
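A minimal sketch of the time-coding idea, under stated assumptions: pixels that deviate from the first frame beyond a threshold are stamped with the index of the frame at which they first change, collapsing the stack into a single 2D map in which the stored value (rendered as a color level in the paper) encodes time. The threshold and the synthetic moving spot are illustrative, not the authors' parameters.

```python
import numpy as np

def time_coded_motion_map(frames, threshold=20.0):
    """Collapse a (T, H, W) stack into one 2D map whose value at each pixel is the
    first frame index at which that pixel deviates from frame 0 by more than
    `threshold` (0 where no change is detected): a simple accumulative-difference
    style encoding of motion history."""
    frames = np.asarray(frames, dtype=float)
    ref = frames[0]
    motion_map = np.zeros(ref.shape, dtype=int)
    for t in range(1, frames.shape[0]):
        changed = (np.abs(frames[t] - ref) > threshold) & (motion_map == 0)
        motion_map[changed] = t
    return motion_map

# Synthetic example: a bright 3x3 spot drifting one pixel per frame.
T, H, W = 10, 32, 32
stack = np.zeros((T, H, W))
for t in range(T):
    stack[t, 10:13, 5 + t:8 + t] = 100.0

coded = time_coded_motion_map(stack)
print("frame indices present in the map:", np.unique(coded))
```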
Target identification using Zernike moments and neural networks
NASA Astrophysics Data System (ADS)
Azimi-Sadjadi, Mahmood R.; Jamshidi, Arta A.; Nevis, Andrew J.
2001-10-01
The development of an underwater target identification algorithm capable of identifying various types of underwater targets, such as mines, under different environmental conditions poses many technical problems. Some of the contributing factors are: targets have diverse sizes, shapes, and reflectivity properties; target emplacement environment is variable, and targets may be proud or partially buried; environmental properties vary significantly from one location to another; and bottom features such as sand, rocks, corals, and vegetation can conceal a target whether it is partially buried or proud. Competing clutter with responses that closely resemble those of the targets may lead to false positives. All the problems mentioned above contribute to overly difficult and challenging conditions that can lead to unreliable algorithm performance with existing methods. In this paper, we developed and tested a shape-dependent feature extraction scheme that provides features invariant to rotation, size scaling, and translation; properties that are extremely useful for any target classification problem. The developed schemes were tested on an electro-optical imagery data set collected under different environmental conditions with variable backgrounds, ranges, and target types. The electro-optical data set was collected using a Laser Line Scan (LLS) sensor by the Coastal Systems Station (CSS), located in Panama City, Florida. The performance of the developed scheme and its robustness to distortion, rotation, scaling, and translation were also studied.
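To make the invariance requirement concrete, the sketch below uses Hu moment invariants, a classical moment-based feature set that, like Zernike moments, is invariant to rotation, scaling, and translation; it is a simpler stand-in for the paper's Zernike-moment features, and the synthetic target is purely illustrative.

```python
import cv2
import numpy as np

def rst_invariant_features(image):
    """Return log-scaled Hu moment invariants of a grayscale image.
    Hu moments are invariant to rotation, scaling, and translation,
    the same properties the paper seeks from Zernike moments."""
    img = np.asarray(image, dtype=np.float32)
    hu = cv2.HuMoments(cv2.moments(img)).flatten()
    # Log transform compresses the large dynamic range while keeping the sign.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Demonstration of (approximate) invariance on a simple synthetic target.
target = np.zeros((64, 64), dtype=np.float32)
cv2.rectangle(target, (20, 24), (44, 40), 1.0, thickness=-1)

rotated = cv2.warpAffine(target, cv2.getRotationMatrix2D((32, 32), 35, 1.0), (64, 64))
scaled = cv2.resize(target, (128, 128))

for name, im in [("original", target), ("rotated", rotated), ("scaled", scaled)]:
    print(name, np.round(rst_invariant_features(im)[:3], 3))
```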
Villa, Carlo E; Caccia, Michele; Sironi, Laura; D'Alfonso, Laura; Collini, Maddalena; Rivolta, Ilaria; Miserocchi, Giuseppe; Gorletta, Tatiana; Zanoni, Ivan; Granucci, Francesca; Chirico, Giuseppe
2010-08-17
Basic research in cell biology and in the medical sciences makes large use of imaging tools mainly based on confocal fluorescence and, more recently, on non-linear excitation microscopy. Essentially, the aim is the recognition of selected targets in the image and their tracking in time. We have developed a particle tracking algorithm optimized for low signal-to-noise images, with a minimum set of requirements on the target size and with no a priori knowledge of the type of motion. The image segmentation, based on a combination of size-sensitive filters, does not rely on edge detection and is tailored for targets acquired at low resolution, as in most in-vivo studies. The particle tracking is performed by building, from a stack of Accumulative Difference Images, a single 2D image in which the motion of the whole set of particles is coded in time by a color level. This algorithm, tested here on solid-lipid nanoparticles diffusing within cells and on lymphocytes diffusing in lymph nodes, appears to be particularly useful for cellular and in-vivo microscopy image processing, in which few a priori assumptions about the type, extent, and variability of particle motions can be made.
NASA Astrophysics Data System (ADS)
Hedberg, Emma; Gidhagen, Lars; Johansson, Christer
Sampling of particles (PM10) was conducted during a one-year period at two rural sites in Central Chile, Quillota and Linares. The samples were analyzed for elemental composition. The data sets underwent source-receptor analyses in order to estimate the sources and their abundances in the PM10 size fraction, using the factor analytical method positive matrix factorization (PMF). The analysis showed that PM10 was dominated by soil resuspension at both sites during the summer months, while during winter traffic dominated the particle mass at Quillota and local wood burning dominated the particle mass at Linares. Two copper smelters impacted the Quillota station and contributed 10% and 16% of PM10 on average during summer and winter, respectively. One smelter impacted Linares, contributing 8% and 19% of PM10 in the summer and winter, respectively. For arsenic, the two smelters accounted for 87% of the monitored arsenic levels at Quillota, and at Linares one smelter contributed 72% of the measured mass. In comparison with PMF, the use of a dispersion model tended to overestimate the smelter contribution to arsenic levels at both sites. The robustness of the PMF model was tested by using randomly reduced data sets, in which 85%, 70%, 50%, and 33% of the samples were included. In this way the ability of the model to reconstruct the sources initially found with the original data set could be tested. On average for all sources, the relative standard deviation increased from 7% to 25% for the variables identifying the sources when the data set was reduced from 85% to 33% of the samples, indicating that the solution initially found was very stable to begin with. It was also noted, however, that sources due to industrial or combustion processes were more sensitive to the size of the data set than natural sources such as local soil and sea spray.
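A minimal sketch of the robustness check described here, with sklearn's NMF used as a simpler stand-in for PMF (it omits PMF's per-value uncertainty weighting) and a synthetic concentration matrix in place of the Chilean data: refit the model on randomly reduced data sets and compare the recovered source profiles with those from the full data set.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Hypothetical receptor-model data: 200 PM10 samples x 12 elemental concentrations,
# generated from 4 "true" sources plus noise (stand-in for the Chilean data set).
true_profiles = rng.uniform(0, 1, size=(4, 12))
true_contrib = rng.gamma(2.0, 1.0, size=(200, 4))
X = true_contrib @ true_profiles + rng.normal(0, 0.02, size=(200, 12)).clip(min=0)

def fit_profiles(data, n_sources=4):
    """Factor the data matrix and return row-normalized source profiles.
    sklearn's NMF is used as a simpler stand-in for PMF (no uncertainty weights)."""
    model = NMF(n_components=n_sources, init="nndsvda", max_iter=1000, random_state=0)
    model.fit(data)
    H = model.components_
    return H / H.sum(axis=1, keepdims=True)

full = fit_profiles(X)
# Robustness check in the spirit of the study: refit on randomly reduced data sets.
for frac in (0.85, 0.70, 0.50, 0.33):
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    reduced = fit_profiles(X[idx])
    # Crude stability measure: best-match correlation between full and reduced profiles.
    corr = np.corrcoef(np.vstack([full, reduced]))[:4, 4:]
    print(f"{int(frac*100):3d}% of samples -> mean best-match r = {corr.max(axis=1).mean():.3f}")
```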
On the magnetic polarizability tensor of US coinage
NASA Astrophysics Data System (ADS)
Davidson, John L.; Abdel-Rehim, Omar A.; Hu, Peipei; Marsh, Liam A.; O'Toole, Michael D.; Peyton, Anthony J.
2018-03-01
The magnetic dipole polarizability tensor of a metallic object gives unique information about the size, shape and electromagnetic properties of the object. In this paper, we present a novel method of coin characterization based on the spectroscopic response of the absolute tensor. The experimental measurements are validated using a combination of tests with a small set of bespoke coin surrogates and simulated data. The method is applied to an uncirculated set of US coins. Measured and simulated spectroscopic tensor responses of the coins show significant differences between different coin denominations. The presented results are encouraging as they strongly demonstrate the ability to characterize coins using an absolute tensor approach.
Crack identification and evolution law in the vibration failure process of loaded coal
NASA Astrophysics Data System (ADS)
Li, Chengwu; Ai, Dihao; Sun, Xiaoyuan; Xie, Beijing
2017-08-01
To study the characteristics of coal cracks produced in the vibration failure process, we set up a static load and static-dynamic combination load failure test simulation system and prepared coal samples with different particle sizes, formation pressures, and firmness coefficients. Through static load damage testing of the coal samples, followed by combined dynamic (vibration exciter) and static (jack) destructive testing, crack images of the coal samples under load were obtained. Combined with digital image processing technology, a high-precision, real-time crack identification algorithm is proposed. Taking the crack features of the coal samples under different load conditions as the research object, we analyzed the distribution of cracks on the surface of the coal samples and the factors influencing crack evolution, using the proposed algorithm and a high-resolution industrial camera. Experimental results showed that the major portion of the crack after excitation is located in the rear of the coal sample, where the vibration exciter cannot act. Under the same disturbance conditions, crack size and particle size exhibit a positive correlation, while crack size and formation pressure exhibit a negative correlation. Soft coal is more likely to lead to crack evolution than hard coal, and more easily causes instability failure. The experimental results and the crack identification algorithm provide a solid basis for the prevention and control of instability and failure of coal and rock mass, and they are helpful in improving the monitoring of coal and rock dynamic disasters.
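A minimal sketch of one common image-processing route to crack identification (adaptive thresholding plus morphological cleanup and connected-component filtering); the authors' actual algorithm is not detailed in this abstract, and the image path and parameter values below are placeholders.

```python
import cv2
import numpy as np

def detect_cracks(gray, block_size=31, offset=10, min_area=50):
    """Segment dark, thin crack-like structures in a grayscale coal-surface image
    and return a binary crack mask plus the total crack area in pixels."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Cracks are darker than the surrounding coal surface -> inverted threshold.
    mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, block_size, offset)
    # Close small gaps along the crack and remove isolated speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    clean = np.zeros_like(mask)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255
    return clean, int(np.count_nonzero(clean))

gray = cv2.imread("coal_sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder image path
if gray is not None:
    mask, area = detect_cracks(gray)
    print(f"crack area: {area} px")
```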
Detection of airborne respiratory syncytial virus in a pediatric acute care clinic.
Grayson, Stephanie A; Griffiths, Pamela S; Perez, Miriam K; Piedimonte, Giovanni
2017-05-01
Respiratory syncytial virus (RSV) is the most common cause of respiratory illness in infants and young children, but this virus is also capable of re-infecting adults throughout life. Universal precautions to prevent its transmission consist of gown and glove use, but masks and goggles are not routinely required because it is believed that RSV is unlikely to be transmitted by the airborne route. Our hypothesis was that RSV is present in respirable-size particles aerosolized by patients seen in a pediatric acute care setting. RSV-laden particles were captured using stationary 2-stage bioaerosol cyclone samplers. Aerosol particles were separated into three size fractions (<1, 1-4.1, and ≥4.1 μm) and were tested for the presence of RSV RNA by real-time PCR. Samplers were set 152 cm ("upper") and 102 cm ("lower") above the floor in each of two examination rooms. Of the 554 samples collected over 48 days, only 13 (2.3%) were positive for RSV. More than 90% of the RSV-laden aerosol particles were in the ≥4.1 μm size range, which typically settles to the ground within minutes, whereas only one sample (8%) was positive for particles in the 1-4.1 μm respirable size range. Our data indicate that airborne RSV-laden particles can be detected in pediatric outpatient clinics during the epidemic peak. However, RSV airborne transmission is highly inefficient. Thus, the logistical and financial implications of mandating the use of masks and goggles to prevent RSV spread seem unwarranted in this setting. Pediatr Pulmonol. 2017;52:684-688. © 2016 Wiley Periodicals, Inc.
Reduced body size and cub recruitment in polar bears associated with sea ice decline.
Rode, Karyn D; Amstrup, Steven C; Regehr, Eric V
2010-04-01
Rates of reproduction and survival are dependent upon adequate body size and condition of individuals. Declines in size and condition have provided early indicators of population decline in polar bears (Ursus maritimus) near the southern extreme of their range. We tested whether patterns in body size, condition, and cub recruitment of polar bears in the southern Beaufort Sea of Alaska were related to the availability of preferred sea ice habitats and whether these measures and habitat availability exhibited trends over time, between 1982 and 2006. The mean skull size and body length of all polar bears over three years of age declined over time, corresponding with long-term declines in the spatial and temporal availability of sea ice habitat. Body size of young, growing bears declined over time and was smaller after years when sea ice availability was reduced. Reduced litter mass and numbers of yearlings per female following years with lower availability of optimal sea ice habitat suggest reduced reproductive output and juvenile survival. These results, based on analysis of a long-term data set, suggest that declining sea ice is associated with nutritional limitations that reduced body size and reproduction in this population.
Reduced body size and cub recruitment in polar bears associated with sea ice decline
Rode, Karyn D.; Amstrup, Steven C.; Regehr, Eric V.
2010-01-01
Rates of reproduction and survival are dependent upon adequate body size and condition of individuals. Declines in size and condition have provided early indicators of population decline in polar bears (Ursus maritimus) near the southern extreme of their range. We tested whether patterns in body size, condition, and cub recruitment of polar bears in the southern Beaufort Sea of Alaska were related to the availability of preferred sea ice habitats and whether these measures and habitat availability exhibited trends over time, between 1982 and 2006. The mean skull size and body length of all polar bears over three years of age declined over time, corresponding with long-term declines in the spatial and temporal availability of sea ice habitat. Body size of young, growing bears declined over time and was smaller after years when sea ice availability was reduced. Reduced litter mass and numbers of yearlings per female following years with lower availability of optimal sea ice habitat suggest reduced reproductive output and juvenile survival. These results, based on analysis of a long-term data set, suggest that declining sea ice is associated with nutritional limitations that reduced body size and reproduction in this population.
Sexual display and mate choice in an energetically costly environment.
Head, Megan L; Wong, Bob B M; Brooks, Robert
2010-12-09
Sexual displays and mate choice often take place under the same set of environmental conditions and, as a consequence, may be exposed to the same set of environmental constraints. Surprisingly, however, very few studies consider the effects of environmental costs on sexual displays and mate choice simultaneously. We conducted an experiment, manipulating water flow in large flume tanks, to examine how an energetically costly environment might affect the sexual display and mate choice behavior of male and female guppies, Poecilia reticulata. We found that male guppies performed fewer sexual displays and became less choosy, with respect to female size, in the presence of a water current compared to those tested in still water. In contrast to males, female responsiveness to male displays did not differ between the water current treatments, and females exhibited no mate preferences with respect to male size or coloration in either treatment. The results of our study underscore the importance of considering the simultaneous effects of environmental costs on the sexual behaviors of both sexes.
Cooper, Keith M
2012-08-01
In the UK, Government policy requires marine aggregate extraction companies to leave the seabed in a similar physical condition after the cessation of dredging. This measure is intended to promote recovery, and the return of a similar faunal community to that which existed before dredging. Whilst the policy is sensible, and in line with the principles of sustainable development, the use of the word 'similar' is open to interpretation. There is, therefore, a need to set quantifiable limits for acceptable change in sediment composition. Using a case study site, it is shown how such limits could be defined by the range of sediment particle size composition naturally found in association with the faunal assemblages in the wider region. Whilst the approach offers a number of advantages over the present system, further testing would be required before it could be recommended for use in the regulatory context. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
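A minimal sketch of the normal-distribution version of this idea, under stated assumptions: given Gaussian marker distributions for non-diseased and diseased subjects, a disease prevalence, and misclassification costs, the optimum threshold is found by numerically minimising the expected cost. All parameter values are illustrative, and sampling uncertainty is ignored here.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Illustrative setting: marker ~ N(0, 1) in non-diseased, N(1.5, 1.2) in diseased subjects.
mu0, sd0 = 0.0, 1.0
mu1, sd1 = 1.5, 1.2
prevalence = 0.2
cost_fp, cost_fn = 1.0, 5.0  # cost of a false positive / false negative

def expected_cost(threshold):
    """Expected decision cost when classifying 'diseased' above the threshold."""
    fpr = 1.0 - norm.cdf(threshold, mu0, sd0)   # non-diseased called diseased
    fnr = norm.cdf(threshold, mu1, sd1)         # diseased called non-diseased
    return (1 - prevalence) * fpr * cost_fp + prevalence * fnr * cost_fn

res = minimize_scalar(expected_cost, bounds=(-3, 5), method="bounded")
print(f"optimum threshold ≈ {res.x:.3f}, expected cost ≈ {res.fun:.4f}")
```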
Wilder, Shawn M; Rypstra, Ann L
2008-09-01
Sexual cannibalism varies widely among spiders, but no general evolutionary hypothesis has emerged to explain its distribution across taxa. Sexual size dimorphism (SSD) also varies widely among spiders and could affect the vulnerability of males to cannibalistic attacks by females. We tested for a relationship between SSD and sexual cannibalism within and among species of spiders, using a broad taxonomic data set. For most species, cannibalism was more likely when males were much smaller than females. In addition, using phylogenetically controlled and uncontrolled analyses, there was a strong positive relationship between average SSD of a species and the frequency of sexual cannibalism. This is the first evidence that the degree of size difference between males and females is related to the phylogenetic distribution of sexual cannibalism among a broad range of spiders.
Harvati, Katerina; Weaver, Timothy D
2006-12-01
Cranial morphology is widely used to reconstruct evolutionary relationships, but its reliability in reflecting phylogeny and population history has been questioned. Some cranial regions, particularly the face and neurocranium, are believed to be influenced by the environment and prone to convergence. Others, such as the temporal bone, are thought to reflect more accurately phylogenetic relationships. Direct testing of these hypotheses was not possible until the advent of large genetic data sets. The few relevant studies in human populations have had intriguing but possibly conflicting results, probably partly due to methodological differences and to the small numbers of populations used. Here we use three-dimensional (3D) geometric morphometrics methods to test explicitly the ability of cranial shape, size, and relative position/orientation of cranial regions to track population history and climate. Morphological distances among 13 recent human populations were calculated from four 3D landmark data sets, respectively reflecting facial, neurocranial, and temporal bone shape; shape and relative position; overall cranial shape; and centroid sizes. These distances were compared to neutral genetic and climatic distances among the same, or closely matched, populations. Results indicate that neurocranial and temporal bone shape track neutral genetic distances, while facial shape reflects climate; centroid size shows a weak association with climatic variables; and relative position/orientation of cranial regions does not appear correlated with any of these factors. Because different cranial regions preserve population history and climate signatures differentially, caution is suggested when using cranial anatomy for phylogenetic reconstruction. Copyright (c) 2006 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun
2013-01-01
The combination of a high temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) system creates a new option for transmitting power with multiple collection and distribution points for long-distance and bulk power transmission. It offers greater advantages compared with HVAC or conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources in existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on HVDC transmission technologies is planned to be set up in the Jeju power system, Korea. Before applying this system to a real power system on Jeju Island, system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was analyzed and verified on the proposed test platform, and the results are discussed in detail.
NASA Astrophysics Data System (ADS)
Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun
2012-08-01
The combination of a high temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) system creates a new option for transmitting power with multiple collection and distribution points for long-distance and bulk power transmission. It offers greater advantages compared with HVAC or conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources in existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on HVDC transmission technologies is planned to be set up in the Jeju power system, Korea. Before applying this system to a real power system on Jeju Island, system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was analyzed and verified on the proposed test platform, and the results are discussed in detail.
Lynn, Michael
2009-10-01
Waitresses completed an on-line survey about their physical characteristics, self-perceived attractiveness and sexiness, and average tips. The waitresses' self-rated physical attractiveness increased with their breast sizes and decreased with their ages, waist-to-hip ratios, and body sizes. Similar effects were observed on self-rated sexiness, with the exception of age, which varied with self-rated sexiness in a negative, quadratic relationship rather than a linear one. Moreover, the waitresses' tips varied with age in a negative, quadratic relationship, increased with breast size, increased with having blond hair, and decreased with body size. These findings, which are discussed from an evolutionary perspective, make several contributions to the literature on female physical attractiveness. First, they replicate some previous findings regarding the determinants of female physical attractiveness using a larger, more diverse, and more ecologically valid set of stimuli than has been studied before. Second, they provide needed evidence that some of those determinants of female beauty affect interpersonal behaviors as well as attractiveness ratings. Finally, they indicate that some determinants of female physical attractiveness do not have the same effects on overt interpersonal behavior (such as tipping) that they have on attractiveness ratings. This latter contribution highlights the need for more ecologically valid tests of evolutionary theories about the determinants and consequences of female beauty.
Incremental exercise test for the evaluation of peak oxygen consumption in paralympic swimmers.
de Souza, Helton; DA Silva Alves, Eduardo; Ortega, Luciana; Silva, Andressa; Esteves, Andrea M; Schwingel, Paulo A; Vital, Roberto; DA Rocha, Edilson A; Rodrigues, Bruno; Lira, Fabio S; Tufik, Sergio; DE Mello, Marco T
2016-04-01
Peak oxygen consumption (VO2peak) is a fundamental parameter used to evaluate physical capacity. The objective of this study was to explore two types of incremental exercise tests used to determine VO2peak in four Paralympic swimmers: arm ergometer testing in the laboratory and testing in the swimming pool. On two different days, the VO2peak values of the four athletes were measured in a swimming pool and by a cycle ergometer. The protocols identified the VO2peak by progressive loading until the volitional exhaustion maximum was reached. The results were analyzed using the paired Student's t-test, Cohen's d effect sizes and a linear regression. The results showed that the VO2peak values obtained using the swimming pool protocol were higher (P=0.02) than those obtained by the arm ergometer (45.8±19.2 vs. 30.4±15.5; P=0.02), with a large effect size (d=3.20). When analyzing swimmers 1, 2, 3 and 4 individually, differences of 22.4%, 33.8%, 60.1% and 27.1% were observed, respectively. Field tests similar to the competitive setting are a more accurate way to determine the aerobic capacity of Paralympic swimmers. This approach provides more sensitive data that enable better direction of training, consequently facilitating improved performance.
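A minimal sketch of the statistical comparison reported here, with hypothetical VO2peak values standing in for the four swimmers' data (individual values are not given in the abstract); Cohen's d for paired data is computed as the mean difference divided by the standard deviation of the differences, one common convention.

```python
import numpy as np
from scipy import stats

# Hypothetical VO2peak values (mL/kg/min) for the same four swimmers under each protocol.
vo2_pool = np.array([52.0, 61.3, 48.9, 21.0])
vo2_arm_ergometer = np.array([42.5, 45.8, 30.5, 16.5])

diff = vo2_pool - vo2_arm_ergometer
t_stat, p_value = stats.ttest_rel(vo2_pool, vo2_arm_ergometer)
cohens_d = diff.mean() / diff.std(ddof=1)  # paired-samples effect size convention

slope, intercept, r, p_reg, se = stats.linregress(vo2_arm_ergometer, vo2_pool)

print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
print(f"pool ≈ {intercept:.1f} + {slope:.2f} × ergometer (r = {r:.2f})")
```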
Protein contact prediction using patterns of correlation.
Hamilton, Nicholas; Burrage, Kevin; Ragan, Mark A; Huber, Thomas
2004-09-01
We describe a new method for using neural networks to predict residue contact pairs in a protein. The main inputs to the neural network are a set of 25 measures of correlated mutation between all pairs of residues in two "windows" of size 5 centered on the residues of interest. While the individual pair-wise correlations are a relatively weak predictor of contact, by training the network on windows of correlation the accuracy of prediction is significantly improved. The neural network is trained on a set of 100 proteins and then tested on a disjoint set of 1033 proteins of known structure. An average predictive accuracy of 21.7% is obtained taking the best L/2 predictions for each protein, where L is the sequence length. Taking the best L/10 predictions gives an average accuracy of 30.7%. The predictor is also tested on a set of 59 proteins from the CASP5 experiment. The accuracy is found to be relatively consistent across different sequence lengths, but to vary widely according to the secondary structure. Predictive accuracy is also found to improve by using multiple sequence alignments containing many sequences to calculate the correlations. Copyright 2004 Wiley-Liss, Inc.
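A minimal sketch of the data flow, under stated assumptions: a single correlated-mutation score matrix stands in for the paper's 25 correlation measures, features are 5x5 windows of scores centred on each residue pair, and scikit-learn's MLPClassifier stands in for the paper's neural network. The sequence, contact map, and scores are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
L = 60                      # sequence length (synthetic example)
W = 5                       # window size centred on each residue of the pair
half = W // 2

# Stand-ins for the paper's inputs: a correlated-mutation score matrix and a
# binary contact map (here random, purely to show the data flow).
corr = rng.normal(size=(L, L))
corr = (corr + corr.T) / 2
contacts = (rng.random((L, L)) < 0.05).astype(int)

def pair_features(i, j):
    """5x5 window of correlation scores around residue pair (i, j), flattened."""
    window = np.zeros((W, W))
    for a in range(-half, half + 1):
        for b in range(-half, half + 1):
            if 0 <= i + a < L and 0 <= j + b < L:
                window[a + half, b + half] = corr[i + a, j + b]
    return window.ravel()

pairs = [(i, j) for i in range(L) for j in range(i + 6, L)]  # skip near-diagonal pairs
X = np.array([pair_features(i, j) for i, j in pairs])
y = np.array([contacts[i, j] for i, j in pairs])

clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0).fit(X, y)
scores = clf.predict_proba(X)[:, 1]
top = np.argsort(scores)[::-1][:L // 2]        # best L/2 predictions, as in the paper
print(f"accuracy among top L/2 pairs: {y[top].mean():.2f}")
```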
Peripheral doses from pediatric IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Eric E.; Maserang, Beth; Wood, Roy
Peripheral dose (PD) data exist for conventional fields (≥10 cm) and intensity-modulated radiotherapy (IMRT) delivery to standard adult-sized phantoms. Pediatric peripheral dose reports are limited to conventional therapy and are model based. Our goal was to ascertain whether data acquired from full phantom studies and/or pediatric models, with IMRT treatment times, could predict Organ at Risk (OAR) dose for pediatric IMRT. As monitor units (MUs) are greater for IMRT, it is expected that IMRT PD will be higher, potentially compounded by decreased patient size (absorption). Baseline slab phantom peripheral dose measurements were conducted for very small field sizes (from 2 to 10 cm). Data were collected at distances ranging from 5 to 72 cm away from the field edges. Collimation was either with the collimating jaws or the multileaf collimator (MLC), oriented either perpendicular to or along the peripheral dose measurement plane. For the clinical tests, five patients with intracranial or base-of-skull lesions were chosen. IMRT and conventional three-dimensional (3D) plans for the same patient/target/dose (180 cGy) were optimized without limitation on the number of fields or wedge use. Six-MV, 120-leaf MLC Varian axial beams were used. A phantom mimicking a 3-year-old was configured per Centers for Disease Control data. Micro (0.125 cc) and cylindrical (0.6 cc) ionization chambers were appropriated for the thyroid, breast, ovaries, and testes. The PD was recorded by electrometers set to the 10⁻¹⁰ scale. Each system set was uniquely calibrated. For the slab phantom studies, close peripheral points were found to have a higher dose for low energy and larger field size and when the MLC was not deployed. For points more distant from the field edge, the PD was higher for high-energy beams. MLC orientation was found to be inconsequential for the small fields tested. The thyroid dose was lower for IMRT delivery than that predicted for conventional delivery (ratio of IMRT/conventional ranged from 0.47 to 0.94), with doses of approximately [0.4-1.8 cGy]/[0.9-2.9 cGy] per fraction, respectively. Prior phantom reports are for fields 10 cm or greater, while pediatric central nervous system fields range from 4 to 7 cm, and are effectively much smaller for IMRT (2-6 cm). Peripheral dose in close proximity (<10 cm from the field edge) is dominated by internal scatter; therefore, field-size differences overwhelm phantom-size effects and increased MU. Distant peripheral dose, dominated by head leakage, was higher than predicted, even when accounting for MUs (approximately a factor of 3), likely due to the pediatric phantom size. The ratio of the testes dose ranged from 3.3 to 5.3 for IMRT/conventional. PD to OAR for pediatric IMRT cannot be predicted from large-field full phantom studies. For regional OAR, doses are likely lower than predicted by existing "large field" data, while the distant PD is higher.
Adult and Child Semantic Neighbors of the Kroll and Potter (1984) Nonobjects
Storkel, Holly L.; Adlof, Suzanne M.
2008-01-01
Purpose: The purpose was to determine the number of semantic neighbors, namely semantic set size, for 88 nonobjects (Kroll & Potter, 1984) and determine how semantic set size related to other measures and age. Method: Data were collected from 82 adults and 92 preschool children in a discrete association task. The nonobjects were presented via computer, and participants reported the first word that came to mind that was meaningfully related to the nonobject. Words reported by two or more participants were considered semantic neighbors. The strength of each neighbor was computed as the proportion of participants who reported the neighbor. Results: Results showed that semantic set size was not significantly correlated with objectlikeness ratings or object decision reaction times from Kroll and Potter (1984). However, semantic set size was significantly negatively correlated with the strength of the strongest neighbor(s). In terms of age effects, adult and child semantic set sizes were significantly positively correlated and the majority of numeric differences were on the order of 0–3 neighbors. Comparison of actual neighbors showed greater discrepancies; however, this varied by neighbor strength. Conclusions: Semantic set size can be determined for nonobjects. Specific guidelines are suggested for using these nonobjects in future research. PMID:19252127
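A minimal sketch of the scoring step described in the Method: for one nonobject, the semantic set size is the number of distinct words reported by at least two participants, and each neighbor's strength is the proportion of participants who reported it. The response list is invented for illustration.

```python
from collections import Counter

# Hypothetical responses (one word per participant) for a single nonobject.
responses = ["spoon", "ladle", "spoon", "shovel", "ladle", "spoon",
             "scoop", "key", "ladle", "shovel"]

n_participants = len(responses)
counts = Counter(responses)

# Neighbors = words reported by at least two participants.
neighbors = {word: n / n_participants for word, n in counts.items() if n >= 2}
semantic_set_size = len(neighbors)
strongest = max(neighbors.values())

print(f"semantic set size = {semantic_set_size}")
print(f"neighbor strengths = {neighbors}")
print(f"strongest neighbor strength = {strongest:.2f}")
```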
Nurse practitioner caseload in primary health care: Scoping review.
Martin-Misener, Ruth; Kilpatrick, Kelley; Donald, Faith; Bryant-Lukosius, Denise; Rayner, Jennifer; Valaitis, Ruta; Carter, Nancy; Miller, Patricia A; Landry, Véronique; Harbman, Patricia; Charbonneau-Smith, Renee; McKinlay, R James; Ziegler, Erin; Boesveld, Sarah; Lamb, Alyson
2016-10-01
To identify recommendations for determining patient panel/caseload size for nurse practitioners in community-based primary health care settings. Scoping review of the international published and grey literature. The search included electronic databases, international professional and governmental websites, contact with experts, and hand searches of reference lists. Eligible papers had to (a) address caseload or patient panels for nurse practitioners in community-based primary health care settings serving an all-ages population; and (b) be published in English or French between January 2000 and July 2014. Level one testing included title and abstract screening by two team members. Relevant papers were retained for full text review in level two testing, and reviewed by two team members. A third reviewer acted as a tiebreaker. Data were extracted using a structured extraction form by one team member and verified by a second member. Descriptive statistics were estimated. Content analysis was used for qualitative data. We identified 111 peer-reviewed articles and grey literature documents. Most of the papers were published in Canada and the United States after 2010. Current methods to determine panel/caseload size use large administrative databases, provider work hours, and the average number of patient visits. Most of the papers addressing the topic of patient panel/caseload size in community-based primary health care were descriptive. The average number of patients seen by nurse practitioners per day varied considerably within and between countries; an average of 9-15 patients per day was common. Patient characteristics (e.g., age, gender) and health conditions (e.g., multiple chronic conditions) appear to influence patient panel/caseload size. Very few studies used validated tools to classify patient acuity levels or disease burden scores. The measurement of productivity and the determination of panel/caseload size are complex. Current metrics may not capture activities relevant to community-based primary health care nurse practitioners. Tools to measure all the components of this role are needed when determining panel/caseload size. Outcomes research is absent in the determination of panel/caseload size. There are few systems in place to track and measure community-based primary health care nurse practitioner activities. The development of such mechanisms is an important next step to assess community-based primary health care nurse practitioner productivity and determine patient panel/caseload size. Decisions about panel/caseload size must take into account the effects of nurse practitioner activities on outcomes of care. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×1019 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
An empirical model for estimating annual consumption by freshwater fish populations
Liao, H.; Pierce, C.L.; Larscheid, J.G.
2005-01-01
Population consumption is an important process linking predator populations to their prey resources. Simple tools are needed to enable fisheries managers to estimate population consumption. We assembled 74 individual estimates of annual consumption by freshwater fish populations and their mean annual population size, 41 of which also included estimates of mean annual biomass. The data set included 14 freshwater fish species from 10 different bodies of water. From this data set we developed two simple linear regression models predicting annual population consumption. Log-transformed population size explained 94% of the variation in log-transformed annual population consumption. Log-transformed biomass explained 98% of the variation in log-transformed annual population consumption. We quantified the accuracy of our regressions and three alternative consumption models as the mean percent difference from observed (bioenergetics-derived) estimates in a test data set. Predictions from our population-size regression matched observed consumption estimates poorly (mean percent difference = 222%). Predictions from our biomass regression matched observed consumption reasonably well (mean percent difference = 24%). The biomass regression was superior to an alternative model, similar in complexity, and comparable to two alternative models that were more complex and difficult to apply. Our biomass regression model, log10(consumption) = 0.5442 + 0.9962 × log10(biomass), will be a useful tool for fishery managers, enabling them to make reasonably accurate annual population consumption predictions from mean annual biomass estimates. © Copyright by the American Fisheries Society 2005.
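A worked application of the reported biomass regression: given a mean annual biomass estimate, predicted annual consumption follows directly from the fitted equation. The example biomass value is arbitrary, and units follow whatever units the original regression was fitted in.

```python
import math

def predict_annual_consumption(mean_annual_biomass):
    """Biomass regression from the paper:
    log10(consumption) = 0.5442 + 0.9962 * log10(biomass).
    Output is in the same (unspecified here) units the regression was fitted with."""
    log_c = 0.5442 + 0.9962 * math.log10(mean_annual_biomass)
    return 10 ** log_c

# Example: a hypothetical population with a mean annual biomass of 1,500 (arbitrary units).
biomass = 1500.0
print(f"predicted annual consumption ≈ {predict_annual_consumption(biomass):,.0f} (units as fitted)")
```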
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown (H-B) empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we argue that the minimum number of required samples depends on rock type and should correspond to an acceptable level of uncertainty in the estimations. A comparison of the results from our analysis with actual rock strength data also shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
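A minimal sketch of the Monte Carlo idea, under stated assumptions: synthetic triaxial data are generated from the intact-rock Hoek-Brown criterion σ1 = σ3 + σc·√(m·σ3/σc + 1) with added strength variability, the criterion is refitted to each small data set, and the spread of the fitted parameters indicates the estimation uncertainty. Parameter values, noise level, and confining pressures are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def hoek_brown(sigma3, sigma_c, m):
    """Hoek-Brown criterion for intact rock (s = 1): sigma1 at failure.
    The square-root argument is clipped at zero for numerical safety while
    the optimizer explores parameter space."""
    return sigma3 + sigma_c * np.sqrt(np.maximum(m * sigma3 / sigma_c + 1.0, 0.0))

# "True" parameters and test conditions (illustrative values, MPa).
sigma_c_true, m_true = 100.0, 10.0
sigma3_levels = np.array([0.0, 5.0, 10.0, 20.0, 30.0])   # small sample: 5 specimens
noise_sd = 8.0          # strength variability
n_mc = 2000             # Monte Carlo repetitions

estimates = []
for _ in range(n_mc):
    sigma1 = hoek_brown(sigma3_levels, sigma_c_true, m_true) \
             + rng.normal(0, noise_sd, sigma3_levels.size)
    try:
        popt, _ = curve_fit(hoek_brown, sigma3_levels, sigma1, p0=(80.0, 8.0), maxfev=5000)
        estimates.append(popt)
    except RuntimeError:
        continue  # occasional non-convergence with so few points

est = np.array(estimates)
print(f"sigma_c: mean {est[:, 0].mean():.1f}, CV {est[:, 0].std() / est[:, 0].mean():.2%}")
print(f"m      : mean {est[:, 1].mean():.1f}, CV {est[:, 1].std() / est[:, 1].mean():.2%}")
```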
Heine, Angela; Wissmann, Jacqueline; Tamm, Sascha; De Smedt, Bert; Schneider, Michael; Stern, Elsbeth; Verschaffel, Lieven; Jacobs, Arthur M
2013-09-01
The aim of the present study was to probe electrophysiological effects of non-symbolic numerical processing in 20 children with mathematical learning disabilities (mean age = 99.2 months) compared to a group of 20 typically developing matched controls (mean age = 98.4 months). EEG data were obtained while children were tested with a standard non-symbolic numerical comparison paradigm that allowed us to investigate the effects of numerical distance manipulations for different set sizes, i.e., the classical subitizing, counting and estimation ranges. Effects of numerical distance manipulations on event-related potential (ERP) amplitudes as well as activation patterns of underlying current sources were analyzed. In typically developing children, the amplitudes of a late parietal positive-going ERP component showed systematic numerical distance effects that did not depend on set size. For the group of children with mathematical learning disabilities, ERP distance effects were found only for stimuli within the subitizing range. Current source density analysis of distance-related group effects suggested that areas in right inferior parietal regions are involved in the generation of the parietal ERP amplitude differences. Our results suggest that right inferior parietal regions are recruited differentially by controls compared to children with mathematical learning disabilities in response to non-symbolic numerical magnitude processing tasks, but only for stimuli with set sizes that exceed the subitizing range. Copyright © 2012 Elsevier Ltd. All rights reserved.
Misquitta, Alston J; Stone, Anthony J; Price, Sarah L
2008-01-01
In part 1 of this two-part investigation we set out the theoretical basis for constructing accurate models of the induction energy of clusters of moderately sized organic molecules. In this paper we use these techniques to develop a variety of accurate distributed polarizability models for a set of representative molecules that include formamide, N-methyl propanamide, benzene, and 3-azabicyclo[3.3.1]nonane-2,4-dione. We have also explored damping, penetration, and basis set effects. In particular, we have provided a way to treat the damping of the induction expansion. Different approximations to the induction energy are evaluated against accurate SAPT(DFT) energies, and we demonstrate the accuracy of our induction models on the formamide-water dimer.
Classical Testing in Functional Linear Models.
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
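A minimal sketch of the re-expression step, under stated assumptions: densely observed curves are reduced to their leading principal component scores (sklearn's PCA stands in for a formal FPCA), the scalar response is regressed on those scores, and an F-test of the score coefficients against zero serves as the test of no association. The data are simulated.

```python
import numpy as np
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_grid, k = 150, 100, 3          # subjects, grid points per curve, retained components
t = np.linspace(0, 1, n_grid)

# Simulated functional covariates X_i(t) and scalar responses with a true association.
scores_true = rng.normal(size=(n, 2))
X = (scores_true[:, [0]] * np.sin(2 * np.pi * t)
     + scores_true[:, [1]] * np.cos(2 * np.pi * t)
     + rng.normal(0, 0.2, size=(n, n_grid)))
y = 1.0 + 0.8 * scores_true[:, 0] + rng.normal(0, 0.5, n)

# FPCA surrogate: PCA on the discretized, centered curves.
pc_scores = PCA(n_components=k).fit_transform(X)

# Approximate functional linear model: regress y on the leading component scores,
# then F-test all score coefficients jointly against zero (no association).
fit = sm.OLS(y, sm.add_constant(pc_scores)).fit()
restriction = np.hstack([np.zeros((k, 1)), np.eye(k)])  # leave the intercept unrestricted
print(fit.f_test(restriction))
```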
Classical Testing in Functional Linear Models
Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab
2016-01-01
We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155
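As a rough illustration of the re-expression step described above, the sketch below takes a simple principal component decomposition of densely observed curves and applies a classical F test to the leading scores. The truncation level K, the use of an SVD as a stand-in for FPCA, and all variable names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def functional_f_test(X, y, K=3):
    """X: (n, T) curves observed on a common dense grid; y: (n,) scalar responses."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)                    # center the curves
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:K].T                     # leading K principal component scores
    Z = np.column_stack([np.ones(n), scores])  # intercept + scores
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss_full = np.sum((y - Z @ beta) ** 2)
    rss_null = np.sum((y - y.mean()) ** 2)     # intercept-only (no association) model
    F = ((rss_null - rss_full) / K) / (rss_full / (n - K - 1))
    return F, stats.f.sf(F, K, n - K - 1)      # F statistic and p-value
```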
Development of advanced lightweight containment systems
NASA Technical Reports Server (NTRS)
Stotler, C.
1981-01-01
Parametric type data were obtained on advanced lightweight containment systems. These data were used to generate design methods and procedures necessary for the successful development of such systems. The methods were then demonstrated through the design of a lightweight containment system for a CF6 size engine. The containment concept evaluated consisted basically of a lightweight structural sandwich shell wrapped with dry Kevlar cloth. The initial testing was directed towards the determination of the amount of Kevlar required to result in threshold containment for a specific set of test conditions. A relationship was then developed between the thickness required and the energy of the released blade so that the data could be used to design for conditions other than those tested.
Simulations of Liners and Test Objects for a New Atlas Advanced Radiography Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. V. Morgan; S. Iversen; R. A. Hilko
2002-06-01
The Advanced Radiographic Source (ARS) will improve the data significantly due to its smaller source width. Because of the enhanced ARS output, larger source-to-object distances are a reality. The harder ARS source will allow radiography of thick high-Z targets. The five different spectral simulations resulted in similar imaging detector weighted transmission. This work used a limited set of test objects and imaging detectors. Other test objects and imaging detectors could possibly change the MVp-sensitivity result. The effect of material motion blur must be considered for the ARS due to the expected smaller X-ray source size. This study supports the original 1.5-MVp value.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKechnie, Scott; Booth, George H.; Cohen, Aron J.
The best practice in computational methods for determining vertical ionization energies (VIEs) is assessed, via reference to experimentally determined VIEs that are corroborated by highly accurate coupled-cluster calculations. These reference values are used to benchmark the performance of density-functional theory (DFT) and wave function methods: Hartree-Fock theory (HF), second-order Møller-Plesset perturbation theory (MP2) and Electron Propagator Theory (EPT). The core test set consists of 147 small molecules. An extended set of six larger molecules, from benzene to hexacene, is also considered to investigate the dependence of the results on molecule size. The closest agreement with experiment is found for ionization energies obtained from total energy difference calculations. In particular, DFT calculations using exchange-correlation functionals with either a large amount of exact exchange or long-range correction perform best. The results from these functionals are also the least sensitive to an increase in molecule size. In general, ionization energies calculated directly from the orbital energies of the neutral species are less accurate and more sensitive to an increase in molecule size. For the single-calculation approach, the EPT calculations are in closest agreement for both sets of molecules. For the orbital energies from DFT functionals, only those with long-range correction give quantitative agreement, with dramatic failures for all other functionals considered. The results offer a practical hierarchy of approximations for the calculation of vertical ionization energies. In addition, the experimental and computational reference values can be used as a standardized set of benchmarks, against which other approximate methods can be compared.
Rahm, E.J.; Griffith, S.A.; Noltie, Douglas B.; DiStefano, R.J.
2005-01-01
Following its introduction into the St. Francis River drainage, Missouri, U.S.A., the woodland crayfish, Orconectes hylas has expanded its range there; simultaneously populations of two imperiled endemic species, the Big Creek crayfish, O. peruncus, and the St. Francis River crayfish, O. quadruncus have declined therein. In seeking a basis for this decline, our study objective was to test whether the outcome of aggressive inter-specific interactions would favor O. hylas. We studied agonistic encounters between size-matched pairs of same-sex individuals of the introduced and the endemic species in a laboratory setting, first with juveniles and then with adults. Within each life stage, we conducted four sets of laboratory experiments, with approximately 20 trials in each set: (1) O. hylas males versus O. peruncus males, (2) O. hylas males versus O. quadruncus males, (3) O. hylas females versus O. peruncus females, and (4) O. hylas females versus O. quadruncus females. In addition, these same four experiment sets were repeated using larger adult O. hylas crayfish matched with smaller-sized adult endemics, mimicking the mismatch in adult sizes that occurs in the wild. Within each experiment, every trial was analysed to quantify the frequency of occurrence of three initiation behaviors and to determine the overall outcome of the trial. Results did not show O. hylas (juveniles or adults) to be behaviorally dominant over either endemic species. Orconectes hylas displayed the majority of one of the initiation behaviors significantly more often than did the endemic species in only two of the twelve experiments. Because direct aggressive interaction was not demonstrated to be the mechanism whereby O. peruncus and O. quadruncus are being replaced by O. hylas, other life history and ecological factors will require investigation. ?? Koninklijke Brill NV, 2005.
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
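For readers unfamiliar with the construction, the short sketch below builds a maximal independent set on a synthetic Barabasi-Albert graph with networkx and checks that it dominates the network. The greedy networkx routine is only one of several ways to obtain an MIS and is not necessarily the authors' construction scheme.

```python
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=3, seed=0)   # synthetic scale-free network
mis = nx.maximal_independent_set(G, seed=0)           # independent and maximal, hence dominating

dominated = set(mis)
for v in mis:                                         # every node is in the MIS or adjacent to it
    dominated.update(G[v])
print(len(mis), len(dominated) == G.number_of_nodes())
```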
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at the ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed on SPSS 18.0. The ANN models were developed on Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively. The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. This model should be further improved by large-sample-size research.
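As a rough sketch of the kind of network described (15 input, 12 hidden and 1 output neurons), the snippet below uses scikit-learn's MLPClassifier as a stand-in for the Matlab back-propagation network. The placeholder data, the 85:15 split and all settings are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(358, 15))      # 15 significant predictors (placeholder data)
y = rng.integers(0, 2, size=358)    # 1 = CHD case, 0 = control (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                    solver="lbfgs", max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]))
```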
NASA Astrophysics Data System (ADS)
Grudinin, Sergei; Kadukova, Maria; Eisenbarth, Andreas; Marillet, Simon; Cazals, Frédéric
2016-09-01
The 2015 D3R Grand Challenge provided an opportunity to test our new model for the binding free energy of small molecules, as well as to assess our protocol to predict binding poses for protein-ligand complexes. Our pose predictions were ranked 3-9 for the HSP90 dataset, depending on the assessment metric. For the MAP4K dataset the ranks are very dispersed and equal to 2-35, depending on the assessment metric, which does not provide any insight into the accuracy of the method. The main success of our pose prediction protocol was the re-scoring stage using the recently developed Convex-PL potential. We make a thorough analysis of our docking predictions made with AutoDock Vina and discuss the effect of the choice of rigid receptor templates, the number of flexible residues in the binding pocket, the binding pocket size, and the benefits of re-scoring. However, the main challenge was to predict experimentally determined binding affinities for two blind test sets. Our affinity prediction model consisted of two terms, a pairwise-additive enthalpy and a non-pairwise-additive entropy. We trained the free parameters of the model with a regularized regression using affinity and structural data from the PDBBind database. Our model performed very well on the training set but failed on the two test sets. We explain the drawbacks and pitfalls of our model, in particular in terms of relative coverage of the test set by the training set and missed dynamical properties from crystal structures, and discuss different routes to improve it.
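A hedged sketch of the fitting step described above: free parameters of an additive scoring model fit by regularized (ridge) regression against known affinities. The synthetic descriptors and the choice of a ridge penalty are assumptions; the paper's actual descriptors and regularizer may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 60))     # per-complex descriptor vector (placeholder features)
w_true = rng.normal(size=60)
y = X @ w_true + rng.normal(scale=0.5, size=2000)   # synthetic "affinities"

model = Ridge(alpha=10.0).fit(X, y)  # alpha controls the regularization strength
print(model.score(X, y))             # training-set fit; transfer to blind test sets is the hard part
```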
Quantifying Particle Numbers and Mass Flux in Drifting Snow
NASA Astrophysics Data System (ADS)
Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael
2016-12-01
We compare two of the most common methods of quantifying mass flux, particle numbers and particle-size distribution for drifting snow events, the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. For the SPC measurements, the device was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and before each new test the image pixel length is newly calibrated. A calibration study with artificially scattered sand particles and glass beads provides suitable settings for the shadowgraphic imaging as well as obtaining a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass flux obtained for the calibration studies (r ≥ 0.93) and good correlation for the drifting snow experiments (r ≥ 0.81). The impact of measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.
The Muon Portal Project: A large-area tracking detector for muon tomography
NASA Astrophysics Data System (ADS)
Riggi, F.
2016-05-01
The Muon Portal Project [1] is a joint initiative between research and industrial partners, aimed at the construction of a real-size detector prototype to search for hidden high-Z fissile materials inside containers by the muon scattering technique. The detector is based on a set of 48 detection modules (1 m × 3 m), so as to provide four X-Y detection planes, two placed above and two below the container to be inspected. After a research and development phase, which led to the choice and test of the individual components, the construction of the full size detector has already started and will be completed in a few months.
NASA Astrophysics Data System (ADS)
Han, D. Y.; Cao, P.; Liu, J.; Zhu, J. B.
2017-12-01
Cutter spacing is an essential parameter in the TBM design. However, few efforts have been made to study the optimum cutter spacing incorporating penetration depth. To investigate the influence of pre-set penetration depth and cutter spacing on sandstone breakage and TBM performance, a series of sequential laboratory indentation tests were performed in a biaxial compression state. Effects of parameters including penetration force, penetration depth, chip mass, chip size distribution, groove volume, specific energy and maximum angle of lateral crack were investigated. Results show that the total mass of chips, the groove volume and the observed optimum cutter spacing increase with increasing pre-set penetration depth. It is also found that the total mass of chips could be an alternative means to determine optimum cutter spacing. In addition, analysis of chip size distribution suggests that the mass of large chips is dominated by both cutter spacing and pre-set penetration depth. After fractal dimension analysis, we found that cutter spacing and pre-set penetration depth have negligible influence on the formation of small chips and that small chips are formed due to squeezing of cutters and surface abrasion caused by shear failure. Analysis on specific energy indicates that the observed optimum spacing/penetration ratio is 10 for the sandstone, at which, the specific energy and the maximum angle of lateral cracks are smallest. The findings in this paper contribute to better understanding of the coupled effect of cutter spacing and pre-set penetration depth on TBM performance and rock breakage, and provide some guidelines for cutter arrangement.
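For reference, specific energy in indentation tests of this kind is commonly computed as the work done by the cutter divided by the volume of rock removed, as in the sketch below. The force record, the units and the trapezoidal integration are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def specific_energy(force_kN, penetration_mm, groove_volume_cm3):
    """Work under the force-penetration curve (trapezoidal rule) per unit chipped volume."""
    f_N = np.asarray(force_kN) * 1e3            # kN -> N
    p_m = np.asarray(penetration_mm) * 1e-3     # mm -> m
    work_J = np.sum(0.5 * (f_N[1:] + f_N[:-1]) * np.diff(p_m))
    return work_J / (groove_volume_cm3 * 1e-6)  # J per cubic metre

force = np.array([0.0, 40.0, 90.0, 150.0, 210.0])   # placeholder force record (kN)
pen = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # pre-set penetration steps (mm)
print(specific_energy(force, pen, groove_volume_cm3=350.0))
```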
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
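The brute-force idea lends itself to a compact sketch: enumerate sensor subsets of a given size and keep the one whose rows of the sensitivity matrix leave the health parameters most observable. Using the smallest singular value of the sub-matrix as the score is an assumption made here for illustration; the paper's actual criterion, which also weighs noise resistance, may differ.

```python
import itertools
import numpy as np

def select_sensors(S, k):
    """S: (n_sensors, n_parameters) sensitivity matrix; k: desired subset size."""
    best_subset, best_score = None, -np.inf
    for subset in itertools.combinations(range(S.shape[0]), k):
        # smallest singular value of the selected rows: larger means better conditioned
        score = np.linalg.svd(S[list(subset), :], compute_uv=False).min()
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score
```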
Rheology of ice I at low stress and elevated confining pressure
Durham, W.B.; Stern, L.A.; Kirby, S.H.
2001-01-01
Triaxial compression testing of pure, polycrystalline water ice I at conditions relevant to planetary interiors and near-surface environments (differential stresses 0.45 to 10 MPa, temperatures 200 to 250 K, confining pressure 50 MPa) reveals that a complex variety of rheologies and grain structures may exist for ice and that rheology of ice appears to depend strongly on the grain structures. The creep of polycrystalline ice I with average grain size of 0.25 mm and larger is consistent with previously published dislocation creep laws, which are now extended to strain rates as low as 2 × 10^-8 s^-1. When ice I is reduced to very fine and uniform grain size by rapid pressure release from the ice II stability field, the rheology changes dramatically. At 200 and 220 K the rheology matches the grain-size-sensitive rheology measured by Goldsby and Kohlstedt [1997, this issue] at 1 atm. This finding dispels concerns that the Goldsby and Kohlstedt results were influenced by mechanisms such as microfracturing and cavitation, processes not expected to operate at elevated pressures in planetary interiors. At 233 K and above, grain growth causes the fine-grained ice to become more creep resistant. Scanning electron microscopy investigation of some of these deformed samples shows that grains have markedly coarsened and the strain hardening can be modeled by normal grain growth and the Goldsby and Kohlstedt rheology. Several samples also displayed very heterogeneous grain sizes and high aspect ratio grain shapes. Grain-size-sensitive creep and dislocation creep coincidentally contribute roughly equal amounts of strain rate at conditions of stress, temperature, and grain size that are typical of terrestrial and planetary settings, so modeling ice dynamics in these settings must include both mechanisms. Copyright 2001 by the American Geophysical Union.
Ruane, Lauren G.; Rotzin, Andrew T.; Congleton, Philip H.
2014-01-01
Background and Aims Natural variation in fruit and seed set may be explained by factors that affect the composition of pollen grains on stigmas. Self-incompatible species require compatible outcross pollen grains to produce seeds. The siring success of outcross pollen grains, however, can be hindered if self (or other incompatible) pollen grains co-occur on stigmas. This study identifies factors that determine fruit set in Phlox hirsuta, a self-sterile endangered species that is prone to self-pollination, and its associated fitness costs. Methods Multiple linear regressions were used to identify factors that explain variation in percentage fruit set within three of the five known populations of this endangered species. Florivorous beetle density, petal colour, floral display size, local conspecific density and pre-dispersal seed predation were quantified and their effects on the ability of flowers to produce fruits were assessed. Key Results In all three populations, percentage fruit set decreased as florivorous beetle density increased and as floral display size increased. The effect of floral display size on fruit set, however, often depended on the density of nearby conspecific plants. High local conspecific densities offset – even reversed – the negative effects of floral display size on percentage fruit set. Seed predation by mammals decreased fruit set in one population. Conclusions The results indicate that seed production in P. hirsuta can be maximized by selectively augmenting populations in areas containing isolated large plants, by reducing the population sizes of florivorous beetles and by excluding mammals that consume unripe fruits. PMID:24557879
Implications of Atmospheric Test Fallout Data for Nuclear Winter.
NASA Astrophysics Data System (ADS)
Baker, George Harold, III
1987-09-01
Atmospheric test fallout data have been used to determine admissible dust particle size distributions for nuclear winter studies. The research was originally motivated by extreme differences noted in the magnitude and longevity of dust effects predicted by particle size distributions routinely used in fallout predictions versus those used for nuclear winter studies. Three different sets of historical data have been analyzed: (1) Stratospheric burden of Strontium-90 and Tungsten-185, 1954-1967 (92 contributing events); (2) Continental U.S. Strontium-90 fallout through 1958 (75 contributing events); (3) Local Fallout from selected Nevada tests (16 events). The contribution of dust to possible long term climate effects following a nuclear exchange depends strongly on the particle size distribution. The distribution affects both the atmospheric residence time and optical depth. One dimensional models of stratospheric/tropospheric fallout removal were developed and used to identify optimum particle distributions. Results indicate that particle distributions which properly predict bulk stratospheric activity transfer tend to be somewhat smaller than number size distributions used in initial nuclear winter studies. In addition, both 90Sr and 185W fallout behavior is better predicted by the lognormal distribution function than the prevalent power law hybrid function. It is shown that the power law behavior of particle samples may well be an aberration of gravitational cloud stratification. Results support the possible existence of two independent particle size distributions in clouds generated by surface or near surface bursts. One distribution governs late time stratospheric fallout, the other governs early time fallout. A bimodal lognormal distribution is proposed to describe the cloud particle population. The distribution predicts higher initial sunlight attenuation and lower late time attenuation than the power law hybrid function used in initial nuclear winter studies.
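The proposed bimodal lognormal description can be written as a weighted mixture of two lognormal modes, as sketched below; all parameter values are placeholders rather than values fitted to the fallout data.

```python
import numpy as np
from scipy.stats import lognorm

def bimodal_lognormal_pdf(d, w, mu1, sigma1, mu2, sigma2):
    """d: particle diameter; w: weight of the first mode (e.g. the late-time mode)."""
    mode1 = lognorm.pdf(d, s=sigma1, scale=np.exp(mu1))
    mode2 = lognorm.pdf(d, s=sigma2, scale=np.exp(mu2))
    return w * mode1 + (1.0 - w) * mode2

d = np.logspace(-1, 3, 200)   # 0.1 to 1000 micrometres (placeholder range)
pdf = bimodal_lognormal_pdf(d, w=0.6, mu1=np.log(1.0), sigma1=0.8,
                            mu2=np.log(50.0), sigma2=1.2)
```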
Perianth organization and intra-specific floral variability.
Herrera, J; Arista, M; Ortiz, P L
2008-11-01
Floral symmetry and fusion of perianth parts are factors that contribute to fine-tune the match between flowers and their animal pollination vectors. In the present study, we investigated whether the possession of a sympetalous (fused) corolla and bilateral symmetry of flowers translate into decreased intra-specific variability as a result of natural stabilizing selection exerted by pollinators. Average size of the corolla and intra-specific variability were determined in two sets of southern Spanish entomophilous plant species. In the first set, taxa were paired by family to control for the effect of phylogeny (phylogenetically independent contrasts), whereas in the second set species were selected at random. Flower size data from a previous study (with different species) were also used to test the hypothesis that petal fusion contributes to decrease intra-specific variability. In the phylogenetically independent contrasts, floral symmetry was a significant correlate of intra-specific variation, with bilaterally symmetrical flowers showing more constancy than radially symmetrical flowers (i.e. unsophisticated from a functional perspective). As regards petal fusion, species with fused petals were on average more constant than choripetalous species, but the difference was not statistically significant. The reanalysis of data from a previous study yielded largely similar results, with a distinct effect of symmetry on variability, but no effect of petal fusion. The randomly-chosen species sample, on the other hand, failed to reveal any significant effect of either symmetry or petal fusion on intra-specific variation. The problem of low statistical power in this kind of analysis, and the difficulty of testing an evolutionary hypothesis that involves phenotypic traits with a high degree of morphological correlation, are discussed.
The prevalence of terraced treescapes in analyses of phylogenetic data sets.
Dobrin, Barbara H; Zwickl, Derrick J; Sanderson, Michael J
2018-04-04
The pattern of data availability in a phylogenetic data set may lead to the formation of terraces, collections of equally optimal trees. Terraces can arise in tree space if trees are scored with parsimony or with partitioned, edge-unlinked maximum likelihood. Theory predicts that terraces can be large, but their prevalence in contemporary data sets has never been surveyed. We selected 26 data sets and phylogenetic trees reported in recent literature and investigated the terraces to which the trees would belong, under a common set of inference assumptions. We examined terrace size as a function of the sampling properties of the data sets, including taxon coverage density (the proportion of taxon-by-gene positions with any data present) and a measure of gene sampling "sufficiency". We evaluated each data set in relation to the theoretical minimum gene sampling depth needed to reduce terrace size to a single tree, and explored the impact of the terraces found in replicate trees in bootstrap methods. Terraces were identified in nearly all data sets with taxon coverage densities < 0.90. They were not found, however, in high-coverage-density (i.e., ≥ 0.94) transcriptomic and genomic data sets. The terraces could be very large, and size varied inversely with taxon coverage density and with gene sampling sufficiency. Few data sets achieved a theoretical minimum gene sampling depth needed to reduce terrace size to a single tree. Terraces found during bootstrap resampling reduced overall support. If certain inference assumptions apply, trees estimated from empirical data sets often belong to large terraces of equally optimal trees. Terrace size correlates to data set sampling properties. Data sets seldom include enough genes to reduce terrace size to one tree. When bootstrap replicate trees lie on a terrace, statistical support for phylogenetic hypotheses may be reduced. Although some of the published analyses surveyed were conducted with edge-linked inference models (which do not induce terraces), unlinked models have been used and advocated. The present study describes the potential impact of that inference assumption on phylogenetic inference in the context of the kinds of multigene data sets now widely assembled for large-scale tree construction.
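Taxon coverage density, as used above, is simply the filled fraction of the taxon-by-gene presence matrix; a minimal illustration with a placeholder matrix:

```python
import numpy as np

presence = np.array([[1, 1, 0, 1],    # rows = taxa, columns = genes
                     [1, 0, 0, 1],    # 1 = data present, 0 = missing
                     [1, 1, 1, 0]])
coverage_density = presence.mean()    # fraction of filled cells: 8/12 ~ 0.67
print(coverage_density)
```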
Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology
Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.
2015-01-01
The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance domain-specific video compression technique. We describe a novel compression method for video microscopy data. The method is based on Pearson's correlation and mathematical morphology. The method makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000 and H.264 compression for various kinds of video microscopy data including fluorescence video and brightfield video. We find that for certain data sets, the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
Greater vertical spot spacing to improve femtosecond laser capsulotomy quality.
Schultz, Tim; Joachim, Stephanie C; Noristani, Rozina; Scott, Wendell; Dick, H Burkhard
2017-03-01
To evaluate the effect of adapted capsulotomy laser settings on the cutting quality in femtosecond laser-assisted cataract surgery. Ruhr-University Eye Clinic, Bochum, Germany. Prospective randomized case series. Eyes were treated with 1 of 2 laser settings. In Group 1, the regular standard settings were used (incisional depth 600 μm, pulse energy 4 μJ, horizontal spot spacing 5 μm, vertical spot spacing 10 μm, treatment time 1.2 seconds). In Group 2, vertical spot spacing was increased to 15 μm and the treatment time was 1.0 seconds. Light microscopy was used to evaluate the cut quality of the capsule edge. The size and number of tags (misplaced laser spots, which form a second cut of the capsule with high tear risk) were evaluated in a blinded manner. Groups were compared using the Mann-Whitney U test. The study comprised 100 eyes (50 eyes in each group). Cataract surgery was successfully completed in all eyes, and no anterior capsule tear occurred during the treatment. Histologically, significantly fewer tags were observed with the new capsulotomy laser setting. The mean score for the number and size of free tags was significantly lower in this group than with the standard settings (P < .001). The new laser settings improved cut quality and reduced the number of tags. The modification has the potential to reduce the risk for radial capsule tears in femtosecond laser-assisted cataract surgery. With the new settings, no tags and no capsule tears were observed under the operating microscope in any eye. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Grain size evolution and convection regimes of the terrestrial planets
NASA Astrophysics Data System (ADS)
Rozel, A.; Golabek, G. J.; Boutonnet, E.
2011-12-01
A new model of grain size evolution was recently proposed in Rozel et al. 2010. This approach stipulates that the grain size dynamics is governed by two additive and simultaneous processes: grain growth and dynamic recrystallization. We use the usual normal grain growth laws for the growth part. For dynamic recrystallization, reducing the mean grain size increases the total area of grain boundaries. Grain boundaries carry some surface tension, so some energy is required to decrease the mean grain size. We consider that this energy is made available by mechanical work, which is usually assumed to be converted into heat via viscous dissipation. A partitioning parameter f is then required to specify what fraction of the energy is dissipated as heat and what fraction is converted into surface tension. This study gives a new calibration of the partitioning parameter for major Earth materials involved in the dynamics of the terrestrial planets. Our calibration is consistent with the published piezometric relations available in the literature (equilibrium grain size versus shear stress). We test this new model of grain size evolution in a set of numerical computations of the dynamics of the Earth using stagYY. We show that grain size evolution has a major effect on the convection regimes of terrestrial planets.
Liver Rapid Reference Set Application ( #2): Lubman - Univ of Michigan (2010) — EDRN Public Portal
We are requesting the reference set, which includes 50 HCC cases and 50 cirrhotic controls. In our preliminary study, AFP had an AUROC of 0.66 while the AUROC for the 5 glycoproteins was 0.81. The sensitivity and specificity for the 5 glycoproteins were 79% and 72% at the point that maximizes sensitivity+specificity in the ROC curve, and they were 79% and 35%, respectively, for AFP at the same point in the ROC curve. The reference set will allow us to determine the best performance of the 5 glycoproteins by themselves or whether their combination has a better sensitivity and/or specificity and AUROC. While a direct comparison with AFP will be made, the reference set will not allow a robust comparison due to the low sample size. If the glycoproteins are complementary or have better performance than AFP, then the next step would be to test them in the entire phase 2 hepatocellular carcinoma set.
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in mirt (Chalmers, 2012).
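For reference, the four-parameter logistic item response function adds a lower asymptote (guessing) and an upper asymptote (slipping) to the usual two-parameter curve; the sketch below is the standard 4PM form, not code from the study's supplement.

```python
import numpy as np

def irf_4pm(theta, a, b, g, u):
    """P(endorse | theta) under the 4PM: discrimination a, difficulty b,
    lower asymptote g, upper asymptote u (with g < u)."""
    return g + (u - g) / (1.0 + np.exp(-a * (theta - b)))

p = irf_4pm(theta=np.linspace(-3, 3, 7), a=1.2, b=0.0, g=0.15, u=0.95)
print(p)
```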
Lang, Katie; Stahl, Daniel; Espie, Jonathan; Treasure, Janet; Tchanturia, Kate
2014-05-01
Set shifting inefficiencies in adults with anorexia nervosa (AN) are established; however, the neurocognitive profile of children and adolescents with AN is less clear. This study aimed to provide a review of the literature. Electronic databases were used to search for manuscripts. Meta-analysis was performed on seven studies using two neuropsychological tests (Trail Making Task, TMT; Wisconsin Card Sorting Task, WCST). The mean difference in outcome between AN and healthy control (HC) groups was standardized by calculating Cohen's d. Meta-analysis of TMT studies showed a nonsignificant, negative pooled standardized mean difference of -0.005 (95% C.I. -0.416 to 0.406, z = 0.02, p = .98). WCST studies revealed a nonsignificant pooled effect size of d = 0.196 (95% C.I. -0.091 to 0.483, z = 1.34, p = .18). Studies which did not allow for a calculation of effect size typically showed a nonsignificant, worse performance by the AN groups. The inefficiencies in set shifting that are apparent in the adult AN literature do not appear to be as pronounced in children. This may suggest that set shifting difficulties in adult AN are the result of starvation or indicative of longer duration of illness. Larger studies are needed to confirm these impressions. Copyright © 2013 Wiley Periodicals, Inc.
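The standardized mean difference used here is Cohen's d, the group mean difference divided by the pooled standard deviation; a minimal sketch with fabricated placeholder scores:

```python
import numpy as np

def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

an_scores = np.array([41.0, 38.5, 45.2, 39.9, 43.1])   # placeholder completion times
hc_scores = np.array([40.2, 37.8, 44.0, 41.5, 42.3])
print(cohens_d(an_scores, hc_scores))
```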
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
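A toy version of the interval-selection idea is sketched below: model the gamma and neutron travel-time distributions parametrically and pick the widest neutron window whose expected gamma contamination stays under a tolerance. The Gaussian travel-time models, the equal-rate assumption and all numbers are illustrative only; the paper's statistical models and optimality criterion are more elaborate.

```python
import numpy as np
from scipy.stats import norm

gamma_tt = norm(loc=3.0, scale=0.4)      # gamma travel times (ns), placeholder model
neutron_tt = norm(loc=30.0, scale=8.0)   # neutron travel times (ns), placeholder model

def neutron_window(max_contamination=1e-3):
    """Widest symmetric window about the neutron peak meeting the contamination bound
    (assumes equal numbers of gammas and neutrons reaching the detector)."""
    best = None
    for half_width in np.linspace(1.0, 25.0, 200):
        lo, hi = 30.0 - half_width, 30.0 + half_width
        n_frac = neutron_tt.cdf(hi) - neutron_tt.cdf(lo)   # neutrons kept
        g_frac = gamma_tt.cdf(hi) - gamma_tt.cdf(lo)       # gammas leaking in
        if g_frac / (g_frac + n_frac) <= max_contamination:
            best = (lo, hi, n_frac)
    return best

print(neutron_window())
```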
Comparison of Signal Response Between EDM Notch and Cracks in Eddy-Current Testing
NASA Technical Reports Server (NTRS)
Kane, Mary; Koshti, Ajay
2008-01-01
In the field of ET, an eddy-current instrument is calibrated on a manufactured notch that is designed to simulate a defect in a part. The calibrated instrument is then used to scan parts under the assumption that any response over half the amplitude of the notch signal indicates a defect. The purpose of this study is to attempt a direct comparison of the signal response observed from an EDM notch to that from a crack of the same size. To make this comparison, test equipment will be set up and calibrated per normal inspection procedures. Once this has been achieved, both notches and crack specimens of as many different sizes as are available will be scanned and the data recorded. These data will then be analyzed to compare the responses. The results should also indicate whether it is acceptable to use the half-amplitude method for determining if a part is defective. The tests will be performed on two materials commonly inspected, titanium and aluminum, allowing a comparison of results between the materials.
Numerical Study of a Convective Turbulence Encounter
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Hamilton, David W.; Bowles, Roland L.
2002-01-01
A numerical simulation of a convective turbulence event is investigated and compared with observational data. The specific case was encountered during one of NASA's flight tests and was characterized by severe turbulence. The event was associated with overshooting convective turrets that contained low to moderate radar reflectivity. Model comparisons with observations are quite favorable. Turbulence hazard metrics are proposed and applied to the numerical data set. Issues such as adequate grid size are examined.
2010-08-20
for transmitting the required power and torque. The proper gear set has also been sized to ensure the life expectancy of the test rig. The shaft design ...these at minimal cost and great environmental safety. These materials specifically designed on antiwear and extreme pressure chemistries can...nanolubricant additives are designed as surface-stabilized nanomaterials that are dispersed in a hydrocarbon medium for maximum effectiveness. This
Maximizing return on socioeconomic investment in phase II proof-of-concept trials.
Chen, Cong; Beckman, Robert A
2014-04-01
Phase II proof-of-concept (POC) trials play a key role in oncology drug development, determining which therapeutic hypotheses will undergo definitive phase III testing according to predefined Go-No Go (GNG) criteria. The number of possible POC hypotheses likely far exceeds available public or private resources. We propose a design strategy for maximizing return on socioeconomic investment in phase II trials that obtains the greatest knowledge with the minimum patient exposure. We compare efficiency using the benefit-cost ratio, defined to be the risk-adjusted number of truly active drugs correctly identified for phase III development divided by the risk-adjusted total sample size in phase II and III development, for different POC trial sizes, powering schemes, and associated GNG criteria. It is most cost-effective to conduct small POC trials and set the corresponding GNG bars high, so that more POC trials can be conducted under socioeconomic constraints. If δ is the minimum treatment effect size of clinical interest in phase II, the study design with the highest benefit-cost ratio has approximately 5% type I error rate and approximately 20% type II error rate (80% power) for detecting an effect size of approximately 1.5δ. A Go decision to phase III is made when the observed effect size is close to δ. With the phenomenal expansion of our knowledge in molecular biology leading to an unprecedented number of new oncology drug targets, conducting more small POC trials and setting high GNG bars maximize the return on socioeconomic investment in phase II POC trials. ©2014 AACR.
Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis
ERIC Educational Resources Information Center
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio
2010-01-01
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
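As a concrete companion to the passage above, the sketch below averages a set of effect sizes with inverse-variance weights, adding the DerSimonian-Laird estimate of between-study variance for the random-effects case; the effect sizes and variances are placeholders.

```python
import numpy as np

def random_effects_mean(d, v):
    """d: study effect sizes; v: their estimated sampling variances."""
    w = 1.0 / v                                # fixed-effect (inverse-variance) weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)    # DerSimonian-Laird between-study variance
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    return np.sum(w_star * d) / np.sum(w_star)

d = np.array([0.30, 0.10, 0.45, 0.20])
v = np.array([0.04, 0.09, 0.02, 0.06])
print(random_effects_mean(d, v))
```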
Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z
2015-01-01
Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs varies and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were defined based on variation in column thickness, beam depth and number of tabs in the beam end connector, in order to investigate the most influential factors affecting connection performance. Four repeat tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study of connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. To identify the most appropriate method, the mean stiffness of all tested connections and the variance in stiffness values according to all three methods were calculated. The initial stiffness method was found to overestimate the stiffness values compared with the other two methods. The equal area method provided the most consistent stiffness values and the lowest variance in the data set compared with the other two methods.
Neural activity in the hippocampus predicts individual visual short-term memory capacity.
von Allmen, David Yoh; Wurmitzer, Karoline; Martin, Ernst; Klaver, Peter
2013-07-01
Although the hippocampus had been traditionally thought to be exclusively involved in long-term memory, recent studies raised controversial explanations why hippocampal activity emerged during short-term memory tasks. For example, it has been argued that long-term memory processes might contribute to performance within a short-term memory paradigm when memory capacity has been exceeded. It is still unclear, though, whether neural activity in the hippocampus predicts visual short-term memory (VSTM) performance. To investigate this question, we measured BOLD activity in 21 healthy adults (age range 19-27 yr, nine males) while they performed a match-to-sample task requiring processing of object-location associations (delay period = 900 ms; set size conditions 1, 2, 4, and 6). Based on individual memory capacity (estimated by Cowan's K-formula), two performance groups were formed (high and low performers). Within whole brain analyses, we found a robust main effect of "set size" in the posterior parietal cortex (PPC). In line with a "set size × group" interaction in the hippocampus, a subsequent Finite Impulse Response (FIR) analysis revealed divergent hippocampal activation patterns between performance groups: Low performers (mean capacity = 3.63) elicited increased neural activity at set size two, followed by a drop in activity at set sizes four and six, whereas high performers (mean capacity = 5.19) showed an incremental activity increase with larger set size (maximal activation at set size six). Our data demonstrated that performance-related neural activity in the hippocampus emerged below capacity limit. In conclusion, we suggest that hippocampal activity reflected successful processing of object-location associations in VSTM. Neural activity in the PPC might have been involved in attentional updating. Copyright © 2013 Wiley Periodicals, Inc.
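Cowan's K, referred to above, estimates capacity as set size times the difference between hit and false-alarm rates; the sketch below uses the standard single-probe form with fabricated counts, which may differ in detail from the formula applied to this match-to-sample task.

```python
def cowans_k(set_size, hits, misses, false_alarms, correct_rejections):
    """K = set size x (hit rate - false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return set_size * (hit_rate - fa_rate)

print(cowans_k(set_size=6, hits=40, misses=10, false_alarms=8, correct_rejections=42))
```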
Millner, Alexander J; Coppersmith, Daniel D L; Teachman, Bethany A; Nock, Matthew K
2018-05-21
Assessing suicidal thoughts and behaviors is difficult because at-risk individuals often fail to provide honest or accurate accounts of their suicidal thoughts or intentions. Research has shown that the Death Implicit Association Test (D-IAT), a behavioral test that measures implicit (i.e., outside of conscious control) associations between oneself and death concepts, can differentiate among people with different suicidal histories, such as those with different severity or recency of suicidal behaviors. We report here on the development and evaluation of a shorter and simpler version of the D-IAT called the Death Brief Implicit Association Test (D-BIAT). We recruited large (ns > 1,500) samples of participants to complete the original D-IAT and shorter D-BIAT via a public web-based platform and evaluated different scoring approaches, assessed the reliability and validity of the D-BIAT and compared it with the D-IAT. We found that the D-BIAT was reliable, provided significant group differences with effect sizes on par with the D-IAT, and yielded similarly sized classification metrics (i.e., receiver operating characteristics). Although the D-IAT was nonsignificantly better on most outcomes, the D-BIAT is 1 to 1.5 minutes shorter and provided larger effect sizes for distinguishing between past-year and lifetime attempters. Thus, there is a trade-off between administration time and improved outcomes associated with increased data. The D-BIAT should be considered for use where time or participant burden needs to be minimized, such as in clinical settings. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Effect size for the main cognitive function determinants in a large cross-sectional study.
Mura, T; Amieva, H; Goldberg, M; Dartigues, J-F; Ankri, J; Zins, M; Berr, C
2016-11-01
The aim of our study was to examine the effect sizes of different cognitive function determinants in middle and early old age. Cognitive functions were assessed in 11 711 volunteers (45 to 75 years old), included in the French CONSTANCES cohort between January 2012 and May 2014, using the free and cued selective reminding test (FCSRT), verbal fluency tasks, digit-symbol substitution test (DSST) and trail making test (TMT), parts A and B. The effect sizes of socio-demographic (age, sex, education), lifestyle (alcohol, tobacco, physical activity), cardiovascular (diabetes, blood pressure) and psychological (depressive symptomatology) variables were computed as omega-squared coefficients (ω²; the part of the variation of a neuropsychological score that is independently explained by a given variable). These sets of variables explained from R² = 10% (semantic fluency) to R² = 26% (DSST) of the total variance. In all tests, socio-demographic variables accounted for the greatest part of the explained variance. Age explained from ω² = 0.5% (semantic fluency) to ω² = 7.5% (DSST) of the total score variance, gender from ω² = 5.2% (FCSRT) to a negligible part (semantic fluency or TMT) and education from ω² = 7.2% (DSST) to ω² = 1.4% (TMT-A). Behavioral, cardiovascular and psychological variables only slightly influenced the cognitive test results (all ω² < 0.8%, most ω² < 0.1%). Socio-demographic variables (age, gender and education) are the main variables associated with cognitive performance variations between 45 and 75 years of age in the general population. © 2016 EAN.
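For orientation, the one-way (textbook) form of the omega-squared coefficient is sketched below: the between-group sum of squares, corrected for its degrees of freedom, as a share of total variance. The study's multivariable computation of independently explained variance may differ in detail, and the example scores are placeholders.

```python
import numpy as np

def omega_squared(groups):
    """groups: list of 1-D arrays of test scores, one array per factor level."""
    all_scores = np.concatenate(groups)
    grand_mean = all_scores.mean()
    ss_total = np.sum((all_scores - grand_mean) ** 2)
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    ms_within = (ss_total - ss_between) / df_within
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)

groups = [np.array([48.0, 52.0, 50.0]),
          np.array([55.0, 58.0, 54.0]),
          np.array([60.0, 63.0, 61.0])]
print(omega_squared(groups))
```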
Is a data set distributed as a power law? A test, with application to gamma-ray burst brightnesses
NASA Technical Reports Server (NTRS)
Wijers, Ralph A. M. J.; Lubin, Lori M.
1994-01-01
We present a method to determine whether an observed sample of data is drawn from a parent distribution that is a pure power law. The method starts from a class of statistics which have zero expectation value under the null hypothesis, H_0, that the distribution is a pure power law: F(x) ∝ x^(-alpha). We study one simple member of the class, named the `bending statistic' B, in detail. It is most effective for detecting a type of deviation from a power law where the power-law slope varies slowly and monotonically as a function of x. Our estimator of B has a distribution under H_0 that depends only on the size of the sample, not on the parameters of the parent population, and is approximated well by a normal distribution even for modest sample sizes. The bending statistic can therefore be used to test whether a set of numbers is drawn from any power-law parent population. Since many measurable quantities in astrophysics have distributions that are approximately power laws, and since deviations from the ideal power law often provide interesting information about the object of study (e.g., a `bend' or `break' in a luminosity function, a line in an X- or gamma-ray spectrum), we believe that a test of this type will be useful in many different contexts. In the present paper, we apply our test to various subsamples of gamma-ray burst brightnesses from the first-year Burst and Transient Source Experiment (BATSE) catalog and show that we can only marginally detect the expected steepening of the log N(>C_max) - log C_max distribution.
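As a small companion to the test described above, the sketch below computes the maximum-likelihood slope of a pure power-law density above a threshold x_min (the standard Hill-type estimator), which would serve as the fitted null model before applying a deviation test such as the bending statistic; it is not the authors' code.

```python
import numpy as np

def powerlaw_mle_alpha(x, x_min):
    """MLE of alpha for a density proportional to x^(-alpha) on x >= x_min."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

rng = np.random.default_rng(1)
samples = rng.pareto(1.5, size=5000) + 1.0   # classical Pareto tail, density exponent ~2.5
print(powerlaw_mle_alpha(samples, x_min=1.0))
```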
Does precision decrease with set size?
Mazyar, Helga; van den Berg, Ronald; Ma, Wei Ji
2012-01-01
The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when it is homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations. PMID:22685337
Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan
2013-06-07
Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.
Reduced-portion entrées in a worksite and restaurant setting: impact on food consumption and waste.
Berkowitz, Sarah; Marquart, Len; Mykerezi, Elton; Degeneffe, Dennis; Reicks, Marla
2016-11-01
Large portion sizes in restaurants have been identified as a public health risk. The purpose of the present study was to determine whether customers in two different food-service operator segments (non-commercial worksite cafeteria and commercial upscale restaurant) would select reduced-portion menu items and the impact of selecting reduced-portion menu items on energy and nutrient intakes and plate waste. Consumption and plate waste data were collected for 5 weeks before and 7 weeks after introduction of five reduced-size entrées in a worksite lunch cafeteria and for 3 weeks before and 4 weeks after introduction of five reduced-size dinner entrées in a restaurant setting. Full-size entrées were available throughout the entire study periods. A worksite cafeteria and a commercial upscale restaurant in a large US Midwestern metropolitan area. Adult worksite employees and restaurant patrons. Reduced-size entrées accounted for 5·3-12·8 % and 18·8-31·3 % of total entrées selected in the worksite and restaurant settings, respectively. Food waste, energy intake and intakes of total fat, saturated fat, cholesterol, Na, fibre, Ca, K and Fe were significantly lower when both full- and reduced-size entrées were served in the worksite setting and in the restaurant setting compared with when only full-size entrées were served. A relatively small proportion of reduced-size entrées were selected but still resulted in reductions in overall energy and nutrient intakes. These outcomes could serve as the foundation for future studies to determine strategies to enhance acceptance of reduced-portion menu items in restaurant settings.
Relative Humidity in Limited Streamer Tubes for Stanford Linear Accelerator Center's BaBar Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, M.I.; /MIT; Convery, M.
2005-12-15
The BABAR Detector at the Stanford Linear Accelerator Center studies the decay of B mesons created in e{sup +}e{sup -} collisions. The outermost layer of the detector, used to detect muons and neutral hadrons created during this process, is being upgraded from Resistive Plate Chambers (RPCs) to Limited Streamer Tubes (LSTs). The standard-size LST tube consists of eight cells, with a silver-plated wire running down the center of each. A large potential difference is placed between the wires and ground. Gas flows through a series of modules connected with tubing, typically four. LSTs must be carefully tested before installation, as it will be extremely difficult to repair any damage once installed in the detector. In the testing process, the count rate in most modules was stable and consistent with the cosmic ray rate over an approximately 500 V operating range, between 5400 and 5900 V. The count rate in some modules, however, unexpectedly spiked near the operating point. In general, the modules through which the gas first flows did not show this problem, but those further along the gas chain were much more likely to do so. The suggestion was that this spike was due to higher humidity in the modules furthest from the fresh, dry inflowing gas, and that the water molecules in more humid modules were adversely affecting the modules' performance. This project studied the effect of humidity in the modules, using a small capacitive humidity sensor (Honeywell). The sensor provided a humidity-dependent output voltage, as well as a temperature measurement from a thermistor. A full-size hygrometer (Panametrics) was used for testing and calibrating the Honeywell sensors. First, the relative humidity of the air was measured. For the full calibration, a special gas-mixing setup was used, in which the relative humidity of the LST gas mixture could be varied from almost dry to almost fully saturated. With the sensor calibrated, a set of sensors was used to measure humidity vs. time in the LSTs. The sensors were placed in two sets of LST modules, with one gas line flowing through each set. These modules were tested for count rate vs. voltage while simultaneously measuring relative humidity in each module. One set produced expected readings, while the other showed the spike in count rate. The relative humidity in the two sets of modules looked very similar, but it rose significantly for modules further along the gas chain.
Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders
2010-06-01
Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
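For orientation, the following sketch shows the conventional holdout-based Bayesian credibility interval that the paper starts from: with k errors on n test examples and a Beta prior, the posterior for the error rate is Beta(a + k, b + n - k). The uniform Beta(1, 1) prior corresponds to the no-prior-knowledge case; the sharper prior shown stands in loosely for an informative prior of the kind the ME approach would supply, and its parameters are purely illustrative.

```python
# Sketch: Bayesian credibility interval for a classifier's unknown error rate
# from a holdout test with n examples and k errors (binomial likelihood).
from scipy.stats import beta

def error_rate_ci(k, n, a=1.0, b=1.0, level=0.95):
    """Central credibility interval under a Beta(a, b) prior.
    a = b = 1 is the uniform (no prior knowledge) case."""
    lo = (1 - level) / 2
    post = beta(a + k, b + n - k)
    return post.ppf(lo), post.ppf(1 - lo)

# Small test set: the uniform-prior interval is wide ...
print(error_rate_ci(5, 40))                  # uniform prior
# ... while an informative prior (hypothetical parameters) tightens it.
print(error_rate_ci(5, 40, a=3.0, b=20.0))
```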
The tempo and mode of evolution: body sizes of island mammals.
Raia, Pasquale; Meiri, Shai
2011-07-01
The tempo and mode of body size evolution on islands are believed to be well known. It is thought that body size evolves relatively quickly on islands toward the mammalian modal value, thus generating extreme cases of size evolution and the island rule. Here, we tested both theories in a phylogenetically explicit context, by using two different species-level mammalian phylogenetic hypotheses limited to sister clades dichotomizing into an exclusively insular and an exclusively mainland daughter node. Taken as a whole, mammals were found to show a largely punctuational mode of size evolution. We found that, accounting for this, and regardless of the phylogeny used, size evolution on islands is no faster than on the continents. We compared different selection regimes using a set of Ornstein-Uhlenbeck models to examine the effects of insularity on the mode of evolution. The models strongly supported clade-specific selection regimes. Under this regime, however, an evolutionary model allowing insular species to evolve differently from their mainland relatives performs worse than a model that ignores insularity as a factor. Thus, insular taxa do not experience statistically different selection from their mainland relatives. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution
NASA Astrophysics Data System (ADS)
Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.
2004-02-01
We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of <1 μm the real part of the complex refractive index was retrieved to an accuracy of +/-0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.
Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution.
Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N
2004-02-10
We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to approximately 10 microm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of approximately 50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of < 1 microm the real part of the complex refractive index was retrieved to an accuracy of +/- 0.05, the imaginary part was retrieved to 50% uncertainty. Simulations dealing with a mode-dependent complex refractive index showed that an average complex refractive index is derived that lies between the values for the two individual modes. Thus it becomes possible to investigate external mixtures of particle size distributions, which, for example, might be present along continental rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.
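Both records above rely on Tikhonov inversion with regularization. The sketch below illustrates the core linear-algebra step on a toy problem: a handful of noisy "optical" measurements, a stand-in forward kernel, and a regularized solve of min ||Kx - d||^2 + lam ||Lx||^2. The kernel, size grid, and regularization parameter are placeholders, not the Mie kernels or settings used by the authors.

```python
# Sketch of Tikhonov inversion with regularization: solve min ||K x - d||^2 + lam ||L x||^2.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_bins = 5, 40                 # 5 optical coefficients, 40 size bins (placeholders)
K = rng.random((n_data, n_bins))       # stand-in forward kernel (not a Mie kernel)
x_true = np.exp(-0.5 * ((np.arange(n_bins) - 15) / 4.0) ** 2)   # smooth "distribution"
d = K @ x_true + 0.02 * rng.standard_normal(n_data)             # noisy measurements

L = np.eye(n_bins)                     # identity regularization operator
lam = 1e-2                             # regularization parameter (would normally be tuned)
x_reg = np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ d)

print("relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```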
Once-through integral system (OTIS): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J R
1986-09-01
A scaled experimental facility, designated the once-through integral system (OTIS), was used to acquire post-small break loss-of-coolant accident (SBLOCA) data for benchmarking system codes. OTIS was also used to investigate the application of the Abnormal Transient Operating Guidelines (ATOG) used in the Babcock and Wilcox (B and W) designed nuclear steam supply system (NSSS) during the course of an SBLOCA. OTIS was a single-loop facility with a plant-to-model power scale factor of 1686. OTIS maintained the key elevations, approximate component volumes, and loop flow resistances, and simulated the major component phenomena of a B and W raised-loop nuclear plant. A test matrix consisting of 15 tests divided into four categories was performed. The largest group contained 10 tests and was defined to parametrically obtain an extensive set of plant-typical experimental data for code benchmarking. Parameters such as leak size, leak location, and high-pressure injection (HPI) shut-off head were individually varied. The remaining categories were specified to study the impact of the ATOGs (2 tests), to note the effect of guard heater operation on observed phenomena (2 tests), and to provide a data set for comparison with previous test experience (1 test). A summary of the test results and a detailed discussion of Test 220100 are presented. Test 220100 was the nominal or reference test for the parametric studies. This test was performed with a scaled 10-cm/sup 2/ leak located in the cold leg suction piping.
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
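As a concrete illustration of the comparison discussed, the sketch below runs the classical one-way ANOVA and the Kruskal-Wallis test on three simulated log-normal (distinctly non-normal) samples; the distributions, sample sizes, and effect are invented, and the Brown-Forsythe and Welch variants examined in the paper are not shown.

```python
# Sketch: comparing locations of three skewed samples with the classical one-way
# ANOVA and the Kruskal-Wallis test (the non-parametric alternative favoured in
# the study when populations are distinctly non-normal). Simulated data only.
import numpy as np
from scipy.stats import f_oneway, kruskal

rng = np.random.default_rng(42)
# Three log-normal (non-normal) samples, with a location shift in the third group
g1 = rng.lognormal(mean=0.0, sigma=0.8, size=30)
g2 = rng.lognormal(mean=0.0, sigma=0.8, size=30)
g3 = rng.lognormal(mean=0.4, sigma=0.8, size=30)

F, p_anova = f_oneway(g1, g2, g3)
H, p_kw = kruskal(g1, g2, g3)
print(f"ANOVA:          F = {F:.2f}, p = {p_anova:.4f}")
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p_kw:.4f}")
```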
Development of a non-explosive release actuator using shape memory alloy wire.
Yoo, Young Ik; Jeong, Ju Won; Lim, Jae Hyuk; Kim, Kyung-Won; Hwang, Do-Soon; Lee, Jung Ju
2013-01-01
We have developed a newly designed non-explosive release actuator that can replace currently used release devices. The release mechanism is based on a separation mechanism that relies on segmented nuts and a shape memory alloy (SMA) wire trigger. A quite fast and simple trigger operation is made possible through the use of SMA wire. This actuator is designed to allow a high preload with low levels of shock for the solar arrays of medium-size satellites. After actuation, the proposed device can be easily and instantly reset; neither replacement nor refurbishment of any components is necessary. According to the results of a performance test, the release time, preload capacity, and maximum shock level are 50 ms, 15 kN, and 350 G, respectively. In order to increase the reliability of the actuator, more than ten sets of performance tests were conducted. In addition, the proposed release actuator was tested under thermal vacuum and extreme vibration environments. No degradation or damage was observed during the two environment tests, and the release actuator was able to operate successfully. Considering the test results as a whole, we conclude that the proposed non-explosive release actuator can be applied reliably to intermediate-size satellites to replace existing release systems.
Sykes, Lynn R.; Cifuentes, Inés L.
1984-01-01
Magnitudes of the larger Soviet underground nuclear weapons tests from the start of the Threshold Test Ban Treaty in 1976 through 1982 are determined for short- and long-period seismic waves. Yields are calculated from the surface wave magnitude for those explosions at the eastern Kazakh test site that triggered a small-to-negligible component of tectonic stress and are used to calibrate a body-wave magnitude-yield relationship that can be used to determine the sizes of other explosions at that test site. The results confirm that a large bias, related to differential attenuation of P waves, exists between Nevada and Central Asia. The yields of the seven largest Soviet explosions are nearly identical and are close to 150 kilotons, the limit set by the Threshold Treaty. PMID:16593440
NASA Technical Reports Server (NTRS)
1971-01-01
The need is examined for orbital flight tests of gyroscope, dewar, and other components, in order to reduce the technical and financial risk in performing the relativity experiment. A program is described that would generate engineering data to permit prediction of final performance. Two flight tests are recommended. The first flight would test a dewar smaller than that required for the final flight, but of size and form sufficient to allow extrapolation to the final design. The second flight would use the same dewar design to carry a set of three gyroscopes, which would be evaluated for spinup and drift characteristics for a period of a month or more. A proportional gas control system using boiloff helium gas from the dewar, and having the ability to prevent sloshing of liquid helium, would also be tested.
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco; Llana-Fúnez, Sergio
2016-04-01
The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods for estimating the 3D grain size distribution directly (serial sectioning, synchrotron or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has, however, two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites, and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results proved that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method is suited to quantifying the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
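A minimal sketch of the "second step" described above, fitting a lognormal probability density to the class midpoints and frequencies returned by a Saltykov unfolding with non-linear least squares, is given below; the midpoint and frequency values are invented, and the authors' full implementation lives in the GrainSizeTools script they reference.

```python
# Sketch of the two-step method's second step: fit a lognormal probability density
# to the class midpoints/frequencies returned by the Saltykov unfolding.
# Midpoint and frequency values below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def lognorm_pdf(x, mu, sigma):
    return (1.0 / (x * sigma * np.sqrt(2 * np.pi))
            * np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)))

midpoints = np.array([5., 15., 25., 35., 45., 55., 65., 75.])   # micrometres (hypothetical)
freqs = np.array([0.004, 0.021, 0.030, 0.022, 0.012, 0.006, 0.003, 0.001])

(mu, sigma), _ = curve_fit(lognorm_pdf, midpoints, freqs, p0=(np.log(25.0), 0.5))
print(f"geometric mean = {np.exp(mu):.1f} um, multiplicative SD = {np.exp(sigma):.2f}")
```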
Chronic Disease Self-Management Program in the Workplace: Opportunities for Health Improvement
Smith, Matthew Lee; Wilson, Mark G.; DeJoy, David M.; Padilla, Heather; Zuercher, Heather; Corso, Phaedra; Vandenberg, Robert; Lorig, Kate; Ory, Marcia G.
2015-01-01
Disease management is becoming increasingly important in workplace health promotion given the aging workforce, rising chronic disease prevalence, and needs to maintain a productive and competitive American workforce. Despite the widespread availability of the Chronic Disease Self-Management Program (CDSMP), and its known health-related benefits, program adoption remains low in workplace settings. The primary purpose of this study is to compare personal and delivery characteristics of adults who attended CDSMP in the workplace relative to other settings (e.g., senior centers, healthcare organizations, residential facilities). This study also contrasts characteristics of CDSMP workplace participants to those of the greater United States workforce and provides recommendations for translating CDSMP for use in workplace settings. Data were analyzed from 25,664 adults collected during a national dissemination of CDSMP. Only states and territories that conducted workshops in workplace settings were included in analyses (n = 13 states and Puerto Rico). Chi-squared tests and t-tests were used to compare CDSMP participant characteristics by delivery site type. CDSMP workplace participant characteristics were then compared to reports from the United States Bureau of Labor Statistics. Of the 25,664 CDSMP participants in this study, 1.7% (n = 435) participated in workshops hosted in worksite settings. Compared to CDSMP participants in non-workplace settings, workplace setting participants were significantly younger and had fewer chronic conditions. Differences were also observed based on chronic disease types. On average, CDSMP workshops in workplace settings had smaller class sizes and workplace setting participants attended more workshop sessions. CDSMP participants in workplace settings were substantially older and a larger proportion were female than the general United States workforce. Findings indicate opportunities to translate CDSMP for use in the workplace to reach new target audiences. PMID:25964909
Harrison, Rosamund; Veronneau, Jacques; Leroux, Brian
2010-05-13
The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. This trial is registered as ISRCTN41467632.
2010-01-01
Background The goal of this cluster randomized trial is to test the effectiveness of a counseling approach, Motivational Interviewing, to control dental caries in young Aboriginal children. Motivational Interviewing, a client-centred, directive counseling style, has not yet been evaluated as an approach for promotion of behaviour change in indigenous communities in remote settings. Methods/design Aboriginal women were hired from the 9 communities to recruit expectant and new mothers to the trial, administer questionnaires and deliver the counseling to mothers in the test communities. The goal is for mothers to receive the intervention during pregnancy and at their child's immunization visits. Data on children's dental health status and family dental health practices will be collected when children are 30-months of age. The communities were randomly allocated to test or control group by a random "draw" over community radio. Sample size and power were determined based on an anticipated 20% reduction in caries prevalence. Randomization checks were conducted between groups. Discussion In the 5 test and 4 control communities, 272 of the original target sample size of 309 mothers have been recruited over a two-and-a-half year period. A power calculation using the actual attained sample size showed power to be 79% to detect a treatment effect. If an attrition fraction of 4% per year is maintained, power will remain at 80%. Power will still be > 90% to detect a 25% reduction in caries prevalence. The distribution of most baseline variables was similar for the two randomized groups of mothers. However, despite the random assignment of communities to treatment conditions, group differences exist for stage of pregnancy and prior tooth extractions in the family. Because of the group imbalances on certain variables, control of baseline variables will be done in the analyses of treatment effects. This paper explains the challenges of conducting randomized trials in remote settings, the importance of thorough community collaboration, and also illustrates the likelihood that some baseline variables that may be clinically important will be unevenly split in group-randomized trials when the number of groups is small. Trial registration This trial is registered as ISRCTN41467632. PMID:20465831
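For readers unfamiliar with the power arithmetic referred to in both records, the sketch below shows a generic two-proportion power calculation inflated by a cluster design effect; the baseline prevalence, effect, cluster size, and intracluster correlation are illustrative assumptions, not the trial's actual design parameters.

```python
# Sketch: power of a two-proportion comparison under cluster randomization,
# using a normal approximation and a design-effect inflation. All numbers
# (prevalences, cluster size, ICC) are illustrative placeholders.
from math import sqrt
from scipy.stats import norm

def cluster_power(p1, p2, n_per_arm, m_cluster, icc, alpha=0.05):
    deff = 1 + (m_cluster - 1) * icc            # design effect
    n_eff = n_per_arm / deff                    # effective sample size per arm
    pbar = (p1 + p2) / 2
    se = sqrt(2 * pbar * (1 - pbar) / n_eff)
    z = abs(p1 - p2) / se - norm.ppf(1 - alpha / 2)
    return norm.cdf(z)

# e.g. 80% vs 60% caries prevalence, 136 children/arm, ~30 per community, ICC 0.01
print(f"power ~ {cluster_power(0.80, 0.60, 136, 30, 0.01):.2f}")
```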
NASA Astrophysics Data System (ADS)
Abdullah, Muhammad Faiz; Puay, How Tion; Zakaria, Nor Azazi
2017-10-01
Sustainable Urban Drainage Systems (SuDS) such as swales and rain gardens are growing in popularity as a green technology for stormwater management, and they can be used in all types of development to provide a natural approach to managing drainage. Soil permeability is a critical factor in selecting the right SuDS technique for a site. On this basis, we set up a laboratory experiment as a preliminary investigation of the porosity and saturated hydraulic conductivity of single-size and binary (two-size) mixtures using a column test, with two sets of glass beads of different sizes. The porosity and saturated hydraulic conductivity were measured for various volume fractions of the coarse and fine glass beads. It was found that the porosity of the binary mixture does not increase with increasing ratio of coarse to fine beads until the volume fraction of fine particles equals that of the coarse component. The saturated hydraulic conductivity results show that the assumption of random packing was not met at higher coarse ratios, where most of the fine particles tended to settle at the bottom of the column, forming separate layers that lowered the overall hydraulic conductivity.
Weighted mining of massive collections of p-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines (think of computational genomics and observational cosmology) often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
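The sketch below does not reproduce the Princessp convex-optimization solver itself; it only illustrates the downstream use of hypothesis weights in a weighted Benjamini-Hochberg procedure, the kind of weighted multiple testing such weights feed into. The p-values and weights are random placeholders, and the weights are normalized to average one.

```python
# Sketch of weighted multiple testing downstream of a weighting scheme: a weighted
# Benjamini-Hochberg procedure divides each p-value by its hypothesis weight
# (mean weight 1) before the usual step-up rule. NOT the Princessp optimizer.
import numpy as np

def weighted_bh(pvals, weights, q=0.05):
    pw = pvals / weights                       # weighted p-values
    order = np.argsort(pw)
    m = len(pvals)
    thresh = q * np.arange(1, m + 1) / m
    passed = np.nonzero(pw[order] <= thresh)[0]
    rejected = np.zeros(m, dtype=bool)
    if passed.size:
        rejected[order[:passed.max() + 1]] = True
    return rejected

rng = np.random.default_rng(3)
p = rng.uniform(size=1000) ** 2                # placeholder p-values (some small)
w = rng.uniform(0.5, 1.5, size=1000)
w /= w.mean()                                  # normalize so weights average to 1
print("rejections:", weighted_bh(p, w).sum())
```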
Huang, Yen-Tsung; Pan, Wen-Chi
2016-06-01
Causal mediation modeling has become a popular approach for studying the effect of an exposure on an outcome through a mediator. However, current methods are not applicable to the setting with a large number of mediators. We propose a testing procedure for mediation effects of high-dimensional continuous mediators. We characterize the marginal mediation effect, the multivariate component-wise mediation effects, and the L2 norm of the component-wise effects, and develop a Monte-Carlo procedure for evaluating their statistical significance. To accommodate the setting with a large number of mediators and a small sample size, we further propose a transformation model using the spectral decomposition. Under the transformation model, mediation effects can be estimated using a series of regression models with a univariate transformed mediator, and examined by our proposed testing procedure. Extensive simulation studies are conducted to assess the performance of our methods for continuous and dichotomous outcomes. We apply the methods to analyze genomic data investigating the effect of microRNA miR-223 on a dichotomous survival status of patients with glioblastoma multiforme (GBM). We identify nine gene ontology sets with expression values that significantly mediate the effect of miR-223 on GBM survival. © 2015, The International Biometric Society.
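As a loose illustration of the building block described (a regression of the transformed mediator on the exposure, a regression of the outcome on mediator and exposure, and a Monte-Carlo evaluation of the product-of-coefficients indirect effect), the sketch below handles a single univariate mediator only; it is not the authors' full high-dimensional procedure, and the simulated data are placeholders.

```python
# Sketch of a product-of-coefficients mediation test for a single (e.g. spectrally
# transformed) mediator, with a Monte-Carlo assessment of the indirect effect.
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)                          # exposure
m = 0.5 * x + rng.normal(size=n)                # transformed mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)      # outcome

# Stage 1: mediator ~ exposure
X1 = np.column_stack([np.ones(n), x])
a_hat, *_ = np.linalg.lstsq(X1, m, rcond=None)
res1 = m - X1 @ a_hat
cov1 = np.linalg.inv(X1.T @ X1) * res1.var(ddof=2)

# Stage 2: outcome ~ mediator + exposure
X2 = np.column_stack([np.ones(n), m, x])
b_hat, *_ = np.linalg.lstsq(X2, y, rcond=None)
res2 = y - X2 @ b_hat
cov2 = np.linalg.inv(X2.T @ X2) * res2.var(ddof=3)

alpha, beta = a_hat[1], b_hat[1]
# Monte-Carlo draws of the indirect effect alpha * beta
draws = (rng.normal(alpha, np.sqrt(cov1[1, 1]), 10000)
         * rng.normal(beta, np.sqrt(cov2[1, 1]), 10000))
ci = np.percentile(draws, [2.5, 97.5])
print(f"indirect effect = {alpha * beta:.3f}, 95% MC interval = [{ci[0]:.3f}, {ci[1]:.3f}]")
```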
A Repeated Power Training Enhances Fatigue Resistance While Reducing Intraset Fluctuations.
Gonzalo-Skok, Oliver; Tous-Fajardo, Julio; Moras, Gerard; Arjol-Serrano, José Luis; Mendez-Villanueva, Alberto
2018-04-04
Oliver, GS, Julio, TF, Moras, G, José Luis, AS, and Alberto, MV. A repeated power training enhances fatigue resistance while reducing intraset fluctuations. J Strength Cond Res XX(X): 000-000, 2018-The present study analyzed the effects of adding an upper-body repeated power ability (RPA) training to habitual strength training sessions. Twenty young elite male basketball players were randomly allocated into a control group (CON, n = 10) or repeated power group (RPG, n = 10) and evaluated by 1 repetition maximum (1RM), incremental load, and RPA tests in the bench press exercise before and after a 7-week period and a 4-week cessation period. Repeated power group performed 1-3 blocks of 5 sets of 5 repetitions using the load that maximized power output with 30 seconds and 3 minute of passive recovery between sets and blocks, respectively. Between-group analysis showed substantial greater improvements in RPG compared with CON in: best set (APB), last set (APL), mean power over 5 sets (APM), percentage of decrement, fluctuation decrease during APL and RPA index (APLpost/APBpre) during the RPA test (effect size [ES] = 0.64-1.86), and 1RM (ES = 0.48) and average power at 80% of 1RM (ES = 1.11) in the incremental load test. The improvements of APB and APM were almost perfectly correlated. In conclusion, RPA training represents an effective method to mainly improve fatigue resistance together with the novel finding of a better consistency in performance (measured as reduced intraset power fluctuations) at the end of a dynamic repeated effort.
Gene set analysis approaches for RNA-seq data: performance evaluation and application guideline
Rahmatallah, Yasir; Emmert-Streib, Frank
2016-01-01
Transcriptome sequencing (RNA-seq) is gradually replacing microarrays for high-throughput studies of gene expression. The main challenge of analyzing microarray data is not in finding differentially expressed genes, but in gaining insights into the biological processes underlying phenotypic differences. To interpret experimental results from microarrays, gene set analysis (GSA) has become the method of choice, in particular because it incorporates pre-existing biological knowledge (in the form of functionally related gene sets) into the analysis. Here we provide a brief review of several statistically different GSA approaches (competitive and self-contained) that can be adapted from microarray practice as well as those specifically designed for RNA-seq. We evaluate their performance (in terms of Type I error rate, power, robustness to sample size and heterogeneity, as well as sensitivity to different types of selection biases) on simulated and real RNA-seq data. Not surprisingly, the performance of various GSA approaches depends only on the statistical hypothesis they test and does not depend on whether the test was developed for microarrays or RNA-seq data. Interestingly, we found that competitive methods have lower power, as well as lower robustness to sample heterogeneity, than self-contained methods, leading to poor reproducibility of results. We also found that the power of unsupervised competitive methods depends on the balance between up- and down-regulated genes in tested gene sets. These properties of competitive methods have been overlooked before. Our evaluation provides a concise guideline for selecting GSA approaches best performing under particular experimental settings in the context of RNA-seq. PMID:26342128
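To make the competitive/self-contained distinction concrete, the sketch below implements a generic self-contained permutation test: the set statistic is the mean absolute per-gene t-statistic, and significance is assessed by permuting sample labels. This is not any specific method evaluated in the paper, and the expression matrix is simulated.

```python
# Sketch of a simple self-contained gene set test on simulated expression data:
# statistic = mean |t| across genes in the set, significance by label permutation.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(11)
n_genes, n_per_group = 50, 10
group_a = rng.normal(0.0, 1.0, (n_genes, n_per_group))
group_b = rng.normal(0.3, 1.0, (n_genes, n_per_group))   # mild shift in the set
expr = np.hstack([group_a, group_b])
labels = np.array([0] * n_per_group + [1] * n_per_group)

def set_stat(expr, labels):
    t, _ = ttest_ind(expr[:, labels == 0], expr[:, labels == 1], axis=1)
    return np.abs(t).mean()

observed = set_stat(expr, labels)
perm = np.array([set_stat(expr, rng.permutation(labels)) for _ in range(999)])
p_value = (1 + (perm >= observed).sum()) / (len(perm) + 1)
print(f"set statistic = {observed:.2f}, permutation p = {p_value:.3f}")
```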
Chang, Ken; Bai, Harrison X; Zhou, Hao; Su, Chang; Bi, Wenya Linda; Agbodza, Ena; Kavouridis, Vasileios K; Senders, Joeky T; Boaro, Alessandro; Beers, Andrew; Zhang, Biqi; Capellini, Alexandra; Liao, Weihua; Shen, Qin; Li, Xuejun; Xiao, Bo; Cryan, Jane; Ramkissoon, Shakti; Ramkissoon, Lori; Ligon, Keith; Wen, Patrick Y; Bindra, Ranjit S; Woo, John; Arnaout, Omar; Gerstner, Elizabeth R; Zhang, Paul J; Rosen, Bruce R; Yang, Li; Huang, Raymond Y; Kalpathy-Cramer, Jayashree
2018-03-01
Purpose: Isocitrate dehydrogenase (IDH) mutations in glioma patients confer longer survival and may guide treatment decision making. We aimed to predict the IDH status of gliomas from MR imaging by applying a residual convolutional neural network to preoperative radiographic data. Experimental Design: Preoperative imaging was acquired for 201 patients from the Hospital of University of Pennsylvania (HUP), 157 patients from Brigham and Women's Hospital (BWH), and 138 patients from The Cancer Imaging Archive (TCIA) and divided into training, validation, and testing sets. We trained a residual convolutional neural network for each MR sequence (FLAIR, T2, T1 precontrast, and T1 postcontrast) and built a predictive model from the outputs. To increase the size of the training set and prevent overfitting, we augmented the training set images by introducing random rotations, translations, flips, shearing, and zooming. Results: With our neural network model, we achieved IDH prediction accuracies of 82.8% (AUC = 0.90), 83.0% (AUC = 0.93), and 85.7% (AUC = 0.94) within training, validation, and testing sets, respectively. When age at diagnosis was incorporated into the model, the training, validation, and testing accuracies increased to 87.3% (AUC = 0.93), 87.6% (AUC = 0.95), and 89.1% (AUC = 0.95), respectively. Conclusions: We developed a deep learning technique to noninvasively predict IDH genotype in grade II-IV glioma from conventional MR imaging using a multi-institutional data set. Clin Cancer Res; 24(5); 1073-81. ©2017 AACR. ©2017 American Association for Cancer Research.
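A sketch of the kind of on-the-fly augmentation described (random rotations, translations, flips, shearing, and zooming) is shown below using the Keras ImageDataGenerator; the augmentation ranges, image sizes, and labels are illustrative guesses, not the values or pipeline used in the study.

```python
# Sketch of on-the-fly image augmentation of the kind described. Ranges are
# illustrative assumptions, not the study's settings.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,        # random rotations up to +/-20 degrees
    width_shift_range=0.1,    # random horizontal translations
    height_shift_range=0.1,   # random vertical translations
    shear_range=0.1,          # random shearing
    zoom_range=0.1,           # random zooming
    horizontal_flip=True,     # random left-right flips
    vertical_flip=True,
)

# Dummy batch of single-channel MR slices: (batch, height, width, channels)
slices = np.random.rand(8, 128, 128, 1).astype("float32")
labels = np.random.randint(0, 2, size=8)       # e.g. IDH mutant vs. wild-type (placeholder)
batches = augmenter.flow(slices, labels, batch_size=8)
augmented_x, augmented_y = next(batches)
print(augmented_x.shape)
```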
Set shifting and working memory in adults with attention-deficit/hyperactivity disorder.
Rohlf, Helena; Jucksch, Viola; Gawrilow, Caterina; Huss, Michael; Hein, Jakob; Lehmkuhl, Ulrike; Salbach-Andrae, Harriet
2012-01-01
Compared to the high number of studies that have investigated executive functions (EF) in children with attention-deficit/hyperactivity disorder (ADHD), little is known about the EF performance of adults with ADHD. This study compared 37 adults with ADHD (ADHD(total)) and 32 control participants who were equivalent in age, intelligence quotient (IQ), sex, and years of education, in two domains of EF: set shifting and working memory. Additionally, the ADHD(total) group was subdivided into two subgroups: ADHD patients without comorbidity (ADHD(-), n = 19) and patients with at least one comorbid disorder (ADHD(+), n = 18). Participants completed two measures of set shifting (the trail making test, TMT, and a computerized card sorting test, CKV) and one measure of working memory (the digit span test, DS). Compared to the control group, the ADHD(total) group displayed deficits in set shifting and working memory. The differences between the groups were of medium-to-large effect size (TMT: d = 0.48; DS: d = 0.51; CKV: d = 0.74). The subgroup comparison of the ADHD(+) group and the ADHD(-) group revealed a poorer performance in general information processing speed for the ADHD(+) group. With regard to set shifting and working memory, no significant differences could be found between the two subgroups. These results suggest that the deficits of the ADHD(total) group are attributable to ADHD rather than to comorbidity. An influence of comorbidity, however, could not be completely ruled out, as there was a trend toward poorer performance in the ADHD(+) group on some of the outcome measures.
Falk, Tiago H; Tam, Cynthia; Schellnus, Heidi; Chau, Tom
2011-12-01
Standardized writing assessments such as the Minnesota Handwriting Assessment (MHA) can inform interventions for handwriting difficulties, which are prevalent among school-aged children. However, these tests usually involve the laborious task of subjectively rating the legibility of the written product, precluding their practical use in some clinical and educational settings. This study describes a portable computer-based handwriting assessment tool to objectively measure MHA quality scores and to detect handwriting difficulties in children. Several measures are proposed based on spatial, temporal, and grip force measurements obtained from a custom-built handwriting instrument. Thirty-five first and second grade students participated in the study, nine of whom exhibited handwriting difficulties. Students performed the MHA test and were subjectively scored based on speed and handwriting quality using five primitives: legibility, form, alignment, size, and space. Several spatial parameters are shown to correlate significantly (p<0.001) with subjective scores obtained for alignment, size, space, and form. Grip force and temporal measures, in turn, serve as useful indicators of handwriting legibility and speed, respectively. Using only size and space parameters, promising discrimination between proficient and non-proficient handwriting can be achieved. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ahmadi, Ali; Seyedi Hosseininia, Ehsan
2017-06-01
This paper discusses the formation of stable arches in granular materials using a series of laboratory tests. To this aim, a purpose-built trapdoor apparatus was designed to find the dimensions of arches formed over the door in cohesionless aggregates. This setup has two important new features. First, in order to investigate the maximum width of the opening generated exactly on the verge of failure, the door can be opened to an arbitrary size. Second, the box containing the granular material can be set at any angle from zero to 90 degrees with respect to the horizontal (this angle is termed the base angle). It is therefore possible to investigate the effect of different levels of gravity acceleration on the formed arches. It is observed that, for all tested granular materials, increasing the door size and decreasing the base angle both increase the width and height of the arch. Moreover, the shape of all arches is governed by a parabola. Furthermore, the maximum door width is approximately 5 to 8.6 times the particle size, depending on the internal friction angle of the materials and the base angle.
NASA Astrophysics Data System (ADS)
Arbilei, Marwan N.
2018-05-01
This paper aims to recycle high-power electrical wire waste in the manufacturing of prosthetic limbs. The effect of grain size on the mechanical properties (hardness and tensile strength) and wear resistance of the commercial 6026-T9 aluminium alloy used in the electrical industry was modelled for prediction. Six sets of samples were prepared with different annealing heat treatment parameters: temperatures of 300, 350 and 400 °C for 1 and 2 hours. Each treatment produced a different grain size, ranging from 23 to 71 μm, with corresponding HV values of 61 to 169. Tensile properties were examined against HV, and all data were collected to create a mathematical model describing the relation between tensile strength and hardness. Sliding wear tests were applied at loads of 3 and 8 N over five periods of 20 to 100 minutes. A multiple regression model was prepared for predicting weight loss in the wear process, and the model was tested and validated against the measured properties. The main purpose of this research is to provide an effective and accurate way to predict the weight loss rate in the wear process.
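A sketch of the kind of multiple regression model described, predicting wear weight loss from load, sliding time, and grain size, is given below; the data points are fabricated placeholders used only to show the fitting step, not the paper's measurements.

```python
# Sketch of a multiple regression model for wear weight loss.
# The training points below are invented placeholders, not measured data.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: load (N), time (min), grain size (um); target: weight loss (mg)
X = np.array([[3, 20, 23], [3, 60, 23], [3, 100, 23],
              [8, 20, 45], [8, 60, 45], [8, 100, 45],
              [3, 60, 71], [8, 60, 71]], dtype=float)
y = np.array([0.8, 2.1, 3.5, 2.4, 6.2, 9.8, 2.6, 7.1])   # invented values

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("predicted loss at 8 N, 80 min, 30 um:", model.predict([[8, 80, 30]])[0])
```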
Construction and characterization of the detection modules for the Muon Portal Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blancato, A.A.; Bonanno, D.L.; La Rocca, P.
The Muon Portal Project is a joint initiative between research and industrial partners, aimed at the construction of a real size detector prototype (6 x 3 x 7 m{sup 3}) for the inspection of containers by the muon scattering technique, devised to search for hidden high-Z fissile materials and provide a full 3D tomography of the interior of the container in a scanning time of the order of minutes. The muon tracking detector is based on a set of 48 detection modules (size 1 m x 3 m), each built with 100 extruded scintillator strips, so as to provide four X-Y detection planes, two placed above and two below the container to be inspected. Two wavelength shifting (WLS) fibres embedded in each strip convey the emitted photons to Silicon Photomultipliers (SiPM), which act as photo-sensors. After a research and development phase, which led to the choice and test of the individual components, the construction of the full size detector has already started. The paper describes the results of the mass characterization of the photo-sensors and the construction and test measurements of the first detection modules of the Project. (authors)
Modeling Electronic Quantum Transport with Machine Learning
Lopez Bezanilla, Alejandro; von Lilienfeld Toal, Otto A.
2014-06-11
We present a machine learning approach to solve electronic quantum transport equations of one-dimensional nanostructures. The transmission coefficients of disordered systems were computed to provide training and test data sets to the machine. The system's representation encodes energetic as well as geometrical information to characterize similarities between disordered configurations, while the Euclidean norm is used as a measure of similarity. Errors for out-of-sample predictions systematically decrease with training set size, enabling the accurate and fast prediction of new transmission coefficients. The remarkable performance of our model to capture the complexity of interference phenomena lends further support to its viability in dealing with transport problems of undulatory nature.
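As a loose stand-in for the model described, the sketch below trains kernel ridge regression with a kernel built on the Euclidean distance between configuration descriptors and checks that out-of-sample error is small; the descriptors, targets, and hyperparameters are random placeholders rather than the authors' representation.

```python
# Sketch: kernel ridge regression with a distance-based kernel, predicting a
# scalar "transmission coefficient". Descriptors and targets are synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
descriptors = rng.random((500, 30))                   # encoded disorder configurations
transmission = np.exp(-descriptors.sum(axis=1) / 10)  # synthetic smooth target

X_train, X_test, y_train, y_test = train_test_split(
    descriptors, transmission, test_size=0.2, random_state=0)

model = KernelRidge(kernel="laplacian", alpha=1e-6, gamma=0.05)
model.fit(X_train, y_train)
mae = np.abs(model.predict(X_test) - y_test).mean()
print(f"out-of-sample MAE: {mae:.4f}")
```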
You Cannot Step Into the Same River Twice: When Power Analyses Are Optimistic.
McShane, Blakeley B; Böckenholt, Ulf
2014-11-01
Statistical power depends on the size of the effect of interest. However, effect sizes are rarely fixed in psychological research: Study design choices, such as the operationalization of the dependent variable or the treatment manipulation, the social context, the subject pool, or the time of day, typically cause systematic variation in the effect size. Ignoring this between-study variation, as standard power formulae do, results in assessments of power that are too optimistic. Consequently, when researchers attempting replication set sample sizes using these formulae, their studies will be underpowered and will thus fail at a greater than expected rate. We illustrate this with both hypothetical examples and data on several well-studied phenomena in psychology. We provide formulae that account for between-study variation and suggest that researchers set sample sizes with respect to our generally more conservative formulae. Our formulae generalize to settings in which there are multiple effects of interest. We also introduce an easy-to-use website that implements our approach to setting sample sizes. Finally, we conclude with recommendations for quantifying between-study variation. © The Author(s) 2014.
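The core point can be illustrated numerically: when the standardized effect size d varies across studies with standard deviation tau, the expected power is below the power computed at the mean effect. The sketch below uses a normal-approximation power function and Monte-Carlo integration over d; the numbers are illustrative and the formulae are not the authors' exact expressions.

```python
# Sketch: expected power when the effect size varies across studies vs. power
# computed at the mean effect. Normal approximation, illustrative numbers only.
import numpy as np
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    # normal approximation to two-sample test power (two-sided)
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = np.abs(d) * np.sqrt(n_per_group / 2)
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

n_per_group, mean_d, tau = 50, 0.56, 0.2
rng = np.random.default_rng(9)
d_draws = rng.normal(mean_d, tau, 100000)

print(f"power at the mean effect:      {approx_power(mean_d, n_per_group):.2f}")
print(f"expected power with variation: {approx_power(d_draws, n_per_group).mean():.2f}")
```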
NASA Astrophysics Data System (ADS)
Pastuovic, Z.; Siegele, R.; Cohen, D. D.; Mann, M.; Ionescu, M.; Button, D.; Long, S.
2017-08-01
The Centre for Accelerator Science facility at ANSTO was expanded with the new NEC 6 MV "SIRIUS" accelerator system in 2015. In this paper we present a detailed description of the new nuclear microprobe, the Confocal Heavy Ion Micro-Probe (CHIMP), together with results of the microprobe resolution testing and the elemental analysis performed on typical samples of mineral ore deposits and hyper-accumulating plants regularly measured at ANSTO. The CHIMP focusing and scanning systems are based on the OM-150 Oxford quadrupole triplet and the OM-26 separated scan-coil doublet configurations. A maximum ion rigidity of 38.9 amu-MeV was determined for the following nuclear microprobe configuration: a distance from the object aperture to the collimating slits of 5890 mm, a working distance of 165 mm and a lens bore diameter of 11 mm. The overall distance from the object to the image plane is 7138 mm. The CHIMP beamline has been tested with 3 MeV H+ and 6 MeV He2+ ion beams. The settings of the object and collimating apertures were optimized using the WinTRAX simulation code to calculate the optimum acceptance settings and obtain the highest possible ion current for beam spot sizes of 1 μm and 5 μm. For optimized aperture settings of the CHIMP, the beam brightness was measured to be ∼0.9 pA μm-2 mrad-2 for 3 MeV H+ ions, while a brightness of ∼0.4 pA μm-2 mrad-2 was measured for 6 MeV He2+ ions. The smallest beam sizes were achieved using a microbeam with a reduced particle rate of 1000 Hz passing through object slit apertures several micrometers wide. Under these conditions a spatial resolution of ∼0.6 μm × 1.5 μm for 3 MeV H+ and ∼1.8 μm × 1.8 μm for 6 MeV He2+ microbeams (horizontal × vertical) has been achieved. The beam sizes were verified using STIM imaging on 2000 and 1000 mesh Cu electron microscope grids.
Elnaghy, A M; Elsaka, S E
2018-05-01
To compare the torsional resistance of XP-endo Shaper (XPS; size 30, .01 taper, FKG Dentaire, La Chaux-de-Fonds, Switzerland) instruments at body temperature with TRUShape (TRS; size 30, .06 taper, Dentsply Tulsa Dental Specialties, Tulsa, OK, USA), ProFile Vortex (PV; size 30, .04 taper, Dentsply Tulsa Dental Specialties) and FlexMaster (FM; size 30, .04 taper, VDW GmbH, Munich, Germany) nickel-titanium rotary instruments. A metal block with a square-shaped mould (5 mm × 5 mm × 5 mm) was positioned inside a glass container. Five millimetres of the tip of each instrument was held inside the metal block by filling the mould with a resin composite. The instruments were tested for torsional resistance in saline solution at 37 °C. Data were analysed using one-way analysis of variance (ANOVA) and Tukey post hoc tests. The significance level was set at P < 0.05. FM had the greatest torsional resistance amongst the instruments tested (P < 0.001). There was no significant difference between FM and PV instruments (P = 0.211). The ranking for torsional resistance values was: FM > PV > TRS > XPS. FlexMaster and ProFile Vortex instruments were more resistant to torsional stress compared with TRUShape and XP-endo Shaper instruments. The manufacturing process used to produce XP-endo Shaper instruments did not enhance their resistance to torsional stress as compared with the other instruments. © 2017 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Patency of paediatric endotracheal tubes for airway instrumentation.
Elfgen, J; Buehler, P K; Thomas, J; Kemper, M; Imach, S; Weiss, M
2017-01-01
Airway exchange catheters (AECs) and fiberoptic bronchoscopes (FOBs) for tracheal intubation are selected so that there is only a minimal gap between their outer diameter and the inner diameter of the endotracheal tube (ETT), to minimize the risk of impingement during airway instrumentation. This study aimed to test the ease of passage of FOBs and AECs through paediatric ETTs of different sizes and from different manufacturers when using current recommendations for dimensional equipment compatibility taken from textbooks and manufacturers' information. Twelve different brands of cuffed and uncuffed ETTs sized ID 2.5 to 5.0 mm were evaluated in an in vitro set-up. Ease of device passage as well as the locations of impaired passage within the ETT were assessed. Redundant samples were used for same-sized ETTs and all measurements were triple-checked in randomized order. In total, 51 paired samples of uncuffed and cuffed paediatric ETTs were tested. There were substantial differences in the ease of ETT passage, concordantly for FOBs and AECs, among different manufacturers, but also among the product lines from the same manufacturer for a given ID size. Restriction to passage was most frequently found near the endotracheal tube tip or as a gradually increasing resistance along the ETT shaft. Current recommendations for the dimensional compatibility of AECs and FOBs with ETTs do not appear to be completely accurate for all ETT brands available. We recommend that specific equipment combinations always be tested carefully together before attempting to use them in a patient. © 2016 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule
NASA Technical Reports Server (NTRS)
Bay, Stephen D.; Schwabacher, Mark
2003-01-01
Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
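A compact re-implementation in the spirit of that scheme (not the authors' code) is sketched below: examples are processed in random order, each candidate's running distance to its k-th nearest neighbour found so far is tracked, and a candidate is abandoned as soon as that distance falls below the score of the current weakest top outlier, since it can then never re-enter the top list.

```python
# Sketch of distance-based outlier mining with a nested loop, randomized order
# and the simple pruning rule. For brevity the self-distance (0) is kept among
# the k nearest, so scores are effectively distances to the (k-1)-th neighbour.
import numpy as np

def top_outliers(data, k=5, n_out=10, block=128, seed=0):
    rng = np.random.default_rng(seed)
    data = data[rng.permutation(len(data))]        # random order is essential
    cutoff = 0.0                                   # score of the weakest current outlier
    top = []                                       # (score, index in shuffled data)
    for start in range(0, len(data), block):
        cand = data[start:start + block]
        knn = np.full((len(cand), k), np.inf)      # running k smallest distances
        alive = np.ones(len(cand), dtype=bool)
        for cstart in range(0, len(data), block):
            if not alive.any():
                break                              # every candidate was pruned
            chunk = data[cstart:cstart + block]
            d = np.linalg.norm(cand[alive, None, :] - chunk[None, :, :], axis=2)
            knn[alive] = np.sort(np.hstack([knn[alive], d]), axis=1)[:, :k]
            # pruning rule: once a candidate's score drops below the cutoff it can
            # never exceed it again, so stop processing that candidate
            alive[alive] = knn[alive, -1] > cutoff
        for i in np.nonzero(alive)[0]:
            top.append((knn[i, -1], start + i))
        top = sorted(top, reverse=True)[:n_out]
        if len(top) == n_out:
            cutoff = top[-1][0]
    return top

rng = np.random.default_rng(4)
points = np.vstack([rng.normal(size=(3000, 3)),
                    rng.normal(7.0, 0.5, size=(5, 3))])   # 5 planted outliers
for score, idx in top_outliers(points)[:5]:
    print(f"shuffled index {idx}: score {score:.2f}")
```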
Augmented Cross-Sectional Studies with Abbreviated Follow-up for Estimating HIV Incidence
Claggett, B.; Lagakos, S.W.; Wang, R.
2011-01-01
Summary Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010) propose an augmented cross-sectional design which provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this paper, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF Estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF Estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. PMID:21668904
Augmented cross-sectional studies with abbreviated follow-up for estimating HIV incidence.
Claggett, B; Lagakos, S W; Wang, R
2012-03-01
Cross-sectional HIV incidence estimation based on a sensitive and less-sensitive test offers great advantages over the traditional cohort study. However, its use has been limited due to concerns about the false negative rate of the less-sensitive test, reflecting the phenomenon that some subjects may remain negative permanently on the less-sensitive test. Wang and Lagakos (2010, Biometrics 66, 864-874) propose an augmented cross-sectional design that provides one way to estimate the size of the infected population who remain negative permanently and subsequently incorporate this information in the cross-sectional incidence estimator. In an augmented cross-sectional study, subjects who test negative on the less-sensitive test in the cross-sectional survey are followed forward for transition into the nonrecent state, at which time they would test positive on the less-sensitive test. However, considerable uncertainty exists regarding the appropriate length of follow-up and the size of the infected population who remain nonreactive permanently to the less-sensitive test. In this article, we assess the impact of varying follow-up time on the resulting incidence estimators from an augmented cross-sectional study, evaluate the robustness of cross-sectional estimators to assumptions about the existence and the size of the subpopulation who will remain negative permanently, and propose a new estimator based on abbreviated follow-up time (AF). Compared to the original estimator from an augmented cross-sectional study, the AF estimator allows shorter follow-up time and does not require estimation of the mean window period, defined as the average time between detectability of HIV infection with the sensitive and less-sensitive tests. It is shown to perform well in a wide range of settings. We discuss when the AF estimator would be expected to perform well and offer design considerations for an augmented cross-sectional study with abbreviated follow-up. © 2011, The International Biometric Society.
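For orientation, both records above build on the basic cross-sectional ("snapshot") estimator, in which incidence is roughly the number of subjects testing positive on the sensitive test but negative on the less-sensitive test, divided by the number of HIV-negative subjects times the mean window period. The sketch below computes only this basic quantity with invented counts; the augmented-design and abbreviated-follow-up corrections discussed in the papers are not shown.

```python
# Sketch of the basic snapshot incidence estimator:
# incidence ~ n_recent / (n_negative * mean window period).
# Counts and window period are invented placeholders.
def snapshot_incidence(n_recent, n_negative, mean_window_years):
    return n_recent / (n_negative * mean_window_years)

# e.g. 40 "recent" infections, 4000 HIV-negative subjects, 0.5-year window
rate = snapshot_incidence(40, 4000, 0.5)
print(f"estimated incidence: {rate:.3f} infections per person-year")
```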
Hippocampus shape analysis for temporal lobe epilepsy detection in magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Kohan, Zohreh; Azmi, Reza
2016-03-01
There is evidence in the literature that Temporal Lobe Epilepsy (TLE) causes lateralized atrophy and deformation of the hippocampus and other substructures of the brain. Magnetic Resonance Imaging (MRI), owing to its high-contrast soft tissue imaging, is one of the most popular imaging modalities used in TLE diagnosis and treatment procedures. Using an algorithm to help clinicians analyse shape deformations more effectively could improve the diagnosis and treatment of the disease. In this project, our purpose is to design, implement and test a classification algorithm for MRIs based on hippocampal asymmetry detection using shape- and size-based features. Our method consists of two main parts: (1) shape feature extraction, and (2) image classification. We tested 11 different shape and size features and selected four of them that detected hippocampal asymmetry significantly in a randomly selected subset of the dataset. Then, we employed a support vector machine (SVM) classifier to classify the remaining images of the dataset as normal or epileptic using our selected features. The dataset contains 25 patient images, of which 12 cases were used as a training set and the remaining 13 cases for testing the performance of the classifier. We measured an accuracy, specificity and sensitivity of 76%, 100%, and 70%, respectively, for our algorithm. The preliminary results show that using shape and size features for detecting hippocampal asymmetry could be helpful in TLE diagnosis in MRI.
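A minimal sketch of the classification stage, an SVM trained on a handful of asymmetry features with a 12/13 train/test split mirroring the description, is shown below; the feature values and labels are random placeholders, not the study's data.

```python
# Sketch of the classification stage: an SVM on a few hippocampal asymmetry
# features. The features and labels below are random placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# 25 subjects x 4 asymmetry features (e.g. left/right volume and shape indices)
features = rng.normal(size=(25, 4))
labels = np.array([0] * 13 + [1] * 12)          # 0 = control, 1 = TLE (hypothetical)

train_idx, test_idx = np.arange(12), np.arange(12, 25)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(features[train_idx], labels[train_idx])
accuracy = (clf.predict(features[test_idx]) == labels[test_idx]).mean()
print(f"test accuracy: {accuracy:.2f}")
```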
Spacesuit and Space Vehicle Comparative Ergonomic Evaluation
NASA Technical Reports Server (NTRS)
England, Scott; Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Blackledge, Christopher; Perez, Esau; Rajulu, Sudhakar
2011-01-01
With the advent of the latest manned spaceflight objectives, a series of prototype launch and reentry spacesuit architectures were evaluated for eventual downselection by NASA, based on performance of a set of designated tasks. A consolidated approach was taken to testing, concurrently collecting suit mobility data, seat-suit-vehicle interface clearances, and movement strategies within the volume of a Multi-Purpose Crew Vehicle mockup. To achieve the objectives of the test, a requirement was set forth to maintain high mockup fidelity while using advanced motion capture technologies. These seemingly mutually exclusive goals were accommodated with the construction of an optically transparent and fully adjustable frame mockup. The mockup was constructed such that it could be dimensionally validated rapidly with the motion capture system. This paper will describe the method used to create a motion-capture-compatible space vehicle mockup, the consolidated approach for evaluating spacesuits in action, and the various methods for generating hardware requirements for an entire population from the resulting complex data set using a limited number of test subjects. Kinematics, hardware clearance, suited anthropometry, and subjective feedback data were recorded on fifteen unsuited and five suited subjects. Unsuited subjects were selected chiefly by anthropometry, in an attempt to find subjects who fell within predefined criteria for medium male, large male, and small female subjects. The suited subjects were selected as a subset of the unsuited subjects and tested in both unpressurized and pressurized conditions. Since the prototype spacesuits were fabricated in a single size to accommodate an approximately average-sized male, the findings from the suit testing were systematically extrapolated to the extremes of the population to anticipate likely problem areas. This extrapolation was achieved by first performing a population analysis comparing suited subjects' performance to their unsuited performance and then applying the results to the entire range of the population. The use of a transparent space vehicle mockup enabled the collection of large amounts of data during human-in-the-loop testing. Mobility data revealed that most of the tested spacesuits had sufficient ranges of motion for the tasks to be performed successfully. A failed task by a suited subject most often stemmed from a combination of poor field of view while seated and poor dexterity of the gloves when pressurized, or from suit/vehicle interface issues. Seat ingress/egress testing showed that problems with anthropometric accommodation do not occur exclusively with the largest or smallest subjects, but rather with specific combinations of measurements that lead to narrower seat ingress/egress clearance.
Small numbers are sensed directly, high numbers constructed from size and density.
Zimmermann, Eckart
2018-04-01
Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that apparent numerosity is derived from an analysis of lower-level features such as the size and density of the set. The second suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex that are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation by measuring the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparably small, and the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity: whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the size of the area over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.
Point-of-Care Test Equipment for Flexible Laboratory Automation.
You, Won Suk; Park, Jae Jun; Jin, Sung Moon; Ryew, Sung Moo; Choi, Hyouk Ryeol
2014-08-01
Blood tests are some of the core clinical laboratory tests for diagnosing patients. In hospitals, an automated process called total laboratory automation, which relies on a set of sophisticated equipment, is normally adopted for blood tests. Because a total laboratory automation system typically requires a large footprint and a significant amount of power, slim and easy-to-move blood test equipment is needed for specific demands such as emergency departments or small local clinics. In this article, we present a point-of-care test system that provides flexibility and portability at low cost. First, the system components, including a reagent tray, dispensing module, microfluidic disk rotor, and photometry scanner, and their functions are explained. Then, a scheduler algorithm that provides the point-of-care test platform with an efficient test schedule to reduce test time is introduced. Finally, the results of diagnostic tests are presented to evaluate the system. © 2014 Society for Laboratory Automation and Screening.
NASA Astrophysics Data System (ADS)
Bian, Q.; Huang, X. H. H.; Yu, J. Z.
2014-09-01
Size distribution data of major aerosol constituents are essential for source apportionment of visibility degradation and for the testing and verification of air quality models incorporating aerosols. We report here 1-year observations of the mass size distributions of major inorganic ions (sulfate, nitrate, chloride, ammonium, sodium, potassium, magnesium, and calcium) and oxalate at a coastal suburban receptor site in Hong Kong, China. A total of 43 sets of size-segregated samples in the size range of 0.056-18 μm were collected from March 2011 to February 2012. The size distributions of sulfate, ammonium, potassium, and oxalate were characterized by a dominant droplet mode with a mass mean aerodynamic diameter (MMAD) in the range of ~0.7-0.9 μm. Oxalate had a slightly larger MMAD than sulfate on days with temperatures above 22 °C, as a result of volatilization and repartitioning. Nitrate was mostly dominated by the coarse mode, but an enhanced presence in the fine mode was detected on winter days with lower temperatures and lower concentrations of sea salt and soil particles. This data set reveals an inversely proportional relationship between the fraction of nitrate in the fine mode and the product of the sum of sodium and calcium in equivalent concentrations and the dissociation constant of ammonium nitrate (i.e., (1/([Na+] + 2[Ca2+])) × (1/Ke')) when the fine-mode nitrate fraction (Pn_fine) is significant (> 10%). The seasonal variation observed for sea salt aerosol abundance, with lower values in summer and winter, is possibly linked with the lower marine salinities in these two seasons. Positive matrix factorization was applied to estimate the relative contributions of local formation and transport to the observed ambient sulfate level, using the combined data sets of size-segregated sulfate and selected gaseous air pollutants. On average, the regional/super-regional transport of air pollutants was the dominant source at this receptor site, especially on high-sulfate days, while local formation processes contributed approximately 30% of the total sulfate. This work provides field-measurement-based evidence important for understanding both local photochemistry and regional/super-regional transport, which is needed to properly simulate sulfate aerosols in air quality models.
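For readers unfamiliar with the quantity, the sketch below shows one common way to compute an MMAD (taken here as the median of the mass-weighted size distribution) from size-segregated impactor data by log-linear interpolation of the cumulative mass distribution. It is a generic illustration with made-up numbers, not the processing used for this data set.

```python
# Generic MMAD calculation from size-segregated impactor data (illustrative).
import numpy as np

def mmad(stage_upper_cuts_um, stage_masses):
    """Median diameter of the mass distribution by log-linear interpolation.
    stage_upper_cuts_um: upper cut diameter of each stage, ascending;
    stage_masses: mass on each stage (treated, for simplicity, as mass below that cut)."""
    d = np.asarray(stage_upper_cuts_um, dtype=float)
    m = np.asarray(stage_masses, dtype=float)
    cum = np.cumsum(m) / m.sum()                       # cumulative mass fraction
    return float(np.exp(np.interp(0.5, cum, np.log(d))))

# Hypothetical droplet-mode-dominated distribution (diameters in micrometres)
cuts = [0.10, 0.18, 0.32, 0.56, 1.0, 1.8, 3.2, 5.6, 10, 18]
mass = [2, 5, 12, 25, 30, 12, 6, 4, 2, 2]
print("MMAD ~ %.2f um" % mmad(cuts, mass))
```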
Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I
2013-05-01
When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy, and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets, one with continuous phenotype data (height) and one with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and dosage approaches are equally powerful, and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes: if the power benefit is small, that is, if the change in the distribution of the test statistic under the alternative is relatively small, the probability of obtaining a smaller test statistic is greater. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation could be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
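The contrast between the two approaches can be shown in a few lines: the dosage approach collapses the conditional genotype distribution of an ungenotyped sibling to its mean (the expected allele count), whereas the mixture approach retains the full distribution. The probabilities below are illustrative only.

```python
# "Dosage" summary of an imputed genotype: the mean of the conditional
# distribution over 0, 1, or 2 copies of the reference allele.
def dosage(p_genotype):
    """p_genotype: probabilities of carrying 0, 1, or 2 allele copies."""
    p0, p1, p2 = p_genotype
    return 0 * p0 + 1 * p1 + 2 * p2

print(dosage((0.25, 0.50, 0.25)))  # -> 1.0 expected allele copies
```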
Karulin, Alexey Y.; Karacsony, Kinga; Zhang, Wenji; Targoni, Oleg S.; Moldovan, Ioana; Dittrich, Marcus; Sundararaman, Srividya; Lehmann, Paul V.
2015-01-01
Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when it comes to counting these spots, the decision on setting the lower and upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical. If spot sizes follow a known statistical distribution, precise predictions about the minimal and maximal spot sizes belonging to a given T cell population can be made. We studied the size distribution properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, the spot size distributions followed a log-normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs. PMID:25612115
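A hedged sketch of the idea: if spot sizes follow a log-normal distribution, objective lower and upper counting thresholds can be taken as extreme percentiles of the fitted distribution. The synthetic data, percentile choices, and fitting routine below are assumptions for illustration, not the authors' pipeline.

```python
# Fit a log-normal to spot sizes and derive counting gates from its tails.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
spot_sizes_um2 = rng.lognormal(mean=np.log(2000), sigma=0.6, size=500)  # synthetic spots

shape, loc, scale = stats.lognorm.fit(spot_sizes_um2, floc=0)
lower, upper = stats.lognorm.ppf([0.005, 0.995], shape, loc=loc, scale=scale)
print(f"count spots between {lower:.0f} and {upper:.0f} um^2")
```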
Selecting promising treatments in randomized Phase II cancer trials with an active control.
Cheung, Ying Kuen
2009-01-01
The primary objective of Phase II cancer trials is to evaluate the potential efficacy of a new regimen in terms of its antitumor activity in a given type of cancer. Due to advances in oncology therapeutics and heterogeneity in the patient population, such evaluation can be interpreted objectively only in the presence of a prospective control group of an active standard treatment. This paper deals with the design problem of Phase II selection trials in which several experimental regimens are compared to an active control, with the objective of identifying an experimental arm that is more effective than the control or declaring futility if no such treatment exists. Conducting a multi-arm randomized selection trial is a useful strategy to prioritize experimental treatments for further testing when many candidates are available, but the sample size required in such a trial with an active control could raise feasibility concerns. In this study, we extend the sequential probability ratio test for normal observations to the multi-arm selection setting. The proposed methods, allowing frequent interim monitoring, offer a high likelihood of early trial termination and, as such, enhance enrollment feasibility. The termination and selection criteria have closed-form solutions and are easy to compute with respect to any given set of error constraints. The proposed methods are applied to design a selection trial in which combinations of sorafenib and erlotinib are compared to a control group in patients with non-small-cell lung cancer using a continuous endpoint of change in tumor size. The operating characteristics of the proposed methods are compared to those of a single-stage design via simulations: the sample size requirement is reduced substantially and is feasible at an early stage of drug development.
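For orientation, the sketch below implements only the classical two-hypothesis Wald sequential probability ratio test for normal observations with known variance, which the paper extends to the multi-arm selection setting; boundaries, effect sizes, and error rates are illustrative.

```python
# Classical two-hypothesis SPRT for N(mu, sigma^2) data with known sigma.
import math

def sprt(xs, mu0, mu1, sigma, alpha=0.05, beta=0.20):
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)  # log-likelihood ratio increment
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue sampling", len(xs)

print(sprt([0.4, 0.7, 1.1, 0.9, 1.2], mu0=0.0, mu1=1.0, sigma=1.0))
```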
Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk
2018-06-01
Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of thresholding to the parameters of the generated test image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which indicated the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
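A toy version of the calibration idea, under stated assumptions: rasterize an object of known volume, corrupt it with noise, segment it by thresholding, and report the volume error. TeIGen itself generates tubes and porosities with a realistic noise model; a sphere and Gaussian noise are used here only for brevity.

```python
# Quantify segmentation bias against a phantom of analytically known volume.
import numpy as np

n, radius, voxel = 64, 20.0, 1.0
z, y, x = np.mgrid[:n, :n, :n]
true_ball = (x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 <= radius ** 2

image = true_ball.astype(float) + np.random.default_rng(0).normal(0, 0.3, (n, n, n))
segmented = image > 0.5                      # naive global threshold

true_volume = 4 / 3 * np.pi * radius ** 3
estimated_volume = segmented.sum() * voxel ** 3
print("relative volume error: %.1f%%"
      % (100 * (estimated_volume - true_volume) / true_volume))
```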
Visual search for arbitrary objects in real scenes.
Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M
2011-08-01
How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
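As a reminder of how the efficiency index is obtained, a quick least-squares fit of RT against set size recovers the slope in ms/item; the numbers below are made up and are not data from these experiments.

```python
# Search efficiency as the slope of the RT x set-size function (toy data).
import numpy as np

set_sizes = np.array([5, 10, 20, 40])        # surrogate "labeled regions"
rts_ms = np.array([620, 650, 700, 810])      # hypothetical mean RTs
slope, intercept = np.polyfit(set_sizes, rts_ms, 1)
print(f"search slope ~ {slope:.1f} ms/item, intercept ~ {intercept:.0f} ms")
```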
Evaluating Suit Fit Using Performance Degradation
NASA Technical Reports Server (NTRS)
Margerum, Sarah E.; Cowley, Matthew; Harvill, Lauren; Benson, Elizabeth; Rajulu, Sudhakar
2012-01-01
The Mark III planetary technology demonstrator space suit can be tailored to an individual by swapping the modular components of the suit, such as the arms, legs, and gloves, as well as adding or removing sizing inserts in key areas. A method was sought to identify the transition from an ideal suit fit to a bad fit and how to quantify this breakdown using a metric of mobility-based human performance data. To this end, the degradation of the range of motion of the elbow and wrist of the suit as a function of suit sizing modifications was investigated to attempt to improve suit fit. The sizing range tested spanned optimal and poor fit and was adjusted incrementally in order to compare each joint angle across five different sizing configurations. Suited range of motion data were collected using a motion capture system for nine isolated and functional tasks utilizing the elbow and wrist joints. A total of four subjects were tested with motions involving both arms simultaneously as well as the right arm by itself. Findings indicated that no single joint drives the performance of the arm as a function of suit size; instead it is based on the interaction of multiple joints along a limb. To determine a size adjustment range where an individual can operate the suit at an acceptable level, a performance detriment limit was set. This user-selected limit reveals the task-dependent tolerance of the suit fit around optimal size. For example, the isolated joint motion indicated that the suit can deviate from optimal by as little as -0.6 in to -2.6 in before experiencing a 10% performance drop in the wrist or elbow joint. The study identified a preliminary method to quantify the impact of size on performance and developed a new way to gauge tolerances around optimal size.
Ecological correlates of group-size variation in a resource-defense ungulate, the sedentary guanaco.
Marino, Andrea; Baldi, Ricardo
2014-01-01
For large herbivores, predation risk, habitat structure, and population density are often reported as major determinants of group size variation within and between species. However, whether the underlying causes of these relationships imply an ecological adaptation or are the result of a purely mechanistic process, in which fusion and fragmentation events depend only on the rate of group meeting, is still under debate. The aim of this study was to model guanaco family and bachelor group sizes in contrasting ecological settings in order to test hypotheses regarding the adaptive significance of group-size variation. We surveyed guanaco group sizes within three wildlife reserves located in eastern Patagonia where guanacos occupy a mosaic of grasslands and shrublands. Two of these reserves have been free from predators for decades, while in the third, pumas often prey on guanacos. All locations have experienced important changes in guanaco abundance throughout the study, offering the opportunity to test for density effects. We found that bachelor group size increased with increasing density, as expected under the mechanistic approach, but was independent of habitat structure or predation risk. In contrast, the smaller, territorial family groups were larger in the predator-exposed than in the predator-free locations, and were larger in open grasslands than in shrublands. However, the influence of population density on these social units was very weak. Therefore, family group data supported the adaptive significance of group-size variation but did not support the mechanistic idea. Yet, the magnitude of the effects was small and between-population variation in family group size after controlling for habitat and predation was negligible, suggesting that plasticity of these social units is considerably low. Our results showed that different social units might respond differentially to local ecological conditions, supporting two contrasting hypotheses in a single species, and highlight the importance of taking into account the proximate interests and constraints to which group members may be exposed when deriving predictions about group-size variation.
ADVANCED CUTTINGS TRANSPORT STUDY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stefan Miska; Nicholas Takach; Kaveh Ashenayi
2004-01-31
Final design of the mast was completed (Task 5). The mast consists of two welded plate girders, set next to each other and spaced 14 inches apart. Fabrication of the boom will be completed in two parts solely for ease of transportation. The end pivot connection will be made through a single 2-inch diameter x 4 feet-8 inch long 316 SS bar. During installation, hard piping make-ups using Chiksan joints will connect the annular section and the 4-inch return line to allow full movement of the mast from horizontal to vertical. Additionally, flexible hoses and piping will be installed to isolate both towers from piping loads and to allow recycling operations, respectively. Calibration of the prototype Foam Generator Cell has been completed and experiments are now being conducted. We were able to generate up to 95% quality foam. Work is currently underway to attach the Thermo-Haake RS300 viscometer and to install a view port with a microscope to measure foam bubble size and bubble size distribution. Foam rheology tests (Task 13) were carried out to evaluate the rheological properties of the proposed foam formulation. After successful completion of the first foam test, two sets of rheological tests were conducted at different foam flow rates while keeping other parameters constant (100 psig, 70 °F, 80% quality). The results from these tests are generally in agreement with the foam tests done previously during Task 9. However, an unanticipated observation during these tests was that, in both cases, the frictional pressure drop in the 2-inch pipe was lower than that in the 3-inch and 4-inch pipes. We also conducted the first foam cuttings transport test during this quarter. Experiments on aerated fluids without cuttings have been completed in ACTF (Task 10). Gas and liquid were injected at different flow rates. Two different sets of experiments were carried out, where the only difference was the temperature. Another set of tests was performed, which covered a wide range of pressures and temperatures. Several parameters were measured during these tests, including differential pressure and mixture density in the annulus. Flow patterns during the aerated fluid tests were observed through the view port in the annulus and recorded by a video camera. Most of the flow patterns were slug flow; further increases in gas flow rate changed wavy flow patterns to slug flow. At this stage, all of the planned cuttings transport tests have been completed. The results clearly show that temperature significantly affects the cuttings transport efficiency of aerated muds, in addition to the liquid flow rate and gas-liquid ratio (GLR). Since the printed circuit board is functioning (Task 11) with an acceptable noise level, we were able to conduct several tests. We used the newly designed pipe test section and verified that we can distinguish between different depths of sand in a static bed of sand in the pipe section. The results indicated that we can distinguish between different sand levels. We tested with water, air, and a mix of the two mediums. Major modifications (installation of a magnetic flow meter, pipe fittings, and pipelines) to the dynamic bubble characterization facility (DTF, Task 12) were completed. An Excel program that allows obtaining the desired foam quality in the DTF was developed. The program predicts the foam quality by recording the time it takes to pressurize the loop with nitrogen.
Analysis and testing of axial compression in imperfect slender truss struts
NASA Technical Reports Server (NTRS)
Lake, Mark S.; Georgiadis, Nicholas
1990-01-01
The axial compression of imperfect slender struts for large space structures is addressed. The load-shortening behavior of struts with initially imperfect shapes and eccentric compressive end loading is analyzed using linear beam-column theory, and results are compared with geometrically nonlinear solutions to determine the applicability of linear analysis. A set of developmental aluminum-clad graphite/epoxy struts sized for application to the Space Station Freedom truss is measured to determine their initial imperfection magnitude, load eccentricity, cross-sectional area, and moment of inertia. Load-shortening curves are determined from axial compression tests of these specimens and are correlated with theoretical curves generated using linear analysis.
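A hedged sketch of the kind of linear beam-column relation typically used in such load-shortening analyses (a pin-ended strut with a half-sine initial bow): the midspan bow is amplified by 1/(1 - P/P_E) and the end shortening combines axial compression with a bowing term. This is a textbook formulation under stated assumptions, not necessarily the authors' exact model, and the numbers are illustrative.

```python
# Linear beam-column load-shortening for an initially bowed pin-ended strut:
#   delta = delta0 / (1 - P/P_E),  P_E = pi^2 * E * I / L^2
#   shortening = P*L/(E*A) + pi^2 * (delta^2 - delta0^2) / (4*L)
import math

def load_shortening(P, E, A, I, L, delta0):
    P_E = math.pi ** 2 * E * I / L ** 2          # Euler buckling load
    delta = delta0 / (1.0 - P / P_E)             # amplified midspan bow
    return P * L / (E * A) + math.pi ** 2 * (delta ** 2 - delta0 ** 2) / (4.0 * L)

# Illustrative SI numbers only: 2 m strut with a 2 mm initial bow
print(load_shortening(P=5e3, E=70e9, A=3e-4, I=5e-8, L=2.0, delta0=2e-3))
```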
LANDSAT (MSS): Image demographic estimations
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Foresti, C.
1977-01-01
The author has identified the following significant results. Two sets of urban test sites, one with 35 cities and one with 70 cities, were selected in the State of São Paulo. A high degree of collinearity (0.96) was found between urban area measurements taken from aerial photographs and those taken from LANDSAT MSS imagery. High coefficients were observed when census data were regressed against aerial information (0.95) and LANDSAT data (0.92). The validity of population estimations was tested by regressing three urban variables against three classes of cities. Results supported the effectiveness of LANDSAT for estimating large city populations, with diminishing effectiveness as urban areas decrease in size.
Takahashi, Hidekazu; Haraguchi, Naotsugu; Nishimura, Junichi; Hata, Taishi; Matsuda, Chu; Yamamoto, Hirofumi; Mizushima, Tsunekazu; Mori, Masaki; Doki, Yuichiro; Nakajima, Kiyokazu
2018-06-01
Modern electrosurgical tools have a specific coagulation mode called "soft coagulation". However, soft coagulation has not been widely accepted for surgical operations. To optimize the soft coagulation environment, we developed a novel suction device integrated with an electrosurgical probe, called the "Suction ball coagulator" (SBC). In this study, we aimed to optimize the SBC design through a prototyping process involving a bench test and a preclinical study, and then to demonstrate the feasibility, safety, and potential effectiveness of the SBC for laparoscopic surgery in clinical settings. SBC prototyping was performed with a bench test. Device optimization was performed in a preclinical study with a domestic swine bleeding model. The SBC was then tested in a clinical setting during 17 laparoscopic colorectal surgeries. In the bench tests, two tip hole sizes and patterns showed good suction capacity. The preclinical study indicated the best tip shape for accuracy. In clinical use, no device-related adverse event was observed. Moreover, the SBC enabled prompt hemostasis and blunt dissection. In addition, the SBC could evacuate vapors generated by tissue ablation with the electrosurgical probe during laparoscopic surgery. We successfully developed a novel integrated suction/coagulation probe for hemostasis and commercialized it.
Ranking Bias in Association Studies
Jeffries, Neal O.
2009-01-01
Background: It is widely appreciated that genome-wide association studies often yield overestimates of the association of a marker with disease when attention focuses upon the marker showing the strongest relationship. For example, in a case-control setting, the largest (in absolute value) estimated odds ratio has been found to typically overstate the association as measured in a second, independent set of data. The most common reason given for this observation is that the choice of the most extreme test statistic is often conditional upon first observing a significant p value associated with the marker. A second, less appreciated reason is described here. Under common circumstances, it is the multiple testing of many markers and the subsequent focus upon those with the most extreme test statistics (i.e., highly ranked results) that leads to bias in the estimated effect sizes. Conclusions: This bias, termed ranking bias, is separate from that arising from conditioning on a significant p value and may often be a more important factor in generating bias. An analytic description of this bias, simulations demonstrating its extent, and identification of some factors leading to its exacerbation are presented. PMID:19172085
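A small simulation makes the mechanism concrete: when many markers with the same modest true effect are tested and only the top-ranked estimate is kept, that estimate systematically exceeds its true value. All parameters below are arbitrary illustrations, not values from the paper.

```python
# Ranking bias: the top-ranked estimate overstates the true effect.
import numpy as np

rng = np.random.default_rng(42)
n_markers, n_sims, true_effect, se = 1000, 2000, 0.05, 0.05

top_estimates = []
for _ in range(n_sims):
    est = rng.normal(true_effect, se, size=n_markers)   # estimated log odds ratios
    top_estimates.append(est[np.argmax(np.abs(est))])   # keep only the top-ranked marker
print("true effect: %.3f, mean top-ranked estimate: %.3f"
      % (true_effect, np.mean(np.abs(top_estimates))))
```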
NASA Astrophysics Data System (ADS)
Pokorný, Jaroslav; Pavlíková, Milena; Medved, Igor; Pavlík, Zbyšek; Zahálková, Jana; Rovnaníková, Pavla; Černý, Robert
2016-06-01
Active silica-containing materials in the sub-micrometer size range are commonly used to modify the strength parameters and durability of cement-based composites. In addition, these materials also help accelerate cement hydration. In this paper, two types of diatomaceous earth are used as partial cement replacement in the composition of cement paste mixtures. For the raw binders, basic physical and chemical properties are studied. The chemical composition of the tested materials is determined using classical chemical analysis combined with the XRD method, which allowed assessment of the amorphous SiO2 phase content. For all tested mixtures, initial and final setting times are measured. Basic physical and mechanical properties are measured on hardened paste samples cured for 28 days in water: bulk density, matrix density, total open porosity, and compressive and flexural strength. The relationship between compressive strength and total open porosity is studied using several empirical models. The obtained results give evidence of the high pozzolanic activity of the tested diatomaceous earths. Their application leads to an increase in both initial and final setting times, a decrease in compressive strength, and an increase in flexural strength.
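The abstract does not name the empirical strength-porosity models used, so the sketch below fits one commonly cited form (a Ryshkevitch-type relation, sigma = sigma0 * exp(-b * P)) to made-up data purely as an illustration of the fitting step.

```python
# Fit a Ryshkevitch-type strength-porosity model by linear regression in log space.
import numpy as np

porosity = np.array([0.22, 0.26, 0.30, 0.34])    # total open porosity (fraction, invented)
strength = np.array([62.0, 48.0, 37.0, 29.0])    # compressive strength, MPa (invented)

slope, intercept = np.polyfit(porosity, np.log(strength), 1)
sigma0, b = np.exp(intercept), -slope
print(f"fitted sigma0 ~ {sigma0:.0f} MPa, b ~ {b:.1f}")
```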
Development of a Biodegradable Bone Cement for Craniofacial Applications
Henslee, Allan M.; Gwak, Dong-Ho; Mikos, Antonios G.; Kasper, F. Kurtis
2015-01-01
This study investigated the formulation of a two-component biodegradable bone cement comprising the unsaturated linear polyester macromer poly(propylene fumarate) (PPF) and crosslinked PPF microparticles for use in craniofacial bone repair applications. A full factorial design was employed to evaluate the effects of formulation parameters such as particle weight percentage, particle size, and accelerator concentration on the setting and mechanical properties of crosslinked composites. It was found that the addition of crosslinked microparticles to PPF macromer significantly reduced the temperature rise upon crosslinking from 100.3 ± 21.6 to 102.7 ± 49.3 °C for formulations without microparticles to 28.0 ± 2.0 to 65.3 ± 17.5 °C for formulations with microparticles. The main effects of increasing the particle weight percentage from 25 to 50% were to significantly increase the compressive modulus by 37.7 ± 16.3 MPa, increase the compressive strength by 2.2 ± 0.5 MPa, decrease the maximum temperature by 9.5 ± 3.7 °C, and increase the setting time by 0.7 ± 0.3 min. Additionally, the main effects of increasing the particle size range from 0–150 μm to 150–300 μm were to significantly increase the compressive modulus by 31.2 ± 16.3 MPa and the compressive strength by 1.3 ± 0.5 MPa. However, the particle size range did not have a significant effect on the maximum temperature and setting time. Overall, the composites tested in this study were found to have properties suitable for further consideration in craniofacial bone repair applications. PMID:22499285
Eye Size and Set in Small-Bodied Fossil Primates: A Three-Dimensional Method.
Rosenberger, Alfred L; Smith, Tim D; DeLeon, Valerie B; Burrows, Anne M; Schenck, Robert; Halenar, Lauren B
2016-12-01
We introduce a new method to geometrically reconstruct eye volume and placement in small-bodied primates based on the three-dimensional contour of the intraorbital surface. We validate it using seven species of living primates, with dry skulls and wet dissections, and test its application on seven species of Paleogene fossils of interest. The method performs well even when the orbit is damaged and incomplete, lacking the postorbital bar and represented only by the orbital floor. Eye volume is an important quantity for anatomic and metabolic reasons, which due to differences in eye set, or position within (or outside) the bony orbit, can be underestimated in living and fossil forms when calculated from aperture diameter. Our Ectopic Index quantifies how much the globe's volume protrudes anteriorly from the aperture. Lemur, Notharctus and Rooneyia resemble anthropoids, with deeply recessed eyes protruding 11%-13%. Galago and Tarsius are the other extreme, at 47%-56%. We argue that a laterally oriented aperture has little to do with line-of-sight in euprimates, as large ectopic eyes can position the cornea to enable a directly forward viewing axis, and soft tissue positions the eyes facing forward in megachiropteran bats, which have unenclosed, open eye sockets. The size and set of virtual eyes reconstructed from 3D cranial models confirm that eyes were large to hypertrophic in Hemiacodon, Necrolemur, Microchoerus, Pseudoloris and Shoshonius, but eye size in Rooneyia may have been underestimated by measuring the aperture, as in Aotus. Anat Rec, 299:1671-1689, 2016. © 2016 Wiley Periodicals, Inc.
Measuring the effect of attention on simple visual search.
Palmer, J; Ames, C T; Lindsey, D T
1993-02-01
Set-size effects in visual search may be due to one or more of three factors: sensory processes such as lateral masking between stimuli, attentional processes limiting the perception of individual stimuli, or attentional processes affecting the decision rules for combining information from multiple stimuli. These possibilities were evaluated in tasks such as searching for a longer line among shorter lines. To evaluate sensory contributions, display set-size effects were compared with cuing conditions that held sensory phenomena constant. Similar effects for the display and cue manipulations suggested that sensory processes contributed little under the conditions of this experiment. To evaluate the contribution of decision processes, the set-size effects were modeled with signal detection theory. In these models, a decision effect alone was sufficient to predict the set-size effects without any attentional limitation on perception.
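A hedged sketch of an unlimited-capacity signal detection account (independent decisions combined with a max rule): with a fixed criterion, false alarms rise and overall accuracy falls as set size grows, even though each item is perceived without attentional limitation. The parameters are arbitrary and the model is a generic textbook version, not necessarily the exact formulation fitted in the paper.

```python
# Max-rule signal detection model of set-size effects (Monte Carlo version).
import numpy as np

rng = np.random.default_rng(0)
d_prime, criterion, n_trials = 2.0, 1.5, 20000

for set_size in (1, 2, 4, 8):
    noise = rng.normal(0.0, 1.0, size=(n_trials, set_size))   # distractor-only displays
    target_present = noise.copy()
    target_present[:, 0] += d_prime                            # one item carries the signal
    hit = (target_present.max(axis=1) > criterion).mean()
    fa = (noise.max(axis=1) > criterion).mean()
    print(f"set size {set_size}: hit={hit:.2f}, false alarm={fa:.2f}")
```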
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adnani, N
Purpose: To commission the Monaco Treatment Planning System for the Novalis Tx machine. Methods: The commissioning of the Monte Carlo (MC), Collapsed Cone (CC), and electron Monte Carlo (eMC) beam models was performed through a series of measurements and calculations in medium and in water. In-medium measurements relied on the Octavius 4D QA system with the 1000 SRS detector array for field sizes smaller than 4 cm × 4 cm and the 1500 detector array for larger field sizes. Heterogeneity corrections were validated using a custom-built phantom. Prior to clinical implementation, end-to-end testing of Prostate and H&N VMAT plans was performed. Results: Using a 0.5% uncertainty and a 2 mm grid size, Tables I and II summarize the MC validation at 6 MV and 18 MV in both medium and water. Tables III and IV show similar comparisons for CC. Using the custom heterogeneity phantom setup of Figure 1 and the IGRT guidance summarized in Figure 2, Table V lists the percent pass rate for a 2%, 2 mm gamma criterion at 6 and 18 MV for both MC and CC. The relationship between the MC calculation settings (uncertainty and grid size) and the gamma passing rate for a prostate and an H&N case is shown in Table VI. Table VII lists the results of the eMC calculations compared to measured data for clinically available applicators, and Table VIII for small field cutouts. Conclusion: MU calculations using MC are highly sensitive to the uncertainty and grid size settings; the difference can be of the order of several percent. MC is superior to CC for small fields and when using heterogeneity corrections, regardless of field size, making it more suitable for SRS, SBRT, and VMAT deliveries. eMC showed good agreement with measurements down to a 2 cm × 2 cm field size.
Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.
Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J
2013-06-01
An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test, particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than the power of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered, including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
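A highly simplified sketch of the design idea, under stated assumptions: for each candidate sample size of the new study, simulate its result, update a fixed-effect meta-analysis of the existing studies, and record how often the pooled effect is significant. The paper works with bivariate models of sensitivity and specificity; a generic effect size and a toy variance model are used here instead, with invented prior-study values.

```python
# Power of the *updated* meta-analysis as a function of the new study's size.
import numpy as np

rng = np.random.default_rng(3)
existing_effects = np.array([0.10, 0.30, 0.15])   # hypothetical prior study estimates
existing_vars = np.array([0.04, 0.05, 0.04])      # and their variances
true_effect = 0.20

def power_of_updated_meta(new_n, n_sims=5000):
    hits = 0
    for _ in range(n_sims):
        new_var = 1.0 / new_n                                  # toy variance model
        new_est = rng.normal(true_effect, np.sqrt(new_var))
        effects = np.append(existing_effects, new_est)
        variances = np.append(existing_vars, new_var)
        w = 1.0 / variances                                    # inverse-variance weights
        pooled = np.sum(w * effects) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        hits += abs(pooled / se) > 1.96                        # pooled effect significant?
    return hits / n_sims

for n in (25, 50, 100, 200):
    print(n, power_of_updated_meta(n))
```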
NASA Astrophysics Data System (ADS)
Cantor-Rivera, Diego; Goubran, Maged; Kraguljac, Alan; Bartha, Robert; Peters, Terry
2010-03-01
The main objective of this study was to assess the effect of smoothing filter selection on Voxel-Based Morphometry studies of structural T1-weighted magnetic resonance images. Gaussian filters of 4 mm, 8 mm, or 10 mm Full Width at Half Maximum are commonly used, based on the assumption that the filter size should be at least twice the voxel size to obtain robust statistical results. The hypothesis of the presented work was that the selection of the smoothing filter influences the detectability of small lesions in the brain. Mesial Temporal Sclerosis associated with Epilepsy was used as the case to demonstrate this effect. Twenty T1-weighted MRIs from the BrainWeb database were selected. A small phantom lesion was placed in the amygdala, hippocampus, or parahippocampal gyrus of ten of the images. Subsequently, the images were registered to the ICBM/MNI space. After grey matter segmentation, a T-test was carried out to compare each image containing a phantom lesion with the rest of the images in the set. For each lesion, the T-test was repeated with different Gaussian filter sizes. Voxel-Based Morphometry detected some of the phantom lesions. Of the three parameters considered (location, size, and intensity), location was shown to be the dominant factor for the detection of the lesions.
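The conventional link between the quoted FWHM values and the Gaussian kernel actually applied is sigma = FWHM / (2 * sqrt(2 * ln 2)); a minimal sketch follows, with an arbitrary random volume standing in for the segmented grey matter maps.

```python
# Convert a smoothing FWHM (mm) to a Gaussian sigma (voxels) and apply it.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_with_fwhm(volume, fwhm_mm, voxel_size_mm):
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # ~ FWHM / 2.355
    return gaussian_filter(volume, sigma=sigma_mm / voxel_size_mm)

vol = np.random.default_rng(0).normal(size=(32, 32, 32))      # placeholder volume
for fwhm in (4, 8, 10):                                       # mm, as in the study
    smoothed = smooth_with_fwhm(vol, fwhm, voxel_size_mm=1.0)
    print(fwhm, smoothed.std())                               # variance shrinks as FWHM grows
```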
Bread board float zone experiment system for high purity silicon
NASA Technical Reports Server (NTRS)
Kern, E. L.; Gill, G. L., Jr.
1982-01-01
A breadboard float zone experimental system has been established at Westech Systems for use by NASA in the float zone experimental area. A used zoner of suitable size and flexibility was acquired and installed with the necessary utilities. Repairs, alignments, and modifications were made to provide for dislocation-free zoning of silicon. The zoner is capable of studying process parameters used in growing silicon in gravity and is flexible enough to allow trying new features that will test concepts of zoning in microgravity. Characterizing state-of-the-art molten zones of a growing silicon crystal will establish the data base against which improvements of zoning in gravity or growing in microgravity can be compared. A diameter of 25 mm was chosen as the reference size, since growth in microgravity will be at that diameter or smaller for about the next 6 years. Dislocation-free crystals were grown in the 100 and 111 orientations, using a wide set of growth conditions. The zone shape at one set of conditions was measured by simultaneously aluminum doping and freezing the zone, slabbing it lengthwise, and delineating by etching. The whole set of crystals, grown under various conditions, was slabbed, polished, and striation etched, revealing the growth interface shape and the periodic and aperiodic natures of the striations.
Jing, Xia; Cimino, James J.
2011-01-01
Objective: To explore new graphical methods for reducing and analyzing large data sets in which the data are coded with a hierarchical terminology. Methods: We use a hierarchical terminology to organize a data set and display it in a graph. We reduce the size and complexity of the data set by considering the terminological structure and the data set itself (using a variety of thresholds), as well as the contributions of child-level nodes to parent-level nodes. Results: We found that our methods can reduce large data sets to manageable size and highlight the differences among graphs. The thresholds used as filters to reduce the data set can be used alone or in combination. We applied our methods to two data sets containing information about how nurses and physicians query online knowledge resources. The reduced graphs make the differences between the two groups readily apparent. Conclusions: This is a new approach to reducing the size and complexity of large data sets and to simplifying visualization. This approach can be applied to any data set that is coded with a hierarchical terminology. PMID:22195119
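A minimal sketch (not the authors' implementation) of the reduction idea: roll counts up the hierarchical terminology from child-level to parent-level nodes, then filter out nodes whose aggregated contribution falls below a threshold. The terminology fragment and counts below are invented.

```python
# Roll child-node counts up to parents, then prune nodes below a threshold.
from collections import defaultdict

parent = {"chest pain": "cardiovascular", "hypertension": "cardiovascular",
          "asthma": "respiratory", "cough": "respiratory"}
counts = {"chest pain": 40, "hypertension": 55, "asthma": 3, "cough": 2}

rolled = defaultdict(int, counts)
for term, n in counts.items():
    rolled[parent[term]] += n                 # each child contributes to its parent

threshold = 10
reduced = {t: n for t, n in rolled.items() if n >= threshold}
print(reduced)                                # low-count respiratory branch is filtered out
```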
Galvanic Manufacturing in the Cities of Russia: Potential Source of Ambient Nanoparticles
Golokhvast, Kirill S.; Shvedova, Anna A.
2014-01-01
Galvanic manufacturing is widely employed and can be found in nearly every average-sized city in Russia. The release and accumulation of different metals (Me), depending on the technology used, can be found in the vicinity of galvanic plants. Under the environmental protection act in Russia, the regulations for galvanic manufacturing do not include regulations and safety standards for ambient ultrafine and nanosized particulate matter (PM). To assess whether Me nanoparticles (NP) are among the environmental pollutants caused by galvanic manufacturing, the levels of Me NP were tested in urban snow samples collected around galvanic enterprises in two cities. Employing transmission electron microscopy, energy-dispersive X-ray spectroscopy, and a laser diffraction particle size analyzer, we found that the size distribution of the tested Me NP was within the 10–120 nm range. This is the first study to report that Me NP of Fe, Cr, Pb, Al, Ni, Cu, and Zn were detected around galvanic shop settings. PMID:25329582
Molecular hydrodynamics of high explosives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belak, J.
1994-11-01
High explosives release mechanical energy through chemical reactions. Applications of high explosives are vast in the mining and military industries and are beginning to see more civilian applications such as the deployment of airbags in modern automobiles. One of the central issues surrounding explosive materials is decreasing their sensitivity, necessary for their safe handling, while maintaining a high yield. Many practical tests have been devised to determine the sensitivity of explosive materials to shock, to impact, to spark, and to friction. These tests have great value in determining yield and setting precautions for safe handling but tell little of the mechanisms of initiation. How is the mechanical energy of impact or friction transformed into the chemical excitation that initiates explosion? The answer is intimately related to the structure of the explosive material, the size and distribution of grains, the size and presence of open areas such as voids and gas bubbles, and inevitably the bonding between explosive molecules.
Effectiveness of tori line use to reduce seabird bycatch in pelagic longline fishing
Domingo, Andrés; Abreu, Martin; Forselledo, Rodrigo; Yates, Oliver
2017-01-01
Industrial longline fisheries cause the death of large numbers of seabirds annually. Various mitigation measures have been proposed, including the use of tori lines. In this study, the efficiency of a single tori line in reducing seabird bycatch was tested on pelagic longline vessels (25-37 m length). Thirteen fishing trips were carried out in the area and season of the highest bycatch rates recorded in the southwest Atlantic (2009–2011). We deployed two treatments in random order: sets with a tori line and sets without a tori line (control treatment). The use of a tori line significantly reduced seabird bycatch rates. Forty-three and seven birds were captured in the control (0.85 birds/1,000 hooks, n = 49 sets) and in the tori line treatment (0.13 birds/1,000 hooks, n = 51 sets), respectively. In 47% of the latter sets, the tori line broke, either because of entanglement with the longline gear or due to tension. This diminished the tori line's effectiveness; five of the seven captures during sets where a tori line was deployed occurred following ruptures. Nine additional trips were conducted with a tori line that was modified to reduce entanglements (2012–2016). Seven entanglements were recorded in 73 longline sets. The chance of a rupture on these trips was 4% (95% c.l. = 1–18%) of that during 2009–2011. This work shows that the use of a tori line reduces seabird bycatch in pelagic longline fisheries and is a practice suitable for medium-size vessels (~25-40 m length). Because the study area has historically had very high bycatch rates at the global level, this tori line design is potentially useful for reducing seabird bycatch on many medium-size pelagic longline vessels fishing in the southern hemisphere. PMID:28886183