Sample records for extremely small sample

  1. Using the Student's "t"-Test with Extremely Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.

    2013-01-01

    Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
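
    As a concrete illustration of the setting discussed above (not the paper's own simulations), a two-sample Student's t-test with N = 5 per group can be run in a few lines; the data values here are invented for demonstration.

```python
# Hypothetical N = 5 measurements per group, purely for illustration.
from scipy import stats

group_a = [4.2, 5.1, 3.8, 4.9, 4.4]
group_b = [5.6, 6.0, 5.2, 6.3, 5.8]

# Classic Student's t-test (equal variances), df = n1 + n2 - 2 = 8.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```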

  2. Improving power and robustness for detecting genetic association with extreme-value sampling design.

    PubMed

    Chen, Hua Yun; Li, Mingyao

    2011-12-01

    Extreme-value sampling design that samples subjects with extremely large or small quantitative trait values is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and is more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C) which includes study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative trait and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.
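
    A minimal sketch of the design contrast described in this abstract (not the authors' robust estimator): simulate a quantitative trait with an additive SNP effect, keep only the phenotypic extremes, and compare a case-control logistic analysis with a naive linear dose-response analysis. All numbers and thresholds are assumptions for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
genotype = rng.binomial(2, 0.3, n)            # 0/1/2 minor-allele counts
trait = 0.15 * genotype + rng.normal(size=n)  # e.g., an HDL-C-like trait

# Extreme-value sampling: keep the top and bottom 10% of the trait.
lo, hi = np.quantile(trait, [0.10, 0.90])
keep = (trait <= lo) | (trait >= hi)
g, y = genotype[keep], trait[keep]

# (a) Case-control analysis: extremes dichotomized into "cases"/"controls".
labels = (y >= hi).astype(int)
logit = LogisticRegression().fit(g.reshape(-1, 1), labels)

# (b) Dose-response analysis: linear regression of trait on genotype
# (naive under the biased sampling, which is the problem the paper addresses).
slope, intercept, r, p, se = stats.linregress(g, y)
print(f"logistic coef: {logit.coef_[0][0]:.3f}, linear slope: {slope:.3f} (p={p:.2g})")
```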

  3. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    NASA Astrophysics Data System (ADS)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed-matter physics problems. We report on a new setup that improves the signal-to-noise ratio for small samples and pressure-cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25-times-smaller spot size is achieved, along with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide is expected to establish a top-class spectrometer for studying novel exotic properties of matter under more stringent sample environment conditions, such as extreme pressures combined with small sample sizes.

  4. Slice sampling technique in Bayesian extreme of gold price modelling

    NASA Astrophysics Data System (ADS)

    Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham

    2013-09-01

    In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
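
    The slice sampling algorithm itself is compact; the sketch below is a generic univariate slice sampler (stepping-out and shrinkage) applied to a Gumbel log-density, a plausible stand-in for the Gumbel model mentioned above rather than the authors' implementation.

```python
import math
import random

def gumbel_logpdf(x, mu=0.0, beta=1.0):
    z = (x - mu) / beta
    return -math.log(beta) - z - math.exp(-z)

def slice_sample(logpdf, x0, n_samples, w=1.0):
    samples, x = [], x0
    for _ in range(n_samples):
        log_y = logpdf(x) + math.log(random.random())  # vertical slice level
        # Step out an interval [l, r] that contains the slice.
        l = x - w * random.random()
        r = l + w
        while logpdf(l) > log_y:
            l -= w
        while logpdf(r) > log_y:
            r += w
        # Shrinkage: sample uniformly until a point lands inside the slice.
        while True:
            x_new = random.uniform(l, r)
            if logpdf(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                l = x_new
            else:
                r = x_new
        samples.append(x)
    return samples

draws = slice_sample(gumbel_logpdf, x0=0.5, n_samples=5000)
print(sum(draws) / len(draws))  # Gumbel(0,1) mean ~ 0.577 (Euler-Mascheroni)
```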

  5. Topological Analysis and Gaussian Decision Tree: Effective Representation and Classification of Biosignals of Small Sample Size.

    PubMed

    Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong

    2017-09-01

    Bucking the trend of big data, in microdevice engineering small sample sizes are common, especially when the device is still at the proof-of-concept stage. The small sample size, small interclass variation, and large intraclass variation have brought new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to effectively recognize targets of interest in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit biosignal representation in the topological domain, which reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. Currently, there have been no related works that can efficiently tackle the dilemma between avoiding electrochemical reaction and accelerating the assay process using ACEK.

  6. The Characteristics of Extreme Erosion Events in a Small Mountainous Watershed

    PubMed Central

    Fang, Nu-Fang; Shi, Zhi-Hua; Yue, Ben-Jiang; Wang, Ling

    2013-01-01

    A large amount of soil loss is caused by a small number of extreme events that are mainly responsible for the time compression of geomorphic processes. The aim of this study was to analyze suspended sediment transport during extreme erosion events in a mountainous watershed. Field measurements were conducted in Wangjiaqiao, a small agricultural watershed (16.7 km²) in the Three Gorges Area (TGA) of China. Continuous records were used to analyze suspended sediment transport regimes and assess the sediment loads of 205 rainfall–runoff events over a period of 16 hydrological years (1989–2004). Extreme events were defined as the largest events, ranked in order of their absolute magnitude (those above the 95th percentile). Ten extreme erosion events out of the 205 erosion events, representing 83.8% of the total suspended sediment load, were selected for study. The results of canonical discriminant analysis indicated that extreme erosion events are characterized by high maximum flood-suspended sediment concentrations, high runoff coefficients, and high flood peak discharge, which could possibly be explained by the transport of sediment deposited within the stream bed during previous events or by bank collapses. PMID:24146898
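
    The event-selection rule is easy to state in code. The sketch below ranks synthetic event loads, keeps those at or above the 95th percentile, and reports their share of the total load; the numbers are invented, not the Wangjiaqiao data.

```python
import numpy as np

rng = np.random.default_rng(1)
event_loads = rng.lognormal(mean=2.0, sigma=1.5, size=205)  # t/event, made up

threshold = np.percentile(event_loads, 95)
extreme = event_loads[event_loads >= threshold]
share = extreme.sum() / event_loads.sum()
print(f"{extreme.size} extreme events carry {share:.1%} of the total load")
```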

  7. Extreme Temperature Performance of Automotive-Grade Small Signal Bipolar Junction Transistors

    NASA Technical Reports Server (NTRS)

    Boomer, Kristen; Damron, Benny; Gray, Josh; Hammoud, Ahmad

    2018-01-01

    Electronics designed for space exploration missions must display efficient and reliable operation under extreme temperature conditions. Lunar outposts, Mars rovers and landers, the James Webb Space Telescope, the Europa orbiter, and deep-space probes are examples of missions in which extreme temperatures and thermal cycling are encountered. Switching transistors, small-signal as well as power-level devices, are widely used in electronic controllers, data instrumentation, and power management and distribution systems. Little is known, however, about their performance in extreme temperature environments beyond their specified operating range, in particular under cryogenic conditions. This report summarizes preliminary results obtained on the evaluation of commercial-off-the-shelf (COTS) automotive-grade NPN small-signal transistors over a wide temperature range and under thermal cycling. The investigations were carried out to establish a baseline on the functionality of these transistors and to determine their suitability for use outside their recommended temperature limits.

  8. Standard Deviation for Small Samples

    ERIC Educational Resources Information Center

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
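
    The record does not quote the paper's exact representations, but one standard identity of this flavor expresses the sample variance through squared pairwise differences, s^2 = sum_{i<j} (x_i - x_j)^2 / (n(n-1)), which for n = 3 or 4 integer observations is mental arithmetic. A quick numerical check (this identity is an assumption, not necessarily the authors' formula):

```python
import statistics

def variance_pairwise(xs):
    # Sample variance via the pairwise-difference identity.
    n = len(xs)
    pair_sq = sum((xs[i] - xs[j]) ** 2
                  for i in range(n) for j in range(i + 1, n))
    return pair_sq / (n * (n - 1))

data = [3, 7, 8]  # n = 3, integer observations
assert abs(variance_pairwise(data) - statistics.variance(data)) < 1e-12
print(variance_pairwise(data))  # ((3-7)^2 + (3-8)^2 + (7-8)^2) / 6 = 7.0
```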

  9. Future Lunar Sampling Missions: Big Returns on Small Samples

    NASA Astrophysics Data System (ADS)

    Shearer, C. K.; Borg, L.

    2002-01-01

    The next sampling missions to the Moon will return sample masses (100 g to 1 kg) substantially smaller than those returned by the Apollo missions (380 kg). Lunar samples to be returned by these missions are vital for: (1) calibrating the late impact history of the inner solar system, which can then be extended to other planetary surfaces; (2) deciphering the effects of catastrophic impacts on a planetary body (i.e., the Aitken crater); (3) understanding the very late-stage thermal and magmatic evolution of a cooling planet; (4) exploring the interior of a planet; and (5) examining volatile reservoirs and transport on an airless planetary body. Can small lunar samples be used to answer these and other pressing questions concerning important solar system processes? Two potential problems with small, robotically collected samples are placing them in a geologic context and extracting robust planetary information. Although geologic context will always be a potential problem with any planetary sample, new lunar samples can be placed within the context of the important Apollo-Luna collections and the burgeoning planet-scale data sets for the lunar surface and interior. Here we illustrate the usefulness of applying new or refined analytical approaches to deciphering information locked in small lunar samples.

  10. Manipulation of Samples at Extreme Temperatures for Fast in-situ Synchrotron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Richard

    An aerodynamic sample levitation system with laser beam heating was integrated with the APS beamlines 6-ID-D, 11-ID-C, and 20-BM-B. The new capability enables in-situ measurements of structure and XANES at extreme temperatures (300-3500 °C) and in conditions that completely avoid contact with container surfaces. In addition to maintaining a high degree of sample purity, the use of aerodynamic levitation enables deep supercooling and greatly enhanced glass formation from a wide variety of melts and liquids. Development and integration of controlled extreme sample environments and new measurement techniques is an important aspect of beamline operations and user support. Processing and solidifying liquids is a critical value-adding step in manufacturing semiconductors, optical materials, and metals, and in the operation of many energy conversion devices. Understanding structural evolution is of fundamental importance in condensed materials, geology, and biology. The new capability provides unique possibilities for materials research and helps to develop and maintain a competitive materials manufacturing and energy utilization industry. Test samples were used to demonstrate key features of the capability, including experiments on hot crystalline materials and liquids at temperatures from about 500 to 3500 °C. The use of controlled atmospheres with redox gas mixtures enabled in-situ changes in the oxidation states of cations in melts. Significant innovations in this work were: (i) use of redox gas mixtures to adjust the oxidation state of cations in situ; (ii) operation with a fully enclosed system suitable for work with nuclear fuel materials; (iii) making high-quality, high-energy in-situ x-ray diffraction measurements; (iv) making high-quality in-situ XANES measurements; (v) publishing high-impact results; and (vi) developing independent funding for the research on nuclear materials. This SBIR project work led to a commercial instrument product for the niche market of

  11. Small deformations of extreme five dimensional Myers-Perry black hole initial data

    NASA Astrophysics Data System (ADS)

    Alaee, Aghil; Kunduri, Hari K.

    2015-02-01

    We demonstrate the existence of a one-parameter family of initial data for the vacuum Einstein equations in five dimensions representing small deformations of the extreme Myers-Perry black hole. This initial data set has U(1)² symmetry and preserves the angular momenta and horizon geometry of the extreme solution. Our proof is based upon an earlier result of Dain and Gabach-Clement concerning the existence of U(1)-invariant initial data sets which preserve the geometry of extreme Kerr (at least for short times). In addition, we construct a general class of transverse-traceless, symmetric rank-2 tensors in these geometries.

  12. Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.

    PubMed

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications for more general problems that appear in research with small samples but concern all areas of prevention research. This special section is organized in two parts. The first part aims to provide input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of their broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.

  13. An interface for the direct coupling of small liquid samples to AMS

    DOE PAGES

    Ognibene, T. J.; Thomas, A. T.; Daley, P. F.; ...

    2015-05-28

    We describe the moving wire interface attached to the 1-MV AMS system at LLNL's Center for Accelerator Mass Spectrometry for the analysis of nonvolatile liquid samples, either as discrete drops or from the direct output of biochemical separation instrumentation such as high-performance liquid chromatography (HPLC). Discrete samples containing at least a few tens of nanograms of carbon and as little as 50 zmol of 14C can be measured with 3–5% precision in a few minutes. The dynamic range of our system spans approximately 3 orders of magnitude. Sample-to-sample memory is minimized by the use of fresh targets for each discrete sample, or by minimizing the amount of carbon present in an HPLC peak containing a significant amount of 14C. As a result, liquid-sample AMS provides a new technology to expand our biomedical AMS program by enabling the measurement of low-level biochemicals in extremely small samples that would otherwise be inaccessible.

  14. Trends and Cost-Analysis of Lower Extremity Nerve Injury Using the National Inpatient Sample.

    PubMed

    Foster, Chase H; Karsy, Michael; Jensen, Michael R; Guan, Jian; Eli, Ilyas; Mahan, Mark A

    2018-06-08

    Peripheral nerve injuries (PNIs) of the lower extremities have been assessed in small cohort studies; however, the actual incidence, national trends, comorbidities, and cost of care of lower extremity PNI are not defined. Lack of sufficient data limits discussion on national policies, payors, and other aspects fundamental to the delivery of care in the US. To establish estimates of lower extremity PNI incidence, associated diagnoses, and cost in the US using a comprehensive database with a minimum of a decade of data. The National Inpatient Sample was used to evaluate International Classification of Diseases codes for specific lower extremity PNIs (956.0-956.8) between 2001 and 2013. Lower extremity PNIs occurred with a mean incidence of 13.3 cases per million population annually, which declined minimally from 2001 to 2013. The mean ± SEM age was 41.6 ± 0.1 yr; 61.1% of patients were males. Most were admitted via the emergency department (56.0%). PNIs involved the sciatic (16.6%), femoral (10.7%), tibial (6.0%), and peroneal (33.4%) nerves, multiple nerves (1.3%), and other nerves (32.0%). Associated diagnoses included lower extremity fracture (13.4%), complications of care (11.2%), open wounds (10.3%), crush injury (9.7%), and other (7.2%). Associated procedures included tibial fixation (23.3%), closure of skin (20.1%), debridement of open fractures (15.4%), fixation of other bones (13.5%), and wound debridement (14.5%). The mean annual unadjusted compounded growth rate of charges was 8.8%. The mean ± SEM annual charge over the period was $64,031.20 ± $421.10, which was associated with the number of procedure codes (β = 0.2), length of stay (β = 0.6), and year (β = 0.1) in a multivariable analysis (P = .0001). These data describe associations in the treatment of lower extremity PNIs, which are important for considering national policies, costs, research, and the delivery of care.

  15. Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1996-01-01

    Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
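
    The ranking problem and the empirical Bayes remedy can be sketched in a few lines (a generic James-Stein-style shrinkage, not the authors' exact procedure): simulate trend estimates with heterogeneous standard errors, then compare the naive top-10 ranking with a shrinkage-based ranking.

```python
import numpy as np

rng = np.random.default_rng(2)
true_trend = rng.normal(0.0, 1.0, 200)       # true per-species trends
se = rng.uniform(0.2, 3.0, 200)              # per-species standard errors
estimate = true_trend + rng.normal(0.0, se)  # observed estimates

# Naive ranking: the largest raw estimates are dominated by noisy species.
naive_top = np.argsort(estimate)[-10:]
print("mean SE of naive top-10:", se[naive_top].mean())

# Empirical Bayes: shrink each estimate toward the prior mean (zero here)
# in proportion to its noise share before ranking.
tau2 = max(estimate.var() - (se ** 2).mean(), 1e-9)  # crude prior variance
shrunk = tau2 / (tau2 + se ** 2) * estimate
eb_top = np.argsort(shrunk)[-10:]
print("mean SE of EB top-10:", se[eb_top].mean())    # typically smaller
```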

  16. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because of the small amount of data available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests that can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.

  17. A Geology Sampling System for Small Bodies

    NASA Technical Reports Server (NTRS)

    Hood, A. D.; Naids, A. J.; Graff, T.; Abell, P.

    2015-01-01

    Human exploration of Small Bodies is being investigated as a precursor to a Mars surface mission. Asteroids, comets, dwarf planets, and the moons of Mars all fall into this Small Bodies category, and some are being discussed as potential mission targets. Obtaining geological samples for return to Earth will be a major objective for any mission to a Small Body. Currently, the knowledge base for geology sampling in microgravity is in its infancy. Furthermore, humans interacting with non-engineered surfaces in a microgravity environment poses unique challenges. In preparation for such missions, a team at the National Aeronautics and Space Administration (NASA) Johnson Space Center (JSC) has been working to gain experience on how to safely obtain numerous sample types in such an environment. This abstract briefly summarizes the types of samples the science community is interested in, discusses an integrated geology sampling solution, and highlights some of the unique challenges associated with this type of exploration.

  18. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    PubMed

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice between parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists have questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling against corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means than the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
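
    The pooled-resampling idea can be sketched as follows (a generic version under stated assumptions, not necessarily the authors' exact algorithm): pool both groups to embody the null hypothesis, resample each group's size from the pool, and compare the observed t statistic with its bootstrap null distribution.

```python
import numpy as np
from scipy import stats

def pooled_bootstrap_ttest(x, y, n_boot=10_000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])          # pooling enforces H0
    t_obs = stats.ttest_ind(x, y).statistic
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(pooled, size=len(x), replace=True)
        yb = rng.choice(pooled, size=len(y), replace=True)
        t_null[b] = stats.ttest_ind(xb, yb).statistic
    return np.mean(np.abs(t_null) >= abs(t_obs))  # two-sided p-value

x = np.array([2.1, 3.4, 2.9, 4.0, 3.2])   # tiny illustrative samples
y = np.array([4.8, 5.1, 4.4, 5.9, 5.0])
print(pooled_bootstrap_ttest(x, y))
```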

  19. A sub-sampled approach to extremely low-dose STEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, A.; Luzi, L.; Yang, H.; ...

    2018-01-22

    The inpainting of deliberately and randomly sub-sampled images offers a potential means to image specimens at high resolution and under extremely low-dose conditions (≤1 e⁻/Å²) using a scanning transmission electron microscope. We show that deliberate sub-sampling acquires images at least an order of magnitude faster than conventional low-dose methods for an equivalent electron dose. More importantly, when adaptive sub-sampling is implemented to acquire the images, there is a significant increase in resolution and sensitivity that accompanies the increase in imaging speed. Lastly, we demonstrate the potential of this method for beam-sensitive materials and in-situ observations by experimentally imaging the node distribution in a metal-organic framework.
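
    A toy version of the acquisition scheme (with simple interpolation standing in for the paper's inpainting algorithm): measure a random 20% of pixel positions and reconstruct the rest. The image and sampling fraction are made up for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
full = rng.poisson(lam=5.0, size=(64, 64)).astype(float)  # stand-in image

# Deliberate sub-sampling: measure only 20% of pixel positions.
mask = rng.random(full.shape) < 0.20
ys, xs = np.nonzero(mask)
grid_y, grid_x = np.mgrid[0:64, 0:64]

recon = griddata((ys, xs), full[mask], (grid_y, grid_x), method="linear")
# Fill pixels outside the convex hull with nearest-neighbor values.
nearest = griddata((ys, xs), full[mask], (grid_y, grid_x), method="nearest")
recon = np.where(np.isnan(recon), nearest, recon)
print("reconstruction RMSE:", np.sqrt(np.mean((recon - full) ** 2)))
```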

  20. Extreme Quantum Memory Advantage for Rare-Event Sampling

    NASA Astrophysics Data System (ADS)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit a memory advantage for sampling almost all of their rare-event classes.

  21. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    PubMed

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study of obesity reveals 10 significant genes associated with body mass index. Our results indicate that EPS-LASSO is an effective method for EPS data analysis that can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Contact: hdeng2@tulane.edu. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
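
    A hedged sketch of combining EPS with a LASSO fit (variable selection only; the paper's decorrelated-score hypothesis test is not reproduced here): simulate a sparse high-dimensional trait model, keep the phenotypic extremes, and check whether the causal predictors are recovered.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 2000, 500
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 0.5                          # 5 true causal predictors
y = X @ beta + rng.standard_normal(n)

lo, hi = np.quantile(y, [0.15, 0.85])   # extreme phenotype sampling
keep = (y <= lo) | (y >= hi)

model = LassoCV(cv=5).fit(X[keep], y[keep])
selected = np.flatnonzero(model.coef_)
print("selected predictors:", selected[:10])  # ideally indices 0-4 appear
```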

  22. Small Sample Reactivity Measurements in the RRR/SEG Facility: Reanalysis using TRIPOLI-4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummel, Andrew; Palmiotti, Giuseppe

    2016-08-01

    This work involved reanalyzing the RRR/SEG integral experiments performed at the Rossendorf facility in Germany throughout the 1970s and 80s. These small-sample reactivity worth measurements were carried out using the pile oscillator technique for many different fission products, structural materials, and standards. The coupled fast-thermal system was designed such that the measurements would provide insight into elemental data, specifically the competing effects between neutron capture and scattering. Comparing the measured to calculated reactivity values can then provide adjustment criteria to ultimately improve nuclear data for fast reactor designs. Due to the extremely small reactivity effects measured (typically less than 1 pcm) and the specific heterogeneity of the core, the tool chosen for this analysis was TRIPOLI-4. This code allows for high-fidelity three-dimensional geometric modeling, and the most recent, unreleased version is capable of exact perturbation theory.

  23. Using extreme phenotype sampling to identify the rare causal variants of quantitative traits in association studies.

    PubMed

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David

    2011-12-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or the traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, where multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.

  24. Automated storm water sampling on small watersheds

    USGS Publications Warehouse

    Harmel, R.D.; King, K.W.; Slade, R.M.

    2003-01-01

    Few guidelines are currently available to assist in designing appropriate automated storm water sampling strategies for small watersheds. Therefore, guidance is needed to develop strategies that achieve an appropriate balance between accurate characterization of storm water quality and loads and the limitations of budget, equipment, and personnel. In this article, we explore the important sampling strategy components (minimum flow threshold, sampling interval, and discrete versus composite sampling) and project-specific considerations (sampling goal, sampling and analysis resources, and watershed characteristics) based on personal experiences and pertinent field and analytical studies. These components and considerations are important in achieving the balance between sampling goals and limitations because they determine how and when samples are taken and the potential sampling error. Several general recommendations are made, including setting low minimum flow thresholds, using flow-interval or variable time-interval sampling, and using composite sampling to limit the number of samples collected. Guidelines are presented to aid in the selection of an appropriate sampling strategy based on a user's project-specific considerations. Our experiences suggest these recommendations should allow implementation of a successful sampling strategy for most small watershed sampling projects with common sampling goals.
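
    As a concrete example of the strategy components listed above, the sketch below implements a minimal flow-interval trigger: once discharge exceeds a minimum flow threshold, a composite subsample is drawn each time a fixed volume of flow has passed. Thresholds and units are assumed for illustration, not taken from the article.

```python
MIN_FLOW = 0.05          # m^3/s, hypothetical minimum flow threshold
FLOW_INTERVAL = 100.0    # m^3 of discharge between subsamples (assumed)

def flow_interval_sampler(discharge_series, dt_seconds=60.0):
    """Yield timestep indices at which a composite subsample is drawn."""
    accumulated = 0.0
    for i, q in enumerate(discharge_series):   # q in m^3/s
        if q < MIN_FLOW:
            continue                           # below threshold: no sampling
        accumulated += q * dt_seconds          # volume passed this timestep
        if accumulated >= FLOW_INTERVAL:
            accumulated -= FLOW_INTERVAL
            yield i

hydrograph = [0.02, 0.2, 0.9, 1.4, 0.8, 0.3, 0.04]  # made-up storm event
print(list(flow_interval_sampler(hydrograph)))
```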

  25. A new approach for the description of discharge extremes in small catchments

    NASA Astrophysics Data System (ADS)

    Pavia Santolamazza, Daniela; Lebrenz, Henning; Bárdossy, András

    2017-04-01

    Small catchment basins in Northwestern Switzerland, characterized by short concentration times, are frequently affected by floods. The peak and volume of these floods are commonly estimated by a frequency analysis of occurrence and described by a random variable, assuming a uniformly distributed probability and stationary input drivers (e.g., precipitation, temperature). For these small catchments, we attempt to describe and identify the underlying mechanisms and dynamics governing the occurrence of extremes by means of available high-temporal-resolution (10 min) observations, and to explore the possibilities of regionalizing hydrological parameters for short intervals. We therefore investigate new concepts for flood description, such as entropy as a measure of the disorder and dispersion of precipitation. First findings and conclusions of this ongoing research are presented.

  26. Sample Handling in Extreme Environments

    NASA Technical Reports Server (NTRS)

    Avellar, Louisa; Badescu, Mircea; Sherrit, Stewart; Bar-Cohen, Yoseph

    2013-01-01

    Harsh environments, such as that on Venus, preclude the use of existing equipment for functions that involve interaction with the environment. The operating limits of current high-temperature electronics are well below the actual temperature and pressure found on Venus (460 °C and 92 atm), so proposed lander configurations typically include a pressure vessel where the science instruments are kept at Earth-like temperature and pressure (25 °C and 1 atm). The purpose of this project was to develop and demonstrate a method for sample transfer from an external drill to internal science instruments for a lander on Venus. The initial concepts were string-driven and pneumatically driven systems; the latter was selected for its ability to deliver samples at very high speed. The pneumatic system was conceived to be driven by the pressure difference between the Venusian atmosphere and the inside of the lander. Pneumatic transfer of a small capsule was demonstrated, and velocity data were collected from the lab experiment. The sample transfer system was modeled using CAD software and prototyped using 3D printing. General structural and thermal analyses were performed to approximate the proposed system's mass and its effects on the temperature and pressure inside the lander. Additionally, a sampler breadboard for use on Titan was tested and functionality problems were resolved.

  27. Big assumptions for small samples in crop insurance

    Treesearch

    Ashley Elaine Hungerford; Barry Goodwin

    2014-01-01

    The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.

  28. The small-scale treatability study sample exemption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coalgate, J.

    1991-01-01

    In 1981, the Environmental Protection Agency (EPA) issued an interim final rule that conditionally exempted waste samples collected "solely for the purpose of monitoring or testing to determine their characteristics or composition" from RCRA Subtitle C hazardous waste regulations. This exemption (40 CFR 261.4(d)) applies to the transportation of samples between the generator and the testing laboratory, temporary storage of samples at the laboratory prior to and following testing, and storage at a laboratory for specific purposes such as an enforcement action. However, the exclusion did not include large-scale samples used in treatability studies or other testing at pilot plants or other experimental facilities. As a result of comments received by the EPA subsequent to the issuance of the interim final rule, the EPA reopened the comment period on the interim final rule on September 18, 1987, and specifically requested comments on whether or not the sample exclusion should be expanded to include waste samples used in small-scale treatability studies. Almost all respondents commented favorably on such a proposal. As a result, the EPA issued a final rule (53 FR 27290, July 19, 1988) conditionally exempting waste samples used in small-scale treatability studies from full regulation under Subtitle C of RCRA. The question of whether or not to extend the exclusion to larger scales, as proposed by the Hazardous Waste Treatment Council, was deferred until a later date. This Information Brief summarizes the requirements of the small-scale treatability exemption.

  29. Accelerator mass spectrometry of small biological samples.

    PubMed

    Salehpour, Mehran; Forsgard, Niklas; Possnert, Göran

    2008-12-01

    Accelerator mass spectrometry (AMS) is an ultra-sensitive technique for isotopic ratio measurements. In the biomedical field, AMS can be used to measure femtomolar concentrations of labeled drugs in body fluids, with direct applications in early drug development such as microdosing. Likewise, the regenerative properties of cells, which are of fundamental significance in stem-cell research, can be determined with an accuracy of a few years by AMS analysis of human DNA. However, AMS nominally requires about 1 mg of carbon per sample, which is not always available when dealing with specific body substances such as localized, organ-specific DNA samples. Consequently, it is of analytical interest to develop methods for the routine analysis of small samples in the range of a few tens of micrograms. We have used a 5-MV Pelletron tandem accelerator to study small biological samples using AMS. Different methods are presented and compared. A 12C-carrier sample preparation method is described which is potentially more sensitive and less susceptible to contamination than the standard procedures.

  30. Relative optical navigation around small bodies via Extreme Learning Machine

    NASA Astrophysics Data System (ADS)

    Law, Andrew M.

    To perform close-proximity operations in a low-gravity environment, relative and absolute positions are vital information for the maneuver; hence, navigation is inseparably integrated into space travel. The Extreme Learning Machine (ELM) is presented as an optical navigation method for operations around small celestial bodies. Optical navigation uses visual observation instruments, such as a camera, to acquire useful data and determine spacecraft position. The required input data for operation are merely a single image strip and a nadir image. ELM is a machine learning algorithm for single-layer feed-forward networks (SLFNs), a type of neural network (NN). The algorithm is built on the premise that input weights and biases can be randomly assigned, and it does not require back-propagation. The learned model is the set of output-layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis, the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation with a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
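
    The core ELM algorithm named above is short enough to sketch directly (a generic regression version, not the thesis's navigation pipeline): assign random input weights and biases, then solve for the output weights with a single least-squares step.

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_train(X, y, n_hidden=50):
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem standing in for the image-based position estimate.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)
W, b, beta = elm_train(X, y)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```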

  31. Autonomous Sample Acquisition for Planetary and Small Body Explorations

    NASA Technical Reports Server (NTRS)

    Ghavimi, Ali R.; Serricchio, Frederick; Dolgin, Ben; Hadaegh, Fred Y.

    2000-01-01

    Robotic drilling and autonomous sample acquisition are considered key technology requirements for future planetary and small-body exploration missions. Core sampling or subsurface drilling is envisioned to be performed from rovers or landers. These supporting platforms are inherently flexible and light, and can withstand only a limited amount of reaction force and torque. This, together with the unknown properties of the sampled materials, makes the sampling operation a tedious and quite challenging task. This paper highlights recent advancements in sample acquisition control system design and development for the in situ scientific exploration of planetary and small interplanetary missions.

  32. Thermoregulatory value of cracking-clay soil shelters for small vertebrates during extreme desert conditions.

    PubMed

    Waudby, Helen P; Petit, Sophie

    2017-05-01

    Deserts exhibit extreme climatic conditions. Small desert-dwelling vertebrates have physiological and behavioral adaptations to cope with these conditions, including the ability to seek shelter. We investigated the temperature (T) and relative humidity (RH) regulating properties of the soil cracks that characterize the extensive cracking-clay landscapes of arid Australia, and the extent of their use by 2 small marsupial species: fat-tailed and stripe-faced dunnarts (Sminthopsis crassicaudata and Sminthopsis macroura). We measured hourly (over 24-h periods) the T and RH of randomly selected soil cracks compared to outside conditions, during 2 summers and 2 winters. We tracked 17 dunnarts (8 Sminthopsis crassicaudata and 9 Sminthopsis macroura) to quantify their use of cracks. Cracks consistently moderated the microclimate, providing more stable conditions than available at non-crack points, which often displayed comparatively dramatic fluctuations in T and RH. Both dunnart species used crack shelters extensively. Cracks constitute important shelter for small animals during extreme conditions by providing a stable microclimate, typically cooler than outside conditions in summer and warmer in winter. Cracks likely play a fundamental sheltering role by sustaining the physiological needs of small mammal populations. Globally, cracking-clay areas are dominated by agricultural land uses, including livestock grazing. Management of these systems should focus not only on vegetation condition but also on soil integrity, to maintain shelter resources for ground-dwelling fauna. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  33. Technology Tips: Sample Too Small? Probably Not!

    ERIC Educational Resources Information Center

    Strayer, Jeremy F.

    2013-01-01

    Statistical studies are referenced in the news every day, so frequently that people are sometimes skeptical of reported results. Often, no matter how large a sample size researchers use in their studies, people believe that the sample size is too small to make broad generalizations. The tasks presented in this article use simulations of repeated…

  34. Microgravity Testing of a Surface Sampling System for Sample Return from Small Solar System Bodies

    NASA Technical Reports Server (NTRS)

    Franzen, M. A.; Preble, J.; Schoenoff, M.; Halona, K.; Long, T. E.; Park, T.; Sears, D. W. G.

    2004-01-01

    The return of samples from solar system bodies is becoming an essential element of solar system exploration. The recent National Research Council Solar System Exploration Decadal Survey identified six sample return missions as high-priority missions: South Pole-Aitken Basin Sample Return; Comet Surface Sample Return; Comet Surface Sample Return with samples from selected surface sites; Asteroid Lander/Rover/Sample Return; Comet Nucleus Sample Return with cold samples from depth; and Mars Sample Return [1]. The NASA Roadmap also includes sample return missions [2]. Sample collection methods that have been flown on robotic spacecraft to date return subgram quantities, but many scientific issues (such as bulk composition, particle size distributions, petrology, and chronology) require tens to hundreds of grams of sample. Many complex sample collection devices have been proposed; however, small robotic missions require simplicity. We present here the results of experiments done with a simple but innovative collection system for sample return from small solar system bodies.

  35. Pulsed Direct Current Electrospray: Enabling Systematic Analysis of Small Volume Sample by Boosting Sample Economy.

    PubMed

    Wei, Zhenwei; Xiong, Xingchuang; Guo, Chengan; Si, Xingyu; Zhao, Yaoyao; He, Muyi; Yang, Chengdui; Xu, Wei; Tang, Fei; Fang, Xiang; Zhang, Sichun; Zhang, Xinrong

    2015-11-17

    We developed pulsed direct-current electrospray ionization mass spectrometry (pulsed-dc-ESI-MS) for systematically profiling and determining components in small-volume samples. Pulsed-dc-ESI utilizes a constant high voltage to induce the generation of single-polarity pulsed electrospray remotely. This method significantly boosts sample economy, yielding several minutes of MS signal duration from a sample of merely picoliter volume. The elongated MS signal duration enables us to collect abundant MS(2) information on components of interest in a small-volume sample for systematic analysis. This method has been successfully applied to single-cell metabolomics analysis. We obtained 2-D profiles of metabolites (including exact mass and MS(2) data) from single plant and mammalian cells, covering 1034 components and 656 components for Allium cepa and HeLa cells, respectively. Further identification found 162 compounds and 28 different modification groups on 141 saccharides in a single Allium cepa cell, indicating that pulsed-dc-ESI is a powerful tool for the systematic analysis of small-volume samples.

  36. Integrating sphere based reflectance measurements for small-area semiconductor samples

    NASA Astrophysics Data System (ADS)

    Saylan, S.; Howells, C. T.; Dahlem, M. S.

    2018-05-01

    This article describes a method that enables reflectance spectroscopy of small semiconductor samples using an integrating sphere, without the use of additional optical elements. We employed an inexpensive sample holder to measure the reflectance of different samples through 2-, 3-, and 4.5-mm-diameter apertures and applied a mathematical formulation to remove from the measured spectra the bias caused by illumination of the holder. Using the proposed method, the reflectance of samples fabricated using expensive or rare materials and/or low-throughput processes can be measured. It can also be used to infer the internal quantum efficiency of small-area, research-level solar cells. Moreover, small samples that reflect light at large angles and produce scattering may also be measured reliably, by virtue of the integrating sphere's insensitivity to directionality.

  37. The Multi-wavelength Extreme Starburst Sample of Luminous Galaxies. I. Sample Characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laag, Edward; Croft, Steve; Canalizo, Gabriela

    2010-12-15

    This paper introduces the Multi-wavelength Extreme Starburst Sample (MESS), a new catalog of 138 star-forming galaxies (0.1 < z < 0.3) optically selected from the Sloan Digital Sky Survey using emission-line strength diagnostics to have a high absolute star formation rate (SFR; minimum 11 M_sun yr^-1, with median SFR ~ 61 M_sun yr^-1 based on a Kroupa initial mass function). The MESS was designed to complement samples of nearby star-forming galaxies such as the luminous infrared galaxies (LIRGs) and ultraviolet luminous galaxies (UVLGs). Observations using the Multi-band Imaging Photometer (24, 70, and 160 µm channels) on the Spitzer Space Telescope indicate that the MESS galaxies have IR luminosities similar to those of LIRGs, with an estimated median L_TIR ~ 3 × 10^11 L_sun. The selection criteria for the MESS objects suggest they may be less obscured than typical far-IR-selected galaxies with similar estimated SFRs. Twenty out of 70 of the MESS objects detected in the Galaxy Evolution Explorer FUV band also appear to be UVLGs. We estimate the SFRs based directly on luminosities to determine the agreement between these methods for the MESS. We compare these estimates to the emission-line strength technique, since the effective measurement of dust attenuation plays a central role in these methods. We apply an image-stacking technique to the Very Large Array FIRST survey radio data to retrieve 1.4 GHz luminosity information for 3/4 of the sample covered by FIRST, including sources too faint, and at too high a redshift, to be detected in FIRST. We also discuss the relationship between the MESS objects and samples selected through alternative criteria. Morphologies will be the subject of a forthcoming paper.

  38. Challenging Conventional Wisdom for Multivariate Statistical Models with Small Samples

    ERIC Educational Resources Information Center

    McNeish, Daniel

    2017-01-01

    In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…

  39. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 millipascal-seconds (mPa·s). The obtainable drop sizes range from 5 picoliters to a few nanoliters, with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be excellently suited to the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  40. Apparatus for Measuring Total Emissivity of Small, Low-Emissivity Samples

    NASA Technical Reports Server (NTRS)

    Tuttle, James; DiPirro, Michael J.

    2011-01-01

    An apparatus was developed for measuring the total emissivity of small, lightweight, low-emissivity samples at low temperatures. The entire apparatus fits inside a small laboratory cryostat. Sample installation and removal are relatively quick, allowing for faster testing. The small chamber surrounding the sample is lined with black-painted aluminum honeycomb, which simplifies data analysis. This results in the sample viewing a very high-emissivity surface on all sides, an effect which would normally require a much larger chamber volume. The sample and chamber temperatures are individually controlled using off-the-shelf PID (proportional-integral-derivative) controllers, allowing flexibility in the test conditions. The chamber can be controlled at a higher temperature than the sample, allowing a direct absorptivity measurement. The lightweight sample is suspended by its heater and thermometer leads from an isothermal bar external to the chamber. The wires run out of the chamber through small holes in its corners, and the wires do not contact the chamber itself. During a steady-state measurement, the thermometer and bar are individually controlled at the same temperature, so there is zero heat flow through the wires. Thus, all of the sample-temperature-control heater power is radiated to the chamber. Double-aluminized Kapton (DAK) emissivity was studied down to 10 K, which is about 25 K colder than any previously reported measurements. This verified a minimum in the emissivity at about 35 K and a rise as the temperature dropped to lower values.
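
    The steady-state logic described above implies a simple data-reduction formula: if all heater power P is radiated from sample area A at temperature T_s to a black chamber at T_c, then by the Stefan-Boltzmann law ε = P / (σ A (T_s⁴ − T_c⁴)). A sketch with illustrative numbers (not the actual apparatus values):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def total_emissivity(power_w, area_m2, t_sample_k, t_chamber_k):
    # Effective total emissivity from radiated heater power at steady state.
    return power_w / (SIGMA * area_m2 * (t_sample_k**4 - t_chamber_k**4))

# e.g., a 2 cm x 2 cm double-aluminized Kapton sample, both faces radiating
print(total_emissivity(power_w=1.2e-6, area_m2=2 * 0.02**2,
                       t_sample_k=40.0, t_chamber_k=10.0))
```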

  4. Saint Louis region : small sample travel survey

    DOT National Transportation Integrated Search

    1991-02-01

    This report summarizes results of the St. Louis Region Small Sample Travel Survey. A total of 1,446 households participated in the survey, which was designed to collect travel characteristics data from residents of the St. Louis metropolitan region. ...

  5. So Small, So Loud: Extremely High Sound Pressure Level from a Pygmy Aquatic Insect (Corixidae, Micronectinae)

    PubMed Central

    Sueur, Jérôme; Mackie, David; Windmill, James F. C.

    2011-01-01

    To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6–82.2) dB SPL rms re 2×10⁻⁵ Pa, with a peak at 99.2 (85.7–104.6) dB SPL re 2×10⁻⁵ Pa, estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure. PMID:21698252

  6. Rainfall extremes from TRMM data and the Metastatistical Extreme Value Distribution

    NASA Astrophysics Data System (ADS)

    Zorzetto, Enrico; Marani, Marco

    2017-04-01

    A reliable quantification of the probability of occurrence of weather extremes is essential for designing resilient water infrastructures and hazard mitigation measures. However, it is increasingly clear that the presence of inter-annual climatic fluctuations determines a substantial long-term variability in the frequency of occurrence of extreme events. This circumstance questions the foundations of traditional extreme value theory, which hinges on stationary Poisson processes or on asymptotic assumptions to derive the Generalized Extreme Value (GEV) distribution. We illustrate here, with application to daily rainfall, a new approach to extreme value analysis, the Metastatistical Extreme Value Distribution (MEVD). The MEVD relaxes the above assumptions and is based on the whole distribution of daily rainfall events, thus allowing optimal use of all available observations. Using a global dataset of rain gauge observations, we show that the MEVD significantly outperforms the Generalized Extreme Value distribution, particularly for long average recurrence intervals and when small samples are available. The latter property suggests the MEVD to be particularly suited for applications to satellite rainfall estimates, which only cover two decades, thus making extreme value estimation extremely challenging. Here we apply the MEVD to the TRMM TMPA 3B42 product, an 18-year dataset of remotely-sensed daily rainfall providing quasi-global coverage. Our analyses yield a global-scale mapping of daily rainfall extremes and of their distributional tail properties, bridging the existing large gaps in ground-based networks. Finally, we illustrate how our global-scale analysis can provide insight into how properties of local rainfall regimes affect tail estimation uncertainty when using the GEV or MEVD approach. We find a dependence of the estimation uncertainty, for both the GEV- and MEV-based approaches, on the average annual number and on the inter-annual variability of rainy days. In
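
    One common formulation of the MEVD compounds an ordinary distribution fitted to each year's daily rainfall, for example a Weibull, over the T observed years: F(x) = (1/T) Σⱼ Fⱼ(x)^nⱼ, where nⱼ is the number of wet days in year j. The sketch below follows that formulation with scipy and synthetic data; it is illustrative, not the authors' implementation:

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    def mev_cdf(x, yearly_wet_amounts):
        """Metastatistical EV cdf: average over years of each year's daily-rain
        cdf raised to that year's number of wet days. `yearly_wet_amounts` is
        a list of 1-D arrays of wet-day rainfall amounts, one array per year."""
        total = np.zeros_like(np.atleast_1d(x), dtype=float)
        for amounts in yearly_wet_amounts:
            c, _, scale = weibull_min.fit(amounts, floc=0.0)  # yearly Weibull fit
            total += weibull_min.cdf(x, c, loc=0.0, scale=scale) ** len(amounts)
        return total / len(yearly_wet_amounts)

    # Synthetic record: 20 years with ~120 wet days each.
    rng = np.random.default_rng(0)
    years = [rng.weibull(0.8, size=120) * 10.0 for _ in range(20)]
    print(mev_cdf(np.array([50.0, 100.0]), years))  # P(annual max <= 50), P(<= 100)
    ```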

  7. Small-scale studies of roasted ore waste reveal extreme ranges of stable mercury isotope signatures

    NASA Astrophysics Data System (ADS)

    Smith, Robin S.; Wiederhold, Jan G.; Jew, Adam D.; Brown, Gordon E.; Bourdon, Bernard; Kretzschmar, Ruben

    2014-07-01

    Active and closed Hg mines are significant sources of Hg contamination to the environment, mainly due to large volumes of mine waste material disposed of on-site. The application of Hg isotopes as a source tracer from such contaminated sites requires knowledge of the Hg isotope signatures of different materials potentially released to the environment. Previous work has shown that calcine, the waste residue of the on-site ore roasting process, can exhibit distinct Hg isotope signatures compared with the primary ore. Here, we report results from a detailed small-scale study of Hg isotope variations in calcine collected from the closed New Idria Hg mine, San Benito County, CA, USA. The calcine samples exhibited different internal layering features which were investigated using optical microscopy, micro X-ray fluorescence, micro X-ray absorption spectroscopy (μ-XAS), and stable Hg isotope analysis. Significant Fe, S, and Hg concentration gradients were found across the different internal layers. Isotopic analyses revealed an extreme variation with pronounced isotopic gradients across the internal layered features. Overall, δ²⁰²Hg (±0.10‰, 2 SD) describing mass-dependent fractionation (MDF) ranged from -5.96 to 14.49‰, which is by far the largest range of δ²⁰²Hg values reported for any environmental sample. In addition, Δ¹⁹⁹Hg (±0.06‰, 2 SD) describing mass-independent fractionation (MIF) ranged from -0.17 to 0.21‰. The μ-XAS analyses suggested that cinnabar and metacinnabar are the dominant Hg-bearing phases in the calcine. Our results demonstrate that the incomplete roasting of HgS ores in Hg mines can cause extreme mass-dependent Hg isotope fractionation at the scale of individual calcine pieces, with enrichments in both light and heavy Hg isotopes relative to the primary ore signatures. This finding has important implications for the application of Hg isotopes as potential source tracers for Hg released to the environment from closed Hg mines and

  8. A Geology Sampling System for Small Bodies

    NASA Technical Reports Server (NTRS)

    Naids, Adam J.; Hood, Anthony D.; Abell, Paul; Graff, Trevor; Buffington, Jesse

    2016-01-01

    Human exploration of microgravity bodies is being investigated as a precursor to a Mars surface mission. Asteroids, comets, dwarf planets, and the moons of Mars all fall into this microgravity category and some are being discussed as potential mission targets. Obtaining geological samples for return to Earth will be a major objective for any mission to a small body. Currently, the knowledge base for geology sampling in microgravity is in its infancy. Humans interacting with non-engineered surfaces in a microgravity environment pose unique challenges. In preparation for such missions a team at the NASA Johnson Space Center has been working to gain experience on how to safely obtain numerous sample types in such an environment. This paper describes the type of samples the science community is interested in, highlights notable prototype work, and discusses an integrated geology sampling solution.

  9. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alipour, N.; Safari, H.; Innes, D. E.

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
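
    A minimal sketch of the classification step as described, pairing Zernike moments with an SVM; it assumes the mahotas and scikit-learn libraries and synthetic stand-in slices, and is not the authors' pipeline:

    ```python
    import numpy as np
    import mahotas
    from sklearn.svm import SVC

    def zernike_features(slice_img, radius=32, degree=8):
        """Zernike moments of one 2-D space-time slice."""
        return mahotas.features.zernike_moments(slice_img, radius, degree=degree)

    # Stand-in training data: 150 event and 700 non-event slices, as in the text.
    rng = np.random.default_rng(1)
    events = [rng.random((64, 64)) for _ in range(150)]
    non_events = [rng.random((64, 64)) for _ in range(700)]

    X = np.array([zernike_features(s) for s in events + non_events])
    y = np.array([1] * len(events) + [0] * len(non_events))

    clf = SVC(kernel="rbf").fit(X, y)  # event / non-event classifier
    print(clf.predict(X[:5]))          # new slices would be labeled the same way
    ```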

  10. Demonstration/Validation of Incremental Sampling at Two Diverse Military Ranges and Development of an Incremental Sampling Tool

    DTIC Science & Technology

    2010-06-01

    Multi-Increment Sampling (MIS)?
    • Technique of combining many increments of soil from a number of points within an exposure area
    • Developed by Enviro Stat (trademarked) ... Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous ... into a set of decision (exposure) units
    • One or several discrete or small-scale composite soil samples collected to represent each decision unit

  11. Accurate EPR radiosensitivity calibration using small sample masses

    NASA Astrophysics Data System (ADS)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  12. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
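
    As an illustration of the tail-bound construction, Bernstein's inequality for i.i.d. observations with |X − μ| ≤ b and Var(X) ≤ σ² states P(|X̄ − μ| ≥ t) ≤ 2 exp(−nt² / (2σ² + 2bt/3)), and can be inverted for t at level α. This sketches the general idea, not necessarily the paper's exact construction:

    ```python
    import math

    def bernstein_ci(sample_mean, n, var_bound, b, alpha=0.05):
        """Two-sided CI for the mean of n i.i.d. observations with
        |X - mu| <= b and Var(X) <= var_bound, from Bernstein's inequality.
        Coverage is guaranteed at every sample size (no CLT approximation)."""
        L = math.log(2.0 / alpha)
        # Positive root of n*t**2 - (2/3)*b*L*t - 2*var_bound*L = 0.
        t = ((2.0 / 3.0) * b * L
             + math.sqrt((4.0 / 9.0) * (b * L) ** 2 + 8.0 * n * var_bound * L)) / (2.0 * n)
        return sample_mean - t, sample_mean + t

    # Example: 25 observations in [0, 1], so b <= 1 and variance <= 1/4.
    print(bernstein_ci(0.3, n=25, var_bound=0.25, b=1.0))  # wide, as the text notes
    ```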

  13. Sample Return from Small Solar System Bodies

    NASA Astrophysics Data System (ADS)

    Orgel, L.; A'Hearn, M.; Bada, J.; Baross, J.; Chapman, C.; Drake, M.; Kerridge, J.; Race, M.; Sogin, M.; Squyres, S.

    With plans for multiple sample return missions in the next decade, NASA requested guidance from the National Research Council's Space Studies Board (SSB) on how to treat samples returned from solar system bodies such as planetary satellites, asteroids, and comets. A special Task Group assessed the potential for a living entity to be included in returned samples from various bodies, as well as the potential for large-scale effects if such an entity were inadvertently introduced into the Earth's biosphere. The Group also assessed differences among solar system bodies, identified investigations that could reduce uncertainty about the bodies, and considered the risks of returned samples compared to the natural influx of material to the Earth in the form of interplanetary dust particles, meteorites, and other small impactors. The final report (NRC, 1998) provides a decision-making framework for future missions and makes recommendations on how to handle samples from different planetary satellites and primitive solar system bodies.

  14. Do small changes in rotation affect measurements of lower extremity limb alignment?

    PubMed

    Jamali, Amir A; Meehan, John P; Moroski, Nathan M; Anderson, Matthew J; Lamba, Ramit; Parise, Carol

    2017-05-22

    The alignment of the lower extremity has important implications in the development of knee arthritis. The effect of incremental rotations of the limb on common parameters of alignment has not been studied. The purpose of the study was to (1) determine the standardized neutral position measurements of alignment and (2) determine the effect of rotation on commonly used measurements of alignment. Eighty-seven full-length CT angiography studies (49 males and 38 females, average age 66 years) were included. Three-dimensional models were created using a rendering software program and placed on a virtual plane. An image of the extremity was obtained. Thirty scans were randomly selected, and those models were rotated in 3° intervals around the longitudinal axis and additional images were obtained. In the neutral position, the mechanical lateral distal femoral articular angle (mLDFA) was 85.6 ± 2.3°, the medial proximal tibial angle (MPTA) was 86.1 ± 2.8°, and the mechanical tibiofemoral angle (mTFA) was -0.7 ± 3.1°. Females had a more valgus alignment with a mTFA of 0.5 ± 2.9°, while males had a more varus alignment with a mTFA of -1.7 ± 2.9°. The anatomic tibiofemoral angle (aTFA) was 4.8 ± 2.6°, the anatomic lateral distal femoral angle (aLDFA) measured 80.2 ± 2.2°, and the anatomical-mechanical angle (AMA) was 5.4 ± 0.7°. The prevalence of constitutional varus was 18%. Rotation of the scans led to statistically significant differences relative to the 0° measurement for all parameters. These effects may be small, and their clinical importance is unknown. This study provides new information on standardized measures of lower extremity alignment and the relationship between discrete axial rotations of the entire lower extremity and these parameters.

  15. Self-Reported Extremely Adverse Life Events and Longitudinal Changes in Five-Factor Model Personality Traits in an Urban Sample

    PubMed Central

    Löckenhoff, Corinna E.; Terracciano, Antonio; Patriciu, Nicholas S.; Eaton, William W.; Costa, Paul T.

    2009-01-01

    This study examined longitudinal personality change in response to extremely adverse life events in a sample (N = 458) drawn from the East Baltimore Epidemiologic Catchment Area study. Five-factor model personality traits were assessed twice over an average interval of 8 years. Twenty-five percent of the participants reported an extremely horrifying or frightening event within 2 years before the second personality assessment. Relative to the rest of the sample, they showed increases in neuroticism, decreases in the compliance facet of agreeableness, and decreases in openness to values. Baseline personality was unrelated to future events, but among participants who reported extreme events, lower extraversion and/or conscientiousness at baseline as well as longitudinal increases in neuroticism predicted lower mental health at follow-up. PMID:19230009

  16. Monitoring Cellular Events in Living Mast Cells Stimulated with an Extremely Small Amount of Fluid on a Microchip

    NASA Astrophysics Data System (ADS)

    Munaka, Tatsuya; Abe, Hirohisa; Kanai, Masaki; Sakamoto, Takashi; Nakanishi, Hiroaki; Yamaoka, Tetsuji; Shoji, Shuichi; Murakami, Akira

    2006-07-01

    We successfully developed a measurement system for real-time analysis of cellular function using a newly designed microchip. The microchip was equipped with a micro cell-incubation chamber (240 nl) in which cells could be stimulated with a very small amount of stimulus fluid (as little as 24 nl). Using the microchip system, cultivation of mast cells was successfully carried out, and cellular events after stimulation with an extremely small amount of fluid were monitored on the chip. This system could be applicable to various types of cellular analysis, including real-time monitoring of cellular responses to stimulation.

  17. Method to make accurate concentration and isotopic measurements for small gas samples

    NASA Astrophysics Data System (ADS)

    Palmer, M. R.; Wahl, E.; Cunningham, K. L.

    2013-12-01

    Carbon isotopic ratio measurements of CO2 and CH4 provide valuable insight into carbon cycle processes. However, many of these studies, like soil gas, soil flux, and water head space experiments, provide very small gas sample volumes, too small for direct measurement by current constant-flow cavity ring-down spectroscopy (CRDS) isotopic analyzers. Previously, we addressed this issue by developing a sample introduction module which enabled the isotopic ratio measurement of 40 ml samples or smaller. However, the system, called the Small Sample Isotope Module (SSIM), does dilute the sample with inert carrier gas during delivery, which causes a ~5% reduction in concentration. The isotopic ratio measurements are not affected by this small dilution, but researchers are naturally interested in accurate concentration measurements. We present the accuracy and precision of a new method of using this delivery module which we call 'double injection.' Two portions of the 40 ml sample (20 ml each) are introduced to the analyzer: the first injection flushes out the diluting gas, and the second injection is measured. The accuracy of this new method is demonstrated by comparing the concentration and isotopic ratio measurements for a gas sampled directly and that same gas measured through the SSIM. The data show that the CO2 concentration measurements were the same within instrument precision. The isotopic ratio precision (1σ) of repeated measurements was 0.16 permil for CO2 and 1.15 permil for CH4 at ambient concentrations. This new method provides a significant enhancement in the information provided by small samples.

  18. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    PubMed Central

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested: the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004

  19. Small black holes and near-extremal CFTs

    DOE PAGES

    Benjamin, Nathan; Dyer, Ethan; Fitzpatrick, A. Liam; ...

    2016-08-02

    Pure theories of AdS3 quantum gravity are conjectured to be dual to CFTs with sparse spectra of light primary operators. The sparsest possible spectrum consistent with modular invariance includes only black hole states above the vacuum. Witten conjectured the existence of a family of extremal CFTs, which realize this spectrum for all admissible values of the central charge. We consider the quantum corrections to the classical spectrum, and propose a specific modification of Witten’s conjecture which takes into account the existence of “small” black hole states. These have zero classical horizon area, with a calculable entropy attributed solely to loop effects. Lastly, our conjecture passes various consistency checks, especially when generalized to include theories with supersymmetry. In theories with N = 2 supersymmetry, this “near-extremal CFT” proposal precisely evades the no-go results of Gaberdiel et al.

  1. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    PubMed

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
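
    The pseudoprofile-based bootstrap mentioned above can be sketched as follows (an illustration of the idea, not the authors' 2-phase algorithm): draw one subject's paired concentrations at each time point to assemble a full pseudoprofile, compute both AUCs by the trapezoidal rule, and repeat:

    ```python
    import numpy as np

    def ppb_ratio(times, pairs_by_time, n_boot=2000, seed=0):
        """Pseudoprofile-based bootstrap of the tissue-to-plasma AUC ratio.
        `pairs_by_time[k]` holds the (tissue, plasma) pairs of the subjects
        sampled at time point k (one pair per subject, sparse design)."""
        rng = np.random.default_rng(seed)
        ratios = np.empty(n_boot)
        for i in range(n_boot):
            # One pseudoprofile: a randomly drawn subject at each time point.
            drawn = [pairs[rng.integers(len(pairs))] for pairs in pairs_by_time]
            tissue = np.array([d[0] for d in drawn])
            plasma = np.array([d[1] for d in drawn])
            ratios[i] = np.trapz(tissue, times) / np.trapz(plasma, times)
        return ratios.mean(), np.percentile(ratios, [2.5, 97.5])

    # Hypothetical sparse design: 4 subjects per time point, 5 time points.
    rng = np.random.default_rng(1)
    times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    pairs_by_time = [[(2.0 * c * rng.lognormal(0.0, 0.2), c)
                      for c in rng.lognormal(1.0, 0.3, 4)] for _ in times]
    print(ppb_ratio(times, pairs_by_time))  # ratio estimate with 95% interval
    ```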

  2. Sequencing small genomic targets with high efficiency and extreme accuracy

    PubMed Central

    Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.

    2015-01-01

    The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million-fold enrichment of genomic regions of interest. In conjunction with error-correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638

  3. Extreme Mean and Its Applications

    NASA Technical Reports Server (NTRS)

    Swaroop, R.; Brownlow, J. D.

    1979-01-01

    Extreme value statistics obtained from normally distributed data are considered. An extreme mean is defined as the mean of the p-th probability truncated normal distribution. An unbiased estimate of this extreme mean and its large sample distribution are derived. The distribution of this estimate, even for very large samples, is found to be nonnormal. Further, as the sample size increases, the variance of the unbiased estimate converges to the Cramer-Rao lower bound. The computer program used to obtain the density and distribution functions of the standardized unbiased estimate, and the confidence intervals of the extreme mean for any data, is included for ready application. An example is included to demonstrate the usefulness of the extreme mean in applications.
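
    For reference, the mean of a normal distribution truncated at its p-th quantile has a standard closed form (stated here as an aid; the paper's definition may differ in detail):

    ```latex
    % Mean of X ~ N(mu, sigma^2) conditional on exceeding its p-th quantile.
    \[
      \mu_p \;=\; \mathbb{E}\left[\,X \mid X > \mu + \sigma z_p\,\right]
            \;=\; \mu + \sigma\,\frac{\varphi(z_p)}{1-p},
      \qquad z_p = \Phi^{-1}(p),
    \]
    where $\varphi$ and $\Phi$ are the standard normal density and
    distribution function.
    ```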

  4. Statistical issues in reporting quality data: small samples and casemix variation.

    PubMed

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
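
    The 'shrinkage' estimation mentioned above can be illustrated with a simple empirical-Bayes sketch under an assumed normal-normal model (not necessarily the estimator the author intends): each unit's score is pulled toward the grand mean in proportion to its unreliability.

    ```python
    import numpy as np

    def shrink_unit_means(unit_means, unit_ns, within_var, between_var):
        """Empirical-Bayes (normal-normal) shrinkage of unit-level quality
        scores. The reliability r = between / (between + within/n) is also
        the shrinkage weight, so small-sample units are pulled hardest
        toward the grand mean."""
        unit_means = np.asarray(unit_means, dtype=float)
        unit_ns = np.asarray(unit_ns, dtype=float)
        grand = np.average(unit_means, weights=unit_ns)
        r = between_var / (between_var + within_var / unit_ns)
        return r * unit_means + (1.0 - r) * grand, r

    # Three units: one tiny (n=5) and two larger ones.
    means, reliability = shrink_unit_means([0.90, 0.70, 0.78], [5, 200, 120],
                                           within_var=0.04, between_var=0.002)
    print(means, reliability)  # the n=5 unit moves most toward the grand mean
    ```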

  5. Design and Manufacturing of Extremely Low Mass Flight Systems

    NASA Technical Reports Server (NTRS)

    Johnson, Michael R.

    2002-01-01

    Extremely small flight systems pose some unusual design and manufacturing challenges. The small components that make up such a system generally must be built to extremely tight tolerances to maintain the functionality of the assembled item. Additionally, the total mass of the system is extremely sensitive to what would be considered small perturbations in a larger flight system. The MUSES-C mission, designed, built, and operated by Japan, has a small rover provided by NASA that falls into this small flight system category. This NASA-provided rover is used as a case study of an extremely small flight system design. The issues that were encountered with the rover portion of the MUSES-C program are discussed and conclusions about the recommended mass margins at different stages of a small flight system project are presented.

  6. Conversion of Small Algal Oil Sample to JP-8

    DTIC Science & Technology

    2012-01-01

    Cracking of algal oil to SPK: hydroprocessing was performed in a UOP small-scale lab hydroprocessing plant with a down-flow trickle-bed configuration, capable of retaining 25 cc of catalyst bed. The catalytic deoxygenation stage of the ... content which, combined with the sample's acidity, is a challenge to reactor metallurgy. Nonetheless, an attempt was made to convert this sample to

  7. Investigating Test Equating Methods in Small Samples through Various Factors

    ERIC Educational Resources Information Center

    Asiret, Semih; Sünbül, Seçil Ömür

    2016-01-01

    In this study, equating methods for the random groups design with small samples were compared across factors such as sample size, the difference in difficulty between forms, and the guessing parameter. Moreover, which method gives better results under which conditions was also investigated. In this study, 5,000 dichotomous simulated data…

  8. Transition from Forward Smoldering to Flaming in Small Polyurethane Foam Samples

    NASA Technical Reports Server (NTRS)

    Bar-Ilan, A.; Putzeys, O.; Rein, G.; Fernandez-Pello, A. C.

    2004-01-01

    Experimental observations are presented of the effect of the flow velocity and oxygen concentration, and of a thermal radiant flux, on the transition from smoldering to flaming in forward smoldering of small samples of polyurethane foam with a gas/solid interface. The experiments are part of a project studying the transition from smolder to flaming under conditions encountered in spacecraft facilities, i.e., microgravity and low-velocity flows with variable oxygen concentration. Because the microgravity experiments are planned for the International Space Station, the foam samples had to be limited in size for safety and launch mass reasons. The feasible sample size is too small for smolder to self-propagate because of heat losses to the surrounding environment. Thus, the smolder propagation and the transition to flaming had to be assisted by reducing the heat losses to the surroundings and increasing the oxygen concentration. The experiments are conducted with small parallelepiped samples placed vertically in a wind tunnel. Three of the sample's lateral sides are maintained at elevated temperature and the fourth side is exposed to an upward flow and to a radiant flux. It is found that decreasing the flow velocity, increasing its oxygen concentration, and/or increasing the radiant flux enhances the transition to flaming and reduces the delay time to transition. Limiting external ambient conditions for the transition to flaming are reported for the present experimental set-up. The results show that smolder propagation and the transition to flaming can occur in relatively small fuel samples if the external conditions are appropriate. The results also indicate that transition to flaming occurs in the char left behind by the smolder reaction, and it has the characteristics of a gas-phase ignition induced by the smolder reaction, which acts as the source of both gaseous fuel and heat.

  9. [New population curves in Spanish extremely preterm neonates].

    PubMed

    García-Muñoz Rodrigo, F; García-Alix Pérez, A; Figueras Aloy, J; Saavedra Santana, P

    2014-08-01

    Most anthropometric reference data for extremely preterm infants used in Spain are outdated and based on non-Spanish populations, or are derived from small hospital-based samples that failed to include neonates of borderline viability. To develop gender-specific, population-based curves for birth weight, length, and head circumference in extremely preterm Caucasian infants, using a large contemporary sample of Spanish singletons. Anthropometric data from neonates ≤ 28 weeks of gestational age were collected between January 2002 and December 2010 using the Spanish database SEN1500. Gestational age was estimated according to obstetric data (early pregnancy ultrasound). The data were analyzed with the SPSS 20 package, and centile tables were created for males and females using the Cole and Green LMS method. This study presents the first population-based growth curves for extremely preterm infants, including those of borderline viability, in Spain. A sexual dimorphism is evident for all of the studied parameters, starting at early gestation. These new gender-specific and population-based data could be useful for the improvement of growth assessments of extremely preterm infants in our country, for the development of epidemiological studies, for the evaluation of temporal trends, and for clinical or public health interventions seeking to optimize fetal growth. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier España. All rights reserved.
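
    The Cole and Green LMS method used here summarizes each age- and sex-specific distribution by its skewness (L), median (M), and coefficient of variation (S); a minimal sketch of that standard transformation (the L, M, S values shown are hypothetical; the published curves supply the real ones):

    ```python
    import math

    def lms_zscore(y, L, M, S):
        """Cole & Green LMS transformation: z-score of a measurement y."""
        if abs(L) < 1e-12:                       # limiting case L -> 0
            return math.log(y / M) / S
        return ((y / M) ** L - 1.0) / (L * S)

    def lms_centile_value(z, L, M, S):
        """Inverse transform: measurement at a given z (z = 1.645 -> 95th centile)."""
        if abs(L) < 1e-12:
            return M * math.exp(S * z)
        return M * (1.0 + L * S * z) ** (1.0 / L)

    # Hypothetical L, M, S for birth weight (g) at one gestational age.
    print(lms_zscore(620.0, L=0.3, M=700.0, S=0.18))         # ~ -0.66
    print(lms_centile_value(1.645, L=0.3, M=700.0, S=0.18))  # ~ 930 g
    ```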

  10. A scanning tunneling microscope capable of imaging specified micron-scale small samples.

    PubMed

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has one end fixed to the sliding piece and its opposite end holding the tip pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip made from 0.1 mm thin Pt/Ir wire to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic-resolution images and high-quality tunneling current spectra for that specified tiny flake are obtained in ambient conditions with high repeatability over one month, showing the high long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show an outstanding tip-mount-related resonant frequency (low frequency), which further confirms the stability of the STM structure.

  11. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
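
    For orientation, the procedure under scrutiny is the percentile bootstrap of the indirect effect a·b; a minimal Python sketch with synthetic data (the article itself supplies R syntax):

    ```python
    import numpy as np

    def indirect_effect(x, m, y):
        """a*b: a is the slope of m ~ x; b is the slope of m in y ~ x + m."""
        a = np.polyfit(x, m, 1)[0]
        coef, *_ = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y,
                                   rcond=None)
        return a * coef[2]

    def percentile_boot_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for the indirect effect."""
        rng = np.random.default_rng(seed)
        n, est = len(x), np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, size=n)   # resample cases with replacement
            est[i] = indirect_effect(x[idx], m[idx], y[idx])
        return np.percentile(est, [100 * alpha / 2, 100 * (1 - alpha / 2)])

    # Synthetic small sample (n = 40) with true a = 0.5, b = 0.4.
    rng = np.random.default_rng(1)
    x = rng.normal(size=40)
    m = 0.5 * x + rng.normal(size=40)
    y = 0.4 * m + 0.2 * x + rng.normal(size=40)
    print(percentile_boot_ci(x, m, y))  # a CI excluding 0 is read as mediation
    ```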

  12. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  13. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods; the split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots, and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
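
    Standard empirical quantile mapping, the QMα baseline that the new variants extend, can be sketched in a few lines; constant extrapolation beyond the calibrated range is exactly where the unstable tail behavior noted above arises. An illustration with synthetic data:

    ```python
    import numpy as np

    def empirical_qm(model_hist, obs_hist, model_future, n_q=99):
        """Empirical quantile mapping: pass future model values through the
        calibration-period quantile correspondence between model and obs.
        np.interp holds values constant outside the calibrated range."""
        q = np.linspace(1, n_q, n_q)
        mq = np.percentile(model_hist, q)   # model quantiles (calibration)
        oq = np.percentile(obs_hist, q)     # observed quantiles (calibration)
        return np.interp(model_future, mq, oq)

    # Synthetic daily precipitation (mm): a wet-biased model vs observations.
    rng = np.random.default_rng(0)
    obs = rng.gamma(0.6, 8.0, 5000)
    mod = rng.gamma(0.6, 10.0, 5000)
    print(empirical_qm(mod, obs, np.array([40.0, 120.0])))  # corrected extremes
    ```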

  14. Respondent-driven sampling and the recruitment of people with small injecting networks.

    PubMed

    Paquette, Dana; Bryant, Joanne; de Wit, John

    2012-05-01

    Respondent-driven sampling (RDS) is a form of chain-referral sampling, similar to snowball sampling, which was developed to reach hidden populations such as people who inject drugs (PWID). RDS is said to reach members of a hidden population that may not be accessible through other sampling methods. However, less attention has been paid to whether there are segments of the population that are more likely to be missed by RDS. This study examined the ability of RDS to capture people with small injecting networks. A study of PWID, using RDS, was conducted in 2009 in Sydney, Australia. The size of participants' injecting networks was examined by recruitment chain and wave. Participants' injecting network characteristics were compared to those of participants from a separate pharmacy-based study. A logistic regression analysis was conducted to examine the characteristics independently associated with having small injecting networks, using the combined RDS and pharmacy-based samples. In comparison with the pharmacy-recruited participants, RDS participants were almost 80% less likely to have small injecting networks, after adjusting for other variables. RDS participants were also more likely to have their injecting networks form a larger proportion of their social networks, and to have acquaintances as part of their injecting networks. Compared to those with larger injecting networks, individuals with small injecting networks were equally likely to engage in receptive sharing of injecting equipment, but less likely to have had contact with prevention services. These findings suggest that those with small injecting networks are an important group to recruit, and that RDS is less likely to capture these individuals.

  15. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    PubMed

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often is impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, mean ± 2 SD of native and Box-Cox-transformed values, the 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation, but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
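
    One of the compared estimators, mean ± 2 SD on Box-Cox-transformed values, can be sketched as follows (the use of scipy and the synthetic data are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.stats import boxcox
    from scipy.special import inv_boxcox

    def boxcox_reference_interval(values, k=2.0):
        """Parametric reference interval: mean +/- k*SD computed on
        Box-Cox-transformed values, back-transformed to original units."""
        transformed, lam = boxcox(np.asarray(values, dtype=float))
        center, spread = transformed.mean(), transformed.std(ddof=1)
        return (inv_boxcox(center - k * spread, lam),
                inv_boxcox(center + k * spread, lam))

    # Synthetic skewed creatinine values from 27 dogs (umol/L).
    rng = np.random.default_rng(0)
    creatinine = rng.lognormal(mean=4.6, sigma=0.25, size=27)
    print(boxcox_reference_interval(creatinine))
    ```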

  16. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is the calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. So an important aspect of the paper is showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes a factor of approximately 10 smaller than were available with our previous algorithm.
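
    The core of an OLS-based global sensitivity screen can be illustrated generically (this is not the authors' calibrated procedure): regress model output from a small sample of perturbed inputs on the standardized inputs, then rank factors by coefficient magnitude.

    ```python
    import numpy as np

    def ols_sensitivity_ranking(X, y):
        """Rank input factors (e.g. rate-constant perturbations) by the
        magnitude of standardized OLS coefficients fitted to output y.
        X: (n_samples, n_factors) design of perturbed inputs."""
        Xs = (X - X.mean(0)) / X.std(0)             # standardize the factors
        A = np.column_stack([np.ones(len(y)), Xs])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sens = np.abs(coef[1:])
        return np.argsort(sens)[::-1], sens         # most sensitive first

    # Toy model: 30 factors, 3 of them influential, sample size 100.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (100, 30))
    y = 3.0 * X[:, 4] - 2.0 * X[:, 17] + 0.8 * X[:, 9] + rng.normal(0, 0.1, 100)
    print(ols_sensitivity_ranking(X, y)[0][:5])     # indices 4, 17, 9 lead
    ```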

  17. Nano-Scale Sample Acquisition Systems for Small Class Exploration Spacecraft

    NASA Astrophysics Data System (ADS)

    Paulsen, G.

    2015-12-01

    The paradigm for space exploration is changing. Large and expensive missions are very rare and the space community is turning to smaller, lighter, and less expensive missions that could still perform great exploration. These missions are also within reach of commercial companies such as the Google Lunar X Prize teams that develop small-scale lunar missions. Recent commercial endeavors such as Planet Labs, Inc. and Skybox Imaging, Inc. show that there are new benefits and business models associated with the miniaturization of space hardware. The Nano-Scale Sample Acquisition System includes the NanoDrill for capture of small rock cores and PlanetVac for capture of surface regolith. These two systems are part of the ongoing effort to develop "Micro Sampling" systems for deployment by small spacecraft with limited payload capacities. The ideal applications include prospecting missions to the Moon and asteroids. The MicroDrill is a rotary-percussive coring drill that captures cores 7 mm in diameter and up to 2 cm long. The drill weighs less than 1 kg and can capture a core from a 40 MPa strength rock within a few minutes, with less than 10 Watt power and less than 10 Newton of preload. The PlanetVac is a pneumatic regolith acquisition system that can capture a surface sample in a touch-and-go maneuver. These sampling systems were integrated within the footpads of a commercial quadcopter for testing. As such, they could also be used by geologists on Earth to explore difficult-to-reach locations.

  18. Global Weirding? - Using Very Large Ensembles and Extreme Value Theory to assess Changes in Extreme Weather Events Today

    NASA Astrophysics Data System (ADS)

    Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.

    2014-12-01

    A shift in the distribution of socially-relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while there are many process-based arguments suggesting also a change in the shape of these distributions, attribution studies demonstrating this have not currently been undertaken. Here we use a very large initial condition ensemble of ~40,000 members simulating the European winter 2013/2014 using the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: 1. current climate conditions, and 2. a counterfactual scenario of a "world that might have been" without anthropogenic forcing. Specifically focusing on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, …), and geographical region. We find that the location parameter changes for most variables but, depending on the region and variables, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that in a small sample size might otherwise be thought of as an outlier. The ~40,000 member ensemble is simulated using 12 different SST patterns (1 'observed', and 11 best guesses of SSTs with no anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. While strong teleconnection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been
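
    A minimal sketch of the distributional comparison described, fitting a GEV to ensemble extremes under each scenario and comparing the fitted location, scale, and shape (synthetic data; note that scipy's shape parameter c is the negative of the usual ξ):

    ```python
    import numpy as np
    from scipy.stats import genextreme

    def fit_gev(maxima):
        """Fit a GEV to block maxima; report the conventional parameters."""
        c, loc, scale = genextreme.fit(maxima)
        return {"location": loc, "scale": scale, "shape_xi": -c}

    # Synthetic ensemble winter maxima under the two scenarios.
    rng = np.random.default_rng(0)
    actual = genextreme.rvs(-0.1, loc=20.0, scale=5.0, size=4000, random_state=rng)
    natural = genextreme.rvs(-0.1, loc=18.5, scale=4.5, size=4000, random_state=rng)
    print(fit_gev(actual))
    print(fit_gev(natural))  # compare parameter shifts between the scenarios
    ```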

  19. A rational decision rule with extreme events.

    PubMed

    Basili, Marcello

    2006-12-01

    Risks induced by extreme events are characterized by small or ambiguous probabilities, catastrophic losses, or windfall gains. Through a new functional that mimics the restricted Bayes-Hurwicz criterion within the Choquet expected utility approach, it is possible to represent the decision-maker's behavior when facing both risky (large and reliable probability) and extreme (small or ambiguous probability) events. A new formalization of the precautionary principle (PP) is shown, and a new functional, which encompasses both extreme outcomes and the expectation of all possible results for every act, is proposed.

  1. Extremely preterm infants who are small for gestational age have a high risk of early hypophosphatemia and hypokalemia.

    PubMed

    Boubred, F; Herlenius, E; Bartocci, M; Jonsson, B; Vanpée, M

    2015-11-01

    Electrolyte balances have not been sufficiently evaluated in extremely preterm infants receiving early parenteral nutrition. We investigated the risk of early hypophosphatemia and hypokalemia in extremely preterm infants born small for gestational age (SGA) who received nutrition as currently recommended. This prospective, observational cohort study included all consecutive extremely preterm infants born at 24-27 weeks who received high amino acid and lipid perfusion from birth. We evaluated the electrolyte levels of SGA infants and infants born appropriate for gestational age (AGA) during the first five days of life. The 12 SGA infants had lower plasma potassium levels from day one compared to the 36 AGA infants and were more likely to have hypokalemia (58% vs 17%, p = 0.001) and hypophosphatemia (40% vs 9%, p < 0.01) during the five-day observation period. After adjusting for perinatal factors, SGA remained significantly associated with hypophosphatemia (odds ratio 1.39, confidence interval 1.07-1.81, p = 0.01). Extremely preterm infants born SGA who were managed with currently recommended early parenteral nutrition had a high risk of early hypokalemia and hypophosphatemia. Potassium and phosphorus intakes should be set at sufficient levels from birth onwards, especially in SGA infants. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  2. Maintaining Equivalent Cut Scores for Small Sample Test Forms

    ERIC Educational Resources Information Center

    Dwyer, Andrew C.

    2016-01-01

    This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…

  3. A Small Diameter Rosette for Sampling Ice Covered Waters

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.; Smethie, W. M.; Perry, R. S.; Schlosser, P.; Friedrich, R.

    2011-12-01

    A gas-tight, small-diameter, lightweight rosette, supporting equipment, and an effective operational protocol have been developed for aircraft-supported sampling of sea water across the Lincoln Sea. The system incorporates commercial off-the-shelf CTD electronics (an SBE19+ sensor package and SBE33 deck unit) to provide real-time measurement data at the surface. We designed and developed modular water sample units and custom electronics to decode the bottle-firing commands and close the sample bottles. For a typical station, we land a ski-equipped de Havilland Twin Otter (DHC-6) aircraft on a suitable piece of sea ice, drill a 12" diameter hole through the ice next to the cargo door, and set up a tent to provide a reasonable working environment over the hole. A small winch with 0.1" diameter single-conductor cable is mounted in the aircraft by the cargo door and a tripod supports a sheave above the hole. The CTD module is connected to the end of the wire and the water sampling modules are stacked on top as the system is lowered. For most stations, three sample modules are used to provide 12 four-liter sample bottles. Data collected during the downcast are used to formulate the sampling plan, which is executed on the upcast. The system is powered by a 3,700 Watt, 120 VAC gasoline generator. After collection, the sample modules are stored in passively temperature-stabilized ice chests during the flight back to the logistics facility at Alert, where a broad range of samples are drawn and stored for future analysis. The transport mechanism has a good track record of maintaining water samples within about two degrees of the original collection temperature, which minimizes out-gassing. The system has been successfully deployed during a field program each spring starting in 2004, along a transect between the north end of Ellesmere Island (Alert, Nunavut) and the North Pole. During the eight field programs we have taken 48 stations with twelve bottles at most stations (eight at

  4. Multibody Simulation Software Testbed for Small-Body Exploration and Sampling

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Mandic, Milan

    2011-01-01

    G-TAG is a software tool for the multibody simulation of a spacecraft with a robotic arm and a sampling mechanism, which performs a touch-and-go (TAG) maneuver for sampling from the surface of a small celestial body. G-TAG utilizes G-DYN, a multi-body simulation engine described in the previous article, and interfaces to controllers, estimators, and environmental forces that affect the spacecraft. G-TAG can easily be adapted for the analysis of the mission stress cases to support the design of a TAG system, as well as for comprehensive Monte Carlo simulations to analyze and evaluate a particular TAG system design. Any future small-body mission will benefit from using G-TAG, which has already been extensively used in Comet Odyssey and Galahad Asteroid New Frontiers proposals.

  5. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis

    PubMed Central

    Lin, Johnny; Bentler, Peter M.

    2012-01-01

Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic. PMID:23144511

  6. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    PubMed

    Lin, Johnny; Bentler, Peter M

    2012-01-01

Goodness of fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square; but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and the Satorra-Bentler mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
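
    The core of the adjustment can be illustrated with a short sketch. This is a generic skewness-matching construction, not the authors' exact formula: a chi-square with d degrees of freedom has skewness sqrt(8/d), so matching the skewness of the statistic (here estimated from simulated replicates, an assumed setup) fixes an adjusted df, and the statistic is rescaled so its mean matches that df.

    ```python
    import numpy as np
    from scipy import stats

    def skewness_adjusted_test(t_obs, t_replicates):
        """Skewness-matching adjustment of a goodness-of-fit statistic.

        Generic sketch: match the empirical mean and skewness of the
        statistic to a scaled chi-square reference distribution.
        """
        m = np.mean(t_replicates)
        g = stats.skew(t_replicates)
        # A chi-square with d df has skewness sqrt(8/d), so matching the
        # observed skewness gives the adjusted df d* = 8 / g^2.
        d_star = 8.0 / g**2
        # Rescale the statistic so its mean equals the adjusted df.
        t_scaled = t_obs * d_star / m
        return t_scaled, d_star, stats.chi2.sf(t_scaled, d_star)
    ```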

  7. A General Linear Method for Equating with Small Samples

    ERIC Educational Resources Information Center

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
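
    For context, linear observed-score equating has a simple closed form; the sketch below (with hypothetical variable names, and not the article's general linear method itself) shows how fixing the slope trades an estimated parameter for an assumption, which is the motivation for stronger-assumption methods in small samples.

    ```python
    import numpy as np

    def linear_equate(x, scores_x, scores_y, slope_fixed=None):
        """Linear observed-score equating of form X onto form Y:
        y* = mu_y + A * (x - mu_x). With small samples, fixing the
        slope A (A = 1 gives mean equating) removes one estimated
        quantity at the price of a stronger assumption."""
        mu_x, mu_y = np.mean(scores_x), np.mean(scores_y)
        if slope_fixed is not None:
            A = slope_fixed
        else:
            A = np.std(scores_y, ddof=1) / np.std(scores_x, ddof=1)
        return mu_y + A * (x - mu_x)

    # Mean equating (slope fixed at 1) for a raw score of 25:
    # linear_equate(25, form_x_scores, form_y_scores, slope_fixed=1.0)
    ```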

  8. The Utility of IRT in Small-Sample Testing Applications.

    ERIC Educational Resources Information Center

    Sireci, Stephen G.

    The utility of modified item response theory (IRT) models in small sample testing applications was studied. The modified IRT models were modifications of the one- and two-parameter logistic models. One-, two-, and three-parameter models were also studied. Test data were from 4 years of a national certification examination for persons desiring…

  9. MARS: bringing the automation of small-molecule bioanalytical sample preparations to a new frontier.

    PubMed

    Li, Ming; Chou, Judy; Jing, Jing; Xu, Hui; Costa, Aldo; Caputo, Robin; Mikkilineni, Rajesh; Flannelly-King, Shane; Rohde, Ellen; Gan, Lawrence; Klunk, Lewis; Yang, Liyu

    2012-06-01

In recent years, there has been a growing interest in automating small-molecule bioanalytical sample preparations specifically using the Hamilton MicroLab® STAR liquid-handling platform. In the most extensive work reported thus far, multiple small-molecule sample preparation assay types (protein precipitation extraction, SPE and liquid-liquid extraction) have been integrated into a suite that is composed of graphical user interfaces and Hamilton scripts. Using that suite, bioanalytical scientists have been able to automate various sample preparation methods to a great extent. However, there are still areas that could benefit from further automation, specifically, the full integration of analytical standard and QC sample preparation with study sample extraction in one continuous run, real-time 2D barcode scanning on the Hamilton deck and direct Laboratory Information Management System database connectivity. We developed a new small-molecule sample-preparation automation system that improves in all of the aforementioned areas. The improved system presented herein further streamlines the bioanalytical workflow, simplifies batch run design, reduces analyst intervention and eliminates sample-handling error.

  10. Optimization of techniques for multiple platform testing in small, precious samples such as human chorionic villus sampling.

    PubMed

    Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John

    2016-11-01

Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, are limited, but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 versus 800 ng of RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite-conversion and genome-wide methylation profiling. CVS cells were propagated and continue to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period. © 2016 John Wiley & Sons, Ltd.

  11. Regularised extreme learning machine with misclassification cost and rejection cost for gene expression data classification.

    PubMed

    Lu, Huijuan; Wei, Shasha; Zhou, Zili; Miao, Yanzi; Lu, Yi

    2015-01-01

The main purpose of traditional classification algorithms in bioinformatics applications is to achieve better classification accuracy. However, these algorithms cannot meet the requirement of minimising the average misclassification cost. In this paper, a new algorithm of cost-sensitive regularised extreme learning machine (CS-RELM) was proposed, using probability estimation and misclassification costs to reconstruct the classification results. By improving the classification accuracy of small-sample classes that carry higher misclassification costs, the new CS-RELM can minimise the overall classification cost. A 'rejection cost' was integrated into the CS-RELM algorithm to further reduce the average misclassification cost. Using the Colon Tumour dataset and the SRBCT (Small Round Blue Cell Tumour) dataset, CS-RELM was compared with other cost-sensitive algorithms such as the extreme learning machine (ELM), cost-sensitive extreme learning machine, regularised extreme learning machine and cost-sensitive support vector machine (SVM). The results of the experiments show that CS-RELM with an embedded rejection cost could reduce the average misclassification cost and make more credible classification decisions than the alternatives.
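
    A minimal sketch of the two ingredients named in the abstract, under assumed details: a regularised ELM trained by a ridge-regression solve on a random hidden layer, and a decision rule that predicts the class with the lowest expected misclassification cost or rejects when even that cost exceeds the rejection cost. The probability estimates derived from raw ELM outputs below are a crude stand-in for the paper's probability estimation step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def train_relm(X, T, n_hidden=100, C=1.0):
        """Regularised ELM: a random hidden layer followed by a
        ridge-regression solve for the output weights.
        T is a one-hot target matrix of shape (n_samples, n_classes)."""
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)
        beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
        return W, b, beta

    def predict_with_rejection(X, W, b, beta, cost_matrix, reject_cost):
        """cost_matrix[i, j] is the cost of predicting class j when the
        true class is i. Predict the cheapest class in expectation; emit
        -1 (rejection) when that expected cost exceeds the rejection cost."""
        H = np.tanh(X @ W + b)
        p = np.clip(H @ beta, 1e-6, None)   # crude probability estimates
        p /= p.sum(axis=1, keepdims=True)   # (stand-in for the paper's step)
        exp_cost = p @ cost_matrix          # expected cost of each prediction
        pred = exp_cost.argmin(axis=1)
        pred[exp_cost.min(axis=1) > reject_cost] = -1
        return pred
    ```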

  12. Radiocarbon measurements of small gaseous samples at CologneAMS

    NASA Astrophysics Data System (ADS)

    Stolz, A.; Dewald, A.; Altenkirch, R.; Herb, S.; Heinze, S.; Schiffer, M.; Feuerstein, C.; Müller-Gatermann, C.; Wotte, A.; Rethemeyer, J.; Dunai, T.

    2017-09-01

A second SO-110 B (Arnold et al., 2010) ion source was installed at the 6 MV CologneAMS for the measurement of gaseous samples. For the gas supply, a dedicated device from Ionplus AG was connected to the ion source. Special effort was devoted to determining optimized operation parameters for the ion source, which give a high carbon current output and a high 14C⁻ yield. The latter is essential in cases when only small samples are available. Additionally, a modified immersion lens and modified target pieces were tested and the target position was optimized.

  13. The Bragg Reflection Polarimeter On the Gravity and Extreme Magnetism Small Explorer Mission

    NASA Astrophysics Data System (ADS)

    Allured, Ryan; Griffiths, S.; Daly, R.; Prieskorn, Z.; Marlowe, H.; Kaaret, P.; GEMS Team

    2011-09-01

The strong gravity associated with black holes warps the spacetime outside of the event horizon, and it is predicted that this will leave characteristic signatures on the polarization of X-ray emission originating in the accretion disk. The Gravity and Extreme Magnetism Small Explorer (GEMS) mission will be the first observatory with the capability to make polarization measurements with enough sensitivity to quantitatively test this prediction. Students at the University of Iowa are currently working on the development of the Bragg Reflection Polarimeter (BRP), a soft X-ray polarimeter sensitive at 500 eV, which is the student experiment on GEMS. The BRP will complement the main experiment by making a polarization measurement from accreting black holes below the main energy band (2-10 keV). This measurement will constrain the inclination of the accretion disk and tighten measurements of black hole spin.

  14. Biota dose assessment of small mammals sampled near uranium mines in northern Arizona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jannik, T.; Minter, K.; Kuhne, W.

In 2015, the U.S. Geological Survey (USGS) collected approximately 50 small mammal carcasses from Northern Arizona uranium mines and other background locations. Based on the highest gross alpha results, 11 small mammal samples were selected for radioisotopic analyses. None of the background samples had significant gross alpha results. The 11 small mammals were identified relative to the three 'indicator' mines located south of Fredonia, AZ on the Kanab Plateau (Kanab North Mine, Pinenut Mine, and Arizona 1 Mine) (Figure 1-1) and are operated by Energy Fuels Resources Inc. (EFRI). EFRI annually reports soil analysis for uranium and radium-226 using Arizona Department of Environmental Quality (ADEQ)-approved Standard Operating Procedures for Soil Sampling (EFRI 2016a, 2016b, 2017). In combination with the USGS small mammal radioisotopic tissue analyses, a biota dose assessment was completed by Savannah River National Laboratory (SRNL) using the RESidual RADioactivity-BIOTA (RESRAD-BIOTA, V. 1.8) dose assessment tool provided by the Argonne National Laboratory (ANL 2017).

  15. Improving Ramsey spectroscopy in the extreme-ultraviolet region with a random-sampling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eramo, R.; Bellini, M.; European Laboratory for Non-linear Spectroscopy

    2011-04-15

    Ramsey-like techniques, based on the coherent excitation of a sample by delayed and phase-correlated pulses, are promising tools for high-precision spectroscopic tests of QED in the extreme-ultraviolet (xuv) spectral region, but currently suffer experimental limitations related to long acquisition times and critical stability issues. Here we propose a random subsampling approach to Ramsey spectroscopy that, by allowing experimentalists to reach a given spectral resolution goal in a fraction of the usual acquisition time, leads to substantial improvements in high-resolution spectroscopy and may open the way to a widespread application of Ramsey-like techniques to precision measurements in the xuv spectral region.
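
    The idea can be demonstrated with a toy calculation, assuming the essential point is that spectral resolution is set by the total delay span rather than by the number of sampled delays: a least-squares periodogram evaluated on a small random subset of pulse delays still localizes the fringe frequency. All numbers below are arbitrary illustrative units.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    f0 = 7.3                                             # fringe frequency to recover
    delays = np.sort(rng.uniform(0.0, 50.0, size=40))    # random subset of delays
    signal = 0.5 * (1 + np.cos(2 * np.pi * f0 * delays)) + 0.05 * rng.normal(size=40)

    # Least-squares periodogram: project the sparse, nonuniform data
    # onto trial frequencies (a nonuniform discrete Fourier transform).
    freqs = np.linspace(6.5, 8.0, 3001)
    power = [np.abs(np.sum((signal - signal.mean()) * np.exp(-2j * np.pi * f * delays)))
             for f in freqs]
    print("recovered frequency:", freqs[int(np.argmax(power))])
    ```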

  16. Parameter Estimation with Small Sample Size: A Higher-Order IRT Model Approach

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Hong, Yuan

    2010-01-01

    Sample size ranks as one of the most important factors that affect the item calibration task. However, due to practical concerns (e.g., item exposure) items are typically calibrated with much smaller samples than what is desired. To address the need for a more flexible framework that can be used in small sample item calibration, this article…

  17. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    PubMed

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  18. Entropy of hydrological systems under small samples: Uncertainty and variability

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Wang, Yuankun; Wu, Jichun; Singh, Vijay P.; Zeng, Xiankui; Wang, Lachun; Chen, Yuanfang; Chen, Xi; Zhang, Liyuan; Gu, Shenghua

    2016-01-01

Entropy theory has been increasingly applied in hydrology in both descriptive and inferential ways. However, little attention has been given to the small-sample condition widespread in hydrological practice, where hydrological measurements are either limited or even nonexistent. Accordingly, entropy estimated under this condition may incur considerable bias. In this study, the small-sample condition is considered and two innovative entropy estimators, the Chao-Shen (CS) estimator and the James-Stein-type shrinkage (JSS) estimator, are introduced. Simulation tests conducted with distributions common in hydrology identify the JSS estimator as the best performer. Then, multi-scale moving entropy-based hydrological analyses (MM-EHA) are applied to indicate the changing patterns of uncertainty of streamflow data collected from the Yangtze River and the Yellow River, China. For further investigation into the intrinsic properties of entropy applied in hydrological uncertainty analyses, correlations of entropy and other statistics at different time-scales are also calculated, showing connections between the concepts of uncertainty and variability.
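
    Both estimators have simple closed forms; the sketch below follows the formulation popularized by Hausser and Strimmer (entropies in nats). The Chao-Shen estimator combines a coverage adjustment with a Horvitz-Thompson correction, and the James-Stein-type estimator shrinks the empirical frequencies toward the uniform distribution before plugging in.

    ```python
    import numpy as np

    def chao_shen_entropy(counts):
        """Chao-Shen coverage-adjusted entropy estimator (nats)."""
        counts = np.asarray(counts, float)
        n = counts.sum()
        p_hat = counts / n
        f1 = np.sum(counts == 1)               # number of singletons
        C = 1.0 - f1 / n                       # estimated sample coverage
        p_adj = C * p_hat
        inclusion = 1.0 - (1.0 - p_adj) ** n   # Horvitz-Thompson weights
        mask = p_adj > 0
        return -np.sum(p_adj[mask] * np.log(p_adj[mask]) / inclusion[mask])

    def js_shrinkage_entropy(counts):
        """James-Stein-type shrinkage estimator: shrink the empirical
        frequencies toward the uniform target, then plug in."""
        counts = np.asarray(counts, float)
        n, K = counts.sum(), len(counts)
        p_hat = counts / n
        target = np.full(K, 1.0 / K)
        lam = (1.0 - np.sum(p_hat ** 2)) / ((n - 1) * np.sum((target - p_hat) ** 2))
        lam = np.clip(lam, 0.0, 1.0)
        p = lam * target + (1.0 - lam) * p_hat
        mask = p > 0
        return -np.sum(p[mask] * np.log(p[mask]))
    ```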

  19. Small sample estimation of the reliability function for technical products

    NASA Astrophysics Data System (ADS)

    Lyamets, L. L.; Yakimenko, I. V.; Kanishchev, O. A.; Bliznyuk, O. A.

    2017-12-01

It is demonstrated that, in the absence of large statistical samples obtained as a result of testing complex technical products for failure, statistical estimation of the reliability function of initial elements can be made by the moments method. A formal description of the moments method is given and its advantages in the analysis of small censored samples are discussed. A modified algorithm is proposed for the implementation of the moments method with the use of only the moments at which the failures of initial elements occur.

  20. Susceptibility to Mortality in Weather Extremes: Effect Modification by Personal and Small Area Characteristics In a Multi-City Case-Only Analysis

    PubMed Central

    Zanobetti, Antonella; O’Neill, Marie S.; Gronlund, Carina J.; Schwartz, Joel D

    2015-01-01

Background Extremes of temperature have been associated with short-term increases in daily mortality. We identified subpopulations with increased susceptibility to dying during temperature extremes, based on personal demographics, small-area characteristics and preexisting medical conditions. Methods We examined Medicare participants in 135 U.S. cities and identified preexisting conditions based on hospitalization records prior to their deaths, from 1985-2006. Personal characteristics were obtained from the Medicare records, and area characteristics were assigned based on zip code of residence. We conducted a case-only analysis of over 11 million deaths, and evaluated modification of the risk of dying associated with extremely hot days and extremely cold days, continuous temperatures, and water-vapor pressure. Modifiers included preexisting conditions, personal characteristics, zip-code-level population characteristics, and land-cover characteristics. For each effect modifier, a city-specific logistic regression model was fitted and then an overall national estimate was calculated using meta-analysis. Results People with certain preexisting conditions were more susceptible to extreme heat, with an additional 6% (95% confidence interval = 4%-8%) increase in the risk of dying on an extremely hot day in subjects with previous admission for atrial fibrillation, an additional 8% (4%-12%) in subjects with Alzheimer disease, and an additional 6% (3%-9%) in subjects with dementia. Zip-code-level and personal characteristics were also associated with increased susceptibility to temperature. Conclusions We identified several subgroups of the population who are particularly susceptible to temperature extremes, including persons with atrial fibrillation. PMID:24045717
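
    The pipeline described, city-specific case-only logistic regressions pooled by meta-analysis, can be sketched as follows. The variable names (extreme_hot_day, atrial_fib, age, sex) are hypothetical placeholders, and the fixed-effect inverse-variance pooling stands in for whatever meta-analytic model the authors used.

    ```python
    import numpy as np
    import statsmodels.formula.api as smf

    def case_only_city(deaths):
        """Case-only model for one city: among decedents only, regress the
        indicator of dying on an extremely hot day on a preexisting
        condition. The condition's coefficient estimates the effect
        modification (interaction) on the multiplicative scale."""
        fit = smf.logit("extreme_hot_day ~ atrial_fib + age + sex",
                        data=deaths).fit(disp=0)
        return fit.params["atrial_fib"], fit.bse["atrial_fib"]

    def fixed_effect_meta(betas, ses):
        """Inverse-variance (fixed-effect) pooling of the city-specific
        estimates into one national estimate."""
        betas, w = np.asarray(betas), 1.0 / np.asarray(ses) ** 2
        return np.sum(w * betas) / np.sum(w), np.sqrt(1.0 / np.sum(w))
    ```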

  1. Cerebral Small Vessel Disease Burden Is Associated with Motor Performance of Lower and Upper Extremities in Community-Dwelling Populations

    PubMed Central

    Su, Ning; Zhai, Fei-Fei; Zhou, Li-Xin; Ni, Jun; Yao, Ming; Li, Ming-Li; Jin, Zheng-Yu; Gong, Gao-Lang; Zhang, Shu-Yang; Cui, Li-Ying; Tian, Feng; Zhu, Yi-Cheng

    2017-01-01

    Objective: To investigate the correlation between cerebral small vessel disease (CSVD) burden and motor performance of lower and upper extremities in community-dwelling populations. Methods: We performed a cross-sectional analysis on 770 participants enrolled in the Shunyi study, which is a population-based cohort study. CSVD burden, including white matter hyperintensities (WMH), lacunes, cerebral microbleeds (CMBs), perivascular spaces (PVS), and brain atrophy were measured using 3T magnetic resonance imaging. All participants underwent quantitative motor assessment of lower and upper extremities, which included 3-m walking speed, 5-repeat chair-stand time, 10-repeat pronation–supination time, and 10-repeat finger-tapping time. Data on demographic characteristics, vascular risk factors, and cognitive functions were collected. General linear model analysis was performed to identify potential correlations between motor performance measures and imaging markers of CSVD after controlling for confounding factors. Results: For motor performance of the lower extremities, WMH was negatively associated with gait speed (standardized β = -0.092, p = 0.022) and positively associated with chair-stand time (standardized β = 0.153, p < 0.0001, surviving FDR correction). For motor performance of the upper extremities, pronation–supination time was positively associated with WMH (standardized β = 0.155, p < 0.0001, surviving FDR correction) and negatively with brain parenchymal fraction (BPF; standardized β = -0.125, p = 0.011, surviving FDR correction). Only BPF was found to be negatively associated with finger-tapping time (standardized β = -0.123, p = 0.012). However, lacunes, CMBs, or PVS were not found to be associated with motor performance of lower or upper extremities in multivariable analysis. Conclusion: Our findings suggest that cerebral microstructural changes related to CSVD may affect motor performance of both lower and upper extremities. WMH and brain

  2. Cerebral Small Vessel Disease Burden Is Associated with Motor Performance of Lower and Upper Extremities in Community-Dwelling Populations.

    PubMed

    Su, Ning; Zhai, Fei-Fei; Zhou, Li-Xin; Ni, Jun; Yao, Ming; Li, Ming-Li; Jin, Zheng-Yu; Gong, Gao-Lang; Zhang, Shu-Yang; Cui, Li-Ying; Tian, Feng; Zhu, Yi-Cheng

    2017-01-01

    Objective: To investigate the correlation between cerebral small vessel disease (CSVD) burden and motor performance of lower and upper extremities in community-dwelling populations. Methods: We performed a cross-sectional analysis on 770 participants enrolled in the Shunyi study, which is a population-based cohort study. CSVD burden, including white matter hyperintensities (WMH), lacunes, cerebral microbleeds (CMBs), perivascular spaces (PVS), and brain atrophy were measured using 3T magnetic resonance imaging. All participants underwent quantitative motor assessment of lower and upper extremities, which included 3-m walking speed, 5-repeat chair-stand time, 10-repeat pronation-supination time, and 10-repeat finger-tapping time. Data on demographic characteristics, vascular risk factors, and cognitive functions were collected. General linear model analysis was performed to identify potential correlations between motor performance measures and imaging markers of CSVD after controlling for confounding factors. Results: For motor performance of the lower extremities, WMH was negatively associated with gait speed (standardized β = -0.092, p = 0.022) and positively associated with chair-stand time (standardized β = 0.153, p < 0.0001, surviving FDR correction). For motor performance of the upper extremities, pronation-supination time was positively associated with WMH (standardized β = 0.155, p < 0.0001, surviving FDR correction) and negatively with brain parenchymal fraction (BPF; standardized β = -0.125, p = 0.011, surviving FDR correction). Only BPF was found to be negatively associated with finger-tapping time (standardized β = -0.123, p = 0.012). However, lacunes, CMBs, or PVS were not found to be associated with motor performance of lower or upper extremities in multivariable analysis. Conclusion: Our findings suggest that cerebral microstructural changes related to CSVD may affect motor performance of both lower and upper extremities. WMH and brain atrophy

  3. The spatial distribution of threats to plant species with extremely small populations

    NASA Astrophysics Data System (ADS)

    Wang, Chunjing; Zhang, Jing; Wan, Jizhong; Qu, Hong; Mu, Xianyun; Zhang, Zhixiang

    2017-03-01

    Many biological conservationists take actions to conserve plant species with extremely small populations (PSESP) in China; however, there have been few studies on the spatial distribution of threats to PSESP. Hence, we selected distribution data of PSESP and made a map of the spatial distribution of threats to PSESP in China. First, we used the weight assignment method to evaluate the threat risk to PSESP at both country and county scales. Second, we used a geographic information system to map the spatial distribution of threats to PSESP, and explored the threat factors based on linear regression analysis. Finally, we suggested some effective conservation options. We found that the PSESP with high values of protection, such as the plants with high scientific research values and ornamental plants, were threatened by over-exploitation and utilization, habitat fragmentation, and a small sized wild population in broad-leaved forests and bush fallows. We also identified some risk hotspots for PSESP in China. Regions with low elevation should be given priority for ex- and in-situ conservation. Moreover, climate change should be considered for conservation of PSESP. To avoid intensive over-exploitation or utilization and habitat fragmentation, in-situ conservation should be practiced in regions with high temperatures and low temperature seasonality, particularly in the high risk hotspots for PSESP that we proposed. Ex-situ conservation should be applied in these same regions, and over-exploitation and utilization of natural resources should be prevented. It is our goal to apply the concept of PSESP to the global scale in the future.

  4. Influence of various water quality sampling strategies on load estimates for small streams

    USGS Publications Warehouse

    Robertson, Dale M.; Roerish, Eric D.

    1999-01-01

Extensive streamflow and water quality data from eight small streams were systematically subsampled to represent various water-quality sampling strategies. The subsampled data were then used to determine the accuracy and precision of annual load estimates generated by means of a regression approach (typically used for big rivers) and to determine the most effective sampling strategy for small streams. Estimation of annual loads by regression with daily average streamflow was imprecise regardless of the sampling strategy used; even for the most effective strategy, median absolute errors were about 30% relative to loads estimated with an integration method and all available data. The most effective sampling strategy depends on the length of the study. For 1-year studies, fixed-period monthly sampling supplemented by storm chasing was the most effective strategy. For studies of 2 or more years, fixed-period semimonthly sampling resulted in not only the least biased but also the most precise loads. Additional high-flow samples, typically collected to help define the relation between high streamflow and high loads, result in imprecise, overestimated annual loads if these samples are consistently collected early in high-flow events.
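
    A minimal sketch of the regression (rating-curve) load estimator being evaluated, assuming the usual log-log form with Duan's smearing correction for back-transformation bias; the actual study may have used additional explanatory terms such as season or time.

    ```python
    import numpy as np

    def annual_load_rating_curve(q_sampled, c_sampled, q_daily):
        """Rating-curve load estimator sketch. q_sampled and c_sampled are
        flows and concentrations on the sampled days; q_daily is the full
        year of daily average streamflow. Regress ln(load) on ln(flow),
        predict daily loads, and correct the back-transformation bias with
        Duan's (1983) smearing estimator."""
        x = np.log(q_sampled)
        y = np.log(q_sampled * c_sampled)   # instantaneous load on sampled days
        b, a = np.polyfit(x, y, 1)          # slope, intercept
        resid = y - (a + b * x)
        smear = np.mean(np.exp(resid))      # Duan smearing factor
        daily_loads = smear * np.exp(a + b * np.log(q_daily))
        return daily_loads.sum()            # annual load estimate
    ```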

  5. Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models

    ERIC Educational Resources Information Center

    Price, Larry R.

    2012-01-01

    The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…

  6. Protection of obstetric dimensions in a small-bodied human sample.

    PubMed

    Kurki, Helen K

    2007-08-01

In human females, the bony pelvis must find a balance between being small (narrow) for efficient bipedal locomotion, and being large to accommodate a relatively large newborn. It has been shown that within a given population, taller/larger-bodied women have larger pelvic canals. This study investigates whether in a population where small body size is the norm, pelvic geometry (size and shape), on average, shows accommodation to protect the obstetric canal. Osteometric data were collected from the pelves, femora, and clavicles (body size indicators) of adult skeletons representing a range of adult body size. Samples include Holocene Later Stone Age (LSA) foragers from southern Africa (n = 28 females, 31 males), Portuguese from the Coimbra-identified skeletal collection (CISC) (n = 40 females, 40 males) and European-Americans from the Hamann-Todd osteological collection (H-T) (n = 40 females, 40 males). Patterns of sexual dimorphism are similar in the samples. Univariate and multivariate analyses of raw and Mosimann shape-variables indicate that compared to the CISC and H-T females, the LSA females have relatively large midplane and outlet canal planes (particularly posterior and A-P lengths). The LSA males also follow this pattern, although with absolutely smaller pelves in multivariate space. The CISC females, who have equally small stature, but larger body mass, do not show the same type of pelvic canal size and shape accommodation. The results suggest that adaptive allometric modeling in at least some small-bodied populations protects the obstetric canal. These findings support the use of population-specific attributes in the clinical evaluation of obstetric risk. © 2007 Wiley-Liss, Inc.

  7. TableSim--A program for analysis of small-sample categorical data.

    Treesearch

    David J. Rugg

    2003-01-01

Documents a computer program for calculating correct P-values of 1-way and 2-way tables when sample sizes are small. The program is written in Fortran 90; the executable code runs in 32-bit Microsoft command-line environments.
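
    TableSim itself is a Fortran program; for illustration, the analogous exact computation for a small 2x2 table is available off the shelf in Python. This is Fisher's exact test, not necessarily the algorithm TableSim implements, and the counts are made up.

    ```python
    from scipy.stats import fisher_exact

    # Exact P-value for a 2x2 table with small cell counts, where the
    # usual chi-square approximation would be unreliable.
    table = [[3, 1],
             [1, 6]]
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(odds_ratio, p_value)
    ```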

  8. Association of athlete's foot with cellulitis of the lower extremities: diagnostic value of bacterial cultures of ipsilateral interdigital space samples.

    PubMed

    Semel, J D; Goldin, H

    1996-11-01

    We performed a study to determine how often patients with cellulitis of the lower extremities in the absence of trauma, peripheral vascular disease, or chronic open ulcers have ipsilateral interdigital athlete's foot and whether cultures of samples from the involved interdigital spaces would yield potentially pathogenic bacteria. Athlete's foot was present in 20 (83%) of 24 episodes of cellulitis that were studied. Cultures of samples from interdigital spaces yielded Beta-hemolytic streptococci in 17 (85%) of 20 cases, Staphylococcus aureus in 9 (45%) of 20 cases, and gram-negative rods in 7 (35%) of 20 cases. Only Beta-hemolytic streptococci were recovered significantly more often from patients than from a group of controls with athlete's foot who did not have cellulitis (P < .01). Athlete's foot may be a common predisposing condition for cellulitis of the lower extremities. In comparison with attempts at microbiological diagnosis such as aspiration and/or biopsy of the area of cellulitis, cultures of samples from the interdigital spaces combined with serial determinations of antistreptolysin titers may offer a simpler noninvasive method of microbiological diagnosis.

  9. Sampling small mammals in southeastern forests: the importance of trapping in trees

    Treesearch

    Susan C. Loeb; Gregg L. Chapman; Theodore R. Ridley

    2001-01-01

    Because estimates of small mammal species richness and diversity are strongly influenced by sampling methodology, 2 or more trap types are often used in studies of small mammal communities. However, in most cases, all traps are placed at ground level. In contrast, we used Sherman live traps placed at 1.5 m in trees in addition to Sherman live traps and Mosby box traps...

  10. Some Small Sample Results for Maximum Likelihood Estimation in Multidimensional Scaling.

    ERIC Educational Resources Information Center

    Ramsay, J. O.

    1980-01-01

    Some aspects of the small sample behavior of maximum likelihood estimates in multidimensional scaling are investigated with Monte Carlo techniques. In particular, the chi square test for dimensionality is examined and a correction for bias is proposed and evaluated. (Author/JKS)

  11. SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.

    PubMed

    Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman

    2017-03-01

    We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).

  12. On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.

    PubMed

    Westgate, Philip M; Burchett, Woodrow W

    2017-03-15

The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
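
    A sketch of this kind of analysis on toy data: statsmodels exposes a bias-reduced (Mancl-DeRouen-type) empirical covariance for GEE through cov_type="bias_reduced", which targets the small-sample inflation of test size discussed here; the paper's correlation-selection step is omitted.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Toy long-format data: 10 subjects, 4 repeated Gaussian measurements each.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(10), 4),
        "x": rng.normal(size=40),
    })
    df["y"] = 0.5 * df["x"] + rng.normal(size=40)

    # Gaussian GEE with an exchangeable working correlation and the
    # bias-corrected empirical ("sandwich") covariance.
    model = sm.GEE.from_formula("y ~ x", groups="subject", data=df,
                                cov_struct=sm.cov_struct.Exchangeable(),
                                family=sm.families.Gaussian())
    result = model.fit(cov_type="bias_reduced")
    print(result.summary())
    ```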

  13. Impact of sampling techniques on measured stormwater quality data for small streams

    USGS Publications Warehouse

    Harmel, R.D.; Slade, R.M.; Haney, R.L.

    2010-01-01

Science-based sampling methodologies are needed to enhance water quality characterization for setting appropriate water quality standards, developing Total Maximum Daily Loads, and managing nonpoint source pollution. Storm event sampling, which is vital for adequate assessment of water quality in small (wadeable) streams, is typically conducted by manual grab or integrated sampling or with an automated sampler. Although it is typically assumed that samples from a single point adequately represent mean cross-sectional concentrations, especially for dissolved constituents, this assumption of well-mixed conditions has received limited evaluation. Similarly, the impact of temporal (within-storm) concentration variability is rarely considered. Therefore, this study evaluated differences in stormwater quality measured in small streams with several common sampling techniques, which in essence evaluated within-channel and within-storm concentration variability. Constituent concentrations from manual grab samples and from integrated samples were compared for 31 events, then concentrations were also compared for seven events with automated sample collection. Comparison of sampling techniques indicated varying degrees of concentration variability within channel cross sections for both dissolved and particulate constituents, which is contrary to common assumptions of substantial variability in particulate concentrations and of minimal variability in dissolved concentrations. Results also indicated the potential for substantial within-storm (temporal) concentration variability for both dissolved and particulate constituents. Thus, failing to account for potential cross-sectional and temporal concentration variability in stormwater monitoring projects can introduce additional uncertainty in measured water quality data. Copyright © 2010 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  14. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.

  15. Applying Incremental Sampling Methodology to Soils Containing Heterogeneously Distributed Metallic Residues to Improve Risk Analysis.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

This study compares conventional grab sampling to incremental sampling methodology (ISM) to characterize metal contamination at a military small-arms range. Grab sample results had large variances, positively skewed non-normal distributions, extreme outliers, and poor agreement between duplicate samples even when samples were co-located within tens of centimeters of each other. The extreme outliers strongly influenced the grab sample means for the primary contaminants lead (Pb) and antimony (Sb). In contrast, median and mean metal concentrations were similar for the ISM samples. ISM significantly reduced measurement uncertainty of estimates of the mean, increasing data quality (e.g., for environmental risk assessments) with fewer samples (e.g., decreasing total project costs). Based on Monte Carlo resampling simulations, grab sampling resulted in highly variable means and upper confidence limits of the mean relative to ISM.
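
    A Monte Carlo sketch of why ISM narrows the spread of estimated means on a skewed field (the lognormal field and all parameters are hypothetical): each incremental sample physically composites many increments, so its value is far less affected by rare high-concentration particles than a single grab sample is.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical positively skewed concentration field (e.g., Pb, mg/kg).
    field = rng.lognormal(mean=3.0, sigma=1.5, size=100_000)

    def grab_means(n_samples, n_trials=2_000):
        """Means of n discrete grab samples, repeated over many trials."""
        return rng.choice(field, size=(n_trials, n_samples)).mean(axis=1)

    def ism_means(n_samples, k_increments=30, n_trials=2_000):
        """Each ISM sample is itself the average of k field increments."""
        picks = rng.choice(field, size=(n_trials, n_samples, k_increments))
        return picks.mean(axis=2).mean(axis=1)

    print("spread of grab-based means:", grab_means(10).std())
    print("spread of ISM-based means: ", ism_means(3).std())
    ```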

  16. Extreme event statistics in a drifting Markov chain

    NASA Astrophysics Data System (ADS)

    Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur

    2017-07-01

    We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
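
    The drift-free Sparre Andersen prediction is easy to check numerically: for any symmetric continuous step distribution, the probability that a walk's partial sums stay positive for n steps is C(2n, n)/4^n, independent of the step law. The sketch below compares simulation to that prediction and shows how even a small drift shifts the survival probability, echoing the paper's point about drift sensitivity.

    ```python
    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)

    def survival(n_steps, drift=0.0, n_walks=50_000):
        """Fraction of walks whose partial sums all stay positive."""
        steps = rng.normal(loc=drift, size=(n_walks, n_steps))
        paths = np.cumsum(steps, axis=1)
        return np.mean(paths.min(axis=1) > 0)

    n = 10
    sparre_andersen = comb(2 * n, n) / 4.0**n        # drift-free prediction
    print(survival(n, drift=0.0), sparre_andersen)   # close agreement
    print(survival(n, drift=0.2))                    # a small drift shifts it
    ```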

  17. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
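
    A small numerical illustration of the fallacy on synthetic data: a trivial true effect of Cohen's d = 0.02 yields an "extremely significant" p-value once the sample is large enough, while the effect size itself stays trivial.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Two populations differing by a trivial true effect (d = 0.02).
    for n in (100, 10_000, 1_000_000):
        a = rng.normal(0.00, 1.0, n)
        b = rng.normal(0.02, 1.0, n)
        t, p = stats.ttest_ind(a, b)
        d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        print(f"n={n:>9,}  p={p:.2e}  effect size d={d:.3f}")
    ```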

  18. Model-based inference for small area estimation with sampling weights

    PubMed Central

    Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.

    2017-01-01

    Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, stratification, and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take into account the sampling weights. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better when compared with more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860

  19. THE EXTREME SMALL SCALES: DO SATELLITE GALAXIES TRACE DARK MATTER?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Douglas F.; Berlind, Andreas A.; McBride, Cameron K.

    2012-04-10

We investigate the radial distribution of galaxies within their host dark matter halos as measured in the Sloan Digital Sky Survey by modeling their small-scale clustering. Specifically, we model the Jiang et al. measurements of the galaxy two-point correlation function down to very small projected separations (10 h^-1 kpc ≤ r ≤ 400 h^-1 kpc), in a wide range of luminosity threshold samples (absolute r-band magnitudes of -18 up to -23). We use a halo occupation distribution framework with free parameters that specify both the number and spatial distribution of galaxies within their host dark matter halos. We assume one galaxy resides in the halo center and additional galaxies are considered satellites that follow a radial density profile similar to the dark matter Navarro-Frenk-White (NFW) profile, except that the concentration and inner slope are allowed to vary. We find that in low luminosity samples (M_r < -19.5 and lower), satellite galaxies have radial profiles that are consistent with NFW. M_r < -20 and brighter satellite galaxies have radial profiles with significantly steeper inner slopes than NFW (we find inner logarithmic slopes ranging from -1.6 to -2.1, as opposed to -1 for NFW). We define a useful metric of concentration, M_1/10, which is the fraction of satellite galaxies (or mass) that are enclosed within one-tenth of the virial radius of a halo. We find that M_1/10 for low-luminosity satellite galaxies agrees with NFW, whereas for luminous galaxies it is 2.5-4 times higher, demonstrating that these galaxies are substantially more centrally concentrated within their dark matter halos than the dark matter itself. Our results therefore suggest that the processes that govern the spatial distribution of galaxies, once they have merged into larger halos, must be luminosity dependent, such that luminous galaxies become poor tracers of the underlying dark matter.
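
    The M_1/10 metric is straightforward to compute for the profile family described. The sketch below writes it as a generalized NFW with variable inner slope gamma; the paper's exact parameterization may differ, and the concentration value is illustrative.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def m_tenth(c=10.0, gamma=1.0):
        """M_1/10: fraction of mass inside 0.1 r_vir for a generalized NFW
        profile rho ~ x^-gamma * (1 + c*x)^(gamma - 3), with x = r / r_vir.
        gamma = 1 recovers NFW; the luminous satellites in the abstract
        prefer steeper inner slopes (gamma roughly 1.6 to 2.1)."""
        integrand = lambda x: x**2 * x**(-gamma) * (1 + c * x)**(gamma - 3)
        inner, _ = quad(integrand, 1e-8, 0.1)   # mass within 0.1 r_vir
        total, _ = quad(integrand, 1e-8, 1.0)   # mass within r_vir
        return inner / total

    print(m_tenth(gamma=1.0))   # NFW baseline
    print(m_tenth(gamma=2.0))   # steeper inner slope -> more concentrated
    ```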

  20. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multilayer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate

  1. A review of empirical research related to the use of small quantitative samples in clinical outcome scale development.

    PubMed

    Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S

    2016-11-01

    There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.

  2. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections will result in suboptimal performance.

  3. Recent Developments in MC-ICP-MS for Uranium Isotopic Determination from Small Samples.

    NASA Astrophysics Data System (ADS)

    Field, P.; Lloyd, N. S.

    2016-12-01

Uranium isotope ratio determination for nuclear, nuclear safeguards and environmental applications can be challenging due to 1) the large isotopic differences between samples and 2) the low abundance of 234U and 236U. For some applications the total uranium quantities can be limited, or it is desirable to run at lower concentrations for radiological protection. Recent developments in inlet systems and detector technologies allow small samples to be analyzed at higher precisions using MC-ICP-MS. Here we evaluate the combination of the Elemental Scientific Apex Omega desolvation system and the microFAST-MC dual loop-loading flow-injection system with the Thermo Scientific NEPTUNE Plus MC-ICP-MS. The inlet systems allow automated syringe loading and handling of small sample volumes with efficient desolvation to minimize the hydride interference on 236U. The highest ICP ion sampling efficiency is realized using the Thermo Scientific Jet Interface. Thermo Scientific 10^13 ohm amplifier technology allows small ion beams to be measured at higher precision, offering the highest signal/noise ratio with a linear and stable response that covers a wide dynamic range (ca. 1 kcps - 30 Mcps). For nanogram quantities of low-enriched and depleted uranium standards, the 235U was measured with 10^13 ohm amplifier technology. The minor isotopes (234U and 236U) were measured by SEM ion counters with RPQ lens filters, which offer the lowest detection limits. For sample amounts of ca. 20 ng, the minor isotopes can be moved onto 10^13 ohm amplifiers and the 235U onto a standard 10^11 ohm amplifier. To illustrate the

  4. Extremism without extremists: Deffuant model with emotions

    NASA Astrophysics Data System (ADS)

    Sobkowicz, Pawel

    2015-03-01

The frequent occurrence of extremist views in many social contexts, often growing from small minorities to an almost total majority, poses a significant challenge for democratic societies. The phenomenon can be described within the sociophysical paradigm. We present a modified version of the continuous bounded confidence opinion model, including a simple description of the influence of emotions on tolerances, and eventually on the evolution of opinions. Allowing for a psychologically based correlation between extreme opinions, high emotions and low tolerance for other people's views leads to quick dominance of the extreme views within the studied model, without introducing a special class of agents, as has been done in previous works. This dominance occurs even if the initial number of people with extreme opinions is very small. Possible suggestions related to mitigation of the process are briefly discussed.
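
    A minimal sketch of a bounded-confidence (Deffuant-type) model with an assumed emotion-opinion coupling in which agents holding extreme opinions are less tolerant. The specific tolerance function and all parameters below are illustrative, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, steps = 500, 200_000
    opinions = rng.uniform(-1.0, 1.0, N)

    def tolerance(x):
        """Assumed emotion-opinion coupling: extreme (emotional) agents
        are less tolerant of differing views."""
        return 0.4 * (1.0 - np.abs(x)) + 0.05

    mu = 0.5  # convergence rate of the bounded-confidence update
    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        # Interact only if both agents tolerate the opinion difference.
        if abs(opinions[i] - opinions[j]) < min(tolerance(opinions[i]),
                                                tolerance(opinions[j])):
            shift = mu * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift

    print("fraction near the extremes:", np.mean(np.abs(opinions) > 0.8))
    ```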

  5. Measurements of characteristic parameters of extremely small cogged wheels with low module by means of low-coherence interferometry

    NASA Astrophysics Data System (ADS)

    Pakula, Anna; Tomczewski, Slawomir; Skalski, Andrzej; Biało, Dionizy; Salbut, Leszek

    2010-05-01

This paper presents a novel application of low-coherence interferometry (LCI) to the measurement of characteristic parameters, such as circular pitch, foot (root) diameter and head (tip) diameter, of extremely small cogged wheels (wheel diameter below 3 mm, module m = 0.15) produced from metal and ceramics. The most interesting issues concerning small-diameter cogged wheels arise during their production: the characteristic parameters of a wheel depend strongly on the manufacturing process, and the shrinkage during casting varies with even slight changes in the fabrication process. In the paper, an LCI interferometric Twyman-Green setup with a pigtailed high-power light-emitting diode for cogged-wheel measurement is described. Due to its relatively large field of view, the whole wheel can be examined in one measurement, without the need for numerical stitching. A special binarization algorithm was developed and successfully applied for measuring the characteristic parameters of small cogged wheels. Finally, the head and foot diameters of two cogged wheels measured with the proposed LCI setup are presented and compared with results from a commercial optical profiler. The results of examining the injection moulds used to fabricate the measured cogged wheels are also presented. Additionally, the shrinkage of the cogged wheels is calculated from the obtained results. The proposed method is suitable for complex measurements of small-diameter, low-module cogged wheels, especially as there are no measurement standards for such objects.

  6. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    PubMed

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One such development is multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it infeasible to analyze the subtrials individually, which imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine whether the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. In addition to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further, but small, improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
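
    The Firth correction itself is easiest to see in the logistic setting. The sketch below implements Firth's bias-reduced estimator for plain logistic regression via the modified score equation; the trial analysis described above applies the analogous penalty to the Cox model, which is not reproduced here:

    ```python
    import numpy as np

    def firth_logistic(X, y, n_iter=50, tol=1e-8):
        """Bias-reduced (Firth-penalized) logistic regression via Newton
        iterations on the modified score U*(b) = X'(y - p + h(0.5 - p)),
        where h is the diagonal of the hat matrix (Firth 1993)."""
        n, k = X.shape
        beta = np.zeros(k)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-(X @ beta)))
            W = p * (1.0 - p)
            XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
            # hat-matrix diagonal: h_i = w_i * x_i' (X'WX)^{-1} x_i
            h = W * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)
            step = XtWX_inv @ (X.T @ (y - p + h * (0.5 - p)))
            beta += step
            if np.max(np.abs(step)) < tol:
                break
        return beta

    # Tiny hypothetical example: 12 patients, a treatment indicator, and a
    # treated group with all responses -- plain MLE diverges, Firth does not.
    X = np.column_stack([np.ones(12), np.repeat([0, 1], 6)])
    y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])
    print(firth_logistic(X, y))
    ```

    The same penalty idea (adding half the log-determinant of the information matrix to the log-likelihood) carries over to the Cox partial likelihood used in the trial setting above.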

  7. Efficiency of baited hoop nets for sampling catfish in southeastern U.S. small impoundments

    USGS Publications Warehouse

    Wallace, Benjamin C.; Weaver, Daniel M.; Kwak, Thomas J.

    2011-01-01

    Many U.S. natural resource agencies stock catfish (Ictaluridae) into small impoundments to provide recreational fishing opportunities. However, effective standardized methods for sampling catfish in small impoundments have not been developed for wide application, particularly in the southeastern United States. We evaluated the efficiency of three bait treatments (i.e., soybean cake, sunflower cake, and no bait) of tandem hoop nets in two North Carolina small impoundments during the fall of 2008 and spring of 2009 in a factorial experimental design. The impoundments were stocked with catchable-size channel catfish Ictalurus punctatus at contrastingly low (5.5 fish/ha) and high (90.0 fish/ha) rates prior to our sampling. Nets baited with soybean cake consistently sampled more channel catfish than any other treatment. Channel catfish catch ranged as high as 3,251 fish per net series during the fall in nets baited with soybean cake in the intensively stocked impoundment and was up to 8.5 and 15.3 times higher during the fall than in the spring in each impoundment. Nets baited with soybean cake sampled significantly (12 and 24 times) more channel catfish than those with no bait in the two impoundments. These trends did not occur among other catfish species. Nonictalurid fish and turtle catch was higher during spring than during fall, corresponding with low channel catfish catches. Our results indicate that fishing tandem hoop nets baited with soybean cake during the fall is a more efficient method for sampling channel catfish than nets baited with sunflower cake or no bait in spring or fall. Our findings validate this technique for application in southeastern U.S. small impoundments to assess catfish abundance to guide management and evaluate the success of catfish stocking programs.

  8. Small body size and extreme cortical bone remodeling indicate phyletic dwarfism in Magyarosaurus dacus (Sauropoda: Titanosauria)

    PubMed Central

    Stein, Koen; Csiki, Zoltan; Rogers, Kristina Curry; Weishampel, David B.; Redelstorff, Ragna; Carballido, Jose L.; Sander, P. Martin

    2010-01-01

    Sauropods were the largest terrestrial tetrapods (>10^5 kg) in Earth's history and grew at rates that rival those of extant mammals. Magyarosaurus dacus, a titanosaurian sauropod from the Upper Cretaceous (Maastrichtian) of Romania, is known exclusively from small individuals (<10^3 kg) and conflicts with the idea that all sauropods were massive. The diminutive M. dacus was a classical example of island dwarfism (phyletic nanism) in dinosaurs, but a recent study suggested that the small Romanian titanosaurs actually represent juveniles of a larger-bodied taxon. Here we present strong histological evidence that M. dacus was indeed a dwarf (phyletic nanoid). Bone histological analysis of an ontogenetic series of Magyarosaurus limb bones indicates that even the smallest Magyarosaurus specimens exhibit a bone microstructure identical to fully mature or old individuals of other sauropod taxa. Comparison of histologies with large-bodied sauropods suggests that Magyarosaurus had an extremely reduced growth rate, but had retained high basal metabolic rates typical for sauropods. The uniquely decreased growth rate and diminutive body size in Magyarosaurus were adaptations to life on a Cretaceous island and show that sauropod dinosaurs were not exempt from general ecological principles limiting body size. PMID:20435913

  9. Small body size and extreme cortical bone remodeling indicate phyletic dwarfism in Magyarosaurus dacus (Sauropoda: Titanosauria).

    PubMed

    Stein, Koen; Csiki, Zoltan; Rogers, Kristina Curry; Weishampel, David B; Redelstorff, Ragna; Carballido, Jose L; Sander, P Martin

    2010-05-18

    Sauropods were the largest terrestrial tetrapods (>10^5 kg) in Earth's history and grew at rates that rival those of extant mammals. Magyarosaurus dacus, a titanosaurian sauropod from the Upper Cretaceous (Maastrichtian) of Romania, is known exclusively from small individuals (<10^3 kg) and conflicts with the idea that all sauropods were massive. The diminutive M. dacus was a classical example of island dwarfism (phyletic nanism) in dinosaurs, but a recent study suggested that the small Romanian titanosaurs actually represent juveniles of a larger-bodied taxon. Here we present strong histological evidence that M. dacus was indeed a dwarf (phyletic nanoid). Bone histological analysis of an ontogenetic series of Magyarosaurus limb bones indicates that even the smallest Magyarosaurus specimens exhibit a bone microstructure identical to fully mature or old individuals of other sauropod taxa. Comparison of histologies with large-bodied sauropods suggests that Magyarosaurus had an extremely reduced growth rate, but had retained high basal metabolic rates typical for sauropods. The uniquely decreased growth rate and diminutive body size in Magyarosaurus were adaptations to life on a Cretaceous island and show that sauropod dinosaurs were not exempt from general ecological principles limiting body size.

  10. Fiber Bragg Grating Dilatometry in Extreme Magnetic Field and Cryogenic Conditions.

    PubMed

    Jaime, Marcelo; Corvalán Moya, Carolina; Weickert, Franziska; Zapf, Vivien; Balakirev, Fedor F; Wartenbe, Mark; Rosa, Priscila F S; Betts, Jonathan B; Rodriguez, George; Crooker, Scott A; Daou, Ramzy

    2017-11-08

    In this work, we review single mode SiO₂ fiber Bragg grating techniques for dilatometry studies of small single-crystalline samples in the extreme environments of very high, continuous, and pulsed magnetic fields of up to 150 T and at cryogenic temperatures down to <1 K. Distinct millimeter-long materials are measured as part of the technique development, including metallic, insulating, and radioactive compounds. Experimental strategies are discussed for the observation and analysis of the related thermal expansion and magnetostriction of materials, which can achieve a strain sensitivity (ΔL/L) as low as a few parts in one hundred million (≈10^-8). The impact of experimental artifacts, such as those originating in the temperature dependence of the fiber's index of refraction, light polarization rotation in magnetic fields, and reduced strain transfer from millimeter-long specimens, is analyzed quantitatively using analytic models available in the literature. We compare the experimental results with model predictions in the small-sample limit, and discuss the uncovered discrepancies.
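
    For readers unfamiliar with FBG dilatometry, the conversion from measured Bragg-wavelength shift to strain is a one-line relation. The sketch below assumes the textbook isothermal form with a typical effective photo-elastic coefficient for silica fiber (p_e ≈ 0.22); the artifact corrections discussed in the paper are not included:

    ```python
    def fbg_strain(d_lambda_nm, lambda_b_nm=1550.0, p_e=0.22):
        """Convert a Bragg-wavelength shift to strain via the standard
        FBG relation d_lambda / lambda = (1 - p_e) * strain, assuming
        an effective photo-elastic coefficient p_e ~ 0.22 for silica
        and neglecting the temperature term (isothermal case)."""
        return d_lambda_nm / ((1.0 - p_e) * lambda_b_nm)

    # A 0.01 pm shift on a 1550 nm grating corresponds to a strain of
    # roughly 8e-9, i.e. the parts-in-10^8 regime quoted in the abstract.
    print(f"{fbg_strain(1e-5):.1e}")
    ```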

  11. Electronics for Extreme Environments

    NASA Astrophysics Data System (ADS)

    Patel, J. U.; Cressler, J.; Li, Y.; Niu, G.

    2001-01-01

    Most NASA missions involve extreme environments comprising radiation and low or high temperatures. The current practice of providing a benign ambient operating environment for electronics costs considerable power and mass (for shielding). Immediate missions such as the Europa orbiter and lander and the Mars landers require electronics to perform reliably in extreme conditions during the most critical parts of the mission. Some other missions planned for the future also involve substantial surface activity in terms of measurements, sample collection, penetration through ice and crust, and the analysis of samples. It is therefore extremely critical to develop electronics that can operate reliably under extreme space environments. Silicon On Insulator (SOI) technology is an extremely attractive candidate for NASA's future low-power and high-speed electronic systems because it offers increased transconductance, decreased sub-threshold slope, reduced short-channel effects, elimination of the kink effect, enhanced low-field mobility, and immunity from radiation-induced latch-up. The common belief that semiconductor devices function better at low temperatures is generally true for bulk devices, but it does not hold for deep sub-micron SOI CMOS devices with microscopic device features of 0.25 micrometers and smaller. Various temperature-sensitive device parameters and device characteristics have recently been reported in the literature. The behavior of state-of-the-art technology devices under such conditions needs to be evaluated in order to determine possible modifications to device design for better performance and survivability under extreme environments. Here, we present a unique approach to developing electronics for extreme environments to benefit future NASA missions as described above. This will also benefit other long transit/lifetime missions such as the solar sail and planetary outposts, in which electronics is left exposed in the unshielded space at the ambient space

  12. Determination of phosphorus in small amounts of protein samples by ICP-MS.

    PubMed

    Becker, J Sabine; Boulyga, Sergei F; Pickhardt, Carola; Becker, J; Buddrus, Stefan; Przybylski, Michael

    2003-02-01

    Inductively coupled plasma mass spectrometry (ICP-MS) is used for phosphorus determination in protein samples. A small amount of solid protein sample (down to 1 µg) or of digested protein solution (1-10 µL) was denatured in nitric acid and hydrogen peroxide by closed-microvessel microwave digestion. Phosphorus determination was performed with an optimized analytical method using a double-focusing sector field inductively coupled plasma mass spectrometer (ICP-SFMS) and a quadrupole-based ICP-MS (ICP-QMS). For quality control of the phosphorus determination, a certified reference material (CRM), single cell proteins (BCR 273) with a high phosphorus content of 26.8 ± 0.4 mg g^-1, was analyzed. To study phosphorus determination in proteins while reducing the sample amount as far as possible, the homogeneity of CRM BCR 273 was investigated. Relative standard deviation and measurement accuracy in ICP-QMS were within 2%, 3.5%, 11% and 12% when using CRM BCR 273 sample weights of 40 mg, 5 mg, 1 mg and 0.3 mg, respectively. The lowest possible sample weight for an accurate phosphorus analysis in protein samples by ICP-MS is discussed. The analytical method developed was applied to the analysis of homogeneous protein samples in very low amounts [1-100 µg of solid protein sample, e.g. beta-casein, or down to 1 µL of protein or digest in solution (e.g., tau protein)]. A further reduction of the diluted protein solution volume was achieved by the application of flow injection in ICP-SFMS, which is discussed with reference to real protein digests after protein separation using 2D gel electrophoresis. The detection limits for phosphorus in biological samples were determined by ICP-SFMS down to the ng g^-1 level. The present work discusses the figures of merit for the determination of phosphorus in small amounts of protein sample with ICP-SFMS in comparison to ICP-QMS.

  13. Collisions of Terrestrial Worlds: The Occurrence of Extreme Mid-infrared Excesses around Low-mass Field Stars

    NASA Astrophysics Data System (ADS)

    Theissen, Christopher A.; West, Andrew A.

    2017-04-01

    We present the results of an investigation into the occurrence and properties (stellar age and mass trends) of low-mass field stars exhibiting extreme mid-infrared (MIR) excesses (L_IR/L_* ≳ 0.01). Stars for the analysis were initially selected from the Motion Verified Red Stars (MoVeRS) catalog of photometric stars with Sloan Digital Sky Survey, 2MASS, and WISE photometry and significant proper motions. We identify 584 stars exhibiting extreme MIR excesses, selected based on an empirical relationship for main-sequence W1-W3 colors. For a small subset of the sample, we show, using spectroscopic tracers of stellar age (Hα and Li I) and luminosity class, that the parent sample is most likely comprised of field dwarfs (≳ 1 Gyr). We also develop the Low-mass Kinematics (LoKi) galactic model to estimate the completeness of the extreme MIR excess sample. Using Galactic height as a proxy for stellar age, the completeness-corrected analysis indicates a distinct age dependence for field stars exhibiting extreme MIR excesses. We also find a trend with stellar mass (using r - z color as a proxy). Our findings are consistent with the detected extreme MIR excesses originating from dust created in a short-lived collisional cascade (≲100,000 years) during a giant impact between two large planetesimals or terrestrial planets. These stars with extreme MIR excesses also provide support for planetary collisions being the dominant mechanism in creating the observed Kepler dichotomy (the need for more than a single mode, typically two, to explain the variety of planetary system architectures Kepler has observed), rather than different formation mechanisms.

  14. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
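
    The ensemble-selection step can be framed as a knapsack problem. The sketch below is the textbook 0-1 knapsack dynamic program with hypothetical accuracy values and redundancy weights, not the tailored variant described in the paper:

    ```python
    def knapsack_01(values, weights, capacity):
        """Classic 0-1 knapsack by dynamic programming: maximize total
        value within a weight budget. Returns (best value, chosen ids)."""
        n = len(values)
        dp = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for c in range(capacity + 1):
                dp[i][c] = dp[i - 1][c]
                if weights[i - 1] <= c:
                    dp[i][c] = max(dp[i][c],
                                   dp[i - 1][c - weights[i - 1]] + values[i - 1])
        chosen, c = [], capacity
        for i in range(n, 0, -1):       # backtrack to recover the selection
            if dp[i][c] != dp[i - 1][c]:
                chosen.append(i - 1)
                c -= weights[i - 1]
        return dp[n][capacity], chosen[::-1]

    # Hypothetical ensemble selection: values = base-classifier accuracy
    # scores, weights = redundancy penalties, capacity = diversity budget.
    acc = [62, 58, 71, 55, 66]
    red = [3, 2, 4, 1, 3]
    print(knapsack_01(acc, red, capacity=7))
    ```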

  15. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  16. Evolution caused by extreme events.

    PubMed

    Grant, Peter R; Grant, B Rosemary; Huey, Raymond B; Johnson, Marc T J; Knoll, Andrew H; Schmitt, Johanna

    2017-06-19

    Extreme events can be a major driver of evolutionary change over geological and contemporary timescales. Outstanding examples are evolutionary diversification following mass extinctions caused by extreme volcanism or asteroid impact. The evolution of organisms in contemporary time is typically viewed as a gradual and incremental process that results from genetic change, environmental perturbation or both. However, contemporary environments occasionally experience strong perturbations such as heat waves, floods, hurricanes, droughts and pest outbreaks. These extreme events set up strong selection pressures on organisms, and are small-scale analogues of the dramatic changes documented in the fossil record. Because extreme events are rare, almost by definition, they are difficult to study. So far most attention has been given to their ecological rather than to their evolutionary consequences. We review several case studies of contemporary evolution in response to two types of extreme environmental perturbations, episodic (pulse) or prolonged (press). Evolution is most likely to occur when extreme events alter community composition. We encourage investigators to be prepared for evolutionary change in response to rare events during long-term field studies. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).

  17. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity

    NASA Technical Reports Server (NTRS)

    Miller, Robert A.; Kuczmarski, Maria A.

    2009-01-01

    This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc to directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data.
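
    The underlying steady-state relation is k = Q·L/(A·ΔT), with Q corrected for the heat that bypasses the sample, which matches the correction strategy described above. A minimal sketch with hypothetical numbers for an air-like specimen:

    ```python
    import math

    def conductivity(q_heater_w, q_loss_w, thickness_m, diameter_m, delta_t_k):
        """Steady-state hot-plate estimate k = Q * L / (A * dT), where Q is
        the electrical heater power minus the separately measured/modelled
        heat escaping in directions other than through the sample."""
        area = math.pi * (diameter_m / 2.0) ** 2
        return (q_heater_w - q_loss_w) * thickness_m / (area * delta_t_k)

    # Hypothetical numbers: 50 mW heater power of which 30 mW bypasses the
    # sample, a 5 mm thick, 25 mm diameter disc, and a 10 K difference.
    print(f"k = {conductivity(0.05, 0.03, 0.005, 0.025, 10.0):.4f} W/(m K)")
    # ~0.020 W/(m K), i.e. close to the conductivity of air
    ```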

  18. Multiplex APLP System for High-Resolution Haplogrouping of Extremely Degraded East-Asian Mitochondrial DNAs

    PubMed Central

    Kakuda, Tsuneo; Shojo, Hideki; Tanaka, Mayumi; Nambiar, Phrabhakaran; Minaguchi, Kiyoshi; Umetsu, Kazuo; Adachi, Noboru

    2016-01-01

    Mitochondrial DNA (mtDNA) serves as a powerful tool for exploring matrilineal phylogeographic ancestry, as well as for analyzing highly degraded samples, because of its polymorphic nature and high copy numbers per cell. The recent advent of complete mitochondrial genome sequencing has led to improved techniques for phylogenetic analyses based on mtDNA, and many multiplex genotyping methods have been developed for the hierarchical analysis of phylogenetically important mutations. However, few high-resolution multiplex genotyping systems for analyzing East-Asian mtDNA can be applied to extremely degraded samples. Here, we present a multiplex system for analyzing mitochondrial single nucleotide polymorphisms (mtSNPs), which relies on a novel amplified product-length polymorphisms (APLP) method that uses inosine-flapped primers and is specifically designed for the detailed haplogrouping of extremely degraded East-Asian mtDNAs. We used fourteen 6-plex polymerase chain reactions (PCRs) and subsequent electrophoresis to examine 81 haplogroup-defining SNPs and 3 insertion/deletion sites, and we were able to securely assign the studied mtDNAs to relevant haplogroups. Our system requires only 1×10^-13 g (100 fg) of crude DNA to obtain a full profile. Owing to its small amplicon size (<110 bp), this new APLP system was successfully applied to extremely degraded samples for which direct sequencing of hypervariable segments using mini-primer sets was unsuccessful, and proved to be more robust than conventional APLP analysis. Thus, our new APLP system is effective for retrieving reliable data from extremely degraded East-Asian mtDNAs. PMID:27355212

  19. Multiplex APLP System for High-Resolution Haplogrouping of Extremely Degraded East-Asian Mitochondrial DNAs.

    PubMed

    Kakuda, Tsuneo; Shojo, Hideki; Tanaka, Mayumi; Nambiar, Phrabhakaran; Minaguchi, Kiyoshi; Umetsu, Kazuo; Adachi, Noboru

    2016-01-01

    Mitochondrial DNA (mtDNA) serves as a powerful tool for exploring matrilineal phylogeographic ancestry, as well as for analyzing highly degraded samples, because of its polymorphic nature and high copy numbers per cell. The recent advent of complete mitochondrial genome sequencing has led to improved techniques for phylogenetic analyses based on mtDNA, and many multiplex genotyping methods have been developed for the hierarchical analysis of phylogenetically important mutations. However, few high-resolution multiplex genotyping systems for analyzing East-Asian mtDNA can be applied to extremely degraded samples. Here, we present a multiplex system for analyzing mitochondrial single nucleotide polymorphisms (mtSNPs), which relies on a novel amplified product-length polymorphisms (APLP) method that uses inosine-flapped primers and is specifically designed for the detailed haplogrouping of extremely degraded East-Asian mtDNAs. We used fourteen 6-plex polymerase chain reactions (PCRs) and subsequent electrophoresis to examine 81 haplogroup-defining SNPs and 3 insertion/deletion sites, and we were able to securely assign the studied mtDNAs to relevant haplogroups. Our system requires only 1×10^-13 g (100 fg) of crude DNA to obtain a full profile. Owing to its small amplicon size (<110 bp), this new APLP system was successfully applied to extremely degraded samples for which direct sequencing of hypervariable segments using mini-primer sets was unsuccessful, and proved to be more robust than conventional APLP analysis. Thus, our new APLP system is effective for retrieving reliable data from extremely degraded East-Asian mtDNAs.

  20. Small Sample Sizes Yield Biased Allometric Equations in Temperate Forests

    PubMed Central

    Duncanson, L.; Rourke, O.; Dubayah, R.

    2015-01-01

    Accurate quantification of forest carbon stocks is required for constraining the global carbon cycle and its impacts on climate. The accuracies of forest biomass maps are inherently dependent on the accuracy of the field biomass estimates used to calibrate models, which are generated with allometric equations. Here, we provide a quantitative assessment of the sensitivity of allometric parameters to sample size in temperate forests, focusing on the allometric relationship between tree height and crown radius. We use LiDAR remote sensing to isolate between 10,000 and more than 1,000,000 tree height and crown radius measurements per site in six U.S. forests. We find that fitted allometric parameters are highly sensitive to sample size, producing systematic overestimates of height. We extend our analysis to biomass through the application of empirical relationships from the literature, and show that given the small sample sizes used in common allometric equations for biomass, the average site-level biomass bias is ~+70% with a standard deviation of 71%, ranging from −4% to +193%. These findings underscore the importance of increasing the sample sizes used for allometric equation generation. PMID:26598233
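
    One way to build intuition for this sensitivity is a toy Monte Carlo: fit a log-log allometry to synthetic data and watch the error of an extrapolated prediction as the calibration sample shrinks. The population parameters, scatter, and diameter range below are illustrative assumptions, not the paper's LiDAR data:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def fit_and_predict(n):
        """Fit log(B) = log(a) + b*log(D) by least squares on synthetic
        trees (a = 0.1, b = 2.5, lognormal scatter sigma = 0.4), then
        return the predicted biomass of a large tree (D = 80 cm)."""
        d = rng.uniform(5, 80, n)
        log_b = np.log(0.1) + 2.5 * np.log(d) + rng.normal(0, 0.4, n)
        slope, intercept = np.polyfit(np.log(d), log_b, 1)
        return np.exp(intercept + slope * np.log(80.0))

    true_mean = 0.1 * 80**2.5 * np.exp(0.4**2 / 2)   # E[B | D = 80 cm]
    for n in (15, 50, 5000):
        err = np.array([fit_and_predict(n) for _ in range(500)]) / true_mean - 1
        print(f"n={n:5d}  median {np.median(err):+7.1%}  "
              f"90% range [{np.percentile(err, 5):+.1%}, {np.percentile(err, 95):+.1%}]")
    ```

    Small calibration samples inflate the spread of the extrapolated estimate enormously, and the naive back-transformation from log space is systematically biased at every sample size.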

  1. Improved variance estimation of classification performance via reduction of bias caused by small sample size.

    PubMed

    Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders

    2006-03-13

    Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust, with good generalization performance on new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore, different methods for small sample performance estimation, such as a recently proposed procedure called Repeated Random Sampling (RSS), are also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.

  2. A cross-sectional, randomized cluster sample survey of household vulnerability to extreme heat among slum dwellers in Ahmedabad, India.

    PubMed

    Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy

    2013-06-18

    Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
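
    A sketch of the analysis style described (logistic regression fit by generalized estimating equations with an exchangeable within-cluster correlation) using statsmodels on synthetic data; the covariates, effect sizes, and cluster structure are hypothetical stand-ins for the survey variables:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Hypothetical survey: 30 clusters x 20 households, a binary
    # heat-illness outcome, and two illustrative covariates.
    n_clusters, m = 30, 20
    groups = np.repeat(np.arange(n_clusters), m)
    age = rng.normal(40, 12, n_clusters * m)
    outdoor_work = rng.integers(0, 2, n_clusters * m)
    cluster_eff = rng.normal(0, 0.5, n_clusters)[groups]
    logit = -2.0 + 0.03 * age + 0.8 * outdoor_work + cluster_eff
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(np.column_stack([age, outdoor_work]))
    model = sm.GEE(y, X, groups=groups,
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Exchangeable())
    print(model.fit().summary())
    ```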

  3. Identification of multiple mRNA and DNA sequences from small tissue samples isolated by laser-assisted microdissection.

    PubMed

    Bernsen, M R; Dijkman, H B; de Vries, E; Figdor, C G; Ruiter, D J; Adema, G J; van Muijen, G N

    1998-10-01

    Molecular analysis of small tissue samples has become increasingly important in biomedical studies. Using a laser dissection microscope and modified nucleic acid isolation protocols, we demonstrate that multiple mRNA as well as DNA sequences can be identified from a single-cell sample. In addition, we show that the specificity of procurement of tissue samples is not compromised by smear contamination resulting from scraping of the microtome knife during sectioning of lesions. The procedures described herein thus allow for efficient RT-PCR or PCR analysis of multiple nucleic acid sequences from small tissue samples obtained by laser-assisted microdissection.

  4. Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.

    ERIC Educational Resources Information Center

    Parshall, Cynthia G.; Kromrey, Jeffrey D.

    1996-01-01

    Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
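
    A comparison of this kind is easy to reproduce by simulation with scipy. The sketch below estimates the Monte Carlo power of Pearson's chi-square (with and without Yates's correction) and Fisher's Exact Test for a small two-group design; the group size and cell probabilities are arbitrary illustrative choices:

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency, fisher_exact

    rng = np.random.default_rng(7)

    def power(n_per_group, p1, p2, reps=2000, alpha=0.05):
        """Monte Carlo power of three tests on 2x2 tables with small n."""
        hits = {"pearson": 0, "yates": 0, "fisher": 0}
        for _ in range(reps):
            a = rng.binomial(n_per_group, p1)
            b = rng.binomial(n_per_group, p2)
            table = np.array([[a, n_per_group - a], [b, n_per_group - b]])
            if table.sum(axis=0).min() == 0:   # degenerate margin: skip
                continue
            hits["pearson"] += chi2_contingency(table, correction=False)[1] < alpha
            hits["yates"]   += chi2_contingency(table, correction=True)[1] < alpha
            hits["fisher"]  += fisher_exact(table)[1] < alpha
        return {k: v / reps for k, v in hits.items()}

    print(power(n_per_group=15, p1=0.2, p2=0.6))
    ```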

  5. Extreme Magnitude Earthquakes and their Economical Consequences

    NASA Astrophysics Data System (ADS)

    Chavez, M.; Cabrera, E.; Ashworth, M.; Perea, N.; Emerson, D.; Salazar, A.; Moulinec, C.

    2011-12-01

    The frequency of occurrence of extreme magnitude earthquakes varies from tens to thousands of years, depending on the seismotectonic region of the world considered. However, the human and economic losses can be very large when their hypocenters are located in the neighborhood of heavily populated and/or industrialized regions, as recently observed for the 1985 Mw 8.01 Michoacan, Mexico, and the 2011 Mw 9 Tohoku, Japan, earthquakes. Here, a methodology is proposed to estimate the probabilities of exceedance of the intensities of extreme magnitude earthquakes (PEI) and of their direct economic consequences (PEDEC). The PEI are obtained by using supercomputing facilities to generate samples of the 3D propagation of plausible extreme earthquake scenarios, and by enlarging those samples through Monte Carlo simulation. The PEDEC are computed by combining appropriate vulnerability functions with the scenario intensity samples and Monte Carlo simulation. An example of the application of the methodology to the potential occurrence of extreme Mw 8.5 subduction earthquakes affecting Mexico City is presented.

  6. Exploiting molecular dynamics in Nested Sampling simulations of small peptides

    NASA Astrophysics Data System (ADS)

    Burkoff, Nikolas S.; Baldock, Robert J. N.; Várnai, Csilla; Wild, David L.; Csányi, Gábor

    2016-04-01

    Nested Sampling (NS) is a parameter space sampling algorithm which can be used for sampling the equilibrium thermodynamics of atomistic systems. NS has previously been used to explore the potential energy surface of a coarse-grained protein model and has significantly outperformed parallel tempering when calculating heat capacity curves of Lennard-Jones clusters. The original NS algorithm uses Monte Carlo (MC) moves; however, a variant, Galilean NS, has recently been introduced which allows NS to be incorporated into a molecular dynamics framework, so NS can be used for systems which lack efficient prescribed MC moves. In this work we demonstrate the applicability of Galilean NS to atomistic systems. We present an implementation of Galilean NS using the Amber molecular dynamics package and demonstrate its viability by sampling alanine dipeptide, both in vacuo and in implicit solvent. Unlike previous studies of this system, we present the heat capacity curves of alanine dipeptide, whose calculation provides a stringent test for sampling algorithms. We also compare our results with those calculated using replica exchange molecular dynamics (REMD) and find good agreement. We show the computational effort required for accurate heat capacity estimation for small peptides. We also calculate the alanine dipeptide Ramachandran free energy surface for a range of temperatures and use it to compare the results using the latest Amber force field with previous theoretical and experimental results.
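
    The core NS loop is compact enough to sketch. The toy below uses MC walk moves on a 2D double-well potential rather than the Galilean/MD moves used in the paper, and reports the partition function (relative to the sampled prior volume, and truncated at a fixed iteration count) at a few inverse temperatures:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def energy(p):                      # toy 2D double-well potential
        return np.sum((p**2 - 1.0)**2)

    K, ITERS = 200, 4000                # live points, NS iterations
    live = rng.uniform(-3, 3, (K, 2))
    E_live = np.array([energy(p) for p in live])
    dead_E = []

    for _ in range(ITERS):
        worst = int(np.argmax(E_live))
        E_max = E_live[worst]
        dead_E.append(E_max)
        # clone a survivor and walk it under the constraint E < E_max
        # (Galilean NS would instead use reflecting dynamics here)
        idx = rng.integers(K - 1)
        if idx >= worst:
            idx += 1
        x = live[idx].copy()
        for _ in range(20):
            trial = x + rng.normal(0, 0.2, 2)
            if np.all(np.abs(trial) < 3) and energy(trial) < E_max:
                x = trial
        live[worst], E_live[worst] = x, energy(x)

    # shell weights w_i = X_{i-1} - X_i with X_i ~ exp(-i/K)
    steps = np.arange(ITERS)
    w = np.exp(-steps / K) - np.exp(-(steps + 1) / K)
    for beta in (0.5, 1.0, 2.0):
        print(f"beta={beta}: Z ~ {np.sum(w * np.exp(-beta * np.array(dead_E))):.4f}")
    ```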

  7. Extremely small polarization beam splitter based on a multimode interference coupler with a silicon hybrid plasmonic waveguide.

    PubMed

    Guan, Xiaowei; Wu, Hao; Shi, Yaocheng; Dai, Daoxin

    2014-01-15

    A novel polarization beam splitter (PBS) with an extremely small footprint is proposed, based on a multimode interference (MMI) coupler with a silicon hybrid plasmonic waveguide. The MMI section, partially covered with a metal strip, is designed to achieve mirror imaging for TE polarization. For TM polarization, on the other hand, there is almost no MMI effect, since the higher-order TM modes are hardly excited due to the hybrid plasmonic effect. With this design, the whole PBS, including the 1.1 μm long MMI section as well as the output section, has a footprint as small as ∼1.8 μm×2.5 μm. Moreover, the fabrication process is simple since the waveguide dimensions are relatively large (e.g., input/output waveguide widths w ≥ 300 nm and MMI width w_MMI = 800 nm). Numerical simulations show that the designed PBS has a broad band of ∼80 nm for an extinction ratio (ER) > 10 dB, as well as a large fabrication tolerance allowing a silicon core width variation of -30 nm < Δw < 50 nm and a metal strip width variation of -200 nm < Δw_m < 0.

  8. Effects of extreme climatic events on small-scale spatial patterns: a 20-year study of the distribution of a desert spider.

    PubMed

    Birkhofer, Klaus; Henschel, Joh; Lubin, Yael

    2012-11-01

    Individuals of most animal species are non-randomly distributed in space. Extreme climatic events are often ignored as potential drivers of distribution patterns, and the role of such events is difficult to assess. Seothyra henscheli (Araneae, Eresidae) is a sedentary spider found in the Namib dunes in Namibia. The spider constructs a sticky-edged silk web on the sand surface, connected to a vertical, silk-lined burrow. Above-ground web structures can be damaged by strong winds or heavy rainfall, and during dispersal spiders are susceptible to environmental extremes. Locations of burrows were mapped in three field sites in 16 out of 20 years from 1987 to 2007, and these grid-based data were used to identify the relationship between spatial patterns, climatic extremes and sampling year. According to Morisita's index, individuals had an aggregated distribution in most years and field sites, and Geary's C suggests clustering up to scales of 2 m. Individuals were more aggregated in years with high maximum wind speed and low annual precipitation. Our results suggest that clustering is a temporally stable property of populations that holds even under fluctuating burrow densities. Climatic extremes, however, affect the intensity of clustering behaviour: individuals seem to be better protected in field sites with many conspecific neighbours. We suggest that burrow-site selection is driven at least partly by conspecific cuing, and this behaviour may protect populations from collapse during extreme climatic events.
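
    Morisita's index, used above to quantify aggregation, is straightforward to compute from quadrat counts. A minimal sketch with invented burrow counts:

    ```python
    import numpy as np

    def morisita(counts):
        """Morisita's index of dispersion for quadrat counts:
        I = Q * sum(n_i * (n_i - 1)) / (N * (N - 1)), where Q is the
        number of quadrats and N the total count. Values near 1 suggest
        a random pattern, > 1 aggregation, < 1 regular spacing."""
        counts = np.asarray(counts)
        q, n = counts.size, counts.sum()
        return q * np.sum(counts * (counts - 1)) / (n * (n - 1))

    # Hypothetical burrow counts in 10 grid cells for two years
    print(morisita([3, 2, 4, 3, 2, 3, 3, 4, 2, 3]))   # evenly spread: <= 1
    print(morisita([12, 0, 9, 0, 0, 1, 0, 7, 0, 0]))  # clumped: > 1
    ```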

  9. Creating Composite Age Groups to Smooth Percentile Rank Distributions of Small Samples

    ERIC Educational Resources Information Center

    Lopez, Francesca; Olson, Amy; Bansal, Naveen

    2011-01-01

    Individually administered tests are often normed on small samples, a process that may result in irregularities within and across various age or grade distributions. Test users often smooth distributions guided by Thurstone assumptions (normality and linearity) to result in norms that adhere to assumptions made about how the data should look. Test…

  10. Decoder calibration with ultra small current sample set for intracortical brain-machine interface

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Ma, Xuan; Chen, Luyao; Zhou, Jin; Wang, Changyong; Li, Wei; He, Jiping

    2018-04-01

    Objective. Intracortical brain-machine interfaces (iBMIs) aim to restore efficient communication and movement ability for paralyzed patients. However, frequent recalibration is required for consistency and reliability, and every recalibration requires a relatively large current sample set. The aim of this study is to develop an effective decoder calibration method that can achieve good performance while minimizing recalibration time. Approach. Two rhesus macaques implanted with intracortical microelectrode arrays were trained separately on movement and sensory paradigms. Neural signals were recorded to decode reaching positions or grasping postures. A novel principal component analysis-based domain adaptation (PDA) method was proposed to recalibrate the decoder with only an ultra-small current sample set by taking advantage of large historical data, and the decoding performance was compared with three other calibration methods for evaluation. Main results. The PDA method closed the gap between historical and current data effectively, and made it possible to take advantage of large historical data for decoder recalibration in current data decoding. Using only an ultra-small current sample set (five trials of each category), the decoder calibrated using the PDA method achieved much better and more robust performance in all sessions than the other three calibration methods in both monkeys. Significance. (1) With this study, transfer learning theory was brought into iBMI decoder calibration for the first time. (2) Unlike most transfer learning studies, the target data in this study were an ultra-small sample set transferred to the source data. (3) By taking advantage of historical data, the PDA method was demonstrated to be effective in reducing recalibration time for both the movement paradigm and the sensory paradigm, indicating a viable generalization. By reducing the demand for large current training data, this new method may facilitate the application

  11. Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.

    PubMed

    Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira

    2012-07-15

    Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com

  12. Fiber Bragg Grating Dilatometry in Extreme Magnetic Field and Cryogenic Conditions

    PubMed Central

    Corvalán Moya, Carolina; Weickert, Franziska; Zapf, Vivien; Balakirev, Fedor F.; Wartenbe, Mark; Rosa, Priscila F. S.; Betts, Jonathan B.; Crooker, Scott A.; Daou, Ramzy

    2017-01-01

    In this work, we review single mode SiO2 fiber Bragg grating techniques for dilatometry studies of small single-crystalline samples in the extreme environments of very high, continuous, and pulsed magnetic fields of up to 150 T and at cryogenic temperatures down to <1 K. Distinct millimeter-long materials are measured as part of the technique development, including metallic, insulating, and radioactive compounds. Experimental strategies are discussed for the observation and analysis of the related thermal expansion and magnetostriction of materials, which can achieve a strain sensitivity (ΔL/L) as low as a few parts in one hundred million (≈10^-8). The impact of experimental artifacts, such as those originating in the temperature dependence of the fiber’s index of refraction, light polarization rotation in magnetic fields, and reduced strain transfer from millimeter-long specimens, is analyzed quantitatively using analytic models available in the literature. We compare the experimental results with model predictions in the small-sample limit, and discuss the uncovered discrepancies. PMID:29117137

  13. Fiber Bragg Grating Dilatometry in Extreme Magnetic Field and Cryogenic Conditions

    DOE PAGES

    Jaime, Marcelo; Corvalán Moya, Carolina; Weickert, Franziska; ...

    2017-11-08

    In this work, we review single mode SiO2 fiber Bragg grating techniques for dilatometry studies of small single-crystalline samples in the extreme environments of very high, continuous, and pulsed magnetic fields of up to 150 T and at cryogenic temperatures down to <1 K. Distinct millimeter-long materials are measured as part of the technique development, including metallic, insulating, and radioactive compounds. Experimental strategies are discussed for the observation and analysis of the related thermal expansion and magnetostriction of materials, which can achieve a strain sensitivity (ΔL/L) as low as a few parts in one hundred million (≈10^-8). The impact of experimental artifacts, such as those originating in the temperature dependence of the fiber’s index of refraction, light polarization rotation in magnetic fields, and reduced strain transfer from millimeter-long specimens, is analyzed quantitatively using analytic models available in the literature. We compare the experimental results with model predictions in the small-sample limit, and discuss the uncovered discrepancies.

  14. What Can You Do with a Returned Sample of Martian Dust?

    NASA Technical Reports Server (NTRS)

    Zolensky, Michael E.; Nakamura-Messenger, K.

    2007-01-01

    A major issue that we successfully addressed for the Stardust Mission was the magnitude and manner of preliminary examination (PET) of the returned samples, which totaled much less than 1 mg. Not since the Apollo and Luna days had anyone faced this issue, and the lessons of Apollo PET were of limited use because of the very different sample masses involved and the enormous advances in analytical capabilities since the 1960s. This paper reviews some of the techniques for examination of the small, very rare samples that would be returned from Mars missions.

  15. A Unique Sample of Extreme-BCG Clusters at 0.2 < z < 0.5

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    2017-09-01

    The recently discovered Phoenix cluster harbors the most extreme BCG in the known universe. Despite the cluster's high mass and X-ray luminosity, it was consistently identified by surveys as an isolated AGN, due to the bright central point source and the compact cool core. Armed with hindsight, we have undertaken an all-sky survey based on archival X-ray, OIR, and radio data to identify other similarly extreme systems that were likewise missed. A pilot study demonstrated that this strategy works, leading to the discovery of a new, massive cluster at z ≈ 0.2 which was missed by previous X-ray surveys due to the presence of a bright central QSO. We propose here to observe 6 new clusters from our complete northern-sky survey, which harbor some of the most extreme central galaxies known.

  16. Extreme obesity reduces bone mineral density: complementary evidence from mice and women.

    PubMed

    Núñez, Nomelí P; Carpenter, Catherine L; Perkins, Susan N; Berrigan, David; Jaque, S Victoria; Ingles, Sue Ann; Bernstein, Leslie; Forman, Michele R; Barrett, J Carl; Hursting, Stephen D

    2007-08-01

    To evaluate the effects of body adiposity on bone mineral density in the presence and absence of ovarian hormones in female mice and postmenopausal women, we assessed percentage body fat, serum leptin levels, and bone mineral density in ovariectomized and non-ovariectomized C57BL/6 female mice that had been fed various calorically dense diets to induce body weight profiles ranging from lean to very obese. Additionally, we assessed percentage body fat and whole body bone mineral density in 37 overweight and extremely obese postmenopausal women from the Women's Contraceptive and Reproductive Experiences study. Higher levels of body adiposity (>40% body fat) were associated with lower bone mineral density in ovariectomized C57BL/6 female mice. A similar trend was observed in a small sample of postmenopausal women. The complementary studies in mice and women suggest that extreme obesity in postmenopausal women may be associated with reduced bone mineral density. Thus, extreme obesity (BMI > 40 kg/m²) may increase the risk for osteopenia and osteoporosis. Given the obesity epidemic in the U.S. and in many other countries, and, in particular, the rising number of extremely obese adult women, increased attention should be drawn to the significant and interrelated public health issues of obesity and osteoporosis.

  17. Are extreme events (statistically) special? (Invited)

    NASA Astrophysics Data System (ADS)

    Main, I. G.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A. F.; McCloskey, J.

    2009-12-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic’, do they ‘know’ how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events, and more generally the non-Gaussian statistical properties, strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias into the best-fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic’-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball’ fits to unconsciously (but
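
    The least-squares bias mentioned above is easy to demonstrate by simulation. The sketch below draws synthetic Gutenberg-Richter catalogs and compares Aki's maximum-likelihood b-value estimator against a naive least-squares fit to the cumulative frequency-magnitude curve; the catalog size, b-value, and completeness magnitude are illustrative choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def sample_catalog(n, b=1.0, m_c=2.0):
        """Gutenberg-Richter magnitudes above completeness m_c:
        P(M > m) = 10**(-b * (m - m_c))."""
        return m_c + rng.exponential(scale=np.log10(np.e) / b, size=n)

    def b_mle(mags, m_c=2.0):
        # Aki (1965) maximum-likelihood estimator
        return np.log10(np.e) / (mags.mean() - m_c)

    def b_lsq(mags, m_c=2.0):
        # naive least squares on the log cumulative count curve
        edges = np.arange(m_c, mags.max(), 0.1)
        n_cum = np.array([(mags >= m).sum() for m in edges])
        slope, _ = np.polyfit(edges, np.log10(n_cum), 1)
        return -slope

    est = np.array([[b_mle(c), b_lsq(c)]
                    for c in (sample_catalog(100) for _ in range(1000))])
    print(f"MLE mean={est[:, 0].mean():.3f} sd={est[:, 0].std():.3f}")
    print(f"LSQ mean={est[:, 1].mean():.3f} sd={est[:, 1].std():.3f}")
    ```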

  18. A nonlethal sampling method to obtain, generate and assemble whole blood transcriptomes from small, wild mammals.

    PubMed

    Huang, Zixia; Gallot, Aurore; Lao, Nga T; Puechmaille, Sébastien J; Foley, Nicole M; Jebb, David; Bekaert, Michaël; Teeling, Emma C

    2016-01-01

    The acquisition of tissue samples from wild populations is a constant challenge in conservation biology, especially for endangered species and protected species where nonlethal sampling is the only option. Whole blood has been suggested as a nonlethal sample type that contains a high percentage of bodywide and genomewide transcripts and therefore can be used to assess the transcriptional status of an individual, and to infer a high percentage of the genome. However, only limited quantities of blood can be nonlethally sampled from small species, and it is not known whether enough genetic material is contained in only a few drops of blood, which represents the upper limit of sample collection for some small species. In this study, we developed a nonlethal sampling method, the laboratory protocols and a bioinformatic pipeline to sequence and assemble the whole blood transcriptome, using Illumina RNA-Seq, from wild greater mouse-eared bats (Myotis myotis). For optimal results, both ribosomal and globin RNAs must be removed before library construction. DNase treatment is recommended but not required, enabling the use of smaller amounts of starting RNA. A large proportion of protein-coding genes (61%) in the genome were expressed in the blood transcriptome, comparable to brain (65%), kidney (63%) and liver (58%) transcriptomes, and up to 99% of the mitogenome (excluding the D-loop) was recovered in the RNA-Seq data. In conclusion, this nonlethal blood sampling method provides an opportunity for genomewide transcriptomic studies of small, endangered or critically protected species, without sacrificing any individuals. © 2015 John Wiley & Sons Ltd.

  19. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969

  20. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
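
    A sketch of the first study's setup: clustering binary code profiles with an average-linkage hierarchy on Jaccard distances (scipy), using a synthetic two-profile sample of 50 "participants". The profiles and noise rate are invented for illustration, and this is one reasonable metric/linkage choice for binary data, not the paper's exact configuration:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(5)

    # Hypothetical coded interviews: 50 participants x 12 binary codes,
    # generated from two latent profiles plus 10% bit-flip noise.
    profiles = np.array([[1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0],
                         [0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1]])
    members = rng.integers(0, 2, 50)
    data = np.abs(profiles[members] - (rng.random((50, 12)) < 0.1))

    # For 0/1 codes, a matching-based distance such as Jaccard with
    # average linkage is a common choice (Ward assumes Euclidean data).
    z = linkage(pdist(data, metric="jaccard"), method="average")
    labels = fcluster(z, t=2, criterion="maxclust")
    print("recovered cluster sizes:", np.bincount(labels)[1:])
    print("agreement with truth:", max(np.mean(labels - 1 == members),
                                       np.mean(labels - 1 != members)))
    ```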

  1. Modelling hydrological extremes under non-stationary conditions using climate covariates

    NASA Astrophysics Data System (ADS)

    Vasiliades, Lampros; Galiatsatou, Panagiota; Loukas, Athanasios

    2013-04-01

    Extreme value theory is a probabilistic framework for estimating the future probabilities of occurrence of extreme events (e.g. extreme precipitation and streamflow) from past observed records. Traditionally, extreme value theory requires the assumption of temporal stationarity, which implies that the historical patterns of recurrence of extreme events are static over time. However, the hydroclimatic system is nonstationary on time scales relevant to extreme value analysis, due to human-mediated and natural environmental change. In this study the generalized extreme value (GEV) distribution is used to assess nonstationarity in annual maximum daily rainfall and streamflow time series at selected meteorological and hydrometric stations in Greece and Cyprus. The GEV distribution parameters (location, scale, and shape) are specified as functions of time-varying covariates and estimated using the conditional density network (CDN) proposed by Cannon (2010), a probabilistic extension of the multilayer perceptron neural network. Model parameters are estimated via the generalized maximum likelihood (GML) approach using the quasi-Newton BFGS optimization algorithm, and the appropriate GEV-CDN model architecture for each meteorological and hydrometric station is selected by fitting increasingly complicated models and choosing the one that minimizes the Akaike information criterion with small-sample-size correction. For all case studies in Greece and Cyprus, different formulations are tested, combining stationary and nonstationary GEV parameters, linear and non-linear CDN architectures, and different input climatic covariates. The climatic covariates include the Southern Oscillation Index (SOI), which describes atmospheric circulation in the eastern tropical Pacific related to the El Niño Southern Oscillation (ENSO), and the Pacific Decadal Oscillation (PDO) index that varies on an interdecadal
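
    Stripped of the neural network, the core of this approach is a GEV likelihood whose parameters depend on covariates. Below is a minimal Python sketch, with entirely synthetic data and a single hypothetical SOI-like covariate, of a nonstationary GEV fit in which the location parameter is linear in the covariate; a Nelder-Mead optimizer stands in for the paper's BFGS/GML machinery.

    ```python
    # Nonstationary GEV: location = b0 + b1 * covariate, constant scale/shape.
    # Synthetic data; note scipy's shape c is the negative of the usual GEV xi.
    import numpy as np
    from scipy.stats import genextreme
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    soi = rng.normal(size=60)                                  # covariate values
    annmax = genextreme.rvs(c=-0.1, loc=30 + 2 * soi, scale=5, random_state=1)

    def nll(p):
        b0, b1, log_scale, c = p
        return -genextreme.logpdf(annmax, c, loc=b0 + b1 * soi,
                                  scale=np.exp(log_scale)).sum()

    fit = minimize(nll, x0=[30.0, 0.0, np.log(5.0), -0.1], method="Nelder-Mead")
    b0, b1, log_scale, c = fit.x
    print(f"location = {b0:.2f} + {b1:.2f}*SOI, scale = {np.exp(log_scale):.2f}, shape = {c:.2f}")
    ```

    In the same spirit as the model-selection step above, this fit could be compared against a stationary fit (the covariate coefficient fixed at zero) using the Akaike information criterion with small-sample correction.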

  2. Polygenic determinants in extremes of high-density lipoprotein cholesterol.

    PubMed

    Dron, Jacqueline S; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A; Robinson, John F; McIntyre, Adam D; Ban, Matthew R; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J; Lettre, Guillaume; Tardif, Jean-Claude; Hegele, Robert A

    2017-11-01

    HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
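
    The polygenic trait score in this abstract is, at its core, a weighted sum of trait-associated allele counts, with individuals in the tails of the score distribution flagged as polygenic cases. A minimal sketch follows, with hypothetical genotypes, effect-size weights, and decile cutoffs; the study's actual variant panel and thresholds are not reproduced here.

    ```python
    # Toy polygenic score: weighted sum of 0/1/2 allele counts per person,
    # then flag score extremes. All weights and genotypes are hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    n_people, n_snps = 1000, 40
    genotypes = rng.integers(0, 3, size=(n_people, n_snps))   # allele counts
    weights = rng.normal(0, 0.1, size=n_snps)                 # per-SNP effect sizes

    score = genotypes @ weights
    lo, hi = np.percentile(score, [10, 90])                   # decile cutoffs
    extreme_polygenic = (score < lo) | (score > hi)
    print(f"{extreme_polygenic.mean():.0%} of subjects flagged as score extremes")
    ```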

  3. The Extreme Ultraviolet Explorer

    NASA Technical Reports Server (NTRS)

    Malina, R. F.; Bowyer, S.; Lampton, M.; Finley, D.; Paresce, F.; Penegor, G.; Heetderks, H.

    1982-01-01

    The Extreme Ultraviolet Explorer Mission is described. The purpose of this mission is to search the celestial sphere for astronomical sources of extreme ultraviolet (EUV) radiation (100 to 1000 Å). The search will be accomplished with the use of three EUV telescopes, each sensitive to a different band within the EUV range. A fourth telescope will perform a higher-sensitivity search of a limited sample of the sky in a single EUV band. In six months, the entire sky will be scanned at a sensitivity level comparable to existing surveys in other, more traditional astronomical bandpasses.

  4. A Novel Videography Method for Generating Crack-Extension Resistance Curves in Small Bone Samples

    PubMed Central

    Katsamenis, Orestis L.; Jenkins, Thomas; Quinci, Federico; Michopoulou, Sofia; Sinclair, Ian; Thurner, Philipp J.

    2013-01-01

    Assessment of bone quality is an emerging solution for quantifying the effects of bone pathology or treatment. Perhaps one of the most important parameters characterising bone quality is the toughness behaviour of bone. In particular, fracture toughness is becoming a popular means of evaluating bone quality. The method is moving from a single-value approach that models bone as a linear-elastic material (using the stress intensity factor, K) towards full crack-extension resistance curves (R-curves) using a non-linear model (the strain energy release rate in J-R curves). However, for explanted human bone or small animal bones, there are difficulties in measuring crack-extension resistance curves due to size constraints at the millimetre and sub-millimetre scale. This research proposes a novel “whitening front tracking” method that uses videography to generate full fracture resistance curves in small bone samples where crack propagation cannot typically be observed. Here we present this method on sharp edge notched samples (<1 mm×1 mm×Length) prepared from four human femora tested in three-point bending. Each sample was loaded in a mechanical tester with the crack propagation recorded using videography and analysed using an algorithm to track the whitening (damage) zone. Using the “whitening front tracking” method, full R-curves and J-R curves could be generated for these samples. The curves for this antiplane longitudinal orientation were similar to those found in the literature, being between the published longitudinal and transverse orientations. The proposed technique shows the ability to generate full “crack” extension resistance curves by tracking the whitening front propagation, overcoming the small size limitations and the single-value approach. PMID:23405186

  5. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
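
    The multinomial-weighting formulation is compact enough to sketch directly. The following NumPy fragment (Python rather than the paper's R, with hypothetical data and sample sizes) draws the full matrix of bootstrap weights in one call and computes every bootstrap replication of Pearson's correlation from weighted first and second moments, with no loop over resampled datasets.

    ```python
    # Multinomial-weighting bootstrap: replications of a moment-based statistic
    # via matrix products instead of explicit resampling. Data are simulated.
    import numpy as np

    rng = np.random.default_rng(3)
    n, B = 100, 5000
    x, y = rng.normal(size=n), rng.normal(size=n)

    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n    # B x n weights

    mx, my = W @ x, W @ y                                      # first moments
    mxy, mxx, myy = W @ (x * y), W @ (x * x), W @ (y * y)      # second moments
    r_boot = (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

    print("bootstrap 95% CI for Pearson r:", np.percentile(r_boot, [2.5, 97.5]))
    ```

    Because the statistic is built entirely from sample moments, all B replications reduce to a few (B × n)-by-n matrix products, which is exactly where the vectorized speedup described above comes from.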

  6. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-10-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
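
    A minimal peaks-over-threshold sketch in Python shows the GPD step of such an analysis; the data are synthetic and a fixed 95th-percentile threshold stands in for the paper's seasonally moving threshold.

    ```python
    # Fit a Generalized Pareto Distribution to threshold exceedances.
    # Hypothetical daily total-ozone values in Dobson units (DU).
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(4)
    ozone = rng.normal(330, 30, size=5000)

    u = np.percentile(ozone, 95)                    # high threshold (fixed here)
    exceed = ozone[ozone > u] - u
    c, loc, scale = genpareto.fit(exceed, floc=0)   # tail shape and scale

    p10 = genpareto.sf(10.0, c, loc=0, scale=scale) # P(exceedance > 10 DU | > u)
    print(f"shape={c:.3f}, scale={scale:.2f}, tail prob={p10:.3f}")
    ```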

  7. Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.

    2010-05-01

    In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.

  8. Polygenic determinants in extremes of high-density lipoprotein cholesterol[S

    PubMed Central

    Dron, Jacqueline S.; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A.; Robinson, John F.; McIntyre, Adam D.; Ban, Matthew R.; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J.; Lettre, Guillaume; Tardif, Jean-Claude

    2017-01-01

    HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. PMID:28870971

  9. Chironomid midges (Diptera, chironomidae) show extremely small genome sizes.

    PubMed

    Cornette, Richard; Gusev, Oleg; Nakahara, Yuichi; Shimura, Sachiko; Kikawada, Takahiro; Okuda, Takashi

    2015-06-01

    Chironomid midges (Diptera; Chironomidae) are found in various environments from the high Arctic to the Antarctic, including temperate and tropical regions. In many freshwater habitats, members of this family are among the most abundant invertebrates. In the present study, the genome sizes of 25 chironomid species were determined by flow cytometry, and the resulting C-values ranged from 0.07 to 0.20 pg DNA (i.e. from about 68 to 195 Mbp). These genome sizes were uniformly very small and included, to our knowledge, the smallest genome sizes recorded to date among insects. A small proportion of transposable elements and short introns were suggested to contribute to the reduction of genome sizes in chironomids. We discuss the possible developmental and physiological advantages of having a small genome size and the putative implications for the ecological success of the family Chironomidae.

  10. Air sampling with solid phase microextraction

    NASA Astrophysics Data System (ADS)

    Martos, Perry Anthony

    There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly, simple to use, yet with equal, or better, detection limits, accuracy and precision than standard methods. Solid phase microextraction (SPME) satisfies the conditions for new air sampling methods. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with any conventional air sampling methods. Yet, air sampling with SPME requires no pumps or solvents, is re-usable, is extremely simple to use, is completely compatible with current chromatographic equipment, and requires only a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of analyte distribution coefficients for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the actual chromatogram from the analysis of the PDMS air sampler will yield the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinyl benzene (PDMS/DVB) onto which o-(2,3,4,5,6-pentafluorobenzyl) hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as small as 5 seconds
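
    One plausible reading of the calibration idea, as a hedged Python sketch: fit the linear log K versus retention-index relationship from calibrants, then convert the mass extracted by the fiber to an air concentration through the standard SPME relation n = K·Vf·C. All numbers (retention indices, log K values, fiber volume, extracted mass) are hypothetical.

    ```python
    # Hypothetical SPME quantification: estimate log K from a linear fit
    # against retention index, then invert n = K * Vf * C for the air
    # concentration. Values are illustrative, not measured data.
    import numpy as np

    ri = np.array([400.0, 500.0, 600.0, 700.0])   # calibrant retention indices
    logK = np.array([2.0, 2.5, 3.0, 3.5])         # known log K at this temperature
    slope, intercept = np.polyfit(ri, logK, 1)

    ri_unknown, mass_ng, Vf_mL = 650.0, 12.0, 0.6e-3   # fiber volume ~0.6 uL
    K = 10 ** (intercept + slope * ri_unknown)
    conc = mass_ng / (K * Vf_mL)                   # ng per mL of air
    print(f"log K = {intercept + slope * ri_unknown:.2f}, C_air = {conc:.3g} ng/mL")
    ```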

  11. How extreme are extremes?

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2016-04-01

    High temperatures have an impact on the energy balance of any living organism and on the operational capabilities of critical infrastructures. Heat-wave indicators have mainly been developed with the aim of capturing the potential impacts on specific sectors (agriculture, health, wildfires, transport, power generation and distribution). However, the ability to capture the occurrence of extreme temperature events is an essential property of a multi-hazard extreme climate indicator. The aim of this study is to develop a standardized heat-wave indicator that can be combined with other indices in order to describe multiple hazards in a single indicator. The proposed approach can be used to quantify the strength of a given extreme. As a matter of fact, extremes are usually distributed following exponential or double-exponential functions, and it is difficult to quickly assess how strong an extreme event was from its magnitude alone. The proposed approach simplifies the quantitative and qualitative communication of extreme magnitudes.
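
    One way to realize such a standardized indicator, sketched here under the assumption that the Gumbel (double-exponential) distribution is an adequate extreme-value model: map a new event's magnitude through the CDF fitted to past extremes, giving a 0-1 severity score that is comparable across hazards. The temperature data and event value below are hypothetical.

    ```python
    # Standardize an extreme's magnitude via a fitted Gumbel CDF.
    import numpy as np
    from scipy.stats import gumbel_r

    annual_max_T = gumbel_r.rvs(loc=35, scale=2, size=50, random_state=10)

    loc, scale = gumbel_r.fit(annual_max_T)
    event = 41.0                                    # new heat extreme (deg C)
    score = gumbel_r.cdf(event, loc, scale)         # standardized 0-1 indicator
    print(f"standardized severity: {score:.3f} (return period ~ {1/(1-score):.0f} yr)")
    ```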

  12. Reliability and validity of the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower limb musculoskeletal disorders.

    PubMed

    Negahban, Hossein; Hessam, Masumeh; Tabatabaei, Saeid; Salehi, Reza; Sohani, Soheil Mansour; Mehravar, Mohammad

    2014-01-01

    The aim was to culturally translate and validate the Persian lower extremity functional scale (LEFS) in a heterogeneous sample of outpatients with lower extremity musculoskeletal disorders (n = 304). This is a prospective methodological study. After a standard forward-backward translation, psychometric properties were assessed in terms of test-retest reliability, internal consistency, construct validity, dimensionality, and ceiling or floor effects. Acceptable values of the intraclass correlation coefficient (>0.70) and Cronbach's alpha coefficient (>0.70) were obtained for the Persian LEFS. Correlations between the Persian LEFS and Short-Form 36 Health Survey (SF-36) subscales of the Physical Health component (rs range = 0.38-0.78) were higher than correlations between the Persian LEFS and SF-36 subscales of the Mental Health component (rs range = 0.15-0.39). A corrected item-total correlation of >0.40 (Spearman's rho) was obtained for all items of the Persian LEFS. Horn's parallel analysis detected a total of two factors. No ceiling or floor effects were detected for the Persian LEFS. The Persian version of the LEFS is a reliable and valid instrument that can be used to measure functional status in Persian-speaking patients with different musculoskeletal disorders of the lower extremity. Implications for Rehabilitation: The Persian lower extremity functional scale (LEFS) is a reliable, internally consistent and valid instrument, with no ceiling or floor effects, to determine functional status of heterogeneous patients with musculoskeletal disorders of the lower extremity. The Persian version of the LEFS can be used in clinical and research settings to measure function in Iranian patients with different musculoskeletal disorders of the lower extremity.
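
    For reference, one of the statistics reported above is easy to compute by hand. Here is a short Python sketch of Cronbach's alpha on a hypothetical item-response matrix; random responses give alpha near 0, whereas a coherent scale targets the >0.70 criterion used in the study.

    ```python
    # Cronbach's alpha from an item matrix (rows = patients, columns = items).
    # Data are random placeholders, not LEFS responses.
    import numpy as np

    rng = np.random.default_rng(11)
    items = rng.integers(0, 5, size=(304, 20)).astype(float)   # 20 items, 0-4

    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - sum_item_var / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")
    ```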

  13. Density-based clustering of small peptide conformations sampled from a molecular dynamics simulation.

    PubMed

    Kim, Minkyoung; Choi, Seung-Hoon; Kim, Junhyoung; Choi, Kihang; Shin, Jae-Min; Kang, Sang-Kee; Choi, Yun-Jaie; Jung, Dong Hyun

    2009-11-01

    This study describes the application of a density-based algorithm to clustering small peptide conformations after a molecular dynamics simulation. We propose a clustering method for small peptide conformations that enables adjacent clusters to be separated more clearly on the basis of neighbor density. Neighbor density means the number of neighboring conformations; if a conformation has too few neighboring conformations, it is considered noise or an outlier and is excluded from the list of cluster members. With this approach, we can easily identify clusters in which the members are densely crowded in the conformational space, and we can safely avoid misclustering individual clusters linked by noise or outliers. Consideration of neighbor density significantly improves the efficiency of clustering small peptide conformations sampled from molecular dynamics simulations and can be used for predicting peptide structures.
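
    The neighbor-density idea is the same principle behind DBSCAN, a standard density-based algorithm (not the paper's own implementation). In the sketch below, hypothetical low-dimensional conformation features stand in for real MD output, and points with too few neighbors within a radius are labeled noise (-1) and excluded from clusters.

    ```python
    # Density-based clustering with noise exclusion via DBSCAN.
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(5)
    conf = np.vstack([rng.normal(0, 0.3, (200, 4)),     # dense cluster A
                      rng.normal(3, 0.3, (200, 4)),     # dense cluster B
                      rng.uniform(-2, 5, (30, 4))])     # sparse noise/outliers

    labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(conf)
    print("clusters:", sorted(set(labels) - {-1}),
          "| noise points:", (labels == -1).sum())
    ```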

  14. Investigation of Phase Transition-Based Tethered Systems for Small Body Sample Capture

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Scharf, Daniel; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology to investigate the potential that shape memory actuation has to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and possible return to Earth. We investigate the structural dynamics and controllability aspects of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  15. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    ERIC Educational Resources Information Center

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  16. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using M"plus"

    ERIC Educational Resources Information Center

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  17. Using Extreme Groups Strategy When Measures Are Not Normally Distributed.

    ERIC Educational Resources Information Center

    Fowler, Robert L.

    1992-01-01

    A Monte Carlo simulation explored how to optimize power in the extreme groups strategy when sampling from nonnormal distributions. Results show that the optimum percent for the extreme group selection was approximately the same for all population shapes, except the extremely platykurtic (uniform) distribution. (SLD)

  18. Relationship between extreme ultraviolet microflares and small-scale magnetic fields in the quiet Sun

    NASA Astrophysics Data System (ADS)

    Jiang, Fayu; Zhang, Jun; Yang, Shuhong

    2016-04-01

    Microflares are small dynamic signatures observed in X-ray and extreme-ultraviolet channels. Because of their impulsive emission enhancements and wide distribution, they are thought to be closely related to coronal heating. By using the high-resolution 171 Å images from the Atmospheric Imaging Assembly and the line-of-sight magnetograms obtained by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory, we trace 10794 microflares in a quiet region near the disk center with a field of view of 960″ × 1068″ during 24 hr. The microflares have an occurrence rate of 4.4 × 10³ hr⁻¹ extrapolated over the whole Sun. Their average brightness, size, and lifetime are 1.7 I₀ (of the quiet Sun), 9.6 Mm², and 3.6 min, respectively. There exists a mutual positive correlation between the microflares' brightness, area, and lifetime. In general, the microflares distribute uniformly across the solar disk, but form network patterns locally, which are similar to and matched with the magnetic network structures. Typical cases show that the microflares prefer to occur in magnetic cancellation regions of network boundaries. We roughly calculate the upper limit of energy flux supplied by the microflares and find that the result is still a factor of ~15 below the coronal heating requirement.

  19. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    PubMed

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively a Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
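
    The simulation logic of this abstract is easy to reproduce in miniature. The following Python sketch (with a different number of replicates and only the Shapiro-Wilk test) estimates how often small Gaussian samples are correctly retained and lognormal samples correctly rejected at alpha = 0.05.

    ```python
    # Estimate Shapiro-Wilk retention/rejection rates at small sample sizes.
    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(6)
    for n in (30, 60):
        gauss_keep = np.mean([shapiro(rng.normal(size=n)).pvalue >= 0.05
                              for _ in range(500)])
        lognorm_flag = np.mean([shapiro(rng.lognormal(size=n)).pvalue < 0.05
                                for _ in range(500)])
        print(f"n={n}: Gaussian retained {gauss_keep:.2f}, "
              f"lognormal rejected {lognorm_flag:.2f}")
    ```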

  20. Extreme Programming: Maestro Style

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2009-01-01

    "Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme

  1. Nanoparticle functionalised small-core suspended-core fibre - a novel platform for efficient sensing.

    PubMed

    Doherty, Brenda; Csáki, Andrea; Thiele, Matthias; Zeisberger, Matthias; Schwuchow, Anka; Kobelke, Jens; Fritzsche, Wolfgang; Schmidt, Markus A

    2017-02-01

    Detecting small quantities of specific target molecules is of major importance within bioanalytics for efficient disease diagnostics. One promising sensing approach relies on combining plasmonically-active waveguides with microfluidics yielding an easy-to-use sensing platform. Here we introduce suspended-core fibres containing immobilised plasmonic nanoparticles surrounding the guiding core as a concept for an entirely integrated optofluidic platform for efficient refractive index sensing. Due to the extremely small optical core and the large adjacent microfluidic channels, over two orders of magnitude of nanoparticle coverage densities have been accessed with millimetre-long sample lengths showing refractive index sensitivities of 170 nm/RIU for aqueous analytes where the fibre interior is functionalised by gold nanospheres. Our concept represents a fully integrated optofluidic sensing system demanding small sample volumes and allowing for real-time analyte monitoring, both of which are highly relevant within invasive bioanalytics, particularly within molecular disease diagnostics and environmental science.

  2. Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology

    PubMed Central

    Vavrek, Matthew J.

    2015-01-01

    Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather, results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
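
    The subsampling experiment translates directly into code. Here is a hedged Python sketch: synthetic log-log measurements with a built-in allometric slope of 1.25, a normal-approximation confidence interval, and arbitrary subsample sizes stand in for the study's Alligator data and exact protocol.

    ```python
    # How often does a small subsample still detect allometry (slope != 1)?
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(7)
    n_full = 200
    logL = rng.uniform(1, 3, n_full)                      # log skull length
    logW = 1.25 * logL + rng.normal(0, 0.3, n_full)       # positive allometry

    for n in (10, 20, 50):
        detected = 0
        for _ in range(500):
            idx = rng.choice(n_full, size=n, replace=False)
            fit = linregress(logL[idx], logW[idx])
            ci = 1.96 * fit.stderr                        # ~95% CI for slope
            if not (fit.slope - ci <= 1.0 <= fit.slope + ci):
                detected += 1                             # CI excludes isometry
        print(f"n={n}: allometry detected in {detected/500:.0%} of subsamples")
    ```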

  3. STARDUST and HAYABUSA: Sample Return Missions to Small Bodies in the Solar System

    NASA Technical Reports Server (NTRS)

    Sandford, S. A.

    2005-01-01

    There are currently two active spacecraft missions designed to return samples to Earth from small bodies in our Solar System. STARDUST will return samples from the comet Wild 2, and HAYABUSA will return samples from the asteroid Itokawa. On January 3, 2004, the STARDUST spacecraft made the closest ever flyby (236 km) of the nucleus of a comet - Comet Wild 2. During the flyby the spacecraft collected samples of dust from the coma of the comet. These samples will be returned to Earth on January 15, 2006. After a brief preliminary examination to establish the nature of the returned samples, they will be made available to the general scientific community for study. The HAYABUSA spacecraft arrived at the Near Earth Asteroid Itokawa in September 2005 and is currently involved in taking remote sensing data from the asteroid. Several practice landings have been made and a sample collection landing will be made soon. The collected sample will be returned to Earth in June 2007. During my talk I will discuss the scientific goals of the STARDUST and HAYABUSA missions and provide an overview of their designs and flights to date. I will also show some of the exciting data returned by these spacecraft during their encounters with their target objects.

  4. A Fast Reduced Kernel Extreme Learning Machine.

    PubMed

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on big datasets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of sufficient support vectors. Experimental results on a wide variety of real-world small- and large-instance-size applications in the context of binary classification, multi-class problems and regression are then reported to show that RKELM can perform at a competitive level of generalization performance as the SVM/LS-SVM at only a fraction of the computational effort incurred. Copyright © 2015 Elsevier Ltd. All rights reserved.
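
    A minimal sketch of the RKELM idea for regression, under assumed hyperparameters (an RBF kernel, m = 50 random support points, ridge parameter lambda): select the support subset at random, then solve one regularized least-squares problem, with no iterative support-vector selection.

    ```python
    # Reduced-kernel regression: random support points + a single linear solve.
    # Data, kernel width, and regularization are hypothetical choices.
    import numpy as np

    rng = np.random.default_rng(8)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 500)

    m, lam, gamma = 50, 1e-3, 1.0
    support = X[rng.choice(len(X), size=m, replace=False)]

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K = rbf(X, support)                                   # n x m reduced kernel
    beta = np.linalg.solve(K.T @ K + lam * np.eye(m), K.T @ y)

    y_hat = rbf(X, support) @ beta
    print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
    ```

    The single linear solve replaces the iterative optimization of SVM training, which is where the cost savings claimed above come from.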

  5. Porosity estimation by semi-supervised learning with sparsely available labeled samples

    NASA Astrophysics Data System (ADS)

    Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi

    2017-09-01

    This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was proposed and shows good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the manual, labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.

  6. A Test-Length Correction to the Estimation of Extreme Proficiency Levels

    ERIC Educational Resources Information Center

    Magis, David; Beland, Sebastien; Raiche, Gilles

    2011-01-01

    In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…

  7. DESIGN OF A SIMPLE SLOW COOLING DEVICE FOR CRYOPRESERVATION OF SMALL BIOLOGICAL SAMPLES.

    PubMed

    de Paz, Leonardo Juan; Robert, Maria Celeste; Graf, Daniel Adolfo; Guibert, Edgardo Elvio; Rodriguez, Joaquin Valentin

    2015-01-01

    Slow cooling is a cryopreservation methodology in which samples are cooled to their storage temperature at controlled cooling rates. We describe the design, construction, and evaluation of a simple and low-cost device for slow cooling of small biological samples. The device was constructed based on Pye's freezer idea. A Dewar flask filled with liquid nitrogen was used as a heat sink, and a methanol bath containing the sample was cooled at constant rates using copper bars as heat conductors. Sample temperature may be lowered at controlled cooling rates (ranging from 0.4°C/min to 6.0°C/min) down to ~-60°C, after which samples can be stored at lower temperatures. An example involving the cryopreservation of the Neuro-2A cell line showed a marked influence of cooling rate on post-preservation cell viability, with optimal values between 2.6 and 4.6°C/min. The cooling device proved to be a valuable alternative to more expensive systems, allowing the assessment of different cooling rates to determine the optimal condition for cryopreservation of such samples.

  8. A Rational Approach for Discovering and Validating Cancer Markers in Very Small Samples Using Mass Spectrometry and ELISA Microarrays

    DOE PAGES

    Zangar, Richard C.; Varnum, Susan M.; Covington, Chandice Y.; ...

    2004-01-01

    Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples such as nipple aspirate fluid (NAF) or early-stage tumors are inherently small. Other samples such as serum are collected in larger volumes but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.

  9. Research Designs for Intervention Research with Small Samples II: Stepped Wedge and Interrupted Time-Series Designs.

    PubMed

    Fok, Carlotta Ching Ting; Henry, David; Allen, James

    2015-10-01

    The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, and Brown, Prevention Science, 2015) in this special section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples.

  10. Research Designs for Intervention Research with Small Samples II: Stepped Wedge and Interrupted Time-Series Designs

    PubMed Central

    Ting Fok, Carlotta Ching; Henry, David; Allen, James

    2015-01-01

    The stepped wedge design (SWD) and the interrupted time-series design (ITSD) are two alternative research designs that maximize efficiency and statistical power with small samples when contrasted to the operating characteristics of conventional randomized controlled trials (RCT). This paper provides an overview and introduction to previous work with these designs, and compares and contrasts them with the dynamic wait-list design (DWLD) and the regression point displacement design (RPDD), which were presented in a previous article (Wyman, Henry, Knoblauch, and Brown, 2015) in this Special Section. The SWD and the DWLD are similar in that both are intervention implementation roll-out designs. We discuss similarities and differences between the SWD and DWLD in their historical origin and application, along with differences in the statistical modeling of each design. Next, we describe the main design characteristics of the ITSD, along with some of its strengths and limitations. We provide a critical comparative review of strengths and weaknesses in application of the ITSD, SWD, DWLD, and RPDD as small sample alternatives to application of the RCT, concluding with a discussion of the types of contextual factors that influence selection of an optimal research design by prevention researchers working with small samples. PMID:26017633

  11. The relationship between lower extremity alignment and Medial Tibial Stress Syndrome among non-professional athletes

    PubMed Central

    Raissi, Golam Reza D; Cherati, Afsaneh D Safar; Mansoori, Kourosh D; Razi, Mohammad D

    2009-01-01

    Objective: To determine the relationship between lower extremity alignment and MTSS amongst non-professional athletes. Design: In a prospective study, sixty-six subjects were evaluated. Bilateral navicular drop test, Q angle, Achilles angle, tibial angle, and intermalleolar and intercondylar distance were measured. In addition, each runner's height, body mass, history of previous running injury, and running experience were recorded. Runners were followed for 17 weeks to determine occurrence of MTSS. Results: The overall injury rate for MTSS was 19.7%. The MTSS injury rate in girls (22%) was not significantly different from the rate in boys (14.3%). Most MTSS injuries were induced after 60 hours of exercise, which did not differ between boys and girls. There was a significant difference in right and left navicular drop (ND) in athletes with MTSS. MTSS had no significant correlation with the other variables, including Quadriceps, Tibia and Achilles angles, intercondylar and intermalleolar lengths, and lower extremity lengths. Limitations: All measurements performed in this study were uniplanar and static. The small sample size was our main limitation. The accurate assessment of participants with a previous history of anterior leg pain for MTSS was another limitation. Conclusion: Although a significant relationship between navicular drop and MTSS was found in this study, there was not any significant relationship between lower extremity alignment and MTSS in our sample. PMID:19519909

  12. Nonevaporable getter coating chambers for extreme high vacuum

    DOE PAGES

    Stutzman, Marcy L.; Adderley, Philip A.; Mamun, Md Abdullah Al; ...

    2018-03-01

    Techniques for NEG coating a large-diameter chamber are presented along with vacuum measurements in the chamber using several pumping configurations, with base pressure as low as 1.56 × 10^-12 Torr (N2 equivalent) with only a NEG coating and a small ion pump. We then describe modifications to the NEG coating process to coat complex-geometry chambers for ultra-cold atom trap experiments. Surface analysis of NEG coated samples is used to measure the composition and morphology of the thin films. Finally, pressure measurements are compared for two NEG coated polarized electron source chambers: the 130 kV polarized electron source at Jefferson Lab and the upgraded 350 kV polarized electron source, both of which are approaching or within the extreme high vacuum (XHV) range, defined as P < 7.5 × 10^-13 Torr.

  13. Nonevaporable getter coating chambers for extreme high vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stutzman, Marcy L.; Adderley, Philip A.; Mamun, Md Abdullah Al

    Techniques for NEG coating a large-diameter chamber are presented along with vacuum measurements in the chamber using several pumping configurations, with base pressure as low as 1.56 × 10^-12 Torr (N2 equivalent) with only a NEG coating and a small ion pump. We then describe modifications to the NEG coating process to coat complex-geometry chambers for ultra-cold atom trap experiments. Surface analysis of NEG coated samples is used to measure the composition and morphology of the thin films. Finally, pressure measurements are compared for two NEG coated polarized electron source chambers: the 130 kV polarized electron source at Jefferson Lab and the upgraded 350 kV polarized electron source, both of which are approaching or within the extreme high vacuum (XHV) range, defined as P < 7.5 × 10^-13 Torr.

  14. DRME: Count-based differential RNA methylation analysis at small sample size scenario.

    PubMed

    Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia

    2016-04-15

    Differential methylation, which concerns differences in the degree of epigenetic regulation via methylation between two conditions, has been formulated with beta or beta-binomial distributions to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small-sample scenarios with the discrete read counts of sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation differs significantly from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small-sample scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA bisulfite sequencing and PAR-CLIP. The code together with a MeRIP-Seq dataset is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. Copyright © 2016 Elsevier Inc. All rights reserved.
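
    To make the modeling issue concrete, here is a generic beta-binomial likelihood-ratio test on (methylated reads, total reads) counts in two conditions, written in Python. It illustrates the kind of count-based, overdispersed model the abstract argues for, but it is not the DRME algorithm itself, and the replicate counts are hypothetical.

    ```python
    # Beta-binomial LRT: separate (alpha, beta) per condition vs one shared pair.
    import numpy as np
    from scipy.special import betaln, gammaln
    from scipy.optimize import minimize
    from scipy.stats import chi2

    def bb_nll(p, k, n):
        a, b = np.exp(p)                            # positive alpha, beta
        return -np.sum(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                       + betaln(k + a, n - k + b) - betaln(a, b))

    # Hypothetical replicates: methylated reads k out of n total reads.
    k1, n1 = np.array([30, 25, 35]), np.array([100, 90, 110])
    k2, n2 = np.array([55, 60, 50]), np.array([100, 100, 95])

    fit = lambda k, n: minimize(bb_nll, [0.0, 0.0], args=(k, n), method="Nelder-Mead")
    ll_sep = -(fit(k1, n1).fun + fit(k2, n2).fun)          # separate models
    ll_null = -fit(np.r_[k1, k2], np.r_[n1, n2]).fun       # shared model
    p_value = chi2.sf(2 * (ll_sep - ll_null), df=2)
    print(f"LRT p-value for differential methylation: {p_value:.3g}")
    ```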

  15. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

    This paper presents a reliability evaluation methodology to obtain the statistical reliability information of memory chips for space applications when the test sample size needs to be kept small because of the high cost of radiation-hardened memories.

  16. Nonuniformity of diffusing capacity from small alveolar gas samples is increased in smokers.

    PubMed

    Cotton, D J; Mink, J T; Graham, B L

    1998-01-01

    Although centrilobular emphysema and small airway, interstitial, and alveolar inflammation can be detected pathologically in the lungs of smokers with relatively well preserved lung function, these changes are difficult to assess using available physiological tests. Because submaximal single breath washout (SBWSM) manoeuvres improve the detection of abnormalities in ventilation inhomogeneity in the lung periphery in smokers compared with traditional vital capacity manoeuvres, SBWSM manoeuvres were used in this study to measure temporal differences in diffusing capacity using a rapid-response carbon monoxide analyzer. The aim was to determine whether abnormalities in the lung periphery can be detected in smokers with normal forced expired volumes in 1 s using the three-equation diffusing capacity (DLcoSB-3EQ) among small alveolar gas samples and whether the abnormalities correlate with increases in peripheral ventilation inhomogeneity. Cross-sectional study in 21 smokers and 21 nonsmokers, all with normal forced exhaled flow rates. Both smokers and nonsmokers performed SBWSM manoeuvres consisting of slow inhalation of test gas from functional residual capacity to one-half inspiratory capacity with either 0 or 10 s of breath holding and slow exhalation to residual volume (RV). They also performed conventional vital capacity single breath (SBWVC) manoeuvres consisting of slow inhalation of test gas from RV to total lung capacity and, without breath holding, slow exhalation to RV. DLcoSB-3EQ was calculated from the total alveolar gas sample. DLcoSB-3EQ was also calculated from four equal sequential, simulated aliquots of the total alveolar gas sample. DLcoSB-3EQ values from the four alveolar samples were normalized by expressing each as a percentage of DLcoSB-3EQ from the entire alveolar gas sample. An index of variation (DI) among the small-sample DLcoSB-3EQ values was correlated with the normalized phase III helium slope (Sn) and the mixing efficiency (Emix). For SBWSM, DI was

  17. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies of complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in lower rates of false positives and false negatives, as well as better accuracy and sensitivity values for classifying SNPs, when compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
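
    The three-way decision structure of the SPRT is easy to sketch. Below is a generic Wald SPRT on Bernoulli observations in Python; the hypothesized transmission probabilities and error rates are illustrative stand-ins, not the paper's TDT-based likelihoods. The third outcome, "keep sampling", is what gives the SPRT its third SNP group in the study above.

    ```python
    # Wald SPRT: accumulate the log-likelihood ratio until it crosses a boundary.
    import numpy as np

    def sprt(obs, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
        upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
        llr = 0.0
        for i, x in enumerate(obs, start=1):
            llr += np.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return "associated", i          # H1 accepted after i observations
            if llr <= lower:
                return "not associated", i      # H0 accepted after i observations
        return "keep sampling", len(obs)        # evidence still inconclusive

    rng = np.random.default_rng(9)
    print(sprt(rng.random(80) < 0.7))    # data drawn under the alternative
    print(sprt(rng.random(80) < 0.5))    # data drawn under the null
    ```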

  18. Adolescent exposure to extremely violent movies.

    PubMed

    Sargent, James D; Heatherton, Todd F; Ahrens, M Bridget; Dalton, Madeline A; Tickle, Jennifer J; Beach, Michael L

    2002-12-01

    To determine exposure of young adolescents to extremely violent movies. Cross-sectional school-based survey of middle school students at 15 randomly selected New Hampshire and Vermont middle schools. Each survey contained a unique list of 50 movies, randomly selected from 603 top box office hits from 1988 to 1999, 51 of which were determined by content analysis to contain extremely violent material. Movie titles only were listed, and adolescents were asked to indicate which ones they had seen. Each movie appeared on approximately 470 surveys. We calculated the percentage of students who had seen each movie for a representative subsample of the student population. We also examined characteristics associated with seeing at least one extremely violent movie. Complete survey information was obtained from 5,456 students. The sample was primarily white and equally distributed by gender. On average, extremely violent movies were seen by 28% of the students in the sample (range 4% to 66%). The most popular movie, Scream, was seen by two-thirds of students overall and over 40% of fifth-graders. Other movies with sexualized violent content were seen by many of these adolescents. Examples include The General's Daughter (rated R for "graphic images related to sexual violence including a rape scene and perverse sexuality") and Natural Born Killers (rated R for "extreme violence and graphic carnage, shocking images, language, and sexuality"), seen by 27% and 20%, respectively. Older students, males, those of lower socioeconomic status, and those with poorer school performance were all significantly more likely to have seen at least one extremely violent movie. This study documents widespread exposure of young adolescents to movies with brutal, and often sexualized, violence. Given that many of these films were marketed to teens, better oversight of the marketing practices of the film industry may be warranted.

  19. Characterization and prediction of extreme events in turbulence

    NASA Astrophysics Data System (ADS)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large events at small scales that are intermittent in character. We examine events in the energy dissipation rate and enstrophy that are several tens, hundreds, or even thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence, with Taylor Reynolds numbers spanning a decade, computed with different small-scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than is possible by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).

  20. Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.

    PubMed

    Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A

    2012-01-03

    Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) have rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge to the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage, particularly across AT- and GC-rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low-quantity starting material and tolerant to extremely high AT content sequences. We have used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of sequence data generated, we show that our optimized conditions, which involve a PCR additive (TMAC), produce amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain the complexity of either extremes of base composition. This development will greatly benefit sequencing clinical samples that often require amplification due to low mass of

  1. Relationship between extreme ultraviolet microflares and small-scale magnetic fields in the quiet Sun

    NASA Astrophysics Data System (ADS)

    Jiang, Fayu; Zhang, Jun; Yang, Shuhong

    2015-06-01

    Microflares are small dynamic signatures observed in X-ray and extreme-ultraviolet channels. Because of their impulsive emission enhancements and wide distribution, they are thought to be closely related to coronal heating. By using the high-resolution 171 Å images from the Atmospheric Imaging Assembly and the line-of-sight magnetograms obtained by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory, we trace 10794 microflares in a quiet region near the disk center with a field of view of 960″ × 1068″ during 24 hr. The microflares have an occurrence rate of 4.4 × 10^3 hr^-1 extrapolated over the whole Sun. Their average brightness, size, and lifetime are 1.7 I_0 (of the quiet Sun), 9.6 Mm², and 3.6 min, respectively. There exists a mutual positive correlation between the microflares' brightness, area, and lifetime. In general, the microflares are distributed uniformly across the solar disk, but locally they form network patterns, which are similar to and match the magnetic network structures. Typical cases show that the microflares prefer to occur in magnetic cancellation regions at network boundaries. We roughly calculate the upper limit of the energy flux supplied by the microflares and find that the result is still a factor of ~15 below the coronal heating requirement.

  2. [Crossing borders. The motivation of extreme sportsmen].

    PubMed

    Opaschowski, H W

    2005-08-01

    In his article "Crossing borders -- the motivation of extreme sportsmen" the author systematically gets to the bottom of the question of why extreme sportsmen voluntarily take risks and endanger themselves. Within the scope of a representative sample, 217 extreme sportsmen -- from the fields of mountain biking, trekking and free climbing, canyoning, river rafting and deep sea diving, paragliding, parachuting, bungee jumping and survival training -- give information about their personal motives. What fascinates them? The attraction of risk? The search for sensation? Or dropping out of everyday life? And what comes afterwards? Does the whole of life in the end become an extreme sport? Fact is: they live extremely because they want to move beyond well-trodden paths. To escape the boredom of everyday life they search for the kick, the thrill, the no-limit experience. It is about calculated risk between altitude flight and deep-sea adventure.

  3. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    The sandwich estimator in the generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small-sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z-test should be avoided in the analyses of CRTs with few clusters, even when bias-corrected sandwich estimators are used. With a t-distribution approximation, the Kauermann and Carroll (KC) correction can keep the test size at nominal levels even when the number of clusters is as low as 10, and is robust to moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG) correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t-test and KC correction for CRTs with binary outcomes. The power levels predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that, with appropriate control of type I error rates under small sample sizes, the GEE approach is recommended in CRTs with binary outcomes because of its fewer assumptions and robustness to misspecification of the covariance structure. PMID:25345738
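
    The paper's own power formula is not reproduced in the abstract, so the sketch below uses the standard design-effect calculation for cluster-randomized trials with binary outcomes (a textbook normal-approximation formula, not the KC-corrected t-based formula the authors derive), purely to illustrate the planning question. The function name and the numbers are illustrative.

    ```python
    # Clusters per arm for a two-arm CRT with binary outcome, via the design
    # effect DE = 1 + (m - 1) * ICC applied to the usual two-proportion
    # sample-size formula.
    import math
    from scipy.stats import norm

    def clusters_per_arm(p1, p2, m, icc, alpha=0.05, power=0.80):
        de = 1 + (m - 1) * icc                        # design effect
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        var = p1 * (1 - p1) + p2 * (1 - p2)
        return math.ceil(z**2 * var * de / (m * (p1 - p2)**2))

    print(clusters_per_arm(p1=0.30, p2=0.15, m=50, icc=0.02))  # -> 5
    ```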

  4. Time-integrated sampling of fluvial suspended sediment: a simple methodology for small catchments

    NASA Astrophysics Data System (ADS)

    Phillips, J. M.; Russell, M. A.; Walling, D. E.

    2000-10-01

    Fine-grained (<62.5 µm) suspended sediment transport is a key component of the geochemical flux in most fluvial systems. The highly episodic nature of suspended sediment transport imposes a significant constraint on the design of sampling strategies aimed at characterizing the biogeochemical properties of such sediment. A simple sediment sampler, utilizing ambient flow to induce sedimentation by settling, is described. The sampler can be deployed unattended in small streams to collect time-integrated suspended sediment samples. In laboratory tests involving chemically dispersed sediment, the sampler collected a maximum of 71% of the input sample mass. However, under natural conditions, the existence of composite particles or flocs can be expected to increase the trapping efficiency significantly. Field trials confirmed that the particle size composition and total carbon content of the sediment collected by the sampler were statistically representative of the ambient suspended sediment.

  5. Present-day irrigation mitigates heat extremes

    DOE PAGES

    Thiery, Wim; Davin, Edouard L.; Lawrence, David M.; ...

    2017-02-16

    Irrigation is an essential practice for sustaining global food production and many regional economies. Emerging scientific evidence indicates that irrigation substantially affects mean climate conditions in different regions of the world. Yet how this practice influences climate extremes is currently unknown. Here we use ensemble simulations with the Community Earth System Model to assess the impacts of irrigation on climate extremes. An evaluation of the model performance reveals that irrigation has a small yet overall beneficial effect on the representation of present-day near-surface climate. While the influence of irrigation on annual mean temperatures is limited, we find a large impact on temperature extremes, with a particularly strong cooling during the hottest day of the year (-0.78 K averaged over irrigated land). The strong influence on extremes stems from the timing of irrigation and its influence on land-atmosphere coupling strength. Together these effects result in asymmetric temperature responses, with a more pronounced cooling during hot and/or dry periods. The influence of irrigation is even more pronounced when considering subgrid-scale model output, suggesting that local effects of land management are far more important than previously thought. In conclusion, our results underline that irrigation has substantially reduced our exposure to hot temperature extremes in the past and highlight the need to account for irrigation in future climate projections.

  6. Present-day irrigation mitigates heat extremes

    NASA Astrophysics Data System (ADS)

    Thiery, Wim; Davin, Edouard L.; Lawrence, David M.; Hirsch, Annette L.; Hauser, Mathias; Seneviratne, Sonia I.

    2017-02-01

    Irrigation is an essential practice for sustaining global food production and many regional economies. Emerging scientific evidence indicates that irrigation substantially affects mean climate conditions in different regions of the world. Yet how this practice influences climate extremes is currently unknown. Here we use ensemble simulations with the Community Earth System Model to assess the impacts of irrigation on climate extremes. An evaluation of the model performance reveals that irrigation has a small yet overall beneficial effect on the representation of present-day near-surface climate. While the influence of irrigation on annual mean temperatures is limited, we find a large impact on temperature extremes, with a particularly strong cooling during the hottest day of the year (-0.78 K averaged over irrigated land). The strong influence on extremes stems from the timing of irrigation and its influence on land-atmosphere coupling strength. Together these effects result in asymmetric temperature responses, with a more pronounced cooling during hot and/or dry periods. The influence of irrigation is even more pronounced when considering subgrid-scale model output, suggesting that local effects of land management are far more important than previously thought. Our results underline that irrigation has substantially reduced our exposure to hot temperature extremes in the past and highlight the need to account for irrigation in future climate projections.

  7. The Extreme Universe Space Observatory

    NASA Technical Reports Server (NTRS)

    Adams, Jim; Six, N. Frank (Technical Monitor)

    2002-01-01

    This talk will describe the Extreme Universe Space Observatory (EUSO) mission. EUSO is an ESA mission to explore the most powerful energy sources in the universe. The mission objectives of EUSO are to investigate EECRs, those with energies above 3 × 10^19 eV, and very high-energy cosmic neutrinos. These objectives are directly related to extreme conditions in the physical world and possibly involve the early history of the big bang and the framework of GUTs. EUSO tackles the basic problem posed by the existence of these extreme-energy events. The solution could have a unique impact on fundamental physics, cosmology, and/or astrophysics. At these energies, magnetic deflection is thought to be so small that the EECR component would serve as the particle channel for astronomy. EUSO will make the first measurements of EAS from space by observing atmospheric fluorescence in the Earth's night sky. With measurements of the airshower track, EUSO will determine the energy and arrival direction of these extreme-energy events. EUSO will make high statistics observations of CRs beyond the predicted GZK cutoff energy and widen the channel for high-energy neutrino astronomy. The energy spectra, arrival directions, and shower profiles will be analyzed to distinguish the nature of these events and search for their sources. With EUSO data, we will have the possibility to discover a local EECR source, test Z-burst scenarios and other theories, and look for evidence of the breakdown of the relativity principle at extreme Lorentz factors.

  8. Evaluation applications of instrument calibration research findings in psychology for very small samples

    NASA Astrophysics Data System (ADS)

    Fisher, W. P., Jr.; Petry, P.

    2016-11-01

    Many published research studies document item calibration invariance across samples using Rasch's probabilistic models for measurement. A new approach to outcomes evaluation for very small samples was employed for two workshop series focused on stress reduction and joyful living conducted for health system employees and caregivers since 2012. Rasch-calibrated self-report instruments measuring depression, anxiety and stress, and the joyful living effects of mindfulness behaviors were identified in peer-reviewed journal articles. Items from one instrument were modified for use with a US population, other items were simplified, and some new items were written. Participants provided ratings of their depression, anxiety and stress, and the effects of their mindfulness behaviors before and after each workshop series. The numbers of participants providing both pre- and post-workshop data were low (16 and 14). Analysis of these small data sets produces results showing that, with some exceptions, the item hierarchies defining the constructs retained the same invariant profiles they had exhibited in the published research (correlations (not disattenuated) range from 0.85 to 0.96). In addition, comparisons of the pre- and post-workshop measures for the three constructs showed substantively and statistically significant changes. Implications for program evaluation comparisons, quality improvement efforts, and the organization of communications concerning outcomes in clinical fields are explored.

  9. The NuSTAR Serendipitous Survey: Hunting for the Most Extreme Obscured AGN at >10 keV

    NASA Astrophysics Data System (ADS)

    Lansbury, G. B.; Alexander, D. M.; Aird, J.; Gandhi, P.; Stern, D.; Koss, M.; Lamperti, I.; Ajello, M.; Annuar, A.; Assef, R. J.; Ballantyne, D. R.; Baloković, M.; Bauer, F. E.; Brandt, W. N.; Brightman, M.; Chen, C.-T. J.; Civano, F.; Comastri, A.; Del Moro, A.; Fuentes, C.; Harrison, F. A.; Marchesi, S.; Masini, A.; Mullaney, J. R.; Ricci, C.; Saez, C.; Tomsick, J. A.; Treister, E.; Walton, D. J.; Zappacosta, L.

    2017-09-01

    We identify sources with extremely hard X-ray spectra (i.e., with photon indices of Γ ≲ 0.6) in the 13 deg² NuSTAR serendipitous survey, to search for the most highly obscured active galactic nuclei (AGNs) detected at >10 keV. Eight extreme NuSTAR sources are identified, and we use the NuSTAR data in combination with lower-energy X-ray observations (from Chandra, Swift XRT, and XMM-Newton) to characterize the broadband (0.5-24 keV) X-ray spectra. We find that all of the extreme sources are highly obscured AGNs, including three robust Compton-thick (CT; N_H > 1.5 × 10^24 cm^-2) AGNs at low redshift (z < 0.1) and a likely CT AGN at higher redshift (z = 0.16). Most of the extreme sources would not have been identified as highly obscured based on the low-energy (<10 keV) X-ray coverage alone. The multiwavelength properties (e.g., optical spectra and X-ray-mid-IR luminosity ratios) provide further support for the eight sources being significantly obscured. Correcting for absorption, the intrinsic rest-frame 10-40 keV luminosities of the extreme sources cover a broad range, from ≈5 × 10^42 to 10^45 erg s^-1. The estimated number counts of CT AGNs in the NuSTAR serendipitous survey are in broad agreement with model expectations based on previous X-ray surveys, except for the lowest redshifts (z < 0.07), where we measure a high CT fraction of f_CT^obs = 30 (+16/-12)%. For the small sample of CT AGNs, we find a high fraction of galaxy major mergers (50% ± 33%) compared to control samples of "normal" AGNs.

  10. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
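
    For unit-rate exponentials the two key closed forms are easy to state: the minimum of n draws is itself exponential with rate n (so E[min] = 1/n), and the expected maximum is the harmonic number H_n = 1 + 1/2 + … + 1/n. A quick simulation, under the assumption λ = 1 as above, confirms both; the sample size and replication count are illustrative.

    ```python
    # Monte Carlo check of E[min] = 1/n and E[max] = H_n for Exp(1) samples.
    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 10, 200_000
    x = rng.exponential(1.0, size=(reps, n))
    h_n = sum(1 / i for i in range(1, n + 1))      # harmonic number H_n
    print("E[min]: sim %.4f  theory %.4f" % (x.min(axis=1).mean(), 1 / n))
    print("E[max]: sim %.4f  theory %.4f" % (x.max(axis=1).mean(), h_n))
    ```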

  11. Small RNA profiling of low biomass samples: identification and removal of contaminants

    DOE PAGES

    Heintz-Buschart, Anna; Yusuf, Dilmurat; Kaysen, Anne; ...

    2018-05-14

    Here, sequencing-based analyses of low-biomass samples are known to be prone to misinterpretation due to the potential presence of contaminating molecules derived from laboratory reagents and environments. DNA contamination has been previously reported, yet contamination with RNA is usually considered to be very unlikely due to its inherent instability. Small RNAs (sRNAs) identified in tissues and bodily fluids, such as blood plasma, have implications for physiology and pathology, and therefore the potential to act as disease biomarkers. Thus, the possibility for RNA contaminants demands careful evaluation.

  12. Small RNA profiling of low biomass samples: identification and removal of contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heintz-Buschart, Anna; Yusuf, Dilmurat; Kaysen, Anne

    Here, sequencing-based analyses of low-biomass samples are known to be prone to misinterpretation due to the potential presence of contaminating molecules derived from laboratory reagents and environments. DNA contamination has been previously reported, yet contamination with RNA is usually considered to be very unlikely due to its inherent instability. Small RNAs (sRNAs) identified in tissues and bodily fluids, such as blood plasma, have implications for physiology and pathology, and therefore the potential to act as disease biomarkers. Thus, the possibility for RNA contaminants demands careful evaluation.

  13. Mass amplifying probe for sensitive fluorescence anisotropy detection of small molecules in complex biological samples.

    PubMed

    Cui, Liang; Zou, Yuan; Lin, Ninghang; Zhu, Zhi; Jenkins, Gareth; Yang, Chaoyong James

    2012-07-03

    Fluorescence anisotropy (FA) is a reliable and excellent choice for fluorescence sensing. One of the key factors influencing the FA value for any molecule is its molar mass. As a result, the FA method with functional nucleic acid aptamers has been limited to macromolecules such as proteins and is generally not applicable to the analysis of small molecules, whose molecular masses are too small to produce observable FA changes. We report here a molecular mass amplifying strategy to construct anisotropy aptamer probes for small molecules. The probe is designed in such a way that only when a target molecule binds to the probe does it activate its binding ability to an anisotropy amplifier (a high molecular mass molecule such as a protein), thus significantly increasing the molecular mass and FA value of the probe/target complex. Specifically, a mass amplifying probe (MAP) consists of a targeting aptamer domain against a target molecule and a molecular mass amplifying aptamer domain for the amplifier protein. The probe is initially rendered inactive by a small blocking strand partially complementary to both the target aptamer and the amplifier protein aptamer, so that the mass amplifying aptamer domain does not bind to the amplifier protein unless the probe has been activated by the target. In this way, we prepared two probes, each comprising a target aptamer (for ATP and cocaine, respectively), a thrombin (mass amplifier) aptamer, and a fluorophore. Both probes worked well against their corresponding small molecule targets, and the detection limits for ATP and cocaine were 0.5 μM and 0.8 μM, respectively. More importantly, because FA is less affected by environmental interferences, ATP in cell media and cocaine in urine were directly detected without any tedious sample pretreatment. Our results established that our molecular mass amplifying strategy can be used to design aptamer probes for rapid, sensitive, and selective

  14. 40 CFR Appendix A to Subpart F of... - Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    Appendix A to Subpart F of Part 90—Sampling Plans for Selective Enforcement Auditing of Small Nonroad Engines (40 CFR, Protection of Environment; Spark-Ignition Engines At or Below 19 Kilowatts; Selective Enforcement Auditing).

  15. Magnetic and velocity fields in a dynamo operating at extremely small Ekman and magnetic Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Šimkanin, Ján; Kyselica, Juraj

    2017-12-01

    Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation, and the temperature gradient between the upper and lower boundaries is the source of convection. This study attempts to answer the question of how realistic the geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core, owing to the chosen parameter range.

  16. Robot-assisted upper extremity rehabilitation for cervical spinal cord injuries: a systematic scoping review.

    PubMed

    Singh, Hardeep; Unger, Janelle; Zariffa, José; Pakosh, Maureen; Jaglal, Susan; Craven, B Catharine; Musselman, Kristin E

    2018-01-15

    Purpose: To provide an overview of the feasibility and outcomes of robotic-assisted upper extremity training for individuals with cervical spinal cord injury (SCI), and to identify gaps in current research and articulate future research directions. A systematic search was conducted using Medline, Embase, PsycINFO, CCTR, CDSR, CINAHL and PubMed on June 7, 2017. Search terms included 3 themes: (1) robotics; (2) SCI; (3) upper extremity. Studies using robots for upper extremity rehabilitation among individuals with cervical SCI were included. Identified articles were independently reviewed by two researchers and compared to pre-specified criteria. Disagreements regarding article inclusion were resolved through discussion. The modified Downs and Black checklist was used to assess article quality. Participant characteristics, study and intervention details, training outcomes, robot features, study limitations and recommendations for future studies were abstracted from included articles. Twelve articles (one randomized clinical trial, six case series, five case studies) met the inclusion criteria. Five robots were exoskeletons and three were end-effectors. Sample sizes ranged from 1 to 17 subjects. Articles had variable quality, with quality scores ranging from 8 to 20. Studies had low internal validity, primarily from lack of blinding or of a control group. Individuals with mild-moderate impairments showed the greatest improvements on body structure/function and performance-level measures. This review is limited by the small number of articles, low sample sizes and the diversity of devices, their associated training protocols, and outcome measures. Preliminary evidence suggests robot-assisted interventions are safe, feasible and can reduce active assistance provided by therapists. Implications for rehabilitation: Robot-assisted upper extremity training for individuals with cervical spinal cord injury is safe, feasible and can reduce hands-on assistance provided

  17. Sensitive power compensated scanning calorimeter for analysis of phase transformations in small samples

    NASA Astrophysics Data System (ADS)

    Lopeandía, A. F.; Cerdó, L. l.; Clavaguera-Mora, M. T.; Arana, Leonel R.; Jensen, K. F.; Muñoz, F. J.; Rodríguez-Viejo, J.

    2005-06-01

    We have designed and developed a sensitive scanning calorimeter for use with microgram or submicrogram thin-film or powder samples. Semiconductor processing techniques are used to fabricate membrane-based microreactors with a small heat capacity of the addenda, 120 nJ/K at room temperature. At heating rates below 10 K/s the heat released or absorbed by the sample during a given transformation is compensated through a resistive Pt heater by a digital controller, so that the calorimeter works as a power-compensated device. Its use and dynamic sensitivity are demonstrated by analyzing the melting behavior of thin films of indium and high-density polyethylene. Melting enthalpies in the range of 40-250 μJ for sample masses on the order of 1.5 μg have been measured with accuracy better than 5% at heating rates of ~0.2 K/s. The signal-to-noise ratio, limited by the electronic setup, is 200 nW.

  18. Extraction and labeling methods for microarrays using small amounts of plant tissue.

    PubMed

    Stimpson, Alexander J; Pereira, Rhea S; Kiss, John Z; Correll, Melanie J

    2009-03-01

    Procedures were developed to maximize the yield of high-quality RNA from small amounts of plant biomass for microarrays. Two disruption techniques (bead milling and pestle and mortar) were compared for the yield and the quality of RNA extracted from 1-week-old Arabidopsis thaliana seedlings (approximately 0.5-30 mg total biomass). The pestle and mortar method of extraction showed enhanced RNA quality for the smaller biomass samples compared with the bead milling technique, although the quality in bead milling could be improved with additional cooling steps. The RNA extracted with the pestle and mortar technique was further tested to determine whether the small quantity of RNA (500 ng-7 μg) was appropriate for microarray analyses. A new method of low-quantity RNA labeling for microarrays (NuGEN Technologies, Inc.) was used on five 7-day-old seedlings (approximately 2.5 mg fresh weight total) of Arabidopsis that were grown in the dark and exposed to 1 h of red light or continued dark. Microarray analyses were performed on a small plant sample (five seedlings; approximately 2.5 mg) using these methods and compared with extractions performed with larger biomass samples (approximately 500 roots). Expression changes of many well-known light-regulated genes overlapped between the small plant samples and the larger biomass samples, and the relative expression levels of selected genes were confirmed with quantitative real-time polymerase chain reaction, suggesting that these methods can be used for plant experiments where the biomass is extremely limited (i.e. spaceflight studies).

  19. A vastly improved method for in situ stable isotope analysis of very small water samples.

    NASA Astrophysics Data System (ADS)

    Coleman, M. L.; Christensen, L. E.; Kriesel, J.; Kelly, J.; Moran, J.; Vance, S.

    2016-12-01

    The stable isotope compositions of hydrogen and oxygen in water, ice and hydrated minerals are key characteristics for determining the origin and history of the material. Originally, analyses were performed by separating hydrogen and preparing CO2 from the oxygen in water for stable isotope ratio mass spectrometry. Subsequently, infrared absorption spectrometry, in either a Herriott cell or by cavity ring-down, allowed direct analysis of water vapor. We are developing an instrument, intended for spaceflight and in situ deployment, which will exploit Capillary Absorption Spectrometry (CAS) for the H and O isotope analysis and a laser to sample planetary ices and hydrated minerals. The Tunable Laser Spectrometer (TLS) instrument (part of SAM on the MSL rover Curiosity) works by infrared absorption, and we use its performance as a benchmark for comparison. TLS has a relatively large sample chamber to contain mirrors which give a long absorption path length. CAS works on the same principle but utilizes a hollow optic fiber, greatly reducing the sample volume. The fiber is a waveguide, enhancing the laser-water-vapor interaction and giving more than four orders of magnitude increase in sensitivity, despite a shorter optical path length. We have calculated that a fiber only 2 m long will be able to analyze 5 nanomoles of water with a precision of better than 1 per mil for D/H. The fiber is coiled to minimize instrument volume. Our instrument will couple this analytical capability with laser sampling to free water from hydrated minerals and ice, and ideally we would use the same laser via a beam-splitter both for sampling and analysis. The ability to analyze very small samples is of benefit in two ways. In this concept it will allow much faster analysis of small sub-samples, while the high spatial sampling resolution offered by the laser will allow analysis of the heterogeneity of isotopic composition within grains or crystals, revealing the history of their growth.

  20. Confirmation of Small Dynamical and Stellar Masses for Extreme Emission Line Galaxies at z Approx. 2

    NASA Technical Reports Server (NTRS)

    Maseda, Michael V.; van Der Wel, Arjen; da Cunha, Elisabete; Rix, Hans-Walter; Pacifici, Camilla; Momcheva, Ivelina; Brammer, Gabriel B.; Franx, Marijn; van Dokkum, Pieter; Bell, Eric F.; hide

    2013-01-01

    Spectroscopic observations from the Large Binocular Telescope and the Very Large Telescope reveal kinematically narrow lines (approx. 50 km/s) for a sample of 14 extreme emission line galaxies at redshifts 1.4 < z < 2.3. These measurements imply that the total dynamical masses of these systems are low (≲3 × 10^9 M⊙). Their large [O III] λ5007 equivalent widths (500-1100 Å) and faint blue continuum emission imply young ages of 10-100 Myr and stellar masses of 10^8-10^9 M⊙, confirming the presence of a violent starburst. The dynamical masses represent the first such determinations for low-mass galaxies at z > 1. The stellar mass formed in this vigorous starburst phase represents a large fraction of the total (dynamical) mass, without a significantly massive underlying population of older stars. The occurrence of such intense events in shallow potentials strongly suggests that supernova-driven winds must be of critical importance in the subsequent evolution of these systems.
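
    A back-of-envelope check shows how a ~50 km/s line width leads to the quoted mass scale via M_dyn ~ C σ² r / G. The virial coefficient C ≈ 5 and the radius r ≈ 1 kpc below are illustrative assumptions, not values taken from the paper.

    ```python
    # Order-of-magnitude dynamical mass from a velocity dispersion.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    MSUN = 1.989e30    # solar mass, kg
    KPC = 3.086e19     # kiloparsec, m

    sigma = 50e3                               # ~50 km/s line width, in m/s
    r_kpc = 1.0                                # assumed radius, kpc
    mdyn = 5 * sigma**2 * (r_kpc * KPC) / G    # assumed virial coefficient C = 5
    print(f"M_dyn ~ {mdyn / MSUN:.1e} M_sun")  # ~3e9, matching the bound above
    ```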

  1. Cardiovascular consequences of extreme prematurity: the EPICure study.

    PubMed

    McEniery, Carmel M; Bolton, Charlotte E; Fawke, Joseph; Hennessy, Enid; Stocks, Janet; Wilkinson, Ian B; Cockcroft, John R; Marlow, Neil

    2011-07-01

    The long-term consequences of extreme prematurity are becoming increasingly important, given recent improvements in neonatal intensive care. The aim of the current study was to examine the cardiovascular consequences of extreme prematurity in 11-year-olds born at or before 25 completed weeks of gestation. Age and sex-matched classmates were recruited as controls. Information concerning perinatal and maternal history was collected, and current anthropometric characteristics were measured in 219 children born extremely preterm and 153 classmates. A subset of the extremely preterm children (n = 68) and classmates (n = 90) then underwent detailed haemodynamic investigations, including measurement of supine blood pressure (BP), aortic pulse wave velocity (aPWV, a measure of aortic stiffness) and augmentation index (AIx, a measure of arterial pressure wave reflections). Seated brachial systolic and diastolic BP were not different between extremely preterm children and classmates (P = 0.3 for both), although there was a small, significant elevation in supine mean and diastolic BP in the extremely preterm children (P < 0.05 for both). Arterial pressure wave reflections were significantly elevated in the extremely preterm children (P < 0.001) and this persisted after adjusting for confounding variables. However, aortic stiffness was not different between the groups (P = 0.1). These data suggest that extreme prematurity is associated with altered arterial haemodynamics in children, not evident from the examination of brachial BP alone. Moreover, the smaller, preresistance and resistance vessels rather than large elastic arteries appear to be most affected. Children born extremely preterm may be at increased future cardiovascular risk.

  2. A sub-sampled approach to extremely low-dose STEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, A.; Luzi, L.; Yang, H.

    The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤1 e^-/Å^2) without changing either the operation of the microscope or the physics of the imaging process. We show that (1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and (2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).

  3. Evaluating the Small-World-Ness of a Sampled Network: Functional Connectivity of Entorhinal-Hippocampal Circuitry

    NASA Astrophysics Data System (ADS)

    She, Qi; Chen, Guanrong; Chan, Rosa H. M.

    2016-02-01

    The amount of publicly accessible experimental data has gradually increased in recent years, which makes it possible to reconsider many longstanding questions in neuroscience. In this paper, an efficient framework is presented for reconstructing functional connectivity using experimental spike-train data. A modified generalized linear model (GLM) with L1-norm penalty was used to investigate 10 datasets. These datasets contain spike-train data collected from the entorhinal-hippocampal region in the brains of rats performing different tasks. The analysis shows that entorhinal-hippocampal network of well-trained rats demonstrated significant small-world features. It is found that the connectivity structure generated by distance-dependent models is responsible for the observed small-world features of the reconstructed networks. The models are utilized to simulate a subset of units recorded from a large biological neural network using multiple electrodes. Two metrics for quantifying the small-world-ness both suggest that the reconstructed network from the sampled nodes estimates a more prominent small-world-ness feature than that of the original unknown network when the number of recorded neurons is small. Finally, this study shows that it is feasible to adjust the estimated small-world-ness results based on the number of neurons recorded to provide a more accurate reference of the network property.
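
    One widely used quantification of small-world-ness is sigma = (C/C_rand)/(L/L_rand), comparing clustering C and characteristic path length L against random references of matched size. The abstract does not specify which two metrics were used, so the networkx sketch below is a generic illustration with Erdős–Rényi references, not the paper's exact procedure.

    ```python
    # Small-world sigma against gnm random references with equal nodes/edges.
    import networkx as nx

    def small_world_sigma(G, n_ref=20, seed=0):
        C = nx.average_clustering(G)
        L = nx.average_shortest_path_length(G)
        n, m = G.number_of_nodes(), G.number_of_edges()
        Cr = Lr = 0.0
        for i in range(n_ref):
            R = nx.gnm_random_graph(n, m, seed=seed + i)
            if not nx.is_connected(R):             # use the giant component
                R = R.subgraph(max(nx.connected_components(R), key=len))
            Cr += nx.average_clustering(R)
            Lr += nx.average_shortest_path_length(R)
        return (C / (Cr / n_ref)) / (L / (Lr / n_ref))  # sigma > 1: small-world

    G = nx.connected_watts_strogatz_graph(60, 6, 0.1, seed=1)
    print(small_world_sigma(G))
    ```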

  4. Performance verification of the Gravity and Extreme Magnetism Small explorer (GEMS) x-ray polarimeter

    NASA Astrophysics Data System (ADS)

    Enoto, Teruaki; Black, J. Kevin; Kitaguchi, Takao; Hayato, Asami; Hill, Joanne E.; Jahoda, Keith; Tamagawa, Toru; Kaneko, Kenta; Takeuchi, Yoko; Yoshikawa, Akifumi; Marlowe, Hannah; Griffiths, Scott; Kaaret, Philip E.; Kenward, David; Khalid, Syed

    2014-07-01

    Polarimetry is a powerful tool for astrophysical observations that has yet to be exploited in the X-ray band. For satellite-borne and sounding rocket experiments, we have developed a photoelectric gas polarimeter to measure X-ray polarization in the 2-10 keV range utilizing a time projection chamber (TPC) and advanced micro-pattern gas electron multiplier (GEM) techniques. We carried out performance verification of a flight equivalent unit (1/4 model) which was planned to be launched on the NASA Gravity and Extreme Magnetism Small Explorer (GEMS) satellite. The test was performed at Brookhaven National Laboratory, National Synchrotron Light Source (NSLS) facility in April 2013. The polarimeter was irradiated with linearly-polarized monochromatic X-rays between 2.3 and 10.0 keV and scanned with a collimated beam at 5 different detector positions. After a systematic investigation of the detector response, a modulation factor ≥35% above 4 keV was obtained with the expected polarization angle. At energies below 4 keV where the photoelectron track becomes short, diffusion in the region between the GEM and readout strips leaves an asymmetric photoelectron image. A correction method retrieves the expected modulation angle, and the expected modulation factor, ~20% at 2.7 keV. Folding the measured values of modulation through an instrument model gives sensitivity, parameterized by minimum detectable polarization (MDP), nearly identical to that assumed at the preliminary design review (PDR).

  5. Abundance Ratios in a Large Sample of Emps with VLT+UVES

    NASA Astrophysics Data System (ADS)

    Hill, Vanessa; Cayrel, Roger; Spite, Monique; Bonifacio, Piercarlo; Eric, Depagne; Patrick, François; Timothy, Beers C.; Johannes, Andersen; Beatriz, Barbuy; Birgitta, Nordström

    Constraints on early Galactic enrichment from a large sample of extremely metal-poor stars: I will present the overall results from a large effort conducted at ESO-VLT+UVES to measure abundances in a sample of extremely metal-poor stars (EMPS) from high-resolution, high signal-to-noise spectra. More than 70 EMPS with [Fe/H] < -2.7 were observed, equally distributed between turnoff and giant stars, and very precise abundance ratios could be derived thanks to the high quality of the data. Among the results, those of specific interest are lithium measurements in unevolved EMPS, the much-debated abundance of oxygen in the early Galaxy (we present [OI] line measurements down to [O/Fe] = -3.5), and the trends of alpha elements, iron-group elements and zinc. The scatter around these trends will also be discussed, taking advantage of the small observational error bars of this dataset. The implications for early Galactic enrichment will be reviewed, while more specific topics covered by this large effort (and large team) will be addressed in devoted posters.

  6. Mechanism of shallow disrupted slide induced by extreme rainfall

    NASA Astrophysics Data System (ADS)

    Igwe, O.; Fukuoka, H.

    2010-12-01

    On July 16, 2010, extreme rainfall struck western Japan, with especially intense rainfall in Shobara city, Hiroshima prefecture. This rainfall induced hundreds of shallow disrupted slides, many of which became debris flows. One of these debris flows hit a house standing in front of the exit of a channel and claimed a resident's life. Western Japan has repeatedly suffered similar disasters in the past. The last such event took place from July 19 to 26, 2009, when western Japan experienced severe rainstorms that caused floods and landslides, most of them debris slide-debris flow events. The most devastating case occurred in Hofu city, Japan. On July 21, an extremely intense rainstorm caused numerous debris flows and mud flows on the hillslopes. Some of the debris flows destroyed residential houses and a home for elderly people, killing 14 residents. An unusual feature of both disasters was that the landslides were confined to a very narrow area: in the 2010 Shobara city disaster, all of the landslides fell within an area of 5 km x 3 km, and in the 2009 Hofu city disaster, the most devastated zone was 10 km x 5 km. Rain radars of the Meteorological Agency of the Government of Japan detected the intense rainfall; however, their spatial resolution is usually coarser than 5 km, and the disaster areas are too small for landslide prediction or warning. Furthermore, the nascent convective clouds were found to grow very quickly. The geology of the two areas is rhyolite (Shobara) and granite (Hofu), so areal assessment of landslide hazard should be prepared before such intense rainfall arrives. In the Hofu city case, the debris flows were shown to have occurred in the zone of highest precipitation, which is covered by weathered granite sands and silts known as "masa"; these sands have proved susceptible to landslides under extreme rainfall conditions. However, the slide-to-debris-flow transition process is not well revealed, except

  7. Extreme emission-line galaxies out to z ~ 1 in zCOSMOS. I. Sample and characterization of global properties

    NASA Astrophysics Data System (ADS)

    Amorín, R.; Pérez-Montero, E.; Contini, T.; Vílchez, J. M.; Bolzonella, M.; Tasca, L. A. M.; Lamareille, F.; Zamorani, G.; Maier, C.; Carollo, C. M.; Kneib, J.-P.; Le Fèvre, O.; Lilly, S.; Mainieri, V.; Renzini, A.; Scodeggio, M.; Bardelli, S.; Bongiorno, A.; Caputi, K.; Cucciati, O.; de la Torre, S.; de Ravel, L.; Franzetti, P.; Garilli, B.; Iovino, A.; Kampczyk, P.; Knobel, C.; Kovač, K.; Le Borgne, J.-F.; Le Brun, V.; Mignoli, M.; Pellò, R.; Peng, Y.; Presotto, V.; Ricciardelli, E.; Silverman, J. D.; Tanaka, M.; Tresse, L.; Vergani, D.; Zucca, E.

    2015-06-01

    Context. The study of large and representative samples of low-metallicity star-forming galaxies at different cosmic epochs is of great interest to the detailed understanding of the assembly history and evolution of low-mass galaxies. Aims: We present a thorough characterization of a large sample of 183 extreme emission-line galaxies (EELGs) at redshift 0.11 ≤ z ≤ 0.93 selected from the 20k zCOSMOS bright survey because of their unusually large emission line equivalent widths. Methods: We use multiwavelength COSMOS photometry, HST-ACS I-band imaging, and optical zCOSMOS spectroscopy to derive the main global properties of star-forming EELGs, such as sizes, stellar masses, star formation rates (SFR), and reliable oxygen abundances using both "direct" and "strong-line" methods. Results: The EELGs are extremely compact (r50 ~ 1.3 kpc), low-mass (M∗ ~ 10^7-10^10 M⊙) galaxies forming stars at unusually high specific star formation rates (sSFR ≡ SFR/M⋆ up to 10^-7 yr^-1) compared to main sequence star-forming galaxies of the same stellar mass and redshift. At rest-frame UV wavelengths, the EELGs are luminous and show high surface brightness and include strong Lyα emitters, as revealed by GALEX spectroscopy. We show that zCOSMOS EELGs are high-ionization, low-metallicity systems, with median 12+log(O/H) = 8.16 ± 0.21 (0.2 Z⊙) including a handful of extremely metal-deficient (<0.1 Z⊙) EELGs. While ~80% of the EELGs show non-axisymmetric morphologies, including clumpy and cometary or tadpole galaxies, we find that ~29% of them show additional low-surface-brightness features, which strongly suggests recent or ongoing interactions. As star-forming dwarfs in the local Universe, EELGs are most often found in relative isolation. While only very few EELGs belong to compact groups, almost one third of them are found in spectroscopically confirmed loose pairs or triplets. Conclusions: The zCOSMOS EELGs are galaxies caught in a transient and probably early period of

  8. Diversity of Heterotrophic Protists from Extremely Hypersaline Habitats.

    PubMed

    Park, Jong Soo; Simpson, Alastair G B

    2015-09-01

    Heterotrophic protists (protozoa) are a diverse but understudied component of the biota of extremely hypersaline environments, with few data on molecular diversity within halophile 'species', and almost nothing known of their biogeographic distribution. We have garnered SSU rRNA gene sequences for several clades of halophilic protozoa from enrichments from waters of >12.5% salinity from Australia, North America, and Europe (6 geographic sites, 25 distinct samples). The small stramenopile Halocafeteria was found at all sites, but phylogenies did not show clear geographic clustering. The ciliate Trimyema was recorded from 6 non-European samples. Phylogenies confirmed a monophyletic halophilic Trimyema group that included possible south-eastern Australian, Western Australian and North American clusters. Several halophilic Heterolobosea were detected, demonstrating that Pleurostomum contains at least three relatively distinct clades, and increasing known continental ranges for Tulamoeba peronaphora and Euplaesiobystra hypersalinica. The unclassified flagellate Palustrimonas, found in one Australian sample, proves to be a novel deep-branching alveolate. These results are consistent with a global distribution of halophilic protozoa groups (∼ morphospecies), but the Trimyema case suggests that is worth testing whether larger forms exhibit biogeographic phylogenetic substructure. The molecular detection/characterization of halophilic protozoa is still far from complete at the clade level, let alone the 'species level'. Copyright © 2015 Elsevier GmbH. All rights reserved.

  9. Estimation of extremely small field radiation dose for brain stereotactic radiotherapy using the Vero4DRT system.

    PubMed

    Nakayama, Shinichi; Monzen, Hajime; Onishi, Yuichi; Kaneshige, Soichiro; Kanno, Ikuo

    2018-06-01

    The purpose of this study was the dosimetric validation of the Vero4DRT for brain stereotactic radiotherapy (SRT) with extremely small fields calculated by the treatment planning system (TPS) iPlan (Ver. 4.5.1; algorithm XVMC). Measured and calculated data (e.g., percentage depth dose [PDD], dose profile, and point dose) were compared for small square fields of 30 × 30, 20 × 20, 10 × 10 and 5 × 5 mm² using ionization chambers of 0.01 or 0.04 cm³ and a diamond detector. Dose verifications were performed using an ionization chamber and radiochromic film (EBT3; the equivalent field sizes used were 8.2, 8.7, 8.9, 9.5, and 12.9 mm²) for five brain SRT cases irradiated with dynamic conformal arcs. The PDDs and dose profiles for the measured and calculated data were in good agreement for fields larger than or equal to 10 × 10 mm² when an appropriate detector was chosen. The dose differences for point doses in fields of 30 × 30, 20 × 20, 10 × 10 and 5 × 5 mm² were +0.48%, +0.56%, -0.52%, and +11.2%, respectively. In the dose verifications for the brain SRT plans, the mean dose difference between the calculated and measured doses was -0.35% (range, -0.94% to +0.47%), with the average pass rates for the gamma index under the 3%/2 mm criterion being 96.71%, 93.37%, and 97.58% for the coronal, sagittal, and axial planes, respectively. The Vero4DRT system provides accurate delivery of radiation dose for small fields larger than or equal to 10 × 10 mm². Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
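
    The 3%/2 mm gamma criterion quoted above has a simple definition (Low et al.): for each measured point, take the minimum over reference points of sqrt((Δdose/3%)² + (Δdistance/2 mm)²), and count the fraction of points with gamma ≤ 1. A minimal 1-D sketch follows, with illustrative synthetic profiles standing in for film data.

    ```python
    # 1-D gamma-index pass rate for a 3%/2 mm criterion.
    import numpy as np

    def gamma_pass_rate(x, ref, meas, dd=0.03, dta=2.0):
        dmax = ref.max()                       # global dose normalization
        gammas = []
        for xi, mi in zip(x, meas):
            g = np.sqrt(((ref - mi) / (dd * dmax)) ** 2
                        + ((x - xi) / dta) ** 2)
            gammas.append(g.min())             # best match over ref positions
        return 100.0 * np.mean(np.array(gammas) <= 1.0)

    x = np.linspace(-10, 10, 201)              # positions in mm
    ref = np.exp(-x**2 / 30)                   # calculated profile (toy)
    meas = 1.01 * np.exp(-(x - 0.3)**2 / 30)   # measured: shifted and scaled
    print("pass rate: %.1f%%" % gamma_pass_rate(x, ref, meas))
    ```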

  10. Space-time precipitation extremes for urban hydrology

    NASA Astrophysics Data System (ADS)

    Bardossy, A.; Pegram, G. G. S.

    2017-12-01

    Precipitation extremes are essential for hydrological design. In urban hydrology, intensity-duration-frequency curves (IDFs) estimated from observation records are used to design sewer systems. Conventional approaches seldom consider the areal extent of events; if they do, duration-dependent area reduction factors (ARFs) are applied. In this contribution we investigate the influence of the size of the target urban area on the frequency of occurrence of extremes. We introduce two new concepts, (i) the maximum over an area and (ii) sub-areal extremes, and discuss their properties. The space-time dependence of extremes strongly influences these statistics. The findings of this presentation show that the risk of urban flooding is routinely underestimated. We demonstrate this by sampling a long sequence of radar rainfall fields of 1 km resolution, rather than the usual limited information from gauge records at scattered point locations. The procedure is to generate 20 years of plausible 'radar' fields of 5-minute precipitation on a square frame of 128 × 128 one-kilometre pixels and sample them in a regimented way. We find that the traditional calculations underestimate the extremes (by up to 30% to 50%, depending on size and duration) and we show how they can be revised sensibly. The methodology devised from simulated radar fields is checked against the records of a dense network of pluviometers covered by a radar in Baden-Württemberg, with a (regrettably) short 4-year record, as proof of concept.
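
    The "maximum over an area" idea can be illustrated in a few lines: on a gridded field, compare the point extreme with the extreme of k × k areal averages; their ratio behaves like an empirical area reduction factor. The correlated synthetic field below is a stand-in assumption for the simulated radar fields described above.

    ```python
    # Point maxima vs. areal-average maxima on a synthetic 128 x 128 field.
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    rng = np.random.default_rng(2)
    # Lognormal-like correlated rain field (illustrative, units ~ mm)
    field = np.exp(gaussian_filter(rng.normal(size=(128, 128)), sigma=4))

    point_max = field.max()
    for k in (1, 2, 4, 8, 16):                 # window side in pixels (~km)
        areal_max = uniform_filter(field, size=k).max()
        print(f"{k:>2}-km window: ARF ~ {areal_max / point_max:.2f}")
    ```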

  11. A Framework to Understand Extreme Space Weather Event Probability.

    PubMed

    Jonas, Seth; Fronczyk, Kassandra; Pratt, Lucas M

    2018-03-12

    An extreme space weather event has the potential to disrupt or damage infrastructure systems and technologies that many societies rely on for economic and social well-being. Space weather events occur regularly, but extreme events are less frequent, with a small number of historical examples over the last 160 years. During the past decade, published works have (1) examined the physical characteristics of the extreme historical events and (2) discussed the probability or return rate of select extreme geomagnetic disturbances, including the 1859 Carrington event. Here we present initial findings on a unified framework approach to visualize space weather event probability, using a Bayesian model average, in the context of historical extreme events. We present disturbance storm time (Dst) probability (a proxy for geomagnetic disturbance intensity) across multiple return periods and discuss parameters of interest to policymakers and planners in the context of past extreme space weather events. We discuss the current state of these analyses, their utility to policymakers and planners, the current limitations when compared to other hazards, and several gaps that need to be filled to enhance space weather risk assessments. © 2018 Society for Risk Analysis.
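
    The abstract's Bayesian model average is not reproduced here, but a common frequentist stand-in for the same return-period question is a peaks-over-threshold fit: model |Dst| exceedances over a high threshold with a generalized Pareto distribution and convert the fitted tail into an annual exceedance probability. The data, the threshold, and the 850 nT level below are all illustrative assumptions.

    ```python
    # Generalized Pareto tail fit for storm |Dst| exceedances (toy data).
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(4)
    years = 60
    storms = rng.lognormal(mean=4.4, sigma=0.6, size=years * 10)  # |Dst|, nT
    u = 200.0                                  # threshold (nT)
    exc = storms[storms > u] - u
    c, _, scale = genpareto.fit(exc, floc=0)   # shape and scale of the tail
    rate = len(exc) / years                    # threshold exceedances per year
    p = rate * genpareto.sf(850.0 - u, c, loc=0, scale=scale)
    print(f"annual P(|Dst| > 850 nT) ~ {p:.4f}, return ~ {1 / p:.0f} yr")
    ```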

  12. Extremely red quasars in BOSS

    NASA Astrophysics Data System (ADS)

    Hamann, Fred; Zakamska, Nadia L.; Ross, Nicholas; Paris, Isabelle; Alexandroff, Rachael M.; Villforth, Carolin; Richards, Gordon T.; Herbst, Hanna; Brandt, W. Niel; Cook, Ben; Denney, Kelly D.; Greene, Jenny E.; Schneider, Donald P.; Strauss, Michael A.

    2017-01-01

    Red quasars are candidate young objects in an early transition stage of massive galaxy evolution. Our team recently discovered a population of extremely red quasars (ERQs) in the Baryon Oscillation Spectroscopic Survey (BOSS) that has a suite of peculiar emission-line properties including large rest equivalent widths (REWs), unusual "wingless" line profiles, large N V/Lyα, N V/C IV, Si IV/C IV and other flux ratios, and very broad and blueshifted [O III] λ5007. Here we present a new catalogue of C IV and N V emission-line data for 216,188 BOSS quasars to characterize the ERQ line properties further. We show that they depend sharply on UV-to-mid-IR colour, secondarily on REW(C IV), and not at all on luminosity or the Baldwin Effect. We identify a "core" sample of 97 ERQs with nearly uniform peculiar properties selected via I-W3 ≥ 4.6 (AB) and REW(C IV) ≥ 100 Å at redshifts 2.0-3.4. A broader search finds 235 more red quasars with similar unusual characteristics. The core ERQs have median luminosity log L ~ 47.1 (erg s^-1), sky density 0.010 deg^-2, surprisingly flat/blue UV spectra given their red UV-to-mid-IR colours, and common outflow signatures including BALs or BAL-like features and large C IV emission-line blueshifts. Their SEDs and line properties are inconsistent with normal quasars behind a dust reddening screen. We argue that the core ERQs are a unique obscured quasar population with extreme physical conditions related to powerful outflows across the line-forming regions. Patchy obscuration by small dusty clouds could produce the observed UV extinctions without substantial UV reddening.

  13. Decisions from Experience: Why Small Samples?

    ERIC Educational Resources Information Center

    Hertwig, Ralph; Pleskac, Timothy J.

    2010-01-01

    In many decisions we cannot consult explicit statistics telling us about the risks involved in our actions. In lieu of such data, we can arrive at an understanding of our dicey options by sampling from them. The size of the samples that we take determines, ceteris paribus, how good our choices will be. Studies of decisions from experience have…

  14. A simple Bayesian approach to quantifying confidence level of adverse event incidence proportion in small samples.

    PubMed

    Liu, Fang

    2016-01-01

    In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
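
    With a conjugate Beta prior, all three quantities described above reduce to one-line computations on the Beta posterior. The uniform Beta(1, 1) prior and the counts below are illustrative assumptions; the paper's exact prior choice is not reproduced here.

    ```python
    # Beta-Binomial posterior for an AE incidence proportion p.
    from scipy.stats import beta

    a, b = 1, 1                      # uniform prior (assumption)
    n, x = 30, 2                     # 2 patients with the AE out of 30
    post = beta(a + x, b + n - x)    # posterior: Beta(3, 29)

    print("P(p > 0.10 | data):", 1 - post.cdf(0.10))  # confidence over threshold
    print("95% upper bound on p:", post.ppf(0.95))    # one-sided credible bound
    ```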

  15. New insights into olivo-cerebellar circuits for learning from a small training sample.

    PubMed

    Tokuda, Isao T; Hoang, Huu; Kawato, Mitsuo

    2017-10-01

    Artificial intelligence such as deep neural networks has exhibited remarkable performance in simulated video games and 'Go'. In contrast, most humanoid robots in the DARPA Robotics Challenge fell to the ground. The dramatic contrast in performance is mainly due to differences in the amount of training data, which is huge in the former case and small in the latter. Animals cannot afford millions of failed trials, which would lead to injury and death. Humans fall only several thousand times before they can balance and walk. We hypothesize that a unique closed-loop neural circuit formed by the Purkinje cells, the cerebellar deep nucleus and the inferior olive in and around the cerebellum, together with the highest density of gap junctions, which regulate synchronous activities of the inferior olive nucleus, constitutes the computational machinery for learning from a small sample. We discuss recent experimental and computational advances associated with this hypothesis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Entropy, extremality, euclidean variations, and the equations of motion

    NASA Astrophysics Data System (ADS)

    Dong, Xi; Lewkowycz, Aitor

    2018-01-01

    We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G N , providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G N . We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.

  17. Modeling and Testing of Phase Transition-Based Deployable Systems for Small Body Sample Capture

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco; Backes, Paul; Wilkie, Keats; Giersch, Lou; Quijano, Ubaldo; Keim, Jason; Mukherjee, Rudranarayan

    2009-01-01

    This paper summarizes the modeling, simulation, and testing work related to the development of technology for investigating the potential of shape memory actuation to provide mechanically simple and affordable solutions for delivering assets to a surface and for sample capture and return. We investigate the structural dynamics and controllability of an adaptive beam carrying an end-effector which, by changing equilibrium phases, is able to actively decouple the end-effector dynamics from the spacecraft dynamics during the surface contact phase. Asset delivery and sample capture and return are at the heart of several emerging potential missions to small bodies, such as asteroids and comets, and to the surface of large bodies, such as Titan.

  18. On the identification of Dragon Kings among extreme-valued outliers

    NASA Astrophysics Data System (ADS)

    Riva, M.; Neuman, S. P.; Guadagnini, A.

    2013-07-01

    Extreme values of earth, environmental, ecological, physical, biological, financial and other variables often form outliers to heavy tails of empirical frequency distributions. Quite commonly such tails are approximated by stretched exponential, log-normal or power functions. Recently there has been an interest in distinguishing between extreme-valued outliers that belong to the parent population of most data in a sample and those that do not. The first type, called Gray Swans by Nassim Nicholas Taleb (often confused in the literature with Taleb's totally unknowable Black Swans), is drawn from a known distribution of the tails which can thus be extrapolated beyond the range of sampled values. However, the magnitudes and/or space-time locations of unsampled Gray Swans cannot be foretold. The second type of extreme-valued outliers, termed Dragon Kings by Didier Sornette, may in his view be sometimes predicted based on how other data in the sample behave. This intriguing prospect has recently motivated some authors to propose statistical tests capable of identifying Dragon Kings in a given random sample. Here we apply three such tests to log air permeability data measured on the faces of a Berea sandstone block and to synthetic data generated in a manner statistically consistent with these measurements. We interpret the measurements to be, and generate synthetic data that are, samples from α-stable sub-Gaussian random fields subordinated to truncated fractional Gaussian noise (tfGn). All these data have frequency distributions characterized by power-law tails with extreme-valued outliers about the tail edges.
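
    The specific statistical tests applied in the paper are not reproduced here; the following sketch conveys the general idea only, assuming a Pareto (power-law) parent tail fitted with the Hill estimator and a planted outlier, with all data synthetic:

        # Dragon-King-style outlier check: fit a power-law tail to the k
        # largest values with the Hill estimator, then ask how surprising the
        # observed maximum is under that fit. A very small p-value flags the
        # maximum as a candidate Dragon King. Illustrative sketch only.
        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.pareto(2.5, size=5000) + 1.0   # synthetic heavy-tailed sample
        data[-1] = data.max() * 8                 # plant a candidate "Dragon King"

        tail = np.sort(data)[-201:]               # k = 200 tail values + maximum
        x_max, x_min = tail[-1], tail[0]
        k = len(tail) - 1
        alpha = k / np.sum(np.log(tail[:-1] / x_min))   # Hill tail-index estimate

        # Under the fitted Pareto tail, P(at least one of k points > x_max)
        p_exceed = 1.0 - (1.0 - (x_max / x_min) ** -alpha) ** k
        print(f"alpha = {alpha:.2f}, p-value of maximum = {p_exceed:.3g}")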

  19. Clustering on very small scales from a large, complete sample of confirmed quasar pairs

    NASA Astrophysics Data System (ADS)

    Eftekharzadeh, Sarah; Myers, Adam D.; Djorgovski, Stanislav G.; Graham, Matthew J.; Hennawi, Joseph F.; Mahabal, Ashish A.; Richards, Gordon T.

    2016-06-01

    We present by far the largest sample of spectroscopically confirmed binary quasars with proper transverse separations of 17.0 ≤ R_prop ≤ 36.6 h⁻¹ kpc. Our sample, which is an order of magnitude larger than previous samples, is selected from Sloan Digital Sky Survey (SDSS) imaging over an area corresponding to the SDSS 6th data release (DR6). Our quasars are targeted using a Kernel Density Estimation (KDE) technique and confirmed using long-slit spectroscopy on a range of facilities. Our most complete sub-sample of 44 binary quasars with g < 20.85 extends across angular scales of 2.9" < Δθ < 6.3" and is targeted from a parent sample that would be equivalent to a full spectroscopic survey of nearly 300,000 quasars. We determine the projected correlation function of quasars (W̄_p) over proper transverse scales of 17.0 ≤ R_prop ≤ 36.6 h⁻¹ kpc, and also in 4 bins of scale within this complete range. To investigate the redshift evolution of quasar clustering on small scales, we make the first self-consistent measurement of the projected quasar correlation function in 4 bins of redshift over 0.4 ≤ z ≤ 2.3.

  20. Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.

    PubMed

    Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko

    2017-06-01

    Cyst nematodes are serious plant-parasitic pests that can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to estimating sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
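
    As a rough illustration of this kind of precision calculation, the sketch below simulates negative-binomially distributed cyst counts and estimates the coefficient of variation of the plot-mean estimate for five versus seven bulk samples; the mean and dispersion parameters are assumed for illustration, not taken from the study:

        # Simulate negative-binomial cyst counts (per 100 g of soil) and
        # estimate the coefficient of variation (cv) of the plot mean for
        # different numbers of bulk samples. Parameters are assumed.
        import numpy as np

        rng = np.random.default_rng(1)
        mean, k = 90.0, 4.0            # mean cysts/100 g and NB dispersion (assumed)
        p = k / (k + mean)             # numpy's negative-binomial parameterization

        for n_samples in (5, 7):
            plot_means = rng.negative_binomial(k, p, size=(20000, n_samples)).mean(axis=1)
            cv = plot_means.std(ddof=1) / plot_means.mean()
            print(f"{n_samples} bulk samples: cv of plot mean = {cv:.1%}")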

  1. Extreme events and event size fluctuations in biased random walks on networks.

    PubMed

    Kishore, Vimal; Santhanam, M S; Amritkar, R E

    2012-05-01

    Random walks on discrete lattice models are important for understanding various types of transport processes. Extreme events, defined as exceedances of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks. This was motivated by the occurrence of rare events such as traffic jams, floods, and power blackouts which take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology. The walkers preferentially choose to hop toward the hubs or toward small-degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small-degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability for the occurrence of extreme events on any node in the network depends on its "generalized strength," a measure of the ability of a node to attract walkers. The generalized strength is a function of the degree of the node and that of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using a generalized random walk model. The results reveal that nodes with a larger value of generalized strength, on average, display a lower probability for the occurrence of extreme events than nodes with lower values of generalized strength.
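
    A small simulation sketch of this setup (not the authors' implementation), assuming hop probabilities proportional to a power of the destination node's degree; the network, walker count, and exceedance threshold are arbitrary choices:

        # Biased random walk on a network: a walker at node v hops to neighbor
        # j with probability proportional to deg(j)**alpha (alpha > 0 biases
        # toward hubs). Extreme events are flux exceedances above a threshold.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)
        G = nx.barabasi_albert_graph(200, 3, seed=2)
        alpha, n_walkers, n_steps = 1.0, 500, 200

        nbrs, probs = {}, {}
        for v in G:
            nbrs[v] = np.array(list(G[v]))
            w = np.array([G.degree(u) ** alpha for u in nbrs[v]], dtype=float)
            probs[v] = w / w.sum()

        pos = rng.integers(0, G.number_of_nodes(), size=n_walkers)
        counts = np.zeros((n_steps, G.number_of_nodes()), dtype=int)
        for t in range(n_steps):
            pos = np.array([rng.choice(nbrs[v], p=probs[v]) for v in pos])
            np.add.at(counts[t], pos, 1)

        # Count, per node, the steps whose flux exceeds mean + 4 standard deviations.
        extreme = (counts > counts.mean(0) + 4 * counts.std(0)).sum(0)
        degree = np.array([G.degree(v) for v in G])
        print("corr(degree, extreme-event count):", np.corrcoef(degree, extreme)[0, 1])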

  2. Controllable gaussian-qubit interface for extremal quantum state engineering.

    PubMed

    Adesso, Gerardo; Campbell, Steve; Illuminati, Fabrizio; Paternostro, Mauro

    2010-06-18

    We study state engineering through bilinear interactions between two remote qubits and two-mode gaussian light fields. The attainable two-qubit states span the entire physically allowed region in the entanglement-versus-global-purity plane. Two-mode gaussian states with maximal entanglement at fixed global and marginal entropies produce maximally entangled two-qubit states in the corresponding entropic diagram. We show that a small set of parameters characterizing extremally entangled two-mode gaussian states is sufficient to control the engineering of extremally entangled two-qubit states, which can be realized in realistic matter-light scenarios.

  3. Monitoring, Modeling, and Diagnosis of Alkali-Silica Reaction in Small Concrete Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Vivek; Cai, Guowei; Gribok, Andrei V.

    Assessment and management of aging concrete structures in nuclear power plants require a more systematic approach than simple reliance on existing code margins of safety. Structural health monitoring of concrete structures aims to understand the current health condition of a structure based on heterogeneous measurements to produce high-confidence actionable information regarding structural integrity that supports operational and maintenance decisions. This report describes alkali-silica reaction (ASR) degradation mechanisms and factors influencing ASR. A fully coupled thermo-hydro-mechanical-chemical model developed by Saouma and Perotti, which takes into consideration the effects of stress on the reaction kinetics and anisotropic volumetric expansion, is presented in this report. This model is implemented in the GRIZZLY code based on the Multiphysics Object Oriented Simulation Environment. The implemented model in the GRIZZLY code is randomly used to initiate ASR in a 2D and 3D lattice to study the percolation aspects of concrete. The percolation aspects help determine the transport properties of the material and therefore the durability and service life of concrete. This report summarizes the effort to develop small-size concrete samples with embedded glass to mimic ASR. The concrete samples were treated in water and sodium hydroxide solution at elevated temperature to study how the ingress of sodium and hydroxide ions at elevated temperature affects concrete samples embedded with glass. A thermal camera was used to monitor changes in the concrete samples, and the results are summarized.

  4. Extreme pressure fluid sample transfer pump

    DOEpatents

    Halverson, Justin E.; Bowman, Wilfred W.

    1990-01-01

    A transfer pump for samples of fluids at very low or very high pressures, comprising a cylinder having a piston sealed with an O-ring, the piston defining forward and back chambers; an inlet and exit port and valve arrangement for the fluid to enter and leave the forward chamber; and a port and valve arrangement in the back chamber for adjusting the back-chamber pressure to be approximately equal to the pressure of the fluid, so that the pressure differential across the piston is essentially zero, the O-ring seals against leakage of the fluid, and the piston can be easily moved, regardless of the pressure of the fluid. The piston may be actuated by a means external to the cylinder with a piston rod extending through a hole in the cylinder sealed with a bellows attached to the piston head and the interior of the back chamber.

  5. The Extreme Hosts of Extreme Supernovae

    NASA Astrophysics Data System (ADS)

    Neill, James D.

    2012-01-01

    We present the results from a deeper survey of Luminous Supernova (LSN) hosts with the Galaxy Evolution Explorer (GALEX). We have added new, multiple kilosecond observations to our original observations of seventeen LSN hosts, providing better constraints on their physical properties. We place the LSN hosts on the galaxy NUV-r versus M(r) color magnitude diagram (CMD) with a larger comparison sample (~26,000) to illustrate the extreme nature of these galaxies. The LSN hosts favor low-density regions of the galaxy CMD, falling on the blue edge of the blue cloud toward the low-luminosity end. The new observations provide tighter constraints on the star formation rates (SFRs) and stellar masses, M(*), and show that the LSNe result from regions of high specific star formation and yet low total SFR. This regime is of particular interest for exploring the upper end of the stellar IMF and its variation. If our understanding of the progenitors of the LSNe leans toward very massive (> 200 M_sun) progenitors, the potential for a conflict with IMF theory exists, because the conditions found in the hosts producing the LSNe should not create such massive stars. If it is also required that LSNe can only be produced in primordial or very low metallicity environments, then they will also provide evidence for strong variation in metallicity within a dwarf galaxy, since their masses are consistent with low, but not extreme, metallicity.

  6. Small RNA profiling of low biomass samples: identification and removal of contaminants.

    PubMed

    Heintz-Buschart, Anna; Yusuf, Dilmurat; Kaysen, Anne; Etheridge, Alton; Fritz, Joëlle V; May, Patrick; de Beaufort, Carine; Upadhyaya, Bimal B; Ghosal, Anubrata; Galas, David J; Wilmes, Paul

    2018-05-14

    Sequencing-based analyses of low-biomass samples are known to be prone to misinterpretation due to the potential presence of contaminating molecules derived from laboratory reagents and environments. DNA contamination has been previously reported, yet contamination with RNA is usually considered to be very unlikely due to its inherent instability. Small RNAs (sRNAs) identified in tissues and bodily fluids, such as blood plasma, have implications for physiology and pathology, and therefore the potential to act as disease biomarkers. Thus, the possibility for RNA contaminants demands careful evaluation. Herein, we report on the presence of small RNA (sRNA) contaminants in widely used microRNA extraction kits and propose an approach for their depletion. We sequenced sRNAs extracted from human plasma samples and detected important levels of non-human (exogenous) sequences whose source could be traced to the microRNA extraction columns through a careful qPCR-based analysis of several laboratory reagents. Furthermore, we also detected the presence of artefactual sequences related to these contaminants in a range of published datasets, thereby arguing in particular for a re-evaluation of reports suggesting the presence of exogenous RNAs of microbial and dietary origin in blood plasma. To avoid artefacts in future experiments, we also devise several protocols for the removal of contaminant RNAs, define minimal amounts of starting material for artefact-free analyses, and confirm the reduction of contaminant levels for identification of bona fide sequences using 'ultra-clean' extraction kits. This is the first report on the presence of RNA molecules as contaminants in RNA extraction kits. The described protocols should be applied in the future to avoid confounding sRNA studies.

  7. A comparison between measured surface microtopography and observed scattering in the extreme ultraviolet

    NASA Technical Reports Server (NTRS)

    Green, James; Jelinsky, Sharon; Bowyer, Stuart; Malina, Roger F.

    1988-01-01

    The paper presents comparative measurements of surface roughness on prepared samples. These measurements have been made with both Talystep profilometers and WYKO interferometers. In addition, the scattering distribution from these samples was measured at extreme ultraviolet wavelengths. The utility of the WYKO interferometer and Talystep device for specifying extreme ultraviolet mirror surface quality is discussed.

  8. Can Concentration - Discharge Relationships Diagnose Material Source During Extreme Events?

    NASA Astrophysics Data System (ADS)

    Karwan, D. L.; Godsey, S.; Rose, L.

    2017-12-01

    Floods can carry >90% of the basin material exported in a given year, and can alter flow pathways and material sources. In turn, sediment and solute fluxes can increase flood damages, negatively impact water quality, and integrate the physical and chemical weathering of landscapes and channels. Concentration-discharge (C-Q) relationships are used both to describe export patterns and to compute them. Metrics for describing C-Q patterns and inferring their controls are vulnerable to infrequent sampling, which affects how C-Q relationships are interpolated and interpreted. C-Q relationships are typically evaluated from multiple samples, but because hydrological extremes are rare, data are often unavailable for extreme events. Because solute and sediment C-Q relationships likely respond to changes in hydrologic extremes in different ways, there is a pressing need to define their behavior under extreme conditions, including how to sample properly to capture these patterns. In the absence of such knowledge, improving load estimates in extreme floods will likely remain difficult. Here we explore the use of C-Q relationships to determine when an event alters a watershed system such that it enters a new material source/transport regime. We focus on watersheds whose sediment and discharge time series include low-frequency and/or extreme events. For example, we compare solute and sediment patterns in White Clay Creek in southeastern Pennsylvania across a range of flows, inclusive of multiple hurricanes, for which we have ample ancillary hydrochemical data. TSS is consistently mobilized during high-flow events, even during extreme floods associated with hurricanes, and sediment fingerprinting indicates that different sediment sources, including in-channel remobilization and landscape erosion, are active at different times. In other words, TSS mobilization in C-Q space is not sensitive to the source of material being mobilized. Unlike sediments, weathering solutes in this watershed

  9. FastID: Extremely Fast Forensic DNA Comparisons

    DTIC Science & Technology

    2017-05-19

    FastID: Extremely Fast Forensic DNA Comparisons. Darrell O. Ricke, PhD, Bioengineering Systems & Technologies, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, USA. Darrell.Ricke@ll.mit.edu. Abstract: Rapid analysis of DNA forensic samples can have a critical impact on time-sensitive investigations. Analysis of forensic DNA samples by massively parallel sequencing is creating the next gold standard for DNA

  10. Extreme-value dependence: An application to exchange rate markets

    NASA Astrophysics Data System (ADS)

    Fernandez, Viviana

    2007-04-01

    Extreme value theory (EVT) focuses on modeling the tail behavior of a loss distribution using only extreme values rather than the whole data set. For a sample of 10 countries with dirty/free float regimes, we investigate whether paired currencies exhibit a pattern of asymptotic dependence. That is, whether an extremely large appreciation or depreciation in the nominal exchange rate of one country might transmit to another. In general, after controlling for volatility clustering and inertia in returns, we do not find evidence of extreme-value dependence between paired exchange rates. However, for asymptotic-independent paired returns, we find that tail dependency of exchange rates is stronger under large appreciations than under large depreciations.
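
    The notion of asymptotic (tail) dependence tested here can be illustrated with the standard empirical estimator χ(u) = P(U > u, V > u) / P(V > u) evaluated at high quantiles u; the sketch below uses synthetic correlated series rather than exchange-rate data:

        # Empirical tail-dependence sketch: transform two series to uniform
        # margins by ranking, then estimate chi(u) at high quantiles. Decay of
        # chi(u) toward 0 as u -> 1 indicates asymptotic independence.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 5000
        z = rng.standard_normal(n)
        x = z + 0.5 * rng.standard_normal(n)   # two correlated "return" series
        y = z + 0.5 * rng.standard_normal(n)

        u_x = (np.argsort(np.argsort(x)) + 1) / (n + 1)   # ranks -> uniforms
        u_y = (np.argsort(np.argsort(y)) + 1) / (n + 1)

        for u in (0.90, 0.95, 0.99):
            chi = np.mean((u_x > u) & (u_y > u)) / np.mean(u_y > u)
            print(f"u = {u:.2f}: chi(u) = {chi:.3f}")
        # Gaussian-type dependence is asymptotically independent, so chi decays.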

  11. [Monitoring microbiological safety of small systems of water distribution. Comparison of two sampling programs in a town in central Italy].

    PubMed

    Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A

    2005-01-01

    To determine the frequency of sampling in small water distribution systems (<5,000 inhabitants) and to compare the results under different hypotheses about the distribution of bacteria. We carried out two sampling programs to monitor the water distribution system of a town in Central Italy between July and September 1992; the assumption of a Poisson distribution implied 4 water samples, while the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected (p-value <0.001), and the probability of a type II error under the heterogeneity assumption was higher with 4 samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the conclusion that the water is drinkable, compared with the homogeneity assumption.

  12. A comparative analysis of support vector machines and extreme learning machines.

    PubMed

    Liu, Xueyi; Gao, Chuanhou; Li, Ping

    2012-09-01

    The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches has become significant and has attracted many research efforts. This paper performs a comparative analysis of basic ELMs and support vector machines (SVMs) from two viewpoints that differ from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are exhibited with changing training sample size. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, great superiority in computational speed, especially for large-scale sample problems, is found in ELMs. The results obtained can provide insight into the essential relationship between them, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
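
    The basic ELM referred to here trains only the output layer; hidden-layer weights are random and fixed. A minimal sketch on synthetic data (sizes and parameters assumed):

        # Minimal extreme learning machine: random input weights and biases, a
        # sigmoid hidden layer, and output weights solved by least squares.
        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.uniform(-3, 3, size=(400, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)

        n_hidden = 50
        W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random weights
        b = rng.standard_normal(n_hidden)                 # fixed random biases
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden-layer outputs

        beta_out, *_ = np.linalg.lstsq(H, y, rcond=None)  # the only trained part
        print("training RMSE:", np.sqrt(np.mean((y - H @ beta_out) ** 2)))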

  13. A simple X-ray source of two orthogonal beams for small samples imaging

    NASA Astrophysics Data System (ADS)

    Hrdý, J.

    2018-04-01

    A simple method for simultaneous imaging of small samples by two orthogonal beams is proposed. The method is based on one channel-cut crystal which is oriented such that the beam is diffracted on two crystallographic planes simultaneously. These planes are symmetrically inclined to the crystal surface. The beams are three times diffracted. After the first diffraction the beam is split. After the second diffraction the split beams become parallel. Finally, after the third diffraction the beams become convergent and may be used for imaging. The corresponding angular relations to obtain orthogonal beams are derived.

  14. ETTF - Extreme Temperature Translation Furnace experiment

    NASA Image and Video Library

    1996-09-23

    STS79-E-5275 (16 - 26 September 1996) --- Aboard the Spacehab double module in the Space Shuttle Atlantis' cargo bay, astronaut Jerome (Jay) Apt, mission specialist, checks a sample from the Extreme Temperature Translation Furnace (ETTF) experiment. The photograph was taken with the Electronic Still Camera (ESC).

  15. Post-stratification sampling in small area estimation (SAE) model for unemployment rate estimation by Bayes approach

    NASA Astrophysics Data System (ADS)

    Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang

    2016-02-01

    This research concerns the unemployment rate in Indonesia, modeled with a Poisson distribution and estimated by combining post-stratification with a Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the area of interest directly. The area of interest here is the education level of the unemployed, separated into seven categories. The data were obtained from the National Labour Force Survey (Sakernas) collected by Statistics Indonesia (BPS); this national survey yields samples that are too small at the district level, and SAE models are one alternative for addressing this. Accordingly, we combined post-stratification sampling with an SAE model. This research considers two main post-stratification models: model I defines the education category as a dummy variable, while model II defines it as an area random effect. Both models initially violated the Poisson assumption; using a Poisson-Gamma model, the overdispersion of model I (1.23) was reduced to 0.91 chi-square/df, and the underdispersion of model II (0.35) was brought to 0.94 chi-square/df. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. By the Bayesian Information Criterion (BIC), model I has a smaller mean square error (MSE) than model II.
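
    The Poisson-Gamma empirical Bayes step has a simple conjugate form: with counts y_i, exposures e_i and a Gamma(a, b) prior on the rates, the posterior mean is (a + y_i) / (b + e_i). A minimal sketch with hypothetical counts (the category totals below are invented, not Sakernas data), fitting the prior by the method of moments:

        # Empirical Bayes for Poisson counts with a Gamma prior: each crude
        # rate y/e is shrunk toward the prior mean. Numbers are hypothetical.
        import numpy as np

        y = np.array([12, 3, 45, 7, 19, 2, 30])              # counts per category (assumed)
        e = np.array([800, 150, 2600, 500, 1100, 90, 1900])  # exposures (assumed)

        crude = y / e
        m, v = crude.mean(), crude.var(ddof=1)
        b = m / v                      # Gamma(a, b) prior by crude method of moments
        a = m * b

        post_mean = (a + y) / (b + e)  # conjugate Poisson-Gamma posterior mean
        for i, (c, p) in enumerate(zip(crude, post_mean), 1):
            print(f"category {i}: crude = {c:.4f}, EB estimate = {p:.4f}")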

  16. A microfluidic platform for precision small-volume sample processing and its use to size separate biological particles with an acoustic microdevice [Precision size separation of biological particles in small-volume samples by an acoustic microfluidic system

    DOE PAGES

    Fong, Erika J.; Huang, Chao; Hamilton, Julie; ...

    2015-11-23

    Here, a major advantage of microfluidic devices is the ability to manipulate small sample volumes, thus reducing reagent waste and preserving precious sample. However, to achieve robust sample manipulation it is necessary to address device integration with the macroscale environment. To realize repeatable, sensitive particle separation with microfluidic devices, this protocol presents a complete automated and integrated microfluidic platform that enables precise processing of 0.15-1.5 ml samples using microfluidic devices. Important aspects of this system include modular device layout and robust fixtures resulting in reliable and flexible world-to-chip connections, and fully automated fluid handling which accomplishes closed-loop sample collection, system cleaning and priming steps to ensure repeatable operation. Different microfluidic devices can be used interchangeably with this architecture. Here we incorporate an acoustofluidic device, detail its characterization and performance optimization, and demonstrate its use for size-separation of biological samples. By using real-time feedback during separation experiments, sample collection is optimized to conserve and concentrate sample. Although requiring the integration of multiple pieces of equipment, advantages of this architecture include the ability to process unknown samples with no additional system optimization, ease of device replacement, and precise, robust sample processing.

  17. Spitzer SAGE-Spec: Near infrared spectroscopy, dust shells, and cool envelopes in extreme Large Magellanic Cloud asymptotic giant branch stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, R. D.; Srinivasan, S.; Kemper, F.

    2014-11-01

    K-band spectra are presented for a sample of 39 Spitzer Infrared Spectrograph (IRS) SAGE-Spec sources in the Large Magellanic Cloud. The spectra exhibit characteristics in very good agreement with their positions in the near-infrared and Spitzer color-magnitude diagrams and their properties as deduced from the Spitzer IRS spectra. Specifically, the near-infrared spectra show strong atomic and molecular features representative of oxygen-rich and carbon-rich asymptotic giant branch stars, respectively. A small subset of stars was chosen from the luminous and red extreme "tip" of the color-magnitude diagram. These objects have properties consistent with dusty envelopes but also cool, carbon-rich "stellar" cores. Modest amounts of dust mass loss combine with the stellar spectral energy distribution to make these objects appear extreme in their near-infrared and mid-infrared colors. One object in our sample, HV 915, a known post-asymptotic giant branch star of the RV Tau type, exhibits CO 2.3 μm band head emission consistent with previous work demonstrating that the object has a circumstellar disk.

  18. DETAILED ABUNDANCES OF STARS WITH SMALL PLANETS DISCOVERED BY KEPLER. I. THE FIRST SAMPLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuler, Simon C.; Vaz, Zachary A.; Santrich, Orlando J. Katime

    2015-12-10

    We present newly derived stellar parameters and the detailed abundances of 19 elements for seven stars with small planets discovered by NASA's Kepler Mission. Each star, save one, has at least one planet with a radius ≤1.6 R⊕, suggesting a primarily rocky composition. The stellar parameters and abundances are derived from high signal-to-noise ratio, high-resolution echelle spectroscopy obtained with the 10 m Keck I telescope and High Resolution Echelle Spectrometer using standard spectroscopic techniques. The metallicities of the seven stars range from -0.32 to +0.13 dex, with an average metallicity that is subsolar, supporting previous suggestions that, unlike Jupiter-type giant planets, small planets do not form preferentially around metal-rich stars. The abundances of elements other than iron are in line with a population of Galactic disk stars, and despite our modest sample size, we find hints that the compositions of stars with small planets are similar to those of stars without known planets and with Neptune-size planets, but not to those of stars with giant planets. This suggests that the formation of small planets does not require exceptional host-star compositions and that small planets may be ubiquitous in the Galaxy. We compare our derived abundances (which have typical uncertainties of ≲0.04 dex) to the condensation temperature of the elements; a correlation between the two has been suggested as a possible signature of rocky planet formation. None of the stars demonstrate the putative rocky planet signature, despite at least three of the stars having rocky planets estimated to contain enough refractory material to produce the signature, if real. More detailed abundance analyses of stars known to host small planets are needed to verify our results and place ever more stringent constraints on planet formation models.

  19. Extreme Vertical Gusts in the Atmospheric Boundary Layer

    DTIC Science & Technology

    2015-07-01

    significant effect on the statistics of the rare, extreme gusts. In the lowest 5,000 ft, boundary layer effects make small to moderate vertical [record truncated; the remainder is table-of-contents residue ("2.4 Effects of Gust Shape") and a glossary fragment defining the adiabatic lapse rate as the rate of change of temperature with altitude that would occur if a parcel of air was transported sufficiently]

  20. Responsiveness of SF-36 and Lower Extremity Functional Scale for assessing outcomes in traumatic injuries of lower extremities.

    PubMed

    Pan, Shin-Liang; Liang, Huey-Wen; Hou, Wen-Hsuan; Yeh, Tian-Shin

    2014-11-01

    To assess the responsiveness of one generic questionnaire, the Medical Outcomes Study Short Form-36 (SF-36), and one region-specific outcome measure, the Lower Extremity Functional Scale (LEFS), in patients with traumatic injuries of the lower extremities. A prospective, observational study of patients after traumatic injuries of the lower extremities; assessments were performed at baseline and 3 months later. In-patients and out-patients in two university hospitals in Taiwan. A convenience sample of 109 subjects was evaluated and 94 (86%) were followed. Not applicable. Assessment of responsiveness with a distribution-based approach (effect size, standardized response mean [SRM], minimal detectable change) and an anchor-based approach (receiver operating characteristic [ROC] curve analysis). The LEFS and the physical component score (PCS) of the SF-36 were all responsive to global improvement, with fair-to-good accuracy in discriminating between participants with and without improvement. The area under the curve obtained by ROC analysis for the LEFS and the SF-36 PCS was similar (0.65 vs. 0.70, p = 0.26). Our findings revealed comparable responsiveness of the LEFS and the PCS of the SF-36 in a sample of subjects with traumatic injuries of the lower limbs. Either type of functional measure would be suitable for use in clinical trials where improvement in function is an endpoint of interest. Copyright © 2014 Elsevier Ltd. All rights reserved.
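
    The distribution-based and anchor-based indices used here are straightforward to compute; below is a sketch with synthetic scores (group sizes and distributions assumed), using the rank-based identity between the ROC area and the Mann-Whitney statistic:

        # Responsiveness indices: effect size (mean change / SD of baseline),
        # standardized response mean (mean change / SD of change), and the ROC
        # area of the change score against an improvement anchor.
        import numpy as np
        from scipy.stats import rankdata

        rng = np.random.default_rng(5)
        baseline = rng.normal(40, 10, 94)
        change = rng.normal(8, 12, 94)        # follow-up minus baseline (synthetic)
        improved = rng.random(94) < 0.6       # anchor: global improvement (synthetic)

        es = change.mean() / baseline.std(ddof=1)
        srm = change.mean() / change.std(ddof=1)

        # AUC via the Mann-Whitney identity on ranks of the change scores.
        ranks = rankdata(change)
        n1, n0 = improved.sum(), (~improved).sum()
        auc = (ranks[improved].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
        print(f"ES = {es:.2f}, SRM = {srm:.2f}, AUC = {auc:.2f}")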

  1. Determination of extremely low (236)U/(238)U isotope ratios in environmental samples by sector-field inductively coupled plasma mass spectrometry using high-efficiency sample introduction.

    PubMed

    Boulyga, Sergei F; Heumann, Klaus G

    2006-01-01

    A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of (236)U at concentrations down to 3 x 10(-14) g g(-1) and of extremely low (236)U/(238)U isotope ratios, down to 10(-7), in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5,000 counts fg(-1) uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH(+)/U(+) down to a level of 10(-6). An abundance sensitivity of 3 x 10(-7) was observed for (236)U/(238)U isotope ratio measurements at mass resolution 4000. The detection limit for (236)U and the lowest detectable (236)U/(238)U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values for alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the (236)U/(238)U isotope ratio is a much more sensitive and accurate marker of environmental contamination by spent uranium than the (235)U/(238)U isotope ratio. The ICP-MS technique allowed, for the first time, detection of irradiated uranium in soil samples even at distances of more than 200 km to the north of the Chernobyl NPP (Mogilev region). The concentration of (236)U in the upper 0-10 cm soil layers varied from 2 x 10(-9) g g(-1) within radioactive spots close to the Chernobyl NPP to 3 x 10(-13) g g(-1) at a sampling site located >200 km from Chernobyl.

  2. Gravo-Aeroelastic Scaling for Extreme-Scale Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fingersh, Lee J; Loth, Eric; Kaminski, Meghan

    2017-06-09

    A scaling methodology is described in the present paper for extreme-scale wind turbines (rated at 10 MW or more) that allows their sub-scale turbines to capture the key blade dynamics and aeroelastic deflections. For extreme-scale turbines, such deflections and dynamics can be substantial and are primarily driven by centrifugal, thrust and gravity forces as well as the net torque. Each of these is in turn a function of various wind conditions, including turbulence levels that cause shear, veer, and gust loads. The 13.2 MW rated SNL100-03 rotor design, having a blade length of 100 meters, is herein scaled to the CART3 wind turbine at NREL using 25% geometric scaling, with blade mass and wind speed scaled by gravo-aeroelastic constraints. In order to mimic the ultralight structure of the advanced-concept extreme-scale design, the scaling results indicate that the gravo-aeroelastically scaled blades for the CART3 would be three times lighter and 25% longer than the current CART3 blades. A benefit of this scaling approach is that the scaled wind speeds needed for testing are reduced (in this case by a factor of two), allowing testing under extreme gust conditions to be much more easily achieved. Most importantly, this scaling approach can investigate extreme-scale concepts, including dynamic behaviors and aeroelastic deflections (including flutter), at an extremely small fraction of the full-scale cost.
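
    A back-of-envelope sketch of the scaling relations involved, assuming gravity-dominated (Froude-number) matching so that velocities scale with the square root of the length scale; the paper's blade-mass target comes from gravo-aeroelastic constraints, so the cubic mass line below is only the equal-density geometric baseline:

        # Froude-scaling baseline for a 25% geometric model: velocity and time
        # scale as sqrt(s); mass scales as s**3 at equal density. The
        # gravo-aeroelastic constraints then lighten the blades further.
        s = 0.25                              # geometric length scale
        print("velocity scale:", s ** 0.5)    # 0.5 -> test wind speeds halved
        print("time scale:", s ** 0.5)
        print("geometric mass scale:", s ** 3)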

  3. A simple device to convert a small-animal PET scanner into a multi-sample tissue and injection syringe counter.

    PubMed

    Green, Michael V; Seidel, Jurgen; Choyke, Peter L; Jagoda, Elaine M

    2017-10-01

    We describe a simple fixture that can be added to the imaging bed of a small-animal PET scanner that allows for automated counting of multiple organ or tissue samples from mouse-sized animals and counting of injection syringes prior to administration of the radiotracer. The combination of imaging and counting capabilities in the same machine offers advantages in certain experimental settings. A polyethylene block of plastic, sculpted to mate with the animal imaging bed of a small-animal PET scanner, is machined to receive twelve 5-ml containers, each capable of holding an entire organ from a mouse-sized animal. In addition, a triangular cross-section slot is machined down the centerline of the block to secure injection syringes from 1-ml to 3-ml in size. The sample holder is scanned in PET whole-body mode to image all samples or in one bed position to image a filled injection syringe. Total radioactivity in each sample or syringe is determined from the reconstructed images of these objects using volume re-projection of the coronal images and a single region-of-interest for each. We tested the accuracy of this method by comparing PET estimates of sample and syringe activity with well counter and dose calibrator estimates of these same activities. PET and well counting of the same samples gave near-identical results (in MBq: R² = 0.99, slope = 0.99, intercept = 0.00 MBq). PET syringe and dose calibrator measurements of syringe activity in MBq were also similar (R² = 0.99, slope = 0.99, intercept = -0.22 MBq). A small-animal PET scanner can be easily converted into a multi-sample and syringe counting device by the addition of a sample block constructed for that purpose. This capability, combined with live animal imaging, can improve efficiency and flexibility in certain experimental settings. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Extreme habitats as refuge from parasite infections? Evidence from an extremophile fish

    NASA Astrophysics Data System (ADS)

    Tobler, Michael; Schlupp, Ingo; García de León, Francisco J.; Glaubrecht, Matthias; Plath, Martin

    2007-05-01

    Living in extreme habitats typically requires costly adaptations of any organism tolerating these conditions, but very little is known about potential benefits that trade off these costs. We suggest that extreme habitats may function as refuge from parasite infections, since parasites can become locally extinct either directly, through selection by an extreme environmental parameter on free-living parasite stages, or indirectly, through selection on other host species involved in its life cycle. We tested this hypothesis in a small freshwater fish, the Atlantic molly ( Poecilia mexicana) that inhabits normal freshwaters as well as extreme habitats containing high concentrations of toxic hydrogen sulfide. Populations from such extreme habitats are significantly less parasitized by the trematode Uvulifer sp. than a population from a non-sulfidic habitat. We suggest that reduced parasite prevalence may be a benefit of living in sulfidic habitats.

  5. Measuring Blood Glucose Concentrations in Photometric Glucometers Requiring Very Small Sample Volumes.

    PubMed

    Demitri, Nevine; Zoubir, Abdelhak M

    2017-01-01

    Glucometers present an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability features. Based on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than is currently state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared to state-of-the-art methods.
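
    The segmentation step rests on iterative mode seeking over pixel intensities. The paper's specific variants are not reproduced here; a generic one-dimensional mean-shift sketch on synthetic intensities (bandwidth and cluster parameters assumed) conveys the idea:

        # Generic 1-D mean-shift: repeatedly move the estimate to the
        # kernel-weighted mean of nearby intensities until it converges to a
        # local mode of the intensity distribution.
        import numpy as np

        rng = np.random.default_rng(6)
        intensities = np.concatenate([rng.normal(60, 5, 3000),     # background
                                      rng.normal(120, 8, 1000)])   # region of interest

        def mean_shift_mode(x, start, bandwidth=10.0, tol=1e-3, max_iter=100):
            m = float(start)
            for _ in range(max_iter):
                w = np.exp(-0.5 * ((x - m) / bandwidth) ** 2)   # Gaussian kernel
                m_new = np.sum(w * x) / np.sum(w)
                if abs(m_new - m) < tol:                        # convergence check
                    break
                m = m_new
            return m

        print("mode found from start 110:", mean_shift_mode(intensities, 110))
        print("mode found from start 70:", mean_shift_mode(intensities, 70))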

  6. Inferring the anthropogenic contribution to local temperature extremes

    DOE PAGES

    Stone, Dáithí A.; Paciorek, Christopher J.; Prabhat; ...

    2013-03-19

    Here, in PNAS, Hansen et al. document an observed planet-wide increase in the frequency of extremely hot months and a decrease in the frequency of extremely cold months, consistent with earlier studies. This analysis is achieved through aggregation of gridded monthly temperature measurements from all over the planet. Such aggregation is advantageous in achieving statistical sampling power; however, it sacrifices regional specificity. In that light, we find the conclusion of Hansen et al. that "the extreme summer climate anomalies in Texas in 2011, in Moscow in 2010, and in France in 2003 almost certainly would not have occurred in the absence of global warming" to be unsubstantiated by their analysis.

  8. Technology perspectives in the future exploration of extreme environments

    NASA Astrophysics Data System (ADS)

    Cutts, J.; Balint, T.; Kolawa, El.; Peterson, C.

    2007-08-01

    Solar System exploration is driven by high-priority science goals and objectives at diverse destinations, as described in the NRC Decadal Survey and in NASA's 2006 Solar System Exploration (SSE) Roadmap. Proposed missions to these targets encounter extreme environments, including high or low temperatures, high pressure, corrosion, high heat flux, radiation and thermal cycling. These conditions are often coupled, such as low temperature and high radiation at Europa, and high temperature and high pressure near the surface of Venus. Mitigation of these environmental conditions frequently reaches beyond technologies developed for terrestrial applications, for example, by the automotive and oil industries. Therefore, space agencies require dedicated technology developments to enable these future missions. Within NASA, proposed missions are divided into three categories. Competed small (Discovery class) and medium (New Frontiers class) missions are cost capped, thus limiting significant technology developments. Therefore, large (Flagship class) missions are required not only to tackle key science questions that cannot be addressed by smaller missions, but also to develop mission-enabling technologies that can feed forward to smaller missions as well. In a newly completed extreme-environment technology assessment at NASA, we evaluated technologies from the current State of Practice (SoP) to advanced concepts for proposed missions over the next decades. Highlights of this report are discussed here, including systems architectures, such as hybrid systems; protection systems; high-temperature electronics; power generation and storage; mobility technologies; sample acquisition and mechanisms; and the need to test these technologies in relevant environments. It is expected that the findings, documented in detail in NASA's Extreme Environments Technologies report, would help identify future technology investment areas and, in turn, enable or enhance planned SSE missions.

  9. Inadequacy of Conventional Grab Sampling for Remediation Decision-Making for Metal Contamination at Small-Arms Ranges.

    PubMed

    Clausen, J L; Georgian, T; Gardner, K H; Douglas, T A

    2018-01-01

    Research shows that grab sampling is inadequate for evaluating military ranges contaminated with energetics because of their highly heterogeneous distribution. Similar studies assessing the heterogeneous distribution of metals at small-arms ranges (SARs) are lacking. To address this, we evaluated whether grab sampling provides appropriate data for performing risk analysis at metal-contaminated SARs characterized with 30-48 grab samples. We evaluated the extractable metal content of Cu, Pb, Sb, and Zn in the field data using a Monte Carlo random resampling with replacement (bootstrapping) simulation approach. Results indicate the 95% confidence interval of the mean for Pb (432 mg/kg) at one site was 200-700 mg/kg, with a data range of 5-4500 mg/kg. Considering that the U.S. Environmental Protection Agency screening level for lead is 400 mg/kg, the necessity of cleanup at this site is unclear. Resampling based on populations of 7 and 15 samples, sample sizes more realistic for the area, yielded high false-negative rates.
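
    A minimal sketch of the bootstrap described (resampling with replacement and reading off percentile confidence limits); the Pb concentrations below are synthetic stand-ins for the field grab samples:

        # Bootstrap the mean Pb concentration from grab-sample data and form a
        # 95% percentile confidence interval. Data are synthetic.
        import numpy as np

        rng = np.random.default_rng(7)
        pb = rng.lognormal(mean=5.5, sigma=1.2, size=40)   # mg/kg (synthetic)

        boot_means = np.array([rng.choice(pb, size=pb.size, replace=True).mean()
                               for _ in range(10000)])
        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean = {pb.mean():.0f} mg/kg, 95% CI = ({lo:.0f}, {hi:.0f})")
        # Repeating with resample sizes of 7 or 15 mimics sparser sampling and
        # shows how often the 400 mg/kg screening level would be missed.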

  10. Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)

    NASA Astrophysics Data System (ADS)

    Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.

    2013-12-01

    We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations correspond to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.

  11. Computational discovery of extremal microstructure families

    PubMed Central

    Chen, Desai; Skouras, Mélina; Zhu, Bo; Matusik, Wojciech

    2018-01-01

    Modern fabrication techniques, such as additive manufacturing, can be used to create materials with complex custom internal structures. These engineered materials exhibit a much broader range of bulk properties than their base materials and are typically referred to as metamaterials or microstructures. Although metamaterials with extraordinary properties have many applications, designing them is very difficult and is generally done by hand. We propose a computational approach to discover families of microstructures with extremal macroscale properties automatically. Using efficient simulation and sampling techniques, we compute the space of mechanical properties covered by physically realizable microstructures. Our system then clusters microstructures with common topologies into families. Parameterized templates are eventually extracted from families to generate new microstructure designs. We demonstrate these capabilities on the computational design of mechanical metamaterials and present five auxetic microstructure families with extremal elastic material properties. Our study opens the way for the completely automated discovery of extremal microstructures across multiple domains of physics, including applications reliant on thermal, electrical, and magnetic properties. PMID:29376124

  12. Small Intestinal Infections.

    PubMed

    Munot, Khushboo; Kotler, Donald P

    2016-06-01

    Small intestinal infections are extremely common worldwide. They may be bacterial, viral, or parasitic in etiology. Most are foodborne or waterborne, with specific etiologies differing by region and with diverse pathophysiologies. Very young, very old, and immune-deficient individuals are the most vulnerable to morbidity or mortality from small intestinal infections. There have been significant advances in diagnostic sophistication with the development and early application of molecular diagnostic assays, though these tests have not become mainstream. The lack of rapid diagnoses combined with the self-limited nature of small intestinal infections has hampered the development of specific and effective treatments other than oral rehydration. Antibiotics are not indicated in the absence of an etiologic diagnosis, and not at all in the case of some infections.

  13. Intensification of hot extremes in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diffenbaugh, Noah; Ashfaq, Moetasim

    Governments are currently considering policies that will limit greenhouse gas concentrations, including negotiation of an international treaty to replace the expiring Kyoto Protocol. Existing mitigation targets have arisen primarily from political negotiations, and the ability of such policies to avoid dangerous impacts is still uncertain. Using a large suite of climate model experiments, we find that substantial intensification of hot extremes could occur within the next 3 decades, below the 2 C global warming target currently being considered by policy makers. We also find that the intensification of hot extremes is associated with a shift towards more anticyclonic atmospheric circulation during the warm season, along with warm-season drying over much of the U.S. The possibility that intensification of hot extremes could result from relatively small increases in greenhouse gas concentrations suggests that constraining global warming to 2 C may not be sufficient to avoid dangerous climate change.

  14. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adekola, A.S.; Colaresi, J.; Douwen, J.

    2015-07-01

    Environmental scientific research requires a detector with sensitivity low enough to reveal the presence of any contaminant in the sample within a reasonable counting time. Canberra developed a germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small-anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors and offers significant improvement over existing coaxial and well detectors. Energy resolution of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and 2.0-2.3 keV FWHM at 1332 keV γ-ray energy is guaranteed for detector volumes up to 425 cm³. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications by revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance detection sensitivity, resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration methods (In Situ Object Calibration Software, or ISOCS, and Laboratory Sourceless Calibration Software, or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing, available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum

  15. Introducing the refined gravity hypothesis of extreme sexual size dimorphism

    PubMed Central

    2010-01-01

    Background: Explanations for the evolution of female-biased, extreme Sexual Size Dimorphism (SSD), which has puzzled researchers since Darwin, are still controversial. Here we propose an extension of the Gravity Hypothesis (i.e., the GH, which postulates a climbing advantage for small males) that, in conjunction with the fecundity hypothesis, appears to have the most general power to explain the evolution of SSD in spiders so far. In this "Bridging GH" we propose that bridging locomotion (i.e., walking upside-down under own-made silk bridges) may be behind the evolution of extreme SSD. A biomechanical model shows that there is a physical constraint on bridging for large spiders. This should lead to a trade-off between other traits and dispersal, in which bridging would favor smaller sizes and other selective forces (e.g., fecundity selection in females) would favor larger sizes. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. We predicted that both large males and large females would show a lower propensity to bridge, and that SSD would be negatively correlated with sexual dimorphism in bridging propensity. To test these hypotheses we experimentally induced bridging in males and females of 13 spider species belonging to the two clades in which bridging locomotion has evolved independently and in which most of the cases of extreme SSD in spiders are found. Results: We found that 1) as the degree of SSD increased and females became larger, females tended to bridge less relative to males, and 2) smaller males and females showed a higher propensity to bridge. Conclusions: Physical constraints make bridging inefficient for large spiders. Thus, in species where bridging is a very common mode of locomotion, small males, by being more efficient at bridging, will be competitively superior and enjoy more mating opportunities. This "Bridging GH" helps to solve the controversial question of what keeps males small.

  16. One Small Flare

    NASA Image and Video Library

    2018-02-15

    The sun's only visible active region sputtered and spurted and eventually unleashed a small (C-class) flare (Feb. 7, 2018). The flare appears as a brief, bright flash about mid-way through the half-day clip. Normally, we do not pay much attention to flares this small, but it was just about the only real solar activity over the past week as the sun is slowly approaching its quiet period of the 11-year solar cycle. These images were taken in a wavelength of extreme ultraviolet light. Movies are available at https://photojournal.jpl.nasa.gov/catalog/PIA22244

  17. Evidence for multidecadal variability in US extreme sea level records

    NASA Astrophysics Data System (ADS)

    Wahl, Thomas; Chambers, Don P.

    2015-03-01

    We analyze a set of 20 tide gauge records covering the contiguous United States (US) coastline and the period from 1929 to 2013 to identify long-term trends and multidecadal variations in extreme sea levels (ESLs) relative to changes in mean sea level (MSL). Different data sampling and analysis techniques are applied to test the robustness of the results against the selected methodology. Significant but small long-term trends in ESLs above/below MSL are found at individual sites along most coastline stretches, but are mostly confined to the southeast coast and the winter season when storm surges are primarily driven by extratropical cyclones. We identify six regions with broadly coherent and considerable multidecadal ESL variations unrelated to MSL changes. Using a quasi-nonstationary extreme value analysis, we show that the latter would have caused variations in design-relevant return water levels (50-200 year return periods) ranging from ~10 cm to as much as 110 cm across the six regions. The results raise questions as to the applicability of the "MSL offset method," assuming that ESL changes are primarily driven by changes in MSL without allowing for distinct long-term trends or low-frequency variations. Identifying the coherent multidecadal ESL variability is crucial in order to understand the physical driving factors. Ultimately, this information must be incorporated into coastal design and adaptation processes.
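
    A minimal sketch of the return-level calculation behind such numbers (Python, synthetic stand-in data): fit a generalized extreme value (GEV) distribution to annual maxima and read design return levels off its quantile function. The paper's quasi-nonstationary analysis would refit a model of this kind over moving time windows.

        import numpy as np
        from scipy.stats import genextreme

        # Hypothetical stand-in for annual maxima of ESL residuals above MSL (cm).
        rng = np.random.default_rng(0)
        annual_maxima = genextreme.rvs(c=-0.1, loc=80, scale=15, size=85, random_state=rng)

        # Fit a GEV (Fisher-Tippett family) to the block maxima.
        c, loc, scale = genextreme.fit(annual_maxima)

        # The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
        for T in (50, 100, 200):
            print(f"{T:4d}-yr return level: {genextreme.ppf(1 - 1/T, c, loc=loc, scale=scale):.0f} cm")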

  18. Preparing Monodisperse Macromolecular Samples for Successful Biological Small-Angle X-ray and Neutron Scattering Experiments

    PubMed Central

    Jeffries, Cy M.; Graewert, Melissa A.; Blanchet, Clément E.; Langley, David B.; Whitten, Andrew E.; Svergun, Dmitri I

    2017-01-01

    Small-angle X-ray and neutron scattering (SAXS and SANS) are techniques used to extract structural parameters and determine the overall structures and shapes of biological macromolecules, complexes and assemblies in solution. The scattering intensities measured from a sample contain contributions from all atoms within the illuminated sample volume including the solvent and buffer components as well as the macromolecules of interest. In order to obtain structural information, it is essential to prepare an exactly matched solvent blank so that background scattering contributions can be accurately subtracted from the sample scattering to obtain the net scattering from the macromolecules in the sample. In addition, sample heterogeneity caused by contaminants, aggregates, mismatched solvents, radiation damage or other factors can severely influence and complicate data analysis so it is essential that the samples are pure and monodisperse for the duration of the experiment. This Protocol outlines the basic physics of SAXS and SANS and reveals how the underlying conceptual principles of the techniques ultimately ‘translate’ into practical laboratory guidance for the production of samples of sufficiently high quality for scattering experiments. The procedure describes how to prepare and characterize protein and nucleic acid samples for both SAXS and SANS using gel electrophoresis, size exclusion chromatography and light scattering. Also included are procedures specific to X-rays (in-line size exclusion chromatography SAXS) and neutrons, specifically preparing samples for contrast matching/variation experiments and deuterium labeling of proteins. PMID:27711050
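
    A minimal sketch of the background-subtraction principle the protocol builds on: the net macromolecular signal is the sample curve minus the exactly matched solvent blank, optionally scaled for transmission differences. The curves below are toy data, not the output of any beamline software.

        import numpy as np

        def net_scattering(i_sample, i_buffer, scale=1.0):
            """I_net(q) = I_sample(q) - scale * I_buffer(q); scale corrects small
            transmission differences, and scale = 1 assumes an exactly matched blank."""
            return i_sample - scale * i_buffer

        q = np.linspace(0.01, 0.5, 200)                            # scattering vector (1/A)
        i_buffer = np.full_like(q, 5.0)                            # flat solvent/instrument background
        i_sample = 100 * np.exp(-(q * 30.0) ** 2 / 3) + i_buffer   # toy Guinier-like particle signal
        i_net = net_scattering(i_sample, i_buffer)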

  19. Static, Mixed-Array Total Evaporation for Improved Quantitation of Plutonium Minor Isotopes in Small Samples

    NASA Astrophysics Data System (ADS)

    Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; Spencer, Khalil J.

    2016-06-01

    Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios (<10⁻⁶) within already limited quantities of sample. Herein, we investigate the application of static, mixed array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.

  20. Static, Mixed-Array Total Evaporation for Improved Quantitation of Plutonium Minor Isotopes in Small Samples.

    PubMed

    Stanley, F E; Byerly, Benjamin L; Thomas, Mariam R; Spencer, Khalil J

    2016-06-01

    Actinide isotope measurements are a critical signature capability in the modern nuclear forensics "toolbox", especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios (<10⁻⁶) within already limited quantities of sample. Herein, we investigate the application of static, mixed array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.

  1. Taxonomic analysis of extremely halophilic archaea isolated from 56-year-old Dead Sea brine samples.

    PubMed

    Arahal, D R; Gutiérrez, M C; Volcani, B E; Ventosa, A

    2000-10-01

    A taxonomic study comprising both phenotypic and genotypic characterization has been carried out on a total of 158 extremely halophilic aerobic archaeal strains. These strains were isolated from enrichments prepared from Dead Sea water samples dating from 1936 that were collected by B. E. Volcani for the demonstration of microbial life in the Dead Sea. The isolates were examined for 126 morphological, physiological, biochemical and nutritional tests. Numerical analysis of the data, using the S_J coefficient and the UPGMA clustering method, showed that the isolates clustered into six phenons. Twenty-two of the 158 strains used in this study were characterized previously (Arahal et al., 1996) and were placed into five phenotypic groups. The genotypic study included both the determination of the guanine-plus-cytosine content of the DNA and DNA-DNA hybridization studies. For this purpose, representative strains from the six phenons were chosen. These groups were found to represent members of three different genera of the family Halobacteriaceae: Haloarcula (phenons A, B, and C), Haloferax (phenons D and E) and Halobacterium (phenon F). Some of these, such as Haloarcula hispanica, had never been reported to occur in the Dead Sea, while Haloferax volcanii (phenons D and E) was only described from the Dead Sea in studies carried out several decades after Volcani's work.
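
    A minimal sketch of the numerical-taxonomy step (Python, random stand-in data): the S_J (Jaccard) similarity corresponds to scipy's "jaccard" distance, and UPGMA is average-linkage hierarchical clustering. The distance cut-off used to delimit phenons below is arbitrary.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        # Hypothetical stand-in: 158 strains scored positive/negative on 126 tests.
        rng = np.random.default_rng(1)
        profiles = rng.integers(0, 2, size=(158, 126)).astype(bool)

        # Jaccard distance = 1 - S_J; "average" linkage is the UPGMA method.
        distances = pdist(profiles, metric="jaccard")
        tree = linkage(distances, method="average")
        phenons = fcluster(tree, t=0.6, criterion="distance")
        print(np.unique(phenons).size, "phenons at an arbitrary distance cut-off")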

  2. Statistic analysis of annual total ozone extremes for the period 1964-1988

    NASA Technical Reports Server (NTRS)

    Krzyscin, Janusz W.

    1994-01-01

    Annual extremes of total column amount of ozone (in the period 1964-1988) from a network of 29 Dobson stations have been examined using extreme value analysis. The extremes have been calculated as the highest deviation of daily mean total ozone from its long-term monthly mean, normalized by the monthly standard deviations. The extremes have been selected from the direct-Sun total ozone observations only. Extremes resulting from abrupt changes in ozone (day-to-day changes greater than 20 percent) have not been considered. The ordered extremes (maxima in ascending order, minima in descending order) have been fitted to one of three forms of the Fisher-Tippett extreme value distribution by the nonlinear least-squares (Levenberg-Marquardt) method. We have found that the ordered extremes from a majority of Dobson stations lie close to Fisher-Tippett type III. The extreme value analysis of the composite annual extremes (combined from averages of the annual extremes selected at individual stations) has shown that the composite maxima are fitted by the Fisher-Tippett type III and the composite minima by the Fisher-Tippett type I. The difference between the Fisher-Tippett types of the composite extremes seems to be related to the ozone downward trend. Extreme value prognoses for the period 1964-2014 (derived from data taken at all analyzed stations, the North American stations, and the European stations) have revealed that the prognostic extremes are close to the largest annual extremes in the period 1964-1988 and there are only small regional differences in the prognoses.
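
    A minimal sketch of the fitting approach named above (Python, synthetic data): ordered extremes are matched to a Fisher-Tippett distribution function by nonlinear least squares, with scipy's curve_fit defaulting to the Levenberg-Marquardt algorithm. The type I (Gumbel) form and Gringorten plotting positions are illustrative choices, not the paper's exact setup.

        import numpy as np
        from scipy.optimize import curve_fit

        def gumbel_cdf(x, mu, beta):
            # Fisher-Tippett type I (Gumbel) distribution function.
            return np.exp(-np.exp(-(x - mu) / beta))

        rng = np.random.default_rng(2)
        extremes = np.sort(rng.gumbel(loc=2.0, scale=0.5, size=25))  # synthetic normalized extremes
        n = extremes.size
        positions = (np.arange(1, n + 1) - 0.44) / (n + 0.12)        # Gringorten plotting positions

        # With no bounds given, curve_fit defaults to Levenberg-Marquardt.
        (mu, beta), _ = curve_fit(gumbel_cdf, extremes, positions,
                                  p0=(extremes.mean(), extremes.std()))
        print(f"mu = {mu:.2f}, beta = {beta:.2f}")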

  3. Epidemiology of extremity fractures in the Netherlands.

    PubMed

    Beerekamp, M S H; de Muinck Keizer, R J O; Schep, N W L; Ubbink, D T; Panneman, M J M; Goslings, J C

    2017-07-01

    Insight into epidemiologic data of extremity fractures is relevant to identify people at risk. By analyzing age- and gender-specific fracture incidence and treatment patterns we may adjust future policy, take preventive measures and optimize health care management. Current epidemiologic data on extremity fractures and their treatment are scarce, outdated or limited to a small spectrum of fractures. The aim of this study was to assess trends in incidence and treatment of extremity fractures between 2004 and 2012 in relation to gender and age. We used a combination of national registries of patients aged ≥16 years with extremity fractures. Fractures were coded according to the International Classification of Diseases (ICD) 10 and allocated to an anatomic region. ICD-10 codes were used for combining the data of the registries. Absolute numbers, incidences, numbers of patients treated in university hospitals and surgically treated patients were reported. A binary logistic regression was used to calculate trends during the study period. From 2004 to 2012 the Dutch population aged ≥16 years grew from 13,047,018 to 13,639,412 inhabitants, particularly in the higher age groups of 46 years and older. The absolute number of extremity fractures increased significantly from 129,188 to 176,129 (OR 1.308 [1.299-1.318]), except for forearm and lower leg fractures. Incidences increased significantly (3-4%) for wrist, hand/finger, hip/upper leg, ankle and foot/toe fractures. In contrast to the older age categories of 66 years and older, in the younger age categories of 16 to 35 years fractures of the extremities were more frequent in men than in women. Treatment gradually moved towards non-university hospitals for all except forearm fractures. Both relative and absolute numbers increased for surgical treatment of clavicle/shoulder, forearm, wrist and hand/finger fractures. Conversely, lower extremity fractures showed an increase in non-surgical treatment, except for lower leg fractures.

  4. The Robust Relationship Between Extreme Precipitation and Convective Organization in Idealized Numerical Modeling Simulations

    NASA Astrophysics Data System (ADS)

    Bao, Jiawei; Sherwood, Steven C.; Colin, Maxime; Dixit, Vishal

    2017-10-01

    The behavior of tropical extreme precipitation under changes in sea surface temperatures (SSTs) is investigated with the Weather Research and Forecasting Model (WRF) in three sets of idealized simulations: small-domain tropical radiative-convective equilibrium (RCE), quasi-global "aquapatch", and RCE with prescribed mean ascent from the tropical band in the aquapatch. We find that, across the variations introduced including SST, large-scale circulation, domain size, horizontal resolution, and convective parameterization, the change in the degree of convective organization emerges as a robust mechanism affecting extreme precipitation. Higher ratios of change in extreme precipitation to change in mean surface water vapor are associated with increases in the degree of organization, while lower ratios correspond to decreases in the degree of organization. The spread of such changes is much larger in RCE than aquapatch tropics, suggesting that small RCE domains may be unreliable for assessing the temperature-dependence of extreme precipitation or convective organization. When the degree of organization does not change, simulated extreme precipitation scales with surface water vapor. This slightly exceeds Clausius-Clapeyron (CC) scaling, because the near-surface air warms 10-25% faster than the SST in all experiments. Also for simulations analyzed here with convective parameterizations, there is an increasing trend of organization with SST.
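
    For context, a small sketch of the Clausius-Clapeyron baseline the abstract compares against: saturation water vapour, and hence CC-scaled extreme precipitation, grows by roughly 7% per kelvin, so a simulated change can be checked against exp(0.07 ΔT). The values below are illustrative only.

        import numpy as np

        ALPHA_CC = 0.07  # ~7% more saturation water vapour per kelvin of warming

        def cc_expected_ratio(delta_t):
            """Change factor in extreme precipitation if it simply tracks
            saturation vapour (Clausius-Clapeyron scaling)."""
            return np.exp(ALPHA_CC * delta_t)

        # A +2 K experiment implies roughly +15% under pure CC scaling; the
        # simulations above slightly exceed this because near-surface air
        # warms 10-25% faster than the SST.
        print(f"{100 * (cc_expected_ratio(2.0) - 1):.0f}%")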

  5. Precise Th/U-dating of small and heavily coated samples of deep sea corals

    NASA Astrophysics Data System (ADS)

    Lomitschka, Michael; Mangini, Augusto

    1999-07-01

    Marine carbonate skeletons like deep-sea corals are frequently coated with iron and manganese oxides/hydroxides which adsorb additional thorium and uranium out of the sea water. A new cleaning procedure has been developed to reduce this contamination. In this additional cleaning step a solution of Na₂EDTA (disodium EDTA) and ascorbic acid is used whose composition is optimised especially for samples of about 20 mg. It was first tested on aliquots of a reef-building coral which had been artificially contaminated with powdered ferromanganese nodule. Applied to heavily contaminated deep-sea corals (Scleractinia), it reduced excess ²³⁰Th by another order of magnitude beyond the usual cleaning procedures. The measurement of at least three fractions of different contamination, together with an additional standard correction for contaminated carbonates, results in Th/U ages corrected for the authigenic component. Good agreement between Th/U and ¹⁴C ages can be achieved even for extremely coated corals.

  6. Sample Preparation and Extraction in Small Sample Volumes Suitable for Pediatric Clinical Studies: Challenges, Advances, and Experiences of a Bioanalytical HPLC-MS/MS Method Validation Using Enalapril and Enalaprilat

    PubMed Central

    Burckhardt, Bjoern B.; Laeer, Stephanie

    2015-01-01

    In the USA and Europe, medicines agencies force the development of child-appropriate medications and intend to increase the availability of information on pediatric use. This calls for bioanalytical methods that are able to deal with small sample volumes, as the trial-related blood loss is tightly restricted in children. The broadly used HPLC-MS/MS, while able to cope with small volumes, is susceptible to matrix effects, which hamper precise drug quantification by, for example, causing signal suppression. Sophisticated sample preparation and purification utilizing solid-phase extraction was applied to reduce and control matrix effects. A scale-up from a vacuum manifold to a positive pressure manifold was conducted to meet the demands of high throughput within a clinical setting. The challenges faced, advances made, and experience gained in solid-phase extraction are presented using the example of the bioanalytical method development and validation for low-volume samples (50 μL serum). Enalapril, enalaprilat, and benazepril served as sample drugs. The applied sample preparation and extraction successfully reduced the absolute and relative matrix effects to comply with international guidelines. Recoveries ranged from 77 to 104% for enalapril and from 93 to 118% for enalaprilat. The bioanalytical method, comprising sample extraction by solid-phase extraction, was fully validated according to FDA and EMA bioanalytical guidelines and was used in a Phase I study in 24 volunteers. PMID:25873972
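
    A minimal sketch (Python, hypothetical peak areas) of the matrix-effect check referred to above, using the EMA guideline's matrix factor: the analyte response in matrix divided by the response in neat solvent, normalised by the internal standard, with a 15% CV acceptance limit across matrix lots.

        import numpy as np

        # Hypothetical peak-area ratios (matrix response / neat-solvent response)
        # for the analyte and its internal standard (IS) in six serum lots.
        analyte_mf = np.array([0.92, 0.95, 0.90, 0.97, 0.93, 0.91])
        is_mf = np.array([0.94, 0.96, 0.92, 0.95, 0.94, 0.90])

        # EMA guideline: the CV of the IS-normalised matrix factor across
        # (at least) six matrix lots should not exceed 15%.
        normalised_mf = analyte_mf / is_mf
        cv = 100 * normalised_mf.std(ddof=1) / normalised_mf.mean()
        print(f"CV of IS-normalised MF: {cv:.1f}%")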

  7. Bidirectional reflectance distribution function of diffuse extreme ultraviolet scatterers and extreme ultraviolet baffle materials.

    PubMed

    Newell, M P; Keski-Kuha, R A

    1997-08-01

    Bidirectional reflectance distribution function (BRDF) measurements of a number of diffuse extreme ultraviolet (EUV) scatterers and EUV baffle materials have been performed with the Goddard EUV scatterometer. BRDF data are presented for white Spectralon SRS-99 at 121.6 nm; the data exhibit a non-Lambertian nature and a total hemispherical reflectance lower than 0.15. Data are also presented for an evaporated Cu black sample, a black Spectralon SRS-02 sample, and a Martin Optical Black sample at wavelengths of 58.4 and 121.6 nm and for angles of incidence of 15° and 45°. Overall, Martin Optical Black exhibited the lowest BRDF characteristic, with a total hemispherical reflectance of the order of 0.01 and measured BRDF values as low as 2 × 10⁻³ sr⁻¹.

  8. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    PubMed

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on the CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real-life examples.
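
    A minimal sketch of one of the simplest CI constructions compared in such studies: the AUC computed as a Mann-Whitney probability plus a percentile bootstrap interval. With very small samples this particular interval is known to undercover, which is exactly the kind of behaviour the paper's simulations quantify; the marker values below are synthetic.

        import numpy as np

        def auc_mann_whitney(pos, neg):
            """AUC as the Mann-Whitney probability P(marker_pos > marker_neg), ties count 1/2."""
            diff = pos[:, None] - neg[None, :]
            return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

        rng = np.random.default_rng(3)
        pos = rng.normal(1.0, 1.0, size=12)   # synthetic diseased-group marker values
        neg = rng.normal(0.0, 1.0, size=15)   # synthetic healthy-group marker values

        # Percentile bootstrap: resample each group with replacement.
        boot = np.array([auc_mann_whitney(rng.choice(pos, pos.size), rng.choice(neg, neg.size))
                         for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"AUC = {auc_mann_whitney(pos, neg):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")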

  9. The use of secondary ion mass spectrometry in forensic analyses of ultra-small samples

    NASA Astrophysics Data System (ADS)

    Cliff, John

    2010-05-01

    It is becoming increasingly important in forensic science to perform chemical and isotopic analyses on very small sample sizes. Moreover, in some instances the signature of interest may be incorporated in a vast background, making analyses impossible by bulk methods. Recent advances in instrumentation make secondary ion mass spectrometry (SIMS) a powerful tool to apply to these problems. As an introduction, we present three types of forensic analyses in which SIMS may be useful. The causal organism of anthrax (Bacillus anthracis) chelates Ca and other metals during spore formation. Thus, the spores contain a trace element signature related to the growth medium that produced the organisms. Although other techniques have been shown to be useful in analyzing these signatures, the sample size requirements are generally relatively large. We have shown that time-of-flight SIMS (TOF-SIMS), combined with multivariate analysis, can clearly separate Bacillus sp. cultures prepared in different growth media using analytical spot sizes containing approximately one nanogram of spores. An important emerging field in forensic analysis is the provenance of fecal pollution. The strategy of choice for these analyses, developing host-specific nucleic acid probes, has met with considerable difficulty due to lack of specificity of the probes. One potentially fruitful strategy is to combine in situ nucleic acid probing with high-precision isotopic analyses. Bulk analyses of human and bovine fecal bacteria, for example, indicate a relative difference in δ¹³C content of about 4 per mil. We have shown that sample sizes of several nanograms can be analyzed with the IMS 1280 with precisions capable of separating two per mil differences in δ¹³C. The NanoSIMS 50 is capable of much better spatial resolution than the IMS 1280, albeit at a cost of analytical precision. Nevertheless we have documented precision capable of separating five per mil differences in δ¹³C using analytical spots containing
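
    A minimal sketch of the delta notation behind the quoted per-mil differences (Python); the VPDB ¹³C/¹²C reference ratio below is a commonly cited literature value, and the sample ratios are hypothetical.

        R_VPDB = 0.0111802  # commonly cited 13C/12C ratio of the VPDB standard

        def delta13c_permil(r_sample):
            """Delta notation: d13C = (R_sample / R_standard - 1) * 1000, in per mil."""
            return (r_sample / R_VPDB - 1.0) * 1000.0

        # Two hypothetical ratios about 4 per mil apart, the bovine-vs-human
        # difference quoted in the abstract.
        print(delta13c_permil(0.0111350), delta13c_permil(0.0111800))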

  10. The extreme ultraviolet explorer

    NASA Technical Reports Server (NTRS)

    Bowyer, Stuart; Malina, Roger F.

    1990-01-01

    The Extreme Ultraviolet Explorer (EUVE) mission, currently scheduled for launch in September 1991, is described. The primary purpose of the mission is to survey the celestial sphere for astronomical sources of extreme ultraviolet (EUV) radiation. The survey will be accomplished with the use of three EUV telescopes, each sensitive to a different segment of the EUV band. A fourth telescope will perform a high-sensitivity search of a limited sample of the sky in the shortest wavelength bands. The all-sky survey will be carried out in the first six months of the mission and will be made in four bands, or colors. The second phase of the mission, conducted entirely by guest observers selected by NASA, will be devoted to spectroscopic observations of EUV sources. The performance of the instrument components is described. An end-to-end model of the mission, from a stellar source to the resulting scientific data, was constructed. Hypothetical data from astronomical sources processed through this model are shown.

  11. An extremely rare case of small-cell lung cancer harboring variant 2 of the EML4-ALK fusion gene.

    PubMed

    Toyokawa, Gouji; Takenoyama, Mitsuhiro; Taguchi, Kenichi; Toyozawa, Ryo; Inamasu, Eiko; Kojo, Miyako; Shiraishi, Yoshimasa; Morodomi, Yosuke; Takenaka, Tomoyoshi; Hirai, Fumihiko; Yamaguchi, Masafumi; Seto, Takashi; Shimokawa, Mototsugu; Ichinose, Yukito

    2013-09-01

    Anaplastic lymphoma kinase (ALK) fuses with echinoderm microtubule-associated protein-like 4 (EML4) to acquire transforming activity in lung adenocarcinomas. However, the presence of an EML4-ALK fusion gene in other lung cancer histologies is an extremely rare phenomenon. A 43-year-old female was referred to our department due to dyspnea on exertion and left back pain. Computed tomography (CT) showed a large mass in the upper lobe of the left lung and a massive left pleural effusion, while a CT-guided needle biopsy confirmed a diagnosis of small-cell lung cancer (SCLC). Surprisingly, the tumor was genetically confirmed to harbor the EML4-ALK fusion gene (variant 2). Although the patient underwent two regimens of cytotoxic chemotherapy for SCLC, she died approximately seven months after the administration of first-line chemotherapy. Our analysis of 30 consecutive patients with SCLC for EML4-ALK revealed that two patients, including the current patient and a patient we previously reported, harbored the EML4-ALK fusion gene. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Study of sample drilling techniques for Mars sample return missions

    NASA Technical Reports Server (NTRS)

    Mitchell, D. C.; Harris, P. T.

    1980-01-01

    To demonstrate the feasibility of acquiring various surface samples for a Mars sample return mission, the following tasks were performed: (1) design of a Mars rover-mounted drill system capable of acquiring crystalline rock cores, prediction of performance, mass, and power requirements for various size systems, and the generation of engineering drawings; (2) performance of simulated permafrost coring tests using a residual Apollo lunar surface drill; (3) design of a rock breaker system which can be used to produce small samples of rock chips from rocks which are too large to return to Earth, but too small to be cored with the rover-mounted drill; (4) design of sample containers for the selected regolith cores, rock cores, and small particulate or rock samples; and (5) design of sample handling and transfer techniques which will be required through all phases of sample acquisition, processing, and stowage on board the Earth return vehicle. A preliminary design of a lightweight rover-mounted sampling scoop was also developed.

  13. MEASUREMENT OF SMALL MECHANICAL VIBRATIONS OF BRAIN TISSUE EXPOSED TO EXTREMELY-LOW-FREQUENCY ELECTRIC FIELDS

    EPA Science Inventory

    Electromagnetic fields can interact with biological tissue both electrically and mechanically. This study investigated the mechanical interaction between brain tissue and an extremely-low-frequency (ELF) electric field by measuring the resultant vibrational amplitude. The exposur...

  14. Detection of small molecules with a flow immunosensor

    NASA Technical Reports Server (NTRS)

    Kusterbeck, Anne W.; Ligler, Frances S.

    1991-01-01

    We describe the development of an easy-to-use sensor with widespread applications for detecting small molecules. The flow immunosensor can analyze discrete samples in under one minute or continuously monitor a flowing stream for the presence of specific analytes. This detection system is extremely specific, and achieves a level of sensitivity which meets or exceeds the detection limits reported for rival assays. Because the system is also compact, transportable, and automated, it has the potential to impact diverse areas. For example, the flow immunosensor has successfully detected drugs of abuse and explosives, and may well address many of the needs of the environmental community with respect to continuous monitoring for pollutants. Efforts are underway to engineer a portable device in the field.

  15. Small-Sample DIF Estimation Using Log-Linear Smoothing: A SIBTEST Application. Research Report. ETS RR-07-10

    ERIC Educational Resources Information Center

    Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J.

    2007-01-01

    The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…

  16. Climatic extremes improve predictions of spatial patterns of tree species

    USGS Publications Warehouse

    Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.

    2009-01-01

    Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find clear, although comparably small, improvement (+20% in adjusted D², +8% and +3% in cross-validated True Skill Statistic and area under the receiver operating characteristic curve values) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability likely will improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.

  17. Static, mixed-array total evaporation for improved quantitation of plutonium minor isotopes in small samples

    DOE PAGES

    Stanley, F. E.; Byerly, Benjamin L.; Thomas, Mariam R.; ...

    2016-03-31

    Actinide isotope measurements are a critical signature capability in the modern nuclear forensics “toolbox”, especially when interrogating anthropogenic constituents in real-world scenarios. Unfortunately, established methodologies, such as traditional total evaporation via thermal ionization mass spectrometry, struggle to confidently measure low abundance isotope ratios (<10⁻⁶) within already limited quantities of sample. Herein, we investigate the application of static, mixed array total evaporation techniques as a straightforward means of improving plutonium minor isotope measurements, which have been resistant to enhancement in recent years because of elevated radiologic concerns. Furthermore, results are presented for small sample (~20 ng) applications involving a well-known plutonium isotope reference material, CRM-126a, and compared with traditional total evaporation methods.

  18. Cannabis, motivation, and life satisfaction in an internet sample

    PubMed Central

    Barnwell, Sara Smucker; Earleywine, Mitch; Wilcox, Rand

    2006-01-01

    Although little evidence supports cannabis-induced amotivational syndrome, sources continue to assert that the drug saps motivation [1], which may guide current prohibitions. Few studies report low motivation in chronic users; another reveals that they have higher subjective wellbeing. To assess differences in motivation and subjective wellbeing, we used a large sample (N = 487) and strict definitions of cannabis use (7 days/week) and abstinence (never). Standard statistical techniques showed no differences. Robust statistical methods controlling for heteroscedasticity, non-normality and extreme values found no differences in motivation but a small difference in subjective wellbeing. Medical users of cannabis reporting health problems tended to account for a significant portion of subjective wellbeing differences, suggesting that illness decreased wellbeing. All p-values were above p = .05. Thus, daily use of cannabis does not impair motivation. Its impact on subjective wellbeing is small and may actually reflect lower wellbeing due to medical symptoms rather than actual consumption of the plant. PMID:16722561

  19. Quick Tips Guide for Small Manufacturing Businesses

    EPA Pesticide Factsheets

    Small manufacturing businesses can use this Quick Tips Guide to be better prepared for future extreme weather events. This guide discusses keeping good records, improving housekeeping procedures, and training employees.

  20. Sampling Mars: Analytical requirements and work to do in advance

    NASA Technical Reports Server (NTRS)

    Koeberl, Christian

    1988-01-01

    Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Amongst the most exciting questions is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know whether the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least in that order of magnitude. This requires very careful sample selection and very precise analytical techniques, which should use minimal sample sizes while optimizing the scientific output. The ability to work with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are complicated and elaborate precisely to avoid sampling errors. The significance of analyzing a milligram- or submilligram-sized sample and relating it to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole rock samples are very useful, but should not exceed the 20 to 50 g range, except in

  1. Evidence for plant-derived xenomiRs based on a large-scale analysis of public small RNA sequencing data from human samples.

    PubMed

    Zhao, Qi; Liu, Yuanning; Zhang, Ning; Hu, Menghan; Zhang, Hao; Joshi, Trupti; Xu, Dong

    2018-01-01

    In recent years, an increasing number of studies have reported the presence of plant miRNAs in human samples, which has resulted in a hypothesis asserting the existence of plant-derived exogenous microRNAs (xenomiRs). However, this hypothesis is not widely accepted in the scientific community due to possible sample contamination and small sample sizes lacking rigorous statistical analysis. This study provides a systematic statistical test that can validate (or invalidate) the plant-derived xenomiR hypothesis by analyzing 388 small RNA sequencing datasets from human samples covering 11 types of body fluids/tissues. A total of 166 types of plant miRNAs were found in at least one human sample, of which 14 plant miRNAs represented more than 80% of the total plant miRNA abundance in human samples. Plant miRNA profiles were found to be tissue-specific across the different human samples. Meanwhile, the plant miRNAs identified from the microbiome had an insignificant abundance compared to those from humans, while plant miRNA profiles in human samples were significantly different from those in plants, suggesting that sample contamination is an unlikely explanation for all the plant miRNAs detected in human samples. This study also provides a set of testable synthetic miRNAs with isotopes that can be detected in situ after being fed to animals.

  2. Thermal neutron macroscopic absorption cross section measurement (theory, experiment and results) for small environmental samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czubek, J.A.; Drozdowicz, K.; Gabanska, B.

    Czubek's method of measurement of the thermal neutron macroscopic absorption cross section of small samples has been developed at the Henryk Niewodniczanski Institute of Nuclear Physics in Krakow, Poland. Theoretical principles of the method have been elaborated in the one-velocity diffusion approach, in which the thermal neutron parameters used have been averaged over a modified Maxwellian. In consecutive measurements the investigated sample is enveloped in shells of a known moderator of varying thickness and irradiated with a pulsed beam of fast neutrons. The neutrons are slowed down in the system and the die-away rate of escaping thermal neutrons is measured. The decay constant vs. thickness of the moderator creates the experimental curve. The absorption cross section of the unknown sample is found from the intersection of this curve with the theoretical one. The theoretical curve is calculated for the case when the dynamic material buckling of the inner sample is zero. The method does not use any reference absorption standard and is independent of the transport cross section of the measured sample. The volume of the sample, in the form of fluid or crushed material, is about 170 cm³. The standard deviation of the measured mass absorption cross section is in the range of 4-20% of the measured value for rock samples and of the order of 0.5% for brines.
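
    A minimal numerical sketch (Python) of the final step of the method: locating where the experimental decay-constant-versus-thickness curve intersects the theoretical zero-buckling curve. Both curves below are made-up illustrative values, not measured data.

        import numpy as np
        from scipy.interpolate import interp1d
        from scipy.optimize import brentq

        # Made-up die-away decay constants (1/s) vs. moderator shell thickness (cm).
        thickness = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
        lam_experiment = np.array([5200.0, 4600.0, 4150.0, 3820.0, 3580.0])
        lam_theory_b0 = np.array([4900.0, 4500.0, 4200.0, 3950.0, 3750.0])  # zero-buckling curve

        # The unknown absorption cross section follows from the thickness at
        # which the experimental curve crosses the theoretical one.
        gap = interp1d(thickness, lam_experiment - lam_theory_b0, kind="cubic")
        print(f"curves intersect at {brentq(gap, 1.0, 5.0):.2f} cm of moderator")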

  3. Effect of Operating and Sampling Conditions on the Exhaust Gas Composition of Small-Scale Power Generators

    PubMed Central

    Smits, Marianne; Vanpachtenbeke, Floris; Horemans, Benjamin; De Wael, Karolien; Hauchecorne, Birger; Van Langenhove, Herman; Demeestere, Kristof; Lenaerts, Silvia

    2012-01-01

    Small stationary diesel engines, such as those in generator sets, have limited emission control measures and are therefore responsible for 44% of the particulate matter (PM) emissions in the United States. The diesel exhaust composition depends on the operating conditions of the combustion engine. Furthermore, the measurements are influenced by the sampling method used. This study examines the effect of engine load and exhaust gas dilution on the exhaust composition of small-scale power generators. These generators are used under different operating conditions than road-transport vehicles, resulting in different emission characteristics. Experimental data were obtained for gaseous volatile organic compounds (VOC) and PM mass concentration, elemental composition and nitrate content. The exhaust composition depends on load condition because of its effect on fuel consumption, engine wear and combustion temperature. Higher load conditions result in lower PM concentrations and sharper-edged particles with larger aerodynamic diameters. A positive correlation with load condition was found for K, Ca, Sr, Mn, Cu, Zn and Pb adsorbed on PM, elements that originate from lubricating oil or engine corrosion. The nitrate concentration decreases at higher load conditions, due to enhanced nitrate dissociation to gaseous NO at higher engine temperatures. Dilution, on the other hand, decreases PM and nitrate concentrations and increases gaseous VOC and adsorbed metal content. In conclusion, these data show that operating and sampling conditions have a major effect on the exhaust gas composition of small-scale diesel generators. Therefore, care must be taken when designing new experiments or comparing literature results. PMID:22442670

  4. High-resolution stochastic generation of extreme rainfall intensity for urban drainage modelling applications

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2016-04-01

    Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10² m and minute scales in a fast and computationally efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in the flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.

  5. Radiative heat transfer in the extreme near field.

    PubMed

    Kim, Kyeongtae; Song, Bai; Fernández-Hurtado, Víctor; Lee, Woochul; Jeong, Wonho; Cui, Longji; Thompson, Dakotah; Feist, Johannes; Reid, M T Homer; García-Vidal, Francisco J; Cuevas, Juan Carlos; Meyhofer, Edgar; Reddy, Pramod

    2015-12-17

    Radiative transfer of energy at the nanometre length scale is of great importance to a variety of technologies including heat-assisted magnetic recording, near-field thermophotovoltaics and lithography. Although experimental advances have enabled elucidation of near-field radiative heat transfer in gaps as small as 20-30 nanometres (refs 4-6), quantitative analysis in the extreme near field (less than 10 nanometres) has been greatly limited by experimental challenges. Moreover, the results of pioneering measurements differed from theoretical predictions by orders of magnitude. Here we use custom-fabricated scanning probes with embedded thermocouples, in conjunction with new microdevices capable of periodic temperature modulation, to measure radiative heat transfer down to gaps as small as two nanometres. For our experiments we deposited suitably chosen metal or dielectric layers on the scanning probes and microdevices, enabling direct study of extreme near-field radiation between silica-silica, silicon nitride-silicon nitride and gold-gold surfaces to reveal marked, gap-size-dependent enhancements of radiative heat transfer. Furthermore, our state-of-the-art calculations of radiative heat transfer, performed within the theoretical framework of fluctuational electrodynamics, are in excellent agreement with our experimental results, providing unambiguous evidence that confirms the validity of this theory for modelling radiative heat transfer in gaps as small as a few nanometres. This work lays the foundations required for the rational design of novel technologies that leverage nanoscale radiative heat transfer.

  6. Method to determine 226Ra in small sediment samples by ultralow background liquid scintillation.

    PubMed

    Sanchez-Cabeza, Joan-Albert; Kwong, Laval Liong Wee; Betti, Maria

    2010-08-15

    ²¹⁰Pb dating of sediment cores is a widely used tool to reconstruct ecosystem evolution and historical pollution during the last century. Although ²²⁶Ra can be determined by gamma spectrometry, this method shows severe limitations which are, among others, sample size requirements and counting times. In this work, we propose a new strategy based on the analysis of ²¹⁰Pb through ²¹⁰Po in equilibrium by alpha spectrometry, followed by the determination of ²²⁶Ra (base or supported ²¹⁰Pb) without any further chemical purification by liquid scintillation and with a higher sample throughput. Although gamma spectrometry might still be required to determine ¹³⁷Cs as an independent tracer, the effort can then be focused only on those sections dated around 1963, when maximum activities are expected. In this work, we optimized the counting conditions, calibrated the system for changing quenching, and described the new method to determine ²²⁶Ra in small sediment samples, after ²¹⁰Po determination, allowing a more precise determination of excess ²¹⁰Pb (²¹⁰Pbₑₓ). The method was validated with reference materials IAEA-384, IAEA-385, and IAEA-313.
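
    A minimal sketch (Python, hypothetical activities) of the quantity this method improves, excess (unsupported) ²¹⁰Pb, and of the simplest way an age then follows from radioactive decay, here under a constant-initial-concentration assumption rather than the full dating models used in practice.

        import numpy as np

        LAMBDA_PB210 = np.log(2) / 22.3  # 210Pb decay constant (1/yr), half-life 22.3 yr

        # Hypothetical activities (Bq/kg) down a short core: total 210Pb via
        # 210Po by alpha spectrometry, supported 210Pb from 226Ra by LSC.
        total_pb210 = np.array([120.0, 85.0, 60.0, 42.0, 30.0])
        ra226 = np.array([25.0, 24.0, 26.0, 25.0, 24.0])
        pb210_excess = total_pb210 - ra226

        # Simplest (constant initial concentration) age model: t = ln(A0 / Az) / lambda.
        ages = np.log(pb210_excess[0] / pb210_excess) / LAMBDA_PB210
        print(np.round(ages, 1))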

  7. Small-Sample Adjustments for Tests of Moderators and Model Fit in Robust Variance Estimation in Meta-Regression

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Pustejovsky, James E.

    2015-01-01

    Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…

  8. Genetic and life-history consequences of extreme climate events

    PubMed Central

    Mangel, Marc; Jesensek, Dusan; Garza, John Carlos; Crivelli, Alain J.

    2017-01-01

    Climate change is predicted to increase the frequency and intensity of extreme climate events. Tests on empirical data of theory-based predictions on the consequences of extreme climate events are thus necessary to understand the adaptive potential of species and the overarching risks associated with all aspects of climate change. We tested predictions on the genetic and life-history consequences of extreme climate events in two populations of marble trout Salmo marmoratus that have experienced severe demographic bottlenecks due to flash floods. We combined long-term field and genotyping data with pedigree reconstruction in a theory-based framework. Our results show that after flash floods, reproduction occurred at a younger age in one population. In both populations, we found the highest reproductive variance in the first cohort born after the floods due to a combination of fewer parents and higher early survival of offspring. A small number of parents allowed for demographic recovery after the floods, but the genetic bottleneck further reduced genetic diversity in both populations. Our results also elucidate some of the mechanisms responsible for a greater prevalence of faster life histories after the extreme event. PMID:28148745

  9. Genetic and life-history consequences of extreme climate events.

    PubMed

    Vincenzi, Simone; Mangel, Marc; Jesensek, Dusan; Garza, John Carlos; Crivelli, Alain J

    2017-02-08

    Climate change is predicted to increase the frequency and intensity of extreme climate events. Tests on empirical data of theory-based predictions on the consequences of extreme climate events are thus necessary to understand the adaptive potential of species and the overarching risks associated with all aspects of climate change. We tested predictions on the genetic and life-history consequences of extreme climate events in two populations of marble trout Salmo marmoratus that have experienced severe demographic bottlenecks due to flash floods. We combined long-term field and genotyping data with pedigree reconstruction in a theory-based framework. Our results show that after flash floods, reproduction occurred at a younger age in one population. In both populations, we found the highest reproductive variance in the first cohort born after the floods due to a combination of fewer parents and higher early survival of offspring. A small number of parents allowed for demographic recovery after the floods, but the genetic bottleneck further reduced genetic diversity in both populations. Our results also elucidate some of the mechanisms responsible for a greater prevalence of faster life histories after the extreme event. © 2017 The Author(s).

  10. 33 CFR 80.105 - Calais, ME to Cape Small, ME.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Calais, ME to Cape Small, ME. 80... INTERNATIONAL NAVIGATION RULES COLREGS DEMARCATION LINES Atlantic Coast § 80.105 Calais, ME to Cape Small, ME... International Bridge at Calais, ME to the southwesternmost extremity of Bald Head at Cape Small. ...

  11. 33 CFR 80.105 - Calais, ME to Cape Small, ME.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 1 2012-07-01 2012-07-01 false Calais, ME to Cape Small, ME. 80... INTERNATIONAL NAVIGATION RULES COLREGS DEMARCATION LINES Atlantic Coast § 80.105 Calais, ME to Cape Small, ME... International Bridge at Calais, ME to the southwesternmost extremity of Bald Head at Cape Small. ...

  12. 33 CFR 80.105 - Calais, ME to Cape Small, ME.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 1 2013-07-01 2013-07-01 false Calais, ME to Cape Small, ME. 80... INTERNATIONAL NAVIGATION RULES COLREGS DEMARCATION LINES Atlantic Coast § 80.105 Calais, ME to Cape Small, ME... International Bridge at Calais, ME to the southwesternmost extremity of Bald Head at Cape Small. ...

  13. 33 CFR 80.105 - Calais, ME to Cape Small, ME.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Calais, ME to Cape Small, ME. 80... INTERNATIONAL NAVIGATION RULES COLREGS DEMARCATION LINES Atlantic Coast § 80.105 Calais, ME to Cape Small, ME... International Bridge at Calais, ME to the southwesternmost extremity of Bald Head at Cape Small. ...

  14. How extreme is extreme hourly precipitation?

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael; Dialynas, Yannis G.; Pappas, Christoforos

    2016-04-01

    The importance of accurate representation of precipitation at fine time scales (e.g., hourly), directly associated with flash flood events, is crucial in hydrological design and prediction. The upper part of a probability distribution, known as the distribution tail, determines the behavior of extreme events. In general, and loosely speaking, tails can be categorized into two families: the subexponential and the hyperexponential family, with the first generating more intense and more frequent extremes compared to the latter. In past studies, the focus has been mainly on daily precipitation, with the Gamma distribution being the most popular model. Here, we investigate the behavior of tails of hourly precipitation by comparing the upper part of empirical distributions of thousands of records with three general types of tails corresponding to the Pareto, Lognormal, and Weibull distributions. Specifically, we use thousands of hourly rainfall records from all over the USA. The analysis indicates that heavier-tailed distributions describe the observed hourly rainfall extremes better than lighter-tailed ones. Traditional representations of the marginal distribution of hourly rainfall may significantly deviate from the observed behavior of extremes, with direct implications for hydroclimatic modelling and engineering design.
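
    A minimal sketch of the kind of tail comparison described (Python): fit Pareto-, Lognormal- and Weibull-type candidates to excesses over a high threshold and compare the fits. Synthetic intensities stand in for station records, and log-likelihood is used as a crude comparison score, not the paper's exact criterion.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        hourly = 2.0 * rng.weibull(0.7, size=20000)   # synthetic wet-hour intensities (mm/h)
        threshold = np.quantile(hourly, 0.95)
        excesses = hourly[hourly > threshold] - threshold

        # Fit one heavy-tailed and two lighter-tailed candidates to the excesses;
        # at equal parameter counts, a larger log-likelihood means a better fit.
        for name, dist in [("Pareto (GPD)", stats.genpareto),
                           ("Lognormal", stats.lognorm),
                           ("Weibull", stats.weibull_min)]:
            params = dist.fit(excesses, floc=0)
            print(f"{name:>13s}: log-likelihood = {dist.logpdf(excesses, *params).sum():.1f}")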

  15. The evolution of extreme precipitations in high resolution scenarios over France

    NASA Astrophysics Data System (ADS)

    Colin, J.; Déqué, M.; Somot, S.

    2009-09-01

    Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issues raised in the regional climate research field. This study shows the results of a high-resolution (12 km) scenario run over France with the limited area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitation. The runs were conducted in the framework of the ANR-SCAMPEI national project on high-resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of a LAM: the size of the domain on which the model is run. In particular, we address the issue of whether a relatively small domain allows the model to create its own small-scale processes. Indeed, high-resolution scenarios cannot be run on large domains because of the computation time. Therefore one needs to answer this preliminary question before producing and analyzing such scenarios. To do so, we worked in the framework of a « big brother » experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the « big brother » reference of our experiment. It has been validated in comparison with the CRU climatology. Then we filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse-resolution lateral boundary conditions (LBC). We have carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model: * FRA50, run over a small domain (2000 x 2000 km, centered over France), * EUR50, run over a larger domain (5000 x 5000 km, centered over France as well), * EUR50-SN, run over the large domain (using spectral nudging). Considering the fact that ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics

  16. Extreme river flow dependence in Northern Scotland

    NASA Astrophysics Data System (ADS)

    Villoria, M. Franco; Scott, M.; Hoey, T.; Fischbacher-Smith, D.

    2012-04-01

    Various methods for the spatial analysis of hydrologic data have been developed recently. Here we present results using the conditional probability approach proposed by Keef et al. [Appl. Stat. (2009): 58, 601-18] to investigate spatial interdependence in extreme river flows in Scotland. This approach does not require the specification of a correlation function, being most suitable for relatively small geographical areas. The work is motivated by the Flood Risk Management Act (Scotland, 2009), which requires maps of flood risk that take account of spatial dependence in extreme river flow. The method is based on two conditional measures of spatial flood risk: firstly, the conditional probability P_C(p) that a set of sites Y = (Y_1, ..., Y_d) within a region C of interest exceed a flow threshold Q_p at time t (or any lag of t), given that the specified conditioning site X > Q_p; and, secondly, the expected number of sites within C that will exceed a flow Q_p on average (given that X > Q_p). The conditional probabilities are estimated using the conditional distribution of Y | X = x (for large x), which can be modeled using a semi-parametric approach (Heffernan and Tawn [Roy. Statist. Soc. Ser. B (2004): 66, 497-546]). Once the model is fitted, pseudo-samples can be generated to estimate functionals of the joint tails of the distribution of (Y, X). Conditional return level plots were directly compared to traditional return level plots, thus improving our understanding of the dependence structure of extreme river flow events. Confidence intervals were calculated using block bootstrapping methods (100 replicates). We report results from applying this approach to a set of four rivers (Dulnain, Lossie, Ewe and Ness) in Northern Scotland. These sites were chosen based on data quality, spatial location and catchment characteristics. The river Ness, being the largest (catchment size 1839.1 km²), was chosen as the conditioning river. Both the Ewe (441.1 km²) and Ness catchments have
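
    A minimal sketch (Python, synthetic correlated series) of the empirical target quantity P_C(p): the chance that a dependent site is extreme given that the conditioning site is extreme. The Heffernan-Tawn model used in the paper goes further by fitting a semi-parametric conditional model and simulating pseudo-samples.

        import numpy as np

        def conditional_exceedance(y, x, p=0.99):
            """Empirical P(Y > Q_p(Y) | X > Q_p(X)) for paired daily flow series."""
            qx, qy = np.quantile(x, p), np.quantile(y, p)
            extreme_x = x > qx
            return np.mean(y[extreme_x] > qy)

        rng = np.random.default_rng(5)
        shared = rng.gumbel(size=10000)                 # common weather signal (synthetic)
        ness = shared + 0.3 * rng.gumbel(size=10000)    # conditioning-site stand-in
        ewe = shared + 0.8 * rng.gumbel(size=10000)     # dependent-site stand-in
        print(f"P(dependent site extreme | Ness extreme) = {conditional_exceedance(ewe, ness):.2f}")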

  17. Ensemble reconstruction of spatio-temporal extreme low-flow events in France since 1871

    NASA Astrophysics Data System (ADS)

    Caillouet, Laurie; Vidal, Jean-Philippe; Sauquet, Eric; Devers, Alexandre; Graff, Benjamin

    2017-06-01

    The length of streamflow observations is generally limited to the last 50 years even in data-rich countries like France. It therefore offers too small a sample of extreme low-flow events to properly explore the long-term evolution of their characteristics and associated impacts. To overcome this limit, this work first presents a daily 140-year ensemble reconstructed streamflow dataset for a reference network of near-natural catchments in France. This dataset, called SCOPE Hydro (Spatially COherent Probabilistic Extended Hydrological dataset), is based on (1) a probabilistic precipitation, temperature, and reference evapotranspiration downscaling of the Twentieth Century Reanalysis over France, called SCOPE Climate, and (2) continuous hydrological modelling using SCOPE Climate as forcings over the whole period. This work then introduces tools for defining spatio-temporal extreme low-flow events. Extreme low-flow events are first locally defined through the sequent peak algorithm using a novel combination of a fixed threshold and a daily variable threshold. A dedicated spatial matching procedure is then established to identify spatio-temporal events across France. This procedure is furthermore adapted to the SCOPE Hydro 25-member ensemble to characterize in a probabilistic way unrecorded historical events at the national scale. Extreme low-flow events are described and compared in a spatially and temporally homogeneous way over 140 years on a large set of catchments. Results highlight well-known recent events like 1976 or 1989-1990, but also older and relatively forgotten ones like the 1878 and 1893 events. These results contribute to improving our knowledge of historical events and provide a selection of benchmark events for climate change adaptation purposes. Moreover, this study allows for further detailed analyses of the effect of climate variability and anthropogenic climate change on low-flow hydrology at the scale of France.
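
    A minimal sketch (Python) of a sequent-peak-style low-flow definition as described above: daily deficits below a threshold accumulate, and an event lasts while the accumulated deficit stays positive. The flow series and the rule combining the fixed and daily variable thresholds are hypothetical stand-ins for the paper's exact definitions.

        import numpy as np

        def low_flow_events(flow, threshold):
            """Sequent peak algorithm: cumulate daily deficits below a (possibly
            daily varying) threshold; an event lasts while the deficit is positive."""
            deficit = np.zeros_like(flow)
            for t in range(1, flow.size):
                deficit[t] = max(0.0, deficit[t - 1] + threshold[t] - flow[t])
            return deficit > 0

        rng = np.random.default_rng(6)
        days = np.arange(730)
        flow = 10 + 3 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)
        # Hypothetical combination rule for the fixed and daily variable thresholds:
        fixed = np.quantile(flow, 0.2)
        variable = fixed + 0.5 * np.sin(2 * np.pi * days / 365)
        in_event = low_flow_events(flow, np.minimum(fixed, variable))
        print(in_event.sum(), "days inside low-flow events")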

  18. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, we could not rely on the asymptotic distributions of the tests in hypothesis testing. Instead, we accounted for data sparseness by (i) modifying the confidence intervals of the odds ratios; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
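
    Point (iii), the empirical P value, is the easiest to sketch. The Python below uses a label-permutation scheme on individual-level genotypes to approximate the null distribution of a power divergence statistic; this is a deliberate simplification, since the pooled-DNA design of the paper calls for a more elaborate resampling of pool allele frequencies.

```python
import numpy as np
from scipy.stats import power_divergence

rng = np.random.default_rng(1)

def empirical_p(alleles, groups, lam=0.0, n_perm=10_000):
    """Empirical P value for marker-trait association when the
    contingency table is too sparse for asymptotic chi-square theory.
    Permutes group labels to simulate H0 and recomputes a power
    divergence statistic (lam=0 gives the likelihood-ratio G-test,
    lam=1 the Pearson chi-square)."""
    def stat(g):
        # allele counts within each body-size group
        table = np.array([[np.sum((alleles == a) & (g == k))
                           for a in np.unique(alleles)]
                          for k in np.unique(g)])
        expected = table.sum(1, keepdims=True) * table.sum(0) / table.sum()
        return power_divergence(table.ravel(), expected.ravel(),
                                lambda_=lam).statistic
    t_obs = stat(groups)
    exceed = sum(stat(rng.permutation(groups)) >= t_obs
                 for _ in range(n_perm))
    return (1 + exceed) / (n_perm + 1)   # add-one avoids an exact zero

alleles = rng.integers(0, 3, size=130)   # 3 alleles at one marker
groups = np.repeat([0, 1], [65, 65])     # two body-size types
print(empirical_p(alleles, groups, n_perm=2000))
```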

  19. Kinematic and neuromuscular relationships between lower extremity clinical movement assessments.

    PubMed

    Mauntel, Timothy C; Cram, Tyler R; Frank, Barnett S; Begalle, Rebecca L; Norcross, Marc F; Blackburn, J Troy; Padua, Darin A

    2018-06-01

    Lower extremity injuries have immediate and long-term consequences. Lower extremity movement assessments can assist with identifying individuals at greater injury risk and guide injury prevention interventions. Movement assessments identify similar movement characteristics and evidence suggests large magnitude kinematic relationships exist between movement patterns observed across assessments; however, the magnitude of the relationships for electromyographic (EMG) measures across movement assessments remains largely unknown. This study examined relationships between lower extremity kinematic and EMG measures during jump landings and single leg squats. Lower extremity three-dimensional kinematic and EMG data were sampled from healthy adults (males = 20, females = 20) during the movement assessments. Pearson correlations examined the relationships of the kinematic and EMG measures and paired samples t-tests compared mean kinematic and EMG measures between the assessments. Overall, significant moderate correlations were observed for lower extremity kinematic (r_avg = 0.41, r_range = 0.10-0.61) and EMG (r_avg = 0.47, r_range = 0.32-0.80) measures across assessments. Kinematic and EMG measures were greater during the jump landings. Jump landings and single leg squats place different demands on the body and necessitate different kinematic and EMG patterns, such that these measures are not highly correlated between assessments. Clinicians should, therefore, use multiple assessments to identify aberrant movement and neuromuscular control patterns so that comprehensive interventions can be implemented.

  20. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments.

    PubMed

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-11-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments.

  1. Patient Safety Outcomes in Small Urban and Small Rural Hospitals

    ERIC Educational Resources Information Center

    Vartak, Smruti; Ward, Marcia M.; Vaughn, Thomas E.

    2010-01-01

    Purpose: To assess patient safety outcomes in small urban and small rural hospitals and to examine the relationship of hospital and patient factors to patient safety outcomes. Methods: The Nationwide Inpatient Sample and American Hospital Association annual survey data were used for analyses. To increase comparability, the study sample was…

  2. Thermodynamics of extremal rotating thin shells in an extremal BTZ spacetime and the extremal black hole entropy

    NASA Astrophysics Data System (ADS)

    Lemos, José P. S.; Minamitsuji, Masato; Zaslavskii, Oleg B.

    2017-02-01

    In a (2+1)-dimensional spacetime with a negative cosmological constant, the thermodynamics and the entropy of an extremal rotating thin shell, i.e., an extremal rotating ring, are investigated. The outer and inner regions with respect to the shell are taken to be the Bañados-Teitelboim-Zanelli (BTZ) spacetime and the vacuum ground state anti-de Sitter spacetime, respectively. By applying the first law of thermodynamics to the extremal thin shell, one shows that the entropy of the shell is an arbitrary well-behaved function of the gravitational area A+ alone, S = S(A+). When the thin shell approaches its own gravitational radius r+ and turns into an extremal rotating BTZ black hole, it is found that the entropy of the spacetime remains such a function of A+, both when the local temperature of the shell at the gravitational radius is zero and nonzero. It is thus vindicated by this analysis that extremal black holes, here extremal BTZ black holes, have different properties from the corresponding nonextremal black holes, which have a definite entropy, the Bekenstein-Hawking entropy S(A+) = A+/(4G), where G is the gravitational constant. It is argued that for extremal black holes, in particular for extremal BTZ black holes, one should set 0 ≤ S(A+) ≤ A+/(4G); i.e., the extremal black hole entropy has values in between zero and the maximum Bekenstein-Hawking entropy A+/(4G). Thus, rather than having just two entropies for extremal black holes, as previous results have debated, namely, 0 and A+/(4G), it is shown here that extremal black holes, in particular extremal BTZ black holes, may have a continuous range of entropies, limited by precisely those two entropies. Surely, the entropy that a particular extremal black hole picks must depend on past processes, notably on how it was formed. A remarkable relation between the third law of thermodynamics and the impossibility for a massive body to reach the velocity of light is also found. In addition, in the procedure, it

  3. Detailed Abundances of Stars with Small Planets Discovered by Kepler. I. The First Sample

    NASA Astrophysics Data System (ADS)

    Schuler, Simon C.; Vaz, Zachary A.; Katime Santrich, Orlando J.; Cunha, Katia; Smith, Verne V.; King, Jeremy R.; Teske, Johanna K.; Ghezzi, Luan; Howell, Steve B.; Isaacson, Howard

    2015-12-01

    We present newly derived stellar parameters and the detailed abundances of 19 elements of seven stars with small planets discovered by NASA's Kepler Mission. Each star, save one, has at least one planet with a radius ≤1.6 R⊕, suggesting a primarily rocky composition. The stellar parameters and abundances are derived from high signal-to-noise ratio, high-resolution echelle spectroscopy obtained with the 10 m Keck I telescope and High Resolution Echelle Spectrometer using standard spectroscopic techniques. The metallicities of the seven stars range from -0.32 to +0.13 dex, with an average metallicity that is subsolar, supporting previous suggestions that, unlike Jupiter-type giant planets, small planets do not form preferentially around metal-rich stars. The abundances of elements other than iron are in line with a population of Galactic disk stars, and despite our modest sample size, we find hints that the compositions of stars with small planets are similar to stars without known planets and with Neptune-size planets, but not to those of stars with giant planets. This suggests that the formation of small planets does not require exceptional host-star compositions and that small planets may be ubiquitous in the Galaxy. We compare our derived abundances (which have typical uncertainties of ≲0.04 dex) to the condensation temperature of the elements; a correlation between the two has been suggested as a possible signature of rocky planet formation. None of the stars demonstrate the putative rocky planet signature, despite at least three of the stars having rocky planets estimated to contain enough refractory material to produce the signature, if real. More detailed abundance analyses of stars known to host small planets are needed to verify our results and place ever more stringent constraints on planet formation models. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California

  4. Calibration techniques and results in the soft X-ray and extreme ultraviolet for components of the Extreme Ultraviolet Explorer Satellite

    NASA Technical Reports Server (NTRS)

    Malina, Roger F.; Jelinsky, Patrick; Bowyer, Stuart

    1986-01-01

    The calibration facilities and techniques for the Extreme Ultraviolet Explorer (EUVE) from 44 to 2500 A are described. Key elements include newly designed radiation sources and a collimated monochromatic EUV beam. Sample results for the calibration of the EUVE filters, detectors, gratings, collimators, and optics are summarized.

  5. Testing for scale-invariance in extreme events, with application to earthquake occurrence

    NASA Astrophysics Data System (ADS)

    Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.

    2009-04-01

    We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic', do they ‘know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a benchmark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and, more generally, the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converges to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic'-looking extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball' fits unconsciously (but wrongly in
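
    For frequency-size data of the Gutenberg-Richter type, a standard alternative to the biased least-squares fit is Aki's maximum-likelihood estimator of the b-value. The sketch below is a generic illustration of that estimator, not code from the paper.

```python
import numpy as np

def b_value_mle(magnitudes, m_c, delta_m=0.1):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value, avoiding the bias of least-squares fits to binned
    frequency-magnitude counts. m_c is the completeness magnitude;
    delta_m applies Utsu's correction for magnitude binning. The
    standard error follows Aki: b / sqrt(N)."""
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - delta_m / 2.0))
    return b, b / np.sqrt(len(m))

# Synthetic catalogue: magnitudes above m_c = 4 with true b = 1.5
rng = np.random.default_rng(3)
mags = 4.0 + rng.exponential(1.0 / (1.5 * np.log(10)), size=2000)
print(b_value_mle(mags, m_c=4.0, delta_m=0.0))   # close to b = 1.5
```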

  6. Structural Extremes in a Cretaceous Dinosaur

    PubMed Central

    Sereno, Paul C.; Wilson, Jeffrey A.; Witmer, Lawrence M.; Whitlock, John A.; Maga, Abdoulaye; Ide, Oumarou; Rowe, Timothy A.

    2007-01-01

    Fossils of the Early Cretaceous dinosaur, Nigersaurus taqueti, document for the first time the cranial anatomy of a rebbachisaurid sauropod. Its extreme adaptations for herbivory at ground-level challenge current hypotheses regarding feeding function and feeding strategy among diplodocoids, the larger clade of sauropods that includes Nigersaurus. We used high resolution computed tomography, stereolithography, and standard molding and casting techniques to reassemble the extremely fragile skull. Computed tomography also allowed us to render the first endocast for a sauropod preserving portions of the olfactory bulbs, cerebrum and inner ear, the latter permitting us to establish habitual head posture. To elucidate evidence of tooth wear and tooth replacement rate, we used photographic-casting techniques and crown thin sections, respectively. To reconstruct its 9-meter postcranial skeleton, we combined and size-adjusted multiple partial skeletons. Finally, we used maximum parsimony algorithms on character data to obtain the best estimate of phylogenetic relationships among diplodocoid sauropods. Nigersaurus taqueti shows extreme adaptations for a dinosaurian herbivore including a skull of extremely light construction, tooth batteries located at the distal end of the jaws, tooth replacement as fast as one per month, an expanded muzzle that faces directly toward the ground, and hollow presacral vertebral centra with more air sac space than bone by volume. A cranial endocast provides the first reasonably complete view of a sauropod brain including its small olfactory bulbs and cerebrum. Skeletal and dental evidence suggests that Nigersaurus was a ground-level herbivore that gathered and sliced relatively soft vegetation, the culmination of a low-browsing feeding strategy first established among diplodocoids during the Jurassic. PMID:18030355

  7. Compression Strength of Sulfur Concrete Subjected to Extreme Cold

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.

    2008-01-01

    Sulfur concrete cubes were cycled between liquid nitrogen and room temperature to simulate extreme exposure conditions. Subsequent compression testing showed the strength of the cycled samples to be roughly one-fifth that of the non-cycled ones. Fracture surface examination showed de-bonding of the sulfur from the aggregate material in the cycled samples but not in the non-cycled ones. The large discrepancy found between the samples is attributed to the relative thermal properties of the materials constituting the concrete.

  8. Very Low-Mass Stars with Extremely Low Metallicity in the Milky Way's Halo

    NASA Astrophysics Data System (ADS)

    Aoki, Wako; Beers, Timothy C.; Suda, Takuma; Honda, Satoshi; Lee, Young Sun

    2016-08-01

    Large surveys and follow-up spectroscopic studies in the past few decades have been providing chemical abundance data for a growing number of very metal-poor ([Fe/H] <-2) stars. Most of them are red giants or main-sequence turn-off stars having masses near 0.8 solar masses. Lower mass stars with extremely low metallicity ([Fe/H] <-3) are yet to be explored. Our high-resolution spectroscopic study for very metal-poor stars found with SDSS has identified four cool main-sequence stars with [Fe/H] <-2.5 among 137 objects (Aoki et al. 2013). The effective temperatures of these stars are 4500-5000 K, corresponding to a mass of around 0.5 solar masses. Our standard analysis of the high-resolution spectra based on 1D-LTE model atmospheres has obtained self-consistent chemical abundances for these objects, assuming small values of micro-turbulent velocities compared with giants and turn-off stars. The low temperature of the atmospheres of these objects enables us to measure their detailed chemical abundances. Interestingly, two of the four stars have extreme chemical-abundance patterns: one has the largest excesses of heavy neutron-capture elements associated with the r-process abundance pattern known to date (Aoki et al. 2010), and the other exhibits low abundances of the α-elements and odd-Z elements, suggested to be signatures of the yields of very massive stars (> 100 solar masses; Aoki et al. 2014). Although the sample size is still small, these results indicate the potential of very low-mass stars as probes to study the early stages of the Milky Way's halo formation.

  9. QNB: differential RNA methylation analysis for count-based small-sample sequencing data with a quad-negative binomial model.

    PubMed

    Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia

    2017-08-31

    As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics at the epitranscriptomic layer. However, the sample size of an RNA methylation experiment is usually very small due to its cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering the IP samples only with two negative binomial distributions, QNB is based on four independent negative binomial distributions with their variances and means linked by local regressions, so that the input control samples are also properly taken into account. In addition, unlike the DRME approach, which relies on the input control samples alone for estimating the background, QNB uses a more robust estimator for gene expression by combining information from both input and IP samples, which can largely improve the testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms. The QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.

  10. Evaluating morphometric body mass prediction equations with a juvenile human test sample: accuracy and applicability to small-bodied hominins.

    PubMed

    Walker, Christopher S; Yapuncich, Gabriel S; Sridhar, Shilpa; Cameron, Noël; Churchill, Steven E

    2018-02-01

    Body mass is an ecologically and biomechanically important variable in the study of hominin biology. Regression equations derived from recent human samples allow for the reasonable prediction of body mass of later, more human-like, and generally larger hominins from hip joint dimensions, but potential differences in hip biomechanics across hominin taxa render their use questionable with some earlier taxa (i.e., Australopithecus spp.). Morphometric prediction equations using stature and bi-iliac breadth avoid this problem, but their applicability to early hominins, some of which differ in both size and proportions from modern adult humans, has not been demonstrated. Here we use mean stature, bi-iliac breadth, and body mass from a global sample of human juveniles ranging in age from 6 to 12 years (n = 530 age- and sex-specific group annual means from 33 countries/regions) to evaluate the accuracy of several published morphometric prediction equations when applied to small humans. Though the body proportions of modern human juveniles likely differ from those of small-bodied early hominins, human juveniles (like fossil hominins) often differ in size and proportions from adult human reference samples and, accordingly, serve as a useful model for assessing the robustness of morphometric prediction equations. Morphometric equations based on adults systematically underpredict body mass in the youngest age groups and moderately overpredict body mass in the older groups, which fall in the body size range of adult Australopithecus (∼26-46 kg). Differences in body proportions, notably the ratio of lower limb length to stature, influence predictive accuracy. Ontogenetic changes in these body proportions likely influence the shift in prediction error (from under- to overprediction). However, because morphometric equations are reasonably accurate when applied to this juvenile test sample, we argue these equations may be used to predict body mass in small-bodied hominins

  11. Personality disorders as maladaptive, extreme variants of normal personality: borderline personality disorder and neuroticism in a substance using sample.

    PubMed

    Samuel, Douglas B; Carroll, Kathleen M; Rounsaville, Bruce J; Ball, Samuel A

    2013-10-01

    Although the current diagnostic manual conceptualizes personality disorders (PDs) as categorical entities, an alternative perspective is that PDs represent maladaptive extreme versions of the same traits that describe normal personality. Existing evidence indicates that normal personality traits, such as those assessed by the five-factor model (FFM), share a common structure and obtain reasonably predictable correlations with the PDs. However, very little research has investigated whether PDs are more extreme than normal personality traits. Utilizing item-response theory analyses, the authors of the current study extend previous research to demonstrate that the diagnostic criteria for borderline personality disorder and FFM neuroticism could be fitted along a single latent dimension. Furthermore, the authors' findings indicate that the borderline criteria assessed the shared latent trait at a level that was more extreme (d = 1.11) than FFM neuroticism. This finding provides further evidence for a dimensional understanding of personality pathology and suggests that a trait model in DSM-5 should span normal and abnormal personality functioning, but focus on the extremes of these common traits.

  12. Beyond simple small-angle X-ray scattering: developments in online complementary techniques and sample environments

    PubMed Central

    Bras, Wim; Koizumi, Satoshi; Terrill, Nicholas J

    2014-01-01

    Small- and wide-angle X-ray scattering (SAXS, WAXS) are standard tools in materials research. The simultaneous measurement of SAXS and WAXS data in time-resolved studies has gained popularity due to the complementary information obtained. Furthermore, the combination of these data with non X-ray based techniques, via either simultaneous or independent measurements, has advanced understanding of the driving forces that lead to the structures and morphologies of materials, which in turn give rise to their properties. The simultaneous measurement of different data regimes and types, using either X-rays or neutrons, and the desire to control parameters that initiate and control structural changes have led to greater demands on sample environments. Examples of developments in technique combinations and sample environment design are discussed, together with a brief speculation about promising future developments. PMID:25485128

  13. An influence of extremal edges on boundary extension.

    PubMed

    Hale, Ralph G; Brown, James M; McDunn, Benjamin A; Siddiqui, Aisha P

    2015-08-01

    Studies have shown that people consistently remember seeing more of a studied scene than was physically present (e.g., Intraub & Richardson Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 179-187, 1989). This scene memory error, known as boundary extension, has been suggested to occur due to an observer's failure to differentiate between the contributing sources of information, including the sensory input, amodal continuation beyond the view boundaries, and contextual associations with the main objects and depicted scene locations (Intraub, 2010). Here, "scenes" made of abstract shapes on random-dot backgrounds, previously shown to elicit boundary extension (McDunn, Siddiqui, & Brown Psychonomic Bulletin & Review, 21, 370-375, 2014), were compared with versions made with extremal edges (Palmer & Ghose Psychological Science, 19, 77-84, 2008) added to their borders, in order to examine how boundary extension is influenced when amodal continuation at the borders' view boundaries is manipulated in this way. Extremal edges were expected to reduce boundary extension as compared to the same scenes without them, because extremal edge boundaries explicitly indicate an image's end (i.e., they do not continue past the view boundary). A small and a large difference (16 % and 40 %) between the close and wide-angle views shown during the experiment were tested to examine the effects of both boundary extension and normalization with and without extremal edges. Images without extremal edges elicited typical boundary extension for the 16 % size change condition, whereas the 40 % condition showed signs of normalization. With extremal edges, a reduced amount of boundary extension occurred for the 16 % condition, and only normalization was found for the 40 % condition. Our findings support and highlight the importance of amodal continuation at the view boundaries as a component of boundary extension.

  14. Estimation of Logistic Regression Models in Small Samples. A Simulation Study Using a Weakly Informative Default Prior Distribution

    ERIC Educational Resources Information Center

    Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel

    2012-01-01

    In this paper, we used simulations to compare the performance of classical and Bayesian estimation in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
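
    A weakly informative default prior of the kind evaluated here is often taken to be the Cauchy(0, 2.5) prior of Gelman et al. (2008). The Python sketch below computes the posterior mode (MAP) under that prior; it illustrates the general idea only, and the exact prior and estimation method used in the paper may differ.

```python
import numpy as np
from scipy.optimize import minimize

def map_logistic(X, y, scale=2.5):
    """MAP estimate of logistic regression coefficients under
    independent Cauchy(0, scale) priors, in the spirit of the weakly
    informative default of Gelman et al. (2008). X should include an
    intercept column; standardizing predictors first is recommended."""
    def neg_log_post(beta):
        eta = X @ beta
        # Bernoulli log-likelihood, written in a numerically stable form
        loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
        logprior = -np.sum(np.log1p((beta / scale) ** 2))
        return -(loglik + logprior)
    beta0 = np.zeros(X.shape[1])
    return minimize(neg_log_post, beta0, method="BFGS").x

# Tiny-sample demo: n = 15, one predictor plus intercept
rng = np.random.default_rng(7)
x = rng.normal(size=15)
y = (rng.random(15) < 1 / (1 + np.exp(-(0.5 + 1.2 * x)))).astype(float)
X = np.column_stack([np.ones_like(x), x])
print(map_logistic(X, y))
```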

  15. Focusing adaptive-optics for neutron spectroscopy at extreme conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simeoni, G. G., E-mail: ggsimeoni@outlook.com; Physics Department E13, Technical University of Munich, D-85748 Garching; Valicu, R. G.

    2015-12-14

    Neutron spectroscopy employing extreme-conditions sample environments is nowadays a crucial tool for the understanding of fundamental scientific questions as well as for the investigation of materials and chemical-physical properties. For all these kinds of studies, an increased neutron flux over a small sample area is needed. The prototype of a focusing neutron guide component, developed and produced completely at the neutron source FRM II in Garching (Germany), has been installed at the time-of-flight (TOF) disc-chopper neutron spectrometer TOFTOF and came into routine operation. The design is based on the compressed Archimedes' mirror concept for finite-size divergent sources. It represents a unique device combining the supermirror technology with Adaptive Optics, suitable for broad-bandwidth thermal-cold TOF neutron spectroscopy (here optimized for 1.4–10 Å). It is able to squeeze the beam cross section down to a square centimeter, with a more than doubled signal-to-background ratio, increased efficiency at high scattering angles, and improved symmetry of the elastic resolution function. We present a comparison between the simulated and measured beam cross sections, as well as the performance of the instrument within real experiments. This work intends to show the unprecedented opportunities achievable at already existing instruments, along with useful guidelines for the design and construction of next-generation neutron spectrometers.

  16. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and subject to the choice of percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide information on EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further proves that EPTs determined by the DFA method
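
    The non-parametric percentile method is the simplest of the three to state in code. A minimal sketch, assuming daily precipitation arranged as a days-by-stations array and defining the EPT at each station as a high percentile of its wet-day amounts:

```python
import numpy as np

def percentile_ept(daily_precip, pct=95.0, wet_day=1.0):
    """Station-wise extreme precipitation threshold as the pct-th
    percentile of wet-day (>= wet_day mm) amounts. Sensitivity to
    record length and percentile choice can be probed by re-running
    on subsampled series. daily_precip: shape (n_days, n_stations)."""
    p = np.asarray(daily_precip, dtype=float)
    thresholds = np.full(p.shape[1], np.nan)
    for j in range(p.shape[1]):
        wet = p[p[:, j] >= wet_day, j]
        if wet.size:
            thresholds[j] = np.percentile(wet, pct)
    return thresholds

# Synthetic stand-in: 10 years of daily data at 62 stations
rng = np.random.default_rng(11)
precip = rng.gamma(0.4, 12.0, size=(3650, 62))
print(percentile_ept(precip, pct=95).round(1)[:5])
```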

  17. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    PubMed

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries. There is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, a statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal glutamate fermentation experiments. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6 % of the variance in glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95 % confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, and a fault was defined as the estimated production of glutamate falling outside the 95 % confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end, when the fermentation conditions were back to normal. The proposed approach used only small sample sets from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
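
    The GAM-plus-bootstrap recipe can be sketched as follows, assuming the pyGAM package for the additive model; the variable roles mirror the paper (columns of X holding fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate), but this is an illustrative reconstruction, not the authors' implementation.

```python
import numpy as np
from pygam import LinearGAM, s   # assumes the pyGAM package is installed

def fit_and_band(X, y, X_new, n_boot=200, alpha=0.05, seed=5):
    """Fit an additive model of glutamate production on fermentation
    parameters and bootstrap a (1 - alpha) band for the estimate by
    resampling training rows and refitting."""
    rng = np.random.default_rng(seed)
    terms = s(0)
    for j in range(1, X.shape[1]):
        terms += s(j)                    # one smooth term per parameter
    preds = np.empty((n_boot, len(X_new)))
    for b in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))
        preds[b] = LinearGAM(terms).fit(X[idx], y[idx]).predict(X_new)
    lo, hi = np.percentile(preds, [100 * alpha / 2,
                                   100 * (1 - alpha / 2)], axis=0)
    return lo, hi

def flag_faults(y_obs, lo, hi):
    """Time steps where observed production leaves the bootstrap band."""
    return (y_obs < lo) | (y_obs > hi)
```

    A run would then be flagged as faulty at the first time step where the observed production leaves the band, and as recovered once it re-enters it, mirroring the start/end detection described above.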

  18. Spatial variability of extreme rainfall at radar subpixel scale

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Marra, Francesco; Fatichi, Simone; Paschalis, Athanasios; Molnar, Peter; Burlando, Paolo

    2018-01-01

    Extreme rainfall is quantified in engineering practice using Intensity-Duration-Frequency (IDF) curves that are traditionally derived from rain-gauges and more recently also from remote sensing instruments, such as weather radars. These instruments measure rainfall at different spatial scales: a rain-gauge samples rainfall at the point scale, while a weather radar averages precipitation over a relatively large area, generally around 1 km2. As such, a radar-derived IDF curve is representative of the mean areal rainfall over a given radar pixel and neglects the within-pixel rainfall variability. In this study, we quantify subpixel variability of extreme rainfall by using a novel space-time rainfall generator (the STREAP model) that downscales in space the rainfall within a given radar pixel. The study was conducted using a unique radar data record (23 years) and a very dense rain-gauge network in the Eastern Mediterranean area (northern Israel). Radar-IDF curves, together with an ensemble of point-based IDF curves representing the radar subpixel extreme rainfall variability, were developed by fitting Generalized Extreme Value (GEV) distributions to annual rainfall maxima. It was found that the mean areal extreme rainfall derived from the radar underestimates most of the extreme values computed for point locations within the radar pixel (on average, ∼70%). The subpixel variability of extreme rainfall was found to increase with longer return periods and shorter durations (e.g. from a maximum variability of 10% for a return period of 2 years and a duration of 4 h to 30% for a return period of 50 years and a duration of 20 min). For the longer return periods, a considerable enhancement of extreme rainfall variability was found when stochastic (natural) climate variability was taken into account. Bounding the range of the subpixel extreme rainfall derived from radar-IDF can be of major importance for different applications that require very local estimates of rainfall extremes.
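
    The GEV fitting step behind both the radar and the point IDF curves reduces to a few lines with scipy; a sketch, with synthetic annual maxima standing in for the 23-year record:

```python
import numpy as np
from scipy.stats import genextreme

def gev_return_levels(annual_maxima, return_periods=(2, 10, 50)):
    """Fit a GEV distribution to annual rainfall maxima (one value per
    year for a given duration) and read off return levels, the building
    block of radar- and point-based IDF curves. Note scipy's shape
    convention c = -xi, with xi the usual GEV shape parameter."""
    c, loc, scale = genextreme.fit(annual_maxima)
    T = np.asarray(return_periods, dtype=float)
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

# e.g. 23 years of annual maxima of 1 h rainfall (mm) at one pixel
rng = np.random.default_rng(23)
maxima = genextreme.rvs(-0.1, loc=25, scale=8, size=23, random_state=rng)
print(gev_return_levels(maxima).round(1))
```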

  19. Cell block samples from malignant pleural effusion might be valid alternative samples for anaplastic lymphoma kinase detection in patients with advanced non-small-cell lung cancer.

    PubMed

    Zhou, Jianya; Yao, Hongtian; Zhao, Jing; Zhang, Shumeng; You, Qihan; Sun, Ke; Zou, Yinying; Zhou, Caicun; Zhou, Jianying

    2015-06-01

    To evaluate the clinical value of cell block samples from malignant pleural effusion (MPE) as alternative samples to tumour tissue for anaplastic lymphoma kinase (ALK) detection in patients with advanced non-small-cell lung cancer (NSCLC). Fifty-two matched samples were eligible for analysis. ALK status was detected by Ventana immunohistochemistry (IHC) (with the D5F3 clone), reverse transcription polymerase chain reaction (RT-PCR) and fluorescence in-situ hybridization (FISH) in MPE cell block samples, and by FISH in tumour tissue block samples. In total, ALK FISH results were obtained for 52 tumour tissue samples and 41 MPE cell block samples. Eight cases (15.4%) were ALK-positive in tumour tissue samples by FISH, and among matched MPE cell block samples, five were ALK-positive by FISH, seven were ALK-positive by RT-PCR, and eight were ALK-positive by Ventana IHC. The ALK status concordance rates between tumour tissue and MPE cell block samples were 78.9% by FISH, 98.1% by RT-PCR, and 100% by Ventana IHC. In MPE cell block samples, the sensitivity and specificity of Ventana IHC (100% and 100%) and RT-PCR (87.5% and 100%) were higher than those of FISH (62.5% and 100%). Malignant pleural effusion cell block samples had a diagnostic performance for ALK detection in advanced NSCLC that was comparable to that of tumour tissue samples. MPE cell block samples might be valid alternative samples for ALK detection when tissue is not available. Ventana IHC could be the most suitable method for ALK detection in MPE cell block samples. © 2014 John Wiley & Sons Ltd.

  20. Perspectives on Extremes as a Climate Scientist and Farmer

    NASA Astrophysics Data System (ADS)

    Grotjahn, R.

    2016-12-01

    The speaker is both a climate scientist whose research emphasizes climate extremes and a small farmer in the most agriculturally productive region in the world. He will share some perspectives about the future of extremes over the United States as they relate to farming. General information will be drawn from the National Climate Assessment (NCA) published in 2014. Different weather-related quantities are useful for different commodities. While plant and animal production are time-integrative, extreme events can cause lasting harm long after the event is over. Animal production, including dairy, is sensitive to combinations of high heat and humidity; lasting impacts include suspended milk production, aborted fetuses, and increased mortality. The rice crop can be devastated by the wrong combination of wind and humidity just before harvest time. Extremes at the bud break, flowering, and nascent fruit stages can greatly reduce the fruit production for the year in tree crops. Saturated soils from heavy rainfall cause major losses to some crops (for example, by fostering pathogen growth), harm water delivery systems, and disrupt the timing of field activities (primarily harvest). After an overview of some general issues relating to agriculture, some extreme weather impacts on specific commodities (primarily dairy and specialty crops, some grains) will be highlighted, including quantities relevant to agriculture. Economic impacts of example extreme events will be summarized. If there is interest, issues related to water availability and management will be described. Projected extreme event changes over the US will be discussed. Some conclusions will be drawn about future impacts and possible changes to farming (some are already occurring). Perspectives will be given on including the diverse range of quantities useful to agriculture when developing climate models. As time permits, some personal experiences with climate change and discussing it with fellow farmers will be shared.

  1. Three-dimensional nanoscale molecular imaging by extreme ultraviolet laser ablation mass spectrometry

    PubMed Central

    Kuznetsov, Ilya; Filevich, Jorge; Dong, Feng; Woolston, Mark; Chao, Weilun; Anderson, Erik H.; Bernstein, Elliot R.; Crick, Dean C.; Rocca, Jorge J.; Menoni, Carmen S.

    2015-01-01

    Analytical probes capable of mapping molecular composition at the nanoscale are of critical importance to materials research, biology and medicine. Mass spectral imaging makes it possible to visualize the spatial organization of multiple molecular components at a sample's surface. However, it is challenging for mass spectral imaging to map molecular composition in three dimensions (3D) with submicron resolution. Here we describe a mass spectral imaging method that exploits the high 3D localization of absorbed extreme ultraviolet laser light and its fundamentally distinct interaction with matter to determine molecular composition from a volume as small as 50 zl in a single laser shot. Molecular imaging with a lateral resolution of 75 nm and a depth resolution of 20 nm is demonstrated. These results open opportunities to visualize chemical composition and chemical changes in 3D at the nanoscale. PMID:25903827

  2. Light chain typing of immunoglobulins in small samples of biological material

    PubMed Central

    Rádl, J.

    1970-01-01

    A method is described for the typing of the light chains of immunoglobulins in small samples of sera or external secretions and without their previous isolation. It consists of immunoelectrophoresis in agar plates which contain specific antisera against one of the light chain types. All immunoglobulins of this type are thus selected by precipitation in the central area during the electrophoretic phase. Immunoglobulins of the opposite light chain type diffuse through the agar and react with the class specific antisera from the troughs. This results in the precipitin lines as in conventional immunoelectrophoresis. This technique has proved most useful for typing heterogeneous or homogeneous immunoglobulins at normal and low concentrations. The antisera used for incorporation in the agar should fulfil special requirements. They should contain a high level of antibodies against common surface determinants of the immunoglobulin light chains. The further possibilities of this immunoselection technique for typing different protein mixtures are discussed. PMID:4098592

  3. Method for detection of long-lived radioisotopes in small biochemical samples

    DOEpatents

    Turteltaub, K.W.; Vogel, J.S.; Felton, J.S.; Gledhill, B.L.; Davis, J.C.

    1994-11-22

    Disclosed is a method for detection of long-lived radioisotopes in small biochemical samples, comprising: a. selecting a biological host in which radioisotopes are present in concentrations equal to or less than those in the ambient biosphere, b. preparing a long-lived radioisotope labeled reactive chemical specie, c. administering the chemical specie to the biological host in doses sufficiently low to avoid significant overt damage to the biological system, d. allowing a period of time to elapse sufficient for dissemination and interaction of the chemical specie with the host throughout the biological system of the host, e. isolating a reacted fraction of the biological substance from the host in a manner sufficient to avoid contamination of the substance from extraneous sources, f. converting the fraction of biological substance by suitable means to a material which efficiently produces charged ions in at least one of several possible ion sources without introduction of significant isotopic fractionation, and, g. measuring the radioisotope concentration in the material by means of direct isotopic counting. 5 figs.

  4. Method for detection of long-lived radioisotopes in small biochemical samples

    DOEpatents

    Turteltaub, Kenneth W.; Vogel, John S.; Felton, James S.; Gledhill, Barton L.; Davis, Jay C.

    1994-01-01

    Disclosed is a method for detection of long-lived radioisotopes in small biochemical samples, comprising: a. selecting a biological host in which radioisotopes are present in concentrations equal to or less than those in the ambient biosphere, b. preparing a long-lived radioisotope labeled reactive chemical specie, c. administering said chemical specie to said biological host in doses sufficiently low to avoid significant overt damage to the biological system thereof, d. allowing a period of time to elapse sufficient for dissemination and interaction of said chemical specie with said host throughout said biological system of said host, e. isolating a reacted fraction of the biological substance from said host in a manner sufficient to avoid contamination of said substance from extraneous sources, f. converting said fraction of biological substance by suitable means to a material which efficiently produces charged ions in at least one of several possible ion sources without introduction of significant isotopic fractionation, and, g. measuring the radioisotope concentration in said material by means of direct isotopic counting.

  5. Genetic background of extreme violent behavior

    PubMed Central

    Tiihonen, J; Rautiainen, M-R; Ollila, HM; Repo-Tiihonen, E; Virkkunen, M; Palotie, A; Pietiläinen, O; Kristiansson, K; Joukamaa, M; Lauerma, H; Saarela, J; Tyni, S; Vartiainen, H; Paananen, J; Goldman, D; Paunio, T

    2015-01-01

    In developed countries, the majority of all violent crime is committed by a small group of antisocial recidivistic offenders, but no genes have been shown to contribute to recidivistic violent offending or severe violent behavior, such as homicide. Our results, from two independent cohorts of Finnish prisoners, revealed that a monoamine oxidase A (MAOA) low-activity genotype (contributing to low dopamine turnover rate) as well as the CDH13 gene (coding for neuronal membrane adhesion protein) are associated with extremely violent behavior (at least 10 committed homicides, attempted homicides or batteries). No substantial signal was observed for either MAOA or CDH13 among non-violent offenders, indicating that findings were specific for violent offending, and not largely attributable to substance abuse or antisocial personality disorder. These results indicate both low monoamine metabolism and neuronal membrane dysfunction as plausible factors in the etiology of extreme criminal violent behavior, and imply that at least about 5–10% of all severe violent crime in Finland is attributable to the aforementioned MAOA and CDH13 genotypes. PMID:25349169

  6. Genetic background of extreme violent behavior.

    PubMed

    Tiihonen, J; Rautiainen, M-R; Ollila, H M; Repo-Tiihonen, E; Virkkunen, M; Palotie, A; Pietiläinen, O; Kristiansson, K; Joukamaa, M; Lauerma, H; Saarela, J; Tyni, S; Vartiainen, H; Paananen, J; Goldman, D; Paunio, T

    2015-06-01

    In developed countries, the majority of all violent crime is committed by a small group of antisocial recidivistic offenders, but no genes have been shown to contribute to recidivistic violent offending or severe violent behavior, such as homicide. Our results, from two independent cohorts of Finnish prisoners, revealed that a monoamine oxidase A (MAOA) low-activity genotype (contributing to low dopamine turnover rate) as well as the CDH13 gene (coding for neuronal membrane adhesion protein) are associated with extremely violent behavior (at least 10 committed homicides, attempted homicides or batteries). No substantial signal was observed for either MAOA or CDH13 among non-violent offenders, indicating that findings were specific for violent offending, and not largely attributable to substance abuse or antisocial personality disorder. These results indicate both low monoamine metabolism and neuronal membrane dysfunction as plausible factors in the etiology of extreme criminal violent behavior, and imply that at least about 5-10% of all severe violent crime in Finland is attributable to the aforementioned MAOA and CDH13 genotypes.

  7. Risk factors for lower extremity injury: a review of the literature

    PubMed Central

    Murphy, D; Connolly, D; Beynnon, B

    2003-01-01

    Prospective studies on risk factors for lower extremity injury are reviewed. Many intrinsic and extrinsic risk factors have been implicated; however, there is little agreement with respect to the findings. Future prospective studies are needed using sufficient sample sizes of males and females, including collection of exposure data, and using established methods for identifying and classifying injury severity to conclusively determine additional risk factors for lower extremity injury. PMID:12547739

  8. Assessing differential gene expression with small sample sizes in oligonucleotide arrays using a mean-variance model.

    PubMed

    Hu, Jianhua; Wright, Fred A

    2007-03-01

    The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
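
    The shrinkage idea, pulling each gene's variance toward a value predicted from the mean-variance relationship so that no variance is implausibly small, can be sketched generically. This is not the paper's exact model; `s2_prior` and the prior weight `d0` stand in for whatever the fitted trend and its effective degrees of freedom provide.

```python
import numpy as np

def shrunken_t(x1, x2, s2_prior, d0=4.0):
    """Two-sample differential-expression statistic with a shrunken
    variance: the gene-wise pooled variance is pulled toward a prior
    value s2_prior (e.g. predicted from a mean-variance trend) with
    prior weight d0 pseudo-degrees of freedom, guarding against
    inappropriately small variance estimates with few arrays."""
    n1, n2 = x1.shape[1], x2.shape[1]
    df = n1 + n2 - 2
    pooled = (np.var(x1, axis=1, ddof=1) * (n1 - 1)
              + np.var(x2, axis=1, ddof=1) * (n2 - 1)) / df
    s2 = (d0 * s2_prior + df * pooled) / (d0 + df)   # shrinkage step
    se = np.sqrt(s2 * (1.0 / n1 + 1.0 / n2))
    return (x1.mean(1) - x2.mean(1)) / se

# 1000 genes, 3 arrays per group; prior variance from a global trend
rng = np.random.default_rng(9)
g1 = rng.normal(0, 1, (1000, 3))
g2 = rng.normal(0, 1, (1000, 3))
print(shrunken_t(g1, g2, s2_prior=np.full(1000, 1.0))[:5].round(2))
```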

  9. Small scale structure on cosmic strings

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1989-01-01

    The current understanding of cosmic string evolution is discussed, with the focus placed on the question of small scale structure on strings, where most of the disagreements lie. A physical picture designed to put the role of the small scale structure into more intuitive terms is presented. In this picture it can be seen how the small scale structure can feed back in a major way on the overall scaling solution. It is also argued that it is easy for small scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small scale structure, which, it is argued, would in any case be extremely valuable in filling the gaps in the present understanding of cosmic string evolution.

  10. Preparation of Protein Samples for NMR Structure, Function, and Small Molecule Screening Studies

    PubMed Central

    Acton, Thomas B.; Xiao, Rong; Anderson, Stephen; Aramini, James; Buchwald, William A.; Ciccosanti, Colleen; Conover, Ken; Everett, John; Hamilton, Keith; Huang, Yuanpeng Janet; Janjua, Haleema; Kornhaber, Gregory; Lau, Jessica; Lee, Dong Yup; Liu, Gaohua; Maglaqui, Melissa; Ma, Lichung; Mao, Lei; Patel, Dayaban; Rossi, Paolo; Sahdev, Seema; Shastry, Ritu; Swapna, G.V.T.; Tang, Yeufeng; Tong, Saichiu; Wang, Dongyan; Wang, Huang; Zhao, Li; Montelione, Gaetano T.

    2014-01-01

    In this chapter, we concentrate on the production of high quality protein samples for NMR studies. In particular, we provide an in-depth description of recent advances in the production of NMR samples and their synergistic use with recent advancements in NMR hardware. We describe the protein production platform of the Northeast Structural Genomics Consortium, and outline our high-throughput strategies for producing high quality protein samples for nuclear magnetic resonance (NMR) studies. Our strategy is based on the cloning, expression and purification of 6X-His-tagged proteins using T7-based Escherichia coli systems and isotope enrichment in minimal media. We describe 96-well ligation-independent cloning and analytical expression systems, parallel preparative scale fermentation, and high-throughput purification protocols. The 6X-His affinity tag allows for a similar two-step purification procedure implemented in a parallel high-throughput fashion that routinely results in purity levels sufficient for NMR studies (> 97% homogeneity). Using this platform, the protein open reading frames of over 17,500 different targeted proteins (or domains) have been cloned as over 28,000 constructs. Nearly 5,000 of these proteins have been purified to homogeneity in tens of milligram quantities (see Summary Statistics, http://nesg.org/statistics.html), resulting in more than 950 new protein structures, including more than 400 NMR structures, deposited in the Protein Data Bank. The Northeast Structural Genomics Consortium pipeline has been effective in producing protein samples of both prokaryotic and eukaryotic origin. Although this paper describes our entire pipeline for producing isotope-enriched protein samples, it focuses on the major updates introduced during the last 5 years (Phase 2 of the National Institute of General Medical Sciences Protein Structure Initiative). Our advanced automated and/or parallel cloning, expression, purification, and biophysical screening

  11. Correlation dimension and phase space contraction via extreme value theory

    NASA Astrophysics Data System (ADS)

    Faranda, Davide; Vaienti, Sandro

    2018-04-01

    We show how to obtain theoretical and numerical estimates of correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which in dimensions 1 and 2 is closely related to the positive Lyapunov exponent, and in higher dimensions to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain, as they imply just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Henon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
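
    Estimator (i) is straightforward to try numerically. The sketch below rests on the stated assumption that block maxima of -log distances to a reference point follow a GEV law whose scale parameter is the inverse of the dimension; it applies the idea to a Henon map trajectory and omits the extremal-index part of the method.

```python
import numpy as np
from scipy.stats import genextreme

def dimension_via_evt(traj, block=1000, n_ref=20, seed=0):
    """Estimate the correlation dimension of an attractor from block
    maxima of the observable -log ||x_t - x_ref||: the fitted GEV
    scale parameter has inverse equal to the dimension. Reference
    points are sampled along the trajectory itself."""
    rng = np.random.default_rng(seed)
    dims = []
    for _ in range(n_ref):                        # average over references
        ref = traj[rng.integers(len(traj))]
        d = np.linalg.norm(traj - ref, axis=1)
        obs = -np.log(d[d > 0])                   # drop the ref point itself
        n_blocks = len(obs) // block
        maxima = obs[: n_blocks * block].reshape(n_blocks, block).max(1)
        _, _, scale = genextreme.fit(maxima)
        dims.append(1.0 / scale)
    return np.mean(dims)

# Henon map trajectory (a = 1.4, b = 0.3); known D2 is close to 1.22
x = np.empty((200_000, 2))
x[0] = (0.1, 0.1)
for t in range(1, len(x)):
    x[t] = (1 - 1.4 * x[t - 1, 0] ** 2 + x[t - 1, 1], 0.3 * x[t - 1, 0])
print(dimension_via_evt(x))
```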

  12. Fourier fringe analysis and its application to metrology of extreme physical phenomena: a review [Invited].

    PubMed

    Takeda, Mitsuo

    2013-01-01

    The paper reviews a technique for fringe analysis referred to as Fourier fringe analysis (FFA) or the Fourier transform method, with a particular focus on its application to metrology of extreme physical phenomena. Examples include the measurement of extremely small magnetic fields with subfluxon sensitivity by electron wave interferometry, subnanometer wavefront evaluation of projection optics for extreme UV lithography, the detection of sub-Ångstrom distortion of a crystal lattice, and the measurement of ultrashort optical pulses in the femtosecond to attosecond range, which show how the advantages of FFA are exploited in these cutting-edge applications.
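
    The core of the Fourier transform method is a three-step pipeline: transform the fringe signal, isolate the sideband at the carrier frequency, and take the phase of the inverse transform. A one-dimensional sketch with a synthetic carrier-fringe signal (all parameters are illustrative):

```python
import numpy as np

def fourier_fringe_phase(fringe, f0):
    """Extract the phase of a 1-D carrier-fringe signal by the Fourier
    transform method: band-pass the sideband near the carrier frequency
    f0 (cycles/sample), inverse-transform, and unwrap the angle after
    removing the carrier ramp."""
    n = len(fringe)
    F = np.fft.fft(fringe - fringe.mean())
    freqs = np.fft.fftfreq(n)
    side = np.where((freqs > f0 / 2) & (freqs < 3 * f0 / 2), F, 0)
    analytic = np.fft.ifft(side)
    return np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * np.arange(n)

# Synthetic fringes: carrier at 0.1 cycles/sample, quadratic test phase
x = np.arange(1024)
true_phase = 2e-5 * (x - 512) ** 2
g = 1 + 0.8 * np.cos(2 * np.pi * 0.1 * x + true_phase)
rec = fourier_fringe_phase(g, 0.1)
print(np.std((rec - rec.mean()) - (true_phase - true_phase.mean())))
```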

  13. Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package

    NASA Astrophysics Data System (ADS)

    Cheng, L.; AghaKouchak, A.; Gilleland, E.

    2013-12-01

    Numerous studies show that climatic extremes have increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimation of return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for analysis of climate extremes under both stationary and non-stationary assumptions. The Nonstationary Extreme Value Analysis (hereafter, NEVA) provides an efficient and generalized framework for analyzing extremes using Bayesian inference. NEVA estimates the extreme value parameters using a Differential Evolution Markov Chain (DE-MC), which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and has the advantage of simplicity, speed of calculation and convergence over conventional MCMC. NEVA also offers confidence intervals and uncertainty bounds for estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization and visualization, explicitly designed to facilitate the analysis of extremes in geosciences. The generalized input and output files of this software package make it attractive for users from across different fields. Both stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
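
    The way sampled parameters translate into return-level uncertainty can be illustrated with a much simpler sampler than NEVA's DE-MC. The sketch below uses plain random-walk Metropolis with flat priors, which is enough to show how a credible interval for a 50-year return level falls out of the posterior draws; it is not NEVA's algorithm.

```python
import numpy as np
from scipy.stats import genextreme

def metropolis_gev(maxima, n_iter=20_000, step=(0.05, 0.1, 0.05), seed=13):
    """Posterior samples of GEV parameters (scipy shape c, location,
    log-scale) for block maxima under flat priors, via random-walk
    Metropolis; a sketch of the Bayesian return-level idea only."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, np.mean(maxima), np.log(np.std(maxima))])
    def loglik(th):
        return genextreme.logpdf(maxima, th[0], loc=th[1],
                                 scale=np.exp(th[2])).sum()
    ll, samples = loglik(theta), []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)
        ll_prop = loglik(prop)
        if np.log(rng.random()) < ll_prop - ll:      # accept/reject
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples[n_iter // 2:])           # drop burn-in

# 95% credible interval for the 50-year return level
rng = np.random.default_rng(13)
maxima = genextreme.rvs(-0.1, loc=30, scale=10, size=60, random_state=rng)
post = metropolis_gev(maxima)
rl = genextreme.ppf(1 - 1 / 50, post[:, 0], loc=post[:, 1],
                    scale=np.exp(post[:, 2]))
print(np.percentile(rl, [2.5, 50, 97.5]).round(1))
```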

  14. Method and Apparatus for Automated Isolation of Nucleic Acids from Small Cell Samples

    NASA Technical Reports Server (NTRS)

    Sundaram, Shivshankar; Prabhakarpandian, Balabhaskar; Pant, Kapil; Wang, Yi

    2014-01-01

    RNA isolation is a ubiquitous need, driven by current emphasis on microarrays and miniaturization. With commercial systems requiring 100,000 to 1,000,000 cells for successful isolation, there is a growing need for a small-footprint, easy-to-use device that can harvest nucleic acids from much smaller cell samples (1,000 to 10,000 cells). The process of extraction of RNA from cell cultures is a complex, multi-step one, and requires timed, asynchronous operations with multiple reagents/buffers. An added complexity is the fragility of RNA (subject to degradation) and its reactivity to surface. A novel, microfluidics-based, integrated cartridge has been developed that can fully automate the complex process of RNA isolation (lyse, capture, and elute RNA) from small cell culture samples. On-cartridge cell lysis is achieved using either reagents or high-strength electric fields made possible by the miniaturized format. Traditionally, silica-based, porous-membrane formats have been used for RNA capture, requiring slow perfusion for effective capture. In this design, high efficiency capture/elution are achieved using a microsphere-based "microfluidized" format. Electrokinetic phenomena are harnessed to actively mix microspheres with the cell lysate and capture/elution buffer, providing important advantages in extraction efficiency, processing time, and operational flexibility. Successful RNA isolation was demonstrated using both suspension (HL-60) and adherent (BHK-21) cells. Novel features associated with this development are twofold. First, novel designs that execute needed processes with improved speed and efficiency were developed. These primarily encompass electric-field-driven lysis of cells. The configurations include electrode-containing constructs, or an "electrode-less" chip design, which is easy to fabricate and mitigates fouling at the electrode surface; and the "fluidized" extraction format based on electrokinetically assisted mixing and contacting of microbeads

  15. Evaluation of precipitation extremes over the Asian domain: observation and modelling studies

    NASA Astrophysics Data System (ADS)

    Kim, In-Won; Oh, Jaiho; Woo, Sumin; Kripalani, R. H.

    2018-04-01

    In this study, a comparison of the precipitation extremes exhibited by seven reference datasets is made to ascertain whether the inferences based on these datasets agree or differ. These seven datasets, roughly grouped into three categories, i.e. rain-gauge based (APHRODITE, CPC-UNI), satellite based (TRMM, GPCP1DD), and reanalysis based (ERA-Interim, MERRA, and JRA55), share a common data period (1998-2007). The focus is on examining precipitation extremes in the summer monsoon rainfall over South Asia, East Asia and Southeast Asia. Measures of extreme precipitation include percentile thresholds, the frequency of extreme precipitation events, and other quantities. Results reveal that the differences in displaying extremes among the datasets are small over South Asia and East Asia, but large differences among the datasets are displayed over the Southeast Asian region including the maritime continent. Furthermore, precipitation data appear to be more consistent over East Asia among the seven datasets. Decadal trends in extreme precipitation are consistent with known results over South and East Asia. No trends in extreme precipitation events are exhibited over Southeast Asia. Outputs of the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulation data are categorized as high-, medium- and low-resolution models. The regions displaying maximum intensity of extreme precipitation appear to depend on model resolution. High-resolution models simulate maximum intensity of extreme precipitation over the Indian sub-continent, medium-resolution models over northeast India and South China, and the low-resolution models over Bangladesh, Myanmar and Thailand. In summary, there are differences in displaying extreme precipitation statistics among the seven datasets considered here and among the 29 CMIP5 model data outputs.
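
    Two of the measures named above, a wet-day percentile threshold and the exceedance frequency, reduce to a few lines of array arithmetic. The sketch below is illustrative only (mock data, hypothetical names); the study's handling of gridded multi-dataset inputs is more involved.

```python
# Sketch of two common extreme-precipitation measures: a percentile
# threshold on wet days and the frequency of exceedances of it.
import numpy as np

def extreme_stats(daily_pr, wet_min=1.0, q=95):
    """daily_pr: 1-D array of daily precipitation (mm/day) at one point."""
    wet = daily_pr[daily_pr >= wet_min]          # wet days only
    thresh = np.percentile(wet, q)               # e.g. 95th-percentile threshold
    n_extreme = int((daily_pr > thresh).sum())   # frequency of extreme events
    return thresh, n_extreme

rng = np.random.default_rng(1)
pr = rng.gamma(shape=0.4, scale=12.0, size=3650)  # ten mock years of daily data
print(extreme_stats(pr))
```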

  16. Systematic studies of small scintillators for new sampling calorimeter

    NASA Astrophysics Data System (ADS)

    Jacosalem, E. P.; Iba, S.; Nakajima, N.; Ono, H.; Sanchez, A. L. C.; Bacala, A. M.; Miyata, H.

    2007-12-01

    A new sampling calorimeter using very thin scintillators and the multi-pixel photon counter (MPPC) has been proposed to produce better position resolution for the international linear collider (ILC) experiment. As part of this R&D study, small plastic scintillators of different sizes, thicknesses and wrapping reflectors are systematically studied. The scintillation light produced by beta rays from a collimated 90Sr source is collected from the scintillator by wavelength-shifting (WLS) fiber and converted into electrical signals at the PMT. The wrapped scintillator that gives the best light yield is determined by comparing the measured pulse height of each 10 × 40 × 2 mm strip scintillator covered with 3M reflective mirror film, teflon, white paint, black tape, gold, aluminum and white paint+teflon. The pulse height dependence on position, length and thickness of the 3M reflective mirror film and teflon wrapped scintillators is measured. Results show that the 3M radiant mirror film-wrapped scintillator has the greatest light yield, with an average of 9.2 photoelectrons. It is observed that light yield increases slightly with scintillator length, but increases by about 100% when the WLS fiber diameter is increased from 1.0 mm to 1.6 mm. The position dependence measurement along the strip scintillator showed the uniformity of light transmission from the sensor to the PMT. A dip across the strip is observed, amounting to 40% of the maximum pulse height. The block type scintillator pulse height, on the other hand, is found to be almost proportional to scintillator thickness.

  17. Bipolar vulnerability and extreme appraisals of internal states: a computerized ratings study.

    PubMed

    Dodd, Alyson L; Mansell, Warren; Morrison, Anthony P; Tai, Sara

    2011-01-01

    A recent integrative cognitive model proposed that multiple, extreme, personalized, positive and negative appraisals of internal states predispose to, maintain, and exacerbate bipolar symptoms. This study aimed to directly assess conviction in a range of positive and negative appraisals of internal states suggested by the model, using a laboratory-based computerized task. In a student sample (n = 68), a history of hypomania was associated with more positive and less negative appraisals of internal states, and a history of depression was associated with more negative and less positive appraisals of internal states. The sample was then split into three groups for comparison: bipolar risk (n = 18), depression risk (n = 20) and controls (n = 30). Relative to controls, the bipolar risk group made more extreme ratings of catastrophic appraisals of low activation states and tended to make more extreme ratings of appraisals of high activation states. The depression risk group scored higher on a range of negative appraisals of low activation states. These findings provide tentative support for the role of both positive and negative, extreme, personalized appraisals of internal states in hypomania and depression. Copyright © 2011 John Wiley & Sons, Ltd.

  18. Inpatient Rehabilitation Volume and Functional Outcomes in Stroke, Lower Extremity Fracture, and Lower Extremity Joint Replacement

    PubMed Central

    Graham, James E.; Deutsch, Anne; O’Connell, Ann A.; Karmarkar, Amol M.; Granger, Carl V.; Ottenbacher, Kenneth J.

    2013-01-01

    Background It is unclear if volume-outcome relationships exist in inpatient rehabilitation. Objectives Assess associations between facility volumes and two patient-centered outcomes in the three most common diagnostic groups in inpatient rehabilitation. Research Design We used hierarchical linear and generalized linear models to analyze administrative assessment data from patients receiving inpatient rehabilitation services for stroke (n=202,423), lower extremity fracture (n=132,194), or lower extremity joint replacement (n=148,068) between 2006 and 2008 in 717 rehabilitation facilities across the U.S. Facilities were assigned to quintiles based on average annual diagnosis-specific patient volumes. Measures Discharge functional status (FIM instrument) and probability of home discharge. Results Facility-level factors accounted for 6–15% of the variance in discharge FIM total scores and 3–5% of the variance in home discharge probability across the 3 diagnostic groups. We used the middle volume quintile (Q3) as the reference group for all analyses and detected small, but statistically significant (p < .01) associations with discharge functional status in all three diagnosis groups. Only the highest volume quintile (Q5) reached statistical significance, displaying higher functional status ratings than Q3 each time. The largest effect was observed in FIM total scores among fracture patients, with only a 3.6-point difference between Q5 and Q3 group means. Volume was not independently related to home discharge. Conclusions Outcome-specific volume effects ranged from small (functional status) to none (home discharge) in all three diagnostic groups. Patients with these conditions can be treated locally rather than at higher-volume regional centers. Further regionalization of inpatient rehabilitation services is not needed for these conditions. PMID:23579350

  19. Inpatient rehabilitation volume and functional outcomes in stroke, lower extremity fracture, and lower extremity joint replacement.

    PubMed

    Graham, James E; Deutsch, Anne; O'Connell, Ann A; Karmarkar, Amol M; Granger, Carl V; Ottenbacher, Kenneth J

    2013-05-01

    It is unclear if volume-outcome relationships exist in inpatient rehabilitation. Assess associations between facility volumes and 2 patient-centered outcomes in the 3 most common diagnostic groups in inpatient rehabilitation. We used hierarchical linear and generalized linear models to analyze administrative assessment data from patients receiving inpatient rehabilitation services for stroke (n=202,423), lower extremity fracture (n=132,194), or lower extremity joint replacement (n=148,068) between 2006 and 2008 in 717 rehabilitation facilities across the United States. Facilities were assigned to quintiles based on average annual diagnosis-specific patient volumes. Discharge functional status (FIM instrument) and probability of home discharge. Facility-level factors accounted for 6%-15% of the variance in discharge FIM total scores and 3%-5% of the variance in home discharge probability across the 3 diagnostic groups. We used the middle volume quintile (Q3) as the reference group for all analyses and detected small, but statistically significant (P<0.01) associations with discharge functional status in all 3 diagnosis groups. Only the highest volume quintile (Q5) reached statistical significance, displaying higher functional status ratings than Q3 each time. The largest effect was observed in FIM total scores among fracture patients, with only a 3.6-point difference between Q5 and Q3 group means. Volume was not independently related to home discharge. Outcome-specific volume effects ranged from small (functional status) to none (home discharge) in all 3 diagnostic groups. Patients with these conditions can be treated locally rather than at higher-volume regional centers. Further regionalization of inpatient rehabilitation services is not needed for these conditions.

  20. Eating Problems at Age 6 Years in a Whole Population Sample of Extremely Preterm Children

    ERIC Educational Resources Information Center

    Samara, Muthanna; Johnson, Samantha; Lamberts, Koen; Marlow, Neil; Wolke, Dieter

    2010-01-01

    Aim: The aim of this study was to investigate the prevalence of eating problems and their association with neurological and behavioural disabilities and growth among children born extremely preterm (EPC) at age 6 years. Method: A standard questionnaire about eating was completed by parents of 223 children (125 males [56.1%], 98 females [43.9%])…

  1. Upper extremity pain and computer use among engineering graduate students.

    PubMed

    Schlossberg, Eric B; Morrow, Sandra; Llosa, Augusto E; Mamary, Edward; Dietrich, Peter; Rempel, David M

    2004-09-01

    The objective of this study was to investigate risk factors associated with persistent or recurrent upper extremity and neck pain among engineering graduate students. A random sample of 206 Electrical Engineering and Computer Science (EECS) graduate students at a large public university completed an online questionnaire. Approximately 60% of respondents reported upper extremity or neck pain attributed to computer use, with a mean pain severity score of 4.5 (±2.2; scale 0-10). In a final logistic regression model, female gender, years of computer use, and hours of computer use per week were significantly associated with pain. The high prevalence of upper extremity pain reported by graduate students suggests a public health need to identify interventions that will reduce symptom severity and prevent impairment.
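
    The final model described above is an ordinary binary logistic regression of pain status on the three risk factors. A hedged sketch on synthetic data follows; the variable names, coefficients, and data are hypothetical, not the study's.

```python
# Logistic regression of pain ~ gender + years of use + hours/week,
# on synthetic data; all names and effect sizes are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 206
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "years_use": rng.uniform(1, 15, n),
    "hours_week": rng.uniform(5, 60, n),
})
logit_p = -3 + 0.9 * df["female"] + 0.08 * df["years_use"] + 0.03 * df["hours_week"]
df["pain"] = rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))

X = sm.add_constant(df[["female", "years_use", "hours_week"]])
fit = sm.Logit(df["pain"].astype(int), X).fit(disp=0)
print(np.exp(fit.params))   # odds ratios for each risk factor
```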

  2. Optical phased array configuration for an extremely large telescope.

    PubMed

    Meinel, Aden Baker; Meinel, Marjorie Pettit

    2004-01-20

    Extremely large telescopes are currently under consideration by several groups in several countries. Extrapolation of current technology up to 30 m indicates a cost of over $1 billion. Innovative concepts are being explored to find significant cost reductions. We explore the concept of an Optical Phased Array (OPA) telescope. Each element of the OPA is a separate Cassegrain telescope. Collimated beams from the array are sent via an associated set of delay lines to a central beam combiner. This array of small telescope elements offers the possibility of starting with a low-cost array of a few rings of elements, adding structure and additional Cassegrain elements until the desired telescope diameter is attained. We address the salient features of such an extremely large telescope and its cost elements relative to more conventional options.

  3. Very Low Mass Stars with Extremely Low Metallicity in the Milky Way's Halo

    NASA Astrophysics Data System (ADS)

    Aoki, Wako; Beers, Timothy C.; Takuma, Suda; Honda, Satoshi; Lee, Young Sun

    2015-08-01

    Large surveys and follow-up spectroscopic studies in the past few decades have been providing chemical abundance data for a growing number of very metal-poor ([Fe/H] <-2) stars. Most of them are red giants or main-sequence turn-off stars having masses near 0.8 solar masses. Lower mass stars with extremely low metallicity ([Fe/H] <-3) have yet to be well explored. Our high-resolution spectroscopic study of very metal-poor stars found with SDSS has identified four cool main-sequence stars with [Fe/H] <-2.5 among 137 objects (Aoki et al. 2013, AJ, 145, 13). The effective temperatures of these stars are 4500-5000 K, corresponding to a mass of around 0.5 solar masses. Our standard analysis of the high-resolution spectra, based on 1D-LTE model atmospheres, has yielded self-consistent chemical abundances for these objects, assuming small values of micro-turbulent velocities compared with giants and turn-off stars. The low temperature of the atmospheres of these objects enables us to measure their detailed chemical abundances. Interestingly, two of the four stars have extreme chemical abundance patterns: one has the largest excesses of heavy neutron-capture elements associated with the r-process abundance pattern known to date (Aoki et al. 2010, ApJL 723, L201), and the other exhibits low abundances of the alpha-elements and odd-Z elements, suggested to be the signatures of the yields of very massive stars ( >100 solar masses; Aoki et al. 2014, Science 345, 912). Although the sample size is still small, these results indicate the potential of very low-mass stars as probes to study the early stages of the Milky Way's halo formation.

  4. EBUS-TBNA Provides Highest RNA Yield for Multiple Biomarker Testing from Routinely Obtained Small Biopsies in Non-Small Cell Lung Cancer Patients - A Comparative Study of Three Different Minimal Invasive Sampling Methods

    PubMed Central

    Schmid-Bindert, Gerald; Wang, Yongsheng; Jiang, Hongbin; Sun, Hui; Henzler, Thomas; Wang, Hao; Pilz, Lothar R.; Ren, Shengxiang; Zhou, Caicun

    2013-01-01

    Background Multiple biomarker testing is necessary to facilitate individualized treatment of lung cancer patients. More than 80% of lung cancers are diagnosed based on very small tumor samples. Often there is not enough tissue for molecular analysis. We compared three minimal invasive sampling methods with respect to RNA quantity for molecular testing. Methods 106 small biopsies were prospectively collected by three different methods: forceps biopsy, endobronchial ultrasound (EBUS) guided transbronchial needle aspiration (TBNA), and CT-guided core biopsy. Samples were split into two halves. One part was formalin fixed and paraffin embedded for standard pathological evaluation. The other part was put in RNAlater for immediate RNA/DNA extraction. If the pathologist confirmed the diagnosis of non-small cell lung cancer (NSCLC), the following molecular markers were tested: EGFR mutation, ERCC1, RRM1 and BRCA1. Results Overall, RNA extraction was possible in 101 out of 106 patients (95.3%). We found 49% adenocarcinomas, 38% squamous carcinomas, and 14% not otherwise specified (NOS). The highest RNA yield came from endobronchial ultrasound guided needle aspiration, which was significantly higher than bronchoscopy (37.74±41.09 vs. 13.74±15.53 ng respectively, P = 0.005) and numerically higher than CT-core biopsy (37.74±41.09 vs. 28.72±44.27 ng respectively, P = 0.244). EGFR mutation testing was feasible in 100% of evaluable patients and its incidence was 40.8%, 7.9% and 14.3% in the adenocarcinoma, squamous carcinoma and NSCLC NOS subgroups respectively. There was no difference in the feasibility of molecular testing between the three sampling methods, with feasibility rates for ERCC1, RRM1 and BRCA1 of 91%, 87% and 81% respectively. Conclusion All three methods can provide sufficient tumor material for multiple biomarkers testing from routinely obtained small biopsies in lung cancer patients. In our study EBUS guided needle aspiration provided the highest amount of

  5. Field Exploration and Life Detection Sampling Through Planetary Analogue Sampling (FELDSPAR).

    NASA Technical Reports Server (NTRS)

    Stockton, A.; Amador, E. S.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Gentry, D. M.; Kirby, J.; Jacobsen, M.; hide

    2017-01-01

    Exploration missions to Mars rely on rovers to perform analyses over small sampling areas; however, landing sites for these missions are selected based on large-scale, low-resolution remote data. The use of Earth analogue environments to estimate the multi-scale spatial distributions of key signatures of habitability can help ensure mission science goals are met. A main goal of FELDSPAR is to conduct field operations analogous to Mars sample return in its science, operations, and technology from landing site selection, to in-field sampling location selection, remote or stand-off analysis, in situ analysis, and home laboratory analysis. Lava fields and volcanic regions are relevant analogues to Martian landscapes due to desiccation, low nutrient availability, and temperature extremes. Operationally, many Icelandic lava fields are remote enough to require that field expeditions address several sampling constraints that are experienced in robotic exploration, including in situ and sample return missions. The Fimmvörðuháls lava field was formed by a basaltic effusive eruption associated with the 2010 Eyjafjallajökull eruption. Mælifellssandur is a recently deglaciated plain to the north of the Mýrdalsjökull glacier. Holuhraun was formed by a 2014 fissure eruption just north of the large Vatnajökull glacier. Dyngjusandur is an alluvial plain apparently kept barren by repeated mechanical weathering. Informed by our 2013 expedition, we collected samples in nested triangular grids every decade from the 10 cm scale to the 1 km scale (as permitted by the size of the site). Satellite imagery is available for older sites, and for Mælifellssandur, Holuhraun, and Dyngjusandur we obtained overhead imagery at 1 m to 200 m elevation. PanCam-style photographs were taken in the field by sampling personnel. In-field reflectance spectroscopy was also obtained with an ASD spectrometer in Dyngjusandur. All sites chosen were 'homogeneous' in apparent color, morphology, moisture, grain size, and

  6. Impact of the extreme 2009 wildfire in Victoria on the wettability of naturally highly water repellent soils

    NASA Astrophysics Data System (ADS)

    Doerr, Stefan H.; Shakesby, Richard A.; Sheridan, Gary J.; Lane, Patrick Nj; Smith, Hugh G.; Bell, Tina; Blake, William H.

    2010-05-01

    The recent catastrophic wildfires near Melbourne, which peaked on Feb. 7, 2009, burned ca 400,000 ha and caused the tragic loss of 173 people. They occurred during unprecedented extreme fire weather, in which dry northerly winds gusting up to 100 km/h coincided with the highest temperatures ever recorded in this region. These conditions, combined with the very high biomass of mature eucalypt forests, very low fuel moisture and steep slopes, generated extreme burning conditions. A rapid response project was launched under the NERC Urgency Scheme aimed at determining the effects of this extreme event on soil properties. Three replicate sites each were sampled for extremely high burn severity, high burn severity and unburnt control terrain, within mature mixed-species eucalypt forests near Marysville in April 2009. Ash and surface soil (0-2.5 cm and 2.5-5 cm) were collected at 20 sample grid points at each site. Here we report on outcomes from Water Drop Penetration Time (WDPT) tests carried out on soil samples to determine the impact of this extreme event on the wettability of a naturally highly water repellent soil. Field assessment suggested that the impact of this extreme wildfire on the soil was less than might be supposed given the extreme burn severity (indicated by the complete elimination of the ground vegetation). This was confirmed by the laboratory results. No major difference in WDPT was detected between (i) burned and control samples, and (ii) surface and subsurface WDPT patterns, indicating that soil temperatures in the top 0-2.5 cm did not exceed ~200 °C. Seedling germination in burned soil was reduced by at least 2/3 compared to the control samples; however, this reduction is indicative of only a modest heat input into the soil. The limited heat input into the soil stands in stark contrast to the extreme burn severity (based on vegetation destruction parameters). We speculate that limited soil heating resulted perhaps from the unusually

  7. The Gulliver sample return mission to Deimos

    NASA Astrophysics Data System (ADS)

    Britt, D. T.; Robinson, M.; Gulliver Team

    The Martian moon Deimos presents a unique opportunity for a sample return mission. Deimos is spectrally analogous to type D asteroids, which are thought to be composed of highly primitive carbonaceous material that originated in the outer asteroid belt. It also is in orbit around Mars and has been accumulating material ejected from the Martian surface ever since the earliest periods of Martian history, over 4.4 Gyr ago. There are a number of factors that make sample return from Deimos extremely attractive. It is Better: Deimos is a repository for two kinds of extremely significant and scientifically exciting ancient samples: (1) Primitive spectral D-type material that may have accreted in the outer asteroid belt and Trojan swarm. This material samples the composition of the solar nebula well outside the zone of the terrestrial planets and provides a direct sample of primitive material so common past 3 AU but so uncommon in the meteorite collection. (2) Ancient Mars, which could include the full range of Martian crustal and upper mantle material from the early differentiation and crust-forming epoch as well as samples from the era of high volatile flux, thick atmosphere, and possible surface water. The Martian material on Deimos would be dominated by ejecta from the ancient crust of Mars, delivered during the Noachian Period of basin-forming impacts and heavy bombardment. It is Closer: Compared to other primitive D-type asteroids, Deimos is by far the most accessible. Because of its orbit around Mars, Deimos is far closer than any other D asteroid. It is Safer: Deimos is also by far the safest small body for sample return yet imaged. It is an order of magnitude less rocky than Eros, and the NEAR-Shoemaker mission succeeded in landing on Eros with a spacecraft not designed for landing and proximity maneuvering. Because of Viking imagery we already know a great deal about the surface roughness of Deimos. It is known to be very smooth and have moderate topography and

  8. A passive guard for low thermal conductivity measurement of small samples by the hot plate method

    NASA Astrophysics Data System (ADS)

    Jannot, Yves; Degiovanni, Alain; Grigorova-Moutiers, Veneta; Godefroy, Justine

    2017-01-01

    Hot plate methods under steady state conditions are based on a 1D model to estimate the thermal conductivity, using measurements of the temperatures T0 and T1 of the two sides of the sample and of the heat flux crossing it. To be consistent with the hypothesis of a 1D heat flux, either a guarded hot plate apparatus is used, or the temperature is measured at the centre of the sample. On the one hand, the latter method can be used only if the ratio of thickness to width of the sample is sufficiently low; on the other hand, the guarded hot plate method requires large-width samples (typical cross section of 0.6 × 0.6 m2). Neither method can therefore be used for small-width samples. The method presented in this paper is based on an optimal choice of the temperatures T0 and T1 relative to the ambient temperature Ta, enabling the estimation of the thermal conductivity with a centered hot plate method by applying the 1D heat flux model. It will be shown that these optimal values do not depend on the size or on the thermal conductivity of the samples (in the range 0.015-0.2 W m-1 K-1), but only on Ta. The experimental results obtained validate the method for several reference samples for values of the ratio thickness/width up to 0.3, thus enabling the measurement of the thermal conductivity of samples having a small cross-section, down to 0.045 × 0.045 m2.
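
    Under the 1D steady-state model invoked above, the conductivity follows directly from the measured flux density and the temperature drop, lambda = phi * e / (T0 - T1). A worked sketch with illustrative numbers (not the paper's measurements):

```python
# 1-D steady-state conduction behind hot plate methods: conductivity
# from heat flux density, sample thickness, and temperature drop.

def conductivity(heat_flux, thickness, t_hot, t_cold):
    """lambda = phi * e / (T0 - T1), all SI units."""
    return heat_flux * thickness / (t_hot - t_cold)

phi = 30.0               # W/m^2, flux density crossing the sample (mock value)
e = 0.010                # m, sample thickness
t0, t1 = 305.0, 295.0    # K, plate temperatures on either side
print(conductivity(phi, e, t0, t1))  # 0.03 W m-1 K-1, within the paper's range
```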

  9. Multiple sessions of transcranial direct current stimulation and upper extremity rehabilitation in stroke: A review and meta-analysis.

    PubMed

    Tedesco Triccas, L; Burridge, J H; Hughes, A M; Pickering, R M; Desikan, M; Rothwell, J C; Verheyden, G

    2016-01-01

    To systematically review the methodology, in particular treatment options and outcomes, and the effect of multiple sessions of transcranial direct current stimulation (tDCS) combined with rehabilitation programmes on upper extremity recovery post stroke. A search was conducted for randomised controlled trials involving tDCS and rehabilitation for the upper extremity in stroke. The quality of included studies was analysed using the Modified Downs and Black form. The extent of, and effect of, variation in treatment parameters such as anodal, cathodal and bi-hemispheric tDCS on upper extremity outcome measures of impairment and activity were analysed using meta-analysis. Nine studies (371 participants with acute, sub-acute and chronic stroke) were included. Different methodologies of tDCS and upper extremity intervention, outcome measures and timing of assessments were identified. Real tDCS combined with rehabilitation had small, non-significant effects of +0.11 (p=0.44) and +0.24 (p=0.11) on upper extremity impairments and activities post-intervention, respectively. Various tDCS methods have been used in stroke rehabilitation. The evidence so far is not statistically significant, but is suggestive of, at best, a small beneficial effect on upper extremity impairment. Future research should focus on which patients and rehabilitation programmes are likely to respond to different tDCS regimes. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
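
    Pooled effects like those quoted above come from inverse-variance weighting of study-level standardized mean differences. A minimal fixed-effect sketch, with made-up study values, follows; real meta-analyses (including this one) typically also report heterogeneity and may use random-effects weights.

```python
# Fixed-effect (inverse-variance) pooling of standardized mean
# differences; the per-study values below are fabricated for illustration.
import numpy as np

smd = np.array([0.30, -0.05, 0.18, 0.02])   # hypothetical study effects
se = np.array([0.25, 0.30, 0.22, 0.28])     # hypothetical standard errors

w = 1 / se**2                               # inverse-variance weights
pooled = np.sum(w * smd) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(pooled, ci)
```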

  10. 7 CFR 201.42 - Small containers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Small containers. 201.42 Section 201.42 Agriculture... REGULATIONS Sampling in the Administration of the Act § 201.42 Small containers. In sampling seed in small containers that it is not practical to sample as required in § 201.41, a portion of one unopened container or...

  11. Probabilistic forecasting of extreme weather events based on extreme value theory

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert

    2016-04-01

    Extreme events in weather and climate such as high wind gusts, heavy precipitation or extreme temperatures are commonly associated with high impacts on both environment and society. Forecasting extreme weather events is difficult, and very high-resolution models are needed to describe extreme weather phenomena explicitly. A prediction system for such events should therefore preferably be probabilistic in nature. Probabilistic forecasts and state estimations are nowadays common in the numerical weather prediction community. In this work, we develop a new probabilistic framework based on extreme value theory that aims to provide early warnings up to several days in advance. We consider pairs (X, Y) of extreme events, where X represents a deterministic forecast and Y the observation variable (for instance wind speed), and the combined events in which Y exceeds a high threshold y while the corresponding forecast X exceeds a high forecast threshold x. More specifically, two problems are addressed: (1) Given a high forecast X = x_0, what is the probability that Y > y? In other words, provide inference on the conditional probability Pr{Y > y | X = x_0}. (2) Given a probabilistic model for Problem 1, what is the impact on the verification analysis of extreme events? These problems can be solved with bivariate extremes (Coles, 2001) and the verification analysis in Ferro (2007). We apply the Ramos and Ledford (2009) parametric model for bivariate tail estimation of the pair (X, Y). The model accommodates different types of extremal dependence and asymmetry within a parsimonious representation. Results are presented using the ensemble reforecast system of the European Centre for Medium-Range Weather Forecasts (Hagedorn, 2008). Coles, S. (2001) An Introduction to Statistical Modeling of Extreme Values. Springer-Verlag. Ferro, C.A.T. (2007) A probability model for verifying deterministic
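
    Before any parametric tail model is fitted, the quantity in Problem 1 has a simple empirical counterpart: the fraction of observations exceeding y among cases whose forecast falls near x_0. A sketch on synthetic forecast/observation pairs (all values illustrative, not the paper's method):

```python
# Empirical estimate of Pr{Y > y | X ~ x0} from forecast/observation
# pairs, before fitting a bivariate tail model such as Ramos & Ledford.
import numpy as np

rng = np.random.default_rng(2)
x = rng.gamma(2.0, 4.0, 20000)              # mock deterministic forecasts
y = 0.7 * x + rng.gamma(2.0, 2.0, 20000)    # mock correlated observations

def cond_exceed(x, y, x0, y_thresh, width=1.0):
    sel = np.abs(x - x0) < width            # cases with forecast near x0
    return (y[sel] > y_thresh).mean()

print(cond_exceed(x, y, x0=25.0, y_thresh=np.percentile(y, 95)))
```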

  12. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods prove problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density, and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of the individuals' ages (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increase with age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
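
    Methods of this family regress age on osteon population density (OPD), the count of intact plus fragmentary osteons per unit cortical area. The sketch below shows only the shape of such a predictor; the intercept and slope are placeholders, not the published Stout & Paine (1992) coefficients.

```python
# Shape of a histological age-estimation formula: age predicted from
# osteon population density. Coefficients are PLACEHOLDERS, not the
# published Stout & Paine (1992) values.
def opd(intact_osteons, fragmentary_osteons, cortical_area_mm2):
    return (intact_osteons + fragmentary_osteons) / cortical_area_mm2

def estimated_age(opd_value, intercept=10.0, slope=2.0):  # placeholder values
    return intercept + slope * opd_value

print(estimated_age(opd(120, 45, 18.0)))
```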

  13. The Climatology of Extreme Surge-Producing Extratropical Cyclones in Observations and Models

    NASA Astrophysics Data System (ADS)

    Catalano, A. J.; Broccoli, A. J.; Kapnick, S. B.

    2016-12-01

    Extreme coastal storms devastate heavily populated areas around the world by producing powerful winds that can create a large storm surge. Both tropical and extratropical cyclones (ETCs) occur over the northwestern Atlantic Ocean, and the risks associated with ETCs can be just as severe as those associated with tropical storms (e.g. high winds, storm surge). At The Battery in New York City, 17 of the 20 largest storm surge events were a consequence of ETCs, which are more prevalent than tropical cyclones in the northeast region of the United States. Therefore, we analyze the climatology of ETCs that are capable of producing a large storm surge along the northeastern coast of the United States. For a historical analysis, water level data were collected from National Oceanic and Atmospheric Administration (NOAA) tide gauges at three separate locations (Sewell's Point, VA; The Battery, NY; and Boston, MA). We perform a k-means cluster analysis of sea level pressure from the ECMWF 20th Century Reanalysis dataset (ERA-20C) to explore the natural sets of observed storms with similar characteristics. We then composite cluster results with features of atmospheric circulation to observe the influence of interannual and multidecadal variability such as the North Atlantic Oscillation. Since observational records contain a small number of well-documented ETCs, the capability of a high-resolution coupled climate model to realistically simulate such extreme coastal storms will also be assessed. Global climate models provide a means of simulating a much larger sample of extreme events, allowing for better resolution of the tail of the distribution. We employ a tracking algorithm to identify ETCs in a multi-century simulation under present-day conditions. Quantitative comparisons of cyclolysis, cyclogenesis, and cyclone densities of simulated ETCs and storms from recent history (using reanalysis products) are conducted.
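
    The k-means step described above amounts to flattening each storm's sea-level-pressure map into a vector and clustering the vectors into a few recurrent patterns. A hedged sketch on mock data follows; array shapes and preprocessing are illustrative, not the ERA-20C workflow itself.

```python
# Cluster sea-level-pressure fields sampled around surge events into a
# few recurrent storm patterns. Mock data; shapes are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_events, n_lat, n_lon = 200, 40, 60
slp = rng.normal(101325, 800, size=(n_events, n_lat, n_lon))  # mock SLP maps

X = slp.reshape(n_events, -1)             # one flattened map per surge event
X = (X - X.mean(axis=0)) / X.std(axis=0)  # anomaly-standardise each grid point
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))            # events per storm-pattern cluster
```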

  14. Quantifying the relationship between extreme air pollution events and extreme weather events

    NASA Astrophysics Data System (ADS)

    Zhang, Henian; Wang, Yuhang; Park, Tae-Won; Deng, Yi

    2017-05-01

    Extreme weather events can strongly affect surface air quality, which has become a major environmental factor affecting human health. Here, we examined the relationship between extreme ozone and PM2.5 (particulate matter with an aerodynamic diameter less than 2.5 μm) events and representative meteorological parameters such as daily maximum temperature (Tmax), minimum relative humidity (RHmin), and minimum wind speed (Vmin), using the location-specific 95th or 5th percentile threshold derived from historical reanalysis data (30 years for ozone and 10 years for PM2.5). We found that ozone and PM2.5 extremes were decreasing over the years, reflecting EPA's tightened standards and effort on reducing the corresponding precursor emissions. Annual ozone and PM2.5 extreme days were highly correlated with Tmax and RHmin, especially in the eastern U.S. They were positively (negatively) correlated with Vmin in urban (rural and suburban) stations. The overlapping ratios of ozone extreme days with Tmax were fairly constant, about 32%, and tended to be high in fall and low in winter. Ozone extreme days were most sensitive to Tmax, then RHmin, and least sensitive to Vmin. The majority of ozone extremes occurred when Tmax was between 300 K and 320 K, RHmin was less than 40%, and Vmin was less than 3 m/s. The number of annual extreme PM2.5 days was highly positively correlated with the number of extreme RHmin/Tmax days, with the PM2.5/RHmin correlation highest in urban and suburban regions and the PM2.5/Tmax correlation highest in rural areas. Tmax has more impact on PM2.5 extremes over the eastern U.S. Extreme PM2.5 days were more likely to occur at low RH conditions in the central and southeastern U.S., especially during spring time, and at high RH conditions in the northern U.S. and the Great Plains. Most extreme PM2.5 events occurred when Tmax was between 300 K and 320 K and RHmin was between 10% and 50%. Extreme PM2.5 days usually occurred when
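
    The overlap statistics above can be formed by flagging extreme days with percentile thresholds and counting coincidences. A sketch on mock series (illustrative values; the study uses station-specific multi-year reanalysis records):

```python
# Fraction of ozone-extreme days that are also Tmax-extreme days,
# with extremes defined by location-specific 95th percentiles.
import numpy as np

rng = np.random.default_rng(4)
tmax = rng.normal(300, 8, 3650)                             # mock daily Tmax (K)
ozone = 40 + 2.5 * (tmax - 300) + rng.normal(0, 15, 3650)   # mock ozone (ppb)

o3_ext = ozone > np.percentile(ozone, 95)   # extreme ozone days
t_ext = tmax > np.percentile(tmax, 95)      # extreme Tmax days
overlap = (o3_ext & t_ext).sum() / o3_ext.sum()
print(overlap)   # cf. the ~32% overlap ratio reported above
```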

  15. Small Flare and a Coronal Mass Ejection

    NASA Image and Video Library

    2018-01-31

    The sun shot out a small coronal mass ejection that was also associated with a small flare (Jan. 22, 2018). The video, which covers about 5 hours, shows the burst of plasma as the magnetic loops break apart. Immediately the magnetic fields brighten intensely and begin to reorganize themselves in coils above the active region. The images were taken in a wavelength of extreme ultraviolet light. Videos are available at https://photojournal.jpl.nasa.gov/catalog/PIA22184

  16. Antibiotic Resistance in Animal and Environmental Samples Associated with Small-Scale Poultry Farming in Northwestern Ecuador

    PubMed Central

    Braykov, Nikolay P.; Eisenberg, Joseph N. S.; Grossman, Marissa; Zhang, Lixin; Vasco, Karla; Cevallos, William; Muñoz, Diana; Acevedo, Andrés; Moser, Kara A.; Marrs, Carl F.; Trostle, James; Trueba, Gabriel

    2016-01-01

    ABSTRACT The effects of animal agriculture on the spread of antibiotic resistance (AR) are cross-cutting and thus require a multidisciplinary perspective. Here we use ecological, epidemiological, and ethnographic methods to examine populations of Escherichia coli circulating in the production poultry farming environment versus the domestic environment in rural Ecuador, where small-scale poultry production employing nontherapeutic antibiotics is increasingly common. We sampled 262 “production birds” (commercially raised broiler chickens and laying hens) and 455 “household birds” (raised for domestic use) and household and coop environmental samples from 17 villages between 2010 and 2013. We analyzed data on zones of inhibition from Kirby-Bauer tests, rather than established clinical breakpoints for AR, to distinguish between populations of organisms. We saw significantly higher levels of AR in bacteria from production versus household birds; resistance to at least one of amoxicillin-clavulanate, cephalothin, cefotaxime, and gentamicin was found in 52.8% of production bird isolates and 16% of household ones. A strain jointly resistant to the 4 drugs was exclusive to a subset of isolates from production birds (7.6%) and coop surfaces (6.5%) and was associated with a particular purchase site. The prevalence of AR in production birds declined with bird age (P < 0.01 for all antibiotics tested except tetracycline, sulfisoxazole, and trimethoprim-sulfamethoxazole). Farming status did not impact AR in domestic environments at the household or village level. Our results suggest that AR associated with small-scale poultry farming is present in the immediate production environment and likely originates from sources outside the study area. These outside sources might be a better place to target control efforts than local management practices. IMPORTANCE In developing countries, small-scale poultry farming employing antibiotics as growth promoters is being advanced as an

  17. Antibiotic Resistance in Animal and Environmental Samples Associated with Small-Scale Poultry Farming in Northwestern Ecuador.

    PubMed

    Braykov, Nikolay P; Eisenberg, Joseph N S; Grossman, Marissa; Zhang, Lixin; Vasco, Karla; Cevallos, William; Muñoz, Diana; Acevedo, Andrés; Moser, Kara A; Marrs, Carl F; Foxman, Betsy; Trostle, James; Trueba, Gabriel; Levy, Karen

    2016-01-01

    The effects of animal agriculture on the spread of antibiotic resistance (AR) are cross-cutting and thus require a multidisciplinary perspective. Here we use ecological, epidemiological, and ethnographic methods to examine populations of Escherichia coli circulating in the production poultry farming environment versus the domestic environment in rural Ecuador, where small-scale poultry production employing nontherapeutic antibiotics is increasingly common. We sampled 262 "production birds" (commercially raised broiler chickens and laying hens) and 455 "household birds" (raised for domestic use) and household and coop environmental samples from 17 villages between 2010 and 2013. We analyzed data on zones of inhibition from Kirby-Bauer tests, rather than established clinical breakpoints for AR, to distinguish between populations of organisms. We saw significantly higher levels of AR in bacteria from production versus household birds; resistance to at least one of amoxicillin-clavulanate, cephalothin, cefotaxime, and gentamicin was found in 52.8% of production bird isolates and 16% of household ones. A strain jointly resistant to the 4 drugs was exclusive to a subset of isolates from production birds (7.6%) and coop surfaces (6.5%) and was associated with a particular purchase site. The prevalence of AR in production birds declined with bird age (P < 0.01 for all antibiotics tested except tetracycline, sulfisoxazole, and trimethoprim-sulfamethoxazole). Farming status did not impact AR in domestic environments at the household or village level. Our results suggest that AR associated with small-scale poultry farming is present in the immediate production environment and likely originates from sources outside the study area. These outside sources might be a better place to target control efforts than local management practices. IMPORTANCE In developing countries, small-scale poultry farming employing antibiotics as growth promoters is being advanced as an inexpensive source of

  18. Lower Extremity Reconstruction with Free Gracilis Flaps

    PubMed Central

    Nicoson, Michael C; Parikh, Rajiv P; Tung, Thomas H

    2017-01-01

    Background There have been significant advancements in lower extremity reconstruction over the last several decades, and the plastic surgeon's armamentarium has grown to include free muscle and fasciocutaneous flaps along with local perforator and propeller flaps. While we have found a use for a variety of techniques for lower extremity reconstruction, the free gracilis has been our workhorse flap due to the ease of harvest, reliability, and low donor site morbidity. Methods This is a retrospective review of a single surgeon's series of free gracilis flaps utilized for lower extremity reconstruction. Demographic information, comorbidities, outcomes, and secondary procedures were analyzed. Results We identified 24 free gracilis flaps. The duration from injury to free flap coverage was 7 days or less in 6 patients, 8–30 days in 11 patients, 31–90 days in 4 patients, and >90 days in 3 patients. There were 22 (92%) successful flaps and an overall limb salvage rate of 92%. There was one partial flap loss. Two flaps underwent incision and drainage in the operating room for infection. Two patients developed donor site hematomas. Four patients underwent secondary procedures for contouring. Our subset of pediatric patients had 100% flap survival and no secondary procedures at a mean 30-month follow-up. Conclusions This study demonstrates the utility of the free gracilis flap in reconstruction of small- to medium-sized defects of the lower extremity. This flap has a high success rate and low donor site morbidity. Atrophy of the denervated muscle over time allows for good shoe fit, often obviating the need for secondary contouring procedures. PMID:28024305

  19. Advanced sampling techniques for hand-held FT-IR instrumentation

    NASA Astrophysics Data System (ADS)

    Arnó, Josep; Frunzi, Michael; Weber, Chris; Levy, Dustin

    2013-05-01

    FT-IR spectroscopy is the technology of choice to identify solid and liquid phase unknown samples. The challenging ConOps in emergency response and military field applications require a significant redesign of the stationary FT-IR bench-top instruments typically used in laboratories. Specifically, field portable units require high levels of resistance against mechanical shock and chemical attack, ease of use in restrictive gear, extreme reliability, quick and easy interpretation of results, and reduced size. In the last 20 years, FT-IR instruments have been re-engineered to fit in small suitcases for field portable use and recently further miniaturized for handheld operation. This article introduces the HazMatID™ Elite, a FT-IR instrument designed to balance the portability advantages of a handheld device with the performance challenges associated with miniaturization. In this paper, special focus will be given to the HazMatID Elite's sampling interfaces optimized to collect and interrogate different types of samples: accumulated material using the on-board ATR press, dispersed powders using the ClearSampler™ tool, and the touch-to-sample sensor for direct liquid sampling. The application of the novel sample swipe accessory (ClearSampler) to collect material from surfaces will be discussed in some detail. The accessory was tested and evaluated for the detection of explosive residues before and after detonation. Experimental results derived from these investigations will be described in an effort to outline the advantages of this technology over existing sampling methods.

  20. Nonstationarity in timing of extreme precipitation across China and impact of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Gu, Xihui; Zhang, Qiang; Singh, Vijay P.; Shi, Peijun

    2017-02-01

    This study examines the seasonality and nonstationarity in the timing of extreme precipitation obtained by annual maximum (AM) sampling and peak-over-threshold (POT) sampling techniques using circular statistics. Daily precipitation data from 728 stations with record length of at least 55 years across China were analyzed. In general, the average seasonality is dominated by the summer season (June-July-August), which is potentially related to East Asian monsoon and Indian monsoon activities. The strength of precipitation seasonality varied across China, with the highest strength in northeast, north, and central-north China, whereas the weakest seasonality was found in southeast China. There are three seasonality types: circular uniform, reflective symmetric, and asymmetric. The circular uniform type, however, was not detected at any station across China. The asymmetric distribution was observed mainly in southeast China, and the reflective symmetric distribution of precipitation extremes was identified in the remaining regions. Furthermore, a strong signal of nonstationarity in the seasonality was detected at half of the weather stations considered in the study, exhibiting a significant shift in the timing of extreme precipitation, and also significant trends in the average and strength of seasonality. Seasonal vapor flux and related delivery pathways, as well as tropical cyclones (TCs), are most probably the driving factors for the shifts or changes in the seasonality of extreme precipitation across China. The timing of precipitation extremes is closely related to seasonal shifts of floods and droughts, and matters greatly for agricultural irrigation and water resources management. This study sheds new light on nonstationarity in the timing of precipitation extremes, in contrast to existing studies that focus on their magnitude and intensity.
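
    The core circular-statistics step can be sketched directly: each event's day of year is mapped to an angle, and the mean resultant vector gives the average timing (its direction) and the strength of seasonality (its length: 0 for uniform timing, 1 when all events fall on one day). A minimal illustration with mock dates, not the study's full analysis:

```python
# Mean timing and seasonality strength of extreme-event dates via
# circular statistics. Event dates below are mock midsummer values.
import numpy as np

def timing_stats(days_of_year, year_len=365.25):
    theta = 2 * np.pi * np.asarray(days_of_year) / year_len
    c, s = np.cos(theta).mean(), np.sin(theta).mean()
    mean_day = (np.arctan2(s, c) % (2 * np.pi)) * year_len / (2 * np.pi)
    strength = np.hypot(c, s)     # resultant length R in [0, 1]
    return mean_day, strength

events = [182, 190, 175, 201, 168, 215]
print(timing_stats(events))
```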

  1. The Relation Among the Likelihood Ratio-, Wald-, and Lagrange Multiplier Tests and Their Applicability to Small Samples,

    DTIC Science & Technology

    1982-04-01

    S. (1979), "Conflict Among Criteria for Testing Hypothesis: Extension and Comments," Econometrica, 47, 203-207 Breusch , T. S. and Pagan , A. R. (1980...Savin, N. E. (1977), "Conflict Among Criteria for Testing Hypothesis in the Multivariate Linear Regression Model," Econometrica, 45, 1263-1278 Breusch , T...VNCLASSIFIED RAND//-6756NL U l~ I- THE RELATION AMONG THE LIKELIHOOD RATIO-, WALD-, AND LAGRANGE MULTIPLIER TESTS AND THEIR APPLICABILITY TO SMALL SAMPLES

  2. Cognitive Outcomes for Extremely Preterm/Extremely Low Birth Weight Children in Kindergarten

    PubMed Central

    Orchinik, Leah J.; Taylor, H. Gerry; Espy, Kimberly Andrews; Minich, Nori; Klein, Nancy; Sheffield, Tiffany; Hack, Maureen

    2012-01-01

    Our objectives were to examine cognitive outcomes for extremely preterm/extremely low birth weight (EPT/ELBW, gestational age <28 weeks and/or birth weight <1000 g) children in kindergarten and the associations of these outcomes with neonatal factors, early childhood neurodevelopmental impairment, and socioeconomic status (SES). The sample comprised a hospital-based 2001-2003 birth cohort of 148 EPT/ELBW children (mean birth weight 818 g; mean gestational age 26 weeks) and a comparison group of 111 term-born normal birth weight (NBW) classmate controls. Controlling for background factors, the EPT/ELBW group had pervasive deficits relative to the NBW group on a comprehensive test battery, with rates of cognitive deficits that were 3 to 6 times higher in the EPT/ELBW group. Deficits on a measure of response inhibition were found in 48% versus 10%, OR (95% CI) = 7.32 (3.32, 16.16), p <.001. Deficits on measures of executive function and motor and perceptual-motor abilities were found even when controlling for acquired verbal knowledge. Neonatal risk factors, early neurodevelopmental impairment, and lower SES were associated with higher rates of deficits within the EPT/ELBW group. The findings document both global and selective cognitive deficits in EPT/ELBW children at school entry and justify efforts at early identification and intervention. PMID:21923973

  3. The Impact of Air-Sea Interactions on the Representation of Tropical Precipitation Extremes

    NASA Astrophysics Data System (ADS)

    Hirons, L. C.; Klingaman, N. P.; Woolnough, S. J.

    2018-02-01

    The impacts of air-sea interactions on the representation of tropical precipitation extremes are investigated using an atmosphere-ocean-mixed-layer coupled model. The coupled model is compared to two atmosphere-only simulations driven by the coupled-model sea-surface temperatures (SSTs): one with 31-day running means (31d), the other with a repeating mean annual cycle. This allows separation of the effects of interannual SST variability from those of coupled feedbacks on shorter timescales. Crucially, all simulations have a consistent mean state with very small SST biases against present-day climatology. 31d overestimates the frequency, intensity, and persistence of extreme tropical precipitation relative to the coupled model, likely due to excessive SST-forced precipitation variability. This implies that atmosphere-only attribution and time-slice experiments may overestimate the strength and duration of precipitation extremes. In the coupled model, air-sea feedbacks damp extreme precipitation, through negative local thermodynamic feedbacks between convection, surface fluxes, and SST.
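
    The 31d forcing can be pictured as follows: a 31-day running mean strips subseasonal SST variability (and hence short-timescale coupled feedbacks) from the forcing while retaining the annual cycle and interannual anomalies. A toy illustration with mock data, not the model's preprocessing:

```python
# 31-day running mean of a mock daily SST series: smoothing damps
# subseasonal variability while keeping the seasonal cycle.
import numpy as np

rng = np.random.default_rng(5)
days = np.arange(3650)
sst = 300 + 2 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 0.3, days.size)

kernel = np.ones(31) / 31
sst_31d = np.convolve(sst, kernel, mode="same")  # 31-day running mean
print(sst.std(), sst_31d.std())                  # smoothed series varies less
```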

  4. An emerging population of BL Lacs with extreme properties: towards a class of EBL and cosmic magnetic field probes?

    NASA Astrophysics Data System (ADS)

    Bonnoli, G.; Tavecchio, F.; Ghisellini, G.; Sbarrato, T.

    2015-07-01

    High-energy observations of extreme BL Lac objects, such as 1ES 0229+200 or 1ES 0347-121, have recently attracted interest both for blazar and jet physics and for their implications for estimates of the extragalactic background light and the intergalactic magnetic field. However, the number of these extreme highly peaked BL Lac objects (EHBL) is still rather small. Aiming to increase their number, we selected a group of EHBL candidates starting from the BL Lac sample of Plotkin et al. (2011), considering those undetected (or only barely detected) by the Large Area Telescope onboard Fermi and characterized by a high X-ray versus radio flux ratio. We assembled the multiwavelength spectral energy distributions of the resulting nine sources, drawing on publicly available archival observations performed by the Swift, GALEX, and Fermi satellites, and confirmed their nature. Through a simple one-zone synchrotron self-Compton model we estimate the expected very high energy flux, finding that in the majority of cases it is within reach of the present generation of Cherenkov arrays or of the forthcoming Cherenkov Telescope Array.

  5. Thermal Evaluation of Fiber Bragg Gratings at Extreme Temperatures

    NASA Technical Reports Server (NTRS)

    Juergens, Jeffrey; Adamovsky, Grigory; Bhatt, Ramakrishna; Morscher, Gregory; Floyd, Bertram

    2005-01-01

    The development of integrated fiber optic sensors for use in aerospace health monitoring systems demands that the sensors be able to perform in extreme environments. In order to use fiber optic sensors effectively in an extreme environment, one must have a thorough understanding of the sensor's capabilities, limitations, and performance under extreme environmental conditions. This paper reports on our current sensor evaluation examining the performance of freestanding fiber Bragg gratings (FBGs) at extreme temperatures. While the ability of FBGs to survive at extreme temperatures has been established, their performance and long-term survivability are not well documented. At extreme temperatures the grating structure would be expected to dissipate, degrading the sensor's performance and eventually ceasing to return a detectable signal. The fiber jacket will dissipate, leaving a brittle, unprotected fiber. For FBGs to be used in aerospace systems, their performance and limitations need to be thoroughly understood at extreme temperatures. As the limits of FBG performance are pushed, the long-term survivability and performance of the sensor come into question. We examine not only the ability of FBGs to survive extreme temperatures but also their performance over many thermal cycles. This paper reports test results on the thermal cycling of commercially available FBGs, at temperatures up to 1000 °C, as seen in aerospace applications. Additionally, this paper reports on the performance of commercially available FBGs held at 1000 °C for hundreds of hours. Throughout the evaluation process, various parameters of FBG performance were monitored and recorded. Several test samples were subjected to identical test conditions to allow for statistical analysis of the data. Test procedures, calibrations, referencing techniques, performance data, and interpretations and explanations of results are presented in the paper along with directions for

  6. The Extreme Ultraviolet Explorer Mission

    NASA Technical Reports Server (NTRS)

    Bowyer, S.; Malina, R. F.

    1991-01-01

    The Extreme Ultraviolet Explorer (EUVE) mission, currently scheduled for launch in September 1991, is described. The primary purpose of the mission is to survey the celestial sphere for astronomical sources of extreme ultraviolet (EUV) radiation with the use of three EUV telescopes, each sensitive to a different segment of the EUV band. A fourth telescope is planned to perform a high-sensitivity search of a limited sample of the sky in the shortest wavelength bands. The all-sky survey is planned to be carried out in the first six months of the mission in four bands, or colors: 70-180 A, 170-250 A, 400-600 A, and 500-700 A. The second phase of the mission is devoted to spectroscopic observations of EUV sources. A high-efficiency grazing-incidence spectrometer using variable line-space gratings is planned to provide spectral data with about 1-A resolution. An end-to-end model of the mission, from a stellar source to the resulting scientific data, is presented. Hypothetical data from astronomical sources were processed through this model and are shown.

  7. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study.

    PubMed

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively, and decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.
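
    The simulation design can be caricatured in a few lines: uniform DIF means group membership shifts one item's difficulty over and above the latent trait. The MIMIC model itself is a structural equation model; the sketch below substitutes a simpler rest-score logistic regression as a stand-in detector, so it illustrates the data-generating setup rather than the authors' estimator.

```python
# Simulate uniform DIF with a skewed latent trait and a small focal
# group, then flag DIF with a rest-score logistic regression (a
# simplified stand-in for the MIMIC model, not the study's method).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_ref, n_focal = 500, 50                    # small focal group, as studied
group = np.r_[np.zeros(n_ref), np.ones(n_focal)]
theta = rng.gamma(2, 1, n_ref + n_focal)    # skewed (nonnormal) latent trait
theta = (theta - theta.mean()) / theta.std()

n_items, dif = 10, 0.8                      # uniform-DIF effect on item 0
b = np.linspace(-1, 1, n_items)
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
p[:, 0] = 1 / (1 + np.exp(-(theta - b[0] - dif * group)))
items = (rng.uniform(size=p.shape) < p).astype(int)

proxy = items[:, 1:].sum(axis=1)            # rest score as trait proxy
X = sm.add_constant(np.column_stack([proxy, group]))
fit = sm.Logit(items[:, 0], X).fit(disp=0)
print(fit.pvalues[2])                       # small p-value -> DIF flagged
```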

  8. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    PubMed Central

    Jafari, Peyman

    2017-01-01

    Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories, and the magnitude of DIF improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small. PMID:28713828

  9. Small population size of Pribilof Rock Sandpipers confirmed through distance-sampling surveys in Alaska

    USGS Publications Warehouse

    Ruthrauff, Daniel R.; Tibbitts, T. Lee; Gill, Robert E.; Dementyev, Maksim N.; Handel, Colleen M.

    2012-01-01

    The Rock Sandpiper (Calidris ptilocnemis) is endemic to the Bering Sea region and unique among shorebirds in the North Pacific for wintering at high latitudes. The nominate subspecies, the Pribilof Rock Sandpiper (C. p. ptilocnemis), breeds on four isolated islands in the Bering Sea and appears to spend the winter primarily in Cook Inlet, Alaska. We used a stratified systematic sampling design and line-transect method to survey the entire breeding range of this population during springs 2001-2003. Densities were up to four times higher on the uninhabited and more northerly St. Matthew and Hall islands than on St. Paul and St. George islands, which both have small human settlements and introduced reindeer herds. Differences in density, however, appeared to be more related to differences in vegetation than to anthropogenic factors, raising some concern for prospective effects of climate change. We estimated the total population at 19 832 birds (95% CI 17 853–21 930), ranking it among the smallest of North American shorebird populations. To determine the vulnerability of C. p. ptilocnemis to anthropogenic and stochastic environmental threats, future studies should focus on determining the amount of gene flow among island subpopulations, the full extent of the subspecies' winter range, and the current trajectory of this small population.
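
    As background for the survey method, a minimal sketch of conventional line-transect distance sampling with a half-normal detection function follows: the effective strip half-width is estimated from perpendicular detection distances, and density is n / (2 L mu). All numbers are synthetic, not the Pribilof data.

    ```python
    # Minimal line-transect density estimate with a half-normal detection
    # function (the general approach behind distance sampling).
    import numpy as np

    rng = np.random.default_rng(7)
    sigma_true = 60.0                           # metres; detection scale (assumed)
    x = np.abs(rng.normal(0, sigma_true, 240))  # perpendicular detection distances
    L = 120_000.0                               # total transect length surveyed (m)

    sigma_hat = np.sqrt(np.mean(x**2))          # half-normal MLE for sigma
    mu_hat = sigma_hat * np.sqrt(np.pi / 2)     # effective strip half-width
    density = len(x) / (2 * L * mu_hat)         # birds per square metre
    print(f"estimated density: {density * 1e6:.1f} birds per km^2")
    ```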

  10. Is climate change modifying precipitation extremes?

    NASA Astrophysics Data System (ADS)

    Montanari, Alberto; Papalexiou, Simon Michael

    2016-04-01

    The title of the present contribution is a relevant question that is frequently posed to scientists, technicians, and managers of local authorities. Although several research efforts have recently been dedicated to rainfall observation, analysis, and modelling, the above question remains essentially unanswered. The question stems from the awareness that the frequency of floods and the related socio-economic impacts are increasing in many countries, and climate change is deemed to be the main trigger. Indeed, identifying the real reasons for the observed increase in flood risk is necessary in order to plan effective mitigation and adaptation strategies. While mitigation of climate change is an extremely important issue at the global level, at small spatial scales several other triggers may interact with it, therefore requiring different mitigation strategies. Similarly, the responsibilities of administrators are radically different at local and global scales. This talk aims to provide insights and information to address the question expressed by its title. High-resolution, long-term rainfall data will be presented, along with an analysis of the frequency of their extremes and its evolution in time. The results will provide pragmatic indications for better planning of flood-risk mitigation policies.

  11. A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events

    NASA Astrophysics Data System (ADS)

    Zorzetto, E.; Marani, M.

    2017-12-01

    The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require the inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from samples of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remote-sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values from the probability distribution function (pdf) of all measured `ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short datasets of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. Accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows (i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and (ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
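
    A minimal numerical sketch of the MEVD idea follows: fit a Weibull pdf to each year's ordinary daily rainfall, then average the yearly non-exceedance probabilities raised to the number of wet days. The Weibull choice and all numbers are illustrative assumptions, not the TRMM analysis itself.

    ```python
    # MEVD sketch: F_mev(x) = mean_j F_j(x)**n_j over years j, with F_j a
    # Weibull CDF fitted to year j's 'ordinary' daily rainfall (synthetic here).
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(0)
    years = [rng.weibull(0.8, size=rng.integers(60, 120)) * 10 for _ in range(15)]

    params = []
    for daily in years:                       # fit Weibull to each year's wet days
        c, _, scale = stats.weibull_min.fit(daily, floc=0)
        params.append((c, scale, len(daily)))

    def F_mev(x):
        return np.mean([stats.weibull_min.cdf(x, c, scale=s) ** n
                        for c, s, n in params])

    # 20-year return level: solve F_mev(x) = 1 - 1/20
    x20 = optimize.brentq(lambda x: F_mev(x) - (1 - 1 / 20), 1.0, 500.0)
    print(f"20-year daily rainfall estimate: {x20:.1f} mm")
    ```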

  12. Miniaturized King furnace permits absorption spectroscopy of small samples

    NASA Technical Reports Server (NTRS)

    Ercoli, B.; Tompkins, F. S.

    1968-01-01

    Miniature King-type furnace, consisting of an inductively heated, small-diameter tantalum tube supported in a radiation shield, eliminates the disadvantages of the conventional furnace in obtaining absorption spectra of metal vapors.

  13. ELROI Extremely Low Resource Optical Identifier. A license plate for your satellite, and more.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, David

    ELROI (Extremely Low Resource Optical Identifier) is a license plate for your satellite; a small tag that flashes an optical identification code that can be read by a small telescope on the ground. The final version of the tag will be the size of a thick postage stamp and fully autonomous: you can attach it to everything that goes into space, including small cubesats and inert debris like rocket stages, and it will keep blinking even after the satellite is shut down, reliably identifying the object from launch until re-entry.

  14. Bias in fallout data from nuclear surface shot SMALL BOY: an evaluation of sample perturbation by sieve sizing. Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pascual, J.N.

    1967-06-26

    Evaluation of sample bias introduced by the mechanical sieving of Small Boy fallout samples for 10 minutes revealed the following: Up to 20% of the mass and 30% of the gamma-ray activity can be lost from the large-particle (greater than 1400 microns) fraction. The pan fraction (less than 44 microns) can gain in weight by as much as 79%, and in activity by as much as 44%. The gamma-ray spectra of the fractions were not noticeably altered by the process. Examination of unbiased pan fractions (before mechanical sieving) indicated bimodality of the mass-size distribution in a sample collected 9,200 feet from ground zero, but not in a sample collected at 13,300 feet.

  15. Heat Shield for Extreme Entry Environment Technology (HEEET)

    NASA Technical Reports Server (NTRS)

    Venkatapathy, Ethiraj

    2017-01-01

    The Heat Shield for Extreme Entry Environment Technology (HEEET) project seeks to mature a game-changing Woven Thermal Protection System (TPS) technology to enable in situ robotic science missions recommended by the National Research Council Planetary Science Decadal Survey committee. Recommended science missions include Venus probes and landers; Saturn and Uranus probes; and high-speed sample return missions.

  16. Spatial variation of statistical properties of extreme water levels along the eastern Baltic Sea

    NASA Astrophysics Data System (ADS)

    Pindsoo, Katri; Soomere, Tarmo; Rocha, Eugénio

    2016-04-01

    Most existing projections of future extreme water levels rely on the use of classic generalised extreme value distributions. The choice of a particular distribution is often made based on the absolute value of the shape parameter of the Generalised Extreme Value distribution. If this parameter is small, the Gumbel distribution is most appropriate, while in the opposite case the Weibull or Frechet distribution could be used. We demonstrate that the alongshore variation in the statistical properties of numerically simulated high water levels along the eastern coast of the Baltic Sea is so large that the use of a single distribution for projections of extreme water levels is highly questionable. The analysis is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute. The output of the Rossby Centre Ocean model is sampled with a resolution of 6 h and the output of the circulation model NEMO with a resolution of 1 h. As the maxima of water levels of subsequent years may be correlated in the Baltic Sea, we also employ maxima for stormy seasons. We provide a detailed analysis of the spatial variation of the parameters of the family of extreme value distributions along an approximately 600 km long coastal section from the north-western shore of Latvia in the Baltic Proper to the eastern Gulf of Finland. The parameters are evaluated using the maximum likelihood method and the method of moments. The analysis also covers the entire Gulf of Riga. The core parameter of this family of distributions, the shape parameter of the Generalised Extreme Value distribution, exhibits extensive variation in the study area. Its values, evaluated using the Hydrognomon software and the maximum likelihood method, vary from about -0.1 near the north-western coast of Latvia in the Baltic Proper up to about 0.05 in the eastern Gulf of Finland. This parameter is very close to zero near Tallinn in the western Gulf of Finland. Thus, it is natural that the Gumbel
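
    As a concrete illustration of the estimation step, the sketch below fits a GEV to synthetic yearly maxima by maximum likelihood with scipy. Note the sign convention: scipy's genextreme shape c equals minus the conventional GEV shape parameter xi. All numbers are invented.

    ```python
    # Fit a GEV to annual (or stormy-season) maxima by maximum likelihood
    # and inspect the shape parameter; data here are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    maxima = stats.genextreme.rvs(c=0.1, loc=80, scale=20,
                                  size=40, random_state=rng)  # 40 yearly maxima (cm)

    c, loc, scale = stats.genextreme.fit(maxima)
    xi = -c                                  # back to the conventional sign
    print(f"shape xi = {xi:+.2f}  (xi ~ 0 suggests Gumbel, xi < 0 Weibull-type)")

    # 100-year return level from the fitted distribution
    print(f"100-yr water level: {stats.genextreme.ppf(1 - 1/100, c, loc, scale):.0f} cm")
    ```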

  17. Propulsion engineering study for small-scale Mars missions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, J.

    1995-09-12

    Rocket propulsion options for small-scale Mars missions are presented and compared, particularly for the terminal landing maneuver and for sample return. Mars landing has a low propulsive Δv requirement on a ~1-minute time scale, but at a high acceleration. High thrust/weight liquid rocket technologies, or advanced pulse-capable solids, developed during the past decade for missile defense, are therefore more appropriate for small Mars landers than are conventional space propulsion technologies. The advanced liquid systems are characterized by compact lightweight thrusters having high chamber pressures and short lifetimes. Blowdown or regulated pressure-fed operation can satisfy the Mars landing requirement, but hardware mass can be reduced by using pumps. Aggressive terminal landing propulsion designs can enable post-landing hop maneuvers for some surface mobility. The Mars sample return mission requires a small high-performance launcher having either solid motors or miniature pump-fed engines. Terminal propulsion for 100 kg Mars landers is within the realm of flight-proven thruster designs, but custom tankage is desirable. Landers on a 10 kg scale also are feasible, using technology that has been demonstrated but not previously flown in space. The number of sources and the selection of components are extremely limited on this smallest scale, so some customized hardware is required. A key characteristic of kilogram-scale propulsion is that gas jets are much lighter than liquid thrusters for reaction control. The mass and volume of tanks for inert gas can be eliminated by systems which generate gas as needed from a liquid or a solid, but these have virtually no space flight history. Mars return propulsion is a major engineering challenge; Earth launch is the only previously-solved propulsion problem requiring similar or greater performance.

  18. Carbon-14 dating of small samples by proportional counting.

    PubMed

    Harbottle, G; Sayre, E V; Stoenner, R W

    1979-11-09

    Conventional carbon-14 dating by means of gas proportional counters has been extended to samples containing as little as 10 milligrams of carbon. The accuracy of the dating procedure has been checked by dating sequoia tree-ring samples of the 1st century A.D. and B.C. and an oak tree-ring sample of the 19th century A.D.

  19. Time-varying Concurrent Risk of Extreme Droughts and Heatwaves in California

    NASA Astrophysics Data System (ADS)

    Sarhadi, A.; Diffenbaugh, N. S.; Ausin, M. C.

    2016-12-01

    Anthropogenic global warming has changed the nature and the risk of extreme climate phenomena such as droughts and heatwaves. The concurrence of these climatic extremes may intensify undesirable consequences for human health and have destructive effects on water resources. The present study assesses the risk of concurrent extreme droughts and heatwaves under the dynamic nonstationary conditions arising from climate change in California. To do so, a generalized, fully Bayesian, time-varying multivariate risk framework is proposed that evolves through time under a dynamic, human-induced environment. In this methodology, an extreme, Bayesian, dynamic (Gumbel) copula is developed to model the time-varying dependence structure between the two different climate extremes. The time-varying extreme marginals are first modeled using a Generalized Extreme Value (GEV) distribution. Bayesian Markov Chain Monte Carlo (MCMC) inference is integrated to estimate the parameters of the nonstationary marginals and copula using a Gibbs sampling method. The modelled marginals and copula are then used to develop a fully Bayesian, time-varying joint return period concept for the estimation of concurrent risk. We argue that climate change has increased the chance of concurrent droughts and heatwaves over recent decades in California. It is also demonstrated that a time-varying multivariate perspective should be incorporated to assess realistic concurrent risk of the extremes for water resources planning and management in a changing climate in this area. The proposed generalized methodology can be applied to other stochastic compound climate extremes that are under the influence of climate change.
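
    A stationary, non-Bayesian sketch of the joint return period idea follows: once GEV marginals give the non-exceedance probabilities of the drought and heatwave thresholds, a Gumbel copula couples them and the 'AND' return period drops out. The dependence and threshold values are invented, and the study's time-varying MCMC machinery is not reproduced.

    ```python
    # Joint 'AND' return period of two extremes coupled by a Gumbel copula.
    import numpy as np

    def gumbel_copula(u, v, theta):
        return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1/theta))

    u = 0.98          # marginal non-exceedance prob. of the drought threshold
    v = 0.95          # marginal non-exceedance prob. of the heatwave threshold
    theta = 2.0       # Gumbel dependence parameter (theta = 1 -> independence)

    p_and = 1 - u - v + gumbel_copula(u, v, theta)   # P(X > x AND Y > y)
    print(f"joint 'AND' return period: {1/p_and:.0f} years")
    print(f"under independence: {1/((1-u)*(1-v)):.0f} years")
    ```

    With these invented numbers the dependence shortens the joint return period from 1000 years (independence) to roughly 60 years, which is the qualitative point of modelling the copula explicitly.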

  20. Microparticle sampling by electrowetting-actuated droplet sweeping.

    PubMed

    Zhao, Yuejun; Cho, Sung Kwon

    2006-01-01

    This paper describes a new microparticle sampler where particles can be efficiently swept from a solid surface and sampled into a liquid medium using moving droplets actuated by the electrowetting principle. We successfully demonstrate that super hydrophilic (2 microm and 7.9 microm diameter glass beads of about 14 degrees contact angle), intermediate hydrophilic (7.5 microm diameter polystyrene beads of about 70 degrees contact angle), and super hydrophobic (7.9 microm diameter Teflon-coated glass beads and 3 microm size PTFE particles of over 110 degrees contact angles) particles on a solid surface are picked up by electrowetting-actuated moving droplets. For the glass beads as well as the polystyrene beads, the sampling efficiencies are over 93%, in particular over 98% for the 7.9 microm glass beads. For the PTFE particles, however, the sampling efficiency is measured at around 70%, relatively lower than that of the glass and polystyrene beads. This is due mainly to the non-uniformity in particle size and the particle hydrophobicity. In this case, the collected particles staying (adsorbing) on the air-to-water interface hinder the droplet from advancing. This particle sampler requires an extremely small amount of liquid volume (about 500 nanoliters) and will thus be highly compatible and easily integrated with lab-on-a-chip systems for follow-up biological/chemical analyses.

  1. A new ultrasonic transducer sample cell for in situ small-angle scattering experiments

    NASA Astrophysics Data System (ADS)

    Gupta, Sudipta; Bleuel, Markus; Schneider, Gerald J.

    2018-01-01

    Ultrasound irradiation is a commonly used technique for nondestructive diagnostics or targeted destruction. We report on a new versatile sonication device that fits a variety of standard sample environments for neutron and X-ray scattering instruments. A piezoelectric transducer permits measurement of the time-dependent response of the sample in situ during or after sonication. We use small-angle neutron scattering (SANS) to demonstrate the effect of a time-dependent perturbation on the structure factor of micelles formed from sodium dodecyl sulfate surfactant molecules. We observe a substantial change in the micellar structure during and after exposure to ultrasonic irradiation. We also observe a time-dependent relaxation to the equilibrium values of the unperturbed system. The strength of the perturbation of the structure factor depends systematically on the duration of sonication. The relaxation behavior can be well reproduced after multiple rounds of sonication. Accumulation of the recorded intensities of the different sonication cycles improves the signal-to-noise ratio and permits reaching very short relaxation times. In addition, we present SANS data for the micellar form factor of alkyl-poly(ethylene oxide) surfactant molecules irradiated by ultrasound. Due to the flexibility of our new in situ sonication device, different experiments can be performed, e.g., to explore molecular potentials in more detail by introducing a systematic time-dependent perturbation.

  2. Methane Leaks from Natural Gas Systems Follow Extreme Distributions.

    PubMed

    Brandt, Adam R; Heath, Garvin A; Cooley, Daniel

    2016-11-15

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage data sets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used log-normal model distributions, we show that log-normal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of data sets to increase sample size is not recommended due to apparent deviation between sampled populations. Understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.
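
    The headline top-5% result is easy to illustrate numerically. The sketch below draws leak sizes from a heavy-tailed generalized Pareto distribution (parameters invented, not fitted to the paper's 15,000 measurements) and computes the volume share of the largest 5% of leaks.

    ```python
    # For a heavy-tailed leak-size distribution, a small fraction of leaks
    # dominates total volume; parameters here are purely illustrative.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    leaks = stats.genpareto.rvs(c=0.8, scale=1.0, size=15_000, random_state=rng)

    leaks.sort()
    top5 = leaks[int(0.95 * len(leaks)):]
    print(f"largest 5% of leaks carry {100 * top5.sum() / leaks.sum():.0f}% of volume")
    ```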

  3. Methane Leaks from Natural Gas Systems Follow Extreme Distributions

    DOE PAGES

    Brandt, Adam R.; Heath, Garvin A.; Cooley, Daniel

    2016-10-14

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage datasets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used lognormal model distributions, we show that lognormal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of datasets to increase sample size is not recommended due to apparent deviation between sampled populations. Finally, understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.

  4. Methane Leaks from Natural Gas Systems Follow Extreme Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, Adam R.; Heath, Garvin A.; Cooley, Daniel

    Future energy systems may rely on natural gas as a low-cost fuel to support variable renewable power. However, leaking natural gas causes climate damage because methane (CH4) has a high global warming potential. In this study, we use extreme-value theory to explore the distribution of natural gas leak sizes. By analyzing ~15,000 measurements from 18 prior studies, we show that all available natural gas leakage datasets are statistically heavy-tailed, and that gas leaks are more extremely distributed than other natural and social phenomena. A unifying result is that the largest 5% of leaks typically contribute over 50% of the total leakage volume. While prior studies used lognormal model distributions, we show that lognormal functions poorly represent tail behavior. Our results suggest that published uncertainty ranges of CH4 emissions are too narrow, and that larger sample sizes are required in future studies to achieve targeted confidence intervals. Additionally, we find that cross-study aggregation of datasets to increase sample size is not recommended due to apparent deviation between sampled populations. Finally, understanding the nature of leak distributions can improve emission estimates, better illustrate their uncertainty, allow prioritization of source categories, and improve sampling design. Also, these data can be used for more effective design of leak detection technologies.

  5. The Immune Landscape of Non-Small Cell Lung Cancer: Utility of Cytologic and Histologic Samples Obtained Through Minimally Invasive Pulmonary Procedures.

    PubMed

    Beattie, Jason; Yarmus, Lonny; Wahidi, Momen M; Rivera, M Patricia; Gilbert, Christopher; Maldonado, Fabien; Czarnecka, Kasia; Argento, Angela; Chen, Alexander; Herth, Felix; Sterman, Daniel H

    2018-05-14

    The success of immune checkpoint inhibitors and the discovery of useful biomarkers to predict response to these agents are shifting much of the focus of personalized care for non-small cell lung cancer towards harnessing the immune response. With further advancement, more effective immunotherapy options will emerge along with more useful biomarkers. Paradoxically, minimally invasive small biopsy and cytology specimens have become the primary method for diagnosis of patients with advanced disease, as well as for initial diagnosis and staging in earlier-stage disease. For the benefit of these patients, we will continue to learn how to do more with less. In this perspective, we review aspects of immunobiology that underlie the current state of the art of existing and emerging immunologic biomarkers that hold potential to enhance the care of patients with non-small cell lung cancer. We address practical considerations for acquiring patient samples that accurately reflect disease immune status. We also propose a paradigm shift wherein the sample types most often obtained clinically become the most important ones to validate in pioneering basic science and translational work and in subsequent clinical trials.

  6. Temporal development of extreme precipitation in Germany projected by EURO-CORDEX simulations

    NASA Astrophysics Data System (ADS)

    Brendel, Christoph; Deutschländer, Thomas

    2017-04-01

    identified. For instance, the frequency of extreme precipitation events more than triples in the most extreme scenario. Regional differences are rather small, with the largest increase in northern Germany, particularly in coastal regions, and the weakest increase in the southernmost parts of Germany.

  7. Optimising the quantification of cytokines present at low concentrations in small human mucosal tissue samples using Luminex assays

    PubMed Central

    Staples, Emily; Ingram, Richard James Michael; Atherton, John Christopher; Robinson, Karen

    2013-01-01

    Sensitive measurement of multiple cytokine profiles from small mucosal tissue biopsies, for example human gastric biopsies obtained through an endoscope, is technically challenging. Multiplex methods such as Luminex assays offer an attractive solution, but standard protocols are not available for tissue samples. We assessed the utility of three commercial Luminex kits (VersaMAP, Bio-Plex and MILLIPLEX) to measure interleukin-17A (IL-17) and interferon-gamma (IFNγ) concentrations in human gastric biopsies, and we optimised the preparation of mucosal samples for this application. First, we assessed the technical performance, limits of sensitivity, and linear dynamic ranges of each kit. Next we spiked human gastric biopsies with recombinant IL-17 and IFNγ at a range of concentrations (1.5 to 1000 pg/mL) and assessed kit accuracy for spiked cytokine recovery and intra-assay precision. We also evaluated the impact of different tissue processing methods and extraction buffers on our results. Finally we assessed recovery of endogenous cytokines in unspiked samples. In terms of sensitivity, all of the kits performed well within the manufacturers' recommended standard curve ranges, but the MILLIPLEX kit provided the most consistent sensitivity for low cytokine concentrations. In the spiking experiments, the MILLIPLEX kit performed most consistently over the widest range of concentrations. For tissue processing, manual disruption provided significantly improved cytokine recovery over automated methods. Our selected kit and optimised protocol were further validated by measurement of relative cytokine levels in inflamed and uninflamed gastric mucosa using Luminex and real-time polymerase chain reaction. In summary, with proper optimisation, Luminex kits (and for IL-17 and IFNγ the MILLIPLEX kit in particular) can be used for the sensitive detection of cytokines in mucosal biopsies. Our results should help other researchers seeking to quantify multiple low concentration cytokines in

  8. High-accuracy measurements of N2O concentration and site-specific nitrogen isotopes in small or high concentration samples

    NASA Astrophysics Data System (ADS)

    Palmer, M. R.; Arata, C.; Huang, K.

    2014-12-01

    Nitrous oxide (N2O) gas is among the major contributors to global warming and ozone depletion in the stratosphere. Quantitative estimates of N2O production via various pathways and of N2O fluxes across different reservoirs are key to understanding the role of N2O in global change. To achieve this goal, accurate and concurrent measurement of both the N2O concentration ([N2O]) and its site-specific isotopic composition (SP-δ15N), namely δ15Nα and δ15Nβ, is desired. Recent developments in Cavity Ring-Down Spectroscopy (CRDS) have enabled high-precision measurements of [N2O] and SP-δ15N of a continuous gas flow. However, many N2O samples are discrete with limited volume (< 500 ml) and/or high [N2O] (> 2 ppm), and are not suitable for direct measurement by CRDS. Here we present results from a Small Sample Isotope Module 2 (SSIM2) which is coupled to and automatically coordinated with a Picarro isotopic N2O CRDS analyzer to handle and measure high-concentration and/or small-volume samples. The SSIM2 requires 20 ml of sample per analysis and transfers the sample to the CRDS for high-precision measurement. When the sample injection is < 20 ml, a zero gas is optionally filled to make up the volume. We used the SSIM2 to dilute high-[N2O] samples and < 20 ml samples, and tested the effect of dilution on the measured SP-δ15N. In addition, we employed and tested a newly developed double-injection method for samples adequate for two 20 ml injections. After the SSIM2 and the CRDS cavity were primed with the first injection, the second injection, which involves negligible dilution of the sample, can be accurately measured for both [N2O] and SP-δ15N. Results of these experiments indicate that the precision of SSIM2-CRDS is similar to that of continuous measurements using the CRDS alone, and that dilution has minimal effect on SP-δ15N, as long as the [N2O] is > 300 ppb after dilution. Overall, the precision of SP-δ15N measured using the SSIM2 is < 0.5 ‰.

  9. Small amount of water induced preparation of several morphologies for InBO3:Eu3+ phosphor via a facile boric acid flux method and their luminescent properties

    NASA Astrophysics Data System (ADS)

    Ding, Wen; Liang, Pan; Liu, Zhi-Hong

    2017-05-01

    Four kinds of morphologies of InBO3:Eu3+ phosphor have been prepared via a facile boric acid flux method simply by adjusting the small amount of added water. The prepared samples have been characterized by XRD, FT-IR, and SEM. It was found that the size and morphology of the samples could be effectively controlled by adjusting the reaction temperature, the reaction time, and especially the small amount of added water, which plays a critical role in controlling the morphology. The possible growth mechanisms for the microsphere and flower-like morphologies were further discussed on the basis of time-dependent experiments. Furthermore, the luminescence properties of the prepared InBO3:Eu3+ samples have been investigated by photoluminescence (PL) spectroscopy. The results show that the InBO3:Eu3+ phosphors exhibit strong orange emission under ultraviolet excitation at 237 nm. The monodisperse microsphere sample possesses the highest PL intensity among the above four morphologies and can be used as a potential orange luminescent material.

  10. X-shooter Finds an Extremely Primitive Star

    NASA Astrophysics Data System (ADS)

    Caffau, E.; Bonifacio, P.; François, P.; Sbordone, L.; Monaco, L.; Spite, M.; Spite, F.; Ludwig, H.-G.; Cayrel, R.; Zaggia, S.; Hammer, F.; Randich, S.; Molaro, P.; Hill, V.

    2011-12-01

    Low-mass extremely metal-poor (EMP) stars hold the fossil record of the chemical composition of the early phases of the Universe in their atmospheres. Chemical analysis of such objects provides important constraints on these early phases. EMP stars are rather rare objects: to dig them out, large amounts of data have to be considered. We have analysed stars from the Sloan Digital Sky Survey using an automatic procedure and selected a sample of good candidate EMP stars, which we observed with the spectrographs X-shooter and UVES. We could confirm the low metallicity of our sample of stars, and we succeeded in finding a record metal-poor star.

  11. Time-of-flight Extreme Environment Diffractometer at the Helmholtz-Zentrum Berlin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokhnenko, Oleksandr, E-mail: prokhnenko@helmholtz-berlin.de; Stein, Wolf-Dieter; Bleif, Hans-Jürgen

    2015-03-15

    The Extreme Environment Diffractometer (EXED) is a new neutron time-of-flight instrument at the BER II research reactor at the Helmholtz-Zentrum Berlin, Germany. Although EXED is a special-purpose instrument, its early construction made it available to users as a general-purpose diffractometer. In this respect, EXED became one of the rare examples where the performance of a time-of-flight diffractometer at a continuous source can be characterized. In this paper, we report on the design and performance of EXED with an emphasis on the unique instrument capabilities. The latter comprise variable wavelength resolution and wavelength band, control of the incoming beam divergence, the possibility to change the angular positions of detectors and their distance to the sample, and use of event recording and offline histogramming. These features combined make EXED easily tunable to the requirements of a particular problem, from conventional diffraction to small-angle neutron scattering. The instrument performance is demonstrated by several reference measurements and user experiments.

  12. Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data

    PubMed Central

    Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.

    2013-01-01

    Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
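
    A minimal sketch of the first step only (Empirical Bayes smoothing) follows, using a gamma-Poisson model in which each area's raw rate is shrunk toward the pooled rate in proportion to its exposure. The moment estimator for the prior is deliberately rough, the Brass P/F step is omitted, and all data are synthetic.

    ```python
    # Gamma-Poisson Empirical Bayes shrinkage of small-area fertility rates:
    # noisy rates from small areas are pulled toward the overall rate.
    import numpy as np

    rng = np.random.default_rng(5)
    exposure = rng.integers(30, 3000, size=200).astype(float)  # women per area
    true_rate = rng.gamma(shape=8, scale=0.15 / 8, size=200)   # around 0.15
    births = rng.poisson(true_rate * exposure)

    raw = births / exposure
    # Rough method-of-moments gamma prior (alpha, beta) across areas
    m = np.average(raw, weights=exposure)
    v = np.average((raw - m) ** 2, weights=exposure)
    beta = m / max(v - m / np.mean(exposure), 1e-9)
    alpha = m * beta

    eb = (births + alpha) / (exposure + beta)   # posterior-mean (shrunk) rates
    print("noisiest raw rate:", raw[exposure.argmin()].round(3),
          "-> smoothed:", eb[exposure.argmin()].round(3))
    ```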

  13. Detection and attribution of extreme weather disasters

    NASA Astrophysics Data System (ADS)

    Huggel, Christian; Stone, Dáithí; Hansen, Gerrit

    2014-05-01

    Single disasters related to extreme weather events have caused loss and damage on the order of tens of billions of US dollars in recent years. Recent disasters have fueled the debate about whether and to what extent these events are related to climate change. In international climate negotiations, disaster loss and damage is now high on the agenda, and related policy mechanisms have been discussed or are being implemented. In view of funding allocation and effective risk-reduction strategies, detection of extreme weather events and disasters and their attribution to climate change is a key issue. Different avenues have so far been taken to address detection and attribution in this context. The physical climate sciences have developed approaches in which, among others, variables that are reasonably well sampled over climatically relevant time periods and related to the meteorological characteristics of the extreme event are examined. Trends in these variables (e.g. air or sea surface temperatures) are compared between observations and climate simulations with and without anthropogenic forcing. Generally, progress has been made in recent years in attributing changes in the chance of some single extreme weather events to anthropogenic climate change, but important challenges remain. A different line of research is primarily concerned with the losses related to extreme weather events over time, using disaster databases. A growing consensus is that the increase in asset values and in exposure are the main drivers of the strong increase of economic losses over the past several decades, and only a limited number of studies have found trends consistent with expectations from climate change. Here we propose a better integration of existing lines of research in detection and attribution of extreme weather events and disasters by applying a risk framework. Risk is thereby defined as a function of the probability of occurrence of an extreme weather event, and the associated consequences

  14. Small Twisting Prominence

    NASA Image and Video Library

    2018-01-12

    A small prominence rose up above the sun, appeared to twist around for several hours, and then began to send some streams of plasma back into the sun (Jan. 3-4, 2018). The action, observed in a wavelength of extreme ultraviolet light, lasted just about one day. Prominences like this one are quite common. In fact, there were several over the past few days. For a sense of scale, the prominence reached up more than several times the size of Earth. Movies are available at https://photojournal.jpl.nasa.gov/catalog/PIA22198

  15. Effects of Extreme Events on Arsenic Cycling in Salt Marshes

    NASA Astrophysics Data System (ADS)

    Northrup, Kristy; Capooci, Margaret; Seyfferth, Angelia L.

    2018-03-01

    Extreme events such as storm surges, intense precipitation, and supermoons cause anomalous and large fluctuations in water level in tidal salt marshes, which impacts the sediment biogeochemistry that dictates arsenic (As) cycling. In addition to changes in water level, which impacts soil redox potential, these extreme events may also change salinity due to freshwater inputs from precipitation or saltwater inputs due to surge. It is currently unknown how As mobility in tidal salt marshes will be impacted by extreme events, as fluctuations in salinity and redox potential may act synergistically to mobilize As. To investigate impacts of extreme events on As cycling in tidal salt marshes, we conducted a combined laboratory and field investigation. We monitored pore water and soil samples before, during, and after two extreme events: a supermoon lunar eclipse followed by a storm surge and precipitation induced by Hurricane Joaquin in fall 2015 at the St. Jones Reserve in Dover, Delaware, a representative tidal salt marsh in the Mid-Atlantic United States. We also conducted soil incubations of marsh sediments in batch and in flow-through experiments in which redox potential and/or salinity were manipulated. Field investigations showed that pore water As was inversely proportional to redox potential. During the extreme events, a distinct pulse of As was observed in the pore water with maximum salinity. Combined field and laboratory investigations revealed that this As pulse is likely due to rapid changes in salinity. These results have implications for As mobility in the face of extreme weather variability.

  16. A Capillary Absorption Spectrometer for Stable Carbon Isotope Ratio (13C/12C) Analysis in Very Small Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, James F.; Sams, Robert L.; Blake, Thomas A.

    2012-02-06

    A capillary absorption spectrometer (CAS) suitable for IR laser isotope analysis of small CO2 samples is presented. The system employs a continuous-wave (cw) quantum cascade laser to study nearly adjacent rovibrational transitions of different isotopologues of CO2 near 2307 cm^-1 (4.34 µm). This initial CAS system can achieve a relative isotopic precision of about 10 ppm 13C, or ~1‰ (per mil in delta notation relative to Vienna Pee Dee Belemnite), with 20-100 picomoles of entrained sample within the hollow waveguide for CO2 concentrations of ~400 to 750 ppm. Isotopic analyses of such gas fills in a 1-mm ID hollow waveguide of 0.8 m overall physical path length can be carried out down to ~2 Torr. Overall 13C/12C ratios can be calibrated to ~2‰ accuracy with diluted CO2 standards. A novel, low-cost method to reduce cw-fringing noise resulting from multipath distortions in the hollow waveguide is presented, which allows weak absorbance features to be studied at the few-ppm level (peak-to-rms) after 1,000 scans are co-added in ~10 sec. The CAS is meant to work directly with converted CO2 samples from a Laser Ablation-Catalytic-Combustion (LA CC) micro-sampler to provide 13C/12C ratios of small biological isolates with spatial resolutions of ~50 µm.
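
    For readers unfamiliar with the delta notation used above, a tiny helper shows the conversion from a raw 13C/12C ratio to a per-mil delta value relative to VPDB; the standard ratio below is the commonly quoted value, and the sample ratio is invented.

    ```python
    # delta-13C in per mil: a 1 per-mil precision means resolving ~0.1%
    # changes in the raw isotope ratio.
    R_VPDB = 0.011180          # commonly quoted 13C/12C of the VPDB standard

    def delta13C(R_sample):
        """delta 13C in per mil relative to VPDB."""
        return (R_sample / R_VPDB - 1) * 1000.0

    print(f"{delta13C(0.011168):+.1f} per mil")   # a slightly depleted sample
    ```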

  17. Investigation of the Effect of Small Hardening Spots Created on the Sample Surface by Laser Complex with Solid-State Laser

    NASA Astrophysics Data System (ADS)

    Nozdrina, O.; Zykov, I.; Melnikov, A.; Tsipilev, V.; Turanov, S.

    2018-03-01

    This paper describes the results of an investigation of the effect of small hardening spots (about 1 mm) created on the surface of a sample by a laser complex with a solid-state laser. The melted area of the steel sample does not exceed 5%. The change in steel microhardness in the region subjected to laser treatment is studied. A graph of the dependence of sample deformation on tension is also presented. As a result, changes in the yield plateau and in the plastic properties were detected. The flow lines were tracked in a series of speckle photographs. We can thus see how a millimetre-scale surface inhomogeneity can influence the deformation and strength properties of steel.

  18. Sampling versus systematic full lymphatic dissection in surgical treatment of non-small cell lung cancer.

    PubMed

    Koulaxouzidis, Georgios; Karagkiouzis, Grigorios; Konstantinou, Marios; Gkiozos, Ioannis; Syrigos, Konstantinos

    2013-04-22

    The extent of mediastinal lymph node assessment during surgery for non-small cell lung cancer remains controversial. Different techniques are used, ranging from simple visual inspection of the unopened mediastinum to an extended bilateral lymph node dissection. Furthermore, different terms are used to define these techniques. Sampling is the removal of one or more lymph nodes under the guidance of pre-operative findings. Systematic (full) nodal dissection is the removal of all mediastinal tissue containing the lymph nodes systematically within anatomical landmarks. A Medline search was conducted to identify articles in the English language that addressed the role of mediastinal lymph node resection in the treatment of non-small cell lung cancer. Arguments in favor of full lymphatic dissection include complete resection, improved nodal staging, and better local control due to resection of undetected micrometastases. Arguments against routine full lymphatic dissection are increased morbidity, increased operative time, and lack of evidence of improved survival. For complete resection of non-small cell lung cancer, many authors recommend a systematic nodal dissection as the standard approach during surgery, and suggest that this both provides adequate nodal staging and guarantees complete resection. Whether extending the lymph node dissection influences survival or recurrence rate is still not known. There are valid arguments in favor, in terms not only of improved local control but also of improved long-term survival. However, the impact of lymph node dissection on long-term survival should be further assessed by large-scale multicenter randomized trials.

  19. Assessing changes in extreme convective precipitation from a damage perspective

    NASA Astrophysics Data System (ADS)

    Schroeer, K.; Tye, M. R.

    2016-12-01

    Projected increases in high-intensity, short-duration convective precipitation are expected even in regions that are likely to become more arid. Such high-intensity precipitation events can trigger hazardous flash floods, debris flows, and landslides that put people and local assets at risk. However, the assessment of local-scale precipitation extremes is hampered by their high spatial and temporal variability. Moreover, extreme events are not only rare, but such small-scale events are also likely to be underreported where they do not coincide with the observation network. Rather than focusing solely on the convective precipitation itself, understanding which characteristics of these extremes drive damage may be more effective for assessing future risks. Two sources of data are used in this study. First, sub-daily precipitation observations over the Southern Alps enable an examination of seasonal and regional patterns in high-intensity convective precipitation and their relationship with weather types. Secondly, reports of private loss and damage at the household scale are used to identify which events are most damaging and what conditions potentially enhance the vulnerability to these extremes. This study explores the potential added value of including recorded loss and damage data to understand the risks from summertime convective precipitation events. By relating precipitation-generating weather types to the severity of damage, we hope to develop a mechanism for assessing future risks. A further benefit would be to identify from damage reports the likely occurrence of precipitation extremes where no direct observations are available, and to use this information to validate remotely sensed observations.

  20. Crowdsourcing for large-scale mosquito (Diptera: Culicidae) sampling

    USDA-ARS?s Scientific Manuscript database

    Sampling a cosmopolitan mosquito (Diptera: Culicidae) species throughout its range is logistically challenging and extremely resource intensive. Mosquito control programmes and regional networks operate at the local level and often conduct sampling activities across much of North America. A method f...

  1. Validated low-volume aldosterone immunoassay tailored to GCLP-compliant investigations in small sample volumes.

    PubMed

    Schaefer, J; Burckhardt, B B; Tins, J; Bartel, A; Laeer, S

    2017-12-01

    Heart failure is well investigated in adults, but data in children are lacking. To overcome this shortage of reliable data, appropriate bioanalytical assays are required. We report the development and validation of a bioanalytical assay for the determination of aldosterone concentrations in small sample volumes, applicable to clinical studies under Good Clinical Laboratory Practice. An immunoassay was developed based on a commercially available enzyme-linked immunosorbent assay and validated according to the current bioanalytical guidelines of the EMA and FDA. The assay (range 31.3-1000 pg/mL [86.9-2775 pmol/L]) is characterized by a between-run accuracy from -3.8% to -0.8% and a between-run imprecision ranging from 4.9% to 8.9% (coefficient of variation). For within-run accuracy, the relative error was between -11.1% and +9.0%, while within-run imprecision ranged from 1.2% to 11.8% (CV). For parallelism and dilutional linearity, the relative error of back-calculated concentrations varied from -14.1% to +8.4% and from -7.4% to +10.5%, respectively. The immunoassay is compliant with the bioanalytical guidelines of the EMA and FDA and allows accurate and precise aldosterone determinations. As the assay can run low-volume samples, it is especially valuable for pediatric investigations.
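
    For reference, a hedged sketch of how the accuracy and imprecision figures quoted above are conventionally computed in bioanalytical validation: relative error (RE%) against the nominal concentration and coefficient of variation (CV%) across replicate runs. The numbers are invented.

    ```python
    # Accuracy (RE%) and imprecision (CV%) from replicate QC measurements.
    import numpy as np

    def re_and_cv(measured, nominal):
        measured = np.asarray(measured, dtype=float)
        re = (measured.mean() - nominal) / nominal * 100.0   # accuracy, RE%
        cv = measured.std(ddof=1) / measured.mean() * 100.0  # imprecision, CV%
        return re, cv

    runs = [478.0, 491.0, 465.0, 502.0, 488.0]   # pg/mL, QC nominal 500 pg/mL
    re, cv = re_and_cv(runs, nominal=500.0)
    print(f"RE = {re:+.1f}%, CV = {cv:.1f}%")
    ```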

  2. Assessment of Vulnerability to Extreme Flash Floods in Design Storms

    PubMed Central

    Kim, Eung Seok; Choi, Hyun Il

    2011-01-01

    There has been an increase in the occurrence of sudden local flooding of great volume and short duration caused by heavy or excessive rainfall intensity over a small area, which presents a great potential threat to the natural environment, human life, public health, and property. Such flash floods produce rapid runoff and debris flows that rise quickly, with little or no advance warning to prevent flood damage. This study develops a flash flood index as the average of same-scale relative severity factors quantifying the characteristics of hydrographs generated from a rainfall-runoff model for long-term observed rainfall data in a small ungauged study basin, and presents regression equations between rainfall characteristics and the flash flood index. The aim of this study is to develop flash flood index-duration-frequency relation curves, by combining the rainfall intensity-duration-frequency relation with the flash flood index from probability rainfall data, in order to evaluate vulnerability to extreme flash floods in design storms. This study is an initial effort to quantify the flash flood severity of design storms for both existing and planned flood control facilities, to cope with the residual flood risks posed by the extreme flash floods that have occurred frequently in recent years. PMID:21845165
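
    A schematic reading of the index construction is sketched below: several hydrograph severity measures are rescaled to a common 0-1 range across events and then averaged. The three factors used (peak-to-baseflow ratio, rising-limb gradient, response time) are plausible assumptions for illustration, not necessarily the authors' exact definitions.

    ```python
    # Toy flash flood index: average of same-scale relative severity factors.
    import numpy as np

    def relative(x, invert=False):
        x = np.asarray(x, dtype=float)
        r = (x - x.min()) / (x.max() - x.min())      # rescale to [0, 1]
        return 1 - r if invert else r                # short response time = severe

    peak_ratio = [2.1, 5.6, 3.3, 8.9]      # Qpeak / baseflow per event
    rise_grad = [0.4, 2.0, 1.1, 3.5]       # rising-limb slope (m3/s per min)
    resp_time = [95., 22., 60., 12.]       # minutes from rain burst to peak

    ffi = np.mean([relative(peak_ratio), relative(rise_grad),
                   relative(resp_time, invert=True)], axis=0)
    print("flash flood index per event:", ffi.round(2))
    ```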

  3. Assessment of vulnerability to extreme flash floods in design storms.

    PubMed

    Kim, Eung Seok; Choi, Hyun Il

    2011-07-01

    There has been an increase in the occurrence of sudden local flooding of great volume and short duration caused by heavy or excessive rainfall intensity over a small area, which presents a great potential threat to the natural environment, human life, public health, and property. Such flash floods produce rapid runoff and debris flows that rise quickly, with little or no advance warning to prevent flood damage. This study develops a flash flood index as the average of same-scale relative severity factors quantifying the characteristics of hydrographs generated from a rainfall-runoff model for long-term observed rainfall data in a small ungauged study basin, and presents regression equations between rainfall characteristics and the flash flood index. The aim of this study is to develop flash flood index-duration-frequency relation curves, by combining the rainfall intensity-duration-frequency relation with the flash flood index from probability rainfall data, in order to evaluate vulnerability to extreme flash floods in design storms. This study is an initial effort to quantify the flash flood severity of design storms for both existing and planned flood control facilities, to cope with the residual flood risks posed by the extreme flash floods that have occurred frequently in recent years.

  4. Evidence of population resistance to extreme low flows in a fluvial-dependent fish species

    USGS Publications Warehouse

    Katz, Rachel A.; Freeman, Mary C.

    2015-01-01

    Extreme low streamflows are natural disturbances to aquatic populations. Species in naturally intermittent streams display adaptations that enhance persistence during extreme events; however, the fate of populations in perennial streams during unprecedented low-flow periods is not well understood. Biota requiring swift-flowing habitats may be especially vulnerable to flow reductions. We estimated the abundance and local survival of a native fluvial-dependent fish species (Etheostoma inscriptum) across 5 years encompassing historic low flows in a sixth-order southeastern USA perennial river. Based on capture-mark-recapture data, the study shoal may have acted as a refuge during severe drought, with increased young-of-the-year (YOY) recruitment and occasionally high adult immigration. Contrary to expectations, summer and autumn survival rates (30 days) were not strongly depressed during low-flow periods, despite 25%-80% reductions in monthly discharge. Instead, YOY survival increased with lower minimum discharge and in response to small rain events that increased low-flow variability. Age-1+ fish showed the opposite pattern, with survival decreasing in response to increasing low-flow variability. Results from this population dynamics study of a small fish in a perennial river suggest that fluvial-dependent species can be resistant to extreme flow reductions through enhanced YOY recruitment and high survival

  5. Extreme Environments Rig

    NASA Image and Video Library

    2013-08-13

    The Glenn Extreme Environments Rig (GEER) simulates the extreme conditions found in space and tests many devices that will explore Venus to see if they can withstand the punishing environment and temperatures over 800 degrees F.

  6. Flood frequency estimates and documented and potential extreme peak discharges in Oklahoma

    USGS Publications Warehouse

    Tortorelli, Robert L.; McCabe, Lan P.

    2001-01-01

    Knowledge of the magnitude and frequency of floods is required for the safe and economical design of highway bridges, culverts, dams, levees, and other structures on or near streams, and for flood-plain management programs. Flood frequency estimates for gaged streamflow sites were updated, documented extreme peak discharges for gaged and miscellaneous measurement sites were tabulated, and potential extreme peak discharges for Oklahoma streamflow sites were estimated. Potential extreme peak discharges, derived from the relation between documented extreme peak discharges and contributing drainage areas, can provide valuable information concerning the maximum peak discharge that could be expected at a stream site. Potential extreme peak discharge is useful in conjunction with flood frequency analysis to give the best evaluation of flood risk at a site. Peak discharge and flood frequency for selected recurrence intervals from 2 to 500 years were estimated for 352 gaged streamflow sites. Data through the 1999 water year were used from streamflow-gaging stations with at least 8 years of record within Oklahoma or about 25 kilometers into the bordering states of Arkansas, Kansas, Missouri, New Mexico, and Texas. These sites were in unregulated basins and in basins affected by regulation, urbanization, and irrigation. Documented extreme peak discharges and associated data were compiled for 514 sites in and near Oklahoma, 352 with streamflow-gaging stations and 162 at miscellaneous measurement sites or streamflow-gaging stations with short records, for a total of 671 measurements. The sites are fairly well distributed statewide; however, many streams, large and small, have never been monitored. Potential extreme peak-discharge curves were developed for streamflow sites in hydrologic regions of the state based on documented extreme peak discharges and the contributing drainage areas. Two hydrologic regions, east and west, were defined using 98 degrees 15 minutes longitude as the
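
    A minimal sketch of how a potential extreme peak-discharge (envelope) curve of this kind can be constructed: regress log peak discharge on log contributing drainage area for the documented extremes, then raise the fitted line so it envelopes every observation. The discharges and areas below are illustrative, not the Oklahoma data.

    ```python
    # Envelope curve for potential extreme peak discharge vs drainage area.
    import numpy as np

    area = np.array([12., 40., 150., 520., 1800., 6400.])   # km^2
    peak = np.array([85., 190., 610., 1400., 3900., 9800.]) # m^3/s, documented extremes

    b, log_c = np.polyfit(np.log10(area), np.log10(peak), 1)
    resid = np.log10(peak) - (b * np.log10(area) + log_c)
    log_c_env = log_c + resid.max()      # raise intercept to envelope the data

    def potential_peak(a_km2):
        return 10 ** (b * np.log10(a_km2) + log_c_env)

    print(f"potential extreme peak for 1000 km^2: {potential_peak(1000):.0f} m^3/s")
    ```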

  7. Purifying Nucleic Acids from Samples of Extremely Low Biomass

    NASA Technical Reports Server (NTRS)

    La Duc, Myron; Osman, Shariff; Venkateswaran, Kasthuri

    2008-01-01

    A new method is able to circumvent the bias to which one commercial DNA extraction method falls prey with regard to the lysing of certain types of microbial cells, a bias that results in a truncated spectrum of microbial diversity. By prefacing the protocol with glass-bead-beating agitation (mechanically lysing a much more encompassing array of cell types and spores), the resulting detection of microbial diversity is greatly enhanced. In preliminary studies, a commercially available automated DNA extraction method was effective at delivering total DNA yield, but only the non-hardy members of the bacterial community were represented in clone libraries, suggesting that this method was ineffective at lysing the hardier cell types. To circumvent such a bias, yet another extraction method was devised. In this technique, samples are first subjected to a stringent bead-beating step and then processed via standard protocols. Prior to being loaded into extraction vials, samples are placed in micro-centrifuge bead tubes containing 50 micro-L of commercially produced lysis solution. After inverting several times, the tubes are agitated at maximum speed for two minutes. Following agitation, the tubes are centrifuged at 10,000 x g for one minute. At this time, the aqueous volumes are removed from the bead tubes and loaded into extraction vials for further processing via the extraction regime. The new method couples two independent methodologies in such a way as to yield the highest concentration of PCR-amplifiable DNA, with consistent and reproducible results and with the most accurate and encompassing report of species richness.

  8. Exploratory Factor Analysis with Small Sample Sizes

    ERIC Educational Resources Information Center

    de Winter, J. C. F.; Dodou, D.; Wieringa, P. A.

    2009-01-01

    Exploratory factor analysis (EFA) is generally regarded as a technique for large sample sizes ("N"), with N = 50 as a reasonable absolute minimum. This study offers a comprehensive overview of the conditions in which EFA can yield good quality results for "N" below 50. Simulations were carried out to estimate the minimum required "N" for different…

  9. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extreme value distributions and a "dependence" function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
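
    To make the construction concrete, the sketch below evaluates a bivariate extreme-value CDF of the logistic (Gumbel) type with two-parameter Weibull margins, one common closed form with a single dependence parameter. The parameter values are arbitrary and the exact parameterization used in the paper may differ; treat this as a hedged illustration, not the authors' model.

```python
# Minimal sketch of a bivariate extreme-value CDF of the logistic
# (Gumbel) type, F(x, y) = exp(-[(-ln F1)^(1/a) + (-ln F2)^(1/a)]^a),
# with two-parameter Weibull margins (lower bound zero). Dependence
# parameter a in (0, 1]; a = 1 gives independence. Values are invented.
import numpy as np

def weibull_cdf(x, shape, scale):
    """Two-parameter Weibull CDF with lower bound zero."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, 1.0 - np.exp(-(x / scale) ** shape), 0.0)

def bivariate_logistic_cdf(x, y, margins, alpha):
    (s1, c1), (s2, c2) = margins
    u = -np.log(weibull_cdf(x, s1, c1))
    v = -np.log(weibull_cdf(y, s2, c2))
    return np.exp(-((u ** (1 / alpha) + v ** (1 / alpha)) ** alpha))

# Example: margins Weibull(1.5, 2.0) and Weibull(2.0, 2.5), alpha = 0.7
print(bivariate_logistic_cdf(2.0, 3.0, [(1.5, 2.0), (2.0, 2.5)], 0.7))
```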

  10. Large-scale drivers of local precipitation extremes in convection-permitting climate simulations

    NASA Astrophysics Data System (ADS)

    Chan, Steven C.; Kendon, Elizabeth J.; Roberts, Nigel M.; Fowler, Hayley J.; Blenkinsop, Stephen

    2016-04-01

    The Met Office 1.5-km UKV convection-permitting model (CPM) is used to downscale present-climate and RCP8.5 60-km HadGEM3 GCM simulations. Extreme UK hourly precipitation intensities increase with local near-surface temperatures and humidity; for temperature, the simulated increase rate for the present-climate simulation is about 6.5% per kelvin, which is consistent with observations and theoretical expectations. While extreme intensities are higher in the RCP8.5 simulation as higher temperatures are sampled, there is a decline at the highest temperatures due to circulation and relative humidity changes. Extending the analysis to the broader synoptic scale, it is found that circulation patterns, as diagnosed by MSLP or circulation type, play an increased role in the probability of extreme precipitation in the RCP8.5 simulation. Nevertheless, for both CPM simulations, vertical instability is the principal driver of extreme precipitation.

  11. Outliers and Extremes: Dragon-Kings or Dragon-Fools?

    NASA Astrophysics Data System (ADS)

    Schertzer, D. J.; Tchiguirinskaia, I.; Lovejoy, S.

    2012-12-01

    Geophysics seems full of monsters like Victor Hugo's Court of Miracles, and monstrous extremes have been statistically considered as outliers with respect to more normal events. However, a characteristic magnitude separating abnormal events from normal ones would be at odds with the generic scaling behaviour of nonlinear systems, contrary to "fat-tailed" probability distributions and self-organized criticality. More precisely, it can be shown [1] how the apparent monsters could be mere manifestations of a singular measure mishandled as a regular measure. Monstrous fluctuations are the rule, not outliers, and they are more frequent than usually thought, to the point that (theoretical) statistical moments can easily be infinite. The empirical estimates of the latter are erratic and diverge with sample size. The corresponding physics is that intense small-scale events cannot be smoothed out by upscaling. However, based on a few examples, it has also been argued [2] that one should consider "genuine" outliers of fat-tailed distributions so monstrous that they can be called "dragon-kings". We critically analyse these arguments, e.g. finite sample size and statistical estimates of the largest events, and multifractal phase transitions vs. more classical phase transitions. We emphasize the fact that dragon-kings are not needed in order for the largest events to become predictable. This is rather reminiscent of the Feast of Fools picturesquely described by Victor Hugo. [1] D. Schertzer, I. Tchiguirinskaia, S. Lovejoy and P. Hubert (2010): No monsters, no miracles: in nonlinear sciences hydrology is not an outlier! Hydrological Sciences Journal, 55 (6), 965-979. [2] D. Sornette (2009): Dragon-Kings, Black Swans and the Prediction of Crises. International Journal of Terraspace Science and Engineering, 1(3), 1-17.
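
    The divergence of empirical moments mentioned above is easy to reproduce numerically. The sketch below draws Pareto samples with tail exponent alpha = 1.5, for which the mean is finite but the second moment is theoretically infinite, and shows that the estimated second moment keeps growing erratically with sample size (exact values will vary from run to run).

```python
# Minimal sketch: for a fat-tailed (Pareto) law with tail exponent
# alpha = 1.5, the mean is finite but the second moment is infinite,
# so its empirical estimate never settles; it diverges erratically
# with sample size, as the abstract describes.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5  # tail exponent: mean finite, variance infinite

for n in (10**3, 10**4, 10**5, 10**6):
    x = rng.pareto(alpha, size=n) + 1.0   # classical Pareto, x >= 1
    m2 = np.mean(x**2)                    # "estimate" of an infinite moment
    print(f"n = {n:>7}:  mean = {np.mean(x):5.2f}   "
          f"second moment = {m2:12.1f}")
```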

  12. Statistical trend analysis and extreme distribution of significant wave height from 1958 to 1999 - an application to the Italian Seas

    NASA Astrophysics Data System (ADS)

    Martucci, G.; Carniel, S.; Chiggiato, J.; Sclavo, M.; Lionello, P.; Galati, M. B.

    2010-06-01

    The study is a statistical analysis of sea-state time series derived using the wave model WAM forced by the ERA-40 dataset in selected areas near the Italian coasts. For the period 1 January 1958 to 31 December 1999 the analysis yields: (i) the existence of a negative trend in the annual- and winter-averaged sea-state heights; (ii) the existence of a turning point in the late 1980s in the annual-averaged trend of sea-state heights at a site in the Northern Adriatic Sea; (iii) the overall absence of a significant trend in the annual-averaged mean durations of sea states over thresholds; (iv) the assessment of the extreme values on a time scale of a thousand years. The analysis uses two methods to obtain samples of extremes from the independent sea states: the r-largest annual maxima and the peak-over-threshold. The two methods show statistical differences in retrieving the return values and, more generally, in describing the significant wave field. The r-largest annual maxima method provides more reliable predictions of the extreme values, especially for small return periods (<100 years). Finally, the study statistically demonstrates the existence of decadal negative trends in the significant wave heights and thereby conveys useful information on the wave climatology of the Italian seas during the second half of the 20th century.
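
    Of the two sampling methods mentioned, the peak-over-threshold approach is sketched below: exceedances of significant wave height over a high threshold are fitted with a generalized Pareto distribution and converted into a return level. The synthetic data, threshold choice and record length are placeholders, not the WAM/ERA-40 series of the study.

```python
# Minimal sketch of the peak-over-threshold (POT) method: fit a
# generalized Pareto distribution (GPD) to exceedances of significant
# wave height over a high threshold, then derive a return level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = 42
hs = rng.gumbel(loc=2.0, scale=0.6, size=365 * years)  # fake daily Hs (m)

u = np.quantile(hs, 0.98)                 # illustrative threshold choice
exc = hs[hs > u] - u                      # exceedances over threshold
shape, _, scale = stats.genpareto.fit(exc, floc=0.0)

lam = exc.size / years                    # mean exceedances per year
T = 100.0                                 # return period (years)
# GPD return level; for shape -> 0 this tends to u + scale*log(lam*T)
h_T = u + (scale / shape) * ((lam * T) ** shape - 1.0)
print(f"threshold = {u:.2f} m, estimated {T:.0f}-yr Hs = {h_T:.2f} m")
```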

  13. Fiberoptic characteristics for extreme operating environments

    NASA Technical Reports Server (NTRS)

    Delcher, R. C.

    1992-01-01

    Fiberoptics could offer several major benefits for cryogenic liquid-fueled rocket engines, including lightning immunity, weight reduction, and the possibility of implementing a number of new measurements for engine condition monitoring. The technical feasibility of using fiberoptics in the severe environments posed by cryogenic liquid-fueled rocket engines was determined. The issues of importance and subsequent requirements for this use of fiberoptics were compiled. These included temperature ranges, moisture embrittlement susceptibility, and the ability to withstand extreme shock and vibration levels. Different types of optical fibers were evaluated, and the ability of several types of optical fibers to withstand use in cryogenic liquid-fueled rocket engines was demonstrated through environmental testing of samples. This testing included: cold-bend testing, moisture embrittlement testing, temperature cycling, temperature extremes testing, vibration testing, and shock testing. Three of five fiber samples withstood the tests to a level proving feasibility, and two of these remained intact in all six of the tests. A fiberoptic bundle was also tested, and completed testing without breakage. Preliminary cabling and harnessing for fiber protection was also demonstrated. According to cable manufacturers, the successful -300 F cold bend, vibration, and shock tests are the first instance of any major fiberoptic cable testing below roughly -55 F. This program has demonstrated the basic technical feasibility of implementing optical fibers on cryogenic liquid-fueled rocket engines, and a development plan is included highlighting requirements and issues for such an implementation.

  14. Extreme sensory processing patterns show a complex association with depression, and impulsivity, alexithymia, and hopelessness.

    PubMed

    Serafini, Gianluca; Gonda, Xenia; Canepa, Giovanna; Pompili, Maurizio; Rihmer, Zoltan; Amore, Mario; Engel-Yeger, Batya

    2017-03-01

    The involvement of extreme sensory processing patterns, impulsivity, alexithymia, and hopelessness was hypothesized to contribute to the complex pathophysiology of major depression and bipolar disorder. However, the nature of the relation between these variables has not been thoroughly investigated. This study aimed to explore the association between extreme sensory processing patterns, impulsivity, alexithymia, depression, and hopelessness. We recruited 281 euthymic participants (mean age = 47.4±12.1 years), of whom 62.3% had unipolar major depression and 37.7% had bipolar disorder. All participants completed the Adolescent/Adult Sensory Profile (AASP), Toronto Alexithymia Scale (TAS-20), second version of the Beck Depression Inventory (BDI-II), Barratt Impulsivity Scale (BIS), and Beck Hopelessness Scale (BHS). Lower registration of sensory input showed a significant correlation with depression, impulsivity, attentional/motor impulsivity, and alexithymia. It was significantly more frequent among participants with elevated hopelessness, and accounted for 22% of the variance in depression severity, 15% in greater impulsivity, 36% in alexithymia, and 3% in hopelessness. Elevated sensory seeking correlated with enhanced motor impulsivity and decreased non-planning impulsivity. Higher sensory sensitivity and sensory avoiding correlated with depression, impulsivity, and alexithymia. The study was limited by its relatively small sample size and cross-sectional design. Furthermore, only self-report measures that may be potentially biased by social desirability were used. Extreme sensory processing patterns, impulsivity, alexithymia, depression, and hopelessness may show a characteristic pattern in patients with major affective disorders. The careful assessment of sensory profiles may help in developing targeted interventions and improve functional/adaptive strategies. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations for minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
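
    The instability described above can be demonstrated with a small simulation: generate counts from a Poisson-gamma (negative binomial) model with a low mean and a small sample, then re-estimate the dispersion parameter by maximum likelihood a few times. A hedged sketch using statsmodels follows; the parameter values are invented and this is not the paper's simulation design.

```python
# Minimal sketch: with a low sample mean and small n, the maximum
# likelihood estimate of the negative binomial dispersion parameter
# varies wildly across replications. Illustrative setup only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
mu, alpha_true, n = 0.5, 0.5, 50      # low mean, small sample

for rep in range(5):
    # Poisson-gamma mixture: lam ~ Gamma with mean mu, then y ~ Poisson(lam)
    lam = rng.gamma(shape=1 / alpha_true, scale=mu * alpha_true, size=n)
    y = rng.poisson(lam)
    X = np.ones((n, 1))               # intercept-only model
    res = sm.NegativeBinomial(y, X).fit(disp=0)
    print(f"rep {rep}: estimated dispersion alpha = {res.params[-1]:.3f}")
```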

  16. Extremely preterm infants small for gestational age are at risk for motor impairment at 3 years corrected age.

    PubMed

    Kato, Takeshi; Mandai, Tsurue; Iwatani, Sota; Koda, Tsubasa; Nagasaka, Miwako; Fujita, Kaori; Kurokawa, Daisuke; Yamana, Keiji; Nishida, Kosuke; Taniguchi-Ikeda, Mariko; Tanimura, Kenji; Deguchi, Masashi; Yamada, Hideto; Iijima, Kazumoto; Morioka, Ichiro

    2016-02-01

    Few studies have targeted psychomotor development and associated perinatal risk factors in Japanese very low birth weight (VLBW) infants who are severely small for gestational age (SGA). A single-center study was conducted in 104 Japanese VLBW infants who were born preterm, due to maternal, umbilical cord, or placental abnormalities, between 2000 and 2007. Psychomotor development, expressed as a developmental quotient (DQ), was assessed using the Kyoto Scale of Psychological Development at 3 years corrected age. Severe SGA was defined as a birth weight or length more than 2 standard deviations below the mean values for the same gestational age. VLBW infants were divided into 2 subgroups based on gestational age at birth: ⩾28 weeks (n=64) and <28 weeks (n=40). DQs of infants with severe SGA were compared with those of infants who were appropriate for gestational age (AGA). Factors associated with developmental disabilities in VLBW infants with severe SGA (n=23) were determined. In the group born at ⩾28 weeks gestation, infants with severe SGA had normal DQ values and did not differ significantly from those with AGA. However, in the group born at <28 weeks gestation, severe SGA infants had significantly lower postural-motor DQ values than AGA infants. Gestational age <28 weeks was an independent factor for low postural-motor DQ, regardless of the cause of severe SGA or pregnancy termination. Extremely preterm newborns with severe SGA are at risk of motor developmental disability at age 3 years. Copyright © 2015 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  17. Extremal entanglement witnesses

    NASA Astrophysics Data System (ADS)

    Hansen, Leif Ove; Hauge, Andreas; Myrheim, Jan; Sollid, Per Øyvind

    2015-02-01

    We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus TrΩρ≥0 when Ω is a witness and ρ is a pure product state, ρ=ψψ† with ψ=ϕ⊗χ. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value f(ϕ,χ)=TrΩρ as a function of vectors ϕ and χ is a positive semidefinite biquadratic form. Every zero of f(ϕ,χ) imposes strong real-linear constraints on f and Ω. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of f(ϕ,χ) is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the

  19. Sample environment for in situ synchrotron corrosion studies of materials in extreme environments

    DOE PAGES

    Elbakhshwan, Mohamed S.; Gill, Simerjeet K.; Motta, Arthur T.; ...

    2016-10-25

    A new in situ sample environment has been designed and developed to study the interfacial interactions of nuclear cladding alloys with high-temperature steam. The sample environment is particularly optimized for synchrotron X-ray diffraction (XRD) studies for in situ structural analysis. It is highly corrosion resistant and can be readily adapted for steam environments. The design complies with the ASTM G2 standard for studying corrosion in zirconium and its alloys and offers remote temperature and pressure monitoring during in situ data collection. The use of the in situ sample environment is exemplified by monitoring the oxidation of metallic zirconium during exposure to steam at 350°C. Finally, the in situ sample environment provides a powerful tool for fundamental understanding of corrosion mechanisms by elucidating the substoichiometric oxide phases formed during early stages of corrosion, which can provide a better understanding of the oxidation process.

  20. High-resolution X-ray diffraction with no sample preparation

    PubMed Central

    Turner, S. M. R.; Degryse, P.; Shortland, A. J.

    2017-01-01

    It is shown that energy-dispersive X-ray diffraction (EDXRD) implemented in a back-reflection geometry is extremely insensitive to sample morphology and positioning even in a high-resolution configuration. This technique allows high-quality X-ray diffraction analysis of samples that have not been prepared and is therefore completely non-destructive. The experimental technique was implemented on beamline B18 at the Diamond Light Source synchrotron in Oxfordshire, UK. The majority of the experiments in this study were performed with pre-characterized geological materials in order to elucidate the characteristics of this novel technique and to develop the analysis methods. Results are presented that demonstrate phase identification, the derivation of precise unit-cell parameters and extraction of microstructural information on unprepared rock samples and other sample types. A particular highlight was the identification of a specific polytype of a muscovite in an unprepared mica schist sample, avoiding the time-consuming and difficult preparation steps normally required to make this type of identification. The technique was also demonstrated in application to a small number of fossil and archaeological samples. Back-reflection EDXRD implemented in a high-resolution configuration shows great potential in the crystallographic analysis of cultural heritage artefacts for the purposes of scientific research such as provenancing, as well as contributing to the formulation of conservation strategies. Possibilities for moving the technique from the synchrotron into museums are discussed. The avoidance of the need to extract samples from high-value and rare objects is a highly significant advantage, applicable also in other potential research areas such as palaeontology, and the study of meteorites and planetary materials brought to Earth by sample-return missions. PMID:28660862
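
    For readers unfamiliar with the energy-dispersive geometry, the sketch below evaluates the underlying Bragg condition E = hc/(2 d sin theta): with the scattering angle 2-theta close to 180 degrees, each lattice spacing d maps to a photon energy E of roughly 6.2/d (E in keV, d in Å). The example d-spacings are illustrative and not taken from the paper.

```python
# Minimal sketch of the energy-dispersive Bragg condition behind
# back-reflection EDXRD: E = hc / (2 * d * sin(theta)). Example
# d-spacings are hypothetical, chosen only for illustration.
import numpy as np

HC_KEV_ANGSTROM = 12.398  # h*c in keV*angstrom

def peak_energy_kev(d_angstrom, two_theta_deg=178.0):
    theta = np.radians(two_theta_deg / 2.0)
    return HC_KEV_ANGSTROM / (2.0 * d_angstrom * np.sin(theta))

for d in (4.26, 3.34, 1.82):          # illustrative mineral d-spacings
    print(f"d = {d:.2f} Å  ->  E = {peak_energy_kev(d):.2f} keV")
```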

  1. Development of synchrotron X-ray micro-tomography under extreme conditions of pressure and temperature.

    PubMed

    Álvarez-Murga, M; Perrillat, J P; Le Godec, Y; Bergame, F; Philippe, J; King, A; Guignot, N; Mezouar, M; Hodeau, J L

    2017-01-01

    X-ray tomography is a non-destructive three-dimensional imaging/microanalysis technique selective to a wide range of properties such as density, chemical composition, chemical states and crystallographic structure, with extremely high sensitivity and spatial resolution. Here the development of in situ high-pressure, high-temperature micro-tomography using a rotating module for the Paris-Edinburgh cell combined with synchrotron radiation is described. By rotating the sample chamber by 360°, the limited angular aperture of ordinary high-pressure cells is surmounted. Such a non-destructive high-resolution probe provides three-dimensional insight into the morphological and structural evolution of crystalline as well as amorphous phases during high-pressure and high-temperature treatment. To demonstrate the potential of this new experimental technique, the compression behavior of a basalt glass is investigated by X-ray absorption tomography, and diffraction/scattering tomography imaging of the structural changes during the polymerization of C60 molecules under pressure is performed. The small size and weight of the loading frame and rotating module mean that this apparatus is portable and can be readily installed at most synchrotron facilities to take advantage of the diversity of three-dimensional imaging techniques available at beamlines. This experimental breakthrough should open new ways for in situ imaging of materials under extreme pressure-temperature-stress conditions, impacting diverse areas in physics, chemistry, geology and materials sciences.

  2. Transport Functions Dominate the SAR11 Metaproteome at Low-Nutrient Extremes in the Sargasso Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowell, Sarah M.; Wilhelm, Larry; Norbeck, Angela D.

    2009-01-01

    The northwestern Sargasso Sea is part of the North Atlantic subtropical oceanic gyre that is characterized as seasonally oligotrophic, with pronounced stratification in the summer and autumn. Essentially a marine desert, this region has reduced biological productivity during stratified periods as a result of low concentrations of phosphorus and nitrogen in the euphotic zone. To better understand the mechanisms of microbial survival in this oligotrophic environment, we used capillary LC-tandem mass spectrometry to study the composition of microbial proteomes in surface samples collected in September 2005. A total of 2279 peptides that mapped to 236 SAR11 proteins, and 3208 peptides that mapped to 404 Synechococcus proteins, were detected. Mass spectra from SAR11 periplasmic binding proteins accounted for a disproportionately large fraction of the peptides detected, consistent with observations that these extremely small cells devote a large proportion of their volume to periplasm. Abundances were highest for periplasmic substrate-binding proteins for phosphate, amino acids, phosphonate, sugars, and spermidine. Although the data showed that a large fraction of microbial protein synthesis in the Sargasso Sea is devoted to inorganic and organic nutrient acquisition, the proteomes of both SAR11 and Synechococcus also indicated that these populations were actively growing. Our findings support the view that competition for multiple nutrients in oligotrophic systems is extreme but sufficient to sustain microbial community activity.

  3. Atmospheric rivers and past hydrometeorological extremes: Challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Sodemann, Harald

    2017-04-01

    Atmospheric rivers are a key term for describing water vapour transport in extratropical regions. The concept has become particularly valuable for linking meteorological process understanding with research focused on the impacts of heavy precipitation. Atmospheric rivers are narrow, elongated features of high integrated water vapour and water vapour flux that can lead to severe precipitation and flooding if moisture is extracted efficiently. The orographic rises at the west coast of the United States and in western Norway are regions where atmospheric rivers are one of the prime mechanisms for moisture delivery and precipitation extremes in the present climate. Due to the small horizontal scales of some of the processes, climate models are challenged to represent this important transport process between the mid-latitudes and the subtropics faithfully. Recent aircraft data and regional tracer model studies provide new insight into the formation and moisture transport mechanisms. In this study I review the concept and pertinent processes of atmospheric rivers, focusing on caveats, challenges and opportunities for understanding past hydrometeorological extremes.
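
    A standard diagnostic behind atmospheric-river detection, the vertically integrated vapour transport (IVT), is simple to compute from a sounding: IVT = (1/g)|∫ q (u, v) dp|. The sketch below evaluates it on an invented profile; thresholds near 250 kg m^-1 s^-1 are often quoted in the AR literature, though the exact criterion varies by study.

```python
# Minimal sketch: integrated vapour transport (IVT) from specific
# humidity q and winds (u, v) on pressure levels. Profile values
# are invented for illustration.
import numpy as np

g = 9.81                                                  # m s^-2
p = np.array([1000, 925, 850, 700, 500, 300.]) * 100.0    # Pa
q = np.array([8.0, 7.0, 5.5, 3.0, 1.0, 0.1]) * 1e-3       # kg/kg
u = np.array([5.0, 12.0, 18.0, 22.0, 25.0, 30.0])         # m/s
v = np.array([8.0, 15.0, 20.0, 18.0, 12.0, 8.0])          # m/s

# Integrate from the surface upward (dp < 0, hence the minus sign)
ivt_u = -np.trapz(q * u, p) / g
ivt_v = -np.trapz(q * v, p) / g
ivt = np.hypot(ivt_u, ivt_v)
print(f"IVT = {ivt:.0f} kg m^-1 s^-1")
```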

  4. Overrepresentation of extreme events in decision making reflects rational use of cognitive resources.

    PubMed

    Lieder, Falk; Griffiths, Thomas L; Hsu, Ming

    2018-01-01

    People's decisions and judgments are disproportionately swayed by improbable but extreme eventualities, such as terrorism, that come to mind easily. This article explores whether such availability biases can be reconciled with rational information processing by taking into account the fact that decision makers value their time and have limited cognitive resources. Our analysis suggests that, to make optimal use of their finite time, decision makers should overrepresent the most important potential consequences relative to less important, but potentially more probable, outcomes. To evaluate this account, we derive and test a model we call utility-weighted sampling. Utility-weighted sampling estimates the expected utility of potential actions by simulating their outcomes. Critically, outcomes with more extreme utilities have a higher probability of being simulated. We demonstrate that this model can explain not only people's availability bias in judging the frequency of extreme events but also a wide range of cognitive biases in decisions from experience, decisions from description, and memory recall. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
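
    The core of the utility-weighted sampling idea can be sketched in a few lines: simulate outcomes with probability proportional to p(o)·|u(o)| rather than p(o), then undo the distortion with importance weights when estimating expected utility. The toy numbers below are invented, and the published model uses a more refined weighting (extremity relative to a reference point), so treat this as a simplified variant.

```python
# Minimal sketch of a utility-weighted sampler: extreme-utility
# outcomes are oversampled, and importance weights keep the
# expected-utility estimate unbiased. Toy numbers, not the paper's code.
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.94, 0.05, 0.01])        # outcome probabilities
u = np.array([1.0, -10.0, -100.0])      # outcome utilities (one extreme)

q = p * np.abs(u)                       # utility-weighted proposal
q /= q.sum()

idx = rng.choice(len(p), size=10_000, p=q)
w = p[idx] / q[idx]                     # importance weights
est = np.mean(w * u[idx])               # unbiased estimate of E[u]
print(f"true E[u] = {p @ u:.3f}, utility-weighted estimate = {est:.3f}")
```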

  5. An Overview of 2014 SBIR Phase I and Phase II Materials Structures for Extreme Environments

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung D.; Steele, Gynelle C.; Morris, Jessica R.

    2015-01-01

    NASA's Small Business Innovation Research (SBIR) program focuses on technological innovation by investing in the development of innovative concepts and technologies to help NASA mission directorates address critical research needs for Agency programs. This report highlights nine of the innovative SBIR 2014 Phase I and Phase II projects that emphasize one of NASA Glenn Research Center's six core competencies: Materials and Structures for Extreme Environments. The technologies cover a wide spectrum of applications such as high-temperature environmental barrier coating systems, deployable space structures, solid oxide fuel cells, and self-lubricating hard coatings for extreme temperatures. Each featured entry describes the innovation and technical objective and highlights NASA, commercial, and industrial applications. This report provides an opportunity for NASA engineers, researchers, and program managers to learn how NASA SBIR technologies could help their programs and projects, and could lead to collaborations and partnerships between the small SBIR companies and NASA that would benefit both.

  6. Recommended protocols for sampling macrofungi

    Treesearch

    Gregory M. Mueller; John Paul Schmit; Sabine M. Hubndorf; Leif Ryvarden; Thomas E. O'Dell; D. Jean Lodge; Patrick R. Leacock; Milagro Mata; Loengrin Umania; Qiuxin (Florence) Wu; Daniel L. Czederpiltz

    2004-01-01

    This chapter discusses several issues regarding recommended protocols for sampling macrofungi: opportunistic sampling of macrofungi, sampling conspicuous macrofungi using fixed-size plots, sampling small Ascomycetes using microplots, and sampling a fixed number of downed logs.

  7. Kindergarten classroom functioning of extremely preterm/extremely low birth weight children.

    PubMed

    Wong, Taylor; Taylor, H Gerry; Klein, Nancy; Espy, Kimberly A; Anselmo, Marcia G; Minich, Nori; Hack, Maureen

    2014-12-01

    Cognitive, behavioral, and learning problems are evident in extremely preterm/extremely low birth weight (EPT/ELBW, <28 weeks gestational age or <1000 g) children by early school age. However, we know little about how they function within the classroom once they start school. To determine how EPT/ELBW children function in kindergarten classrooms compared to term-born normal birth weight (NBW) classmates and to identify factors related to difficulties in classroom functioning, a 2001-2003 birth cohort of 111 EPT/ELBW children and 110 NBW classmate controls were observed in regular kindergarten classrooms during a 1-hour instructional period using a time-sample method. The groups were compared on frequencies of individual teacher attention, competing or off-task behaviors, task management/preparation, and academic responding. Regression analysis was also conducted within the EPT/ELBW group to examine associations of these measures with neonatal and developmental risk factors, kindergarten neuropsychological and behavioral assessments, and classroom characteristics. The EPT/ELBW group received more individual teacher attention and was more often off-task than the NBW controls. Poorer classroom functioning in the EPT/ELBW group was associated with higher neonatal and developmental risk, poorer executive function skills, more negative teacher ratings of behavior and learning progress, and classroom characteristics. EPT/ELBW children require more teacher support and are less able to engage in instructional activities than their NBW classmates. Associations of classroom functioning with developmental history and cognitive and behavioral traits suggest that these factors may be useful in identifying the children most in need of special educational interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Extreme ultraviolet reflectivity studies of gold on glass and metal substrates

    NASA Technical Reports Server (NTRS)

    Jelinsky, Sharon R.; Malina, Roger F.; Jelinsky, Patrick

    1988-01-01

    The paper reports measurements of the extreme ultraviolet reflectivity of gold from 44 to 920 A at grazing incidence. Gold was deposited using vacuum evaporation and electroplating on substrates of glass and polished nickel, respectively. Measurements are also presented of the extreme ultraviolet reflectivity of electroless nickel in the same wavelength region, where one of the polished nickel substrates was used as a sample. Derived optical constants for evaporated and electroplated gold and electroless nickel are presented. Additional studies of the effects of various contaminants on the EUV reflectivity are also reported. The variations of the optical constants are discussed in terms of density variations, surface roughness and contamination effects. These results are reported as part of studies for the Extreme Ultraviolet Explorer satellite program to determine acceptance criteria for the EUV optics, contamination budgets and calibration plans.

  9. Effect of Lower Extremity Stretching Exercises on Balance in Geriatric Population.

    PubMed

    Reddy, Ravi Shankar; Alahmari, Khalid A

    2016-07-01

    The purpose of this study was to determine the effect of lower extremity stretching exercises on balance in the geriatric population. Sixty subjects (30 male and 30 female) participated in the study. The subjects underwent a 10-week lower limb stretching exercise program. Before and after the 10-week program, the subjects were assessed for balance using single-limb stance time in seconds and the Berg balance score. These outcome measures were analyzed, and the pre- and post-stretching balance results were compared using a paired t-test. Of the 60 subjects, 50 completed the stretching exercise program. Paired-sample t-test analysis showed a significant improvement in single-limb stance time (eyes open and eyes closed) (p<0.001) and Berg balance score (p<0.001). Lower extremity stretching exercises enhance balance in the geriatric population and may thereby reduce the number of falls.
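
    The analysis reported above is a standard paired comparison. A minimal sketch with scipy, using invented pre/post single-limb stance times rather than the study's data:

```python
# Minimal sketch of a paired t-test on pre/post balance measurements.
# The numbers are hypothetical, not the study's data.
import numpy as np
from scipy import stats

pre  = np.array([8.2, 6.5, 7.1, 9.0, 5.8, 7.7, 6.9, 8.4])   # seconds
post = np.array([9.1, 7.4, 7.9, 10.2, 6.6, 8.5, 7.2, 9.6])  # seconds

t, p = stats.ttest_rel(post, pre)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```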

  10. Adaptation of micro-diffusion method for the analysis of (15) N natural abundance of ammonium in samples with small volume.

    PubMed

    Zhang, Shasha; Fang, Yunting; Xi, Dan

    2015-07-30

    There are several preparation methods for the measurement of the nitrogen (N) isotopic composition of ammonium (NH4+) in different types of samples (freshwater, saltwater and soil extracts). The diffusion method is the most popular and it involves NH4+ in solutions being released under alkaline conditions and then immediately trapped by an acidified filter. However, the traditional preparation is designed for samples with large volume and relatively high N concentrations. The performance of diffusion for small-volume samples (e.g., a few milliliters) remains unknown. We examined the overall performance of micro-diffusion on 5 mL samples on varying the incubation time, temperature and initial NH4+ concentration. The trapped ammonia was chemically converted into nitrous oxide (N2O) with hypobromite and hydroxylamine in sequence. The produced N2O was analyzed by a commercially available purge and cryogenic trap system coupled to an isotope ratio mass spectrometer. We found that diffusion can be complete with no more than 7 days of treatment at 37 °C. Increasing the temperature to 50 °C and the incubation time to 11 days did not improve the overall performance. There were no significant differences in the overall performance during diffusion with NH4+ concentrations from 15 to 60 μM. The blank size was relatively large, and the N contamination might come from the reagents, especially KCl salts. The method presented here combines micro-diffusion and hypobromite oxidation and hydroxylamine reduction. It is suitable for samples with small volume and low NH4+ concentrations. Our study demonstrates that the NH4+ concentrations in samples can be as low as 15 μM, and a volume of 5 mL is sufficient for this method. We suggest that this method can be used for the routine determination of 15N/14N for either natural abundance or 15N-enriched NH4+. Copyright © 2015 John Wiley & Sons, Ltd.

  11. Extremal black holes in dynamical Chern-Simons gravity

    NASA Astrophysics Data System (ADS)

    McNees, Robert; Stein, Leo C.; Yunes, Nicolás

    2016-12-01

    Rapidly rotating black hole (BH) solutions in theories beyond general relativity (GR) play a key role in experimental gravity, as they allow us to compute observables in extreme spacetimes that deviate from the predictions of GR. Such solutions are often difficult to find in beyond-general-relativity theories due to the inclusion of additional fields that couple to the metric nonlinearly and non-minimally. In this paper, we consider rotating BH solutions in one such theory, dynamical Chern-Simons (dCS) gravity, where the Einstein-Hilbert action is modified by the introduction of a dynamical scalar field that couples to the metric through the Pontryagin density. We treat dCS gravity as an effective field theory and work in the decoupling limit, where corrections are treated as small perturbations from GR. We perturb about the maximally rotating Kerr solution, the so-called extremal limit, and develop mathematical insight into the analysis techniques needed to construct solutions for generic spin. First we find closed-form, analytic expressions for the extremal scalar field, and then determine the trace of the metric perturbation, giving both in terms of Legendre decompositions. Retaining only the first three and four modes in the Legendre representation of the scalar field and the trace, respectively, suffices to ensure a fidelity of over 99% relative to full numerical solutions. The leading-order mode in the Legendre expansion of the trace of the metric perturbation contains a logarithmic divergence at the extremal Kerr horizon, which is likely to be unimportant as it occurs inside the perturbed dCS horizon. The techniques employed here should enable the construction of analytic, closed-form expressions for the scalar field and metric perturbations on a background with arbitrary rotation.

  12. Comparative study of soft thermal printing and lamination of dry thick photoresist films for the uniform fabrication of polymer MOEMS on small-sized samples

    NASA Astrophysics Data System (ADS)

    Abada, S.; Salvi, L.; Courson, R.; Daran, E.; Reig, B.; Doucet, J. B.; Camps, T.; Bardinal, V.

    2017-05-01

    A method called ‘soft thermal printing’ (STP) was developed to ensure the optimal transfer of 50 µm-thick dry epoxy resist films (DF-1050) onto small-sized samples. The aim was the uniform fabrication of high-aspect-ratio polymer-based MOEMS (micro-optical-electrical-mechanical systems) on small and/or fragile samples, such as GaAs. The printing conditions were optimized, and the resulting thickness uniformity profiles were compared to those obtained via lamination and standard SU-8 spin-coating. Under the best conditions tested, STP and lamination produced similar results, with a maximum deviation from the central thickness of 3% along the sample surface, compared to greater than 40% for SU-8 spin-coating. Both methods were successfully applied to the collective fabrication of DF1050-based MOEMS designed for the dynamic focusing of VCSELs (vertical-cavity surface-emitting lasers). Similar, efficient electro-thermo-mechanical behaviour was obtained in both cases.

  13. Heatshield for Extreme Entry Environment Technology (HEEET)

    NASA Technical Reports Server (NTRS)

    Venkatapathy, E.; Ellerby, D.; Stackpoole, M.; Peterson, K.; Gage, P.; Beerman, A.; Blosser, M.; Chinnapongse, R.; Dillman, R.; Feldman, J.

    2013-01-01

    The Heatshield for Extreme Entry Environment Technology (HEEET) project is based on 3-D woven TPS, an emerging, innovative and game-changing technology funded by SMD and STMD to fill the ablative TPS gap that currently exists for reaching the depths of Saturn and Venus. Woven TPS technology will address the challenges currently faced by the Venus, Saturn, and higher-speed sample return mission science community due to the lack of availability of the only existing TPS, namely carbon phenolic, and will enable the science community to move forward with proposals in this decade. This presentation describes the approach to maturing the technology in the next three years, enabling NF-4 mission proposers to address the challenges of Venus, Saturn or higher-speed sample return missions.

  14. An Incremental Type-2 Meta-Cognitive Extreme Learning Machine.

    PubMed

    Pratama, Mahardhika; Zhang, Guangquan; Er, Meng Joo; Anavatti, Sreenatha

    2017-02-01

    Existing extreme learning algorithms have not taken into account four issues: 1) complexity; 2) uncertainty; 3) concept drift; and 4) high dimensionality. A novel incremental type-2 meta-cognitive extreme learning machine (ELM), called evolving type-2 ELM (eT2ELM), is proposed in this paper to cope with these four issues. The eT2ELM presents the three main pillars of human meta-cognition: 1) what-to-learn; 2) how-to-learn; and 3) when-to-learn. The what-to-learn component selects important training samples for model updates by virtue of an online certainty-based active learning method, which renders eT2ELM a semi-supervised classifier. The how-to-learn element develops a synergy between extreme learning theory and the evolving concept, whereby hidden nodes can be generated and pruned automatically from data streams with no tuning of hidden nodes. The when-to-learn constituent makes use of the standard sample reserved strategy. A generalized interval type-2 fuzzy neural network is also put forward as a cognitive component, in which a hidden node is built upon the interval type-2 multivariate Gaussian function while exploiting a subset of Chebyshev series in the output node. The efficacy of the proposed eT2ELM is numerically validated on 12 data streams containing various concept drifts. The numerical results are confirmed by thorough statistical tests, where the eT2ELM demonstrates the most encouraging numerical results in delivering reliable predictions while sustaining low complexity.
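
    For orientation, the sketch below implements only the classical ELM core that eT2ELM builds on: a single hidden layer with random, fixed input weights, and output weights obtained by least squares. The type-2 fuzzy neurons, meta-cognitive components and online evolution described above are not shown.

```python
# Minimal sketch of a basic extreme learning machine (ELM): random,
# untrained hidden layer; output weights solved in closed form.
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```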

  15. Microhabitats reduce animal's exposure to climate extremes.

    PubMed

    Scheffers, Brett R; Edwards, David P; Diesmos, Arvin; Williams, Stephen E; Evans, Theodore A

    2014-02-01

    Extreme weather events, such as unusually hot or dry conditions, can cause death by exceeding physiological limits, and so cause loss of population. Survival will depend on whether or not susceptible organisms can find refuges that buffer extreme conditions. Microhabitats offer different microclimates to those found within the wider ecosystem, but do these microhabitats effectively buffer extreme climate events relative to the physiological requirements of the animals that frequent them? We collected temperature data from four common microhabitats (soil, tree holes, epiphytes, and vegetation) located from the ground to canopy in primary rainforests in the Philippines. Ambient temperatures were monitored from outside of each microhabitat and from the upper forest canopy, which represent our macrohabitat controls. We measured the critical thermal maxima (CTmax ) of frog and lizard species, which are thermally sensitive and inhabit our microhabitats. Microhabitats reduced mean temperature by 1-2 °C and reduced the duration of extreme temperature exposure by 14-31 times. Microhabitat temperatures were below the CTmax of inhabitant frogs and lizards, whereas macrohabitats consistently contained lethal temperatures. Microhabitat temperatures increased by 0.11-0.66 °C for every 1 °C increase in macrohabitat temperature, and this nonuniformity in temperature change influenced our forecasts of vulnerability for animal communities under climate change. Assuming uniform increases of 6 °C, microhabitats decreased the vulnerability of communities by up to 32-fold, whereas under nonuniform increases of 0.66 to 3.96 °C, microhabitats decreased the vulnerability of communities by up to 108-fold. Microhabitats have extraordinary potential to buffer climate and likely reduce mortality during extreme climate events. These results suggest that predicted changes in distribution due to mortality and habitat shifts that are derived from macroclimatic samples and that assume

  16. Extreme Events in Urban Streams Leading to Extreme Temperatures in Birmingham, UK

    NASA Astrophysics Data System (ADS)

    Rangecroft, S.; Croghan, D.; Van Loon, A.; Sadler, J. P.; Hannah, D. M.

    2016-12-01

    Extreme flows and high water temperature events act as critical stressors on the ecological health of rivers. Urban headwater streams are considered particularly vulnerable to the effects of these extreme events. Despite this, such catchments remain poorly characterised, and the effect of differences in land use is rarely quantified, especially in relation to water temperature. Thus a key research gap has emerged in understanding the patterns of water temperature during extreme events within contrasting urban headwater catchments. We studied the headwaters of two bordering urban catchments of contrasting land use within Birmingham, UK. To characterise response to extreme events, precipitation and flow were analysed for the period 1970-2016. To analyse the effects of extreme events on water temperature, 10 temperature loggers recording at 15-minute intervals were placed within each catchment, covering a range of land use, for the period May 2016 - present. During peak-over-threshold flood events, higher average peaks were observed in the less urbanised catchment; however, the highest maximum flow peaks took place in the more densely urbanised catchment. Very similar average drought durations were observed between the two catchments, with average flow drought durations of 27 days in the most urbanised catchment and 29 days in the less urbanised catchment. Flashier water temperature regimes were observed within the more urbanised catchment, and increases of up to 5 degrees were apparent within 30 minutes during certain storms at the most upstream sites. Only in the most extreme events did the more densely urban stream appear more susceptible to both extreme high flows and extreme water temperature events, possibly resulting from overland flow emerging as the dominant flow pathway during intense precipitation events. Water temperature surges tended to be highly spatially variable, indicating the importance of local land use. During smaller events, water temperature was less

  17. Split Hopkinson resonant bar test for sonic-frequency acoustic velocity and attenuation measurements of small, isotropic geological samples.

    PubMed

    Nakagawa, Seiji

    2011-04-01

    Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to the problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver, the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with X-ray CT imaging. The described split Hopkinson resonant bar test is applied in two steps. First, extension- and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples and a natural rock sample. © 2011 American Institute of Physics
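
    As background, the conventional resonant-bar relation connects the fundamental extensional resonance of a free-free uniform bar to its Young's modulus, E = rho*(2*L*f1)^2. The sketch below evaluates this textbook limit with invented numbers; the split Hopkinson geometry described above instead requires the numerical inversion (and interface correction) discussed in the abstract.

```python
# Minimal sketch of the free-free resonant-bar relation
# E = rho * (2 * L * f1)^2. Values are hypothetical; the split
# Hopkinson configuration needs a full numerical inversion instead.
L = 0.30          # sample length, m
rho = 2650.0      # bulk density, kg/m^3
f1 = 6500.0       # measured fundamental extensional resonance, Hz

E = rho * (2.0 * L * f1) ** 2
print(f"Young's modulus ~ {E / 1e9:.1f} GPa")
```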

  18. Clustering on very small scales from a large sample of confirmed quasar pairs: does quasar clustering track from Mpc to kpc scales?

    NASA Astrophysics Data System (ADS)

    Eftekharzadeh, S.; Myers, A. D.; Hennawi, J. F.; Djorgovski, S. G.; Richards, G. T.; Mahabal, A. A.; Graham, M. J.

    2017-06-01

    We present the most precise estimate to date of the clustering of quasars on very small scales, based on a sample of 47 binary quasars with magnitudes of g < 20.85 and proper transverse separations of ~25 h^-1 kpc. Our sample of binary quasars, which is about six times larger than any previous spectroscopically confirmed sample on these scales, is targeted using a kernel density estimation (KDE) technique applied to Sloan Digital Sky Survey (SDSS) imaging over most of the SDSS area. Our sample is 'complete' in that all of the KDE target pairs with 17.0 ≲ R ≲ 36.2 h^-1 kpc in our area of interest have been spectroscopically confirmed from a combination of previous surveys and our own long-slit observational campaign. We catalogue 230 candidate quasar pairs with angular separations of <8 arcsec, from which our binary quasars were identified. We determine the projected correlation function of quasars (W̄p) in four bins of proper transverse scale over the range 17.0 ≲ R ≲ 36.2 h^-1 kpc. The implied small-scale quasar clustering amplitude from the projected correlation function, integrated across our entire redshift range, is A = 24.1 ± 3.6 at ~26.6 h^-1 kpc. Our sample is the first spectroscopically confirmed sample of quasar pairs that is sufficiently large to study how quasar clustering evolves with redshift at ~25 h^-1 kpc. We find that empirical descriptions of how quasar clustering evolves with redshift at ~25 h^-1 Mpc also adequately describe the evolution of quasar clustering at ~25 h^-1 kpc.

  19. Collective dynamics of 'small-world' networks.

    PubMed

    Watts, D J; Strogatz, S H

    1998-06-04

    Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
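
    The construction described above is straightforward to reproduce. A minimal sketch using networkx, whose watts_strogatz_graph follows the same ring-lattice-plus-rewiring recipe: starting from a ring lattice with k neighbours per node, each edge is rewired with probability p, and for small p the clustering stays high while the mean path length collapses.

```python
# Minimal sketch: clustering vs. mean path length for the
# ring-lattice rewiring model at three rewiring probabilities.
import networkx as nx

n, k = 1000, 10
for p in (0.0, 0.01, 1.0):        # regular -> small-world -> random
    G = nx.watts_strogatz_graph(n, k, p, seed=1)
    C = nx.average_clustering(G)
    Lpath = nx.average_shortest_path_length(G)
    print(f"p={p:<5} clustering={C:.3f}  mean path length={Lpath:.2f}")
```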

  20. Urine sample (image)

    MedlinePlus

    A "clean-catch" urine sample is performed by collecting the sample of urine in midstream. Men or boys should wipe clean the head ... water and rinse well. A small amount of urine should initially fall into the toilet bowl before ...

  1. Survival of extreme opinions

    NASA Astrophysics Data System (ADS)

    Hsu, Jiann-wien; Huang, Ding-wei

    2009-12-01

    We study the survival of extreme opinions in various processes of consensus formation. All the opinions are treated equally and subjected to the same rules of change. We investigate three typical models of reaching a consensus: (A) personal influence, (B) influence from surroundings, and (C) influence on surroundings. Starting with uniformly distributed random opinions, our calculated results show that extreme opinions can survive in both models (A) and (B), but not in model (C). We conclude that both personal influence and passive adaptation to the environment are not sufficient to eradicate all the extreme opinions. Only active persuasion that changes the surroundings eliminates the extreme opinions completely.

  2. Reliability of the mangled extremity severity score in combat-related upper and lower extremity injuries.

    PubMed

    Ege, Tolga; Unlu, Aytekin; Tas, Huseyin; Bek, Dogan; Turkan, Selim; Cetinkaya, Aytac

    2015-01-01

    Decisions regarding limb salvage or amputation are generally aided by several trauma scoring systems, such as the mangled extremity severity score (MESS). However, the reliability of these injury scores in the setting of open fractures due to explosives and missiles is challenging. Mortality and morbidity of extremity trauma due to firearms are generally associated with the time delay in revascularization, the injury mechanism, the anatomy of the injured site, associated injuries, age and the environmental circumstances. The purpose of this retrospective study was to evaluate the extent of extremity injuries due to ballistic missiles and to assess the reliability of the mangled extremity severity score (MESS) in both upper and lower extremities. Between 2004 and 2014, 139 Gustilo-Anderson Type III open fractures of the upper and lower extremities were enrolled in the study. Data on patient age, firearm type, transport time from the field to the hospital (and the method of transport), injury severity scores, MESS scores, fracture types, amputation levels, bone fixation methods, and postoperative infections and complications were retrieved from the two Level 2 trauma centers' databases. Sensitivity, specificity, and positive and negative predictive values of the MESS were calculated to assess its ability to indicate amputation in the mangled limb. Amputation was performed in 39 extremities and limb salvage was attempted in 100 extremities. The mean followup time was 14.6 months (range 6-32 months). In the amputated group, the mean MESS scores for the upper and lower extremity were 8.8 (range 6-11) and 9.24 (range 6-11), respectively. In the limb salvage group, the mean MESS scores for the upper and lower extremities were 5.29 (range 4-7) and 5.19 (range 3-8), respectively. Sensitivity of the MESS in the upper and lower extremities was calculated as 80% and 79.4%, and positive predictive values were 55.55% and 83.3%, respectively. Specificity of the MESS score for the upper and lower extremities was 84% and 86.6%; negative
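
    The sensitivity, specificity and predictive values quoted above all follow from a 2x2 table of MESS-predicted versus actual amputation. A minimal sketch with hypothetical counts (not the study's data):

```python
# Minimal sketch: diagnostic indices from a 2x2 confusion table of
# score-predicted vs. actual amputation. Counts are invented.
def diagnostic_indices(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

print(diagnostic_indices(tp=27, fp=5, fn=7, tn=60))
```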

  3. Instability of Poiseuille flow at extreme Mach numbers: linear analysis and simulations.

    PubMed

    Xie, Zhimin; Girimaji, Sharath S

    2014-04-01

    We develop the perturbation equations to describe instability evolution in Poiseuille flow at the limit of very high Mach numbers. At this limit the equation governing the flow is the pressure-released Navier-Stokes equation. The ensuing semianalytical solution is compared against simulations performed using the gas-kinetic method (GKM), resulting in excellent agreement. A similar comparison between analytical and computational results of small perturbation growth is performed at the incompressible (zero Mach number) limit, again leading to excellent agreement. The study accomplishes two important goals: it (i) contrasts the small perturbation evolution in Poiseuille flows at extreme Mach numbers and (ii) provides important verification of the GKM simulation scheme.

  4. ABRF-MARG RESEARCH STUDY: EVALUATION OF SMALL SAMPLE NUCLEIC ACID AMPLIFICATION TECHNOLOGIES FOR GENE EXPRESSION PROFILING

    EPA Science Inventory

    Microarrays have had a significant impact on many areas of biology. However, there are still many fertile research areas that would benefit from microarray analysis but are limited by the amount of biological material that can be obtained (e.g. samples obtained by small biopsy, f...

  5. Sample types applied for molecular diagnosis of therapeutic management of advanced non-small cell lung cancer in the precision medicine.

    PubMed

    Han, Yanxi; Li, Jinming

    2017-10-26

    In this era of precision medicine, molecular biology is becoming increasingly significant for the diagnosis and therapeutic management of non-small cell lung cancer. The specimen, as the primary element of the whole testing flow, is particularly important for maintaining the accuracy of gene alteration testing. Presently, the main sample types applied in routine diagnosis are tissue and cytology biopsies. Liquid biopsies are considered the most promising alternative when tissue and cytology samples are not available. Each sample type possesses its own strengths and weaknesses, pertaining to differences in sampling, preparation and preservation procedures, inter- and intratumor heterogeneity, the tumor cellularity (percentage and number of tumor cells) of specimens, etc., and none of them individually can be "one size fits all". Therefore, in this review, we summarized the strengths and weaknesses of the different sample types widely used in clinical practice, offered solutions to reduce the negative impact of the samples and proposed an optimized strategy for the choice of samples during the entire diagnostic course. We hope to provide valuable information to laboratories for choosing optimal clinical specimens to achieve comprehensive functional genomic landscapes and to formulate individually tailored treatment plans for NSCLC patients in advanced stages.

  6. Dynamically-downscaled projections of changes in temperature extremes over China

    NASA Astrophysics Data System (ADS)

    Guo, Junhong; Huang, Guohe; Wang, Xiuquan; Li, Yongping; Lin, Qianguo

    2018-02-01

    In this study, likely changes in extreme temperatures (including 16 indices) over China in response to global warming throughout the twenty-first century are investigated through the PRECIS regional climate modeling system. The PRECIS experiment is conducted at a spatial resolution of 25 km and is driven by a perturbed-physics ensemble to reflect spatial variations and model uncertainties. Simulations of the present climate (1961-1990) are compared with observations to validate the model performance in reproducing the historical climate over China. Results indicate that PRECIS demonstrates reasonable skill in reproducing the spatial patterns of observed extreme temperatures over most regions of China, especially in the east. Nevertheless, PRECIS shows a relatively poor performance in simulating the spatial patterns of extreme temperatures in the western mountainous regions, where its driving GCM exhibits more uncertainty due to insufficient observations, resulting in more errors in climate downscaling. Future spatio-temporal changes of extreme temperature indices are then analyzed for three successive periods (i.e., 2020s, 2050s and 2080s). The changes in extreme temperatures projected by PRECIS are consistent with the results of the major global climate models in both spatial and temporal patterns. Furthermore, PRECIS demonstrates a distinct superiority in providing more detailed spatial information on extreme indices. In general, all extreme indices show similar changes in spatial pattern: large changes are projected in the north while small changes are projected in the south. In contrast, the temporal patterns for all indices vary differently over future periods: the warm indices, such as SU, TR, WSDI, TX90p, TN90p and GSL, are likely to increase, while the cold indices, such as ID, FD, CSDI, TX10p and TN10p, are likely to decrease with time in response to global warming. Nevertheless, the magnitudes of changes in all indices tend to
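
    As a rough illustration of how indices like those named above are computed, the sketch below derives three of them (FD, SU and TX90p) from daily temperature series. This is a simplified, assumption-laden version: the formal ETCCDI definitions use calendar-day percentiles over a fixed base period, whereas here TX90p is taken against a single whole-base-period percentile, and all data are synthetic.

    ```python
    # Sketch: computing a few extreme-temperature indices (FD, SU, TX90p)
    # from daily minimum/maximum temperature series. Simplification: TX90p
    # uses one base-period 90th percentile rather than calendar-day percentiles.
    import numpy as np

    def frost_days(tn):
        """FD: number of days with daily minimum temperature below 0 degC."""
        return int(np.sum(np.asarray(tn) < 0.0))

    def summer_days(tx):
        """SU: number of days with daily maximum temperature above 25 degC."""
        return int(np.sum(np.asarray(tx) > 25.0))

    def tx90p(tx, tx_base):
        """TX90p: percentage of days with TX above the base-period 90th percentile."""
        threshold = np.percentile(tx_base, 90)
        tx = np.asarray(tx)
        return 100.0 * np.sum(tx > threshold) / tx.size

    # Toy usage with synthetic data: a 30-year base period and one future year
    rng = np.random.default_rng(0)
    tx_base = rng.normal(15, 10, size=30 * 365)   # hypothetical base-period TX
    tx_year = rng.normal(17, 10, size=365)        # hypothetical future-year TX
    tn_year = tx_year - 8.0                       # crude diurnal offset
    print(frost_days(tn_year), summer_days(tx_year), round(tx90p(tx_year, tx_base), 1))
    ```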

  7. Evaluation of the Biological Sampling Kit (BiSKit) for Large-Area Surface Sampling

    PubMed Central

    Buttner, Mark P.; Cruz, Patricia; Stetzenbach, Linda D.; Klima-Comba, Amy K.; Stevens, Vanessa L.; Emanuel, Peter A.

    2004-01-01

    Current surface sampling methods for microbial contaminants are designed to sample small areas and utilize culture analysis. The total number of microbes recovered is low because a small area is sampled, making detection of a potential pathogen more difficult. Furthermore, sampling of small areas requires a greater number of samples to be collected, which delays the reporting of results, taxes laboratory resources and staffing, and increases analysis costs. A new biological surface sampling method, the Biological Sampling Kit (BiSKit), designed to sample large areas and to be compatible with testing with a variety of technologies, including PCR and immunoassay, was evaluated and compared to other surface sampling strategies. In experimental room trials, wood laminate and metal surfaces were contaminated by aerosolization of Bacillus atrophaeus spores, a simulant for Bacillus anthracis, into the room, followed by settling of the spores onto the test surfaces. The surfaces were sampled with the BiSKit, a cotton-based swab, and a foam-based swab. Samples were analyzed by culturing, quantitative PCR, and immunological assays. The results showed that the large surface area (1 m2) sampled with the BiSKit resulted in concentrations of B. atrophaeus in samples that were up to 10-fold higher than the concentrations obtained with the other methods tested. A comparison of wet and dry sampling with the BiSKit indicated that dry sampling was more efficient (efficiency, 18.4%) than wet sampling (efficiency, 11.3%). The sensitivities of detection of B. atrophaeus on metal surfaces were 42 ± 5.8 CFU/m2 for wet sampling and 100.5 ± 10.2 CFU/m2 for dry sampling. These results demonstrate that the use of a sampling device capable of sampling larger areas results in higher sensitivity than that obtained with currently available methods and has the advantage of sampling larger areas, thus requiring collection of fewer samples per site. PMID:15574898

  8. Repeated Small Bowel Obstruction Caused by Chestnut Ingestion without the Formation of Phytobezoars.

    PubMed

    Satake, Ryu; Chinda, Daisuke; Shimoyama, Tadashi; Satake, Miwa; Oota, Rie; Sato, Satoshi; Yamai, Kiyonori; Hachimori, Hisashi; Okamoto, Yutaka; Yamada, Kyogo; Matsuura, Osamu; Hashizume, Tadashi; Soma, Yasushi; Fukuda, Shinsaku

    2016-01-01

    A small number of cases of small bowel obstruction caused by foods without the formation of phytobezoars have been reported. Repeated small bowel obstruction due to the ingestion of the same food is extremely rare. We present the case of a 63-year-old woman who developed small bowel obstruction twice due to the ingestion of chestnuts without the formation of phytobezoars. This is the first reported case of repeated small bowel obstruction caused by chestnut ingestion. Careful interviews are necessary to determine the meal history of elderly patients and psychiatric patients.

  9. Future Extreme Event Vulnerability in the Rural Northeastern United States

    NASA Astrophysics Data System (ADS)

    Winter, J.; Bowen, F. L.; Partridge, T.; Chipman, J. W.

    2017-12-01

    Future climate change impacts on humans will be determined by the convergence of evolving physical climate and socioeconomic systems. Of particular concern is the intersection of extreme events and vulnerable populations. Rural areas of the Northeastern United States have experienced increased temperature and precipitation extremes, especially over the past three decades, and face unique challenges due to their physical isolation, natural resources dependent economies, and high poverty rates. To explore the impacts of future extreme events on vulnerable, rural populations in the Northeast, we project extreme events and vulnerability indicators to identify where changes in extreme events and vulnerable populations coincide. Specifically, we analyze future (2046-2075) maximum annual daily temperature, minimum annual daily temperature, maximum annual daily precipitation, and maximum consecutive dry day length for Representative Concentration Pathways (RCP) 4.5 and 8.5 using four global climate models (GCM) and a gridded observational dataset. We then overlay those projections with estimates of county-level population and relative income for 2060 to calculate changes in person-events from historical (1976-2005), with a focus on Northeast counties that have less than 250,000 people and are in the bottom income quartile. We find that across the rural Northeast for RCP4.5, heat person-events per year increase tenfold, far exceeding decreases in cold person-events and relatively small changes in precipitation and drought person-events. Counties in the bottom income quartile have historically (1976-2005) experienced a disproportionate number of heat events, and counties in the bottom two income quartiles are projected to experience a greater heat event increase by 2046-2075 than counties in the top two income quartiles. We further explore the relative contributions of event frequency, population, and income changes to the total and geographic distribution of climate change
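
    The person-events metric used above combines event frequency with exposed population. A minimal sketch of that bookkeeping, under the assumption that person-events are simply events per year times county population (the study's exact weighting is not given here); the counties and numbers are entirely hypothetical.

    ```python
    # Sketch: county-level heat person-events, assuming
    # person-events = (heat events per year) x (county population).
    # Counties and values are hypothetical illustrations.
    counties = [
        # (name, heat_events_per_year, population, bottom_income_quartile)
        ("County A", 4.0, 60_000, True),
        ("County B", 2.5, 120_000, False),
        ("County C", 5.5, 40_000, True),
    ]

    total = sum(ev * pop for _, ev, pop, _ in counties)
    low_income = sum(ev * pop for _, ev, pop, low in counties if low)
    print(f"total person-events/yr: {total:,.0f}")
    print(f"share in bottom income quartile: {low_income / total:.1%}")
    ```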

  10. Rasch validation of the Arabic version of the lower extremity functional scale.

    PubMed

    Alnahdi, Ali H

    2018-02-01

    The purpose of this study was to examine the internal construct validity of the Arabic version of the Lower Extremity Functional Scale (20-item Arabic LEFS) using Rasch analysis. Patients (n = 170) with lower extremity musculoskeletal dysfunction were recruited. Rasch analysis of the 20-item Arabic LEFS was performed. Once the initial Rasch analysis indicated that the 20-item Arabic LEFS did not fit the Rasch model, follow-up analyses were conducted to improve the fit of the scale to the Rasch measurement model. These modifications included removing misfitting individuals, changing the item scoring structure, removing misfitting items, and addressing bias caused by response dependency between items and differential item functioning (DIF). The initial analysis indicated deviation of the 20-item Arabic LEFS from the Rasch model. Disordered thresholds in eight items and response dependency between six items were detected, and the scale as a whole did not meet the requirement of unidimensionality. Refinements led to a 15-item Arabic LEFS that demonstrated excellent internal consistency (person separation index [PSI] = 0.92) and satisfied all the requirements of the Rasch model. Rasch analysis did not support the 20-item Arabic LEFS as a unidimensional measure of lower extremity function. The refined 15-item Arabic LEFS met all the requirements of the Rasch model and hence is a valid objective measure of lower extremity function. The Rasch-validated 15-item Arabic LEFS needs to be further tested in an independent sample to confirm its fit to the Rasch measurement model. Implications for Rehabilitation: The validity of the 20-item Arabic Lower Extremity Functional Scale to measure lower extremity function is not supported. The 15-item Arabic version of the LEFS is a valid measure of lower extremity function and can be used to quantify lower extremity function in patients with lower extremity musculoskeletal disorders.
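
    For readers unfamiliar with the Rasch machinery behind such a validation, the sketch below shows the dichotomous Rasch model and a Newton-Raphson person-ability estimate. This is a deliberate simplification: the LEFS is polytomously scored, so the study would have used a polytomous extension (e.g. a partial credit model), and the item difficulties and responses here are hypothetical.

    ```python
    # Sketch of the dichotomous Rasch model (the LEFS itself is polytomous,
    # so a partial-credit extension would apply in practice).
    import math

    def p_endorse(theta, b):
        """Rasch probability of endorsing an item of difficulty b at ability theta."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def estimate_theta(responses, difficulties, iters=20):
        """Maximum-likelihood person ability via Newton-Raphson."""
        theta = 0.0
        for _ in range(iters):
            p = [p_endorse(theta, b) for b in difficulties]
            grad = sum(x - pi for x, pi in zip(responses, p))   # score residual
            info = sum(pi * (1 - pi) for pi in p)               # Fisher information
            theta += grad / info
        return theta

    difficulties = [-1.5, -0.5, 0.0, 0.8, 1.6]   # hypothetical item difficulties
    responses = [1, 1, 1, 0, 0]                   # one person's item scores
    print(round(estimate_theta(responses, difficulties), 2))
    ```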

  11. The Effects of Small Sample Size on Identifying Polytomous DIF Using the Liu-Agresti Estimator of the Cumulative Common Odds Ratio

    ERIC Educational Resources Information Center

    Carvajal, Jorge; Skorupski, William P.

    2010-01-01

    This study is an evaluation of the behavior of the Liu-Agresti estimator of the cumulative common odds ratio when identifying differential item functioning (DIF) with polytomously scored test items using small samples. The Liu-Agresti estimator has been proposed by Penfield and Algina as a promising approach for the study of polytomous DIF but no…

  12. Pushing precipitation to the extremes in distributed experiments: Recommendations for simulating wet and dry years

    USGS Publications Warehouse

    Knapp, Alan K.; Avolio, Meghan L.; Beier, Claus; Carroll, Charles J.W.; Collins, Scott L.; Dukes, Jeffrey S.; Fraser, Lauchlan H.; Griffin-Nolan, Robert J.; Hoover, David L.; Jentsch, Anke; Loik, Michael E.; Phillips, Richard P.; Post, Alison K.; Sala, Osvaldo E.; Slette, Ingrid J.; Yahdjian, Laura; Smith, Melinda D.

    2017-01-01

    Intensification of the global hydrological cycle, ranging from larger individual precipitation events to more extreme multiyear droughts, has the potential to cause widespread alterations in ecosystem structure and function. With evidence that the incidence of extreme precipitation years (defined statistically from historical precipitation records) is increasing, there is a clear need to identify ecosystems that are most vulnerable to these changes and understand why some ecosystems are more sensitive to extremes than others. To date, opportunistic studies of naturally occurring extreme precipitation years, combined with results from a relatively small number of experiments, have provided limited mechanistic understanding of differences in ecosystem sensitivity, suggesting that new approaches are needed. Coordinated distributed experiments (CDEs) arrayed across multiple ecosystem types and focused on water can enhance our understanding of differential ecosystem sensitivity to precipitation extremes, but there are many design challenges to overcome (e.g., cost, comparability, standardization). Here, we evaluate contemporary experimental approaches for manipulating precipitation under field conditions to inform the design of ‘Drought-Net’, a relatively low-cost CDE that simulates extreme precipitation years. A common method for imposing both dry and wet years is to alter each ambient precipitation event. We endorse this approach for imposing extreme precipitation years because it simultaneously alters other precipitation characteristics (i.e., event size) consistent with natural precipitation patterns. However, we do not advocate applying identical treatment levels at all sites – a common approach to standardization in CDEs. This is because precipitation variability varies >fivefold globally resulting in a wide range of ecosystem-specific thresholds for defining extreme precipitation years. For CDEs focused on precipitation extremes, treatments should be based

  13. Pushing precipitation to the extremes in distributed experiments: recommendations for simulating wet and dry years.

    PubMed

    Knapp, Alan K; Avolio, Meghan L; Beier, Claus; Carroll, Charles J W; Collins, Scott L; Dukes, Jeffrey S; Fraser, Lauchlan H; Griffin-Nolan, Robert J; Hoover, David L; Jentsch, Anke; Loik, Michael E; Phillips, Richard P; Post, Alison K; Sala, Osvaldo E; Slette, Ingrid J; Yahdjian, Laura; Smith, Melinda D

    2017-05-01

    Intensification of the global hydrological cycle, ranging from larger individual precipitation events to more extreme multiyear droughts, has the potential to cause widespread alterations in ecosystem structure and function. With evidence that the incidence of extreme precipitation years (defined statistically from historical precipitation records) is increasing, there is a clear need to identify ecosystems that are most vulnerable to these changes and understand why some ecosystems are more sensitive to extremes than others. To date, opportunistic studies of naturally occurring extreme precipitation years, combined with results from a relatively small number of experiments, have provided limited mechanistic understanding of differences in ecosystem sensitivity, suggesting that new approaches are needed. Coordinated distributed experiments (CDEs) arrayed across multiple ecosystem types and focused on water can enhance our understanding of differential ecosystem sensitivity to precipitation extremes, but there are many design challenges to overcome (e.g., cost, comparability, standardization). Here, we evaluate contemporary experimental approaches for manipulating precipitation under field conditions to inform the design of 'Drought-Net', a relatively low-cost CDE that simulates extreme precipitation years. A common method for imposing both dry and wet years is to alter each ambient precipitation event. We endorse this approach for imposing extreme precipitation years because it simultaneously alters other precipitation characteristics (i.e., event size) consistent with natural precipitation patterns. However, we do not advocate applying identical treatment levels at all sites - a common approach to standardization in CDEs. This is because precipitation variability varies >fivefold globally resulting in a wide range of ecosystem-specific thresholds for defining extreme precipitation years. For CDEs focused on precipitation extremes, treatments should be based on
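
    A minimal sketch of the event-scaling treatment endorsed in both versions of this paper: every ambient precipitation event is multiplied by a single factor chosen so the annual total hits a site-specific historical percentile, which shrinks event sizes along with the total. The 5th-percentile dry-year target and all rainfall numbers below are assumptions for illustration.

    ```python
    # Sketch: imposing an extreme dry year by scaling every ambient event so
    # the annual total matches a site-specific historical percentile.
    import numpy as np

    rng = np.random.default_rng(1)
    historical_annual_totals = rng.gamma(shape=8, scale=90, size=100)  # hypothetical, mm
    ambient_events = rng.gamma(shape=0.7, scale=12, size=60)           # this year's events, mm

    target = np.percentile(historical_annual_totals, 5)   # site-specific dry-year total
    scale = target / ambient_events.sum()                 # one factor for every event
    treated_events = ambient_events * scale               # event sizes shrink too

    print(f"ambient total {ambient_events.sum():.0f} mm -> "
          f"treated total {treated_events.sum():.0f} mm")
    ```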

  14. Energy calibration issues in nuclear resonant vibrational spectroscopy: observing small spectral shifts and making fast calibrations.

    PubMed

    Wang, Hongxin; Yoda, Yoshitaka; Dong, Weibing; Huang, Songping D

    2013-09-01

    The conventional energy calibration for nuclear resonant vibrational spectroscopy (NRVS) is usually time-consuming. Meanwhile, taking NRVS samples out of the cryostat increases the chance of sample damage, which makes it impossible to carry out an energy calibration during one NRVS measurement. In this study, by manipulating the 14.4 keV beam through the main measurement chamber without removing the NRVS sample, two alternative calibration procedures have been proposed and established: (i) an in situ calibration procedure, which measures the main NRVS sample at stage A and the calibration sample at stage B simultaneously, and calibrates the energies for observing extremely small spectral shifts; for example, the 0.3 meV energy shift between the 100%-(57)Fe-enriched [Fe4S4Cl4](=) and the 10%-(57)Fe, 90%-(54)Fe labeled [Fe4S4Cl4](=) has been well resolved; (ii) a quick-switching energy calibration procedure, which reduces each calibration time from 3-4 h to about 30 min. Although the quick-switching calibration is not in situ, it is suitable for normal NRVS measurements.

  15. Phenotypic and genetic overlap between autistic traits at the extremes of the general population.

    PubMed

    Ronald, Angelica; Happé, Francesca; Price, Thomas S; Baron-Cohen, Simon; Plomin, Robert

    2006-10-01

    To investigate children selected from a community sample for showing extreme autistic-like traits and to assess the degree to which these individual traits--social impairments (SIs), communication impairments (CIs), and restricted repetitive behaviors and interests (RRBIs)--are caused by genes and environments, whether all of them are caused by the same genes and environments, and how often they occur together (as required by an autism diagnosis). The most extreme-scoring 5% were selected from 3,419 8-year-old pairs in the Twins Early Development Study assessed on the Childhood Asperger Syndrome Test. Phenotypic associations between extreme traits were compared with associations among the full-scale scores. Genetic associations between extreme traits were quantified using bivariate DeFries-Fulker extremes analysis. Phenotypic relationships between extreme SIs, CIs, and RRBIs were modest. There was a degree of genetic overlap between them, but also substantial genetic specificity. This first twin study assessing the links between extreme individual autistic-like traits (SIs, CIs, and RRBIs) found that all are highly heritable but show modest phenotypic and genetic overlap. This finding concurs with that of an earlier study from the same cohort that showed that a total autistic symptoms score at the extreme showed high heritability and that SIs, CIs, and RRBIs show weak links in the general population. This new finding has relevance for both clinical models and future molecular genetic studies.

  16. Representing Extremes in Agricultural Models

    NASA Technical Reports Server (NTRS)

    Ruane, Alex

    2015-01-01

    AgMIP and related projects are conducting several activities to understand and improve crop model response to extreme events. This involves crop model studies as well as the generation of climate datasets and scenarios more capable of capturing extremes. Models are typically less responsive to extreme events than observations indicate, and miss several forms of extreme events. Models can also capture interactive effects between climate change and climate extremes. Additional work is needed to understand the response of markets and economic systems to food shocks. AgMIP is planning a Coordinated Global and Regional Assessment of Climate Change Impacts on Agricultural Production and Food Security with an aim to inform the IPCC Sixth Assessment Report.

  17. Small but Dynamic Active Region

    NASA Image and Video Library

    2018-04-20

    The sun featured just one, rather small active region over the past few days, but it developed rapidly and sported a lot of magnetic activity in just one day (Apr. 11-12, 2018). The activity was observed in a wavelength of extreme ultraviolet light. The loops and twisting arches above it are evidence of magnetic forces tangling with each other. The video clip was produced using Helioviewer software. Movies are available at https://photojournal.jpl.nasa.gov/catalog/PIA06676

  18. Trunk restraint to promote upper extremity recovery in stroke patients: a systematic review and meta-analysis.

    PubMed

    Wee, Seng Kwee; Hughes, Ann-Marie; Warner, Martin; Burridge, Jane H

    2014-09-01

    Many stroke patients exhibit excessive compensatory trunk movements during reaching. Compensatory movement behaviors may improve upper extremity function in the short-term but be detrimental to long-term recovery. To evaluate the evidence that trunk restraint limits compensatory trunk movement and/or promotes better upper extremity recovery in stroke patients. A search was conducted through electronic databases from January 1980 to June 2013. Only randomized controlled trials (RCTs) comparing upper extremity training with and without trunk restraint were selected for review. Three review authors independently assessed the methodological quality and extracted data from the studies. Meta-analysis was conducted when there was sufficient homogenous data. Six RCTs involving 187 chronic stroke patients were identified. Meta-analysis of key outcome measures showed that trunk restraint has a moderate statistically significant effect on improving Fugl-Meyer Upper Extremity (FMA-UE) score, active shoulder flexion, and reduction in trunk displacement during reaching. There was a small, nonsignificant effect of trunk restraint on upper extremity function. Trunk restraint has a moderate effect on reduction of upper extremity impairment in chronic stroke patients, in terms of FMA-UE score, increased shoulder flexion, and reduction in excessive trunk movement during reaching. There is insufficient evidence to demonstrate that trunk restraint improves upper extremity function and reaching trajectory smoothness and straightness in chronic stroke patients. Future research on stroke patients at different phases of recovery and with different levels of upper extremity impairment is recommended. © The Author(s) 2014.
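
    As background to the pooled effects reported above, the sketch below shows generic fixed-effect inverse-variance pooling of standardized mean differences. The six effect sizes and variances are hypothetical, and the review's actual meta-analytic model may differ (e.g. random effects).

    ```python
    # Sketch: fixed-effect inverse-variance pooling of standardized mean
    # differences (SMDs), the generic machinery behind a meta-analytic summary.
    import math

    effects = [0.55, 0.40, 0.72, 0.30, 0.65, 0.48]    # per-trial SMDs (hypothetical)
    variances = [0.04, 0.06, 0.09, 0.05, 0.08, 0.07]  # their sampling variances

    weights = [1.0 / v for v in variances]             # precision weights
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                 # SE of the pooled effect
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"pooled SMD = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
    ```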

  19. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds

    PubMed Central

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie

    2018-01-01

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variation. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811

  20. Spatial Distribution of Stony Desertification and Key Influencing Factors on Different Sampling Scales in Small Karst Watersheds.

    PubMed

    Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei

    2018-04-13

    Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide, as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variation. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
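
    A small sketch of the scale comparison described in both versions of this study: subsample a fine grid at coarser spacings and track the mean and coefficient of variation (CV) at each scale. The exposure field below is synthetic; the real analysis used geo-statistics on measured rock exposure.

    ```python
    # Sketch: regridding a 150 m rock-exposure grid to coarser sampling scales
    # by taking every k-th point, then comparing mean and CV at each scale.
    import numpy as np

    rng = np.random.default_rng(2)
    exposure = np.clip(rng.normal(30, 15, size=(60, 60)), 0, 100)  # % rock exposure

    for k, label in [(1, "150 m"), (2, "300 m"), (3, "450 m"),
                     (4, "600 m"), (5, "750 m"), (6, "900 m")]:
        sub = exposure[::k, ::k]                    # subsample the grid
        cv = sub.std(ddof=1) / sub.mean()           # coefficient of variation
        print(f"{label}: n={sub.size:4d} mean={sub.mean():5.1f}% CV={cv:.3f}")
    ```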

  1. Aberrant expression of miR-141 and nuclear receptor small heterodimer partner in clinical samples of prostate cancer.

    PubMed

    Khorasani, Maryam; Teimoori-Toolabi, Ladan; Farivar, Taghi Naserpour; Asgari, Mojgan; Abolhasani, Maryam; Shahrokh, Hossein; Afgar, Ali; Kalantari, Elham; Peymani, Amir; Mahdian, Reza

    2018-01-01

    Prostate cancer (PCa) is the second most common cancer in men worldwide. Currently, the prostate-specific antigen (PSA) test and digital rectal exam are the main screening tests used for PCa diagnosis. However, due to the low specificity of these tests, new alternative biomarkers such as deregulated RNAs and microRNAs have been implemented. Aberrant expression of the small heterodimer partner gene (SHP, NR0B2) and miR-141 has been reported in various cancers. The aim of this study was to investigate the SHP and miR-141 expression levels in tissue samples of prostate cancer. The expression levels of the SHP gene and miR-141 were assessed by real-time PCR and their relative amounts were calculated by the ΔΔCT method. Also, the IHC technique was used to determine the expression level of the SHP protein. miR-141 was significantly up-regulated in the samples of metastatic tumors compared to localized tumor samples (P < 0.001, 31.17-fold change). Tumor samples showed lower SHP mRNA expression levels than BPH samples (p = 0.014, 4.7-fold change). Paired t-test analysis showed no significant difference between SHP gene expression in PCa samples and their matched tumor-adjacent normal tissue (p = 0.5). The data obtained in our study confirm the involvement of miR-141 in PCa progression and metastasis. These effects could be mediated by AR via down-regulation of its co-repressor protein, i.e., SHP.
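
    For reference, the ΔΔCT relative-quantification step mentioned above reduces to fold change = 2^(−ΔΔCt) (the Livak method). A minimal sketch with hypothetical Ct values:

    ```python
    # Sketch of the Livak (delta-delta Ct) calculation: fold change = 2^(-ddCt).
    # All Ct values below are hypothetical.
    target_ct_tumor, reference_ct_tumor = 24.0, 18.0     # e.g. miR-141 vs a reference RNA
    target_ct_normal, reference_ct_normal = 29.5, 18.5

    d_ct_tumor = target_ct_tumor - reference_ct_tumor    # normalize to reference gene
    d_ct_normal = target_ct_normal - reference_ct_normal
    dd_ct = d_ct_tumor - d_ct_normal                     # compare to calibrator sample
    fold_change = 2 ** (-dd_ct)
    print(f"ddCt = {dd_ct:.2f}, fold change = {fold_change:.1f}x")
    ```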

  2. Determining storm sampling requirements for improving precision of annual load estimates of nutrients from a small forested watershed.

    PubMed

    Ide, Jun'ichiro; Chiwa, Masaaki; Higashi, Naoko; Maruno, Ryoko; Mori, Yasushi; Otsuki, Kyoichi

    2012-08-01

    This study sought to determine the lowest number of storm events required for adequate estimation of annual nutrient loads from a forested watershed using the regression equation between cumulative load (∑L) and cumulative stream discharge (∑Q). Hydrological surveys were conducted for 4 years, and stream water was sampled sequentially at 15-60-min intervals over 24 h during 20 storm events, as well as weekly, in a small forested watershed. The bootstrap sampling technique was used to determine the regression (∑L-∑Q) equations of dissolved nitrogen (DN) and phosphorus (DP), particulate nitrogen (PN) and phosphorus (PP), dissolved inorganic nitrogen (DIN), and suspended solid (SS) for each dataset of ∑L and ∑Q. For dissolved nutrients (DN, DP, DIN), the coefficient of variance (CV) in 100 replicates of 4-year average annual load estimates was below 20% with datasets composed of five storm events. For particulate nutrients (PN, PP, SS), the CV exceeded 20%, even with datasets composed of more than ten storm events. The differences in the number of storm events required for precise load estimates between dissolved and particulate nutrients were attributed to the goodness of fit of the ∑L-∑Q equations. Bootstrap simulation based on flow-stratified sampling resulted in fewer storm events than the simulation based on random sampling and showed that only three storm events were required to give a CV below 20% for dissolved nutrients. These results indicate that a sampling design considering discharge levels reduces the frequency of laborious chemical analyses of water samples required throughout the year.
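
    A minimal sketch of the bootstrap logic described above, assuming a simple linear ∑L-∑Q relation: resample small sets of storm events with replacement, refit the regression, and compute the CV of the resulting annual-load estimates. All event data and the annual discharge below are synthetic.

    ```python
    # Sketch: bootstrap CV of annual load estimates from a cumulative
    # load vs. cumulative discharge regression over resampled storm events.
    import numpy as np

    rng = np.random.default_rng(3)
    n_events = 20
    q = np.sort(rng.gamma(3, 500, size=n_events))      # event discharge (m^3)
    load = 0.02 * q + rng.normal(0, 4, size=n_events)  # event nutrient load (kg)
    annual_q = 60_000.0                                 # annual discharge (m^3)

    estimates = []
    for _ in range(100):                                # 100 bootstrap replicates
        idx = rng.integers(0, n_events, size=5)         # datasets of five storms
        slope, intercept = np.polyfit(np.cumsum(q[idx]), np.cumsum(load[idx]), 1)
        estimates.append(slope * annual_q + intercept)

    estimates = np.array(estimates)
    cv = estimates.std(ddof=1) / estimates.mean()
    print(f"CV of annual load estimates: {cv:.1%}")
    ```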

  3. Reliability of the mangled extremity severity score in combat-related upper and lower extremity injuries

    PubMed Central

    Ege, Tolga; Unlu, Aytekin; Tas, Huseyin; Bek, Dogan; Turkan, Selim; Cetinkaya, Aytac

    2015-01-01

    Background: Decision of limb salvage or amputation is generally aided by several trauma scoring systems, such as the mangled extremity severity score (MESS). However, the reliability of these injury scores in the setting of open fractures due to explosives and missiles is challenging. Mortality and morbidity of extremity trauma due to firearms are generally associated with the time delay in revascularization, injury mechanism, anatomy of the injured site, associated injuries, age and the environmental circumstances. The purpose of the retrospective study was to evaluate the extent of extremity injuries due to ballistic missiles and to determine the reliability of the mangled extremity severity score (MESS) in both upper and lower extremities. Materials and Methods: Between 2004 and 2014, 139 Gustilo-Anderson Type III open fractures of both the upper and lower extremities were enrolled in the study. Data on patient age, firearm type, transport time from the field to the hospital (and the method), injury severity scores, MESS scores, fracture types, amputation levels, bone fixation methods and postoperative infections and complications were retrieved from the two level-2 trauma centers' database. Sensitivity, specificity, and positive and negative predictive values of the MESS were calculated to assess its ability to guide the amputation decision in the mangled limb. Results: Amputation was performed in 39 extremities and limb salvage was attempted in 100 extremities. The mean follow-up time was 14.6 months (range 6–32 months). In the amputated group, the mean MESS scores for the upper and lower extremity were 8.8 (range 6–11) and 9.24 (range 6–11), respectively. In the limb salvage group, the mean MESS scores for the upper and lower extremities were 5.29 (range 4–7) and 5.19 (range 3–8), respectively. Sensitivity of the MESS in the upper and lower extremities was calculated as 80% and 79.4%, and positive predictive values were 55.55% and 83.3%, respectively. Specificity of MESS score for
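
    The sensitivity, specificity and predictive values reported above all derive from a 2x2 table of MESS decisions against the amputation outcome. A sketch with hypothetical counts, assuming the conventional MESS >= 7 amputation cutoff (the study's actual counts are not reproduced here):

    ```python
    # Sketch: sensitivity, specificity, PPV and NPV from a 2x2 table of
    # MESS-vs-outcome counts, with amputation as the reference outcome.
    def diagnostic_metrics(tp, fp, fn, tn):
        return {
            "sensitivity": tp / (tp + fn),   # amputated limbs flagged by MESS >= 7
            "specificity": tn / (tn + fp),   # salvaged limbs correctly below cutoff
            "ppv": tp / (tp + fp),           # flagged limbs that were amputated
            "npv": tn / (tn + fn),           # below-cutoff limbs that were salvaged
        }

    counts = dict(tp=27, fp=5, fn=7, tn=61)   # hypothetical 2x2 counts
    for name, value in diagnostic_metrics(**counts).items():
        print(f"{name}: {value:.1%}")
    ```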

  4. The Extreme Climate Index: a novel and multi-hazard index for extreme weather events.

    NASA Astrophysics Data System (ADS)

    Cucchi, Marco; Petitta, Marcello; Calmanti, Sandro

    2017-04-01

    In this presentation we introduce the Extreme Climate Index (ECI): an objective, multi-hazard index capable of tracking changes in the frequency or magnitude of extreme weather events in African countries, thus indicating that a shift to a new climate regime is underway in a particular area. This index has been developed in the context of the XCF (eXtreme Climate Facilities) project led by ARC (African Risk Capacity, a specialised agency of the African Union), and will be used in the payout-triggering mechanism of an insurance programme against risks related to the increase in frequency and magnitude of extreme weather events due to changes in climate regimes. The main hazards covered by the ECI will be extreme dry, wet and heat events, with the possibility of adding region-specific risk events such as tropical cyclones for the most vulnerable areas. It will be based on data coming from consistent, sufficiently long, high-quality historical records and will be standardized across broad geographical regions, so that extreme events occurring under different climatic regimes in Africa can be compared. The first step in constructing such an index is to define single-hazard indicators. In this first study we focused on extreme dry/wet and heat events, describing them respectively with the well-known SPI (Standardized Precipitation Index) and an index we developed, called SHI (Standardized Heat-waves Index). The second step consists of developing a computational strategy to combine these, and possibly other, indices, so that the ECI can describe, by means of a single indicator, different types of climatic extremes. According to the methodology proposed in this paper, the ECI is defined by two statistical components: the ECI intensity, which indicates whether an event is extreme or not, and the angular component, which represents the contribution of each hazard to the overall intensity of the index. The ECI can thus be used to identify "extremes" after defining a
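
    A sketch of the two-component structure the authors describe: an overall intensity plus an angular component apportioning the hazards. This polar decomposition is an illustration under assumed index values, not the ECI's published formula.

    ```python
    # Sketch: combining two standardized hazard indices into an intensity
    # (how extreme) and an angle (which hazard dominates). Values hypothetical.
    import math

    spi = -1.8   # standardized precipitation index (drought hazard)
    shi = 2.3    # standardized heat-waves index

    intensity = math.hypot(spi, shi)              # overall extremeness
    angle = math.degrees(math.atan2(shi, spi))    # hazard mix, as a direction

    print(f"ECI-style intensity ~ {intensity:.2f}, angular component ~ {angle:.0f} deg")
    ```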

  5. Development of a miniature Stirling cryocooler for LWIR small satellite applications

    NASA Astrophysics Data System (ADS)

    Kirkconnell, C. S.; Hon, R. C.; Perella, M. D.; Crittenden, T. M.; Ghiaasiaan, S. M.

    2017-05-01

    The optimum small satellite (SmallSat) cryocooler system must be extremely compact and lightweight, achieved in this paper by operating a linear cryocooler at a frequency of approximately 300 Hz. Operation at this frequency, which is well in excess of the 100-150 Hz reported in recent papers on related efforts, requires an evolution beyond the traditional Oxford-class, flexure-based methods of setting the mechanical resonance. A novel approach that optimizes the electromagnetic design and the mechanical design together to simultaneously achieve the required dynamic and thermodynamic performances is described. Since highly miniaturized pulse tube coolers are fundamentally ill-suited for the sub-80K temperature range of interest because the boundary layer losses inside the pulse tube become dominant at the associated very small pulse tube size, a moving displacer Stirling cryocooler architecture is used. Compact compressor mechanisms developed on a previous program are reused for this design, and they have been adapted to yield an extremely compact Stirling warm end motor mechanism. Supporting thermodynamic and electromagnetic analysis results are reported.

  6. A pH sensing system using fluorescence-based fibre optical sensor capable of small volume sample measurement

    NASA Astrophysics Data System (ADS)

    Deng, Shijie; McAuliffe, Michael A. P.; Salaj-Kosla, Urszula; Wolfe, Raymond; Lewis, Liam; Huyet, Guillaume

    2017-02-01

    In this work, a low-cost optical pH sensing system that allows for small-volume sample measurements was developed. The system operates without requiring laboratory instruments (e.g. a laser source, spectrometer or CCD camera), which lowers the cost and enhances portability. In the system, an optical arrangement employing a dichroic filter is used, which allows the excitation and emission light to be transmitted through a single fibre, improving the collection efficiency of the fluorescence signal as well as the ability to measure by insertion. The pH sensor in the system uses bromocresol purple as the indicator, which is immobilised by sol-gel technology through a dip-coating process. The sensor material was coated on the tip of a 1 mm diameter optical fibre, which makes it possible to insert the sensor into very small-volume samples to measure the pH. In the system, an LED with a peak emission wavelength of 465 nm is used as the light source and a silicon photo-detector is used to detect the fluorescence signal. Optical filters are applied after the LED and in front of the photo-detector to separate the excitation and emission light. The collected fluorescence signal is transferred to a PC through a DAQ and processed by a Labview-based graphical user interface (GUI). Experimental results show that the system is capable of sensing pH values from 5.3 to 8.7 with a linear response of R2 = 0.969. Results also show that the response time for a pH change from 5.3 to 8.7 is approximately 150 s, and for a 0.5 pH change approximately 50 s.
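
    A brief sketch of the linear calibration behind a reported value like R2 = 0.969: fit detector signal against buffer pH and compute the coefficient of determination. The calibration points below are hypothetical.

    ```python
    # Sketch: linear calibration of sensor signal vs. pH, with R^2.
    import numpy as np

    ph = np.array([5.3, 6.0, 6.8, 7.5, 8.1, 8.7])            # buffer pH values
    signal = np.array([0.21, 0.29, 0.38, 0.47, 0.54, 0.61])  # detector output (a.u.)

    slope, intercept = np.polyfit(ph, signal, 1)
    predicted = slope * ph + intercept
    ss_res = np.sum((signal - predicted) ** 2)
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    r_squared = 1 - ss_res / ss_tot
    print(f"signal = {slope:.3f}*pH + {intercept:.3f}, R^2 = {r_squared:.3f}")
    ```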

  7. Extreme current fluctuations in lattice gases: Beyond nonequilibrium steady states

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Sasorov, Pavel V.

    2014-01-01

    We use the macroscopic fluctuation theory (MFT) to study large current fluctuations in nonstationary diffusive lattice gases. We identify two universality classes of these fluctuations, which we call elliptic and hyperbolic. They emerge in the limit where the deterministic mass flux is small compared to the mass flux due to the shot noise. The two classes are determined by the sign of the compressibility of the effective fluid, obtained by mapping the MFT onto an inviscid hydrodynamics. An example of the elliptic class is the symmetric simple exclusion process, where, for some initial conditions, we can solve the effective hydrodynamics exactly. This leads to the super-Gaussian extreme current statistics conjectured by Derrida and Gerschenfeld [J. Stat. Phys. 137, 978 (2009), 10.1007/s10955-009-9830-1] and yields the optimal path of the system. For models of the hyperbolic class, the deterministic mass flux cannot be neglected, leading to a different extreme current statistics.

  8. Moving in extreme environments: what's extreme and who decides?

    PubMed

    Cotter, James David; Tipton, Michael J

    2014-01-01

    Humans work, rest and play in immensely varied extreme environments. The term 'extreme' typically refers to insufficiency or excess of one or more stressors, such as thermal energy or gravity. Individuals' behavioural and physiological capacity to endure and enjoy such environments varies immensely. Adverse effects of acute exposure to these environments are readily identifiable (e.g. heat stroke or bone fracture), whereas adverse effects of chronic exposure (e.g. stress fractures or osteoporosis) may be as important but much less discernible. Modern societies have increasingly sought to protect people from such stressors and, in that way, minimise their adverse effects. Regulations are thus established, and advice is provided on what is 'acceptable' exposure. Examples include work/rest cycles in the heat, hydration regimes, rates of ascent to and duration of stay at altitude and diving depth. While usually valuable and well intentioned, it is important to realise the breadth and importance of limitations associated with such guidelines. Regulations and advisories leave less room for self-determination, learning and perhaps adaptation. Regulations based on stress (e.g. work/rest cycles relative to WBGT) are more practical but less direct than those based on strain (e.g. core temperature), but even the latter can be substantively limited (e.g. by lack of criterion validation and allowance for behavioural regulation in the research on which they are based). Extreme Physiology & Medicine is publishing a series of reviews aimed at critically examining the issues involved with self- versus regulation-controlled human movement acutely and chronically in extreme environments. These papers, arising from a research symposium in 2013, are about the impact of people engaging in such environments and the effect of rules and guidelines on their safety, enjoyment, autonomy and productivity. The reviews will cover occupational heat stress, sporting heat stress, hydration, diving

  9. Soliton formation from a noise-like pulse during extreme events in a fibre ring laser

    NASA Astrophysics Data System (ADS)

    Pottiez, O.; Ibarra-Villalon, H. E.; Bracamontes-Rodriguez, Y.; Minguela-Gallardo, J. A.; Garcia-Sanchez, E.; Lauterio-Cruz, J. P.; Hernandez-Garcia, J. C.; Bello-Jimenez, M.; Kuzin, E. A.

    2017-10-01

    We study experimentally the interactions between soliton and noise-like pulse (NLP) components in a mode-locked fibre ring laser operating in a hybrid soliton-NLP regime. For proper polarization adjustments, one NLP and multiple packets of solitons coexist in the cavity, at 1530 nm and 1558 nm, respectively. By examining time-domain sequences measured using a 16 GHz real-time oscilloscope, we unveil the process of soliton genesis: they are produced during extreme-intensity episodes affecting the NLP. These extreme events can emerge sporadically, appear in small groups or even form quasi-periodic sequences. Once formed, the wavelength-shifted soliton packet drifts away from the NLP in the dispersive cavity, and eventually vanishes after a variable lifetime. Evidence of the inverse process, through which NLP formation is occasionally seeded by an extreme-intensity event affecting a bunch of solitons, is also provided. The quasi-stationary dynamics described here constitutes an impressive illustration of the connections and interactions between NLPs, extreme events and solitons in passively mode-locked fibre lasers.

  10. Implications from Near-Shoemaker Imaging of Eros for Small-Scale Structure and Surface Sampling

    NASA Technical Reports Server (NTRS)

    Chapman, C. R.

    2000-01-01

    What we know about asteroids has always been bifurcated by the enormous gap between astronomical studies of small, distant bodies, and the close-up laboratory measurements of hand-sample-sized meteorites. The gulf has been narrowed somewhat by improvements in Earth-based astronomical techniques (e.g. Hubble Space Telescope, radar, adaptive optics) and especially by spacecraft fly-bys of asteroids. But the Near Earth Asteroid Rendezvous (NEAR)-Shoemaker mission has gone considerably further in bridging the gap. Any consideration of intelligent sample-return from an asteroid must be based on the best possible knowledge of the asteroid at the spatial scales pertinent to operations at the asteroid and of the sample(s). Otherwise, we are in danger of succumbing to the 'Martian Horror Story' that Bruce Murray, in the 1960s, envisioned might impair our exploration of the surface of the red planet if we tried to land on it without first bolstering the information content of our database about Mars, especially at high resolutions. NEAR-Shoemaker is helping to bridge that gap in the case of Eros. The best resolution obtained by the Galileo spacecraft on Ida was 25 m/pixel. As of this writing, NEAR has already obtained images with resolutions at least five times better (information content 25 times better) and vastly better images may be available at the time of this Workshop from the late October low flyby. Already, we are seeing that the Martian horror story looks tame compared with Eros. Everywhere we have landed on Mars, the surface has been covered with rocks and boulders, with much higher spatial coverage than seen anywhere on the lunar surface. We have, in fact, been rather lucky that none of our Martian landers have tipped over so far, and there were justified fears in the early aftermath of last year's failure of Mars Polar Lander that it had suffered from inadequate high-resolution characterization of polar regions on Mars (the failure is now known to

  11. Extreme Material Physical Properties and Measurements above 100 tesla

    NASA Astrophysics Data System (ADS)

    Mielke, Charles

    2011-03-01

    The National High Magnetic Field Laboratory (NHMFL) Pulsed Field Facility (PFF) at Los Alamos National Laboratory (LANL) offers extreme environments of ultra-high magnetic fields above 100 tesla by use of the single-turn method, as well as fields approaching 100 tesla with more complex methods. The challenge of metrology in these extreme field-generating devices is complicated by the millions of amperes of current and tens of thousands of volts required to deliver the pulsed power needed for field generation. Methods of detecting the physical properties of materials are essential parts of the science that seeks to understand and eventually control the fundamental functionality of materials in extreme environments. Decoupling the signal of the sample from the electromagnetic interference associated with the magnet system is required to make these state-of-the-art magnetic fields useful to scientists studying materials in high magnetic fields. The cutting-edge methods being used, as well as methods in development, will be presented along with recent results on graphene and high-Tc superconductors and the associated challenges. National Science Foundation DMR-Award 0654118.

  12. Blue compact dwarfs - Extreme dwarf irregular galaxies

    NASA Technical Reports Server (NTRS)

    Thuan, Trinh X.

    1987-01-01

    Observational data on the most extreme members of the irregular dwarf (dI) galaxy class, the blue compact dwarfs (BCDs), are characterized, reviewing the results of recent investigations. The properties of the young stellar population, the ionized gas, the older star population, and the gas and dust of BCDs are contrasted with those of other dIs; BCD morphology is illustrated with sample images; and the value of BCDs (as nearby 'young' chemically unevolved galaxies) for studies of galaxy formation, galactic evolution, and starburst triggering mechanisms is indicated.

  13. Surface roughness control by extreme ultraviolet (EUV) radiation

    NASA Astrophysics Data System (ADS)

    Ahad, Inam Ul; Obeidi, Muhannad Ahmed; Budner, Bogusław; Bartnik, Andrzej; Fiedorowicz, Henryk; Brabazon, Dermot

    2017-10-01

    Surface roughness control of polymeric materials is often desirable in various biomedical engineering applications related to biocompatibility control, separation science and surface wettability control. In this study, polyethylene terephthalate (PET) polymer films were irradiated with extreme ultraviolet (EUV) photons in a nitrogen environment and investigations were performed on surface roughness modification via EUV exposure. The samples were irradiated at 3 mm and 4 mm distance from the focal spot to investigate the effect of EUV fluence on topography. The topography of the EUV-treated PET samples was studied by AFM. Detailed scanning was also performed on the sample irradiated at 3 mm. It was observed that the average surface roughness of the PET samples increased from 9 nm (pristine sample) to 280 nm and 253 nm for the EUV-irradiated samples. Detailed AFM studies confirmed the presence of 1.8 mm wide periodic U-shaped channels in the EUV-exposed PET samples. The walls of the channels had a FWHM of about 0.4 mm. The channels were created by translatory movements of the sample in the horizontal and transverse directions during the EUV exposure. The increased surface roughness is useful for many applications. The nanoscale channels fabricated by EUV exposure could be interesting for microfluidic applications based on lab-on-a-chip (LOC) devices.

  14. Fetal primary small bowel volvulus in a child without intestinal malrotation.

    PubMed

    Chung, Jae Hee; Lim, Gye-Yeon; We, Ji Sun

    2013-07-01

    Fetal primary small bowel volvulus without atresia or malrotation is an extremely rare but life-threatening surgical emergency. We report a case of primary small bowel volvulus that presented as sudden fetal distress and was diagnosed on the basis of the 'whirl-pool sign' on fetal sonography. This diagnosis led to an emergency operation after birth in the third trimester, with a good outcome. Although the pathogenesis of fetal primary small bowel volvulus is unclear, ganglion cell immaturity may play a role in the etiology. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Antenatal antecedents of a small head circumference at age 24-months post-term equivalent in a sample of infants born before the 28th post-menstrual week.

    PubMed

    Leviton, Alan; Kuban, Karl; Allred, Elizabeth N; Hecht, Jonathan L; Onderdonk, Andrew; O'Shea, T Michael; McElrath, Thomas; Paneth, Nigel

    2010-08-01

    Little is known about the antecedents of microcephaly in early childhood among children born at extremely low gestational age. To identify some of the antecedents of microcephaly at age two years among children born before the 28th week of gestation. Observational cohort study. 1004 infants born before the 28th week of gestation. Head circumference Z-scores of <−2 and ≥−2, <−1. Risk of microcephaly and a less severely restricted head circumference decreased monotonically with increasing gestational age. After adjusting for gestational age and other potential confounders, the risk of microcephaly at age 2 years was increased if microcephaly was present at birth [odds ratio: 8.8 (95% confidence interval: 3.7, 21)], alpha hemolytic Streptococci were recovered from the placenta parenchyma [2.9 (1.2, 6.9)], the child was a boy [2.8 (1.6, 4.9)], and the child's mother was not married [2.5 (1.5, 4.3)]. Antecedents associated not with microcephaly, but with a less extreme reduction in head circumference were recovery of Propionibacterium sp from the placenta parenchyma [2.9 (1.5, 5.5)], tobacco exposure [2.0 (1.4, 3.0)], and increased syncytial knots in the placenta [2.0 (1.2, 3.2)]. Although microcephaly at birth predicts a small head circumference at 2 years among children born much before term, pregnancy and maternal characteristics provide supplemental information about the risk of a small head circumference years later. Two findings appear to be novel: tobacco exposure during pregnancy and organisms recovered from the placenta predict reduced head circumference at age two years. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Antenatal antecedents of a small head circumference at age 24-months post-term equivalent in a sample of infants born before the 28th post-menstrual week

    PubMed Central

    Leviton, Alan; Kuban, Karl; Allred, Elizabeth N.; Hecht, Jonathan L.; Onderdonk, Andrew; O'Shea, T. Michael; McElrath, Thomas; Paneth, Nigel

    2010-01-01

    Background Little is known about the antecedents of microcephaly in early childhood among children born at extremely low gestational age. Aim To identify some of the antecedents of microcephaly at age two years among children born before the 28th week of gestation. Study design Observational cohort study. Subjects 1004 infants born before the 28th week of gestation. Outcome measures Head circumference Z-scores of <−2 and ≥−2, <−1. Results Risk of microcephaly and a less severely restricted head circumference decreased monotonically with increasing gestational age. After adjusting for gestational age and other potential confounders, the risk of microcephaly at age 2 years was increased if microcephaly was present at birth [odds ratio: 8.8 (95% confidence interval: 3.7, 21)], alpha hemolytic Streptococci were recovered from the placenta parenchyma [2.9 (1.2, 6.9)], the child was a boy [2.8 (1.6, 4.9)], and the child's mother was not married [2.5 (1.5, 4.3)]. Antecedents associated not with microcephaly, but with a less extreme reduction in head circumference were recovery of Propionibacterium sp from the placenta parenchyma [2.9 (1.5, 5.5)], tobacco exposure [2.0 (1.4, 3.0)], and increased syncytial knots in the placenta [2.0 (1.2, 3.2)]. Conclusions Although microcephaly at birth predicts a small head circumference at 2 years among children born much before term, pregnancy and maternal characteristics provide supplemental information about the risk of a small head circumference years later. Two findings appear to be novel: tobacco exposure during pregnancy and organisms recovered from the placenta predict reduced head circumference at age two years. PMID:20674197

  17. Extreme Value Analysis of hydro meteorological extremes in the ClimEx Large-Ensemble

    NASA Astrophysics Data System (ADS)

    Wood, R. R.; Martel, J. L.; Willkofer, F.; von Trentini, F.; Schmid, F. J.; Leduc, M.; Frigon, A.; Ludwig, R.

    2017-12-01

    Many studies show an increase in the magnitude and frequency of hydrological extreme events in the course of climate change. However, the contribution of natural variability to the magnitude and frequency of hydrological extreme events is not yet settled. A reliable estimate of extreme events is of great interest for water management and public safety. In the course of the ClimEx Project (www.climex-project.org) a new single-model large-ensemble was created by dynamically downscaling the CanESM2 large-ensemble with the Canadian Regional Climate Model version 5 (CRCM5) for a European domain and a northeastern North American domain. By utilizing the ClimEx 50-member large-ensemble (CRCM5 driven by the CanESM2 large-ensemble), a thorough analysis of natural variability in extreme events is possible. Are current extreme value statistical methods able to account for natural variability? How large is the natural variability for, e.g., a 1/100-year return period derived from a 50-member large-ensemble for Europe and northeastern North America? These questions are addressed by applying various generalized extreme value (GEV) distributions to the ClimEx large-ensemble. To this end, various return levels (5-, 10-, 20-, 30-, 60- and 100-year) based on time series of various lengths (20, 30, 50, 100 and 1500 years) are analyzed for the maximum one-day precipitation (RX1d), the maximum three-hourly precipitation (RX3h) and the streamflow of selected catchments in Europe. The long time series of the ClimEx ensemble (7500 years) allows us to give a first reliable estimate of the magnitude and frequency of certain extreme events.
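
    A minimal sketch of the return-level computation described above, fitting a GEV to synthetic annual maxima with scipy (whose shape-parameter sign convention is opposite to the climate literature's xi). A real ClimEx analysis would fit each of the 50 members and far longer series.

    ```python
    # Sketch: fit a GEV to annual maxima and read off T-year return levels.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(4)
    annual_max_rx1d = genextreme.rvs(c=-0.1, loc=40, scale=10,
                                     size=50, random_state=rng)  # synthetic RX1d, mm

    c, loc, scale = genextreme.fit(annual_max_rx1d)
    for t in (5, 10, 20, 30, 60, 100):
        level = genextreme.ppf(1 - 1 / t, c, loc, scale)  # T-year return level
        print(f"{t:3d}-year RX1d return level: {level:.1f} mm")
    ```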

  18. Extreme Environment Technologies for Space and Terrestrial Applications

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Cutts, James A.; Kolawa, Elizabeth A.; Peterson, Craig E.

    2008-01-01

    Over the next decades, NASA's planned solar system exploration missions are targeting planets, moons and small bodies, where spacecraft would be expected to encounter diverse extreme environmental (EE) conditions throughout their mission phases. These EE conditions are often coupled. For instance, near the surface of Venus and in the deep atmospheres of giant planets, probes would experience high temperatures and pressures. In the Jovian system low temperatures are coupled with high radiation. Other environments include thermal cycling and corrosion. Mission operations could also introduce extreme conditions, due to atmospheric entry heat flux and deceleration. Some of these EE conditions are not unique to space missions; they can be encountered by terrestrial assets from the fields of defense, oil and gas, aerospace, and automotive industries. In this paper we outline the findings of NASA's Extreme Environments Study Team, including discussions on the state of the art and emerging capabilities related to environmental protection, tolerance and operations in EEs. We will also highlight cross-cutting EE mitigation technologies, for example, between high g-load tolerant impactors for Europa and instrumented projectiles on Earth; high-temperature electronics and sensors on Jupiter deep probes and sensors inside jet engines; and pressure vessel technologies for Venus probes and sea bottom monitors. We will argue that synergistic development programs between these fields could be highly beneficial and cost effective for the various agencies and industries. Some of these environments, however, are specific to space, and thus the related technology developments should be spearheaded by NASA with collaboration from industry and academia.

  19. Generalized extreme gust wind speeds distributions

    USGS Publications Warehouse

    Cheng, E.; Yeung, C.

    2002-01-01

    Since summer 1996, US wind engineers have used the extreme gust (or 3-s gust) as the basic wind speed to quantify the destructive potential of extreme winds. In order to better understand these destructive wind forces, it is important to know the appropriate representations of these extreme gust wind speeds. Therefore, the purpose of this study is to determine the most suitable extreme value distributions for the annual extreme gust wind speeds recorded in large selected areas. To achieve this objective, we use the generalized Pareto distribution as the diagnostic tool for determining the types of extreme gust wind speed distributions. The three-parameter generalized extreme value distribution function is thus reduced to either the Type I Gumbel, Type II Frechet or Type III reverse Weibull distribution function for the annual extreme gust wind speeds recorded at a specific site. With consideration of the quality and homogeneity of gust wind data collected at more than 750 weather stations throughout the United States, annual extreme gust wind speeds at 143 selected stations in the contiguous United States were used in the study. © 2002 Elsevier Science Ltd. All rights reserved.
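
    As a sketch of the classification described above: the sign of the fitted GEV shape parameter selects the Gumbel, Frechet or reverse Weibull family. The gust data below are synthetic, the tolerance for calling the shape "zero" is an arbitrary assumption, and scipy's shape sign convention is the reverse of the common xi convention.

    ```python
    # Sketch: classify the extreme-value family of annual extreme gusts by the
    # sign of the fitted GEV shape parameter (scipy convention: c = 0 Gumbel,
    # c < 0 Frechet-type heavy tail, c > 0 reverse-Weibull bounded tail).
    import numpy as np
    from scipy.stats import genextreme

    def gev_type(c, tol=0.05):
        if abs(c) < tol:
            return "Type I (Gumbel)"
        return "Type II (Frechet)" if c < 0 else "Type III (reverse Weibull)"

    rng = np.random.default_rng(5)
    gusts = genextreme.rvs(c=0.15, loc=35, scale=5, size=60, random_state=rng)
    c, loc, scale = genextreme.fit(gusts)   # annual extreme gusts at one station
    print(gev_type(c))
    ```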

  20. Free style perforator based propeller flaps: Simple solutions for upper extremity reconstruction!

    PubMed

    Panse, Nikhil; Sahasrabudhe, Parag

    2014-01-01

    The introduction of perforator flaps by Koshima et al. was met with much animosity in the plastic surgery fraternity. Safety concerns about these flaps following the intentional twist of the perforators have prevented widespread adoption of this technique. Use of perforator-based propeller flaps in the lower extremity is gradually on the rise, but their use in upper extremity reconstruction is infrequently reported, especially in the Indian subcontinent. We present a retrospective series of 63 freestyle perforator flaps used for soft tissue reconstruction of the upper extremity from November 2008 to June 2013. Flaps were performed by a single surgeon for various locations and indications on the upper extremity. Patient demographics, surgical indication, defect features, complications and clinical outcome are evaluated and presented as an uncontrolled case series. 63 freestyle perforator-based propeller flaps were used for soft tissue reconstruction of the upper extremity in 62 patients from November 2008 to June 2013. Of the 63 flaps, 31 were performed for trauma, 30 for post-burn sequelae, and two for post-snake-bite defects. We encountered flap necrosis in 8 flaps: complete necrosis in 4 and partial necrosis in 4. Of these 8 flaps, 7 needed a secondary procedure, and one healed secondarily. Although we had a failure rate of 12-13%, most of our failures occurred in the early part of the series, indicative of a learning curve associated with the flap. Freestyle perforator-based propeller flaps are a reliable option for coverage of small to moderate-sized defects. Therapeutic IV.