NASA Astrophysics Data System (ADS)
Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd
2017-09-01
The emerging era of big data has led to large, complex datasets that demand faster and better decision making. However, small-dataset problems still arise in certain areas, making analysis and decisions hard to support. Building a prediction model requires a large training sample; a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
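As a concrete illustration of the simplest family of artificial data generation methods, the sketch below resamples real rows and perturbs them with feature-scaled Gaussian noise. This is a generic noise-injection scheme, not the specific approach reviewed in the paper; the function name and `noise_scale` parameter are illustrative assumptions.

```python
import numpy as np

def generate_virtual_samples(X, n_new, noise_scale=0.1, rng=None):
    """Augment a small dataset with virtual samples drawn around
    existing observations (a simple noise-injection scheme)."""
    rng = np.random.default_rng(rng)
    std = X.std(axis=0)                      # per-feature spread of real data
    base = X[rng.integers(0, len(X), size=n_new)]   # resample real rows
    return base + rng.normal(0.0, noise_scale * std, size=base.shape)

X_small = np.random.default_rng(0).normal(size=(15, 4))  # 15 real samples
X_aug = np.vstack([X_small, generate_virtual_samples(X_small, 100, rng=1)])
print(X_aug.shape)   # (115, 4): expanded training set
```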
Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi
2014-12-08
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, which arises from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma that occurs during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
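The abstract does not spell out its tailored knapsack step; as a hedged illustration, a standard 0-1 knapsack dynamic program can select base classifiers when accuracy is treated as value and an integerized redundancy score as weight. That value/weight mapping is an assumption for illustration only, not the paper's formulation.

```python
def knapsack_select(values, weights, capacity):
    """Standard 0-1 knapsack DP: choose a subset maximizing total value
    under a total-weight budget; returns the selected indices."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if weights[i - 1] <= w:
                cand = dp[i - 1][w - weights[i - 1]] + values[i - 1]
                if cand > dp[i][w]:
                    dp[i][w] = cand
    chosen, w = [], capacity                 # backtrack to recover the subset
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return chosen[::-1]

acc = [0.81, 0.78, 0.90, 0.85]               # base classifier accuracies
redundancy = [3, 2, 5, 4]                    # similarity proxy, scaled to ints
print(knapsack_select(acc, redundancy, capacity=8))
```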
Maybe Small Is Too Small a Term: Introduction to Advancing Small Sample Prevention Science.
Fok, Carlotta Ching Ting; Henry, David; Allen, James
2015-10-01
Prevention research addressing health disparities often involves work with small population groups experiencing such disparities. The goals of this special section are to (1) address the question of what constitutes a small sample; (2) identify some of the key research design and analytic issues that arise in prevention research with small samples; (3) develop applied, problem-oriented, and methodologically innovative solutions to these design and analytic issues; and (4) evaluate the potential role of these innovative solutions in describing phenomena, testing theory, and evaluating interventions in prevention research. Through these efforts, we hope to promote broader application of these methodological innovations. We also seek, whenever possible, to explore their implications in more general problems that appear in research with small samples but concern all areas of prevention research. This special section is organized into two parts. The first part aims to provide input for researchers at the design phase, while the second focuses on analysis. Each article describes an innovative solution to one or more challenges posed by the analysis of small samples, with special emphasis on testing for intervention effects in prevention research. A concluding article summarizes some of their broader implications, along with conclusions regarding future directions in research with small samples in prevention science. Finally, a commentary provides the perspective of the federal agencies that sponsored the conference that gave rise to this special section.
Small-Noise Analysis and Symmetrization of Implicit Monte Carlo Samplers
Goodman, Jonathan; Lin, Kevin K.; Morzfeld, Matthias
2015-07-06
Implicit samplers are algorithms for producing independent, weighted samples from multivariate probability distributions. These are often applied in Bayesian data assimilation algorithms. We use Laplace asymptotic expansions to analyze two implicit samplers in the small noise regime. Our analysis suggests a symmetrization of the algorithms that leads to improved implicit sampling schemes at a relatively small additional cost. Here, computational experiments confirm the theory and show that symmetrization is effective for small noise sampling problems.
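Implicit sampling is easiest to see in one dimension. The sketch below is a simplified random-map variant, not the authors' symmetrized scheme: for a target p(x) ∝ exp(−F(x)), each Gaussian draw ξ is mapped to the solution of F(x) − φ = ξ²/2 (with φ the minimum of F), and the sample is weighted by |dx/dξ| = |ξ/F′(x)|. The test density and bracketing span are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def implicit_sample_1d(F, dF, n, rng=None, span=50.0):
    """Weighted samples from p(x) ~ exp(-F(x)) via 1-D implicit sampling."""
    rng = np.random.default_rng(rng)
    res = minimize_scalar(F)                 # mode of the target
    x_star, phi = res.x, res.fun
    xs, ws = [], []
    for xi in rng.normal(size=n):
        level = phi + 0.5 * xi * xi
        g = lambda x: F(x) - level
        if xi >= 0:                          # solve on the right branch of F
            x = brentq(g, x_star, x_star + span)
        else:                                # solve on the left branch
            x = brentq(g, x_star - span, x_star)
        xs.append(x)
        ws.append(abs(xi / dF(x)))           # |dx/dxi| importance weight
    return np.array(xs), np.array(ws)

F = lambda x: 0.25 * (x - 1) ** 4 + 0.5 * x ** 2   # toy non-Gaussian target
dF = lambda x: (x - 1) ** 3 + x
xs, ws = implicit_sample_1d(F, dF, 1000, rng=0)
print((xs * ws).sum() / ws.sum())            # weighted posterior mean
```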
Statistical Analysis Techniques for Small Sample Sizes
NASA Technical Reports Server (NTRS)
Navard, S. E.
1984-01-01
The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because only a small amount of data is available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on considerations needed to choose the most appropriate test for a given type of analysis.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
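The recipe lends itself to a short power calculation. The sketch below is an approximation for the logistic case using the normal two-proportion test: it finds two group probabilities whose log-odds differ by β·2·SD(X) while preserving the overall event rate, then applies the usual two-sample formula. The function name and defaults are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def logistic_power_equiv(beta, sd_x, p_bar, n, alpha=0.05):
    """Power for H0: beta = 0 in logistic regression via the equivalent
    two-sample problem (normal approximation)."""
    delta = 2.0 * beta * sd_x                # log-odds gap between groups
    logit = lambda p: np.log(p / (1 - p))
    expit = lambda t: 1 / (1 + np.exp(-t))
    # p1 such that the mean of (p1, p2) is p_bar and the logit gap is delta
    f = lambda p1: 0.5 * (p1 + expit(logit(p1) + delta)) - p_bar
    p1 = brentq(f, 1e-9, 1 - 1e-9)
    p2 = expit(logit(p1) + delta)
    m = n / 2.0                              # equally sized groups
    se = np.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / m)
    return norm.cdf(abs(p2 - p1) / se - norm.ppf(1 - alpha / 2))

print(round(logistic_power_equiv(beta=0.35, sd_x=1.0, p_bar=0.3, n=400), 3))
```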
Contamination of successive samples in portable pumping systems
Robert B. Thomas; Rand E. Eads
1983-01-01
Automatic discrete sample pumping systems used to monitor water quality should deliver to storage all materials pumped in a given cycle. If they do not, successive samples will be contaminated, a severe problem with highly variable suspended sediment concentrations in small streams. The cross-contamination characteristics of two small commonly used portable pumping...
Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures
ERIC Educational Resources Information Center
Prodromou, Theodosia
2016-01-01
In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…
Future Lunar Sampling Missions: Big Returns on Small Samples
NASA Astrophysics Data System (ADS)
Shearer, C. K.; Borg, L.
2002-01-01
The next sampling missions to the Moon will return sample masses (100 g to 1 kg) substantially smaller than those returned by the Apollo missions (380 kg). Lunar samples to be returned by these missions are vital for: (1) calibrating the late impact history of the inner solar system that can then be extended to other planetary surfaces; (2) deciphering the effects of catastrophic impacts on a planetary body (i.e., Aitken crater); (3) understanding the very late-stage thermal and magmatic evolution of a cooling planet; (4) exploring the interior of a planet; and (5) examining volatile reservoirs and transport on an airless planetary body. Can small lunar samples be used to answer these and other pressing questions concerning important solar system processes? Two potential problems with small, robotically collected samples are placing them in a geologic context and extracting robust planetary information. Although geologic context will always be a potential problem with any planetary sample, new lunar samples can be placed within the context of the important Apollo-Luna collections and the burgeoning planet-scale data sets for the lunar surface and interior. Here we illustrate the usefulness of applying both new and refined analytical approaches in deciphering information locked in small lunar samples.
SMURC: High-Dimension Small-Sample Multivariate Regression With Covariance Estimation.
Bayar, Belhassen; Bouaynaya, Nidhal; Shterenberg, Roman
2017-03-01
We consider a high-dimension low sample-size multivariate regression problem that accounts for correlation of the response variables. The system is underdetermined as there are more parameters than samples. We show that the maximum likelihood approach with covariance estimation is senseless because the likelihood diverges. We subsequently propose a normalization of the likelihood function that guarantees convergence. We call this method small-sample multivariate regression with covariance (SMURC) estimation. We derive an optimization problem and its convex approximation to compute SMURC. Simulation results show that the proposed algorithm outperforms the regularized likelihood estimator with known covariance matrix and the sparse conditional Gaussian graphical model. We also apply SMURC to the inference of the wing-muscle gene network of the Drosophila melanogaster (fruit fly).
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
Hobbs, Michael T.; Brehme, Cheryl S.
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.
NASA Astrophysics Data System (ADS)
Hanike, Yusrianti; Sadik, Kusman; Kurnia, Anang
2016-02-01
This research models the unemployment rate in Indonesia based on a Poisson distribution, estimated with a modified post-stratification and Small Area Estimation (SAE) model. Post-stratification is a sampling technique in which strata are formed after the survey data have been collected; it is used when the survey was not designed to estimate the area of interest. The area of interest here is the educational attainment of the unemployed, divided into seven categories. The data come from the National Labour Force Survey (Sakernas) collected by BPS, Statistics Indonesia, whose national survey yields samples that are too small at the district level; SAE models are one alternative for addressing this. We therefore combine post-stratification sampling with an SAE model and consider two main models: Model I treats the education category as a dummy variable, and Model II treats it as an area random effect. Both models initially violated the Poisson assumption. Using a Poisson-Gamma model, Model I's overdispersion of 1.23 was reduced to 0.91 chi-square/df, and Model II's underdispersion of 0.35 was raised to 0.94 chi-square/df. Empirical Bayes was applied to estimate the proportion of unemployment in each education category. Based on the Bayesian Information Criterion (BIC), Model I has a smaller mean square error (MSE) than Model II.
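As a hedged sketch of the Poisson-Gamma empirical Bayes step described here (the moment-based prior fit below is an illustrative choice, not necessarily the authors' fitting procedure): counts with exposures get shrunk toward the overall rate, with shrinkage controlled by the fitted Gamma prior.

```python
import numpy as np

def poisson_gamma_eb(y, n):
    """Empirical Bayes small-area rates: y_i ~ Poisson(n_i * theta_i),
    theta_i ~ Gamma(a, b); posterior mean is (y_i + a) / (n_i + b)."""
    raw = y / n
    m = raw.mean()
    # between-area variance in excess of Poisson sampling noise
    v = max(raw.var(ddof=1) - (m / n).mean(), 1e-12)
    a, b = m * m / v, m / v                  # Gamma(a, b) matching (m, v)
    return (y + a) / (n + b)                 # shrunk rate estimates

y = np.array([0, 2, 5, 1, 9])                # events per area/category
n = np.array([40, 35, 60, 20, 80])           # exposures (sample sizes)
print(poisson_gamma_eb(y, n).round(3))
```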
Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data
ERIC Educational Resources Information Center
McNeish, Daniel; Harring, Jeffrey R.
2017-01-01
To date, small sample problems with latent growth models (LGMs) have not received as much attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as an LGM or an MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…
Ion beam machining error control and correction for small scale optics.
Xie, Xuhui; Zhou, Lin; Dai, Yifan; Li, Shengyi
2011-09-20
Ion beam figuring (IBF) technology for small scale optical components is discussed. Since a small removal function can be obtained in IBF, it makes it possible for computer-controlled optical surfacing technology to machine precision centimeter- or millimeter-scale optical components deterministically. When a small ion beam is used to machine small optical components, some key problems must be seriously considered, such as positioning the small ion beam on the optical surface, the material removal rate, and controlling the ion beam scanning pitch on the optical surface. The main reason is that a small ion beam is more sensitive to these problems than a big one because of its small beam diameter and lower material removal rate. In this paper, we discuss these problems and their influences in machining small optical components in detail. Based on the identification-compensation principle, an iterative machining compensation method is derived for correcting the positioning error of an ion beam, with the material removal rate estimated by a selected optimal scanning pitch. Experiments on ϕ10 mm Zerodur planar and spherical samples are performed, and the final surface errors are both smaller than λ/100 as measured by a Zygo GPI interferometer.
ERIC Educational Resources Information Center
Yanqing, Ding
2012-01-01
After a brief review of the achievements and the problems in compulsory education enrollment in the thirty years since the reform and opening up, this study analyzes the current compulsory education enrollment and dropout rates in China's least-developed regions and the factors affecting school enrollment based on survey data from a small sample…
NASA Astrophysics Data System (ADS)
Takayama, T.; Iwasaki, A.
2016-06-01
Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, prediction accuracy is affected by the small-sample-size problem, which commonly appears as overfitting when the number of training samples is smaller than the dimensionality of the data, a situation caused by the time, cost, and human resources required for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global peak shifts due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model by encouraging sparsity and grouping: the sparsity provides the dimensionality reduction that addresses the small-sample-size problem, while the grouping mitigates the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear regression, partial least squares regression, and lasso regression. Furthermore, fusing the spectral information with spatial information derived from a texture index increased the prediction accuracy further (RMSE of 62.62 t/ha). This analysis demonstrates the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.
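A fused lasso objective can be stated directly with a convex-optimization package. The sketch below, assuming `cvxpy` and synthetic data, penalizes both the coefficients (sparsity) and the differences of adjacent, wavelength-ordered coefficients (grouping), which is the structure the paper exploits.

```python
import numpy as np
import cvxpy as cp

def fused_lasso(X, y, lam_sparse=1.0, lam_fuse=1.0):
    """Fused lasso: L1 on coefficients plus L1 on adjacent differences."""
    w = cp.Variable(X.shape[1])
    obj = (0.5 * cp.sum_squares(X @ w - y)
           + lam_sparse * cp.norm1(w)
           + lam_fuse * cp.norm1(cp.diff(w)))
    cp.Problem(cp.Minimize(obj)).solve()
    return w.value

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 120))               # 30 field plots, 120 "bands"
true = np.zeros(120)
true[40:50] = 0.8                            # one contiguous active band group
y = X @ true + rng.normal(scale=0.1, size=30)
w = fused_lasso(X, y, lam_sparse=2.0, lam_fuse=5.0)
print(np.flatnonzero(np.abs(w) > 0.05))      # recovered band indices
```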
Concentration by centrifugation for gas exchange EPR oximetry measurements with loop-gap resonators.
Subczynski, Witold K; Felix, Christopher C; Klug, Candice S; Hyde, James S
2005-10-01
Measurement of the bimolecular collision rate between a spin label and oxygen is conveniently carried out using a gas permeable plastic sample tube of small diameter that fits a loop-gap resonator. It is often desirable to concentrate the sample by centrifugation in order to improve the signal-to-noise ratio (SNR), but the deformable nature of small plastic sample tubes presents technical problems. Solutions to these problems are described. Two geometries were considered: (i) a methylpentene polymer, TPX, from Mitsui Chemicals, at X-band and (ii) Teflon tubing with 0.075 mm wall thickness at Q-band. Sample holders were fabricated from Delrin that fit the Eppendorf microcentrifuge tubes and support the sample capillaries. For TPX, pressure of the sealant at the end of the sample tube against the Delrin sample holder provided an adequate seal. For Teflon, the holder permitted introduction of water around the tube in order to equalize pressures across the sealant during centrifugation. Typically, the SNR was improved by a factor of five to eight. Oxygen accessibility applications in site-directed spin labeling studies are discussed.
NASA Astrophysics Data System (ADS)
Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.
2016-11-01
The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup that improves the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, together with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.
Small area estimation (SAE) model: Case study of poverty in West Java Province
NASA Astrophysics Data System (ADS)
Suhartini, Titin; Sadik, Kusman; Indahwati
2016-02-01
This paper compares direct estimation with an indirect Small Area Estimation (SAE) model. Model selection included resolving multicollinearity among the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of interest were the proportions of agricultural-venture poor households and agricultural poor households at the area level in West Java Province. These parameters can be estimated by direct estimation or by SAE; the problem with direct estimation is that three areas had samples so small (even zero) that direct estimation could not be carried out. The estimated proportion of agricultural-venture poor households was 19.22%, and that of agricultural poor households was 46.79%. The best model for agricultural-venture poor households retained only non-collinear variables, while the best model for agricultural poor households used PC. For both parameters, SAE proved a better estimator than direct estimation. Implementing the SAE method thus overcame the small sample sizes and produced small-area estimates with higher accuracy and better precision than the direct estimator.
de Moor, Marleen H. M.; Vink, Jacqueline M.; van Beek, Jenny H. D. A.; Geels, Lot M.; Bartels, Meike; de Geus, Eco J. C.; Willemsen, Gonneke; Boomsma, Dorret I.
2011-01-01
This study examined the heritability of problem drinking and investigated the phenotypic and genetic relationships between problem drinking and personality in a sample of 5,870 twins and siblings and 4,420 additional family members from the Netherlands Twin Register. Data on problem drinking (assessed with the AUDIT and CAGE; 12 items) and personality [NEO Five-Factor Inventory (FFI); 60 items] were collected in 2009/2010 by surveys. Confirmatory factor analysis on the AUDIT and CAGE items showed that the items clustered on two separate but highly correlated (r = 0.74) underlying factors. A higher-order factor was extracted that reflected those aspects of problem drinking that are common to the AUDIT and CAGE, which showed a heritability of 40%. The correlations between problem drinking and the five dimensions of personality were small but significant, ranging from 0.06 for Extraversion to −0.12 for Conscientiousness. All personality dimensions (with broad-sense heritabilities between 32 and 55%, and some evidence for non-additive genetic influences) were genetically correlated with problem drinking. The genetic correlations were small to modest (between |0.12| and |0.41|). Future studies with longitudinal data and DNA polymorphisms are needed to determine the biological mechanisms that underlie the genetic link between problem drinking and personality. PMID:22303371
ERIC Educational Resources Information Center
Chromy, James R.
This study addressed statistical techniques that might ameliorate some of the sampling problems currently facing states with small populations participating in State National Assessment of Educational Progress (NAEP) assessments. The study explored how the application of finite population correction factors to the between-school component of…
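The finite population correction itself is compact: the variance of a sample mean is multiplied by (1 − n/N), which matters exactly when a small state's schools are sampled nearly exhaustively. A minimal sketch with illustrative numbers:

```python
import numpy as np

def fpc_standard_error(sample, N):
    """Standard error of the sample mean with the finite population
    correction (1 - n/N) applied."""
    n = len(sample)
    return np.sqrt(np.var(sample, ddof=1) / n * (1.0 - n / N))

scores = np.random.default_rng(0).normal(250, 35, size=120)
print(fpc_standard_error(scores, N=150))     # 120 of only 150 schools sampled
print(fpc_standard_error(scores, N=10**6))   # same data, effectively infinite N
```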
Support vector regression to predict porosity and permeability: Effect of sample size
NASA Astrophysics Data System (ADS)
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both the kernel function type and the loss functions used.
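A minimal version of the comparison is straightforward to reproduce with scikit-learn on synthetic logs; the feature construction, sample size, and hyperparameters below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 25                                        # few cored wells
logs = rng.normal(size=(n, 4))                # geophysical log features
porosity = logs @ [0.5, -0.3, 0.2, 0.1] + rng.normal(scale=0.2, size=n)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)           # SRM-based learner
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0)                     # ERM-based learner
for name, model in [("SVR", svr), ("MLP", mlp)]:
    r2 = cross_val_score(model, logs, porosity, cv=5, scoring="r2")
    print(name, round(r2.mean(), 3))
```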
ERIC Educational Resources Information Center
Spiropoulos, Georgia V.; Spruance, Lisa; Van Voorhis, Patricia; Schmitt, Michelle M.
2005-01-01
The effects of "Problem Solving" (Taymans & Parese, 1998) are compared across small diversion and prison samples for men and women. A second program, "Pathfinders" (Hansen, 1993), was compared to the Problem Solving program among incarcerated women offenders to determine whether its focus upon empowerment and relationships enhanced the effects of…
NASA Astrophysics Data System (ADS)
Lindstrom, Marilyn M.; Shervais, John W.; Vetter, Scott K.
1993-05-01
Most of the recent advances in lunar petrology are the direct result of breccia pull-apart studies, which have identified a wide array of new highland and mare basalt rock types that occur only as clasts within the breccias. These rocks show that the lunar crust is far more complex than suspected previously, and that processes such as magma mixing and wall-rock assimilation were important in its petrogenesis. These studies are based on the implicit assumption that the breccia clasts, which range in size from a few mm to several cm across, are representative of the parent rock from which they were derived. In many cases, the aliquot allocated for analysis may be only a few grain diameters across. While this problem is most acute for coarse-grained highland rocks, it can also cause considerable uncertainty in the analysis of mare basalt clasts. Similar problems arise with small aliquots of individual hand samples. Our study of sample heterogeneity in 9 samples of Apollo 15 olivine normative basalt (ONB) which exhibit a range in average grain size from coarse to fine are reported. Seven of these samples have not been analyzed previously, one has been analyzed by INAA only, and one has been analyzed by XRF+INAA. Our goal is to assess the effects of small aliquot size on the bulk chemistry of large mare basalt samples, and to extend this assessment to analyses of small breccia clasts.
ERIC Educational Resources Information Center
Goodall, H. Lloyd, Jr.
1982-01-01
Examines the idea of organizational communication competence and describes how behavioral, cognitive, and performance objectives can be developed for a simulation course. Explains how the course works using small groups, organizational problems, and problem-solving discussions. Includes a sample syllabus with evaluation forms, a discussion of…
Responses to Simon and Other Population Revisionists.
ERIC Educational Resources Information Center
Campbell, Martha
1993-01-01
Population revisionists are cited as people who do not believe that global population growth is a problem. This paper presents a small sampling of responses to revisionist books and articles, primarily focused on perceived technical problems in the revisionist position as identified by demography, economics, and hard science specialists. Includes…
Applications of the Analytical Electron Microscope to Materials Science
NASA Technical Reports Server (NTRS)
Goldstein, J. I.
1992-01-01
In the last 20 years, the analytical electron microscope (AEM) has allowed investigators to obtain chemical and structural information from less than 50 nanometer diameter regions in thin samples of materials and to explore problems where reactions occur at boundaries and interfaces or within small particles or phases in bulk samples. Examples of the application of the AEM to materials science problems are presented in this paper and demonstrate the usefulness and the future potential of this instrument.
Hidalgo-Ruz, Valeria; Thiel, Martin
2013-01-01
The accumulation of large and small plastic debris is a problem throughout the world's oceans and coastlines. Abundances and types of small plastic debris have only been reported for some isolated beaches in the SE Pacific, but these data are insufficient to evaluate the situation in this region. The citizen science project "National Sampling of Small Plastic Debris" was supported by schoolchildren from all over Chile who documented the distribution and abundance of small plastic debris on Chilean beaches. Thirty-nine schools and nearly 1000 students from continental Chile and Easter Island participated in the activity. To validate the data obtained by the students, all samples were recounted in the laboratory. The results of the present study showed that the students were able to follow the instructions and generate reliable data. The average abundance obtained was 27 small plastic pieces per m(2) for the continental coast of Chile, but the samples from Easter Island had extraordinarily high abundances (>800 items per m(2)). The abundance of small plastic debris on the continental coast could be associated with coastal urban centers and their economic activities. The high abundance found on Easter Island can be explained mainly by the transport of plastic debris via the surface currents in the South Pacific Subtropical Gyre, resulting in the accumulation of small plastic debris on the beaches of the island. This first report of the widespread distribution and abundance of small plastic debris on Chilean beaches underscores the need to extend plastic debris research to ecological aspects of the problem and to improve waste management. Copyright © 2013 Elsevier Ltd. All rights reserved.
Serang, Oliver
2012-01-01
Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics. PMID:22952741
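The paper's full conic sampling method is more elaborate than can be sketched here, but the primitive it builds on, moving along randomly sampled rays inside a polytope {x : Ax ≤ b}, is the same one used by the standard hit-and-run sampler below. This sketch illustrates the ray mechanics only, not the LP-solving algorithm itself.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, rng=None):
    """Hit-and-run over {x : Ax <= b}: pick a random ray through the
    current point, find the feasible segment, jump uniformly along it."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, float)
    out = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        Ad, slack = A @ d, b - A @ x
        with np.errstate(divide="ignore"):
            t = slack / Ad
        t_max = t[Ad > 0].min()              # forward distance to the boundary
        t_min = t[Ad < 0].max()              # backward distance to the boundary
        x = x + rng.uniform(t_min, t_max) * d
        out.append(x.copy())
    return np.array(out)

A = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)   # unit square
b = np.array([1, 1, 0, 0], float)
pts = hit_and_run(A, b, x0=[0.5, 0.5], n_samples=2000, rng=0)
print(pts.mean(axis=0))                      # approximately the centroid
```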
Zachrisson, Henrik Daae; Dearing, Eric; Lekhal, Ratib; Toppelberg, Claudio O.
2012-01-01
Associations between maternal reports of hours in child care and children’s externalizing problems at 18 and 36 months of age were examined in a population-based Norwegian sample (n = 75,271). Within a sociopolitical context of homogenously high-quality child care, there was little evidence that high quantity of care causes externalizing problems. Using conventional approaches to handling selection bias and listwise deletion for substantial attrition in this sample, more hours in care predicted higher problem levels, yet with small effect sizes. The finding, however, was not robust to using multiple imputation for missing values. Moreover, when sibling and individual fixed-effects models for handling selection bias were used, no relation between hours and problems was evident. PMID:23311645
Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A
2017-06-30
Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has small sample size limitations. We used a pooled method in the nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling to the corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining the type I error probability for all conditions except Cauchy and extreme-variability lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than the other alternatives, and the nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
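A minimal sketch of the pooled-resampling idea for the unpaired two-sample case (the resample count and toy data are illustrative): under the null both groups come from one distribution, so both bootstrap groups are drawn from the pooled data and the observed t statistic is referred to that null distribution.

```python
import numpy as np

def pooled_bootstrap_t_test(x, y, n_boot=10000, rng=None):
    """Two-sample bootstrap t-test with pooled resampling."""
    rng = np.random.default_rng(rng)
    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y])
    t_null = np.empty(n_boot)
    for i in range(n_boot):
        a = rng.choice(pooled, size=len(x), replace=True)
        b = rng.choice(pooled, size=len(y), replace=True)
        t_null[i] = t_stat(a, b)
    return t_obs, (np.abs(t_null) >= abs(t_obs)).mean()   # two-sided p

x = np.array([4.1, 5.3, 6.0, 3.8, 5.5, 7.2])   # small, possibly non-normal
y = np.array([2.9, 3.4, 4.8, 2.2, 3.1])
print(pooled_bootstrap_t_test(x, y, rng=0))
```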
Finding fixed satellite service orbital allotments with a k-permutation algorithm
NASA Technical Reports Server (NTRS)
Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.
1990-01-01
A satellite system synthesis problem, the satellite location problem (SLP), is addressed. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the fixed satellite service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: the problem of ordering the satellites and the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, has been developed to find solutions to SLPs. Solutions to small sample problems are presented and analyzed on the basis of calculated interferences.
Anomalous small-angle scattering as a way to solve the Babinet principle problem
NASA Astrophysics Data System (ADS)
Boiko, M. E.; Sharkov, M. D.; Boiko, A. M.; Bobyl, A. V.
2013-12-01
X-ray absorption spectra (XAS) have been used to determine the absorption edges of atoms present in a sample under study. A series of small-angle X-ray scattering (SAXS) measurements using different monochromatic X-ray beams at different wavelengths near the absorption edges is performed to solve the Babinet principle problem. The sizes of clusters containing atoms determined by the method of XAS were defined in SAXS experiments. In contrast to differential X-ray porosimetry, anomalous SAXS makes it possible to determine sizes of clusters of different atomic compositions.
Recognition Using Hybrid Classifiers.
Osadchy, Margarita; Keren, Daniel; Raviv, Dolev
2016-04-01
A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.
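One plausible minimal reading of the hybrid idea, sketched under stated assumptions (a linear toy version, not the paper's kernel or ensemble method): fit a Gaussian prior N(mu, Sigma) to the generic negatives, then solve a small QP, here with `cvxpy`, that keeps every positive at margin at least 1 while minimizing the prior's spread along the separating normal and keeping the prior mean on the negative side.

```python
import numpy as np
import cvxpy as cp

def hybrid_classifier(X_pos, mu, Sigma):
    """Separate positive samples from a Gaussian 'negative prior'."""
    d = X_pos.shape[1]
    w, b = cp.Variable(d), cp.Variable()
    obj = cp.quad_form(w, Sigma)             # prior variance along w
    cons = [X_pos @ w + b >= 1,              # positives at margin
            mu @ w + b <= -1]                # prior mean on negative side
    cp.Problem(cp.Minimize(obj), cons).solve()
    return w.value, b.value

rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[3, 3], scale=0.5, size=(40, 2))  # positive class
mu, Sigma = np.zeros(2), np.eye(2)                       # generic negatives
w, b = hybrid_classifier(X_pos, mu, Sigma)
print(w, b, (X_pos @ w + b >= 0).mean())     # all positives on the + side
```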
Improving small-angle X-ray scattering data for structural analyses of the RNA world
Rambo, Robert P.; Tainer, John A.
2010-01-01
Defining the shape, conformation, or assembly state of an RNA in solution often requires multiple investigative tools ranging from nucleotide analog interference mapping to X-ray crystallography. A key addition to this toolbox is small-angle X-ray scattering (SAXS). SAXS provides direct structural information regarding the size, shape, and flexibility of the particle in solution and has proven powerful for analyses of RNA structures with minimal requirements for sample concentration and volumes. In principle, SAXS can provide reliable data on small and large RNA molecules. In practice, SAXS investigations of RNA samples can show inconsistencies that suggest limitations in the SAXS experimental analyses or problems with the samples. Here, we show through investigations on the SAM-I riboswitch, the Group I intron P4-P6 domain, 30S ribosomal subunit from Sulfolobus solfataricus (30S), brome mosaic virus tRNA-like structure (BMV TLS), Thermotoga maritima asd lysine riboswitch, the recombinant tRNAval, and yeast tRNAphe that many problems with SAXS experiments on RNA samples derive from heterogeneity of the folded RNA. Furthermore, we propose and test a general approach to reducing these sample limitations for accurate SAXS analyses of RNA. Together our method and results show that SAXS with synchrotron radiation has great potential to provide accurate RNA shapes, conformations, and assembly states in solution that inform RNA biological functions in fundamental ways. PMID:20106957
The coverage of a random sample from a biological community.
Engen, S
1975-03-01
A taxonomic group will frequently have a large number of species with small abundances. When a sample is drawn at random from this group, one is therefore faced with the problem that a large proportion of the species will not be discovered. A general definition of quantitative measures of "sample coverage" is proposed, and the problem of statistical inference is considered for two special cases: (1) the actual total relative abundance of those species that are represented in the sample, and (2) their relative contribution to the information index of diversity. The analysis is based on an extended version of the negative binomial species frequency model. The results are tabulated.
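For case (1), the classic Turing estimator is a one-liner: coverage is approximately 1 − f1/n, where f1 is the number of species seen exactly once; the paper's negative binomial treatment refines this. A minimal sketch:

```python
from collections import Counter

def sample_coverage(observations):
    """Turing/Good estimate of coverage: 1 - f1/n, with f1 the number
    of singleton species and n the sample size."""
    counts = Counter(observations)
    n = sum(counts.values())
    f1 = sum(1 for c in counts.values() if c == 1)
    return 1.0 - f1 / n

catch = ["a"] * 30 + ["b"] * 12 + ["c"] * 3 + ["d", "e", "f"]  # 3 singletons
print(sample_coverage(catch))   # 0.9375: ~6% of total abundance still unseen
```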
Hookah use among college students: prevalence, drug use, and mental health.
Goodwin, Renee D; Grinberg, Alice; Shapiro, Jack; Keith, Diana; McNeil, Michael P; Taha, Farah; Jiang, Bianca; Hart, Carl L
2014-08-01
There is consistent evidence that hookah use is as harmful as, if not more harmful than, cigarette use. Yet, hookah users underestimate the potential deleterious effects of hookah use. This study examined the rates of hookah use and associated demographic characteristics in a sample of undergraduates at a small Northeastern university. This study also examined the relationships between hookah use and other substance use, mental health problems, and perceived levels of stress. Data were drawn from the Spring 2009 American Health Association-National College Health Assessment (ACHA-NCHA) at one small, Northeastern university (N=1799). The relationships between hookah use and other substance use, mental health problems, and perceived stress levels were examined using logistic regression analyses. Hookah use (in the past month) was reported among 14.1% (253/1799) of this sample of undergraduates. Hookah users were more likely to use other substances, including cigarettes, cannabis, alcohol, cocaine, and amphetamines. The strongest associations emerged between hookah use and alcohol and cigarette use. There were no significant associations found between hookah use and any mental health problems or perceived stress levels. Hookah users are significantly more likely to use other substances, including alcohol, cigarettes, cannabis, cocaine, and amphetamines compared with non-hookah users. In contrast to cigarette smoking, hookah use does not appear to be associated with mental health problems or perceived stress levels in this sample of undergraduates. Further investigation into the prevalence and correlates of hookah use is needed in representative population samples. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Constituent loads in small streams: the process and problems of estimating sediment flux
R. B. Thomas
1989-01-01
Constituent loads in small streams are often estimated poorly. This is especially true for discharge-related constituents like sediment, since their flux is highly variable and mainly occurs during infrequent high-flow events. One reason for low-quality estimates is that most prevailing data collection methods ignore sampling probabilities and only partly account for...
Automatic devices to take water samples and to raise trash screens at weirs
K. G. Reinhart; R. E. Leonard; G. E. Hart
1960-01-01
Experimentation on small watersheds is assuming increasing importance in watershed-management research. Much has been accomplished in developing adequate instrumentation for use in these experiments. Yet many problems still await solution. One difficulty encountered is that small streams are subject to wide variations in flow and that these variations are generally...
Small-Sample Equating with Prior Information. Research Report. ETS RR-09-25
ERIC Educational Resources Information Center
Livingston, Samuel A.; Lewis, Charles
2009-01-01
This report proposes an empirical Bayes approach to the problem of equating scores on test forms taken by very small numbers of test takers. The equated score is estimated separately at each score point, making it unnecessary to model either the score distribution or the equating transformation. Prior information comes from equatings of other…
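The core empirical Bayes move can be sketched per score point as a precision-weighted average of the new small-sample equating and the prior information from other equatings; all numbers below are illustrative.

```python
def eb_equate(small_sample_est, se_small, prior_mean, prior_sd):
    """Empirical Bayes estimate at one score point: weight the
    small-sample equating by its precision relative to the prior."""
    w = prior_sd**2 / (prior_sd**2 + se_small**2)   # trust in the new data
    return w * small_sample_est + (1 - w) * prior_mean

# Equated score at one raw-score point: noisy new equating vs. prior.
print(eb_equate(small_sample_est=27.4, se_small=1.8,
                prior_mean=26.1, prior_sd=0.9))
```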
Core Cutting Test with Vertical Rock Cutting Rig (VRCR)
NASA Astrophysics Data System (ADS)
Yasar, Serdar; Osman Yilmaz, Ali
2017-12-01
Roadheaders are frequently used machines in mining and tunnelling, and performance prediction of roadheaders is important for project economics and stability. Several methods have been proposed for this purpose, and rock cutting tests are the best choice. Rock cutting tests are generally divided into two groups: full scale rock cutting tests and small scale rock cutting tests. Each type has its own advantages and shortcomings. However, in many cases where rock sampling becomes problematic, the small scale rock cutting test (core cutting test) is preferred for performance prediction, since small block and core samples can be subjected to rock cutting testing. A common problem for rock cutting tests is that the test rigs are available in only a very limited number of research centres. In this study, a new mobile rock cutting testing equipment, the vertical rock cutting rig (VRCR), is introduced. The standard testing procedure was conducted on seven rock samples that were part of a former study on cutting rocks with another small scale rock cutting test. Results, validated with a paired-samples t-test, showed that the core cutting test can be realized successfully with the VRCR.
Statistical issues in reporting quality data: small samples and casemix variation.
Zaslavsky, A M
2001-12-01
To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (interunit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
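The reliability and shrinkage calculations mentioned here are compact enough to sketch; the variance components below are illustrative assumptions.

```python
def unit_reliability(between_var, within_var, n):
    """Reliability of a unit mean based on n observations:
    signal variance over signal-plus-noise variance."""
    return between_var / (between_var + within_var / n)

def shrunken_estimate(unit_mean, overall_mean, reliability):
    """Shrinkage estimate for a unit with a small sample."""
    return reliability * unit_mean + (1 - reliability) * overall_mean

rel = unit_reliability(between_var=4.0, within_var=100.0, n=30)
print(round(rel, 2))                         # ~0.55: too low to rank units
print(shrunken_estimate(78.0, 72.0, rel))    # pulled toward the average
```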
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem, which is equivalent to a linear program, is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Pinquart, Martin
2017-05-01
The present meta-analysis integrates research from 1,435 studies on associations of parenting dimensions and styles with externalizing symptoms in children and adolescents. Parental warmth, behavioral control, autonomy granting, and an authoritative parenting style showed very small to small negative concurrent and longitudinal associations with externalizing problems. In contrast, harsh control, psychological control, authoritarian, permissive, and neglectful parenting were associated with higher levels of externalizing problems. The strongest associations were observed for harsh control and psychological control. Parental warmth, behavioral control, harsh control, psychological control, autonomy granting, authoritative, and permissive parenting predicted change in externalizing problems over time, with associations of externalizing problems with warmth, behavioral control, harsh control, psychological control, and authoritative parenting being bidirectional. Moderating effects of sampling, child's age, form of externalizing problems, rater of parenting and externalizing problems, quality of measures, and publication status were identified. Implications for future research and practice are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The Effects of Maternal Social Phobia on Mother-Infant Interactions and Infant Social Responsiveness
ERIC Educational Resources Information Center
Murray, Lynne; Cooper, Peter; Creswell, Cathy; Schofield, Elizabeth; Sack, Caroline
2007-01-01
Background: Social phobia aggregates in families. The genetic contribution to intergenerational transmission is modest, and parenting is considered important. Research on the effects of social phobia on parenting has been subject to problems of small sample size, heterogeneity of samples and lack of specificity of observational frameworks. We…
USDA-ARS?s Scientific Manuscript database
The presence of antibiotic residues in edible animal products is a human food safety concern. To address this potential problem, the government samples edible tissues, such as muscle, to monitor for residues. Due to loss of valuable product and analytical difficulties only a small percentage of po...
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time on processing particles with very small weights. The adaptive appearance model for the occluded object refers to the prediction results of the Kalman filters to determine the region that should be updated, and avoids the problem of using inadequate information to update the appearance under occlusion. The experimental results have shown that a small number of particles are sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
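A minimal sketch of the interplay described here, with a stand-in appearance model and illustrative parameters (not the authors' implementation): the Kalman prediction supplies both the sampling position and the sampling range, and particles are generated only when the detection is judged unreliable.

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[1, 1], [0, 1]], float)        # constant-velocity motion model
H = np.array([[1, 0]], float)
Q, R = np.eye(2) * 0.01, np.array([[4.0]])

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

def appearance_score(pos):                   # stand-in appearance likelihood
    return np.exp(-0.5 * (pos - 10.0) ** 2 / 2.0)

x, P = np.array([8.0, 1.0]), np.eye(2)       # state: position, velocity
x, P = predict(x, P)
occluded = True                              # detector failed on this frame
if occluded:
    # Adaptive sampling: particle spread follows the predicted uncertainty.
    particles = rng.normal(x[0], np.sqrt(P[0, 0]), size=200)
    weights = appearance_score(particles)
    z = np.array([particles[weights.argmax()]])   # best-scoring measurement
    x, P = update(x, P, z)
print(x)
```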
Therapeutic effects of problem-solving training and play-reading groups.
Coché, E; Douglas, A A
1977-07-01
Twenty-five adult patients of a private psychiatric hospital participated in small groups that convened for eight sessions in order to increase skills in interpersonal problem solving. The groups repeatedly went through the steps of (a) bringing up a problem; (b) clarifying it; (c) proposing solutions; and (d) weighing the solutions. A control group of 29 patients did not receive problem-solving training. A "placebo" sample of 21 Ss also met in small groups, but their task was to read comedies together. The results obtained through a series of analyses of covariance showed that the experimental condition was more successful than the other two in improving people's impulse control, self-esteem and feeling of competence. The play-reading condition was found to be as helpful as the problem-solving groups in reducing depression and general psychopathology. Control patients showed significantly less improvement than did patients in the other conditions.
Panadero, Sonia; Vázquez, José Juan; Martín, Rosa María
2016-06-14
The work analyzes different aspects related to alcohol consumption among homeless people and people at risk of social exclusion. The data was gathered from a representative sample of homeless people in Madrid (n = 188) and a sample of people at risk of social exclusion (n = 164) matched in sex, age, and origin (Spaniards vs. foreigners). The results showed that homeless people present a greater consumption of alcohol and have experienced more problems derived from its consumption than people at risk of social exclusion. Most of the homeless people who had alcohol-related problems had had them prior to their homelessness, and they stated they had poorer health and had experienced a greater number of homelessness episodes. Despite the relevance of problems related to alcohol among our sample, only a small percentage of the sample had participated in treatment programs for alcohol consumption.
Salvatore, Jessica E; Aliev, Fazil; Edwards, Alexis C; Evans, David M; Macleod, John; Hickman, Matthew; Lewis, Glyn; Kendler, Kenneth S; Loukola, Anu; Korhonen, Tellervo; Latvala, Antti; Rose, Richard J; Kaprio, Jaakko; Dick, Danielle M
2014-04-10
Alcohol problems represent a classic example of a complex behavioral outcome that is likely influenced by many genes of small effect. A polygenic approach, which examines aggregate measured genetic effects, can have predictive power in cases where individual genes or genetic variants do not. In the current study, we first tested whether polygenic risk for alcohol problems-derived from genome-wide association estimates of an alcohol problems factor score from the age 18 assessment of the Avon Longitudinal Study of Parents and Children (ALSPAC; n = 4304 individuals of European descent; 57% female)-predicted alcohol problems earlier in development (age 14) in an independent sample (FinnTwin12; n = 1162; 53% female). We then tested whether environmental factors (parental knowledge and peer deviance) moderated polygenic risk to predict alcohol problems in the FinnTwin12 sample. We found evidence for both polygenic association and for additive polygene-environment interaction. Higher polygenic scores predicted a greater number of alcohol problems (range of Pearson partial correlations 0.07-0.08, all p-values ≤ 0.01). Moreover, genetic influences were significantly more pronounced under conditions of low parental knowledge or high peer deviance (unstandardized regression coefficients (b), p-values (p), and percent of variance (R2) accounted for by interaction terms: b = 1.54, p = 0.02, R2 = 0.33%; b = 0.94, p = 0.04, R2 = 0.30%, respectively). Supplementary set-based analyses indicated that the individual top single nucleotide polymorphisms (SNPs) contributing to the polygenic scores were not individually enriched for gene-environment interaction. Although the magnitudes of the observed effects are small, this study illustrates the usefulness of polygenic approaches for understanding the pathways by which measured genetic predispositions come together with environmental factors to predict complex behavioral outcomes.
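As a hedged, generic sketch of the two analysis steps described (a polygenic score as a weighted allele-count sum, then a score-by-environment interaction test), with all data, weights, and settings invented for the example and statsmodels assumed available:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, m = 1000, 200                                   # individuals, SNPs
genotypes = rng.integers(0, 3, size=(n, m)).astype(float)  # 0/1/2 allele counts
gwas_betas = rng.normal(0, 0.05, size=m)           # weights from a discovery GWAS

prs = genotypes @ gwas_betas                       # polygenic risk score
parental_knowledge = rng.normal(size=n)            # environmental moderator
y = 0.1 * prs - 0.2 * parental_knowledge - 0.1 * prs * parental_knowledge + rng.normal(size=n)

# The coefficient on the product term tests whether genetic influences are
# more pronounced under particular environmental conditions.
X = sm.add_constant(np.column_stack([prs, parental_knowledge, prs * parental_knowledge]))
print(sm.OLS(y, X).fit().summary())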
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-01-01
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments, there are still many problems that remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important unsolved problems is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868
Can Financial Need Analysis be Simplified?
ERIC Educational Resources Information Center
Orwig, M. D.; Jones, Paul K.
This paper examines the problem of collecting financial data on aid applicants. A 10% sample (12,383) of student records was taken from the 1968-69 alphabetic history file for the ACT Student Need Analysis Service. Random sub-samples were taken in certain phases of the study. A relatively small number of financial variables were found to predict…
Röttgers, Rüdiger; Doxaran, David; Dupouy, Cecile
2016-01-25
The accurate determination of light absorption coefficients of particles in water, especially in very oligotrophic oceanic areas, is still a challenging task. Concentrating aquatic particles on a glass fiber filter and using the Quantitative Filter Technique (QFT) is a common practice. Its routine application is limited by the necessary use of high-performance spectrophotometers, distinct problems induced by the strong scattering of the filters, and artifacts induced by freezing and storing samples. Measurements of the sample inside a large integrating sphere reduce scattering effects, and direct field measurements avoid artifacts due to sample preservation. A small, portable Integrating Cavity Absorption Meter setup (QFT-ICAM) is presented that allows rapid measurement of a sample filter. The measurement technique takes into account artifacts due to chlorophyll-a fluorescence. The QFT-ICAM is shown to be highly comparable to similar measurements in laboratory spectrophotometers, in terms of accuracy, precision, and path length amplification effects. No spectral artifacts were observed when compared to measurement of samples in suspension, whereas freezing and storing of sample filters induced small losses of water-soluble pigments (probably phycoerythrins). Remaining problems in determining the particulate absorption coefficient with the QFT-ICAM are strong sample-to-sample variations of the path length amplification, as well as fluorescence by pigments that is emitted in a different spectral region than that of chlorophyll-a.
Perceived risk associated with ecstasy use: a latent class analysis approach
Martins, SS; Carlson, RG; Alexandre, PK; Falck, RS
2011-01-01
This study aims to define categories of perceived health problems among ecstasy users based on observed clustering of their perceptions of ecstasy-related health problems. Data from a community sample of ecstasy users (n=402) aged 18 to 30 in Ohio were used in this study. Data were analyzed via Latent Class Analysis (LCA) and regression. This study identified five different subgroups of ecstasy users based on their perceptions of health problems they associated with their ecstasy use. Almost one third of the sample (28.9%) belonged to a class with a low level of perceived problems (Class 4). About one fourth of the sample (25.6%, Class 2) had high probabilities of perceiving problems on sexual-related items, but generally low or moderate probabilities of perceiving problems in other areas. Roughly one-fifth of the sample (21.1%, Class 1) had moderate probabilities of perceiving ecstasy health-related problems in all areas. A small proportion of respondents had high probabilities of reporting perceived memory and cognitive problems (11.9%, Class 5) or of perceiving ecstasy-related problems in all areas (12.4%, Class 3). A large proportion of ecstasy users perceive either low or moderate risk associated with their ecstasy use. It is important to further investigate whether lower levels of risk perception are associated with persistence of ecstasy use. PMID:21296504
NASA Astrophysics Data System (ADS)
Xia, Xintao; Wang, Zhongyu
2008-10-01
For some methods of stability analysis of a system using statistics, it is difficult to resolve the problems of unknown probability distribution and small samples. Therefore, a novel method is proposed in this paper to resolve these problems. This method is independent of the probability distribution and is useful for small-sample systems. After rearrangement of the original data series, the order difference and two polynomial membership functions are introduced to estimate the true value, the lower bound, and the upper bound of the system using fuzzy-set theory. Then the empirical distribution function is investigated to ensure a confidence level above 95%, and the degree of similarity is presented to evaluate the stability of the system. Computer simulation cases investigate stable systems with various probability distributions, unstable systems with linear systematic errors and periodic systematic errors, and some mixed systems. The proposed method of analyzing system stability is thereby validated.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differs widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
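To make the argument concrete, a minimal Python sketch of the standard deterministic approach the authors caution against: estimate transition probabilities by row-normalized counts from a short observed state sequence and read off the implied steady state. With few, noisy observations both can deviate widely from the truth (this is an illustration, not the paper's MMSE/ML estimators):

import numpy as np

def estimate_transitions(state_seq, n_states):
    # ML estimate of a Markov transition matrix: count observed transitions
    # and normalize each row.
    counts = np.zeros((n_states, n_states))
    for a, b in zip(state_seq[:-1], state_seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def steady_state(P):
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return pi / pi.sum()

rng = np.random.default_rng(1)
true_P = np.array([[0.9, 0.1], [0.2, 0.8]])
seq = [0]
for _ in range(50):                   # deliberately small sample
    seq.append(rng.choice(2, p=true_P[seq[-1]]))
P_hat = estimate_transitions(seq, 2)
print(P_hat, steady_state(P_hat))     # compare against true_P and its steady state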
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Broecker, Peter; Trebst, Simon
2016-08-01
Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.
NASA Astrophysics Data System (ADS)
Yahyaei, Mohsen; Bashiri, Mahdi
2017-12-01
The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying a multi-cut Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
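A toy Python sketch of the sample average approximation step only (the hub sets, failure probabilities, and recourse cost below are invented; the paper's actual model and its Benders decomposition are not reproduced):

import numpy as np

def saa_objective(hubs, scenario_costs):
    # SAA: replace the expectation over all failure scenarios with the
    # average over a modest random sample of scenarios.
    return np.mean([cost(hubs) for cost in scenario_costs])

rng = np.random.default_rng(0)
candidates = [frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})]

def make_scenario():
    failed = set(np.flatnonzero(rng.random(3) < 0.1))   # each hub fails w.p. 0.1
    # Large penalty if every opened hub failed, otherwise a per-hub cost.
    return lambda hubs, failed=failed: 100.0 if not (set(hubs) - failed) else 10.0 * len(hubs)

scenarios = [make_scenario() for _ in range(500)]
best = min(candidates, key=lambda hubs: saa_objective(hubs, scenarios))
print(best)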
Incipient fault detection study for advanced spacecraft systems
NASA Technical Reports Server (NTRS)
Milner, G. Martin; Black, Michael C.; Hovenga, J. Mike; Mcclure, Paul F.
1986-01-01
A feasibility study to investigate the application of vibration monitoring to the rotating machinery of planned NASA advanced spacecraft components is described. Factors investigated include: (1) special problems associated with small, high RPM machines; (2) application across multiple component types; (3) microgravity; (4) multiple fault types; (5) eight different analysis techniques including signature analysis, high frequency demodulation, cepstrum, clustering, amplitude analysis, and pattern recognition are compared; and (6) small sample statistical analysis is used to compare performance by computation of probability of detection and false alarm for an ensemble of repeated baseline and faulted tests. Both detection and classification performance are quantified. Vibration monitoring is shown to be an effective means of detecting the most important problem types for small, high RPM fans and pumps typical of those planned for the advanced spacecraft. A preliminary monitoring system design and implementation plan is presented.
Rosenblum, Michael A; Laan, Mark J van der
2009-01-07
The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
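The paper's construction is not reproduced here, but one concrete instance of the idea is an empirical-Bernstein-style interval, valid for every sample size for variables bounded in [0, b]; the exact constants below follow a commonly cited form of the bound and should be treated as an assumption:

import numpy as np

def empirical_bernstein_ci(x, b, delta=0.05):
    # Finite-sample CI for the mean of observations in [0, b], using the
    # sample variance in place of the unknown variance. Guaranteed coverage
    # at any n, at the price of wider intervals than normal-theory CIs.
    x = np.asarray(x, dtype=float)
    n = len(x)
    v = x.var(ddof=1)
    log_term = np.log(4.0 / delta)          # two-sided via a union bound
    half = np.sqrt(2 * v * log_term / n) + 7 * b * log_term / (3 * (n - 1))
    return x.mean() - half, x.mean() + half

rng = np.random.default_rng(0)
sample = rng.uniform(0, 1, size=15)         # small sample
print(empirical_bernstein_ci(sample, b=1.0))  # visibly wider than a normal CI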
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
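A minimal sketch of the sequential idea in its textbook form, Wald's SPRT for a Bernoulli transmission probability (p0 = 0.5 corresponds to no association in the TDT setting; the alternative p1 and the error rates are invented for illustration):

import numpy as np

def sprt(observations, p0=0.5, p1=0.6, alpha=0.05, beta=0.05):
    # Accumulate the log-likelihood ratio and stop at Wald's thresholds.
    # The third outcome is what distinguishes SPRT from a fixed-sample test:
    # for some SNPs we simply do not yet have enough evidence.
    upper = np.log((1 - beta) / alpha)      # accept H1 at or above this
    lower = np.log(beta / (1 - alpha))      # accept H0 at or below this
    llr = 0.0
    for x in observations:
        llr += np.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1: associated"
        if llr <= lower:
            return "H0: not associated"
    return "undecided: keep sampling"

rng = np.random.default_rng(0)
print(sprt(rng.random(200) < 0.6))          # transmissions from a truly associated SNP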
An agglomerative hierarchical clustering approach to visualisation in Bayesian clustering problems
Dawson, Kevin J.; Belkhir, Khalid
2009-01-01
Clustering problems (including the clustering of individuals into outcrossing populations, hybrid generations, full-sib families and selfing lines) have recently received much attention in population genetics. In these clustering problems, the parameter of interest is a partition of the set of sampled individuals - the sample partition. In a fully Bayesian approach to clustering problems of this type, our knowledge about the sample partition is represented by a probability distribution on the space of possible sample partitions. Since the number of possible partitions grows very rapidly with the sample size, we cannot visualise this probability distribution in its entirety, unless the sample is very small. As a solution to this visualisation problem, we recommend using an agglomerative hierarchical clustering algorithm, which we call the exact linkage algorithm. This algorithm is a special case of the maximin clustering algorithm that we introduced previously. The exact linkage algorithm is now implemented in our software package Partition View. The exact linkage algorithm takes the posterior co-assignment probabilities as input, and yields as output a rooted binary tree - or, more generally, a forest of such trees. Each node of this forest defines a set of individuals, and the node height is the posterior co-assignment probability of this set. This provides a useful visual representation of the uncertainty associated with the assignment of individuals to categories. It is also a useful starting point for a more detailed exploration of the posterior distribution in terms of the co-assignment probabilities. PMID:19337306
Sampling problems: The small scale structure of precipitation
NASA Technical Reports Server (NTRS)
Crane, R. K.
1981-01-01
The quantitative measurement of precipitation characteristics for any area on the surface of the Earth is not an easy task. Precipitation is rather variable in both space and time, and the distribution of surface rainfall data for a given location is typically substantially skewed. There are a number of precipitation processes at work in the atmosphere, and few of them are well understood. The formal theory on sampling and estimating precipitation appears considerably deficient. Little systematic attention is given to nonsampling errors that always arise in utilizing any measurement system. Although the precipitation measurement problem is an old one, it continues to be one that is in need of systematic and careful attention. A brief history of the presently competing measurement technologies should aid us in understanding the problems inherent in this measurement task.
Distribution-Preserving Stratified Sampling for Learning Problems.
Cervellera, Cristiano; Maccio, Danilo
2017-06-09
The need for extracting a small sample from a large amount of real data, possibly streaming, arises routinely in learning problems, e.g., for storage, to cope with computational limitations, to obtain good training/test/validation sets, and to select minibatches for stochastic gradient neural network training. Unless we have reasons to select the samples in an active way dictated by the specific task and/or model at hand, it is important that the distribution of the selected points is as similar as possible to that of the original data. This is obvious for unsupervised learning problems, where the goal is to gain insights on the distribution of the data, but it is also relevant for supervised problems, where the theory explains how the training set distribution influences the generalization error. In this paper, we analyze the technique of stratified sampling from the point of view of distances between probabilities. This allows us to introduce an algorithm, based on recursive binary partition of the input space, aimed at obtaining samples whose distribution is as close as possible to that of the original data. A theoretical analysis is proposed, proving the (greedy) optimality of the procedure together with explicit error bounds. An adaptive version of the algorithm is also introduced to cope with streaming data. Simulation tests on various data sets and different learning tasks are also provided.
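A hedged Python sketch of the recursive-binary-partition idea (the split rule, depth cap, and budget rounding below are assumptions for illustration, not the authors' exact algorithm or its error bounds):

import numpy as np

def stratified_sample(X, k, rng, depth=0, max_depth=6):
    # Recursively halve the input space at the median of its widest
    # dimension and allocate the sampling budget proportionally to stratum
    # size, so the selected points mimic the distribution of the full data.
    n = len(X)
    if k <= 0 or n == 0:
        return np.empty((0, X.shape[1]))
    if depth == max_depth or n <= k:
        idx = rng.choice(n, size=min(k, n), replace=False)
        return X[idx]
    dim = np.argmax(X.max(axis=0) - X.min(axis=0))
    median = np.median(X[:, dim])
    left, right = X[X[:, dim] <= median], X[X[:, dim] > median]
    k_left = int(round(k * len(left) / n))
    return np.vstack([stratified_sample(left, k_left, rng, depth + 1, max_depth),
                      stratified_sample(right, k - k_left, rng, depth + 1, max_depth)])

rng = np.random.default_rng(0)
data = rng.normal(size=(10000, 2))
subset = stratified_sample(data, 100, rng)
print(subset.shape, subset.mean(axis=0))    # close to the full-data mean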
Gini, Gianluca; Card, Noel A; Pozzoli, Tiziana
2018-03-01
This meta-analysis examined the associations between cyber-victimization and internalizing problems controlling for the occurrence of traditional victimization. Twenty independent samples with a total of 90,877 participants were included. Results confirmed the significant intercorrelation between traditional and cyber-victimization (r = .43). They both have medium-to-large bivariate correlations with internalizing problems. Traditional victimization (sr = .22) and cyber-victimization (sr = .12) were also uniquely related to internalizing problems. The difference in the relations between each type of victimization and internalizing problems was small (differential d = .06) and not statistically significant (p = .053). Moderation of these effect sizes by sample characteristics (e.g., age and proportion of girls) and study features (e.g., whether a definition of bullying was provided to participants and the time frame used as reference) was investigated. Results are discussed within the extant literature on cyber-aggression and cyber-victimization and future directions are proposed.
A novel multi-target regression framework for time-series prediction of drug efficacy.
Li, Haiqing; Zhang, Wei; Chen, Ying; Guo, Yumeng; Li, Guo-Zheng; Zhu, Xiaoxin
2017-01-18
Extracting knowledge from small samples is a challenging pharmacokinetic problem to which statistical methods can be applied. Pharmacokinetic data are special due to small samples of high dimensionality, which makes it difficult to adopt conventional methods to predict the efficacy of a traditional Chinese medicine (TCM) prescription. The main purpose of our study is to obtain some knowledge of the correlation structure in TCM prescriptions. Here, a novel method named the Multi-target Regression Framework is proposed to deal with the problem of efficacy prediction. We exploit the correlation between the values of different time sequences and add the targets of previous time points as features to predict the value at the current time. Several experiments are conducted to test the validity of our method, and the results of leave-one-out cross-validation clearly demonstrate the competitiveness of our framework. Compared with linear regression, artificial neural networks, and partial least squares, support vector regression combined with our framework demonstrates the best performance, and appears to be more suitable for this task.
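A rough sketch of the framework's central trick, appending earlier targets to the features of later time points, here with scikit-learn's SVR standing in for the paper's support vector regression (the data shapes and model settings are invented):

import numpy as np
from sklearn.svm import SVR

def fit_lagged_predictors(X, Y):
    # One regressor per time point; targets observed at earlier time points
    # are appended to the baseline features, exploiting the correlation
    # between successive efficacy measurements. At prediction time the
    # earlier outputs would be chained in as predicted values.
    models = []
    for t in range(Y.shape[1]):
        X_t = np.hstack([X, Y[:, :t]])
        models.append(SVR(kernel="rbf", C=1.0).fit(X_t, Y[:, t]))
    return models

# Toy usage: 12 subjects (small sample), 6 covariates, efficacy at 5 times.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 6))
Y = np.cumsum(0.5 * X[:, :1] + rng.normal(0.2, 0.1, size=(12, 5)), axis=1)
models = fit_lagged_predictors(X, Y)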
Upshur, Carole; Weinreb, Linda; Bharel, Monica; Reed, George; Frisard, Christine
2015-04-01
A clinician-randomized trial was conducted using the chronic care model for disease management for alcohol use problems among n = 82 women served in a health care for the homeless clinic. Women with problem alcohol use received either usual care or an intervention consisting of a primary care provider (PCP) brief intervention, referral to addiction services, and on-going support from a care manager (CM) for 6 months. Both groups significantly reduced their alcohol consumption, with a small effect size favoring intervention at 3 months, but there were no significant differences between groups in reductions in drinking or in housing stability, or mental or physical health. However, intervention women had significantly more frequent participation in substance use treatment services. Baseline differences and small sample size limit generalizability, although substantial reductions in drinking for both groups suggest that screening and PCP brief treatment are promising interventions for homeless women with alcohol use problems.
NASA Astrophysics Data System (ADS)
Marsella, Adam M.; Huang, Jiping; Ellis, David A.; Mabury, Scott A.
1999-12-01
An undergraduate field experiment is described for the measurement of nicotine and various carbonyl compounds arising from environmental tobacco smoke. Students are introduced to practical techniques in HPLC-UV and GC-NPD. Also introduced are current methods in personal air sampling using small and portable field sampling pumps. Carbonyls (formaldehyde, acetaldehyde, acrolein, and acetone) are sampled with silica solid-phase extraction cartridges impregnated with 2,4-dinitrophenylhydrazine, eluted, and analyzed by HPLC-UV (360-380 nm). Nicotine is sampled using XAD-2 cartridges, extracted, and analyzed by GC-NPD. Students gain an appreciation for the problems associated with measuring ubiquitous pollutants such as formaldehyde, as well as the issue of chromatographic peak resolution when trying to resolve closely eluting peaks. By allowing the students to formulate their own hypothesis and sampling scheme, critical thinking and problem solving are developed in addition to analysis skills. As an experiment in analytical environmental chemistry, this laboratory introduces the application of field sampling and analysis techniques to the undergraduate lab.
Weight Status and Behavioral Problems among Very Young Children in Chile.
Kagawa, Rose M C; Fernald, Lia C H; Behrman, Jere R
2016-01-01
Our objective was to explore the association between weight status and behavioral problems in children before school age. We examined whether the association between weight status and behavioral problems varied by age and sex. This study used cross-sectional data from a nationally-representative sample of children and their families in Chile (N = 11,207). These children were selected using a cluster-stratified random sampling strategy. Data collection for this study took place in 2012 when the children were 1.5-6 years of age. We used multivariable analyses to examine the association between weight status and behavioral problems (assessed using the Child Behavior Checklist), while controlling for child's sex, indigenous status, birth weight, and months breastfed; primary caregiver's BMI and education level; and household wealth. Approximately 24% of our sample was overweight or obese. Overweight or obese girls showed more behavioral problems than normal weight girls at age 6 (β = 0.270 SD, 95% CI = 0.047, 0.493, P = 0.018). Among boys age 1 to 5 years, overweight/obesity was associated with a small reduction in internalizing behaviors (β = -0.09 SD, 95% CI = -0.163, -0.006, P = 0.034). Our data suggest that the associations between weight status and behavioral problems vary across age and sex.
Fuzzy support vector machine for microarray imbalanced data classification
NASA Astrophysics Data System (ADS)
Ladayya, Faroh; Purnami, Santi Wulan; Irhamah
2017-11-01
DNA microarrays are data containing gene expression with small sample sizes and a high number of features. Furthermore, class imbalance is a common problem in microarray data. This occurs when a dataset is dominated by a class which has significantly more instances than the other, minority classes. Therefore, a classification method is needed that solves the problems of high-dimensional and imbalanced data. Support Vector Machine (SVM) is one of the classification methods that is capable of handling large or small samples, nonlinearity, high dimensionality, over-learning and local minimum issues. SVM has been widely applied to DNA microarray data classification and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data remain a problem because SVM treats all samples with the same importance, so the results are biased against the minority class. To overcome the imbalanced data, Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points provide different contributions to the classifier. The minority classes have large fuzzy memberships, so FSVM can pay more attention to the samples with larger fuzzy membership. Given that DNA microarray data are high dimensional with a very large number of features, it is necessary to perform feature selection first using the Fast Correlation Based Filter (FCBF). In this study, SVM, FSVM, and both methods combined with FCBF are compared in terms of classification performance. Based on the overall results, FSVM on selected features has the best classification performance compared to SVM.
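The abstract does not specify the membership function; as a minimal FSVM-flavored sketch, scikit-learn's SVC accepts per-sample weights at fit time, so a heuristic fuzzy membership (inverse class frequency, damped with distance from the class centroid so likely outliers count less) can stand in:

import numpy as np
from sklearn.svm import SVC

def fuzzy_memberships(X, y):
    # Larger weights for the minority class, scaled down toward the edge of
    # each class so that probable outliers contribute less to the boundary.
    w = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        mask = y == c
        d = np.linalg.norm(X[mask] - X[mask].mean(axis=0), axis=1)
        radius = d.max() + 1e-12
        class_weight = len(y) / (2.0 * mask.sum())   # inverse class frequency
        w[mask] = class_weight * (1.0 - 0.5 * d / radius)
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 20)), rng.normal(1, 1, (10, 20))])  # imbalanced
y = np.array([0] * 90 + [1] * 10)
clf = SVC(kernel="rbf").fit(X, y, sample_weight=fuzzy_memberships(X, y))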
Ecologists are often faced with problem of small sample size, correlated and large number of predictors, and high noise-to-signal relationships. This necessitates excluding important variables from the model when applying standard multiple or multivariate regression analyses. In ...
Ozminkowski, R J; Goetzel, R Z
2001-01-01
The authors describe the most important methodological challenges often encountered in conducting research and evaluation on the financial impact of health promotion. These include selection bias, skewed data, small sample sizes, and the choice of metrics. They discuss when these problems can and cannot be overcome and suggest how some of them can be addressed by creating an appropriate framework for the study and using state-of-the-art statistical methods.
Variational Approach to Enhanced Sampling and Free Energy Calculations
NASA Astrophysics Data System (ADS)
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, beside being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
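For reference, the functional introduced in this work can be written as follows (reconstructed from the published paper; s denotes the collective variables, F(s) the free energy surface, p(s) a chosen target distribution, and beta the inverse temperature):

\Omega[V] = \frac{1}{\beta}\log\frac{\int \mathrm{d}s\, e^{-\beta\left[F(s)+V(s)\right]}}{\int \mathrm{d}s\, e^{-\beta F(s)}} + \int \mathrm{d}s\, p(s)\, V(s)

The functional is convex, and its minimizer satisfies V(s) = -F(s) - (1/\beta)\log p(s) up to an immaterial constant, which is the simple relation between the optimal bias and the free energy surface mentioned in the abstract.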
The effects of cosmetic surgery on body image, self-esteem, and psychological problems.
von Soest, T; Kvalem, I L; Roald, H E; Skolleborg, K C
2009-10-01
This study aims to investigate whether cosmetic surgery has an effect on an individual's body image, general self-esteem, and psychological problems. Further tests were conducted to assess whether the extent of psychological problems before surgery influenced improvements in postoperative psychological outcomes. Questionnaire data from 155 female cosmetic surgery patients from a plastic surgery clinic were obtained before and approximately 6 months after surgery. The questionnaire consisted of measures on body image, self-esteem, and psychological problems. Pre- and postoperative values were compared. Pre- and postoperative measures were also compared with the data compiled from a representative sample of 838 Norwegian women, aged 22-55, with no cosmetic surgery experience. No differences in psychological problems between the presurgery patient and comparison samples were found, whereas differences in body image and self-esteem between the sample groups were reported in an earlier publication. Analyses further revealed an improvement in body image (satisfaction with own appearance) after surgery. A significant but rather small effect on self-esteem was also found, whereas the level of psychological problems did not change after surgery. Postoperative measures of appearance satisfaction, self-esteem, and psychological problems did not differ from values derived from the comparison sample. Finally, few psychological problems before surgery predicted a greater improvement in appearance satisfaction and self-esteem after surgery. The study provides evidence of improvement in satisfaction with own appearance after cosmetic surgery, a variable that is thought to play a central role in understanding the psychology of cosmetic surgery patients. The study also points to the factors that surgeons should be aware of, particularly the role of psychological problems, which could inhibit the positive effects of cosmetic surgery.
Linear systems on balancing chemical reaction problem
NASA Astrophysics Data System (ADS)
Kafi, R. A.; Abdillah, B.
2018-01-01
The concept of linear systems appears in a variety of applications. This paper presents a small sample of the wide variety of real-world problems that can be studied with linear systems. We show that the problem of balancing a chemical reaction can be described by a homogeneous linear system. The solution of the system is obtained by performing elementary row operations, and it yields the coefficients of the balanced chemical reaction. In addition, we present a computational calculation to show that mathematical software such as Matlab can be used to solve such systems, instead of performing row operations manually.
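As an illustration, balancing reduces to computing the nullspace of an element-by-species matrix; a short Python sketch with sympy (the abstract mentions Matlab, which works equivalently):

from sympy import Matrix

# Balance CH4 + O2 -> CO2 + H2O. Rows are elements (C, H, O); columns are
# species, with product columns negated so a balanced reaction is a
# nullspace vector of the homogeneous system A x = 0.
A = Matrix([
    [1, 0, -1,  0],   # C
    [4, 0,  0, -2],   # H
    [0, 2, -2, -1],   # O
])
coeffs = A.nullspace()[0]
coeffs = coeffs / min(abs(c) for c in coeffs if c != 0)  # scale to integers
print(coeffs.T)       # [1, 2, 1, 2], i.e. CH4 + 2 O2 -> CO2 + 2 H2O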
Sampled-data chain-observer design for a class of delayed nonlinear systems
NASA Astrophysics Data System (ADS)
Kahelras, M.; Ahmed-Ali, T.; Giri, F.; Lamnabhi-Lagarrigue, F.
2018-05-01
The problem of observer design is addressed for a class of triangular nonlinear systems with not-necessarily small delay and sampled output measurements. One more difficulty is that the system state matrix is dependent on the un-delayed output signal which is not accessible to measurement, making existing observers inapplicable. A new chain observer, composed of m elementary observers in series, is designed to compensate for output sampling and arbitrary large delays. The larger the time-delay the larger the number m. Each elementary observer includes an output predictor that is conceived to compensate for the effects of output sampling and a fractional delay. The predictors are defined by first-order ordinary differential equations (ODEs) much simpler than those of existing predictors which involve both output and state predictors. Using a small gain type analysis, sufficient conditions for the observer to be exponentially convergent are established in terms of the minimal number m of elementary observers and the maximum sampling interval.
Risk-Based Sampling: I Don't Want to Weight in Vain.
Powell, Mark R
2015-12-01
Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers.
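A toy Python illustration of the estimation-error point (numbers invented; the mean-variance weights below are the simple unconstrained w proportional to inv(Sigma) mu, not Markowitz's full constrained program):

import numpy as np

def mean_variance_weights(mu, cov):
    # Classic mean-variance-style allocation; with estimated inputs these
    # weights are notoriously unstable.
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

rng = np.random.default_rng(0)
true_mu, true_cov = np.full(5, 0.1), np.eye(5) * 0.04

# Estimate the inputs from a short history: small estimation errors produce
# wildly varying "optimal" weights, while equal allocation (1/n) is stable
# and often performs better out of sample.
history = rng.multivariate_normal(true_mu, true_cov, size=24)
w_opt = mean_variance_weights(history.mean(axis=0), np.cov(history, rowvar=False))
w_equal = np.full(5, 0.2)
print(np.round(w_opt, 2), w_equal)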
Guide to a condensed form of NASTRAN
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1978-01-01
A limited capability form of NASTRAN level 16 is presented to meet the needs of universities and small consulting firms. The input cards, the programming language of the direct matrix abstraction program, the plotting, the problem definition, and the modules' diagnostic messages are described. Sample problems relating to the analysis of linear static, vibration, and buckling are included. This guide can serve as a handbook for instructional courses in the use of NASTRAN or for users who need only the capability provided by the condensed form.
Childhood Reports of Food Neglect and Impulse Control Problems and Violence in Adulthood
Vaughn, Michael G.; Salas-Wright, Christopher P.; Naeger, Sandra; Huang, Jin; Piquero, Alex R.
2016-01-01
Food insecurity and hunger during childhood are associated with an array of developmental problems in multiple domains, including impulse control problems and violence. Unfortunately, extant research is based primarily on small convenience samples and an epidemiological assessment of the hunger-violence link is lacking. The current study employed data from Wave 1 (2001–2002) and Wave 2 (2004–2005) of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). The NESARC is a nationally representative sample of non-institutionalized U.S. residents aged 18 years and older. Participants who experienced frequent hunger during childhood had significantly greater impulsivity, worse self-control, and greater involvement in several forms of interpersonal violence. These effects were stronger among whites, Hispanics, and males. The findings support general theoretical models implicating impulse control problems as a key correlate of crime and violence and add another facet to the importance of ameliorating food neglect in the United States. PMID:27043598
Hu, Jianhua; Wright, Fred A
2007-03-01
The identification of the genes that are differentially expressed in two-sample microarray experiments remains a difficult problem when the number of arrays is very small. We discuss the implications of using ordinary t-statistics and examine other commonly used variants. For oligonucleotide arrays with multiple probes per gene, we introduce a simple model relating the mean and variance of expression, possibly with gene-specific random effects. Parameter estimates from the model have natural shrinkage properties that guard against inappropriately small variance estimates, and the model is used to obtain a differential expression statistic. A limiting value to the positive false discovery rate (pFDR) for ordinary t-tests provides motivation for our use of the data structure to improve variance estimates. Our approach performs well compared to other proposed approaches in terms of the false discovery rate.
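A hedged sketch in the spirit of the shrinkage idea: pull per-gene variance estimates toward the across-gene average with a prior weight d0 before forming t-like statistics (the paper's actual model ties variance to mean expression with gene-specific random effects; this simpler form is only an illustration):

import numpy as np

def shrunken_t(x1, x2, d0=4.0):
    # Two-sample t-like statistics per gene, with pooled per-gene variances
    # shrunk toward their across-gene mean; d0 acts as a prior sample size
    # and guards against inappropriately small variance estimates.
    n1, n2 = x1.shape[1], x2.shape[1]
    df = n1 + n2 - 2
    s2 = ((n1 - 1) * x1.var(axis=1, ddof=1) + (n2 - 1) * x2.var(axis=1, ddof=1)) / df
    s2_shrunk = (d0 * s2.mean() + df * s2) / (d0 + df)
    se = np.sqrt(s2_shrunk * (1.0 / n1 + 1.0 / n2))
    return (x1.mean(axis=1) - x2.mean(axis=1)) / se

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2000, 3)), rng.normal(size=(2000, 3))  # 3 arrays per group
t = shrunken_t(x1, x2)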
Taniguchi, Hidetaka; Sato, Hiroshi; Shirakawa, Tomohiro
2018-05-09
Human learners can generalize a new concept from a small number of samples. In contrast, conventional machine learning methods require large amounts of data to address the same types of problems. Humans have cognitive biases that promote fast learning. Here, we developed a method to reduce the gap between human beings and machines in this type of inference by utilizing cognitive biases. We implemented a human cognitive model into machine learning algorithms and compared their performance with the currently most popular methods, naïve Bayes, support vector machine, neural networks, logistic regression and random forests. We focused on the task of spam classification, which has been studied for a long time in the field of machine learning and often requires a large amount of data to obtain high accuracy. Our models achieved superior performance with small and biased samples in comparison with other representative machine learning methods.
Meng, Fan; Yang, Xiaomei; Zhou, Chenghu
2014-01-01
This paper studies the problem of the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics. It mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries, and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which we define as the problem of matrix completion from corrupted samplings and model as a convex optimization problem that minimizes a combination of the nuclear norm and the L1-norm. Meanwhile, we put forward a novel and effective algorithm based on augmented Lagrange multipliers to exactly solve the problem. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is superior if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform the traditional methods, not only in the simultaneous removal of Gaussian and impulse noise and in the restoration of a low-rank image matrix, but also in the preservation of textures and details in the image. PMID:25248103
Mason, W Alex; Toumbourou, John W; Herrenkohl, Todd I; Hemphill, Sheryl A; Catalano, Richard F; Patton, George C
2011-12-01
This paper examines whether there is cross-national similarity in the longitudinal relationship between early age alcohol use and adolescent alcohol problems. Potential mechanisms underlying this relationship also are examined, testing adolescent alcohol use, low self-regulation, and peer deviance as possible mediators. Students (N = 1,945) participating in the International Youth Development Study, a longitudinal panel survey study, responded to questions on alcohol use and influencing factors, and were followed annually over a 3-year period from 2002 to 2004 (98% retention rate). State-representative, community student samples were recruited in grade 7 in Washington State, United States (US, n = 961, 78% of those eligible; Mage = 13.09, SD = .44) and Victoria, Australia (n = 984, 76% of those eligible; Mage = 12.93, SD = .41). Analyses were conducted using multiple-group structural equation modeling. In both states, early age alcohol use (age 13) had a small but statistically significant association with subsequent alcohol problems (age 15). Overall, there was little evidence for mediation of early alcohol effects. Low self-regulation prospectively predicted peer deviance, alcohol use, and alcohol problems in both states. Peer deviance was more positively related to alcohol use and low self-regulation among students in Victoria compared to students in Washington State. The small but persistent association of early age alcohol use with alcohol problems across both samples is consistent with efforts to delay alcohol initiation to help prevent problematic alcohol use. Self-regulation was an important influence, supporting the need to further investigate the developmental contribution of neurobehavioral disinhibition.
Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.
Saunders, Christopher T; Wong, Wendy S W; Swamy, Sajani; Becq, Jennifer; Murray, Lisa J; Cheetham, R Keira
2012-07-15
Whole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity. We describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests. The Strelka workflow source code is available at ftp://strelka@ftp.illumina.com/. csaunders@illumina.com
Wittek, Charlotte Thoresen; Finserås, Turi Reiten; Pallesen, Ståle; Mentzoni, Rune Aune; Hanss, Daniel; Griffiths, Mark D; Molde, Helge
Video gaming has become a popular leisure activity in many parts of the world, and an increasing number of empirical studies examine the small minority that appears to develop problems as a result of excessive gaming. This study investigated prevalence rates and predictors of video game addiction in a sample of gamers, randomly selected from the National Population Registry of Norway (N = 3389). Results showed there were 1.4% addicted gamers, 7.3% problem gamers, 3.9% engaged gamers, and 87.4% normal gamers. Gender (being male) and age group (being young) were positively associated with addicted, problem, and engaged gaming. Place of birth (Africa, Asia, South and Middle America) was positively associated with addicted and problem gaming. Video game addiction was negatively associated with conscientiousness and positively associated with neuroticism. Poor psychosomatic health was positively associated with problem and engaged gaming. These factors provide insight into the field of video game addiction, and may help to provide guidance as to how individuals who are at risk of becoming addicted gamers can be identified.
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
An information diffusion technique to assess integrated hazard risks.
Huang, Chongfu; Huang, Yundong
2018-02-01
An integrated risk is a scene in the future associated with some adverse incident caused by multiple hazards. An integrated probability risk is the expected value of disaster. Due to the difficulty of assessing an integrated probability risk with a small sample, weighting methods and copulas are employed to avoid this obstacle. To resolve the problem, in this paper, we develop the information diffusion technique to construct a joint probability distribution and a vulnerability surface. Then, an integrated risk can be directly assessed by using a small sample. A case of an integrated risk caused by flood and earthquake is given to show how the suggested technique is used to assess the integrated risk of annual property loss.
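The abstract does not give the paper's specific diffusion functions or vulnerability surface; as a generic normal-diffusion sketch, each joint observation spreads its unit of information over a grid through a Gaussian kernel, turning a small sample into a smooth joint probability distribution (all numbers invented):

import numpy as np

def diffuse_2d(samples, grid_x, grid_y, h):
    # Normal information diffusion: every (hazard1, hazard2) observation is
    # smeared over the whole grid, then the result is normalized so it can
    # be used as a joint probability distribution.
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    q = np.zeros_like(gx, dtype=float)
    for x, y in samples:
        q += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * h ** 2))
    return q / q.sum()

rng = np.random.default_rng(0)
obs = np.column_stack([rng.normal(3, 1, 10), rng.normal(5, 0.5, 10)])  # 10 joint obs
P = diffuse_2d(obs, np.linspace(0, 6, 31), np.linspace(3, 7, 21), h=0.5)
# The integrated risk would then be the sum of P weighted by a vulnerability
# surface (expected loss) over the same grid.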
A Multimodal Approach to Emotion Recognition Ability in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jones, Catherine R. G.; Pickles, Andrew; Falcaro, Milena; Marsden, Anita J. S.; Happe, Francesca; Scott, Sophie K.; Sauter, Disa; Tregay, Jenifer; Phillips, Rebecca J.; Baird, Gillian; Simonoff, Emily; Charman, Tony
2011-01-01
Background: Autism spectrum disorders (ASD) are characterised by social and communication difficulties in day-to-day life, including problems in recognising emotions. However, experimental investigations of emotion recognition ability in ASD have been equivocal, hampered by small sample sizes, narrow IQ range and over-focus on the visual modality.…
A Note on Structural Equation Modeling Estimates of Reliability
ERIC Educational Resources Information Center
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Problems and Limitations in Studies on Screening for Language Delay
ERIC Educational Resources Information Center
Eriksson, Marten; Westerlund, Monica; Miniscalco, Carmela
2010-01-01
This study discusses six common methodological limitations in screening for language delay (LD) as illustrated in 11 recent studies. The limitations are (1) whether the studies define a target population, (2) whether the recruitment procedure is unbiased, (3) attrition, (4) verification bias, (5) small sample size and (6) inconsistencies in choice…
A two-dimensional approach to relationship conflict: meta-analytic findings.
Woodin, Erica M
2011-06-01
This meta-analysis of 64 studies (5,071 couples) used a metacoding system to categorize observed couple conflict behaviors into categories differing in terms of valence (positive to negative) and intensity (high to low) and resulting in five behavioral categories: hostility, distress, withdrawal, problem solving, and intimacy. Aggregate effect sizes indicated that women were somewhat more likely to display hostility, distress, and intimacy during conflict, whereas men were somewhat more likely to display withdrawal and problem solving. Gender differences were of a small magnitude. For both men and women, hostility was robustly associated with lower relationship satisfaction (medium effect), distress and withdrawal were somewhat associated (small effect), and intimacy and problem solving were both closely associated with relationship satisfaction (medium effect). Effect sizes were moderated in several cases by study characteristics including year of publication, developmental period of the sample, recruitment design, duration of observed conflict, method used to induce conflict, and type of coding system used. Findings from this meta-analysis suggest that high-intensity conflict behaviors of both a positive and negative nature are important correlates of relationship satisfaction and underscore the relatively small gender differences in many conflict behaviors.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
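A small Monte Carlo sketch of the between-subjects power problem (illustrative numbers; the strict alpha below merely stands in for multiple-comparison correction):

import numpy as np
from scipy import stats

def correlation_power(r_true, n, alpha=0.001, sims=2000, seed=0):
    # Fraction of simulated studies in which a true brain-behavior
    # correlation of size r_true is detected with n subjects.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        z = rng.multivariate_normal([0, 0], [[1, r_true], [r_true, 1]], size=n)
        r, p = stats.pearsonr(z[:, 0], z[:, 1])
        hits += p < alpha
    return hits / sims

print(correlation_power(0.2, n=25))    # weak diffuse effect at a common n: tiny power
print(correlation_power(0.2, n=200))   # a much larger sample restores power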
A computer system for analysis and transmission of spirometry waveforms using volume sampling.
Ostler, D V; Gardner, R M; Crapo, R O
1984-06-01
A microprocessor-controlled data gathering system for telemetry and analysis of spirometry waveforms was implemented using a completely digital design. Spirometry waveforms were obtained from an optical shaft encoder attached to a rolling seal spirometer. Time intervals between 10-ml volume changes (volume sampling) were stored. The digital design eliminated problems of analog signal sampling. The system measured flows up to 12 liters/sec with 5% accuracy and volumes up to 10 liters with 1% accuracy. Transmission of 10 waveforms took about 3 min. Error detection assured that no data were lost or distorted during transmission. A pulmonary physician at the central hospital reviewed the volume-time and flow-volume waveforms and interpretations generated by the central computer before forwarding the results and consulting with the rural physician. This system is suitable for use in a major hospital, rural hospital, or small clinic because of the system's simplicity and small size.
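The volume-sampling idea is simple to illustrate: the encoder emits a tick per fixed 10-ml volume increment, and flow follows as that increment divided by the stored time interval. A minimal sketch with invented timestamps (the values are hypothetical, for illustration only):

```python
# Volume sampling: recover flow (dV/dt) from the times of 10-ml increments.
import numpy as np

DV = 0.010  # litres per encoder tick (10 ml)
# Hypothetical timestamps (s) of successive 10-ml increments during exhalation.
t = np.array([0.000, 0.004, 0.007, 0.011, 0.016, 0.024, 0.040])

dt = np.diff(t)                      # stored intervals between volume changes
flow = DV / dt                       # instantaneous flow in litres/second
volume = DV * np.arange(1, len(t))   # cumulative exhaled volume

for v, f in zip(volume, flow):
    print(f"V={v*1000:4.0f} ml  flow={f:5.2f} L/s")
```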
Permutation modulation for quantization and information reconciliation in CV-QKD systems
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
2017-08-01
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal to Noise Ratio (SNR) and exacerbating the problem. Here we propose to use Permutation Modulation (PM) as a means of quantization of Gaussian vectors at Alice and Bob over a d-dimensional space with d ≫ 1. The goal is to achieve the necessary coding efficiency to extend the achievable range of continuous variable QKD by quantizing over larger and larger dimensions. A fractional bit rate per sample is easily achieved using PM at very reasonable computational cost. Order statistics are used extensively throughout the development, from generation of the seed vector in PM to analysis of error rates associated with the signs of the Gaussian samples at Alice and Bob as a function of the magnitude of the observed samples at Bob.
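To make the quantization step concrete, here is a minimal sketch of permutation-modulation quantization (the seed vector and dimension are assumptions for illustration, not the paper's construction): every codeword is a permutation of a fixed seed vector, so quantizing a vector amounts to permuting the seed into the rank order of the observed samples.

```python
# Permutation-modulation quantization sketch: the quantized vector is the
# shared seed vector permuted to match the rank order of the observed samples.
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
d = 16
x = rng.normal(size=d)              # Gaussian vector observed at Bob

seed = np.sort(rng.normal(size=d))  # seed vector (sorted once, shared publicly)
order = np.argsort(x)               # permutation carrying the ranks of x
q = np.empty(d)
q[order] = seed                     # quantized vector: seed permuted like x

# Rate: log2(d!) bits spread over d samples -> fractional bits per sample.
bits_per_sample = (lgamma(d + 1) / np.log(2)) / d
print(f"{bits_per_sample:.2f} bits/sample", np.round(q, 2))
```

With a seed containing repeated values the codebook shrinks, which is one way PM tunes the fractional rate per sample.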
Integrated Blood Barcode Chips
Fan, Rong; Vermesh, Ophir; Srivastava, Alok; Yen, Brian K.H.; Qin, Lidong; Ahmad, Habib; Kwong, Gabriel A.; Liu, Chao-Chao; Gould, Juliane; Hood, Leroy; Heath, James R.
2008-01-01
Blood comprises the largest version of the human proteome [1]. Changes of plasma protein profiles can reflect physiological or pathological conditions associated with many human diseases, making blood the most important fluid for clinical diagnostics [2-4]. Nevertheless, only a handful of plasma proteins are utilized in routine clinical tests. This is due to a host of reasons, including the intrinsic complexity of the plasma proteome [1], the heterogeneity of human diseases and the fast kinetics associated with protein degradation in sampled blood [5]. Simple technologies that can sensitively sample large numbers of proteins over broad concentration ranges, from small amounts of blood, and within minutes of sample collection, would assist in solving these problems. Herein, we report on an integrated microfluidic system, called the Integrated Blood Barcode Chip (IBBC). It enables on-chip blood separation and the rapid measurement of a panel of plasma proteins from small quantities of blood samples including a fingerprick of whole blood. This platform holds potential for inexpensive, non-invasive, and informative clinical diagnoses, particularly, for point-of-care. PMID:19029914
Self-similarity Clustering Event Detection Based on Triggers Guidance
NASA Astrophysics Data System (ADS)
Zhang, Xianfei; Li, Bicheng; Tian, Yuxuan
The traditional approach to Event Detection and Characterization (EDC) treats event detection as a classification problem: words serve as training samples for a classifier, which leads to an imbalance between positive and negative samples. This approach also suffers from data sparseness when the corpus is small. Rather than classifying events with words as samples, this paper clusters events when judging event types. It uses self-similarity to converge on the value of K in the K-means algorithm under the guidance of event triggers, optimizing the clustering algorithm. Then, by combining named entities with their relative position information, the method pins down the precise type of each event. The new method avoids the dependence on event templates found in traditional methods, and its event detection results can be used in automatic text summarization, text retrieval, and topic detection and tracking.
More Reasons to be Straightforward: Findings and Norms for Two Scales Relevant to Social Anxiety
Rodebaugh, Thomas L.; Heimberg, Richard G.; Brown, Patrick J.; Fernandez, Katya C.; Blanco, Carlos; Schneier, Franklin R.; Liebowitz, Michael R.
2011-01-01
The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. PMID:21388781
Powers, Christopher J; Bierman, Karen L; Coffman, Donna L
2016-08-01
Students with early-starting conduct problems often do poorly in school; they are disproportionately placed in restrictive educational placements outside of mainstream classrooms. Although intended to benefit students, research suggests that restrictive placements may exacerbate the maladjustment of youth with conduct problems. Mixed findings, small samples, and flawed designs limit the utility of existing research. This study examined the impact of restrictive educational placements on three adolescent outcomes (high school noncompletion, conduct disorder, depressive symptoms) in a sample of 861 students with early-starting conduct problems followed longitudinally from kindergarten (age 5-6). Causal modeling with propensity scores was used to adjust for confounding factors associated with restrictive placements. Analyses explored the timing of placement (elementary vs. secondary school) and moderation of impact by initial problem severity. Restrictive educational placement in secondary school (but not in elementary school) was iatrogenic, increasing the risk of high school noncompletion and the severity of adolescent conduct disorder. Negative effects were amplified for students with conduct problems who had less cognitive impairment. To avoid harm to students and to society, schools must find alternatives to restrictive placements for students with conduct problems in secondary school, particularly when these students do not have cognitive impairments that might warrant specialized educational supports. © 2015 Association for Child and Adolescent Mental Health.
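For readers unfamiliar with the adjustment step, here is a generic propensity-score sketch on synthetic data (inverse-probability weighting; the study's own causal models are more elaborate, and all names and values below are invented):

```python
# Generic propensity-score adjustment: model treatment assignment from
# confounders, then reweight outcomes by inverse propensity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 861
conf = rng.normal(size=(n, 3))                     # confounders (e.g., severity)
p_true = 1 / (1 + np.exp(-conf @ np.array([1.0, 0.5, -0.5])))
placed = rng.binomial(1, p_true)                   # restrictive placement (0/1)
outcome = 0.3 * placed + conf[:, 0] + rng.normal(size=n)  # true effect = 0.3

ps = LogisticRegression().fit(conf, placed).predict_proba(conf)[:, 1]
w = placed / ps + (1 - placed) / (1 - ps)          # inverse-probability weights
ate = (np.average(outcome[placed == 1], weights=w[placed == 1])
       - np.average(outcome[placed == 0], weights=w[placed == 0]))
print(f"naive difference: {outcome[placed==1].mean() - outcome[placed==0].mean():.2f}")
print(f"IPW-adjusted effect estimate: {ate:.2f} (true 0.30)")
```

The naive difference is inflated because placement correlates with the confounder that also drives the outcome; the weighted contrast removes most of that bias.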
ERIC Educational Resources Information Center
Savalei, Victoria
2010-01-01
Incomplete nonnormal data are common occurrences in applied research. Although these 2 problems are often dealt with separately by methodologists, they often cooccur. Very little has been written about statistics appropriate for evaluating models with such data. This article extends several existing statistics for complete nonnormal data to…
Application of a New Resampling Method to SEM: A Comparison of S-SMART with the Bootstrap
ERIC Educational Resources Information Center
Bai, Haiyan; Sivo, Stephen A.; Pan, Wei; Fan, Xitao
2016-01-01
Among the commonly used resampling methods of dealing with small-sample problems, the bootstrap enjoys the widest applications because it often outperforms its counterparts. However, the bootstrap still has limitations when its operations are contemplated. Therefore, the purpose of this study is to examine an alternative, new resampling method…
Zhang, Gang; Liang, Zhaohui; Yin, Jian; Fu, Wenbin; Li, Guo-Zheng
2013-01-01
Chronic neck pain is a common morbid disorder in modern society. Acupuncture has been administered for treating chronic pain as an alternative therapy for a long time, with its effectiveness supported by the latest clinical evidence. However, the potential difference in effectiveness across syndrome types remains in question due to the limits of sample size and statistical methods. We applied machine learning methods in an attempt to solve this problem. Through a multi-objective sorting of subjective measurements, outstanding samples are selected to form the base of our kernel-oriented model. By calculating similarities between the sample of interest and the base samples, we are able to make full use of the information contained in the known samples, which is especially effective in the case of a small sample set. To tackle the parameter-selection problem in similarity learning, we propose an ensemble of learners with slightly different parameter settings to obtain a stronger learner. The experimental result on a real data set shows that, compared to some previous well-known methods, the proposed algorithm is capable of discovering the underlying difference among syndrome types and is feasible for predicting the effective tendency in clinical trials of large samples.
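A hedged sketch of the general idea, with invented data and hypothetical names: each base learner scores a new case by kernel similarity to a small set of selected base samples, and an ensemble over slightly different bandwidths sidesteps the parameter-selection problem. This is an illustration of the principle, not the paper's model.

```python
# Kernel-similarity prediction with an ensemble over bandwidth settings.
import numpy as np

rng = np.random.default_rng(3)
base_X = rng.normal(size=(12, 5))        # selected "outstanding" base samples
base_y = rng.integers(0, 2, size=12)     # effectiveness labels of base samples
x_new = rng.normal(size=5)               # new clinical case (synthetic)

def kernel_score(x, gamma):
    """Similarity-weighted vote of base-sample labels under an RBF kernel."""
    sims = np.exp(-gamma * ((base_X - x) ** 2).sum(axis=1))
    return (sims * base_y).sum() / sims.sum()

# Ensemble over slightly different bandwidths instead of picking one gamma.
scores = [kernel_score(x_new, g) for g in (0.1, 0.3, 1.0, 3.0)]
print(f"ensemble effectiveness score: {np.mean(scores):.2f}")
```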
Mixing problems in using indicators for measuring regional blood flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushioda, E.; Nuwayhid, B.; Tabsh, K.
A basic requirement for using indicators for measuring blood flow is adequate mixing of the indicator with blood prior to sampling the site. This requirement has been met by depositing the indicator in the heart and sampling from an artery. Recently, authors have injected microspheres into veins and sampled from venous sites. The present studies were designed to investigate the mixing problems in sheep and rabbits by means of Cardio-Green and labeled microspheres. The indicators were injected at different points in the circulatory system, and blood was sampled at different levels of the venous and arterial systems. Results show the following: (a) When an indicator of small molecular size (Cardio-Green) is allowed to pass through the heart chambers, adequate mixing is achieved, yielding accurate and reproducible results. (b) When any indicator (Cardio-Green or microspheres) is injected into veins, and sampling is done at any point in the venous system, mixing is inadequate, yielding flow results which are inconsistent and erratic. (c) For an indicator of large molecular size (microspheres), injecting into the left side of the heart and sampling from arterial sites yield accurate and reproducible results regardless of whether blood is sampled continuously or intermittently.
Developing non-routine problems for assessing students’ mathematical literacy
NASA Astrophysics Data System (ADS)
Murdiyani, N. M.
2018-03-01
The purpose of this study is to develop non-routine problems for assessing the mathematical literacy skills of students that are valid, practical, and effective. It is motivated by previous research showing that Indonesian students' mathematical literacy is still low. The results of this study can be used as a guide in developing evaluation questions that train students to improve their ability to solve non-routine problems in everyday life. The research takes the form of a formative evaluation consisting of preliminary, self-evaluation, expert review, one-to-one, small-group, and field-test stages. The sample of this research is grade 8 students at one Junior High School in Yogyakarta. The study produces a prototype of mathematical literacy problems ranging from level 1 to level 6, similar to PISA problems. The study also discusses examples of students' answers and their reasoning.
Incidence of behavior problems in toddlers and preschool children from families living in poverty.
Holtz, Casey A; Fox, Robert A; Meurer, John R
2015-01-01
Few studies have examined the incidence of behavior problems in toddlers and preschool children from families living in poverty. The available research suggests behavior problems occur at higher rates in children living in poverty and may have long-term negative outcomes if not identified and properly treated. This study included an ethnically representative sample of 357 children, five years of age and younger, from a diverse, low-income, urban area. All families' incomes met the federal threshold for living in poverty. Behavior problems were assessed by parent report through a questionnaire specifically designed for low-income families. Boys and younger children were reported as demonstrating a higher rate of externalizing behaviors than girls and older children. The overall rate of children scoring at least one standard deviation above the sample's mean for challenging behaviors was 17.4% and was not related to the child's gender, age or ethnicity. This study also sampled children's positive behaviors, which is unique in studies of behavior problems. Gender and age were not related to the frequency of reported positive behaviors. Ethnicity did influence scores on the positive scale. African American children appeared to present their parents more difficulty on items reflecting cooperative behaviors than Caucasian or Latino children. The implications of the study are discussed based on the recognized need for universal screening of behavior problems in young children and the small number of professional training programs targeting the identification and treatment of early childhood behavior problems, despite the availability of evidence-based treatment programs tailored to young children in low-income families.
Human factors in air traffic control: problems at the interfaces.
Shouksmith, George
2003-10-01
The triangular ISIS model for describing the operation of human factors in complex sociotechnical organisations or systems is applied in this research to a large international air traffic control system. A large sample of senior Air Traffic Controllers were randomly assigned to small focus discussion groups, whose task was to identify problems occurring at the interfaces of the three major human factor components: individual, system impacts, and social. From these discussions, a number of significant interface problems, which could adversely affect the functioning of the Air Traffic Control System, emerged. The majority of these occurred at the Individual-System Impact and Individual-Social interfaces and involved a perceived need for further interface centered training.
Binge drinking and sleep problems among young adults.
Popovici, Ioana; French, Michael T
2013-09-01
As most of the literature exploring the relationships between alcohol use and sleep problems is descriptive and with small sample sizes, the present study seeks to provide new information on the topic by employing a large, nationally representative dataset with several waves of data and a broad set of measures for binge drinking and sleep problems. We use data from the National Longitudinal Study of Adolescent Health (Add Health), a nationally representative survey of adolescents and young adults. The analysis sample consists of all Wave 4 observations without missing values for the sleep problems variables (N=14,089, 53% females). We estimate gender-specific multivariate probit models with a rich set of socioeconomic, demographic, physical, and mental health variables to control for confounding factors. Our results confirm that alcohol use, and specifically binge drinking, is positively and significantly associated with various types of sleep problems. The detrimental effects on sleep increase in magnitude with frequency of binge drinking, suggesting a dose-response relationship. Moreover, binge drinking is associated with sleep problems independent of psychiatric conditions. The statistically strong association between sleep problems and binge drinking found in this study is a first step in understanding these relationships. Future research is needed to determine the causal links between alcohol misuse and sleep problems to inform appropriate clinical and policy responses. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lu, Xinguo; Chen, Dan
2017-08-01
Traditional supervised classifiers work only with labeled data and neglect the large amount of data that lacks sufficient follow-up information. Consequently, the small sample size limits the design of an appropriate classifier. In this paper, a transductive learning method is presented that combines a filtering strategy within the transductive framework with a progressive labeling strategy. The progressive labeling strategy does not need to consider the distribution of labeled samples in order to evaluate the distribution of unlabeled samples, and can effectively solve the problem of estimating the proportion of positive and negative samples in the working set. Our experimental results demonstrate that the proposed technique has great potential in cancer prediction based on gene expression.
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We perform the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We present the effectiveness of the algorithm in the significantly low-angular sampling case using synthetic test problems as well as data obtained from a high flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles are used.
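As a simplified illustration of the total variation penalty at work, the sketch below runs smoothed-TV gradient descent on a denoising problem. This is not the split Bregman solver used in the paper, and the phantom and parameters are invented; it only shows how the TV term suppresses noise while preserving edges.

```python
# Minimize 0.5*||x - y||^2 + lam * sum sqrt(|grad x|^2 + eps)  (smoothed TV).
import numpy as np

rng = np.random.default_rng(4)
clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0   # piecewise-constant phantom
y = clean + 0.3 * rng.normal(size=clean.shape)          # noisy measurement

lam, eps, step = 0.15, 1e-6, 0.2
x = y.copy()
for _ in range(300):
    gx = np.diff(x, axis=0, append=x[-1:, :])           # forward differences
    gy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    # Divergence of (grad x / |grad x|): negative gradient of the TV term.
    div = (np.diff(gx / mag, axis=0, prepend=0)
           + np.diff(gy / mag, axis=1, prepend=0))
    x -= step * ((x - y) - lam * div)

print(f"RMSE noisy {np.sqrt(((y - clean)**2).mean()):.3f} -> "
      f"denoised {np.sqrt(((x - clean)**2).mean()):.3f}")
```

Split Bregman solves the same kind of objective but splits the TV term into an auxiliary variable with shrinkage updates, which is what makes inexact inner linear solves affordable.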
Mine, Madisa; Nkoane, Tapologo; Sebetso, Gaseene; Sakyi, Bright; Makhaola, Kgomotso; Gaolathe, Tendani
2013-12-01
The sample requirement of 1 mL for the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 test, version 2.0 (CAP CTM HIV v2.0) limits its utility in measuring plasma HIV-1 RNA levels for small volume samples from children infected with HIV-1. Viral load monitoring is the standard of care for HIV-1-infected patients on antiretroviral therapy in Botswana. The study aimed to validate the dilution of small volume samples with phosphate buffered saline (1× PBS) when quantifying HIV-1 RNA in patient plasma. HIV RNA concentrations were determined in undiluted and diluted pairs of samples comprising panels of quality assessment standards (n=52) as well as patient samples (n=325). There was a strong correlation (R² of 0.98 and 0.95) within the dynamic range of the CAP CTM HIV v2.0 test between undiluted and diluted samples from quality assessment standards and patients, respectively. The difference between viral load measurements of diluted and undiluted pairs of quality assessment standards and patient samples using the Bland-Altman test showed that the 95% limits of agreement were between -0.40 log10 and 0.49 log10. This difference was within the 0.5 log10 that is generally considered normal assay variation of plasma RNA levels. Dilution of samples with 1× PBS produced comparable viral load measurements to undiluted samples. Copyright © 2013 Elsevier B.V. All rights reserved.
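The agreement calculation itself is compact; a minimal sketch with synthetic paired log10 measurements (not the study data): the 95% limits of agreement are the mean difference plus or minus 1.96 standard deviations.

```python
# Bland-Altman limits of agreement for paired log10 viral-load measurements.
import numpy as np

rng = np.random.default_rng(5)
undiluted = rng.uniform(2.0, 6.0, size=40)              # log10 copies/mL
diluted = undiluted + rng.normal(0.02, 0.2, size=40)    # diluted-sample result

diff = diluted - undiluted
lo = diff.mean() - 1.96 * diff.std(ddof=1)
hi = diff.mean() + 1.96 * diff.std(ddof=1)
print(f"bias {diff.mean():+.2f} log10, 95% LoA ({lo:+.2f}, {hi:+.2f})")
```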
Fast, Nonlinear, Fully Probabilistic Inversion of Large Geophysical Problems
NASA Astrophysics Data System (ADS)
Curtis, A.; Shahraeeni, M.; Trampert, J.; Meier, U.; Cho, G.
2010-12-01
Almost all Geophysical inverse problems are in reality nonlinear. Fully nonlinear inversion including non-approximated physics, and solving for probability distribution functions (pdf's) that describe the solution uncertainty, generally requires sampling-based Monte-Carlo style methods that are computationally intractable in most large problems. In order to solve such problems, physical relationships are usually linearized leading to efficiently-solved, (possibly iterated) linear inverse problems. However, it is well known that linearization can lead to erroneous solutions, and in particular to overly optimistic uncertainty estimates. What is needed across many Geophysical disciplines is a method to invert large inverse problems (or potentially tens of thousands of small inverse problems) fully probabilistically and without linearization. This talk shows how very large nonlinear inverse problems can be solved fully probabilistically and incorporating any available prior information using mixture density networks (driven by neural network banks), provided the problem can be decomposed into many small inverse problems. In this talk I will explain the methodology, compare multi-dimensional pdf inversion results to full Monte Carlo solutions, and illustrate the method with two applications: first, inverting surface wave group and phase velocities for a fully-probabilistic global tomography model of the Earth's crust and mantle, and second inverting industrial 3D seismic data for petrophysical properties throughout and around a subsurface hydrocarbon reservoir. The latter problem is typically decomposed into 10^4 to 10^5 individual inverse problems, each solved fully probabilistically and without linearization. The results in both cases are sufficiently close to the Monte Carlo solution to exhibit realistic uncertainty, multimodality and bias. This provides far greater confidence in the results, and in decisions made on their basis.
Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study
Broman, Karl W.; Keller, Mark P.; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S.; Sen, Śaunak; Attie, Alan D.
2015-01-01
In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual’s eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. PMID:26290572
Identification and Correction of Sample Mix-Ups in Expression Genetic Data: A Case Study.
Broman, Karl W; Keller, Mark P; Broman, Aimee Teo; Kendziorski, Christina; Yandell, Brian S; Sen, Śaunak; Attie, Alan D
2015-08-19
In a mouse intercross with more than 500 animals and genome-wide gene expression data on six tissues, we identified a high proportion (18%) of sample mix-ups in the genotype data. Local expression quantitative trait loci (eQTL; genetic loci influencing gene expression) with extremely large effect were used to form a classifier to predict an individual's eQTL genotype based on expression data alone. By considering multiple eQTL and their related transcripts, we identified numerous individuals whose predicted eQTL genotypes (based on their expression data) did not match their observed genotypes, and then went on to identify other individuals whose genotypes did match the predicted eQTL genotypes. The concordance of predictions across six tissues indicated that the problem was due to mix-ups in the genotypes (although we further identified a small number of sample mix-ups in each of the six panels of gene expression microarrays). Consideration of the plate positions of the DNA samples indicated a number of off-by-one and off-by-two errors, likely the result of pipetting errors. Such sample mix-ups can be a problem in any genetic study, but eQTL data allow us to identify, and even correct, such problems. Our methods have been implemented in an R package, R/lineup. Copyright © 2015 Broman et al.
An Analysis Of Coast Guard Enlisted Retention
1993-03-01
...civilian employment suggest retention behavior may be similar. Also, the small personnel inventories of some of the rates would limit the model's...
Early Detection of At-Risk Undergraduate Students through Academic Performance Predictors
ERIC Educational Resources Information Center
Rowtho, Vikash
2017-01-01
Undergraduate student dropout is gradually becoming a global problem and the 39 Small Island Developing States (SIDS) are no exception to this trend. The purpose of this research was to develop a method that can be used for early detection of students who are at-risk of performing poorly in their undergraduate studies. A sample of 279 students…
Cognitive impairments in cancer patients represent an important clinical problem. Studies to date estimating prevalence of difficulties in memory, executive function, and attention deficits have been limited by small sample sizes and many have lacked healthy control groups. More information is needed on promising biomarkers and allelic variants that may help to determine the
ERIC Educational Resources Information Center
Hasselhorn, Marcus; Linke-Hasselhorn, Kathrin
2013-01-01
Eight six-year old German children with development disabilities regarding such number competencies as have been demonstrated to be among the most relevant precursor skills for the acquisition of elementary mathematics received intensive training with the program "Mengen, zählen, Zahlen" ["quantities, counting, numbers"] (MZZ,…
Deng, Yangqing; Pan, Wei
2018-06-01
Due to issues of practicality and confidentiality of genomic data sharing on a large scale, typically only meta- or mega-analyzed genome-wide association study (GWAS) summary data, not individual-level data, are publicly available. Reanalyses of such GWAS summary data for a wide range of applications have become more and more common and useful, which often require the use of an external reference panel with individual-level genotypic data to infer linkage disequilibrium (LD) among genetic variants. However, with a small sample size of only hundreds, as for the most popular 1000 Genomes Project European sample, estimation errors for LD are not negligible, leading to often dramatically increased numbers of false positives in subsequent analyses of GWAS summary data. To alleviate the problem in the context of association testing for a group of SNPs, we propose an alternative estimator of the covariance matrix with an idea similar to multiple imputation. We use numerical examples based on both simulated and real data to demonstrate the severe problem with the use of the 1000 Genomes Project reference panels, and the improved performance of our new approach. Copyright © 2018 by the Genetics Society of America.
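To illustrate the underlying issue, the sketch below contrasts a raw LD estimate from a small reference panel with a simple linear shrinkage toward the identity. The shrinkage here is a generic regularizer, not the authors' multiple-imputation-style estimator, and the AR(1)-style LD structure is synthetic.

```python
# With fewer panel samples than SNPs, the raw LD matrix is noisy and singular;
# a shrunk estimate is better conditioned and usually closer to the truth.
import numpy as np

rng = np.random.default_rng(6)
m = 200                                   # SNPs in the tested region
true_ld = 0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
panel = rng.multivariate_normal(np.zeros(m), true_ld, size=150)  # small panel

raw = np.corrcoef(panel, rowvar=False)
alpha = 0.1
shrunk = (1 - alpha) * raw + alpha * np.eye(m)    # simple linear shrinkage

for name, est in (("raw", raw), ("shrunk", shrunk)):
    print(name, f"error {np.linalg.norm(est - true_ld):.2f}, "
                f"min eigenvalue {np.linalg.eigvalsh(est).min():.3f}")
```

The near-zero minimum eigenvalue of the raw estimate is what destabilizes downstream quadratic-form test statistics, producing the false positives the abstract warns about.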
Tagging of Test Tubes with Electronic p-Chips for Use in Biorepositories.
Mandecki, Wlodek; Kopacka, Wesley M; Qian, Ziye; Ertwine, Von; Gedzberg, Katie; Gruda, Maryann; Reinhardt, David; Rodriguez, Efrain
2017-08-01
A system has been developed to electronically tag and track test tubes used in biorepositories. The system is based on a light-activated microtransponder, also known as a "p-Chip." One of the pressing problems with storing and retrieving biological samples at low temperatures is the difficulty of reliably reading the identification (ID) number that links each storage tube with the database containing sample details. Commonly used barcodes are not always reliable at low temperatures because of poor adhesion of the label to the test tube and problems with reading under conditions of frost and ice accumulation. Traditional radio frequency identification (RFID) tags are not cost effective and are too large for this application. The system described herein consists of the p-Chip, p-Chip-tagged test tubes, two ID readers (for single tubes or for racks of tubes), and software. We also describe a robot that is configured for retrofitting legacy test tubes in biorepositories with p-Chips while maintaining the temperature of the sample below -50°C at all times. The main benefits of the p-Chip over other RFID devices are its small size (600 × 600 × 100 μm) that allows even very small tubes or vials to be tagged, low cost due to the chip's unitary construction, durability, and the ability to read the ID through frost and ice.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to reduce the labeling effort of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
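A hedged sketch in the spirit of experimental-design-based batch selection (greedy log-determinant maximization on a kernel matrix; MRED's manifold-regularized criterion differs in detail): the batch whose kernel submatrix has maximal log-determinant consists of informative, mutually non-redundant samples.

```python
# Greedy D-optimal batch selection: pick points maximizing logdet of the
# kernel submatrix, so the labeled batch covers the pool without redundancy.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 10))                     # unlabeled pool
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF kernel

chosen = []
for _ in range(5):                                 # label 5 samples at one time
    best, best_val = None, -np.inf
    for i in range(len(X)):
        if i in chosen:
            continue
        idx = chosen + [i]
        sub = K[np.ix_(idx, idx)] + 1e-6 * np.eye(len(idx))  # jitter for stability
        val = np.linalg.slogdet(sub)[1]
        if val > best_val:
            best, best_val = i, val
    chosen.append(best)
print("samples to label:", chosen)
```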
Improved radiation dose efficiency in solution SAXS using a sheath flow sample environment
Kirby, Nigel; Cowieson, Nathan; Hawley, Adrian M.; Mudie, Stephen T.; McGillivray, Duncan J.; Kusel, Michael; Samardzic-Boban, Vesna; Ryan, Timothy M.
2016-01-01
Radiation damage is a major limitation to synchrotron small-angle X-ray scattering analysis of biomacromolecules. Flowing the sample during exposure helps to reduce the problem, but its effectiveness in the laminar-flow regime is limited by slow flow velocity at the walls of sample cells. To overcome this limitation, the coflow method was developed, where the sample flows through the centre of its cell surrounded by a flow of matched buffer. The method permits an order-of-magnitude increase of X-ray incident flux before sample damage, improves measurement statistics and maintains low sample concentration limits. The method also efficiently handles sample volumes of a few microlitres, can increase sample throughput, is intrinsically resistant to capillary fouling by sample and is suited to static samples and size-exclusion chromatography applications. The method unlocks further potential of third-generation synchrotron beamlines to facilitate new and challenging applications in solution scattering. PMID:27917826
Speil, Sidney
1974-01-01
The problems of quantitating chrysotile in water by fiber count techniques are reviewed briefly and the use of mass quantitation is suggested as a preferable measure. Chrysotile fiber has been found in almost every sample of natural water examined, but generally transmission electron microscopy (TEM) is required because of the small diameters involved. The extreme extrapolation required in mathematically converting a few fibers or fiber fragments under the TEM to the fiber content of a liquid sample casts considerable doubt on the validity of numbers used to compare chrysotile contents of different liquids. PMID:4470930
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina; Olia, Khashayar
This paper is focused on the problem of Information Reconciliation (IR) for continuous variable Quantum Key Distribution (QKD). The main problem is quantization and assignment of labels to the samples of the Gaussian variables observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective Signal-to-Noise Ratio (SNR) and exacerbating the problem. Quantization over higher dimensions is advantageous since it allows for fractional bit per sample accuracy, which may be needed at very low SNR conditions whereby the achievable secret key rate is significantly less than one bit per sample. In this paper, we propose to use Permutation Modulation (PM) for quantization of Gaussian vectors potentially containing thousands of samples. PM is applied to the magnitudes of the Gaussian samples, and we explore the dependence of the sign error probability on the magnitude of the samples. At very low SNR, we may transmit the entire label of the PM code from Bob to Alice in Reverse Reconciliation (RR) over a public channel. The side information extracted from this label can then be used by Alice to characterize the sign error probability of her individual samples. Forward Error Correction (FEC) coding can be used by Bob on each subset of samples with similar sign error probability to aid Alice in error correction. This can be done for different subsets of samples with similar sign error probabilities, leading to an Unequal Error Protection (UEP) coding paradigm.
Rare event simulation in radiation transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kollman, Craig
1993-10-01
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well-known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep the estimator unbiased. The variance of the resulting estimator is very sensitive to the choice of the new set of transition probabilities. It is shown that a zero-variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution.
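The core importance-sampling trick is easy to demonstrate on a toy rare-event problem (a standard illustrative example, not the dissertation's transport model): sample from a distribution shifted into the rare region and reweight by the likelihood ratio.

```python
# Estimate p = P(Z > 5) for standard normal Z via importance sampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, shift = 100_000, 5.0

z = rng.normal(loc=shift, size=n)                        # proposal centred on the event
lr = stats.norm.pdf(z) / stats.norm.pdf(z, loc=shift)    # likelihood ratio weights
est = np.mean((z > 5.0) * lr)                            # unbiased reweighted estimate

print(f"IS estimate {est:.3e}  exact {stats.norm.sf(5.0):.3e}")
# Naive Monte Carlo with the same n would almost surely observe zero such events.
```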
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1981-01-01
The propagation of photons in a medium with strongly anisotropic scattering is a problem with a considerable history. Like the propagation of electrons in metal foils, it may be solved in the small-angle scattering approximation by the use of Fourier-transform techniques. In certain limiting cases, one may even obtain analytic expressions. This paper presents some of these results in a model-independent form and also illustrates them by the use of four different phase-function models. Sample calculations are provided for comparison purposes.
Gibs, J.; Wicklund, A.; Suffet, I.H.
1986-01-01
The 'rule of thumb' that large volumes of water can be sampled for trace organic pollutants by XAD resin columns designed from small-column laboratory studies with pure compounds is examined and shown to be problematic. A theory of multicomponent breakthrough is presented as a frame of reference to help solve the problem and to develop usable criteria to aid the design of resin columns. An important part of the theory is the effect of humic substances on the breakthrough character of multicomponent chemical systems.
Krawczyk, Paweł Adam; Ramlau, Rodryg Adam; Szumiło, Justyna; Kozielski, Jerzy; Kalinka-Warzocha, Ewa; Bryl, Maciej; Knopik-Dąbrowicz, Alina; Spychalski, Łukasz; Szczęsna, Aleksandra; Rydzik, Ewelina; Milanowski, Janusz
2013-01-01
Introduction ALK gene rearrangement is observed in a small subset (3–7%) of non-small cell lung cancer (NSCLC) patients. The efficacy of crizotinib was shown in lung cancer patients harbouring ALK rearrangement. Nowadays, the analysis of ALK gene rearrangement is added to molecular examination of predictive factors. Aim of the study The frequency of ALK gene rearrangement as well as the type of its irregularity was analysed by fluorescence in situ hybridisation (FISH) in tissue samples from NSCLC patients. Material and methods The ALK gene rearrangement was analysed in 71 samples including 53 histological and 18 cytological samples. The analysis could be performed in 56 cases (78.87%), significantly more frequently in histological than in cytological materials. The problems encountered in ALK rearrangement diagnosis resulted from the scarcity of tumour cells in cytological samples, high background fluorescence noise and fragmentation of cell nuclei. Results The normal ALK copy number without gene rearrangement was observed in 26 (36.62%) patients. ALK gene polysomy without gene rearrangement was observed in 25 (35.21%) samples, while in 3 (4.23%) samples ALK gene amplification was found. ALK gene rearrangement was observed in 2 (2.82%) samples from males; in the first case the rearrangement coexisted with ALK amplification. In the second case, signet-ring tumour cells were found during histopathological examination and this patient was successfully treated with crizotinib, with partial remission lasting 16 months. Conclusions FISH is a useful technique for ALK gene rearrangement analysis which allows us to specify the type of gene irregularities. ALK gene examination could be performed in histological as well as cytological (cell block) samples, but obtaining a reliable result in cytological samples depends on the cellularity of the examined materials. PMID:24592134
Elliott, Luther; Ream, Geoffrey; McGinsky, Elizabeth; Dunlap, Eloise
2012-12-01
AIMS: To assess the contribution of patterns of video game play, including game genre, involvement, and time spent gaming, to problem use symptomatology. DESIGN: Nationally representative survey. SETTING: Online. PARTICIPANTS: Large sample (n=3,380) of adult video gamers in the US. MEASUREMENTS: Problem video game play (PVGP) scale, video game genre typology, use patterns (gaming days in the past month and hours on days used), enjoyment, consumer involvement, and background variables. FINDINGS: Study confirms game genre's contribution to problem use as well as demographic variation in play patterns that underlie problem video game play vulnerability. CONCLUSIONS: Identification of a small group of game types positively correlated with problem use suggests new directions for research into the specific design elements and reward mechanics of "addictive" video games. Unique vulnerabilities to problem use among certain groups demonstrate the need for ongoing investigation of health disparities related to contextual dimensions of video game play.
Elliott, Luther; Ream, Geoffrey; McGinsky, Elizabeth; Dunlap, Eloise
2012-01-01
Aims To assess the contribution of patterns of video game play, including game genre, involvement, and time spent gaming, to problem use symptomatology. Design Nationally representative survey. Setting Online. Participants Large sample (n=3,380) of adult video gamers in the US. Measurements Problem video game play (PVGP) scale, video game genre typology, use patterns (gaming days in the past month and hours on days used), enjoyment, consumer involvement, and background variables. Findings Study confirms game genre's contribution to problem use as well as demographic variation in play patterns that underlie problem video game play vulnerability. Conclusions Identification of a small group of game types positively correlated with problem use suggests new directions for research into the specific design elements and reward mechanics of “addictive” video games. Unique vulnerabilities to problem use among certain groups demonstrate the need for ongoing investigation of health disparities related to contextual dimensions of video game play. PMID:23284310
A General Exponential Framework for Dimensionality Reduction.
Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan
2014-02-01
As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the size of the neighborhood; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
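The key mechanism is easy to see in code. Below is a minimal sketch on a generic similarity graph (not the paper's specific algorithms) showing that the matrix exponential of a similarity matrix is always positive definite, which is how the framework sidesteps the small-sample-size singularity, and that it can be embedded directly.

```python
# Matrix-exponential embedding sketch: expm(W) = sum_k W^k / k! is symmetric
# positive definite even when W itself is rank-deficient.
import numpy as np
from scipy.linalg import expm
from scipy.spatial.distance import cdist

rng = np.random.default_rng(9)
X = rng.normal(size=(60, 20))                 # n = 60 samples, 20 features
W = np.exp(-cdist(X, X, "sqeuclidean") / 20)  # pairwise similarity matrix

E = expm(W)                                   # matrix exponential of W
vals, vecs = np.linalg.eigh(E)
embedding = vecs[:, -2:]                      # 2-D embedding from top eigenvectors
print("expm(W) positive definite:", vals.min() > 0, f"(min eigenvalue {vals.min():.3f})")
```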
Image aesthetic quality evaluation using convolution neural network embedded learning
NASA Astrophysics Data System (ADS)
Li, Yu-xin; Pu, Yuan-yuan; Xu, Dan; Qian, Wen-hua; Wang, Li-peng
2017-11-01
An embedded-learning convolution neural network (ELCNN) based on image content is proposed in this paper to evaluate image aesthetic quality. Our approach can not only cope with small-scale data but also score image aesthetic quality. First, we compared Alexnet and VGG_S to confirm which is more suitable for this image aesthetic quality evaluation task. Second, to further boost classification performance, we employ the image content to train aesthetic quality classification models. However, the training samples become smaller, and a single round of fine-tuning cannot make full use of the small-scale data set. Third, to solve this problem, we propose two successive rounds of fine-tuning based on the aesthetic quality label and the content label, respectively; the classification probability of the trained CNN models is then used to evaluate image aesthetic quality. The experiments are carried out on the small-scale Photo Quality data set. The results show that the classification accuracy rates of our approach are higher than those of existing image aesthetic quality evaluation approaches.
Patchanee, Prapas; Tadee, Pakpoom; Ingkaninan, Pimlada; Tankaew, Pallop; Hoet, Armando E; Chupia, Vena
2014-03-01
Of 416 samples taken from veterinary staff (n = 30), dogs (n = 356) and various environmental sites (n = 30) at the Small Animal Hospital, Faculty of Veterinary Medicine, Chiang Mai University, Thailand, 13 samples contained methicillin-resistant Staphylococcus aureus (MRSA), of which 1 (SCCmec type II) came from a veterinarian, 9 (SCCmec types I, III, IVa, V and untypeable) from dogs, and 3 (SCCmec types I, III, and IVb) from environmental samples. The MRSA isolates were 100% susceptible to vancomycin, 69% to cephazolin and 62% to gentamicin, but were up to 92% resistant to the tetracycline group, 69% to trimethoprim-sulfamethoxazole and 62% to ceftriaxone. In addition, all MRSA isolates showed multidrug resistance. As the MRSA isolates from the veterinary staff and dogs were of different SCCmec types, this suggests there were no cross-infections. However, environmental contamination appears to have come from dogs, and appropriate hygienic practices should be introduced to solve this problem.
Exploring the Energy Landscapes of Protein Folding Simulations with Bayesian Computation
Burkoff, Nikolas S.; Várnai, Csilla; Wells, Stephen A.; Wild, David L.
2012-01-01
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. PMID:22385859
Exploring the energy landscapes of protein folding simulations with Bayesian computation.
Burkoff, Nikolas S; Várnai, Csilla; Wells, Stephen A; Wild, David L
2012-02-22
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
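For readers new to the technique, here is a toy nested-sampling loop for a one-dimensional Gaussian likelihood under a uniform prior (illustration only; the authors' parallel implementation and constrained moves are far more sophisticated, and plain rejection is used here for the replacement step):

```python
# Toy nested sampling: estimate Z = integral of L(theta)*pi(theta) d(theta)
# for L = N(0,1) and a uniform prior on [-5, 5], so Z is about 0.1.
import numpy as np

rng = np.random.default_rng(10)
loglike = lambda th: -0.5 * th**2 - 0.5 * np.log(2 * np.pi)

n_live, n_iter = 200, 1500
live = rng.uniform(-5, 5, n_live)            # live points drawn from the prior
live_ll = loglike(live)
logZ, logX = -np.inf, 0.0                    # running log-evidence, log prior volume

for i in range(n_iter):
    worst = int(np.argmin(live_ll))          # lowest-likelihood live point
    logw = logX + np.log1p(-np.exp(-1.0 / n_live))   # shell width X_{i-1} - X_i
    logZ = np.logaddexp(logZ, live_ll[worst] + logw)
    logX -= 1.0 / n_live                     # prior volume shrinks geometrically
    # Replace the worst point with a prior draw above the likelihood threshold.
    threshold = live_ll[worst]
    while True:
        cand = rng.uniform(-5, 5)
        if loglike(cand) > threshold:
            live[worst], live_ll[worst] = cand, loglike(cand)
            break

logZ = np.logaddexp(logZ, live_ll.max() + logX)      # crude remainder term
print(f"log Z estimate {logZ:.3f}   exact about {np.log(0.1):.3f}")
```

The sorted sequence of discarded likelihoods also yields free energies and thermal expectations by reweighting, which is the post-processing step the abstract mentions.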
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results revealed a serious problem: all three criteria exhibited a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggested a high level of support for the incorrectly ranked best model. These problems grew worse with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
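The criteria in question are simple functions of the residual sum of squares; the sketch below shows the generic least-squares forms and how the inflated pairwise n of MRM enters them (the RSS and parameter count are hypothetical numbers):

```python
# AIC, AICc, BIC for a Gaussian least-squares model with k parameters, n cases.
import numpy as np

def criteria(rss: float, n: int, k: int):
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)   # maximized log-likelihood
    aic = -2 * ll + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)          # small-sample correction
    bic = -2 * ll + k * np.log(n)
    return aic, aicc, bic

# With MRM, n is the number of *pairs*, which inflates the apparent sample size:
for n in (45, 990):    # e.g. distances among 10 vs. 45 sites
    print(n, [round(v, 1) for v in criteria(rss=12.3, n=n, k=3)])
```

Because pairwise distances are not independent observations, treating the pair count as n makes the per-parameter penalties 2k and k log n look negligible, which is consistent with the overfitting bias the study reports.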
NASA Astrophysics Data System (ADS)
Ulyanov, Sergey; Ulianova, Onega; Filonova, Nadezhda; Moiseeva, Yulia; Zaitsev, Sergey; Saltykov, Yury; Polyanina, Tatiana; Lyapina, Anna; Kalduzova, Irina; Larionova, Olga; Utz, Sergey; Feodorova, Valentina
2018-04-01
The theory of diffusing wave spectroscopy is adapted for the first time to the problem of rapid detection of Chlamydia trachomatis bacteria in blood samples of Chlamydia patients. A formula for the correlation function of temporal fluctuations of speckle intensity is derived for the case of a small number of scattering events. The dependence of the spectrum bandwidth on the average number of scatterers is analyzed. A set-up for detecting the presence of C. trachomatis cells in aqueous suspension is designed. Good agreement between theoretical results and experimental data is shown. The possibility of detecting the presence of C. trachomatis cells in the probing volume using diffusing wave spectroscopy with a small number of scatterers is successfully demonstrated for the first time.
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To optimally make use of the valuable porosity data, a semi-supervised machine learning method was proposed, Transductive Conditional Random Field Regression (TCRFR), showing good performance (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR for extremely weakly supervised scenarios. Our new method outperforms the previous automatic estimation methods on synthetic data and provides a result comparable to the labor-intensive, time-consuming manual geostatistics approach on real data, proving its potential as a practical industrial tool.
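A hedged sketch of graph-based semi-supervised regression with sparse labels (the classic harmonic solution on a similarity graph; TCRFR itself couples a conditional random field with transductive regression and differs in detail, and all data below are synthetic):

```python
# Propagate sparse labeled values to unlabeled samples via the graph Laplacian:
# solve L_uu f_u = -L_ul y_l, the harmonic (minimum graph energy) solution.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(11)
X = rng.uniform(0, 1, size=(300, 2))            # impedance-derived features
y_true = np.sin(3 * X[:, 0]) + X[:, 1]          # synthetic "porosity"
labeled = rng.choice(300, size=15, replace=False)   # sparse well locations

W = np.exp(-cdist(X, X, "sqeuclidean") / 0.02)  # similarity graph
L = np.diag(W.sum(axis=1)) - W                  # graph Laplacian
u = np.setdiff1d(np.arange(300), labeled)       # unlabeled indices

f_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, labeled)] @ y_true[labeled])
print(f"RMSE on unlabeled samples: {np.sqrt(((f_u - y_true[u])**2).mean()):.3f}")
```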
More reasons to be straightforward: findings and norms for two scales relevant to social anxiety.
Rodebaugh, Thomas L; Heimberg, Richard G; Brown, Patrick J; Fernandez, Katya C; Blanco, Carlos; Schneier, Franklin R; Liebowitz, Michael R
2011-06-01
The validity of both the Social Interaction Anxiety Scale and Brief Fear of Negative Evaluation scale has been well-supported, yet the scales have a small number of reverse-scored items that may detract from the validity of their total scores. The current study investigates two characteristics of participants that may be associated with compromised validity of these items: higher age and lower levels of education. In community and clinical samples, the validity of each scale's reverse-scored items was moderated by age, years of education, or both. The straightforward items did not show this pattern. To encourage the use of the straightforward items of these scales, we provide normative data from the same samples as well as two large student samples. We contend that although response bias can be a substantial problem, the reverse-scored questions of these scales do not solve that problem and instead decrease overall validity. Copyright © 2011 Elsevier Ltd. All rights reserved.
Prevalence study of compulsive buying in a sample with low individual monthly income.
Leite, Priscilla Lourenço; Silva, Adriana Cardoso
2015-01-01
Compulsive buying can be characterized as an almost irresistible impulse to acquire various items. This is a current issue and the prevalence rate in the global population is around 5 to 8%. Some surveys indicate that the problem is growing in young and low-income populations. To evaluate the prevalence of compulsive buying among people with low personal monthly incomes and analyze relationships with socio-demographic data, the Compulsive Buying Scale was administered to screen for compulsive buying and the Hospital Anxiety and Depression Scale was used to assess anxiety and depression in a sample of 56 participants. Pearson coefficients were used to test for correlations. The results indicated that 44.6% presented an average family income equal to or greater than 2.76 minimum wages. It is possible that compulsive buying is not linked to purchasing power, since it was found in a low-income population. Despite the small sample, the results of this study are important for understanding the problem in question.
Cardiac vagal control and children’s adaptive functioning: A meta-analysis
Graziano, Paulo; Derefinko, Karen
2014-01-01
Polyvagal theory has influenced research on the role of cardiac vagal control, indexed by respiratory sinus arrhythmia withdrawal (RSA-W) during challenging states, in children’s self-regulation. However, it remains unclear how well RSA-W predicts adaptive functioning (AF) outcomes and whether certain caveats of measuring RSA (e.g., respiration) significantly impact these associations. A meta-analysis of 44 studies (n = 4,996 children) revealed small effect sizes such that greater levels of RSA-W were related to fewer externalizing, internalizing, and cognitive/academic problems. In contrast, RSA-W was differentially related to children’s social problems according to sample type (community vs. clinical/at-risk). The relations between RSA-W and children’s AF outcomes were stronger among studies that co-varied baseline RSA and in Caucasian children (no effect was found for respiration). Children from clinical/at-risk samples displayed lower levels of baseline RSA and RSA-W compared to children from community samples. Theoretical/practical implications for the study of cardiac vagal control are discussed. PMID:23648264
Microbiological testing of Skylab foods.
NASA Technical Reports Server (NTRS)
Heidelbaugh, N. D.; Mcqueen, J. L.; Rowley, D. B.; Powers, E. M.; Bourland, C. T.
1973-01-01
A review of some of the unique food microbiology problems, and the problem-generating circumstances, involved in the Skylab manned space flight program. The situations these problems arise from include: extended storage times, variations in storage temperatures, no opportunity to resupply or change foods after launch of the Skylab Workshop, first use of frozen foods in space, first use of a food-warming device in weightlessness, relatively small production lots requiring statistically valid sampling plans, and use of food as an accurately controlled part of a set of sophisticated life science experiments. Consideration of all of these situations produced the need for definite microbiological tests and test limits. These tests are described along with the rationale for their selection. Reported test results show good compliance with the test limits.
Motivators for change and barriers to help-seeking in Australian problem gamblers.
Evans, Lyn; Delfabbro, Paul H
2005-01-01
Although prevalence studies consistently indicate that many thousands of Australians experience gambling-related problems, only a relatively small proportion of these people seek professional help. This study examines the principal motivations for, and impediments to, help-seeking in a sample of 77 problem gamblers recruited from agencies and the general community. The results indicated that professional help-seeking is predominantly crisis-driven rather than being motivated by a gradual recognition of problematic behaviour. Shame, denial and social factors were identified as the most significant barriers to change, rather than a lack of knowledge of, or dislike of, treatment agencies. The value of early interventions, including the screening of gamblers in routine medical consultations and partner support strategies, is discussed.
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first rank sampling plan is to inspect the lot consisting of map sheets, and the second is to inspect the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size with nonconformities being modeled by a hypergeometric distribution function, and the second is for a larger lot size with nonconformities being modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
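For readers who want the mechanics behind such plans, the sketch below finds the smallest single-stage plan (n, c) under the two lot-size models named above, using the standard two-point producer/consumer-risk criterion; the AQL, limiting-quality, and risk values are illustrative assumptions, not figures from the paper.

from scipy.stats import hypergeom, poisson

def accept_prob_small_lot(N_lot, D, n, c):
    # small lot: nonconforming items in the sample follow a hypergeometric law
    return hypergeom(N_lot, D, n).cdf(c)

def accept_prob_large_lot(p, n, c):
    # large lot: nonconforming count approximated as Poisson with mean n*p
    return poisson(n * p).cdf(c)

def design_plan(p_aql=0.01, p_lq=0.08, alpha=0.05, beta=0.10, n_max=500):
    # smallest (n, c) that accepts AQL-quality lots with probability >= 1 - alpha
    # while accepting limiting-quality lots with probability <= beta
    for n in range(1, n_max + 1):
        for c in range(n):
            if (accept_prob_large_lot(p_aql, n, c) >= 1 - alpha and
                    accept_prob_large_lot(p_lq, n, c) <= beta):
                return n, c
    return None

print(design_plan())   # -> (67, 2) for the assumed values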
Karriker-Jaffe, Katherine J; Witbrodt, Jane; Subbaraman, Meenakshi S; Kaskutas, Lee Ann
2018-03-30
We examined whether alcohol-dependent individuals with sustained substance use or psychiatric problems after completing treatment were more likely to experience low social status and whether continued help-seeking would improve outcomes. Ongoing alcohol, drug and psychiatric problems after completing treatment were associated with increased odds of low social status (unemployment, unstable housing and/or living in high-poverty neighborhood) over 7 years. The impact of drug problems declined over time, and there were small, delayed benefits of AA attendance on social status. Alcohol-dependent individuals sampled from public and private treatment programs (N = 491; 62% male) in Northern California were interviewed at treatment entry and 1, 3, 5 and 7 years later. Random effects models tested relationships of problem severity (alcohol, drug and psychiatric problems) and help-seeking (attending specialty alcohol/drug treatment and Alcoholics Anonymous, AA) with low social status (unemployment, unstable housing and/or living in a high-poverty neighborhood) over time. The proportion of participants experiencing none of the indicators of low social status increased between baseline and the 1-year follow-up and remained stable thereafter. Higher alcohol problem scores and having any drug and/or psychiatric problems in the years after treatment were associated with increased odds of low social status over time. An interaction of drug problems with time indicated the impact of drug problems on social status declined over the 7-year period. Both treatment-seeking and AA attendance were associated with increased odds of low social status, although lagged models suggested there were small, delayed benefits of AA attendance on improved social status over time. Specialty addiction treatment alone was not sufficient to have positive long-term impacts on social status and social integration of most alcohol-dependent people.
ERIC Educational Resources Information Center
Coyle, Shawn; Jones, Thea; Pickle, Shirley Kirk
2009-01-01
This article presents a sample of online learning programs serving very different populations: a small district spread over a vast area, a large inner-city school district, and a statewide program serving numerous districts. It describes how these districts successfully implemented e-learning programs in their schools and discusses the positive impact…
VAN Rooij, Antonius J; Kuss, Daria J; Griffiths, Mark D; Shorter, Gillian W; Schoenmakers, M Tim; VAN DE Mheen, Dike
2014-09-01
The current study explored the nature of problematic (addictive) video gaming (PVG) and the association with game type, psychosocial health, and substance use. Data were collected using a paper and pencil survey in the classroom setting. Three samples were aggregated to achieve a total sample of 8478 unique adolescents. Scales included measures of game use, game type, the Video game Addiction Test (VAT), depressive mood, negative self-esteem, loneliness, social anxiety, educational performance, and use of cannabis, alcohol and nicotine (smoking). Findings confirmed that problematic gaming is most common amongst adolescent gamers who play multiplayer online games. Boys (60%) were more likely to play online games than girls (14%), and problematic gamers were more likely to be boys (5%) than girls (1%). Highly problematic gamers showed higher scores on depressive mood, loneliness, social anxiety, and negative self-esteem, and self-reported lower school performance. Boys who used nicotine, alcohol, or cannabis were almost twice as likely as non-users to report high PVG. It appears that online gaming in general is not necessarily associated with problems. However, problematic gamers do seem to play online games more often, and a small subgroup of gamers, specifically boys, showed lower psychosocial functioning and lower grades. Moreover, associations with alcohol, nicotine, and cannabis use were found. It would appear that problematic gaming is an undesirable problem for a small subgroup of gamers. The findings encourage further exploration of the role of psychoactive substance use in problematic gaming. PMID:25317339
Habitat fragmentation effects on birds in grasslands and wetlands: A critique of our knowledge
Johnson, D.H.
2001-01-01
Habitat fragmentation exacerbates the problem of habitat loss for grassland and wetland birds. Remaining patches of grasslands and wetlands may be too small, too isolated, and too influenced by edge effects to maintain viable populations of some breeding birds. Knowledge of the effects of fragmentation on bird populations is critically important for decisions about reserve design, grassland and wetland management, and implementation of cropland set-aside programs that benefit wildlife. In my review of research that has been conducted on habitat fragmentation, I found at least five common problems in the methodology used. The results of many studies are compromised by these problems: passive sampling (sampling larger areas in larger patches), confounding effects of habitat heterogeneity, consequences of inappropriate pooling of data from different species, artifacts associated with artificial nest data, and definition of actual habitat patches. As expected, some large-bodied birds with large territorial requirements, such as the northern harrier (Circus cyaneus), appear area sensitive. In addition, some small species of grassland birds favor patches of habitat far in excess of their territory size, including the Savannah (Passerculus sandwichensis), grasshopper (Ammodramus savannarum) and Henslow's (A. henslowii) sparrows, and the bobolink (Dolichonyx oryzivorus). Other species may be area sensitive as well, but the data are ambiguous. Area sensitivity among wetland birds remains unknown since virtually no studies have been based on solid methodologies. We need further research on grassland bird response to habitat that distinguishes supportable conclusions from those that may be artifactual.
Smith, Stephen D. A.; Markic, Ana
2013-01-01
Marine debris is a global issue with impacts on marine organisms, ecological processes, aesthetics and economies. Consequently, there is increasing interest in quantifying the scale of the problem. Accumulation rates of debris on beaches have been advocated as a useful proxy for at-sea debris loads. However, here we show that past studies may have vastly underestimated the quantity of available debris because sampling was too infrequent. Our study of debris on a small beach in eastern Australia indicates that estimated daily accumulation rates decrease rapidly with increasing intervals between surveys, and the quantity of available debris is underestimated by 50% after only 3 days and by an order of magnitude after 1 month. As few past studies report sampling frequencies of less than a month, estimates of the scale of the marine debris problem need to be critically re-examined and scaled-up accordingly. These results reinforce similar, recent work advocating daily sampling as a standard approach for accurate quantification of available debris in coastal habitats. We outline an alternative approach whereby site-specific accumulation models are generated to correct bias when daily sampling is impractical. PMID:24367607
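To make the reported bias concrete, consider a simple deposition-loss model for the standing stock of debris, dS/dt = a - b*S with S(0) = 0 after a clean-up; the apparent daily rate S(T)/T then falls as the survey interval T grows. The sketch below uses invented parameter values purely for illustration.

import numpy as np

a, b = 10.0, 0.3                         # items deposited per day; fractional daily loss
T = np.array([1.0, 3.0, 7.0, 30.0])      # days between surveys
S = (a / b) * (1 - np.exp(-b * T))       # standing stock found at each survey
apparent_rate = S / T                    # "daily accumulation" an infrequent survey reports
print(dict(zip(T, apparent_rate.round(2))))
# {1.0: 8.64, 3.0: 6.59, 7.0: 4.18, 30.0: 1.11} -- fitting S(T) at several
# intervals recovers (a, b), which is one way to correct the sampling bias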
NASA Astrophysics Data System (ADS)
Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl
2018-06-01
In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
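For orientation, here is a minimal sketch of the baseline double-loop Monte Carlo estimator of expected information gain on a toy nonlinear model; the log-sum-exp reduction guards against the underflow noted above, and the paper's Laplace-based importance sampling would replace the plain inner loop. The model, noise level, and sample counts are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
g = lambda theta: theta ** 3                       # toy experiment model
sigma = 0.1                                        # known observation noise

def log_lik(y, theta):
    return -0.5 * ((y - g(theta)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

N_outer, M_inner = 2000, 2000
theta_out = rng.normal(size=N_outer)               # prior draws for the outer loop
y_out = g(theta_out) + sigma * rng.normal(size=N_outer)
theta_in = rng.normal(size=M_inner)                # prior draws for the inner loop

eig = 0.0
for y_i, th_i in zip(y_out, theta_out):
    inner = log_lik(y_i, theta_in)                 # log p(y_i | theta_m), m = 1..M
    log_evidence = np.logaddexp.reduce(inner) - np.log(M_inner)
    eig += log_lik(y_i, th_i) - log_evidence       # log p(y|theta) - log p(y)
print("EIG estimate:", eig / N_outer)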
NASA Astrophysics Data System (ADS)
Rakkapao, S.; Pengpan, T.; Srikeaw, S.; Prasitpong, S.
2014-01-01
This study aims to investigate the use of the predict-observe-explain (POE) approach integrated into large lecture classes on forces and motion. It is compared to the instructor-led problem-solving method using model analysis. The samples are science (SC, N = 420) and engineering (EN, N = 434) freshmen, from Prince of Songkla University, Thailand. Research findings from the force and motion conceptual evaluation indicate that the multimedia-supported POE method promotes students’ learning better than the problem-solving method, in particular for the velocity and acceleration concepts. There is a small shift of the students’ model states after the problem-solving instruction. Moreover, by using model analysis instructors are able to investigate students’ misconceptions and evaluate teaching methods. It benefits instructors in organizing subsequent instructional materials.
Jozefiak, Thomas; Larsson, Bo; Wichstrøm, Lars; Rimehaug, Tormod
2012-10-01
Previous studies from Nordic countries suggest that parent ratings of children's emotional and behavioural problems using the Child Behavior Checklist (CBCL) are among the lowest in the world. However, there has been no Norwegian population study with acceptable response rates to provide valid Norwegian reference data. Firstly, to compare CBCL Internalizing, Externalizing, Total Problems and Competence scores of Norwegian children and adolescents with those from 1) previous Norwegian studies, 2) other Nordic countries, and 3) international data. Secondly, to present Norwegian reference data in order to perform these comparisons. Thirdly, to investigate the effects of age, gender, socio-economic and urban/rural status on the CBCL. A stratified cluster sample of 2582 school children (1302 girls and 1280 boys) was identified from the general Norwegian population and their parents were asked to complete the CBCL. The response rate was 65.5%. The mean Total Problems score for the whole sample was 14.2 (standard deviation, s = 14.1). Girls were rated as having greater Competence and fewer Total Problems than boys. Younger children had more Total Problems than adolescents. Parents with low education reported more child Total Problems and lower Competence than those with high education. All effect sizes were small, except for the effect of parental education on child Competence, which was moderate. Total Problems scores were lower than in other societies. The data from this study obtained from one county in central Norway provide an important reference for clinical practice and treatment outcome research.
NASA Astrophysics Data System (ADS)
Puget, P.
The reliable and fast detection of chemical or biological molecules, or the measurement of their concentrations in a sample, are key problems in many fields such as environmental analysis, medical diagnosis, or the food industry. There are traditionally two approaches to this problem. The first aims to carry out a measurement in situ in the sample using chemical and biological sensors. The constraints imposed by detection limits, specificity, and in some cases stability are entirely imputed to the sensor. The second approach uses so-called total analysis systems to process the sample according to a protocol made up of different steps, such as extractions, purifications, concentrations, and a final detection stage. The latter is made in better conditions than with the first approach, which may justify the greater complexity of the process. It is this approach that is implemented in most methods for identifying pathogens, whether they be in biological samples (especially for in vitro diagnosis) or samples taken from the environment. The instrumentation traditionally used to carry out these protocols comprises a set of bulky benchtop apparatus, which needs to be plugged into the mains in order to function. However, there are many specific applications (to be discussed in this chapter) for which analysis instruments with the following characteristics are needed: (1) possibility of use outside the laboratory, i.e., instruments as small as possible, consuming little energy, and largely insensitive to external conditions of temperature, humidity, vibrations, and so on; (2) possibility of use by non-specialised agents, or even unmanned operation; (3) possibility of handling a large number of samples in a limited time, typically for high-throughput screening applications; (4) possibility of handling small samples. At the same time, a high level of performance is required, in particular in terms of (1) the detection limit, which must be as low as possible, (2) specificity, i.e., the ability to detect a particular molecule in a complex mixture, and (3) speed.
Arnett, Anne B; Pennington, Bruce F; Young, Jami F; Hankin, Benjamin L
2016-04-01
The onset of hyperactivity/impulsivity and attention problems (HAP) is typically younger than that of conduct problems (CP), and some research supports a directional relation wherein HAP precedes CP. Studies have tested this theory using between-person and between-group comparisons, with conflicting results. In contrast, prior research has not examined the effects of within-person fluctuations in HAP on CP. This study tested the hypothesis that within-person variation in HAP would positively predict subsequent within-person variation in CP, in two population samples of youth (N = 620) who participated in identical methods of assessment over the course of 30 months. Three-level, hierarchical models were used to test for within-person, longitudinal associations between HAP and CP, as well as moderating effects of between-person and between-family demographics. We found a small but significant association in the expected direction for older youth, but the opposite effect in younger and non-Caucasian youth. These results were replicated across both samples. The process by which early HAP relates to later CP may vary by age and racial identity. © 2015 Association for Child and Adolescent Mental Health.
Marco, C A; Suls, J
1993-06-01
Experience sampling methodology was used to examine the effects of current and prior problems on negative mood within and across days. Forty male community residents wore signal watches and kept diary records of problem occurrence and mood 8 times a day for 8 consecutive days. Trait negative affectivity (NA), prior mood, and concurrent stress were related to mood during the day. Mood in response to a current problem was worse if the prior time had been problem free than if the prior time had been stressful. High NA Ss were more reactive to concurrent stressors than were low NAs, but the effect was small. NA and current-day stress were the major influences on mood across days. High NAs were more distressed by current-day problems and recovered more slowly from problems of the preceding day. The benefits of conceptualizing the effects of daily stressors on mood in terms of spillover, response assimilation, habituation, and contrast are discussed.
Occurrence of aflatoxin M1 in conventional and organic milk offered for sale in Italy.
Armorini, Sara; Altafini, Alberto; Zaghini, Anna; Roncada, Paola
2016-11-01
In the present study, 58 samples of milk were analyzed for the presence of aflatoxin M1 (AFM1). The samples were purchased during the period April-May 2013 in a random manner from local stores (supermarkets, small retail shops, small groceries, and specialized suppliers) located in the surroundings of Bologna (Italy). The commercial samples of milk were either organic (n = 22) or conventional (n = 36); fresh milk samples and UHT milk samples, whole milk samples, and partially skim milk samples were present in both categories. For the quantification of AFM1 in milk, the extraction-purification technique based on the use of immunoaffinity columns was adopted and analyses were performed using HPLC-FD. AFM1 was detected in 35 samples, 11 from organic production and 24 from conventional production. No statistically significant differences (P > 0.05) were observed in the concentration of AFM1 in the two categories of product. The levels of contamination found in the positive samples ranged between 0.009 and 0.026 ng/mL. No sample exceeded the limit defined at community level for AFM1 in milk (0.05 μg/kg). This demonstrates the effectiveness of the checks performed before these food products are placed on the market. Thus, the "aflatoxins" problem that characterized the summer of 2012 does not seem to have affected the contamination level of the milk samples considered.
Zandstra, Anna Roos E; Ormel, Johan; Dietrich, Andrea; van den Heuvel, Edwin R; Hoekstra, Pieter J; Hartman, Catharina A
2018-04-01
From the literature it is not clear whether low resting heart rate (HR) reflects low or high sensitivity to the detrimental effects of adverse environments on externalizing problems. We studied parental psychiatric history (PH), reflecting general vulnerability, as a possible moderator explaining these inconsistencies. Using Linear Mixed Models, we analyzed data from 1914 subjects, obtained in three measurement waves (mean age 11, 13.5, and 16 years) from the TRacking Adolescents' Individual Lives Survey population-based cohort and the parallel clinic-referred cohort. As hypothesized, more chronic stressors predicted more externalizing problems in vulnerable individuals with high resting HR but not in those with low resting HR, suggesting high vs. low sensitivity, respectively, to adverse environmental influences. Low sensitivity to adverse environmental influences in vulnerable individuals exposed to high stressor levels was additionally confirmed by high heart rate variability (Root Mean Squared Successive Difference; RMSSD). In adolescents with low vulnerability, in contrast, the association between chronic stressors and externalizing problems did not substantially differ by resting HR and RMSSD. Future research may demonstrate whether our findings extend to other adverse, or beneficial, influences. Notwithstanding their theoretical interest, the effects were small, pertained only to parent-reported externalizing problems, referred to a small subset of respondents in our sample, and are in need of replication. We conclude that HR and RMSSD are unlikely to be strong moderators of the association between stressors and externalizing problems. Copyright © 2018 Elsevier B.V. All rights reserved.
Application of micro-Fourier transform infrared spectroscopy to the examination of paint samples
NASA Astrophysics Data System (ADS)
Zięba-Palus, J.
1999-11-01
The examination and identification of automobile paints is an important problem in road accident investigations. Since the real sample available is very small, only sensitive microtechniques can be applied. The methods of optical microscopy and micro-Fourier transform infrared spectroscopy (MK-FTIR) supported by scanning electron microscopy together with X-ray microanalysis (SEM-EDX) allow one to carry out the examination of each paint layer without any separation procedure. In this paper an attempt is made to discriminate between different automobile paints of the same colour by the use of these methods for criminalistic investigations.
Bayesian Analysis of the Power Spectrum of the Cosmic Microwave Background
NASA Technical Reports Server (NTRS)
Jewell, Jeffrey B.; Eriksen, H. K.; O'Dwyer, I. J.; Wandelt, B. D.
2005-01-01
There is a wealth of cosmological information encoded in the spatial power spectrum of temperature anisotropies of the cosmic microwave background. The sky, when viewed in the microwave, is very uniform, with a nearly perfect blackbody spectrum at 2.7 K. Very small amplitude brightness fluctuations (to one part in a million!) trace small density perturbations in the early universe (roughly 300,000 years after the Big Bang), which later grow through gravitational instability to the large-scale structure seen in redshift surveys... In this talk, I will discuss a Bayesian formulation of this problem, a Gibbs sampling approach to numerically sampling from the Bayesian posterior, and the application of this approach to the first-year data from the Wilkinson Microwave Anisotropy Probe. I will also comment on recent algorithmic developments for this approach to be tractable for the even more massive data set to be returned from the Planck satellite.
Advanced atomic force microscopy: Development and application
NASA Astrophysics Data System (ADS)
Walters, Deron A.
Over the decade since atomic force microscopy (AFM) was invented, development of new microscopes has been closely intertwined with application of AFM to problems of interest in physics, chemistry, biology, and engineering. New techniques such as tapping mode AFM move quickly in our lab from the designer's bench to the user's table, since this is often the same piece of furniture. In return, designers get ample feedback as to what problems are limiting current instruments, and thus need most urgent attention. Tip sharpness and characterization are such a problem. Chapter 1 describes an AFM designed to operate in a scanning electron microscope, whose electron beam is used to deposit sharp carbonaceous tips. These tips can be tested and used in situ. Another limitation is addressed in Chapter 2: the difficulty of extracting more than just topographic information from a sample. A combined AFM/confocal optical microscope was built to provide simultaneous, independent images of the topography and fluorescence of a sample. In combination with staining or antibody labelling, this could provide submicron information about the composition of a sample. Chapters 3 and 4 discuss two generations of small cantilevers developed for lower-noise, higher-speed AFM of biological samples. In Chapter 4, a 26 μm cantilever is used to image the process of calcite growth from solution at a rate of 1.6 sec/frame. Finally, Chapter 5 explores in detail a biophysics problem that motivates us to develop fast, quiet, and gentle microscopes; namely, the control of crystal growth in seashells by the action of soluble proteins on a growing calcite surface.
Maximum-performance fiber-optic irradiation with nonimaging designs.
Fang, Y; Feuermann, D; Gordon, J M
1997-10-01
A range of practical nonimaging designs for optical fiber applications is presented. Rays emerging from a fiber over a restricted angular range (small numerical aperture) are needed to illuminate a small near-field detector at maximum radiative efficiency. These designs range from pure reflector (all-mirror), to pure dielectric (refractive and based on total internal reflection) to lens-mirror combinations. Sample designs are shown for a specific infrared fiber-optic irradiation problem of practical interest. Optical performance is checked with computer three-dimensional ray tracing. Compared with conventional imaging solutions, nonimaging units offer considerable practical advantages in compactness and ease of alignment as well as noticeably superior radiative efficiency.
Fabrication of electron beam deposited tip for atomic-scale atomic force microscopy in liquid.
Miyazawa, K; Izumi, H; Watanabe-Nakayama, T; Asakawa, H; Fukuma, T
2015-03-13
Recently, the possibilities of improving operation speed and force sensitivity in atomic-scale atomic force microscopy (AFM) in liquid using a small cantilever with an electron beam deposited (EBD) tip have been intensively explored. However, the structure and properties of an EBD tip suitable for such an application have not been well understood, and hence its fabrication process has not been established. In this study, we perform atomic-scale AFM measurements with a small cantilever and identify two major problems: contamination from the cantilever and tip surfaces, and insufficient mechanical strength of an EBD tip with a high aspect ratio. To solve these problems, we propose a fabrication process in which we attach a 2 μm silica bead to the cantilever end and fabricate a 500-700 nm EBD tip on the bead. The bead height ensures a sufficient cantilever-sample distance and suppresses long-range interaction between them even with a short EBD tip having high mechanical strength. After the tip fabrication, we coat the whole cantilever and tip surface with Si (30 nm) to prevent the generation of contamination. We perform atomic-scale AFM imaging and hydration force measurements at a mica-water interface using the fabricated tip and demonstrate its applicability to such atomic-scale applications. By repeating the proposed process, a small cantilever can be reused for atomic-scale measurements several times. Therefore, the proposed method solves the two major problems and enables the practical use of a small cantilever in atomic-scale studies on various solid-liquid interfacial phenomena.
An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.
Yang, Yifei; Tan, Minjia; Dai, Yuewei
2017-01-01
In practical situations, the fault monitoring signals of ship power equipment usually provide few samples, and the data features are non-linear. This paper adopts the least squares support vector machine (LSSVM) method to deal with the problem of fault pattern identification in the case of small sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the discovery probability and the search step length, which effectively solves the problems of slow search speed and low calculation accuracy of the CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
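As a rough sketch of the LSSVM half of this pipeline, the snippet below trains a least squares SVM by solving its single linear system, with a plain grid search standing in for the improved Cuckoo Search tuning of the kernel parameter and penalty factor; the regression-form system applied to labels in {-1, +1} is a common simplification, and the data are synthetic.

import numpy as np

def rbf(X, Z, sigma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    # solve [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Z: np.sign(rbf(Z, X, sigma) @ alpha + b)

rng = np.random.default_rng(2)                     # toy fault/no-fault data
X = rng.normal(size=(60, 4))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=60))
best_gamma, best_sigma = max(
    ((g, s) for g in (0.1, 1.0, 10.0) for s in (0.5, 1.0, 2.0)),
    key=lambda p: (lssvm_fit(X, y, *p)(X) == y).mean())

In practice one would score held-out data rather than the training set, and Cuckoo Search would explore (gamma, sigma) continuously via Lévy flights rather than on a fixed grid.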
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
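The flavour of such exact methods can be seen in the small dynamic program below, which picks m samples (keeping both endpoints) so as to minimize the squared error of piecewise-linear reconstruction in O(n^2 m) time; it is a simplified stand-in written for this summary, not the paper's network-model algorithm.

import numpy as np

def interp_cost(y):
    # cost[i, j]: squared error of linearly interpolating y between kept samples i and j
    n = len(y)
    cost = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 2, n):
            t = np.arange(i + 1, j)
            line = y[i] + (y[j] - y[i]) * (t - i) / (j - i)
            cost[i, j] = ((y[t] - line) ** 2).sum()
    return cost

def best_subset(y, m):
    # E[j, k]: least error covering y[0..j] with k+1 kept samples ending at j
    n = len(y)
    c = interp_cost(y)
    E = np.full((n, m), np.inf)
    P = np.zeros((n, m), dtype=int)
    E[0, 0] = 0.0
    for k in range(1, m):
        for j in range(k, n):
            cand = E[:j, k - 1] + c[:j, j]
            P[j, k] = int(np.argmin(cand))
            E[j, k] = cand[P[j, k]]
    idx, k, j = [n - 1], m - 1, n - 1   # backtrack the optimal sample indices
    while k > 0:
        j = P[j, k]
        idx.append(j)
        k -= 1
    return idx[::-1], E[n - 1, m - 1]

y = np.cumsum(np.random.default_rng(3).normal(size=120))   # toy "ECG" signal
picked, err = best_subset(y, 12)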
Cornely, P; Bromet, E
1986-07-01
The Behavior Screening Questionnaire (BSQ) was used to determine whether 2 1/2-3 1/2 yr old children living near the Three Mile Island (TMI) nuclear reactor were more disturbed than children living near another nuclear plant or near a fossil-fuel facility in Pennsylvania when assessed 2 1/2 yr later. The prevalence of behavior problems was 11%. Differences among the sites in overall rates and individual symptoms were small. Perceptions of environmental stress among the TMI sample of mothers were unrelated to BSQ scores, whereas in the comparison sites, where unemployment was rising, economic concerns were meaningfully related to the BSQ.
Reliability of a Measure of Institutional Discrimination against Minorities
1979-12-01
Properties of a statistical measure of the degree of institutional discrimination are discussed. Two methods of dealing with the problem of the reliability of the measure in small samples are presented: the first is based upon classical statistical theory, and the second derives from a series of computer-generated Monte Carlo simulations.
DRME: Count-based differential RNA methylation analysis at small sample size scenario.
Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia
2016-04-15
Differential methylation, which concerns the difference in the degree of epigenetic regulation via methylation between two conditions, has been formulated with a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small-sample-size scenarios with the discrete read counts of sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation differs significantly from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small-sample-size scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA bisulfite sequencing and PAR-CLIP. The code together with an MeRIP-Seq dataset is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. Copyright © 2016 Elsevier Inc. All rights reserved.
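To illustrate the modelling idea (though not DRME's handling of transcriptional regulation), here is a bare-bones beta-binomial likelihood-ratio test between two small groups of (methylated reads, total reads) pairs; the parameterization, optimizer choice, and toy counts are assumptions made for this sketch.

import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize
from scipy.stats import chi2

def bb_negloglik(params, k, n):
    a, b = np.exp(params)                 # optimize (a, b) on the log scale
    return -np.sum(gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
                   + betaln(k + a, n - k + b) - betaln(a, b))

def fit(k, n):
    # returns the minimized negative log-likelihood
    return minimize(bb_negloglik, x0=[0.0, 0.0], args=(k, n), method="Nelder-Mead").fun

def lr_test(k1, n1, k2, n2):
    nll_sep = fit(k1, n1) + fit(k2, n2)                       # separate (a, b) per group
    nll_joint = fit(np.concatenate([k1, k2]), np.concatenate([n1, n2]))
    stat = 2 * (nll_joint - nll_sep)                          # likelihood-ratio statistic
    return chi2(df=2).sf(max(stat, 0.0))

p = lr_test(np.array([5, 7, 6]), np.array([20, 25, 22]),     # 3 replicates per condition
            np.array([15, 14, 18]), np.array([20, 22, 24]))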
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong
2016-01-01
In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Zhao, Yufeng; Xie, Qi; He, Liyun; Liu, Baoyan; Li, Kun; Zhang, Xiang; Bai, Wenjing; Luo, Lin; Jing, Xianghong; Huo, Ruili
2014-10-01
To help researchers select appropriate data mining models that provide better evidence for the clinical practice of Traditional Chinese Medicine (TCM) diagnosis and therapy, clinical issues in applying data mining models were comprehensively summarized across four significant elements of clinical studies: symptoms, symptom patterns, herbs, and efficacy. Existing problems were further generalized to identify the factors relevant to the performance of data mining models, e.g. data type, sample size, parameters, and variable labels. Combining these relevant factors, the features of TCM clinical data were compared with regard to statistical characteristics and informatics properties. Data mining models were compared simultaneously from the viewpoint of applicable conditions and suitable scopes. The main application problems were data types inconsistent with, and samples too small for, the data mining models used, which caused inappropriate or even mistaken results. These features, i.e. advantages, disadvantages, suitable data types, data mining tasks, and the TCM issues addressed, were summarized and compared. By attending to the special features of different data mining models, clinicians and researchers can select suitable data mining models to resolve TCM problems.
Developmental trajectories of girls' and boys' delinquency and associated problems.
Pepler, Debra J; Jiang, Depeng; Craig, Wendy M; Connolly, Jennifer
2010-10-01
Developmental trajectories in delinquency through adolescence were studied along with family and peer relationship problems. Drawing from eight waves of data over seven years, we conducted trajectory analyses with a sample of 746 students (402 girls; 344 boys). Analyzing girls and boys together, a five-class model emerged: 60% of the adolescents rarely reported delinquency; 27.7% reported low initial levels with moderate levels of delinquency over time; 6% in the late onset group reported initially low and rising levels of delinquency; 5% in the early onset group reported moderate initial levels which increased and then decreased in later adolescence. A small group of only boys (1.3%) labeled chronic reported high initial levels of delinquency that increased over time. Group comparisons revealed problems in internalizing, parent and peer relationship problems. The findings provide direction for early identification and interventions to curtail the development of delinquency.
Methods for trend analysis: Examples with problem/failure data
NASA Technical Reports Server (NTRS)
Church, Curtis K.
1989-01-01
Statistics play an important role in quality control and reliability. Consequently, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of this working handbook, using data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard and some different techniques, and to identify patterns in the data. The techniques used for trend estimation are: regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
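As a small worked example of the nonparametric option mentioned above, Kendall's rank correlation between time and monthly problem-report counts gives a simple trend test; the counts below are invented for illustration.

import numpy as np
from scipy.stats import kendalltau

months = np.arange(1, 13)
reports = np.array([9, 7, 8, 6, 6, 5, 7, 4, 4, 3, 4, 2])   # monthly problem reports
tau, p = kendalltau(months, reports)
print(f"tau = {tau:.2f}, p = {p:.3f}")                      # tau < 0: downward trend
# with small samples and frequent zero counts, ties make the asymptotic
# p-value questionable; exact tables or permutation tests may be preferred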
Assessing the Internal Dynamics of Mathematical Problem Solving in Small Groups.
ERIC Educational Resources Information Center
Artzt, Alice F.; Armour-Thomas, Eleanor
The purpose of this exploratory study was to examine the problem-solving behaviors and perceptions of (n=27) seventh-grade students as they worked on solving a mathematical problem within a small-group setting. An assessment system was developed that allowed for this analysis. To assess problem-solving behaviors within a small group a Group…
Peterson, Robin L.; Kirkwood, Michael W.; Taylor, H. Gerry; Stancin, Terry; Brown, Tanya M.; Wade, Shari L.
2013-01-01
Background A small body of previous research has demonstrated that pediatric traumatic brain injury increases risk for internalizing problems, but findings have varied regarding their predictors and correlates. Methods We examined the level and correlates of internalizing symptoms in 130 teens who had sustained a complicated mild to severe TBI within the past 1 to 6 months. Internalizing problems were measured via both maternal and paternal report Child Behavior Checklist. We also measured family functioning, parent psychiatric symptoms, and post-injury teen neurocognitive function. Results Mean parental ratings of internalizing problems were within the normal range. Depending on informant, 22–26% of the sample demonstrated clinically elevated internalizing problems. In multiple and binary logistic regression models, only parent psychiatric symptoms consistently provided unique prediction of teen internalizing symptoms. For maternal but not paternal report, female gender was associated with greater internalizing problems. Conclusion Parent and teen emotional problems are associated following adolescent TBI. Possible reasons for this relationship, including the effects of TBI on the family unit, are discussed. PMID:22935574
NASA Astrophysics Data System (ADS)
Ruf, B.; Erdnuess, B.; Weinmann, M.
2017-08-01
With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect in reaching good performance is the way the scene space is sampled to create plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
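The gist of the inverse sampling can be shown in a few lines: spacing plane hypotheses uniformly in inverse depth (a projectively motivated choice closely related to the cross-ratio construction above) packs hypotheses near the camera and thins them out at range. The depth bounds and plane count below are arbitrary.

import numpy as np

d_min, d_max, n_planes = 2.0, 100.0, 32
linear = np.linspace(d_min, d_max, n_planes)                  # equidistant in depth
inverse = 1.0 / np.linspace(1.0 / d_max, 1.0 / d_min, n_planes)[::-1]
print(np.diff(linear)[[0, -1]].round(2))                      # [3.16 3.16]  constant steps
print(np.diff(inverse)[[0, -1]].round(2))                     # [0.07 61.25] dense near, sparse far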
A meta-analytic review of two modes of learning and the description-experience gap.
Wulff, Dirk U; Mergenthaler-Canseco, Max; Hertwig, Ralph
2018-02-01
People can learn about the probabilistic consequences of their actions in two ways: One is by consulting descriptions of an action's consequences and probabilities (e.g., reading up on a medication's side effects). The other is by personally experiencing the probabilistic consequences of an action (e.g., beta testing software). In principle, people taking each route can reach analogous states of knowledge and consequently make analogous decisions. In the last dozen years, however, research has demonstrated systematic discrepancies between description- and experienced-based choices. This description-experience gap has been attributed to factors including reliance on a small set of experience, the impact of recency, and different weighting of probability information in the two decision types. In this meta-analysis focusing on studies using the sampling paradigm of decisions from experience, we evaluated these and other determinants of the decision-experience gap by reference to more than 70,000 choices made by more than 6,000 participants. We found, first, a robust description-experience gap but also a key moderator, namely, problem structure. Second, the largest determinant of the gap was reliance on small samples and the associated sampling error: free to terminate search, individuals explored too little to experience all possible outcomes. Third, the gap persisted when sampling error was basically eliminated, suggesting other determinants. Fourth, the occurrence of recency was contingent on decision makers' autonomy to terminate search, consistent with the notion of optional stopping. Finally, we found indications of different probability weighting in decisions from experience versus decisions from description when the problem structure involved a risky and a safe option. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
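A two-line simulation makes the small-sample determinant concrete: with free sampling, a substantial share of decision makers never experience the rare outcome at all. The rare-outcome probability and sample size below are arbitrary but typical of the paradigm.

import numpy as np

rng = np.random.default_rng(4)
p_rare, sample_size, n_agents = 0.1, 7, 100_000   # illustrative values
draws = rng.random((n_agents, sample_size)) < p_rare
print((~draws.any(axis=1)).mean())                # ~0.478, matching (1 - 0.1) ** 7
# nearly half of all small samples contain no rare event, so choices based
# on them behave as if the rare outcome were underweighted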
Alpha Matting with KL-Divergence Based Sparse Sampling.
Karacan, Levent; Erdem, Aykut; Erdem, Erkut
2017-06-22
In this paper, we present a new sampling-based alpha matting approach for the accurate estimation of foreground and background layers of an image. Previous sampling-based methods typically rely on certain heuristics in collecting representative samples from known regions, and thus their performance deteriorates if the underlying assumptions are not satisfied. To alleviate this, we take an entirely new approach and formulate sampling as a sparse subset selection problem where we propose to pick a small set of candidate samples that best explains the unknown pixels. Moreover, we describe a new dissimilarity measure for comparing two samples which is based on KL-divergence between the distributions of features extracted in the vicinity of the samples. The proposed framework is general and could be easily extended to video matting by additionally taking temporal information into account in the sampling process. Evaluation on standard benchmark datasets for image and video matting demonstrates that our approach provides more accurate results compared to the state-of-the-art methods.
Fong, Kenneth N K; Howie, Dorothy R
2009-01-01
We investigated the effects of an explicit problem-solving skills training program using a metacomponential approach with 33 outpatients with moderate acquired brain injury, in the Hong Kong context. We compared an experimental training intervention based on this explicit problem-solving approach, which taught metacomponential strategies, with a conventional cognitive training approach that did not include explicit metacognitive training. We found significant advantages for the experimental group on the Metacomponential Interview measure in association with the explicit metacomponential training, but transfer to the real-life problem-solving measures was not evidenced by statistically significant findings. Small sample size, limited time of intervention, and some limitations of these tools may have been contributing factors in these results. The training program was demonstrated to have a significantly greater effect than the conventional training approach on metacomponential functioning and the component of problem representation. However, these benefits were not transferable to real-life situations.
Li, Ben; Sun, Zhaonan; He, Qing; Zhu, Yu; Qin, Zhaohui S.
2016-01-01
Motivation: Modern high-throughput biotechnologies such as microarray are capable of producing a massive amount of information for each sample. However, in a typical high-throughput experiment, only limited number of samples were assayed, thus the classical ‘large p, small n’ problem. On the other hand, rapid propagation of these high-throughput technologies has resulted in a substantial collection of data, often carried out on the same platform and using the same protocol. It is highly desirable to utilize the existing data when performing analysis and inference on a new dataset. Results: Utilizing existing data can be carried out in a straightforward fashion under the Bayesian framework in which the repository of historical data can be exploited to build informative priors and used in new data analysis. In this work, using microarray data, we investigate the feasibility and effectiveness of deriving informative priors from historical data and using them in the problem of detecting differentially expressed genes. Through simulation and real data analysis, we show that the proposed strategy significantly outperforms existing methods including the popular and state-of-the-art Bayesian hierarchical model-based approaches. Our work illustrates the feasibility and benefits of exploiting the increasingly available genomics big data in statistical inference and presents a promising practical strategy for dealing with the ‘large p, small n’ problem. Availability and implementation: Our method is implemented in R package IPBT, which is freely available from https://github.com/benliemory/IPBT. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26519502
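One standard way to operationalize this idea is empirical-Bayes variance shrinkage in the style of moderated statistics: fit a scaled inverse-chi-square prior to per-gene variances from the historical repository, then pull the new study's noisy variance estimates toward it. The sketch below is a generic scheme written for this summary, not the IPBT package's actual prior construction; the moment-matching step and toy data are assumptions.

import numpy as np

def shrink_variances(s2_new, d_new, s2_hist):
    # crude method-of-moments fit of the prior (d0, s0^2) on historical variances
    m, v = s2_hist.mean(), s2_hist.var()
    d0 = 4.0 + 2.0 * m ** 2 / max(v, 1e-12)
    s0_sq = m * (d0 - 2.0) / d0
    # posterior (moderated) variance per gene
    return (d0 * s0_sq + d_new * s2_new) / (d0 + d_new)

rng = np.random.default_rng(5)
s2_hist = rng.chisquare(8, size=5000) / 8      # stand-in historical variance estimates
s2_new = rng.chisquare(3, size=200) / 3        # new study: small n (3 residual df)
s2_post = shrink_variances(s2_new, 3.0, s2_hist)
print(s2_new.std(), ">", s2_post.std())        # shrinkage stabilizes the estimates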
Patient satisfaction with nursing staff in bone marrow transplantation and hematology units.
Piras, A; Poddigue, M; Angelucci, E
2010-01-01
Several validated questionnaires for assessment of hospitalized patient satisfaction have been reported in the literature. Many have been designed specifically for patients with cancer. User satisfaction is one indicator of service quality and benefits. Thus, we conducted a small qualitative survey managed by nursing staff in our Bone Marrow Transplantation Unit and Acute Leukemia Unit, with the objectives of assessing patient satisfaction, determining critical existing problems, and developing required interventions. The sample was not probabilistic. A questionnaire was developed using the Delphi method in a pilot study with 30 patients. Analysis of the data suggested a good level of patient satisfaction with medical and nursing staffs (100%), but poor satisfaction with food (48%), services (38%), and amenities (31%). Limitations of the study were that the questionnaire was unvalidated and the sample was small. However, for the first time, patient satisfaction was directly measured at our hospital. Another qualitative study will be conducted after correction of the critical points that emerged during this initial study, in a larger sample of patients. Copyright 2010 Elsevier Inc. All rights reserved.
Salvalaglio, Matteo; Tiwary, Pratyush; Maggioni, Giovanni Maria; Mazzotti, Marco; Parrinello, Michele
2016-12-07
Condensation of a liquid droplet from a supersaturated vapour phase is initiated by a prototypical nucleation event. As such it is challenging to compute its rate from atomistic molecular dynamics simulations. In fact at realistic supersaturation conditions condensation occurs on time scales that far exceed what can be reached with conventional molecular dynamics methods. Another known problem in this context is the distortion of the free energy profile associated to nucleation due to the small, finite size of typical simulation boxes. In this work the problem of time scale is addressed with a recently developed enhanced sampling method while contextually correcting for finite size effects. We demonstrate our approach by studying the condensation of argon, and showing that characteristic nucleation times of the order of magnitude of hours can be reliably calculated. Nucleation rates spanning a range of 10 orders of magnitude are computed at moderate supersaturation levels, thus bridging the gap between what standard molecular dynamics simulations can do and real physical systems.
The effects of particle loading on turbulence structure and modelling
NASA Technical Reports Server (NTRS)
Squires, Kyle D.; Eaton, J. K.
1989-01-01
The objective of the present research was to extend the Direct Numerical Simulation (DNS) approach to particle-laden turbulent flows using a simple model of particle/flow interaction. The program addressed the simplest type of flow, homogeneous, isotropic turbulence, and examined interactions between the particles and gas phase turbulence. The specific range of problems examined includes those in which the particle is much smaller than the smallest length scales of the turbulence yet heavy enough to slip relative to the flow. The particle mass loading is large enough to have a significant impact on the turbulence, while the volume loading was small enough that particle-particle interactions could be neglected. Therefore, these simulations are relevant to practical problems involving small, dense particles conveyed by turbulent gas flows at moderate loadings. A sample of the results illustrating modifications of the particle concentration field caused by the turbulence structure is presented, and attenuation of turbulence by the particle cloud is also illustrated.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
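As a concrete (and much simplified) illustration of stochastic collocation for generalized polynomial chaos: for a single standard-normal input, the gPC coefficients are projections onto probabilists' Hermite polynomials, computable with Gauss-Hermite quadrature. This one-dimensional numpy sketch is only a toy stand-in for the sparse, anisotropic, and adaptive constructions in RAVEN:

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He

def pce_coeffs(f, order, n_quad=32):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials:
    f(x) ~ sum_k c_k He_k(x), with c_k = E[f(X) He_k(X)] / k!,
    expectations evaluated by Gauss-HermiteE quadrature."""
    x, w = He.hermegauss(n_quad)       # quadrature for weight exp(-x^2/2)
    w = w / sqrt(2.0 * pi)             # renormalize to the N(0,1) density
    return np.array([np.dot(w, f(x) * He.hermeval(x, [0] * k + [1]))
                     / factorial(k) for k in range(order + 1)])

c = pce_coeffs(lambda x: np.exp(0.3 * x), order=6)
mean = c[0]                                             # ~ exp(0.045)
var = sum(factorial(k) * c[k]**2 for k in range(1, 7))  # ~ lognormal variance
print(mean, var)
```

The moments fall out of the coefficients for free, which is precisely why polynomial chaos is attractive for uncertainty quantification once the model has been sampled at the collocation points.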
Bayesian inference based on stationary Fokker-Planck sampling.
Berrones, Arturo
2010-06-01
A novel formalism for Bayesian learning in the context of complex inference models is proposed. The method is based on the use of the stationary Fokker-Planck (SFP) approach to sample from the posterior density. Stationary Fokker-Planck sampling generalizes the Gibbs sampler algorithm to arbitrary and unknown conditional densities. Through the SFP procedure, approximate analytical expressions for the conditionals and marginals of the posterior can be constructed. At each stage of SFP, the approximate conditionals are used to define a Gibbs sampling process, which is convergent to the full joint posterior. Using the analytical marginals, efficient learning methods in the context of artificial neural networks are outlined. Offline and incremental Bayesian inference and maximum likelihood estimation from the posterior are performed in classification and regression examples. A comparison of SFP with other Monte Carlo strategies in the general problem of sampling from arbitrary densities is also presented. It is shown that SFP is able to jump large low-probability regions without the need for careful tuning of any step-size parameter. In fact, the SFP method requires only a small set of meaningful parameters that can be selected following clear, problem-independent guidelines. The computational cost of SFP, measured in terms of loss function evaluations, grows linearly with the given model's dimension.
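SFP constructs approximate conditionals and then runs a Gibbs process on them. For reference, here is the textbook Gibbs sampler that the method generalizes, on a bivariate normal where the full conditionals are known exactly (unlike the SFP setting, where they must be approximated):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Gibbs sampling for (X, Y) ~ N(0, [[1, rho], [rho, 1]]):
    each full conditional is N(rho * other, 1 - rho**2)."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw X | Y
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw Y | X
        out[t] = x, y
    return out

samples = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(samples[1000:].T))  # correlation ~ 0.8 after burn-in
```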
Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks
NASA Astrophysics Data System (ADS)
Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li
2016-06-01
Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter α_k, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. The inversion of synthetic and field data suggests that the proposed method suppresses the noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as the conventional least squares inversion.
Improving imbalanced scientific text classification using sampling strategies and dictionaries.
Borrajo, L; Romero, R; Iglesias, E L; Redondo Marey, C M
2011-09-15
Many real applications have the imbalanced class distribution problem, where one of the classes is represented by a very small number of cases compared to the other classes. Among the systems affected are those related to the retrieval and classification of scientific documentation. Sampling strategies such as Oversampling and Subsampling are popular in tackling the problem of class imbalance. In this work, we study their effects on three types of classifiers (Knn, SVM and Naive-Bayes) when they are applied to searches on the PubMed scientific database. Another purpose of this paper is to study the use of dictionaries in the classification of biomedical texts. Experiments are conducted with three different dictionaries (BioCreative, NLPBA, and an ad-hoc subset of the UniProt database named Protein) using the mentioned classifiers and sampling strategies. The best results were obtained with the NLPBA and Protein dictionaries and the SVM classifier using the Subsampling balancing technique. These results were compared with those obtained by other authors using the TREC Genomics 2005 public corpus. Copyright 2011 The Author(s). Published by Journal of Integrative Bioinformatics.
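Of the two balancing strategies studied, Subsampling is the simpler: the majority class is randomly shrunk to the size of the minority class before training. A minimal numpy sketch of that idea (array inputs assumed; not the authors' exact pipeline):

```python
import numpy as np

def subsample_majority(X, y, seed=0):
    """Random subsampling: keep all minority-class cases and a random
    subset of each larger class, so every class ends up the same size."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([rng.choice(np.where(y == c)[0], n_min, replace=False)
                           for c in classes])
    rng.shuffle(keep)
    return X[keep], y[keep]
```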
Multicategory nets of single-layer perceptrons: complexity and sample-size issues.
Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras
2010-05-01
The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria can become harmful. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for mitigating finite sample problems in moderate-dimensional PR tasks.
Pollastro, R.M.
1982-01-01
Extremely well-oriented clay mineral mounts for X-ray diffraction analysis can be prepared quickly and without introducing segregation using the filter-membrane peel technique. Mounting problems encountered with smectite-rich samples can be resolved by using minimal sample and partial air-drying of the clay film before transfer to a glass slide. Samples containing small quantities of clay can produce useful oriented specimens if Teflon masks having more restrictive areas are inserted above the membrane filter during clay deposition. Warpage and thermal shock of glass slides can be controlled by using a flat, porous, ceramic plate as a holding surface during heat treatments.
Lord, Dominique
2006-07-01
There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements in improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attribute of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter is unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
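The method-of-moments estimator mentioned here is easy to simulate, and doing so reproduces the qualitative finding: at low sample means and small sample sizes the dispersion estimate becomes erratic. A hedged sketch using the NB2 parameterization Var[Y] = μ + αμ²:

```python
import numpy as np

def simulate_mom_dispersion(mu, alpha, n, reps=2000, seed=1):
    """Simulate iid Poisson-gamma (NB2) counts with E[Y] = mu and
    Var[Y] = mu + alpha*mu**2, then estimate alpha by the method of
    moments: alpha_hat = (s^2 - ybar) / ybar^2, truncated at zero."""
    rng = np.random.default_rng(seed)
    r = 1.0 / alpha                    # negative binomial 'size' parameter
    p = r / (r + mu)
    est = []
    for _ in range(reps):
        y = rng.negative_binomial(r, p, size=n)
        ybar, s2 = y.mean(), y.var(ddof=1)
        est.append(max((s2 - ybar) / ybar**2, 0.0) if ybar > 0 else np.nan)
    est = np.array(est)
    return np.nanmean(est), np.nanstd(est)

# True alpha is 0.5; watch the spread explode as mu and n shrink
for mu, n in [(5.0, 500), (5.0, 50), (0.5, 50)]:
    print(mu, n, simulate_mom_dispersion(mu, alpha=0.5, n=n))
```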
Cortez, Juliana; Pasquini, Celio
2013-02-05
The ring-oven technique, originally applied to classical qualitative analysis from the 1950s to the 1970s, is revisited for use in a simple though highly efficient and green procedure for analyte preconcentration prior to determination by the microanalytical techniques presently available. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center to a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, about 2.0 cm in diameter, is formed on the paper and contains most of the analytes originally present in the sample volume. Preconcentration coefficients of the analyte can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure have been evaluated by concentrating Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour, employing the microanalytical technique of laser-induced breakdown spectroscopy (LIBS). Detection limits of 0.7, 0.4, and 0.3 μg mL(-1) and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)%, for Na, Fe, and Cu, respectively, were obtained in fuel ethanol. It is possible to anticipate the application of the technique, coupled to modern microanalytical and multianalyte techniques, to several analytical problems requiring analyte preconcentration and/or sample stabilization.
Problems with the Small Business Administration’s Merit Appraisal and Compensation System.
1981-09-21
General Accounting Office, Washington, DC. We reviewed the Small Business Administration's (SBA's) performance appraisal/merit pay program.
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
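The LSS endmember-derivation step has a compact linear-algebra form: with the sample abundances A known, the endmember spectra E solve R ≈ AE in the least-squares sense, and unmixing a new pixel is the reverse regression. A toy numpy sketch with synthetic data (the real workflow uses stratified samples of coarse-resolution imagery):

```python
import numpy as np

# Hypothetical training data: known abundance fractions A (rows sum to 1)
# and observed coarse-pixel spectra R across 6 bands.
rng = np.random.default_rng(0)
E_true = rng.uniform(0.05, 0.6, size=(3, 6))        # 3 endmembers x 6 bands
A = rng.dirichlet(np.ones(3), size=200)             # known sample abundances
R = A @ E_true + rng.normal(0, 0.005, (200, 6))     # observed spectra

# Least squares solution (LSS) for the endmember signatures: R ~ A E
E_hat, *_ = np.linalg.lstsq(A, R, rcond=None)

# Unconstrained SMA for a new pixel: abundances a solving r ~ E^T a
r = 0.5 * E_true[0] + 0.5 * E_true[2]
a_hat, *_ = np.linalg.lstsq(E_hat.T, r, rcond=None)
print(a_hat)   # approximately [0.5, 0, 0.5]
```

The fully constrained variant adds non-negativity and sum-to-one constraints on the abundances, which requires a constrained solver rather than plain least squares.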
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
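The paper derives closed-form posteriors; as a numerical stand-in, the same posterior can be evaluated on a grid. With pools of size s tested until d are positive and y negative pools observed, a pool tests positive with probability 1 - (1-p)^s. A sketch under an assumed Beta prior, restricting the grid to small prevalences:

```python
import numpy as np

def posterior_grid(d, y, s, a=1.0, b=9.0, ngrid=2000):
    """Grid posterior for prevalence p under negative binomial group testing:
    pools of size s are tested until d are positive; y pools tested negative.
    P(pool positive) = 1 - (1-p)**s; prior p ~ Beta(a, b)."""
    p = np.linspace(1e-6, 0.5, ngrid)          # plausible small prevalences
    pi = 1.0 - (1.0 - p) ** s
    logpost = (d * np.log(pi) + y * np.log1p(-pi)
               + (a - 1) * np.log(p) + (b - 1) * np.log1p(-p))
    w = np.exp(logpost - logpost.max())
    w /= w.sum()
    mean = np.dot(w, p)                        # posterior mean
    cdf = np.cumsum(w)                         # equal-tail credible interval
    ci = (p[np.searchsorted(cdf, 0.025)], p[np.searchsorted(cdf, 0.975)])
    return mean, ci

print(posterior_grid(d=5, y=40, s=10))
```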
Failure evolution in granular material retained by rigid wall in active mode
NASA Astrophysics Data System (ADS)
Pietrzak, Magdalena; Leśniewska, Danuta
2012-10-01
This paper presents a detailed study of a selected small-scale model test, performed on a sample of surrogate granular material retained by a rigid wall (the typical geotechnical problem of earth thrust on a retaining wall). The experimental data presented in this paper show that the deformation of a granular sample behind a retaining wall can undergo cyclic changes. The nature of these cycles is not clear; it is probably related to micromechanical features of granular materials, which have recently been studied extensively in many research centers around the world. Employing the very precise DIC (PIV) method can help to relate the micro- and macro-scale behavior of granular materials.
Tarescavage, Anthony M; Fischler, Gary L; Cappo, Bruce M; Hill, David O; Corey, David M; Ben-Porath, Yossef S
2015-03-01
The current study examined the predictive validity of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) scores in police officer screenings. We utilized a sample of 712 police officer candidates (82.6% male) from 2 Midwestern police departments. The sample included 426 hired officers, most of whom had supervisor ratings of problem behaviors and human resource records of civilian complaints. With the full sample, we calculated zero-order correlations between MMPI-2-RF scale scores and scale scores from the California Psychological Inventory (Gough, 1956) and Inwald Personality Inventory (Inwald, 2006) by gender. In the hired sample, we correlated MMPI-2-RF scale scores with the outcome data for males only, owing to the relatively small number of hired women. Several scales demonstrated meaningful correlations with the criteria, particularly in the thought dysfunction and behavioral/externalizing dysfunction domains. After applying a correction for range restriction, the correlation coefficient magnitudes were generally in the moderate to large range. The practical implications of these findings were explored by means of risk ratio analyses, which indicated that officers who produced elevations at cutscores lower than the traditionally used 65 T-score level were as much as 10 times more likely than those scoring below the cutoff to exhibit problem behaviors. Overall, the results supported the validity of the MMPI-2-RF in this setting. Implications and limitations of this study are discussed. 2015 APA, all rights reserved
NASA Technical Reports Server (NTRS)
Panzarella, Charles
2004-01-01
As humans prepare for the exploration of our solar system, there is a growing need for miniaturized medical and environmental diagnostic devices for use on spacecraft, especially during long-duration space missions where size and power requirements are critical. In recent years, the biochip (or Lab-on-a-Chip) has emerged as a technology that might be able to satisfy this need. In generic terms, a biochip is a miniaturized microfluidic device analogous to the electronic microchip that ushered in the digital age. It consists of tiny microfluidic channels, pumps and valves that transport small amounts of sample fluids to biosensors that can perform a variety of tests on those fluids in near real time. It has the obvious advantages of being small and lightweight, requiring less sample fluid and reagents, and being more sensitive and efficient than larger devices currently in use. Some of the desired space-based applications would be to provide smaller, more robust devices for analyzing blood, saliva and urine and for testing water and food supplies for the presence of harmful contaminants and microorganisms. Our group has undertaken the goal of adapting as well as improving upon current biochip technology for use in long-duration microgravity environments. In addition to developing computational models of the microfluidic channels, valves and pumps that form the basis of every biochip, we are also trying to identify potential problems that could arise in reduced gravity and develop solutions to these problems. One such problem is due to the prevalence of bubbly sample fluids in microgravity. A bubble trapped in a microfluidic channel could be detrimental to the operation of a biochip. Therefore, the process of bubble formation in microgravity needs to be studied, and a model of this process has been developed and used to understand how bubbles develop and move through biochip components. It is clear that some type of bubble filter would be necessary in Space, and several bubble filter designs are being evaluated.
Is First-Order Vector Autoregressive Model Optimal for fMRI Data?
Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain
2015-09-01
We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for the high-dimensional fMRI data typically with a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types--a resting state, an event-related design, and a block design data set--with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC) based on Kullback's symmetric divergence combining two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.
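The order-selection recipe can be sketched directly: fit VAR(p) by least squares, then score each order with an information criterion. The sketch below uses one common convention for AIC and KIC (penalties 2K and 3K on the log-determinant term); the bias-corrected AICc/KICc variants evaluated in the paper are omitted for brevity:

```python
import numpy as np

def var_order_ic(X, pmax):
    """Least-squares VAR(p) fits for a (T x m) series X, scored by
    AIC = T_eff*log|Sigma_hat| + 2K and KIC = T_eff*log|Sigma_hat| + 3K,
    with K = m*(m*p + 1) coefficients plus intercepts (one common
    convention; each fit uses its own effective sample T_eff = T - p)."""
    T, m = X.shape
    scores = {}
    for p in range(1, pmax + 1):
        Y = X[p:]                                       # regression targets
        Z = np.hstack([np.ones((T - p, 1))] +           # intercept + lags
                      [X[p - j:T - j] for j in range(1, p + 1)])
        B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        E = Y - Z @ B
        Sigma = E.T @ E / (T - p)                       # residual covariance
        K = m * (m * p + 1)
        base = (T - p) * np.log(np.linalg.det(Sigma))
        scores[p] = {"AIC": base + 2 * K, "KIC": base + 3 * K}
    return scores

X = np.random.default_rng(0).normal(size=(200, 3))      # placeholder series
scores = var_order_ic(X, pmax=5)
print(min(scores, key=lambda p: scores[p]["KIC"]))       # selected order
```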
Niknafs, Noushin; Beleva-Guthrie, Violeta; Naiman, Daniel Q.; Karchin, Rachel
2015-01-01
Recent improvements in next-generation sequencing of tumor samples and the ability to identify somatic mutations at low allelic fractions have opened the way for new approaches to model the evolution of individual cancers. The power and utility of these models is increased when tumor samples from multiple sites are sequenced. Temporal ordering of the samples may provide insight into the etiology of both primary and metastatic lesions and rationalizations for tumor recurrence and therapeutic failures. Additional insights may be provided by temporal ordering of evolving subclones—cellular subpopulations with unique mutational profiles. Current methods for subclone hierarchy inference tightly couple the problem of temporal ordering with that of estimating the fraction of cancer cells harboring each mutation. We present a new framework that includes a rigorous statistical hypothesis test and a collection of tools that make it possible to decouple these problems, which we believe will enable substantial progress in the field of subclone hierarchy inference. The methods presented here can be flexibly combined with methods developed by others addressing either of these problems. We provide tools to interpret hypothesis test results, which inform phylogenetic tree construction, and we introduce the first genetic algorithm designed for this purpose. The utility of our framework is systematically demonstrated in simulations. For most tested combinations of tumor purity, sequencing coverage, and tree complexity, good power (≥ 0.8) can be achieved and Type 1 error is well controlled when at least three tumor samples are available from a patient. Using data from three published multi-region tumor sequencing studies of (murine) small cell lung cancer, acute myeloid leukemia, and chronic lymphocytic leukemia, in which the authors reconstructed subclonal phylogenetic trees by manual expert curation, we show how different configurations of our tools can identify either a single tree in agreement with the authors, or a small set of trees, which include the authors’ preferred tree. Our results have implications for improved modeling of tumor evolution and the importance of multi-region tumor sequencing. PMID:26436540
"Compacted" procedures for adults' simple addition: A review and critique of the evidence.
Chen, Yalin; Campbell, Jamie I D
2018-04-01
We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting); a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: These items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤4. Furthermore, we show that a well-known associative retrieval model of addition facts-the network interference theory (Campbell, 1995)-predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.
Jurek, Anne M; Maldonado, George; Greenland, Sander
2013-03-01
Special care must be taken when adjusting for outcome misclassification in case-control data. Basic adjustment formulas using either sensitivity and specificity or predictive values (as with external validation data) do not account for the fact that controls are sampled from a much larger pool of potential controls. A parallel problem arises in surveys and cohort studies in which participation or loss is outcome related. We review this problem and provide simple methods to adjust for outcome misclassification in case-control studies, and illustrate the methods in a case-control birth certificate study of cleft lip/palate and maternal cigarette smoking during pregnancy. Adjustment formulas for outcome misclassification that ignore case-control sampling can yield severely biased results. In the data we examined, the magnitude of error caused by not accounting for sampling is small when population sensitivity and specificity are high, but increases as (1) population sensitivity decreases, (2) population specificity decreases, and (3) the magnitude of the differentiality increases. Failing to account for case-control sampling can result in an odds ratio adjusted for outcome misclassification that is either too high or too low. One needs to account for outcome-related selection (such as case-control sampling) when adjusting for outcome misclassification using external information. Copyright © 2013 Elsevier Inc. All rights reserved.
Vermaes, Ignace P R; van Susante, Anna M J; van Bakel, Hedwig J A
2012-03-01
The aim of this meta-analysis was to provide an up-to-date review of the literature to enhance our understanding of how chronic health conditions (CHCs) affect siblings, both positively and negatively. PsycINFO and Medline were systematically searched. Inclusion criteria were as follows: (a) peer-reviewed, empirical research report; (b) sample n ≥ 10; and (c) reports statistics on siblings' internalizing problems, externalizing problems, and/or positive self-attributes. Overall, there was a significant small negative effect of CHCs on siblings (d(+) = -.10). Siblings of children with CHCs had more internalizing problems (d(+) = .17), more externalizing problems (d(+) = .08), and less positive self-attributes (d(+) = -.09) than comparisons. Older siblings and siblings of children with life-threatening and/or highly intrusive CHCs were more at risk for psychological problems. This study identified several mechanisms through which CHCs affect siblings. Future research should focus on parent-child dynamics and the longitudinal development of positive self-attributes and internalizing problems as well as on identifying what works in services for siblings of children with CHCs.
Mental health problems among young doctors: an updated review of prospective studies.
Tyssen, Reidar; Vaglum, Per
2002-01-01
Previous studies have shown the medical community to exhibit a relatively high level of certain mental health problems, particularly depression, which may lead to drug abuse and suicide. We reviewed prospective studies published over the past 20 years to investigate the prevalence and predictors of mental health problems in doctors during their first postgraduate years. We selected clinically relevant mental health problems as the outcome measure. We found nine cohort studies that met our selection criteria. Each of them had limitations, notably low response rate at follow-up, small sample size, and/or short observation period. Most studies showed that symptoms of mental health problems, particularly of depression, were highest during the first postgraduate year. They found that individual factors, such as family background, personality traits (neuroticism and self-criticism), and coping by wishful thinking, as well as contextual factors including perceived medical-school stress, perceived overwork, emotional pressure, working in an intensive-care setting, and stress outside of work, were often predictive of mental health problems. The studies revealed somewhat discrepant findings with respect to gender. The implications of these findings are discussed.
Small sample sizes in the study of ontogenetic allometry; implications for palaeobiology
Vavrek, Matthew J.
2015-01-01
Quantitative morphometric analyses, particularly ontogenetic allometry, are common methods used in quantifying shape, and changes therein, in both extinct and extant organisms. Due to incompleteness and the potential for restricted sample sizes in the fossil record, palaeobiological analyses of allometry may encounter higher rates of error. Differences in sample size between fossil and extant studies and any resulting effects on allometric analyses have not been thoroughly investigated, and a logical lower threshold to sample size is not clear. Here we show that studies based on fossil datasets have smaller sample sizes than those based on extant taxa. A similar pattern between vertebrates and invertebrates indicates this is not a problem unique to either group, but common to both. We investigate the relationship between sample size, ontogenetic allometric relationship and statistical power using an empirical dataset of skull measurements of modern Alligator mississippiensis. Across a variety of subsampling techniques, used to simulate different taphonomic and/or sampling effects, smaller sample sizes gave less reliable and more variable results, often with the result that allometric relationships will go undetected due to Type II error (failure to reject the null hypothesis). This may result in a false impression of fewer instances of positive/negative allometric growth in fossils compared to living organisms. These limitations are not restricted to fossil data and are equally applicable to allometric analyses of rare extant taxa. No mathematically derived minimum sample size for ontogenetic allometric studies is found; rather results of isometry (but not necessarily allometry) should not be viewed with confidence at small sample sizes. PMID:25780770
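The core experiment, subsampling and checking whether allometry is still detected, is straightforward to emulate. A hedged simulation sketch: generate log-log data with a known allometric slope, then estimate the power to reject isometry (slope = 1) at various sample sizes:

```python
import numpy as np
from scipy import stats

def allometry_power(slope=1.15, n=12, sigma=0.08, reps=2000, seed=0):
    """Simulate log-log ontogenetic data y = slope*x + noise and estimate
    the power to reject isometry (H0: slope = 1) at sample size n."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.uniform(0.0, 1.5, n)            # log body-size range
        y = slope * x + rng.normal(0, sigma, n)
        fit = stats.linregress(x, y)
        t = (fit.slope - 1.0) / fit.stderr      # t-test against slope = 1
        if 2 * stats.t.sf(abs(t), df=n - 2) < 0.05:
            rejections += 1
    return rejections / reps

for n in (6, 12, 25, 50):
    print(n, allometry_power(n=n))  # Type II error shrinks as n grows
```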
A cautionary note on Bayesian estimation of population size by removal sampling with diffuse priors.
Bord, Séverine; Bioche, Christèle; Druilhet, Pierre
2018-05-01
We consider the problem of estimating a population size by removal sampling when the sampling rate is unknown. Bayesian methods are now widespread and allow prior knowledge to be included in the analysis. However, we show that Bayes estimates based on default improper priors lead to improper posteriors or infinite estimates. Similarly, weakly informative priors give unstable estimators that are sensitive to the choice of hyperparameters. By examining the likelihood, we show that population size estimates can be stabilized by penalizing small values of the sampling rate or large values of the population size. Based on theoretical results and simulation studies, we propose some recommendations on the choice of the prior. We then apply our results to real datasets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
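The stabilization idea can be illustrated numerically. In removal sampling, each catch is Binomial in the remaining population; a diffuse prior on the sampling rate lets posterior mass escape toward huge N, while a prior that penalizes very small rates tames it. A grid-posterior sketch, using a Beta(a, b) prior with a > 1 as one such penalizing choice (an illustration, not the authors' exact recommendation):

```python
import numpy as np
from scipy.stats import binom, beta as beta_dist

def removal_posterior(catches, N_max=2000, a=2.0, b=1.0, ngrid=200):
    """Grid posterior for population size N from removal sampling:
    catches[i] ~ Binomial(N - sum(catches[:i]), p), flat prior on N,
    Beta(a, b) prior on the sampling rate p (a > 1 penalizes tiny p)."""
    c = np.asarray(catches)
    cum = np.concatenate([[0], np.cumsum(c)[:-1]])     # removed before catch i
    Ns = np.arange(c.sum(), N_max + 1)
    ps = np.linspace(1e-3, 1 - 1e-3, ngrid)
    logpost = np.empty((len(Ns), len(ps)))
    for i, N in enumerate(Ns):
        ll = sum(binom.logpmf(ci, N - cu, ps) for ci, cu in zip(c, cum))
        logpost[i] = ll + beta_dist.logpdf(ps, a, b)
    w = np.exp(logpost - logpost.max())
    w /= w.sum()
    return Ns, w.sum(axis=1)                           # marginal posterior of N

Ns, post = removal_posterior([150, 90, 52])
print(Ns[np.argmax(post)])   # posterior mode for the population size
```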
NASA Astrophysics Data System (ADS)
Meng, Su; Chen, Jie; Sun, Jian
2017-10-01
This paper investigates the problem of observer-based output feedback control for networked control systems with non-uniform sampling and time-varying transmission delay. The sampling intervals are assumed to vary within a given interval. The transmission delay belongs to a known interval. A discrete-time model is first established, which contains time-varying delay and norm-bounded uncertainties coming from non-uniform sampling intervals. It is then converted to an interconnection of two subsystems in which the forward channel is delay-free. The scaled small gain theorem is used to derive the stability condition for the closed-loop system. Moreover, the observer-based output feedback controller design method is proposed by utilising a modified cone complementary linearisation algorithm. Finally, numerical examples illustrate the validity and superiority of the proposed method.
Gewandter, Jennifer S; Walker, Joanna; Heckler, Charles E; Morrow, Gary R; Ryan, Julie L
2013-12-01
Skin reactions and pain are commonly reported side effects of radiation therapy (RT). We aimed to characterize RT-induced symptoms according to treatment-site subgroups and to identify skin symptoms that correlate with pain. A self-report survey, adapted from the MD Anderson Symptom Inventory and the McGill Pain Questionnaire, assessed RT-induced skin problems, pain, and specific skin symptoms. Wilcoxon signed-rank tests compared mean severity of pre- and post-RT pain and skin problems within each RT-site subgroup. Multiple linear regression (MLR) investigated associations between skin symptoms and pain. Survey respondents (N = 106) were 58% female and on average 64 years old. RT sites included lung, breast, lower abdomen, head/neck/brain, and upper abdomen. Only patients receiving breast RT reported significant increases in treatment-site pain and skin problems (P ≤ .007). Patients receiving head/neck/brain RT reported increased skin problems (P < .0009). MLR showed that post-RT skin tenderness and tightness were most strongly associated with post-RT pain (P = .066 and P = .122, respectively). Limitations include the small sample size, exploratory analyses, and a nonvalidated measure. Only patients receiving breast RT reported significant increases in pain and skin problems at the RT site, while patients receiving head/neck/brain RT had increased skin problems but not pain. These findings suggest that the severity of skin problems is not the only factor that contributes to pain and that interventions should be tailored to specifically target pain at the RT site, possibly by targeting tenderness and tightness. These findings should be confirmed in a larger sampling of RT patients.
Handling limited datasets with neural networks in medical applications: A small-data approach.
Shaikhina, Torgyn; Khovanova, Natalia A
2017-01-01
Single-centre studies in medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (less than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for application of artificial neural networks (NNs) for regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the method of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to the state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied for the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85 MPa. When evaluated on independent test samples, the NN achieved accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows for an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
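The "multiple runs" device is model-agnostic: retrain the same small network from many random initializations and aggregate the predictions, damping the run-to-run fluctuations typical of small-data regression. A hedged scikit-learn sketch of that idea (the architecture and run count here are illustrative, not the authors' configuration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def multiple_runs_prediction(X_train, y_train, X_test, n_runs=50):
    """Train the same small NN from n_runs random initializations and
    return the ensemble mean prediction plus its run-to-run spread."""
    preds = []
    for seed in range(n_runs):
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                           random_state=seed)
        net.fit(X_train, y_train)
        preds.append(net.predict(X_test))
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

The spread across runs doubles as a cheap stability diagnostic: if it is large relative to the prediction, the dataset is too small for the chosen architecture.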
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality.
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2015-12-01
Many people living in low- and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data, including many household sample surveys, are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor, spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991-2010 and two demographic surveillance system sites. We derive a variance estimator of under-five child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey, and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA).
ERIC Educational Resources Information Center
Bowker, Lee H.; Lynch, David M.
Ten management problems for chairs of small departments in small colleges are discussed, along with problem-solving strategies for these administrators. Serious disagreements within a small and intimate department may create a country club culture in which differences are smoothed over and the personal idiosyncrasies of individual members are…
Enhanced sampling techniques in molecular dynamics simulations of biological systems.
Bernardi, Rafael C; Melo, Marcelo C R; Schulten, Klaus
2015-05-01
Molecular dynamics has emerged as an important research methodology covering systems to the level of millions of atoms. However, insufficient sampling often limits its application. The limitation is due to rough energy landscapes, with many local minima separated by high-energy barriers, which govern biomolecular motion. In the past few decades methods have been developed that address the sampling problem, such as replica-exchange molecular dynamics, metadynamics and simulated annealing. Here we present an overview of these sampling methods in an attempt to shed light on which should be selected depending on the type of system property studied. Enhanced sampling methods have been employed for a broad range of biological systems, and the choice of a suitable method is connected to the biological and physical characteristics of the system, in particular system size. While metadynamics and replica-exchange molecular dynamics are the most adopted sampling methods to study biomolecular dynamics, simulated annealing is well suited to characterize very flexible systems. The use of annealing methods was long restricted to simulation of small proteins; however, a variant of the method, generalized simulated annealing, can be employed at a relatively low computational cost to large macromolecular complexes. Molecular dynamics trajectories frequently do not reach all relevant conformational substates, for example those connected with biological function, a problem that can be addressed by employing enhanced sampling algorithms. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
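Of the methods surveyed, simulated annealing is the simplest to show end to end: Metropolis moves under a decreasing temperature let the walker escape local minima early and settle into a deep basin late. A toy Python sketch on a deliberately rugged one-dimensional landscape (not a molecular force field):

```python
import numpy as np

def simulated_annealing(energy, x0, t0=5.0, t_final=1e-3, steps=20000, seed=0):
    """Minimal simulated annealing: Metropolis moves under a geometrically
    decreasing temperature schedule; returns the best state visited."""
    rng = np.random.default_rng(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    cool = (t_final / t0) ** (1.0 / steps)   # geometric cooling factor
    T = t0
    for _ in range(steps):
        x_new = x + rng.normal(0, 0.5)       # random trial move
        e_new = energy(x_new)
        if e_new < e or rng.random() < np.exp(-(e_new - e) / T):
            x, e = x_new, e_new              # Metropolis acceptance
            if e < best_e:
                best_x, best_e = x, e
        T *= cool
    return best_x, best_e

rough = lambda x: 0.1 * x**2 + np.sin(3 * x)   # many local minima
print(simulated_annealing(rough, x0=8.0))
```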
NASA Technical Reports Server (NTRS)
Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide
2014-01-01
There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.
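The QUBO form that the report maps planning problems into is simply minimization of x^T Q x over binary x. A tiny hand-sized illustration: an "exactly one of three" constraint, (x1 + x2 + x3 - 1)^2, written as an upper-triangular Q and brute-forced (the report's mappings are, of course, far larger and problem-specific):

```python
import itertools
import numpy as np

# Expanding (sum x_i - 1)^2 and dropping the constant gives diagonal
# entries -1 and pairwise couplings +2, encoded upper-triangularly:
Q = np.array([[-1.0, 2.0, 2.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

best = min(itertools.product((0, 1), repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)   # a one-hot assignment, e.g. (0, 0, 1), minimizes the QUBO
```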
Empirical evidences of owners’ managerial behaviour - the case of small companies
NASA Astrophysics Data System (ADS)
Lobontiu, G.; Banica, M.; Ravai-Nagy, S.
2017-05-01
In a small firm, the founder or the owner-manager often leaves his or her own personal “stamp” on the way things are done, finding solutions for the multitude of problems the firm faces, and maintaining control over the firm’s operations. The paper aims to investigate the degree to which the owner-managers are controlling the operations of their firm on a day-to-day basis or even getting involved into the management of the functional areas. Our empirical research, conducted on a sample of 200 small and medium-sized enterprises (SME) from the North-Western Romania, Maramures (NUTS3 level - RO114), shows that owner-managers tend to be all-powerful, making decisions based on their experience. Furthermore, the survey highlights the focus of owner-managers on two functional areas, namely the production, and sales and marketing. Finally, the correlation analysis states that in the case of small firms, the owner-manager is more involved in managing the functional areas of the firm, as compared to the medium-ones.
Cannabis, motivation, and life satisfaction in an internet sample
Barnwell, Sara Smucker; Earleywine, Mitch; Wilcox, Rand
2006-01-01
Although little evidence supports cannabis-induced amotivational syndrome, sources continue to assert that the drug saps motivation [1], which may guide current prohibitions. Few studies report low motivation in chronic users; another reveals that they have higher subjective wellbeing. To assess differences in motivation and subjective wellbeing, we used a large sample (N = 487) and strict definitions of cannabis use (7 days/week) and abstinence (never). Standard statistical techniques showed no differences. Robust statistical methods controlling for heteroscedasticity, non-normality and extreme values found no differences in motivation but a small difference in subjective wellbeing. Medical users of cannabis reporting health problems tended to account for a significant portion of subjective wellbeing differences, suggesting that illness decreased wellbeing. All p-values were above p = .05. Thus, daily use of cannabis does not impair motivation. Its impact on subjective wellbeing is small and may actually reflect lower wellbeing due to medical symptoms rather than actual consumption of the plant. PMID:16722561
Conversational behaviour of children with Asperger syndrome and conduct disorder.
Adams, Catherine; Green, Jonathan; Gilchrist, Anne; Cox, Anthony
2002-07-01
Social communication problems in individuals who have Asperger syndrome constitute one of the most significant problems in the syndrome. This study makes a systematic analysis of the difficulties demonstrated with the use of language (pragmatics) in adolescents who have Asperger syndrome. Recent advances in discourse analysis were applied to conversational samples from a group of children with Asperger syndrome and a matched control group of children with severe conduct disorder. Two types of conversation were sampled from each group, differing in emotional content. The results showed that in these contexts children with Asperger syndrome were no more verbose as a group than controls, though they showed a tendency to talk more in more emotion-based conversations. Children with Asperger syndrome, as a group, performed similarly to control subjects in ability to respond to questions and comments. However, they were more likely to show responses which were problematic in both types of conversation. In addition, individuals with Asperger syndrome showed more problems in general conversation than during more emotionally and socially loaded topics. The group with Asperger syndrome was found to contain a small number of individuals with extreme verbosity but this was not a reliable characteristic of the group as a whole.
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
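For contrast with the quantum version, the classical Metropolis algorithm the paper builds on can be stated in a few lines; here it samples a one-dimensional Ising chain (an illustrative toy, not the paper's quantum construction):

```python
import numpy as np

def metropolis_ising(n_spins=100, beta=0.7, sweeps=2000, seed=0):
    """Classical Metropolis sampling of a periodic 1-D Ising chain with
    H = -sum_i s_i s_{i+1}: propose single spin flips, accept with
    probability min(1, exp(-beta * dE))."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], n_spins)
    for _ in range(sweeps * n_spins):
        i = rng.integers(n_spins)
        dE = 2.0 * s[i] * (s[i - 1] + s[(i + 1) % n_spins])  # flip cost
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]
    return s

# The 1-D chain has no phase transition: mean magnetization stays near 0
print(metropolis_ising().mean())
```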
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Perry, C. R., Jr. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances required for an optimum sample allocation for remotely sensed crop surveys is investigated with emphasis on an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using the existing and easily available information of historical statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to variance for wheat in the U.S. Great Plains and is evaluated based on the numerical results obtained. It is shown that the proposed technique is viable and performs satisfactorily with the use of a conservative value (smaller than the expected value) for the field size and with the use of crop statistics from the small political division level.
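The reason stratum variances matter is Neyman's optimum allocation: for a fixed total sample, stratum h should receive n_h proportional to N_h S_h, so the stratum standard deviations S_h must be estimated (here, from historical crop statistics) before the survey is run. A minimal sketch with hypothetical stratum figures:

```python
import numpy as np

def neyman_allocation(N_h, S_h, n_total):
    """Optimum (Neyman) allocation across strata: n_h proportional to
    N_h * S_h, where N_h is stratum size and S_h its standard deviation."""
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    weights = N_h * S_h
    return np.rint(n_total * weights / weights.sum()).astype(int)

# Hypothetical strata: segment counts and historical variance-based S_h guesses
print(neyman_allocation(N_h=[1200, 800, 400], S_h=[9.0, 14.0, 22.0],
                        n_total=120))
```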
Wedgworth, Jessica C.; Brown, Joe; Johnson, Pauline; Olson, Julie B.; Elliott, Mark; Forehand, Rick; Stauber, Christine E.
2014-01-01
Although small, rural water supplies may present elevated microbial risks to consumers in some settings, characterizing exposures through representative point-of-consumption sampling is logistically challenging. In order to evaluate the usefulness of consumer self-reported data in predicting measured water quality and risk factors for contamination, we compared matched consumer interview data with point-of-survey, household water quality and pressure data for 910 households served by 14 small water systems in rural Alabama. Participating households completed one survey that included detailed feedback on two key areas of water service conditions: delivery conditions (intermittent service and low water pressure) and general aesthetic characteristics (taste, odor and color), providing five condition values. Microbial water samples were taken at the point-of-use (from kitchen faucets) and as-delivered from the distribution network (from outside flame-sterilized taps, if available), where pressure was also measured. Water samples were analyzed for free and total chlorine, pH, turbidity, and presence of total coliforms and Escherichia coli. Of the 910 households surveyed, 35% of participants reported experiencing low water pressure, 15% reported intermittent service, and almost 20% reported aesthetic problems (taste, odor or color). Consumer-reported low pressure was associated with lower gauge-measured pressure at taps. While total coliforms (TC) were detected in 17% of outside tap samples and 12% of samples from kitchen faucets, no reported water service conditions or aesthetic characteristics were associated with presence of TC. We conclude that consumer-reported data were of limited utility in predicting potential microbial risks associated with small water supplies in this setting, although consumer feedback on low pressure—a risk factor for contamination—may be relatively reliable and therefore useful in future monitoring efforts. PMID:25046635
Eisenberg, Daniel; Hunt, Justin; Speer, Nicole
2013-01-01
We estimated the prevalence and correlates of mental health problems among college students in the United States. In 2007 and 2009, we administered online surveys with brief mental health screens to random samples of students at 26 campuses nationwide. We used sample probability weights to adjust for survey nonresponse. A total of 14,175 students completed the survey, corresponding to a 44% participation rate. The prevalence of positive screens was 17.3% for depression, 4.1% for panic disorder, 7.0% for generalized anxiety, 6.3% for suicidal ideation, and 15.3% for nonsuicidal self-injury. Mental health problems were significantly associated with sex, race/ethnicity, religiosity, relationship status, living on campus, and financial situation. The prevalence of conditions varied substantially across the campuses, although campus-level variation was still a small proportion of overall variation in student mental health. The findings offer a starting point for identifying individual and contextual factors that may be useful to target in intervention strategies.
Wavelet-domain de-noising technique for THz pulsed spectroscopy
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.
2014-09-01
De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in the TPS system data prevents correct reconstruction of the sample's spectral dielectric properties and study of the sample's internal structure. There are certain regions of the TPS signal Fourier spectrum where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising might potentially expand the range of spectrometer spectral sensitivity and reduce the time of waveform registration, which is an essential problem for biomedical applications of TPS. In this work, we show how recent progress in wavelet-domain signal processing can be used for de-noising TPS waveforms, and demonstrate effective de-noising of TPS data using the Fast Wavelet Transform (FWT). The results of selecting the optimal wavelet basis and the wavelet-domain thresholding technique are reported. The developed technique is applied to reconstruct the spectral characteristics of in vivo healthy and diseased skin samples in the THz frequency range.
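As a concrete, simplified illustration of FWT-based de-noising, the sketch below applies a standard soft-thresholding rule in Python using the PyWavelets package; the 'db4' basis, the decomposition level, and the universal threshold are illustrative assumptions, not the optimal choices reported by the authors.

```python
# Hedged sketch: wavelet-domain soft-threshold de-noising of a pulsed
# waveform. Basis, level, and threshold are assumptions for illustration.
import numpy as np
import pywt

def denoise_waveform(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate noise sigma from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
    # Soft-threshold detail coefficients; keep the approximation untouched.
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Example: a noisy THz-like pulse.
t = np.linspace(-5, 5, 1024)
pulse = np.exp(-t**2) * np.cos(8 * t)
noisy = pulse + 0.1 * np.random.default_rng(0).normal(size=t.size)
clean = denoise_waveform(noisy)
```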
Li, Ben; Sun, Zhaonan; He, Qing; Zhu, Yu; Qin, Zhaohui S
2016-03-01
Modern high-throughput biotechnologies such as microarray are capable of producing a massive amount of information for each sample. However, in a typical high-throughput experiment, only a limited number of samples are assayed, hence the classical 'large p, small n' problem. On the other hand, rapid propagation of these high-throughput technologies has resulted in a substantial collection of data, often carried out on the same platform and using the same protocol. It is highly desirable to utilize the existing data when performing analysis and inference on a new dataset. Utilizing existing data can be carried out in a straightforward fashion under the Bayesian framework, in which the repository of historical data can be exploited to build informative priors and used in new data analysis. In this work, using microarray data, we investigate the feasibility and effectiveness of deriving informative priors from historical data and using them in the problem of detecting differentially expressed genes. Through simulation and real data analysis, we show that the proposed strategy significantly outperforms existing methods, including the popular and state-of-the-art Bayesian hierarchical model-based approaches. Our work illustrates the feasibility and benefits of exploiting the increasingly available genomics big data in statistical inference and presents a promising practical strategy for dealing with the 'large p, small n' problem. Our method is implemented in the R package IPBT, which is freely available from https://github.com/benliemory/IPBT. CONTACT: yuzhu@purdue.edu; zhaohui.qin@emory.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
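A minimal sketch of the underlying idea, assuming a generic limma-style conjugate variance shrinkage rather than the authors' actual model (which is implemented in the R package IPBT):

```python
# Hedged sketch: using historical microarray data to build an informative
# prior for gene-wise variances, then moderating new-study variance
# estimates with it. This is a generic illustration of borrowing strength
# from historical data, not the exact model in the IPBT package.
import numpy as np

def moderated_variances(new_data, hist_data, prior_df=10.0):
    """new_data, hist_data: arrays of shape (genes, samples)."""
    n_new = new_data.shape[1]
    s2_new = new_data.var(axis=1, ddof=1)     # noisy per-gene variances
    s2_prior = hist_data.var(axis=1, ddof=1)  # historical prior location
    # Posterior-mean style shrinkage toward the historical variance.
    return (prior_df * s2_prior + (n_new - 1) * s2_new) / (prior_df + n_new - 1)

rng = np.random.default_rng(1)
hist = rng.normal(0.0, 1.0, size=(500, 40))   # large historical repository
new = rng.normal(0.0, 1.0, size=(500, 4))     # small new study (n = 4)
s2_mod = moderated_variances(new, hist)
# The moderated variances are far less noisy than the raw n = 4 estimates,
# stabilizing downstream t-type statistics for differential expression.
```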
The Multiple-Minima Problem in Protein Folding
NASA Astrophysics Data System (ADS)
Scheraga, Harold A.
1991-10-01
The conformational energy surface of a polypeptide or protein has many local minima, and conventional energy minimization procedures reach only a local minimum (near the starting point of the optimization algorithm) instead of the global minimum (the multiple-minima problem). Several procedures have been developed to surmount this problem, the most promising of which are: (a) build up procedure, (b) optimization of electrostatics, (c) Monte Carlo-plus-energy minimization, (d) electrostatically-driven Monte Carlo, (e) inclusion of distance restraints, (f) adaptive importance-sampling Monte Carlo, (g) relaxation of dimensionality, (h) pattern-recognition, and (i) diffusion equation method. These procedures have been applied to a variety of polypeptide structural problems, and the results of such computations are presented. These include the computation of the structures of open-chain and cyclic peptides, fibrous proteins and globular proteins. Present efforts are being devoted to scaling up these procedures from small polypeptides to proteins, to try to compute the three-dimensional structure of a protein from its amino acid sequence.
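Approach (c), Monte Carlo-plus-energy minimization, is straightforward to sketch; the toy one-dimensional potential below stands in for a polypeptide conformational energy, and the step size and temperature are illustrative assumptions (SciPy's basinhopping routine packages the same idea):

```python
# Hedged sketch: Monte Carlo-plus-minimization ("basin hopping") on a toy
# rugged potential with many local minima. Real applications minimize a
# polypeptide energy function; this potential is only for illustration.
import numpy as np
from scipy.optimize import minimize

def energy(x):
    x = np.asarray(x)
    return float(np.sum(x**2 + 10.0 * np.sin(3.0 * x)))

rng = np.random.default_rng(0)
res = minimize(energy, rng.uniform(-5, 5, size=1))  # relax to a local minimum
x_cur, e_cur = res.x, res.fun
best = (x_cur.copy(), e_cur)
for _ in range(200):
    trial = minimize(energy, x_cur + rng.normal(scale=1.0, size=1))
    # Metropolis acceptance (kT = 1) on the *minimized* energies.
    if trial.fun < e_cur or rng.random() < np.exp(e_cur - trial.fun):
        x_cur, e_cur = trial.x, trial.fun
        if e_cur < best[1]:
            best = (x_cur.copy(), e_cur)
print(best)  # hops between basins instead of getting stuck in one
```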
Low-Dead-Volume Inlet for Vacuum Chamber
NASA Technical Reports Server (NTRS)
Naylor, Guy; Arkin, C.
2011-01-01
Gas introduction from near-ambient pressures to high vacuum traditionally is accomplished either by multi-stage differential pumping that allows for very rapid response, or by a capillary method that allows for a simple, single-stage introduction, but which often has a delayed response. Another means to introduce the gas sample is to use the multi-stage design with only a single stage. This is accomplished by using a very small conductance limit. The problem with this method is that a small conductance limit will amplify issues associated with dead-volume. As a result, a high-vacuum gas inlet was developed with low dead-volume, allowing the use of a very low conductance limit interface. Gas flows through the ConFlat flange at a relatively high flow rate at orders of magnitude greater than through the conductance limit. The small flow goes through a conductance limit that is a double-sided ConFlat.
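For a sense of scale, kinetic theory gives the molecular-flow conductance of a thin orifice for room-temperature air as roughly 11.6 L/s per cm^2 of open area (C = v_mean * A / 4); a quick check with an assumed, illustrative orifice diameter (not the dimension of the inlet described above):

```python
# Hedged sketch: molecular-flow conductance of a thin circular orifice for
# room-temperature air, C ~ 11.6 L/s per cm^2 of open area. The diameter
# below is an assumed, illustrative value, not the NASA inlet's dimension.
import math

def orifice_conductance_lps(diameter_cm):
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return 11.6 * area_cm2  # liters per second, air at ~20 C

print(orifice_conductance_lps(0.01))  # 100-micron hole -> ~9.1e-4 L/s
```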
Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data
Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.
2013-01-01
Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
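A minimal sketch of the first (smoothing) step, assuming a generic Poisson-gamma shrinkage toward the national schedule with synthetic numbers, rather than the authors' exact specification:

```python
# Hedged sketch: Empirical Bayes smoothing of sparse small-area
# age-specific fertility rates toward a national schedule. The prior
# strength k and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
national_rate = np.array([0.02, 0.10, 0.14, 0.10, 0.05, 0.02, 0.005])  # ASFRs
women = rng.integers(20, 500, size=7).astype(float)  # sparse local exposures
births = rng.poisson(national_rate * women)          # observed local births

k = 50.0  # assumed prior strength, expressed as "k women" of prior exposure
smoothed = (births + k * national_rate) / (women + k)
# Cells with ~20 women sit near the national schedule; cells with hundreds
# track the local data. Total fertility sums the rates over 5-year groups.
tfr = 5.0 * smoothed.sum()
print(smoothed.round(3), round(tfr, 2))
```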
Small Business Management and Ownership. Volume Four. Mini-Problems in Entrepreneurship.
ERIC Educational Resources Information Center
Shuchat, Jo
The mini-problems presented in this volume are provided to augment the introductory course, "Minding Your Own Small Business," and the advanced course, "Something Ventured, Something Gained," in small business ownership and management. They can also be used in conjunction with other instructional materials in small business…
Authorship and sampling practice in selected biomechanics and sports science journals.
Knudson, Duane V
2011-06-01
In some biomedical sciences, changes in patterns of collaboration and authorship have complicated the assignment of credit and responsibility for research. It is unclear if this problem of "promiscuous coauthorship" or "hyperauthorship" (defined as six or more authors) is also apparent in the applied research disciplines within sport and exercise science. This study documented the authorship and sampling patterns of original research reports in three applied biomechanics journals (Clinical Biomechanics, Journal of Applied Biomechanics, and Sports Biomechanics) and five similar subdisciplinary journals within sport and exercise science (International Journal of Sports Physiology and Performance, Journal of Sport Rehabilitation, Journal of Teaching Physical Education, Measurement in Physical Education and Exercise Sciences, and Motor Control). Original research reports from the 2009 volumes of these biomechanics and sport and exercise journals were reviewed. Single authorship of papers was rare (2.6%) in these journals, with the mean number of authors ranging from 2.7 to 4.5. Sample sizes and the ratio of sample to authors varied widely, and these variables tended not to be associated with number of authors. Original research reports published in these journals in 2009 tended to be published by small teams of collaborators, so currently there may be few problems with promiscuous coauthorship in these subdisciplines of sport and exercise science.
Predictive value of callous-unemotional traits in a large community sample.
Moran, Paul; Rowe, Richard; Flach, Clare; Briskman, Jacqueline; Ford, Tamsin; Maughan, Barbara; Scott, Stephen; Goodman, Robert
2009-11-01
Callous-unemotional (CU) traits in children and adolescents are increasingly recognized as a distinctive dimension of prognostic importance in clinical samples. Nevertheless, comparatively little is known about the longitudinal effects of these personality traits on the mental health of young people from the general population. Using a large representative sample of children and adolescents living in Great Britain, we set out to examine the effects of CU traits on a range of mental health outcomes measured 3 years after the initial assessment. Parents were interviewed to determine the presence of CU traits in a representative sample of 7,636 children and adolescents. The parents also completed the Strengths and Difficulties Questionnaire, a broad measure of childhood psychopathology. Three years later, parents repeated the Strengths and Difficulties Questionnaire. At 3-year follow-up, CU traits were associated with conduct, hyperactivity, emotional, and total symptom scores. After adjusting for the effects of all covariates, including baseline symptom score, CU traits remained robustly associated with the overall levels of conduct problems and emotional problems and with total psychiatric difficulties at 3-year follow-up. Callous-unemotional traits are independently associated with future psychiatric difficulties in children and adolescents. An assessment of CU traits adds small but significant improvements to the prediction of future psychopathology.
Managing Small Spacecraft Projects: Less is Not Easier
NASA Technical Reports Server (NTRS)
Barley, Bryan; Newhouse, Marilyn
2012-01-01
Managing small, low cost missions (class C or D) is not necessarily easier than managing a full flagship mission. Yet, small missions are typically considered easier to manage and used as a training ground for developing the next generation of project managers. While limited resources can be a problem for small missions, in reality most of the issues inherent in managing small projects are not the direct result of limited resources. Instead, problems encountered by managers of small spacecraft missions often derive from 1) the perception that managing small projects is easier, and that an easier project needs less rigor and formality in execution, 2) the perception that limited resources necessitate or validate omitting standard management practices, 3) less stringent or unclear guidelines or policies for small projects, and 4) stakeholder expectations that are not consistent with the size and nature of the project. For example, the size of a project is sometimes used to justify not building a full, detailed integrated master schedule. However, while a small schedule slip may not be a problem for a large mission, it can indicate a serious problem for a small mission with a short development phase, highlighting the importance of the schedule for early identification of potential issues. Likewise, stakeholders may accept a higher risk posture early in the definition of a low-cost mission, but as launch approaches this acceptance may change. This presentation discusses these common misconceptions about managing small, low cost missions, the problems that can result, and possible solutions.
Ventus, Daniel; Jern, Patrick
2016-10-01
Premature ejaculation (PE) is a common sexual problem in men, but its etiology remains uncertain. Lifestyle factors have long been hypothesized to be associated with sexual problems in general and have been proposed as risk factors for PE. To explore associations among physical exercise, alcohol use, body mass index, PE, and erectile dysfunction. A population-based sample of Finnish men and a sample of Finnish men diagnosed with PE were surveyed for statistical comparisons. Participants using selective serotonin reuptake inhibitors or other medications known to affect symptoms of PE were excluded from analyses. Self-report questionnaires: Multiple Indicators of Premature Ejaculation, International Index of Erectile Function-5, Alcohol Use Disorders Identification Test, and Godin Leisure-Time Exercise Questionnaire. The clinical sample reported lower levels of physical exercise (mean = 27.53, SD = 21.01, n = 69) than the population-based sample (mean = 34.68, SD = 22.82, n = 863, t(930) = 2.52, P = .012), and the effect size was large (d = 0.85). There was a small negative correlation between levels of physical exercise and symptoms of PE (r = -0.09, P < .01, n = 863) in the population-based sample. The association between physical exercise and PE remained significant after controlling for effects of age, erectile dysfunction, alcohol use, and body mass index. If future studies show that the direction of causality of this association is such that physical activity alleviates PE symptoms, then including physical activity in PE treatment interventions could be a promising addition to treatment regimes. Copyright © 2016 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
Determination of the optimal sample size for a clinical trial accounting for the population size.
Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin
2017-07-01
The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function quantifying the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
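The square-root scaling can be checked numerically in a stylized two-arm (Colton-type) model, which is a stand-in for, not a reproduction of, the paper's utility; the prior on the treatment effect and all constants below are assumptions:

```python
# Hedged sketch: numerically checking the O(sqrt(N)) scaling of the optimal
# trial size in a Colton-type model. n patients total are randomized; the
# apparently better arm then treats the remaining N - n patients.
import numpy as np
from scipy import stats

def expected_loss(n, N, sigma=1.0):
    deltas = np.linspace(-1.0, 1.0, 401)    # assumed prior grid on the effect
    se = 2.0 * sigma / np.sqrt(n)           # s.e. of the arm difference
    p_wrong = stats.norm.cdf(-np.abs(deltas) / se)
    # Loss: half the trial gets the worse arm, plus wrong choices afterwards.
    loss = 0.5 * n * np.abs(deltas) + (N - n) * np.abs(deltas) * p_wrong
    return loss.mean()

for N in [10_000, 100_000, 1_000_000]:
    ns = np.unique(np.linspace(10, 20 * np.sqrt(N), 300).astype(int))
    losses = [expected_loss(n, N) for n in ns]
    n_opt = ns[int(np.argmin(losses))]
    print(N, n_opt, round(n_opt / np.sqrt(N), 2))  # ratio roughly constant
```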
Dölitzsch, Claudia; Kölch, Michael; Fegert, Jörg M; Schmeck, Klaus; Schmid, Marc
2016-11-15
The current analyses examined whether the dysregulation profile (DP) 1) could be used to identify children and adolescents at high risk for complex and serious psychopathology and 2) was correlated to other emotional and behavioral problems (such as delinquent behavior or suicide ideation). DP was assessed using both the Child Behavior Checklist (CBCL) and the Youth Self Report (YSR) in a residential care sample. Children and adolescents (N=374) aged 10-18 years living in residential care in Switzerland completed the YSR, and their professional caregivers completed the CBCL. Participants meeting criteria for DP (T-score ≥67 on the anxious/depressed, attention problems, and aggressive behavior scales of the YSR/CBCL) were compared against those who did not for the presence of complex psychopathology (defined as the presence of both emotional and behavioral disorders), and also for the prevalence of several psychiatric diagnoses, suicidal ideation, traumatic experiences, delinquent behaviors, and problems related to quality of life. The diagnostic criteria for CBCL-DP and YSR-DP were met by just 44 (11.8%) and 25 (6.7%) of participants. Only eight participants (2.1%) met the criteria on both instruments. Further analyses were conducted separately for the CBCL-DP and YSR-DP groups. DP was associated with complex psychopathology in only 34.4% of cases according to CBCL and in 60% of cases according to YSR. YSR-DP was somewhat more likely to be associated with psychiatric disorders and associated problems than was the CBCL-DP. Because of the relatively small overlap between the CBCL-DP and YSR-DP, analyses were conducted largely with different samples, likely contributing to the different results. Despite a high rate of psychopathology in the population studied, both the YSR-DP and the CBCL-DP were able to detect only a small proportion of those with complex psychiatric disorders. This result questions the validity of YSR-DP and the CBCL-DP in detecting subjects with complex and serious psychopathology. It is possible that different screening instruments may be more effective. Copyright © 2016 Elsevier B.V. All rights reserved.
Service-Learning General Chemistry: Lead Paint Analyses
NASA Astrophysics Data System (ADS)
Kesner, Laya; Eyring, Edward M.
1999-07-01
Houses painted with lead-based paints are ubiquitous in the United States because the houses and the paint have not worn out two decades after federal regulations prohibited inclusion of lead in paint. Remodeling older homes thus poses a health threat for infants and small children living in those homes. In a service-learning general chemistry class, students disseminate information about this health threat in an older neighborhood. At some of the homes they collect paint samples that they analyze for lead both qualitatively and quantitatively. This service-learning experience generates enthusiasm for general chemistry through the process of working on a "real" problem. Sample collection familiarizes the students with the concept of "representative" sampling. The sample preparation for atomic absorption spectroscopic (AAS) analysis enhances their laboratory skills. The focus of this paper is on the mechanics of integrating this particular service project into the first term of the normal general chemistry course.
A technique for extracting blood samples from mice in fire toxicity tests
NASA Technical Reports Server (NTRS)
Bucci, T. J.; Hilado, C. J.; Lopez, M. T.
1976-01-01
The extraction of adequate blood samples from moribund and dead mice has been a problem because of the small quantity of blood in each animal and the short time available between the animals' death and coagulation of the blood. These difficulties are particularly critical in fire toxicity tests because removal of the test animals while observing proper safety precautions for personnel is time-consuming. Techniques for extracting blood samples from mice were evaluated, and a technique was developed to obtain up to 0.8 ml of blood from a single mouse after death. The technique involves rapid exposure and cutting of the posterior vena cava and accumulation of blood in the peritoneal space. Blood samples of 0.5 ml or more from individual mice have been consistently obtained as much as 16 minutes after apparent death. Results of carboxyhemoglobin analyses of blood appeared reproducible and consistent with carbon monoxide concentrations in the exposure chamber.
Health-Related Quality of Life of the General German Population in 2015: Results from the EQ-5D-5L.
Huber, Manuel B; Felix, Julia; Vogelmann, Martin; Leidl, Reiner
2017-04-16
The EQ-5D-5L is a widely used generic instrument to measure health-related quality of life. This study evaluates health perception in a representative sample of the general German population from 2015. To compare results over time, a component analysis technique was used that separates changes in the description and valuation of health states. The whole sample and also subgroups, stratified by sociodemographic parameters as well as disease affliction, were analyzed. In total, 2040 questionnaires (48.4% male, mean age 47.3 years) were included. The dimension with the lowest number of reported problems was self-care (93.0% without problems), and the dimension with the highest proportion of impairment was pain/discomfort (71.2% without problems). Some 64.3% of the study population were identified as problem-free. The visual analog scale (VAS) mean for all participants was 85.1. Low education was connected with significantly lower VAS scores, but the effect was small. Depression, heart disease, and diabetes had a strong significant negative effect on reported VAS means. Results were slightly better than those in a similar 2012 survey; the most important driver was the increase in the share of the study population that reported to be problem-free. In international comparisons, health perception of the general German population is relatively high and, compared with previous German studies, fairly stable over recent years. Elderly and sick people continue to report significant reductions in perceived health states.
Psychophysiological Associations with Gastrointestinal Symptomatology in Autism Spectrum Disorder
Ferguson, Bradley J.; Marler, Sarah; Altstein, Lily L.; Lee, Evon Batey; Akers, Jill; Sohl, Kristin; McLaughlin, Aaron; Hartnett, Kaitlyn; Kille, Briana; Mazurek, Micah; Macklin, Eric A.; McDonnell, Erin; Barstow, Mariah; Bauman, Margaret L.; Margolis, Kara Gross; Veenstra-VanderWeele, Jeremy; Beversdorf, David Q.
2017-01-01
Autism spectrum disorder (ASD) is often accompanied by gastrointestinal disturbances, which also may impact behavior. Alterations in autonomic nervous system functioning are also frequently observed in ASD. The relationship between these findings in ASD is not known. We examined the relationship between gastrointestinal symptomatology, examining upper and lower gastrointestinal tract symptomatology separately, and autonomic nervous system functioning, as assessed by heart rate variability and skin conductance level, in a sample of 120 individuals with ASD. Relationships with co-occurring medical and psychiatric symptoms were also examined. While the number of participants with significant upper gastrointestinal tract problems was small in this sample, 42.5% of participants met criteria for functional constipation, a disorder of the lower gastrointestinal tract. Heart rate variability, a measure of parasympathetic modulation of cardiac activity, was found to be positively associated with lower gastrointestinal tract symptomatology at baseline. This relationship was particularly strong for participants with co-occurring diagnoses of anxiety disorder and for those with a history of regressive ASD or loss of previously acquired skills. These findings suggest that autonomic function and gastrointestinal problems are intertwined in children with ASD, although it is not possible to assess causality in this data set. Future work should examine the impact of treatment of gastrointestinal problems on autonomic function and anxiety, as well as the impact of anxiety treatment on gastrointestinal problems. Clinicians should be aware that gastrointestinal problems, anxiety, and autonomic dysfunction may cluster in children with ASD and should be addressed in a multidisciplinary treatment plan. PMID:27321113
Brownlow, Janeese A; Klingaman, Elizabeth A; Boland, Elaine M; Brewster, Glenna S; Gehrman, Philip R
2017-10-15
There has been a great deal of research on the comorbidity of insomnia and psychiatric disorders, but much of the existing data is based on small samples and does not assess the full diagnostic criteria for each disorder. Further, the exact nature of the relationship between these conditions and their impact on cognitive problems are under-researched in military samples. Data were collected from the All Army Study of the Army Study to Assess Risk and Resilience in Service members (unweighted N = 21,449; weighted N = 674,335; 18-61 years; 13.5% female). Participants completed the Brief Insomnia Questionnaire to assess for insomnia disorder and a self-administered version of the Composite International Diagnostic Interview Screening Scales to assess for psychiatric disorders and cognitive problems. Military soldiers with current major depressive episode (MDE) had the highest prevalence of insomnia disorder (INS; 85.0%), followed by current generalized anxiety disorder (GAD; 82.6%) and current posttraumatic stress disorder (PTSD; 69.7%), respectively. Significant interactions were found between insomnia and psychiatric disorders; specifically, MDE, PTSD, and GAD status influenced the relationship between insomnia and memory/concentration problems. Limitations include the cross-sectional nature of the assessment and the absence of a comprehensive neurocognitive battery. Psychiatric disorders moderated the relationship between insomnia and memory/concentration problems, suggesting that psychiatric disorders contribute unique variance to cognitive problems even though they are associated with insomnia disorder. Results highlight the importance of considering both insomnia and psychiatric disorders in the diagnosis and treatment of cognitive deficits in military soldiers. Copyright © 2017 Elsevier B.V. All rights reserved.
Creating targeted initial populations for genetic product searches in heterogeneous markets
NASA Astrophysics Data System (ADS)
Foster, Garrett; Turner, Callaway; Ferguson, Scott; Donndelinger, Joseph
2014-12-01
Genetic searches often use randomly generated initial populations to maximize diversity and enable a thorough sampling of the design space. While many of these initial configurations perform poorly, the trade-off between population diversity and solution quality is typically acceptable for small-scale problems. Navigating complex design spaces, however, often requires computationally intelligent approaches that improve solution quality. This article draws on research advances in market-based product design and heuristic optimization to strategically construct 'targeted' initial populations. Targeted initial designs are created using respondent-level part-worths estimated from discrete choice models. These designs are then integrated into a traditional genetic search. Two case study problems of differing complexity are presented to illustrate the benefits of this approach. In both problems, targeted populations lead to computational savings and product configurations with improved market share of preferences. Future research efforts to tailor this approach and extend it towards multiple objectives are also discussed.
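A minimal sketch of the seeding idea, with an assumed attribute structure, a synthetic part-worth matrix, and an arbitrary 50/50 targeted/random split (none of which come from the article):

```python
# Hedged sketch: seeding a genetic search with "targeted" initial designs
# built from respondent-level part-worths, alongside random designs.
import numpy as np

rng = np.random.default_rng(3)
levels = [4, 3, 5, 3]                     # assumed levels per attribute
partworths = [rng.normal(size=(200, k)) for k in levels]  # 200 respondents

def random_design():
    return [int(rng.integers(k)) for k in levels]

def targeted_design():
    # Build the best product for one randomly drawn respondent: pick the
    # highest part-worth level of each attribute.
    r = rng.integers(200)
    return [int(np.argmax(pw[r])) for pw in partworths]

pop_size = 40
population = ([targeted_design() for _ in range(pop_size // 2)] +
              [random_design() for _ in range(pop_size // 2)])
# The GA then proceeds as usual (selection, crossover, mutation); only the
# initialization differs from a fully random start.
```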
Pan, Rui; Wang, Hansheng; Li, Runze
2016-01-01
This paper is concerned with the problem of feature screening for multi-class linear discriminant analysis under ultrahigh dimensional setting. We allow the number of classes to be relatively large. As a result, the total number of relevant features is larger than usual. This makes the related classification problem much more challenging than the conventional one, where the number of classes is small (very often two). To solve the problem, we propose a novel pairwise sure independence screening method for linear discriminant analysis with an ultrahigh dimensional predictor. The proposed procedure is directly applicable to the situation with many classes. We further prove that the proposed method is screening consistent. Simulation studies are conducted to assess the finite sample performance of the new procedure. We also demonstrate the proposed methodology via an empirical analysis of a real life example on handwritten Chinese character recognition. PMID:28127109
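A minimal sketch of pairwise screening, using two-sample t-type statistics and a max-over-pairs ranking; the exact statistic and cutoff in the paper may differ:

```python
# Hedged sketch: pairwise feature screening for multi-class discriminant
# analysis. For each feature, compute a two-sample statistic for every
# pair of classes and rank features by the largest pairwise statistic.
import itertools
import numpy as np

def pairwise_screen(X, y, d):
    """X: (n, p) data, y: (n,) integer labels; returns top-d feature ids."""
    classes = np.unique(y)
    score = np.zeros(X.shape[1])
    for a, b in itertools.combinations(classes, 2):
        Xa, Xb = X[y == a], X[y == b]
        se = np.sqrt(Xa.var(axis=0) / len(Xa) + Xb.var(axis=0) / len(Xb))
        t = np.abs(Xa.mean(axis=0) - Xb.mean(axis=0)) / (se + 1e-12)
        score = np.maximum(score, t)      # best pairwise separation so far
    return np.argsort(score)[::-1][:d]

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5000))          # ultrahigh-dimensional predictor
y = rng.integers(0, 10, size=300)         # relatively many classes
X[y == 3, :5] += 2.0                      # make 5 features truly relevant
print(pairwise_screen(X, y, d=20)[:10])   # the relevant ids rank highly
```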
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the conduct of the error-burst analyses. The pitch axis problem provides data for use in constructing a model for predicting the reliability of software in systems with feedback. The study was undertaken to find means to perform reliability evaluations of flight control software.
Kulesz, Paulina A.; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M.; Francis, David J.
2015-01-01
Objective: Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Method: Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with four robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. Results: All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Conclusions: Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PMID:25495830
Kulesz, Paulina A; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M; Francis, David J
2015-03-01
Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with 4 robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PsycINFO Database Record (c) 2015 APA, all rights reserved.
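A minimal sketch of the comparison, with synthetic data and only two of the four estimators (Pearson and Winsorized) for brevity:

```python
# Hedged sketch: comparing Pearson and Winsorized correlations over
# bootstrap resamples, in the spirit of the study above. The data are
# synthetic, not the spina bifida sample.
import numpy as np
from scipy.stats import mstats, pearsonr

def winsorized_corr(x, y, limits=0.2):
    xw = np.asarray(mstats.winsorize(x, limits=(limits, limits)))
    yw = np.asarray(mstats.winsorize(y, limits=(limits, limits)))
    return np.corrcoef(xw, yw)[0, 1]

rng = np.random.default_rng(5)
n = 54
x = rng.normal(size=n)                    # e.g., regional brain volume
y = 0.3 * x + rng.normal(size=n)          # e.g., attention performance
y[:3] += 6.0                              # a few outlying observations

boot_p, boot_w = [], []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)      # bootstrap resample
    boot_p.append(pearsonr(x[idx], y[idx])[0])
    boot_w.append(winsorized_corr(x[idx], y[idx]))
print(np.mean(boot_p), np.mean(boot_w))   # robust estimate resists outliers
```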
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ˜80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
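The sampling experiment is easy to reproduce; the Monte Carlo below uses the known result that, for n dates drawn uniformly within an interval, the sample range covers (n - 1)/(n + 1) of it on average:

```python
# Hedged sketch: Monte Carlo version of the uniform-sampling experiment
# described above. Draw n dates uniformly within a true interval of length
# 1 and average the fraction captured by the sample range; the expected
# fraction is (n - 1)/(n + 1).
import numpy as np

rng = np.random.default_rng(6)
for n in [4, 5, 10, 20]:
    dates = rng.uniform(0.0, 1.0, size=(100_000, n))
    span = dates.max(axis=1) - dates.min(axis=1)
    print(n, round(span.mean(), 3))   # ~0.60, ~0.67, ~0.82, ~0.90
```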
Escudero, Carlos; Jiang, Peng; Pach, Elzbieta; Borondics, Ferenc; West, Mark W; Tuxen, Anders; Chintapalli, Mahati; Carenco, Sophie; Guo, Jinghua; Salmeron, Miquel
2013-05-01
A miniature (1 ml volume) reaction cell with transparent X-ray windows and laser heating of the sample has been designed to conduct X-ray absorption spectroscopy studies of materials in the presence of gases at atmospheric pressures. Heating by laser solves the problems associated with the presence of reactive gases interacting with hot filaments used in resistive heating methods. It also facilitates collection of a small total electron yield signal by eliminating interference with heating current leakage and ground loops. The excellent operation of the cell is demonstrated with examples of CO and H2 Fischer-Tropsch reactions on Co nanoparticles.
A new estimator of the discovery probability.
Favaro, Stefano; Lijoi, Antonio; Prünster, Igor
2012-12-01
Species sampling problems have a long history in ecological and biological studies and a number of issues, including the evaluation of species richness, the design of sampling experiments, and the estimation of rare species variety, are to be addressed. Such inferential problems have recently emerged also in genomic applications, however, exhibiting some peculiar features that make them more challenging: specifically, one has to deal with very large populations (genomic libraries) containing a huge number of distinct species (genes) and only a small portion of the library has been sampled (sequenced). These aspects motivate the Bayesian nonparametric approach we undertake, since it allows to achieve the degree of flexibility typically needed in this framework. Based on an observed sample of size n, focus will be on prediction of a key aspect of the outcome from an additional sample of size m, namely, the so-called discovery probability. In particular, conditionally on an observed basic sample of size n, we derive a novel estimator of the probability of detecting, at the (n+m+1)th observation, species that have been observed with any given frequency in the enlarged sample of size n+m. Such an estimator admits a closed-form expression that can be exactly evaluated. The result we obtain allows us to quantify both the rate at which rare species are detected and the achieved sample coverage of abundant species, as m increases. Natural applications are represented by the estimation of the probability of discovering rare genes within genomic libraries and the results are illustrated by means of two expressed sequence tags datasets. © 2012, The International Biometric Society.
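To make the quantity concrete, the classical Good-Turing baseline estimates the chance that the next observation is a new species as m1/n; this is only a baseline, not the paper's Bayesian nonparametric estimator:

```python
# Hedged sketch: the Good-Turing baseline for discovery probability. The
# paper's estimator generalizes this to species seen with any frequency
# and to an enlarged sample of size n + m; that is not reproduced here.
from collections import Counter

def good_turing_new_species(sample):
    """P(next observation is a previously unseen species) ~ m1 / n,
    where m1 is the number of species seen exactly once."""
    counts = Counter(sample)
    m1 = sum(1 for c in counts.values() if c == 1)
    return m1 / len(sample)

genes = ["g1", "g2", "g2", "g3", "g4", "g4", "g4", "g5"]  # toy EST library
print(good_turing_new_species(genes))  # 3 singletons / 8 reads = 0.375
```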
General aviation technology assessment
NASA Technical Reports Server (NTRS)
Jacobson, I. D.
1975-01-01
The existing problem areas in general aviation were investigated in order to identify those which can benefit from technological payoffs. The emphasis was placed on acceptance by the pilot/passenger in areas such as performance, safety, handling qualities, ride quality, etc. Inputs were obtained from three sectors: industry; government; and user, although slanted toward the user group. The results should only be considered preliminary due to the small sample sizes of the data. Trends are evident however and a general methodology for allocating effort in future programs is proposed.
Nanophotonic particle simulation and inverse design using artificial neural networks.
Peurifoy, John; Shen, Yichen; Jing, Li; Yang, Yi; Cano-Renteria, Fidel; DeLacy, Brendan G; Joannopoulos, John D; Tegmark, Max; Soljačić, Marin
2018-06-01
We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical.
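A minimal sketch of the two stages (surrogate training, then gradient-based inverse design through the frozen network), with a toy forward model standing in for a real scattering simulation and an assumed architecture:

```python
# Hedged sketch: train a small network to map design parameters to a
# spectrum, then run gradient descent on the *inputs* of the frozen
# network to solve the inverse problem. The toy forward model stands in
# for a Mie/transfer-matrix simulation; sizes are illustrative.
import torch
import torch.nn as nn

def toy_forward(x):                          # stand-in "simulation"
    w = torch.linspace(0.0, 1.0, 50)
    return torch.sin(6.0 * w * x[..., :1]) * torch.exp(-w * x[..., 1:2])

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 50))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                        # train on random design samples
    x = torch.rand(128, 2) * 2.0
    loss = nn.functional.mse_loss(net(x), toy_forward(x))
    opt.zero_grad(); loss.backward(); opt.step()

net.requires_grad_(False)                    # freeze the surrogate
target = toy_forward(torch.tensor([[1.3, 0.7]]))  # desired spectrum
x_hat = torch.rand(1, 2, requires_grad=True)      # inverse design variable
opt_x = torch.optim.Adam([x_hat], lr=1e-2)
for _ in range(1000):                        # analytical gradients via backprop
    loss = nn.functional.mse_loss(net(x_hat), target)
    opt_x.zero_grad(); loss.backward(); opt_x.step()
print(x_hat.detach())                        # should approach [1.3, 0.7]
```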
NASA Astrophysics Data System (ADS)
González, M.; Montaño, M.; Hoyo, C.
2017-01-01
We have constructed a low-cost fluorescence detector model to determine the presence of some heavy metals in an aqueous medium. In particular, we focus on metals which cause public health problems in our country. We performed the first tests with standard samples of Hg(II). The innovative features of this instrument are its small dimensions (9 dm³) and the low cost of the materials used in its construction.
Walker, Christopher S; Yapuncich, Gabriel S; Sridhar, Shilpa; Cameron, Noël; Churchill, Steven E
2018-02-01
Body mass is an ecologically and biomechanically important variable in the study of hominin biology. Regression equations derived from recent human samples allow for the reasonable prediction of body mass of later, more human-like, and generally larger hominins from hip joint dimensions, but potential differences in hip biomechanics across hominin taxa render their use questionable with some earlier taxa (i.e., Australopithecus spp.). Morphometric prediction equations using stature and bi-iliac breadth avoid this problem, but their applicability to early hominins, some of which differ in both size and proportions from modern adult humans, has not been demonstrated. Here we use mean stature, bi-iliac breadth, and body mass from a global sample of human juveniles ranging in age from 6 to 12 years (n = 530 age- and sex-specific group annual means from 33 countries/regions) to evaluate the accuracy of several published morphometric prediction equations when applied to small humans. Though the body proportions of modern human juveniles likely differ from those of small-bodied early hominins, human juveniles (like fossil hominins) often differ in size and proportions from adult human reference samples and, accordingly, serve as a useful model for assessing the robustness of morphometric prediction equations. Morphometric equations based on adults systematically underpredict body mass in the youngest age groups and moderately overpredict body mass in the older groups, which fall in the body size range of adult Australopithecus (∼26-46 kg). Differences in body proportions, notably the ratio of lower limb length to stature, influence predictive accuracy. Ontogenetic changes in these body proportions likely influence the shift in prediction error (from under- to overprediction). However, because morphometric equations are reasonably accurate when applied to this juvenile test sample, we argue these equations may be used to predict body mass in small-bodied hominins, despite the potential for some error induced by differing body proportions and/or extrapolation beyond the original reference sample range. Copyright © 2017 Elsevier Ltd. All rights reserved.
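A minimal sketch of how such a morphometric equation is fit and applied, using synthetic reference data (not the study's global juvenile sample):

```python
# Hedged sketch: fitting a morphometric body-mass prediction equation by
# ordinary least squares on log stature and log bi-iliac breadth. The
# reference data below are synthetic; only the workflow is illustrated.
import numpy as np

rng = np.random.default_rng(7)
n = 200
stature = rng.normal(150.0, 15.0, n)         # cm
biiliac = rng.normal(25.0, 2.5, n)           # cm
mass = 0.02 * stature**1.2 * biiliac**0.8 * rng.lognormal(0.0, 0.05, n)

X = np.column_stack([np.ones(n), np.log(stature), np.log(biiliac)])
beta, *_ = np.linalg.lstsq(X, np.log(mass), rcond=None)

def predict_mass(stature_cm, biiliac_cm):
    return float(np.exp(beta[0] + beta[1] * np.log(stature_cm)
                        + beta[2] * np.log(biiliac_cm)))

print(predict_mass(130.0, 22.0))  # e.g., a small-bodied individual (kg)
# Applying the equation outside the reference range (extrapolation) is
# exactly the accuracy concern the study evaluates.
```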
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems via sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, the measurements, and the parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs, and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
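A minimal sketch of a sampling-importance-resampling (SIR) particle filter jointly tracking a state and a fixed parameter, on a scalar thermal toy model rather than the paper's bioheat problem:

```python
# Hedged sketch: SIR particle filter for joint state/parameter estimation
# on a scalar toy model dT/dt = -a*T + b*u. The bioheat model, domain, and
# noise levels of the paper are far richer; everything here is a stand-in.
import numpy as np

rng = np.random.default_rng(8)
n_p, n_t, dt = 2000, 100, 0.1
a_true, b_true, u = 0.5, 2.0, 1.0
T_true, obs = 0.0, []
for _ in range(n_t):                         # simulate noisy "measurements"
    T_true += dt * (-a_true * T_true + b_true * u)
    obs.append(T_true + rng.normal(0.0, 0.05))

# Each particle carries the state T and the unknown parameter b.
T = np.zeros(n_p)
b = rng.uniform(0.5, 4.0, n_p)               # prior on b
for y in obs:
    T += dt * (-a_true * T + b * u) + rng.normal(0.0, 0.02, n_p)
    w = np.exp(-0.5 * ((y - T) / 0.05) ** 2) # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_p, size=n_p, p=w)     # resample
    T, b = T[idx], b[idx] + rng.normal(0.0, 0.01, n_p)  # jitter parameter
print(b.mean())                              # posterior mean near 2.0
```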
Whitesell, Nancy Rumbaugh; Mitchell, Christina M.; Spicer, Paul
2008-01-01
Latent growth curve modeling was used to estimate developmental trajectories of self-esteem and cultural identity among American Indian high school students and to explore the relationships of these trajectories to personal resources, problem behaviors, and academic performance at the end of high school. The sample included 1,611 participants from the Voices of Indian Teens project, a three-year longitudinal study of adolescents from three diverse American Indian cultural groups in the western U.S. Trajectories of self-esteem were clearly related to academic achievement; cultural identity, in contrast, was largely unrelated, with no direct effects and only very small indirect effects. The relationships between self-esteem and success were mediated by personal resources and problem behaviors. PMID:19209979
Gewandter, Jennifer S.; Walker, Joanna; Heckler, Charles E.; Morrow, Gary R.; Ryan, Julie L.
2015-01-01
Background: Skin reactions and pain are commonly reported side effects of radiation therapy (RT). Objective: To characterize RT-induced symptoms according to treatment site subgroups and identify skin symptoms that correlate with pain. Methods: A self-report survey, adapted from the MD Anderson Symptom Inventory and the McGill Pain Questionnaire, assessed RT-induced skin problems, pain, and specific skin symptoms. Wilcoxon signed-rank tests compared mean severity of pre- and post-RT pain and skin problems within each RT-site subgroup. Multiple linear regression (MLR) investigated associations between skin symptoms and pain. Results: Survey respondents (n=106) were 58% female and on average 64 years old. RT sites included lung, breast, lower abdomen, head/neck/brain, and upper abdomen. Only patients receiving breast RT reported significant increases in treatment site pain and skin problems (p≤0.007). Patients receiving head/neck/brain RT reported increased skin problems (p<0.0009). MLR showed that post-RT skin tenderness and tightness were most strongly associated with post-RT pain (p=0.066 and p=0.122, respectively). Limitations: Small sample size, exploratory analyses, and a non-validated measure. Conclusions: Only patients receiving breast RT reported significant increases in pain and skin problems at the RT site, while patients receiving head/neck/brain RT had increased skin problems, but not pain. These findings suggest that the severity of skin problems is not the only factor that contributes to pain, and interventions should be tailored to specifically target pain at the RT site, possibly by targeting tenderness and tightness. These findings should be confirmed in a larger sampling of RT patients. PMID:24645338
Mayes, Susan Dickerson; Baweja, Raman; Calhoun, Susan L; Syed, Ehsan; Mahr, Fauzia; Siddiqui, Farhat
2014-01-01
Studies of the relationship between bullying and suicide behavior yield mixed results. This is the first study comparing frequencies of suicide behavior in four bullying groups (bully, victim, bully/victim, and neither) in two large psychiatric and community samples of young children and adolescents. Maternal ratings of bullying and suicide ideation and attempts were analyzed for 1,291 children with psychiatric disorders and 658 children in the general population 6-18 years old. For both the psychiatric and community samples, suicide ideation and attempt scores for bully/victims were significantly higher than for victims only and for neither bullies nor victims. Differences between victims only and neither victims nor bullies were nonsignificant. Controlling for sadness and conduct problems, suicide behavior did not differ between the four bullying groups. All children with suicide attempts had a comorbid psychiatric disorder, as did all but two children with suicide ideation. Although the contribution of bullying per se to suicide behavior independent of sadness and conduct problems is small, bullying has obvious negative psychological consequences that make intervention imperative. Interventions need to focus on the psychopathology associated with being a victim and/or perpetrator of bullying in order to reduce suicide behavior.
Rapid screening for perceived cognitive impairment in major depressive disorder.
Iverson, Grant L; Lam, Raymond W
2013-05-01
Subjectively experienced cognitive impairment is common in patients with mood disorders. The British Columbia Cognitive Complaints Inventory (BC-CCI) is a 6-item scale that measures perceived cognitive problems. The purpose of this study is to examine the reliability of the scale in healthy volunteers and depressed patients and to evaluate the sensitivity of the measure to perceived cognitive problems in depression. Participants were 62 physician-diagnosed inpatients or outpatients with depression, who had independently confirmed diagnoses on the Structured Clinical Interview for DSM-IV, and a large sample of healthy community volunteers (n=112). The internal consistency reliability of the BC-CCI was α=.86 for patients with depression and α=.82 for healthy controls. Principal components analyses revealed a one-factor solution accounting for 54% of the total variability in the control sample and a 2-factor solution (cognitive impairment and difficulty with expressive language) accounting for 76% of the variance in the depression sample. The total score difference between the groups was very large (Cohen's d=2.2). The BC-CCI has high internal consistency in both depressed patients and community controls, despite its small number of items. The test is sensitive to cognitive complaints in patients with depression.
Spatial averaging for small molecule diffusion in condensed phase environments
NASA Astrophysics Data System (ADS)
Plattner, Nuria; Doll, J. D.; Meuwly, Markus
2010-07-01
Spatial averaging is a new approach for sampling rare-event problems. The approach modifies the importance function which improves the sampling efficiency while keeping a defined relation to the original statistical distribution. In this work, spatial averaging is applied to multidimensional systems for typical problems arising in physical chemistry. They include (I) a CO molecule diffusing on an amorphous ice surface, (II) a hydrogen molecule probing favorable positions in amorphous ice, and (III) CO migration in myoglobin. The systems encompass a wide range of energy barriers and for all of them spatial averaging is found to outperform conventional Metropolis Monte Carlo. It is also found that optimal simulation parameters are surprisingly similar for the different systems studied, in particular, the radius of the point cloud over which the potential energy function is averaged. For H2 diffusing in amorphous ice it is found that facile migration is possible which is in agreement with previous suggestions from experiment. The free energy barriers involved are typically lower than 1 kcal/mol. Spatial averaging simulations for CO in myoglobin are able to locate all currently characterized metastable states. Overall, it is found that spatial averaging considerably improves the sampling of configurational space.
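A minimal sketch of the averaging idea in a Metropolis walk on a toy double well; the reweighting needed to recover averages under the original distribution is omitted here, and the cloud radius and size are illustrative assumptions:

```python
# Hedged sketch: Metropolis Monte Carlo where the energy in the acceptance
# rule is averaged over a small cloud of points around the configuration,
# which smooths barriers and improves rare-event sampling. Toy potential.
import numpy as np

def V(x):
    return (x**2 - 1.0) ** 2 * 8.0           # double well, barrier at x = 0

def averaged_V(x, radius=0.3, n_cloud=16, rng=None):
    cloud = x + rng.uniform(-radius, radius, size=n_cloud)
    return V(cloud).mean()                    # smoothed importance function

rng = np.random.default_rng(9)
x, beta = -1.0, 4.0
e_avg, crossings = averaged_V(-1.0, rng=rng), 0
for _ in range(50_000):
    x_new = x + rng.normal(scale=0.2)
    e_new = averaged_V(x_new, rng=rng)
    if rng.random() < np.exp(-beta * (e_new - e_avg)):
        if x * x_new < 0:
            crossings += 1                    # barrier-crossing event
        x, e_avg = x_new, e_new
print(crossings)  # typically more crossings than with the bare potential
```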
NASA Astrophysics Data System (ADS)
Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.
2016-12-01
Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task of constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions against refinement in interesting areas. We define an efficient and effective evaluation metric based on Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee the achievement of the desired accuracy. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
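A minimal sketch of such an adaptive loop, using a Gaussian-process surrogate from scikit-learn and a crude uncertainty-times-slope score in place of the paper's Taylor-expansion metric:

```python
# Hedged sketch: adaptive sampling for a global surrogate. Candidates are
# scored by predictive uncertainty (exploration) times a finite-difference
# slope of the surrogate mean (refinement); the paper's metric and stopping
# rule differ in detail, and the target function is a toy stand-in.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                             # expensive-model stand-in
    return np.sin(3.0 * x) + 0.5 * np.sin(9.0 * x)

rng = np.random.default_rng(10)
X = rng.uniform(0.0, 2.0, size=(5, 1))        # small initial design
y = simulator(X).ravel()

for _ in range(25):                           # adaptive refinement loop
    gp = GaussianProcessRegressor(kernel=1.0 * RBF(0.3),
                                  normalize_y=True).fit(X, y)
    cand = rng.uniform(0.0, 2.0, size=(256, 1))
    mu, sd = gp.predict(cand, return_std=True)
    slope = np.abs(gp.predict(cand + 1e-3) - gp.predict(cand - 1e-3)) / 2e-3
    score = sd * (1.0 + slope)                # exploration x refinement
    if sd.max() < 1e-3:                       # simple accuracy-based stop
        break
    x_new = cand[np.argmax(score)]
    X = np.vstack([X, x_new[None, :]])
    y = np.append(y, simulator(x_new)[0])
print(len(X))                                 # total model executions used
```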
Schumacher, Robin F; Malone, Amelia S
2017-09-01
The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.
Magnavita, Nicola
2018-04-02
The workplace is an ideal setting for health promotion. The regular medical examination of workers enables us to screen for numerous diseases, spread good practices and correct lifestyles, and obtain a favourable risk/benefit ratio. The continuous monitoring of the level of workers' wellbeing using a holistic approach during medical surveillance enables us to promptly identify problems in work organisation and the company climate. Problems of this kind can be adequately managed by using a participatory approach. The aim of this paper is twofold: to signal this way of proceeding with medical surveillance, and to describe an organisational development intervention. Participatory groups were used to improve occupational life in a small company. After intervention we observed a reduction in levels of perceived occupational stress measured with the Effort/Reward Imbalance questionnaire, and an improvement in psychological wellbeing assessed by means of the Goldberg Anxiety/Depression scale. Although the limited size of the sample and the lack of a control group call for a cautious evaluation of this study, the participatory strategy proved to be a useful tool due to its cost-effectiveness.
Turbofan forced mixer lobe flow modeling. 2: Three-dimensional inviscid mixer analysis (FLOMIX)
NASA Technical Reports Server (NTRS)
Barber, T.
1988-01-01
A three-dimensional potential analysis (FLOMIX) was formulated and applied to the inviscid flow over a turbofan forced mixer. The method uses a small disturbance formulation to analytically uncouple the circumferential flow from the radial and axial flow problem, thereby reducing the analysis to the solution of a series of axisymmetric problems. These equations are discretized using a flux volume formulation along a Cartesian grid. The method extends earlier applications of the Cartesian method to complex cambered geometries. The effects of power addition are also included within the potential formulation. Good agreement is obtained with an alternate small disturbance analysis for a high penetration symmetric mixer in a planar duct. In addition, calculations showing pressure distributions and induced secondary vorticity fields are presented for practical turbofan mixer configurations, and where possible, comparison was made with available experimental data. A detailed description of the required data input and coordinate definition is presented along with a sample data set for a practical forced mixer configuration. A brief description of the program structure and subroutines is also provided.
Psychosocial correlates of police-registered youth crime. A Finnish population-based study.
Elonheimo, Henrik; Sourander, Andre; Niemelä, Solja; Nuutila, Ari-Matti; Helenius, Hans; Sillanmäki, Lauri; Ristkari, Terja; Parkkola, Kai
2009-01-01
This study focused on psychosocial correlates of youth crime in a sample of 2330 Finnish boys born in 1981. Two kinds of data were combined: questionnaires completed by the boys at call-up in 1999 and crime registered in the Finnish National Police Register between 1998 and 2001. One-fifth of the boys were registered for offending during the 4-year period in late adolescence; 14% were registered for one or two offences, 4% for three to five offences, and 3% for more than five offences. Crime accumulated heavily in those with more than five offences, as they accounted for 68% of all crime. Independent correlates of crime were living in a small community, parents' low educational level and divorce, having a regular relationship, self-reported delinquency, daily smoking, and weekly drunkenness, whereas anxious-depressiveness was inversely associated with crime. Most psychosocial problems covaried linearly with offending frequency, being particularly manifest among multiple recidivists. However, recidivists had very rarely used mental health services. The results indicate that offending and various psychosocial problems accumulate in a small minority of boys not reached by mental health services.
Sampling through time and phylodynamic inference with coalescent and birth–death models
Volz, Erik M.; Frost, Simon D. W.
2014-01-01
Many population genetic models have been developed for the purpose of inferring population size and growth rates from random samples of genetic data. We examine two popular approaches to this problem, the coalescent and the birth–death-sampling model (BDM), in the context of estimating population size and birth rates in a population growing exponentially according to the birth–death branching process. For sequences sampled at a single time, we found the coalescent and the BDM gave virtually indistinguishable results in terms of the growth rates and fraction of the population sampled, even when sampling from a small population. For sequences sampled at multiple time points, we find that the birth–death model estimators are subject to large bias if the sampling process is misspecified. Since BDMs incorporate a model of the sampling process, we show how much of the statistical power of BDMs arises from the sequence of sample times and not from the genealogical tree. This motivates the development of a new coalescent estimator, which is augmented with a model of the known sampling process and is potentially more precise than the coalescent that does not use sample time information. PMID:25401173
Small-Scale Hydroelectric Power in the Southwest: New Impetus for an old Energy Source
NASA Astrophysics Data System (ADS)
1980-06-01
A forum was provided for state legislators and other interested persons to discuss the problems facing small scale hydro developers, and to recommend appropriate solutions to resolve those problems. Alternative policy options were recommended for consideration by both state and federal agencies. Emphasis was placed on the legal, institutional, environmental and economic barriers at the state level, as well as the federal delays associated with licensing small scale hydro projects. Legislative resolution of the problems and delays in small scale hydro licensing and development were also stressed.
Akerman, Eva; Fridlund, Bengt; Ersson, Anders; Granberg-Axéll, Anetth
2009-04-01
Current studies reveal a lack of consensus on the evaluation of physical and psychosocial problems after an ICU stay and their changes over time. The aim was to develop and evaluate the validity and reliability of a questionnaire for assessing physical and psychosocial problems over time in patients following ICU recovery. Thirty-nine patients completed the questionnaire; 17 were retested. The questionnaire was constructed in three sets: physical problems, psychosocial problems and follow-up care. Face and content validity were tested by nurses, researchers and patients. The questionnaire showed good construct validity in all three sets and had strong factor loadings (explained variance >70%, factor loadings >0.5) for all three sets. There was good concurrent validity compared with the SF-12 (r(s)>0.5). Internal consistency was shown to be reliable (Cronbach's alpha 0.70-0.85). Stability reliability on retesting was good for the physical and psychosocial sets (r(s)>0.5). The 3-set 4P questionnaire was a first step in developing an instrument for assessing former ICU patients' problems over time. The sample size was small and thus further studies are needed to confirm these findings.
Nakagawa, Seiji
2011-04-01
Mechanical properties (seismic velocities and attenuation) of geological materials are often frequency dependent, which necessitates measurements of the properties at frequencies relevant to a problem at hand. Conventional acoustic resonant bar tests allow measuring seismic properties of rocks and sediments at sonic frequencies (several kilohertz) that are close to the frequencies employed for geophysical exploration of oil and gas resources. However, the tests require a long, slender sample, which is often difficult to obtain from the deep subsurface or from weak and fractured geological formations. In this paper, an alternative measurement technique to conventional resonant bar tests is presented. This technique uses only a small, jacketed rock or sediment core sample mediating a pair of long, metal extension bars with attached seismic source and receiver, the same geometry as the split Hopkinson pressure bar test for large-strain, dynamic impact experiments. Because of the length and mass added to the sample, the resonance frequency of the entire system can be lowered significantly, compared to the sample alone. The experiment can be conducted under elevated confining pressures up to tens of MPa and temperatures above 100 °C, and concurrently with x-ray CT imaging. The described split Hopkinson resonant bar test is applied in two steps. First, extension and torsion-mode resonance frequencies and attenuation of the entire system are measured. Next, numerical inversions for the complex Young's and shear moduli of the sample are performed. One particularly important step is the correction of the inverted Young's moduli for the effect of sample-rod interfaces. Examples of the application are given for homogeneous, isotropic polymer samples, and a natural rock sample. © 2011 American Institute of Physics
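The core relation behind any resonant bar measurement is simple enough to sketch. The snippet below is a minimal illustration, assuming an ideal free-free uniform bar whose fundamental extension-mode frequency is f1 = c/(2L); it deliberately ignores the extension bars, jacketing, and sample-rod interface corrections that the split Hopkinson resonant bar method is designed to handle, and the numbers are purely illustrative.

```python
def youngs_modulus_from_resonance(f1_hz, length_m, density_kg_m3):
    """Young's modulus from the fundamental extension-mode resonance of an
    ideal free-free uniform bar: f1 = c / (2 L) with c = sqrt(E / rho),
    hence E = rho * (2 * L * f1)**2. Ignores the added extension bars and
    the interface correction discussed in the abstract."""
    c = 2.0 * length_m * f1_hz        # longitudinal bar velocity, m/s
    return density_kg_m3 * c ** 2     # Pa

# Illustrative numbers only: a 5 cm core at 2300 kg/m^3 resonating at 20 kHz
print(youngs_modulus_from_resonance(20e3, 0.05, 2300.0) / 1e9, "GPa")  # ~9.2
```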
Small Schools in a Big World: Thinking about a Wicked Problem
ERIC Educational Resources Information Center
Corbett, Michael; Tinkham, Jennifer
2014-01-01
The position of small rural schools is precarious in much of rural Canada today. What is to be done about small schools in rural communities which are often experiencing population decline and aging, economic restructuring, and the loss of employment and services? We argue this issue is a classic "wicked" policy problem. Small schools…
NASA Astrophysics Data System (ADS)
Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin
2017-06-01
Monte Carlo simulation (MCS) is a useful tool for computation of the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm, employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step updating rule for the design point. This part finishes after a small number of samples have been generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.
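To make the response-surface half of such an algorithm concrete, here is a minimal sketch of fitting the quadratic surrogate commonly paired with Bucher's design (constant, linear, and pure square terms, no cross terms). The function names and the omission of cross terms are illustrative assumptions; the paper's design-point updating rules and effective-length radius are not reproduced here.

```python
import numpy as np

def fit_quadratic_response_surface(X, g):
    """Least-squares fit of g(x) ~ a + sum_i b_i*x_i + sum_i c_i*x_i**2
    (constant, linear and pure square terms, no cross terms), the polynomial
    form commonly paired with Bucher's experimental design.
    X: (n, d) evaluation points of the limit-state function, g: (n,) values."""
    n, d = X.shape
    A = np.hstack([np.ones((n, 1)), X, X ** 2])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return coef  # [a, b_1..b_d, c_1..c_d]

def evaluate_surface(coef, x):
    """Evaluate the fitted surrogate at a single point x of dimension d."""
    d = x.size
    return coef[0] + coef[1:1 + d] @ x + coef[1 + d:] @ (x ** 2)
```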
Analysis of small crack behavior for airframe applications
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Chan, K. S.; Hudak, S. J., Jr.; Davidson, D. L.
1994-01-01
The small fatigue crack problem is critically reviewed from the perspective of airframe applications. Different types of small cracks-microstructural, mechanical, and chemical-are carefully defined and relevant mechanisms identified. Appropriate analysis techniques, including both rigorous scientific and practical engineering treatments, are briefly described. Important materials data issues are addressed, including increased scatter in small crack data and recommended small crack test methods. Key problems requiring further study are highlighted.
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably to or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
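The incremental SVD idea at the heart of the dynamic-updating stage can be sketched compactly. The following is a schematic online-SVD update for newly arrived data columns, assuming column-wise growth of the data matrix; the paper's actual algorithm and its coupling to the low-rank representation differ in details.

```python
import numpy as np

def incremental_svd(U, s, Vt, C, rank):
    """Update a truncated SVD  A ~ U @ diag(s) @ Vt  when new columns C
    arrive, without refactorizing A from scratch. A schematic version of
    the standard online-SVD construction, not the paper's exact algorithm."""
    L = U.T @ C                        # coefficients of C in span(U)
    H = C - U @ L                      # residual outside span(U)
    Q, R = np.linalg.qr(H)             # orthonormal basis for the residual
    k, c = s.size, C.shape[1]
    # Small core matrix: [A, C] = [U, Q] @ K @ blkdiag(Vt, I)
    K = np.zeros((k + c, k + c))
    K[:k, :k] = np.diag(s)
    K[:k, k:] = L
    K[k:, k:] = R
    Up, sp, Vtp = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Up
    Vt_ext = np.zeros((k + c, Vt.shape[1] + c))
    Vt_ext[:k, :Vt.shape[1]] = Vt      # old right factor
    Vt_ext[k:, Vt.shape[1]:] = np.eye(c)  # identity block for new columns
    Vt_new = Vtp @ Vt_ext
    return U_new[:, :rank], sp[:rank], Vt_new[:rank]
```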
Stratum variance estimation for sample allocation in crop surveys. [Great Plains Corridor]
NASA Technical Reports Server (NTRS)
Perry, C. R., Jr.; Chhikara, R. S. (Principal Investigator)
1980-01-01
The problem of determining the stratum variances needed to achieve an optimum sample allocation for crop surveys by remote sensing is investigated by considering an approach based on the concept of stratum variance as a function of the sampling unit size. A methodology using existing and easily available historical crop statistics is developed for obtaining initial estimates of stratum variances. The procedure is applied to estimate stratum variances for wheat in the U.S. Great Plains and is evaluated based on the numerical results thus obtained. It is shown that the proposed technique is viable and performs satisfactorily when a conservative value for the field size and crop statistics from the small political subdivision level are used, as judged by comparing the estimated stratum variances to those obtained using the LANDSAT data.
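Once stratum variances are in hand, they feed directly into the classical optimum (Neyman) allocation, sketched below; the stratum sizes and standard deviations in the example are invented for illustration, not taken from the Great Plains study.

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Optimum (Neyman) allocation of a fixed total sample size across
    strata: n_h proportional to N_h * S_h. The stratum standard deviations
    S_h play exactly the role of the estimated stratum variances discussed
    above; rounding means the parts may not sum exactly to n_total."""
    weights = [N * S for N, S in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Illustrative: 300 segments over three strata
print(neyman_allocation(300, [500, 200, 300], [12.0, 30.0, 8.0]))  # [125, 125, 50]
```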
[Laser microdissection for biology and medicine].
Podgornyĭ, O V; Lazarev, V N; Govorun, V M
2012-01-01
For routine extraction of DNA, RNA, proteins and metabolites, small tissue pieces are placed into lysing solution. These tissue pieces generally contain different cell types. For this reason, the lysate contains components of different cell types, which complicates the interpretation of molecular analysis results. Laser microdissection overcomes this problem. Laser microdissection is a method for procuring tissue samples containing defined cell subpopulations, individual cells and even subcellular components under direct microscopic visualization. Collected samples can be subjected to different downstream molecular assays: DNA analysis, RNA transcript profiling, cDNA library generation and gene expression analysis, proteomic analysis and metabolite profiling. Laser microdissection has wide applications in oncology (research and routine), cellular and molecular biology, biochemistry and forensics. This paper reviews the principles of different laser microdissection instruments, examples of laser microdissection applications and problems of sample preparation for laser microdissection.
Space-Time Smoothing of Complex Survey Data: Small Area Estimation for Child Mortality
Mercer, Laina D; Wakefield, Jon; Pantazis, Athena; Lutambi, Angelina M; Masanja, Honorati; Clark, Samuel
2016-01-01
Many people living in low and middle-income countries are not covered by civil registration and vital statistics systems. Consequently, a wide variety of other types of data including many household sample surveys are used to estimate health and population indicators. In this paper we combine data from sample surveys and demographic surveillance systems to produce small area estimates of child mortality through time. Small area estimates are necessary to understand geographical heterogeneity in health indicators when full-coverage vital statistics are not available. For this endeavor spatio-temporal smoothing is beneficial to alleviate problems of data sparsity. The use of conventional hierarchical models requires careful thought since the survey weights may need to be considered to alleviate bias due to non-random sampling and non-response. The application that motivated this work is estimation of child mortality rates in five-year time intervals in regions of Tanzania. Data come from Demographic and Health Surveys conducted over the period 1991–2010 and two demographic surveillance system sites. We derive a variance estimator of under five years child mortality that accounts for the complex survey weighting. For our application, the hierarchical models we consider include random effects for area, time and survey and we compare models using a variety of measures including the conditional predictive ordinate (CPO). The method we propose is implemented via the fast and accurate integrated nested Laplace approximation (INLA). PMID:27468328
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
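The combination of an SVD of the predictors with regularization can be illustrated by the closed-form ridge solution, in which directions with small singular values are shrunk the most, countering the multicollinearity and overfitting described above. This is a minimal stand-in for the paper's Bayesian model, not its actual estimator.

```python
import numpy as np

def ridge_via_svd(X, y, lam):
    """Ridge regression through the SVD of the predictor matrix X = U S V^T:
    beta = V diag(s / (s**2 + lam)) U^T y. Directions with small singular
    values (the source of multicollinearity) receive the strongest shrinkage.
    lam > 0 is the regularization strength, a tuning choice."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s ** 2 + lam)
    return Vt.T @ (d * (U.T @ y))
```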
Yang, Mingjun; Huang, Jing; MacKerell, Alexander D
2015-06-09
Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.
Hwang, Gwangseok; Chung, Jaehun; Kwon, Ohmyoung
2014-11-01
The application of conventional scanning thermal microscopy (SThM) is severely limited by three major problems: (i) distortion of the measured signal due to heat transfer through the air, (ii) the unknown and variable value of the tip-sample thermal contact resistance, and (iii) perturbation of the sample temperature due to the heat flux through the tip-sample thermal contact. Recently, we proposed null-point scanning thermal microscopy (NP SThM) as a way of overcoming these problems in principle by tracking the thermal equilibrium between the end of the SThM tip and the sample surface. However, in order to obtain high spatial resolution, which is the primary motivation for SThM, NP SThM requires an extremely sensitive SThM probe that can trace the vanishingly small heat flux through the tip-sample nano-thermal contact. Herein, we derive a relation between the spatial resolution and the design parameters of a SThM probe, optimize the thermal and electrical design, and develop a batch-fabrication process. We also quantitatively demonstrate significantly improved sensitivity, lower measurement noise, and higher spatial resolution of the fabricated SThM probes. By utilizing the exceptional performance of these fabricated probes, we show that NP SThM can be used to obtain a quantitative temperature profile with nanoscale resolution independent of the changing tip-sample thermal contact resistance and without perturbation of the sample temperature or distortion due to the heat transfer through the air.
Characteristics of older adult problem gamblers calling a gambling helpline.
Potenza, Marc N; Steinberg, Marvin A; Wu, Ran; Rounsaville, Bruce J; O'malley, Stephanie S
2006-06-01
Few investigations have characterized groups of older adults with gambling problems, and published reports are currently limited by small samples of older adult problem gamblers. Gambling helplines represent a widespread mechanism for assisting problem gamblers to move into treatment settings. Given data from older adult problem gamblers in treatment, we hypothesized that older as compared with younger adult problem gamblers calling a gambling helpline would be less likely to report gambling-related problems. Logistic regression analyses were performed on data obtained from January 1, 2000 to December 31, 2001, inclusive, from callers with gambling problems (N = 1,084) contacting the Connecticut Council on Problem Gambling Helpline. Of the 1,018 phone calls used in the logistic regression analyses, 168 (16.5%) were from older adults and 850 (83.5%) from younger adults. Age-related differences were observed in demographic features, types and patterns of gambling reported as problematic, gambling-related problems and psychiatric symptoms, substance use problems, patterns of indebtedness, and family histories of addictive disorders. Older as compared with younger adult problem gamblers were more likely to report having lower incomes, longer durations of gambling, fewer types of problematic gambling, and problems with casino slot machine gambling and less likely to report gambling-related anxiety, family problems, illegal behaviors and arrests, drug problems, indebtedness to bookies or acquaintances, family histories of drug abuse, and problems with casino table gambling. Older as compared with younger adult problem gamblers calling a gambling helpline differ on many clinically relevant features. The findings suggest the need for improved and unique prevention and treatment strategies for older adults with gambling problems.
MC2-3 / DIF3D Analysis for the ZPPR-15 Doppler and Sodium Void Worth Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Lell, Richard M.; Lee, Changho
This manuscript covers validation efforts for our deterministic codes at Argonne National Laboratory. The experimental results come from the ZPPR-15 work in 1985-1986, which was focused on the accuracy of physics data for the integral fast reactor concept. Results for six loadings are studied in this document and focus on Doppler sample worths and sodium void worths. The ZPPR-15 loadings are modeled using the MC2-3/DIF3D codes developed and maintained at ANL and the MCNP code from LANL. The deterministic models are generated by processing the as-built geometry information, i.e. MCNP input, and generating MC2-3 cross section generation instructions and a drawer-homogenized equivalence problem. The Doppler reactivity worth measurements are small heated samples which insert very small amounts of reactivity into the system (< 2 pcm). The results generated by the MC2-3/DIF3D codes were excellent for ZPPR-15A and ZPPR-15B and good for ZPPR-15D, compared to the MCNP solutions. In all cases, notable improvements were made over the analysis techniques applied to the same problems in 1987. The sodium void worths from MC2-3/DIF3D were quite good at 37.5 pcm, while the MCNP result was 33 pcm and the measured result was 31.5 pcm. Copyright © (2015) by the American Nuclear Society. All rights reserved.
Comparison between Hydrogen, Methane and Ethylene Fuels in a 3-D Scramjet at Mach 8
2016-06-24
characteristics in air. The disadvantage of hydrogen is its low density; the low energy per unit volume of gaseous hydrogen is a particular problem for small vehicles with significant internal volume constraints.
A machine for haemodialysing very small infants.
Everdell, Nicholas L; Coulthard, Malcolm G; Crosier, Jean; Keir, Michael J
2005-05-01
Babies weighing under 6 kg are difficult to dialyse, especially those as small as 1 kg. Peritoneal dialysis is easier than haemodialysis, but is not always possible, and clears molecules less efficiently. Two factors complicate haemodialysis. First, extracorporeal circuits are large relative to a baby's blood volume, necessitating priming with fresh or modified blood. Second, blood flow from infants' access vessels is disproportionately low (Poiseuille's law), causing inadequate dialysis, or clotting within the circuit. These problems are minimised by using single lumen access, a very small circuit, and a reservoir syringe to separate the sampling and dialyser blood flow rates. Its manual operation is tedious, so we developed a computer-controlled, pressure-monitored machine to run it, including adjusting the blood withdrawal rate from poorly sampling lines. We have dialysed four babies weighing 0.8-3.4 kg, with renal failure or metabolic disorders. The circuits did not require priming. Clearances of creatinine, urea, potassium, phosphate and ammonia were mean (SD) 0.54 (0.22) ml/min using one dialyser, and 0.98 (0.22) ml/min using two in parallel. Ammonia clearance in a 2.4 kg baby had a 9 h half-life. Ultrafiltration up to 45 ml/h was achieved easily. This device provided infants with immediate, effective and convenient haemodialysis, typically delivered for prolonged periods.
NASA Astrophysics Data System (ADS)
Sack, Patrick J.; Berry, Ron F.; Meffre, Sebastien; Falloon, Trevor J.; Gemmell, J. Bruce; Friedman, Richard M.
2011-05-01
A new U-Pb zircon dating protocol for small (10-50 μm) zircons has been developed using an automated searching method to locate zircon grains in a polished rock mount. The scanning electron microscope-energy-dispersive X-ray spectrum-based automated searching method can routinely find in situ zircon grains larger than 5 μm across. A selection of these grains was ablated using a 10 μm laser spot and analyzed in an inductively coupled plasma-quadrupole mass spectrometer (ICP-QMS). The technique has lower precision (~6% uncertainty at 95% confidence on individual spot analyses) than typical laser ablation ICP-MS (~2%), secondary ion mass spectrometry (<1%), and isotope dilution-thermal ionization mass spectrometry (~0.4%) methods. However, it is accurate and has been used successfully on fine-grained lithologies, including mafic rocks from island arcs, ocean basins, and ophiolites, which have traditionally been considered devoid of dateable zircons. This technique is particularly well suited for medium- to fine-grained mafic volcanic rocks where zircon separation is challenging and can also be used to date rocks where only small amounts of sample are available (clasts, xenoliths, dredge rocks). The most significant problem with dating small in situ zircon grains is Pb loss. In our study, many of the small zircons analyzed have high U contents, and the isotopic compositions of these grains are consistent with Pb loss resulting from internal α radiation damage. This problem is not significant in very young rocks and can be minimized in older rocks by avoiding high-U zircon grains.
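For reference, the age computation underlying such U-Pb measurements is the standard decay equation, sketched below. The decay constant is the accepted value for 238U; the example ratio is illustrative only.

```python
import math

LAMBDA_238 = 1.55125e-10  # 238U decay constant, 1/yr (Jaffey et al., 1971)

def u_pb_age_years(pb206_u238):
    """Age from a radiogenic 206Pb/238U ratio via t = ln(1 + ratio) / lambda.
    Pb loss lowers the measured ratio and therefore the apparent age, which
    is the bias discussed above for high-U, radiation-damaged grains."""
    return math.log1p(pb206_u238) / LAMBDA_238

print(u_pb_age_years(0.0159) / 1e6)  # ~101.7 Myr for this illustrative ratio
```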
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
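The likelihood-ratio idea described above can be shown in a few lines. The sketch estimates a small Gaussian tail probability by sampling from a proposal shifted into the rare region and reweighting each draw; the threshold and sample size are arbitrary choices for illustration, far simpler than the transport problems the dissertation targets.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_sampling_tail(threshold=5.0, n=100_000):
    """Estimate p = P(Z > threshold) for standard normal Z by sampling from
    a normal shifted to the threshold and multiplying each draw by the
    likelihood ratio phi(x) / phi(x - threshold), keeping the estimator
    unbiased exactly as described in the abstract."""
    x = rng.normal(loc=threshold, size=n)            # biased proposal
    log_w = -0.5 * x**2 + 0.5 * (x - threshold)**2   # log likelihood ratio
    return np.mean((x > threshold) * np.exp(log_w))

print(importance_sampling_tail())  # ~2.87e-7; naive MC would need ~1e9 draws
```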
Strugstad, Benedicte; Lau, Bjørn; Glenne Øie, Merete
2018-04-12
The present follow-up study examines the associations between cognition and parent-rated internalizing problems among adolescents with early-onset schizophrenia (EOS) at baseline (T1) and self-rated internalizing problems 13 years later (T2). Twelve individuals (8 male/4 female) with EOS and 30 healthy controls (16 male/14 female) were included in the study. All were between 12 and 18 years of age at T1. Internalizing problems were measured with the Achenbach System of Empirically Based Assessment Internalizing Scale. Cognition was examined with a neuropsychological test battery measuring auditory attention/working memory, visuomotor processing, cognitive flexibility and verbal memory. Compared to healthy controls, the EOS group had significant cognitive deficits and more internalizing problems both at T1 and T2. There was no correlation between parent-rated internalizing problems at T1 and self-rated internalizing problems at T2 in the EOS group. However, deficits in auditory attention/working memory at T1 were significantly associated with internalizing problems at T2. A focus on improving the treatment of cognitive impairments may be important in preventing the development of internalizing problems in young patients with schizophrenia. The small sample size of the study is a limitation and further research is recommended. Copyright © 2018 Elsevier B.V. All rights reserved.
Towards a concept of sensible drinking and an illustration of measure.
Harburg, E; Gleiberman, L; Difranceisco, W; Peele, S
1994-07-01
The major focus of research on alcohol is not on the majority who drink without problems, but on the small minority who have extreme problems. Difficulty in conceiving, measuring, and analyzing non-problem drinking lies in the exclusively problem-drinking orientation of most drinking measures. Drawing on conventionally used scales (e.g. Short Michigan Alcoholism Screening Test) and other established concepts in the alcohol literature (e.g. craving, hangover), a set of 24 items was selected to classify all persons in a sample from Tecumseh, Michigan, as to their alcohol-related behaviors (N = 1266). A Sensible-Problem Drinking Classification (SPDC) was developed with five categories: very sensible, sensible, borderline, problem, and impaired. A variety of known alcohol and psychosocial variables were related monotonically across these categories in expected directions. Ethanol ounces per week was only modestly related to SPDC groups: R2 = 0.09 for women, R2 = 0.21 for men. The positive relationship of problem and non-problem SPDC groups to high and low blood pressure was P = 0.07, while ethanol (oz/week) was uncorrelated to blood pressure (mm Hg) in this subsample (N = 453). The development of SPDC requires additional items measuring self and group regulatory alcohol behavior. However, this initial analysis of no-problem subgroups has direct import for public health regulation of alcohol use by providing a model of a sensible view of alcohol use.
The future of Stardust science
NASA Astrophysics Data System (ADS)
Westphal, A. J.; Bridges, J. C.; Brownlee, D. E.; Butterworth, A. L.; de Gregorio, B. T.; Dominguez, G.; Flynn, G. J.; Gainsforth, Z.; Ishii, H. A.; Joswiak, D.; Nittler, L. R.; Ogliore, R. C.; Palma, R.; Pepin, R. O.; Stephan, T.; Zolensky, M. E.
2017-09-01
Recent observations indicate that >99% of the small bodies in the solar system reside in its outer reaches—in the Kuiper Belt and Oort Cloud. Kuiper Belt bodies are probably the best-preserved representatives of the icy planetesimals that dominated the bulk of the solid mass in the early solar system. They likely contain preserved materials inherited from the protosolar cloud, held in cryogenic storage since the formation of the solar system. Despite their importance, they are relatively underrepresented in our extraterrestrial sample collections by many orders of magnitude (~10^13 by mass) as compared with the asteroids, represented by meteorites, which are composed of materials that have generally been strongly altered by thermal and aqueous processes. We have only begun to scratch the surface in understanding Kuiper Belt objects, but it is already clear that the very limited samples of them that we have in our laboratories hold the promise of dramatically expanding our understanding of the formation of the solar system. Stardust returned the first samples from a known small solar system body, the Jupiter-family comet 81P/Wild 2, and, in a separate collector, the first solid samples from the local interstellar medium. The first decade of Stardust research resulted in more than 142 peer-reviewed publications, including 15 papers in Science. Analyses of these amazing samples continue to yield unexpected discoveries and to raise new questions about the history of the early solar system. We identify nine high-priority scientific objectives for future Stardust analyses that address important unsolved problems in planetary science.
Wu, Baolin
2006-02-15
Differential gene expression detection and sample classification using microarray data have received much research interest recently. Owing to the large number of genes p and small number of samples n (p >> n), microarray data analysis poses big challenges for statistical analysis. An obvious problem owing to the 'large p, small n' setting is over-fitting. Just by chance, we are likely to find some non-differentially expressed genes that can classify the samples very well. The idea of shrinkage is to regularize the model parameters to reduce the effects of noise and produce reliable inferences. Shrinkage has been successfully applied in microarray data analysis. The SAM statistics proposed by Tusher et al. and the 'nearest shrunken centroid' proposed by Tibshirani et al. are ad hoc shrinkage methods. Both methods are simple, intuitive and prove to be useful in empirical studies. Recently Wu proposed the penalized t/F-statistics with shrinkage by formally using L1-penalized linear regression models for two-class microarray data, showing good performance. In this paper we systematically discuss the use of penalized regression models for analyzing microarray data. We generalize the two-class penalized t/F-statistics proposed by Wu to multi-class microarray data. We formally derive the ad hoc shrunken centroid used by Tibshirani et al. using the L1-penalized regression models. And we show that the penalized linear regression models provide a rigorous and unified statistical framework for sample classification and differential gene expression detection.
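As a concrete illustration of shrinkage, here is a minimal version of soft-thresholded ('shrunken') class centroids in the spirit of the Tibshirani et al. method; it omits the within-class standardization and offsets of the published estimator, so it is a sketch of the idea rather than the real method.

```python
import numpy as np

def shrunken_centroids(X, y, delta):
    """Per-gene deviations of each class mean from the overall mean are
    soft-thresholded toward zero by delta, zeroing out genes whose deviation
    is small: this is the shrinkage that suppresses noise genes that would
    otherwise classify well just by chance.
    X: (samples, genes) expression matrix, y: class labels."""
    overall = X.mean(axis=0)
    centroids = {}
    for c in np.unique(y):
        dev = X[y == c].mean(axis=0) - overall
        dev = np.sign(dev) * np.maximum(np.abs(dev) - delta, 0.0)  # soft threshold
        centroids[c] = overall + dev
    return centroids
```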
Electron spectroscopy analysis
NASA Technical Reports Server (NTRS)
Gregory, John C.
1992-01-01
The Surface Science Laboratories at the University of Alabama in Huntsville (UAH) are equipped with x-ray photoelectron spectroscopy (XPS or ESCA) and Auger electron spectroscopy (AES) facilities. These techniques provide information from the uppermost atomic layers of a sample, and are thus truly surface sensitive. XPS provides both elemental and chemical state information without restriction on the type of material that can be analyzed. The sample is placed into an ultra high vacuum (UHV) chamber and irradiated with x-rays which cause the ejection of photoelectrons from the sample surface. Since x-rays do not normally cause charging problems or beam damage, XPS is applicable to a wide range of samples including metals, polymers, catalysts, and fibers. AES uses a beam of high energy electrons as a surface probe. Following electronic rearrangements within atoms excited by this probe, Auger electrons characteristic of each element present are emitted from the sample. The main advantage of electron-induced AES is that the electron beam can be focused down to a small diameter and localized analysis can be carried out. By rastering this beam synchronously with a video display, using established scanning electron microscopy techniques, physical images and chemical distribution maps of the surface can be produced. Thus very small features, such as electronic circuit elements or corrosion pits in metals, can be investigated. Facilities are available on both XPS and AES instruments for depth-profiling of materials, using a beam of argon ions to sputter away consecutive layers of material to reveal sub-surface (and even semi-bulk) analyses.
The relationship between observational scale and explained variance in benthic communities
Flood, Roger D.; Frisk, Michael G.; Garza, Corey D.; Lopez, Glenn R.; Maher, Nicole P.
2018-01-01
This study addresses the impact of spatial scale on explaining variance in benthic communities. In particular, the analysis estimated the fraction of community variation that occurred at a spatial scale smaller than the sampling interval (i.e., the geographic distance between samples). This estimate is important because it sets a limit on the amount of community variation that can be explained based on the spatial configuration of a study area and sampling design. Six benthic data sets were examined that consisted of faunal abundances, common environmental variables (water depth, grain size, and surficial percent cover), and sonar backscatter treated as a habitat proxy (categorical acoustic provinces). Redundancy analysis was coupled with spatial variograms generated by multiscale ordination to quantify the explained and residual variance at different spatial scales and within and between acoustic provinces. The amount of community variation below the sampling interval of the surveys (< 100 m) was estimated to be 36–59% of the total. Once adjusted for this small-scale variation, > 71% of the remaining variance was explained by the environmental and province variables. Furthermore, these variables effectively explained the spatial structure present in the infaunal community. Overall, no scale problems remained to compromise inferences, and unexplained infaunal community variation had no apparent spatial structure within the observational scale of the surveys (> 100 m), although small-scale gradients (< 100 m) below the observational scale may be present. PMID:29324746
Refined elasticity sampling for Monte Carlo-based identification of stabilizing network patterns.
Childs, Dorothee; Grimbs, Sergio; Selbig, Joachim
2015-06-15
Structural kinetic modelling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a representation of the system's Jacobian matrix that depends solely on the network structure, steady state measurements, and the elasticities at the steady state. For a measured steady state, stability criteria can be derived by generating a large number of SKMs with randomly sampled elasticities and evaluating the resulting Jacobian matrices. The elasticity space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Here, we extend this approach by examining the kinetic feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, we show that the majority of sampled SKMs would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion is formulated that mitigates such infeasible models. After evaluating the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle and the intrinsic mechanisms responsible for their stability or instability. The findings of the statistical elasticity analysis confirm that several elasticities are jointly coordinated to control stability and that the main source for potential instabilities are mutations in the enzyme alpha-ketoglutarate dehydrogenase. © The Author 2015. Published by Oxford University Press.
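The Monte Carlo core of this approach, building many random Jacobians and classifying each by its leading eigenvalue, can be sketched as follows. The Jacobian construction here (J = N @ E with uniformly sampled elasticities) is a deliberately simplified stand-in for the normalized SKM Jacobian, and the kinetic-feasibility criterion proposed in the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_stability(N, n_models=10_000):
    """Schematic Monte Carlo over structural kinetic models: draw random
    elasticities E in [0, 1], form a Jacobian J = N @ E (a simplified
    stand-in for the normalized SKM Jacobian), and call the steady state
    stable when the largest real part of J's eigenvalues is negative.
    N: (metabolites x reactions) stoichiometric matrix."""
    m, r = N.shape
    stable = 0
    for _ in range(n_models):
        E = rng.uniform(0.0, 1.0, size=(r, m))     # sampled elasticity matrix
        J = N @ E
        if np.max(np.linalg.eigvals(J).real) < 0.0:
            stable += 1
    return stable / n_models

# Toy 2-metabolite, 3-reaction chain (illustrative stoichiometry only)
N = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
print(sample_stability(N))
```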
Nearest neighbor density ratio estimation for large-scale applications in astronomy
NASA Astrophysics Data System (ADS)
Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.
2015-09-01
In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
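A bare-bones k-nearest-neighbour density ratio estimator looks as follows: using the standard kNN density estimate in each sample, the ratio reduces to neighbour radii and sample sizes. This is a sketch of the general idea under those assumptions; the estimator analyzed in the paper, including its cross-validated choice of neighbourhood size, differs in detail.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_density_ratio(x_train, x_test, query, k=5):
    """Weights w(x) = p_test(x) / p_train(x) from kNN density estimates
    p(x) ~ k / (n * V(r_k(x))), which gives
    w(x) = (n_train / n_test) * (r_train(x) / r_test(x))**d,
    where r is the distance to the k-th neighbour in each sample.
    x_train, x_test, query: (n, d) arrays."""
    d = x_train.shape[1]
    r_train = cKDTree(x_train).query(query, k=k)[0][:, -1]
    r_test = cKDTree(x_test).query(query, k=k)[0][:, -1]
    return (len(x_train) / len(x_test)) * (r_train / r_test) ** d
```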
Kasahara, Kota; Sakuraba, Shun; Fukuda, Ikuo
2018-03-08
We investigate the problem of artifacts caused by the periodic boundary conditions (PBC) used in molecular simulation studies. Despite the long history of simulations with PBCs, the existence of measurable artifacts originating from PBCs applied to inherently nonperiodic physical systems remains controversial. Specifically, these artifacts appear as differences between simulations of the same system but with different simulation-cell sizes. Earlier studies have implied that, even in the simple case of a small model peptide in water, sampling inefficiency is a major obstacle to understanding these artifacts. In this study, we have resolved the sampling issue using the replica exchange molecular dynamics (REMD) enhanced-sampling method to explore PBC artifacts. Explicitly solvated zwitterionic polyalanine octapeptides with three different cubic-cells, having dimensions of L = 30, 40, and 50 Å, were investigated to elucidate the differences with 64 replica × 500 ns REMD simulations using the AMBER parm99SB force field. The differences among them were not large overall, and the results for the L = 30 and 40 Å simulations in the conformational free energy landscape were found to be very similar at room temperature. However, a small but statistically significant difference was seen for L = 50 Å. We observed that extended conformations were slightly overstabilized in the smaller systems. The origin of these artifacts is discussed by comparison to an electrostatic calculation method without PBCs.
Rational approximations to rational models: alternative algorithms for category learning.
Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J
2010-10-01
Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
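A toy version of the sequential approximation conveys the flavor: each particle carries a partition of the data seen so far and assigns each new observation to a cluster in proportion to a CRP-style prior term and a likelihood term. This sketch assumes fixed observation noise and plug-in cluster means, and omits the reweighting and resampling of a full particle filter, so it illustrates the mechanism rather than the algorithms evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def crp_particle_filter(data, n_particles=100, alpha=1.0, sigma=1.0):
    """Each particle is a partition (list of cluster member-lists) of the
    1-D observations seen so far. A new observation joins an existing
    cluster with probability proportional to n_k * likelihood, or opens a
    new cluster proportional to alpha * a broad base likelihood."""
    particles = [[] for _ in range(n_particles)]
    for x in data:
        for clusters in particles:
            weights = [len(c) * np.exp(-0.5 * ((x - np.mean(c)) / sigma) ** 2)
                       for c in clusters]
            # Crude base density for a brand-new cluster (wider Gaussian)
            weights.append(alpha * np.exp(-0.5 * (x / (2 * sigma)) ** 2))
            w = np.array(weights) / np.sum(weights)
            choice = rng.choice(len(w), p=w)
            if choice == len(clusters):
                clusters.append([x])
            else:
                clusters[choice].append(x)
    return particles
```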
A complex approach to the blue-loop problem
NASA Astrophysics Data System (ADS)
Ostrowski, Jakub; Daszynska-Daszkiewicz, Jadwiga
2015-08-01
The problem of the blue loops during core helium burning, open for almost fifty years, is one of the most difficult and poorly understood problems in stellar astrophysics. Most of the work on blue loops done so far was performed with old stellar evolution codes and limited computational resources, so the conclusions obtained were based on small samples of models and could not take into account more advanced effects and the interactions between them. The emergence of the blue loops depends on many details of the evolution calculations, in particular on chemical composition, opacity, mixing processes etc. The non-linear interactions between these factors mean that in most cases it is hard to predict, without precise stellar modeling, whether a loop will emerge or not. The high sensitivity of the blue loops to even small changes in the internal structure of a star raises one more issue: sensitivity to numerical problems, which are common in calculations of stellar models at advanced stages of evolution. To tackle this problem we used the modern stellar evolution code MESA. We calculated a large grid of evolutionary tracks (about 8000 models) with masses in the range of 3.0 - 25.0 solar masses, from the zero age main sequence to the depletion of helium in the core. In order to make a comparative analysis, we varied metallicity, helium abundance and different mixing parameters resulting from convective overshooting, rotation etc. A better understanding of the properties of the blue loops is crucial for our knowledge of the population of blue supergiants and of pulsating variables such as Cepheids, α-Cygni or Slowly Pulsating B-type supergiants. In the case of more massive models it is also of great importance for studies of the progenitors of supernovae.
Computerized cognitive training in survivors of childhood cancer: a pilot study.
Hardy, Kristina K; Willard, Victoria W; Bonner, Melanie J
2011-01-01
The objective of the current study was to pilot a computerized cognitive training program, Captain's Log, in a small sample of survivors of childhood cancer. A total of 9 survivors of acute lymphoblastic leukemia and brain tumors with attention and working memory deficits were enrolled in a home-based 12-week cognitive training program. Survivors returned for follow-up assessments postintervention and 3 months later. The intervention was associated with good feasibility and acceptability. Participants exhibited significant increases in working memory and decreases in parent-rated attention problems following the intervention. Findings indicate that home-based, computerized cognitive intervention is a promising intervention for survivors with cognitive late effects; however, further study is warranted with a larger sample.
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are the random sampling of observations and the restricted set of input variables available for selection at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
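The article's examples are in R (conditional inference trees and random forests); as a rough cross-language analogue, the scikit-learn sketch below contrasts a single interpretable but unstable tree with a bagged forest. The dataset and hyperparameters are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One tree: easy to read off the splits, but sensitive to the training data
tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
# Many trees on random subsamples and feature subsets: more stable
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(tree.score(X_te, y_te), forest.score(X_te, y_te))
```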
Propagation of Circularly Polarized Light Through a Two-Dimensional Random Medium
NASA Astrophysics Data System (ADS)
Gorodnichev, E. E.
2017-12-01
The problem of small-angle multiple scattering of circularly polarized light in a two-dimensional medium with large fiberlike inhomogeneities is studied. The attenuation lengths for the elements of the density matrix are calculated. It is found that, as the sample thickness increases, the intensity of waves polarized along the fibers decays faster than the other density matrix elements. With further increase in the thickness, the off-diagonal element, which is responsible for the correlation between the cross-polarized waves, disappears. In the case of very thick samples the scattered field proves to be polarized perpendicular to the fibers. It is shown that the difference in the attenuation lengths of the density matrix elements results in a non-monotonic depth dependence of the degree of polarization.
Loprinzi, Paul D; Maskalick, Shawn; Brown, Kent; Gilham, Ben
2013-01-01
Few population-based studies examining the association between tinnitus and depression among older adults have been conducted. Therefore, the purpose of this study was to examine the association between tinnitus and depression among a nationally representative sample of US older adults. Data from the 2005-2006 National Health and Nutrition Examination Survey was used. 696 older adults (70-85 yr) completed questionnaires on tinnitus and depression, with depression assessed using the Patient Health Questionnaire-9. After controlling for firearm use, age, gender, race-ethnicity, cardiovascular/stroke history, diabetes, smoking status, body mass index, physical activity, noise exposure and elevated blood pressure, there was a significant positive association (beta coefficient: 1.28, 95% CI: 0.26-2.29, p = 0.01) between depression and tinnitus being at least a moderate problem, suggesting that those who perceived their tinnitus to be a moderate problem were more likely to be depressed than those perceiving it to be a small or no problem. Additionally, after adjustments, those who were bothered by tinnitus when going to bed were 3.06 times more likely to be depressed than those who were not bothered by tinnitus when going to bed (OR = 2.44, 95% CI: 1.03-5.76, p = 0.04). These findings suggest that individuals who perceive their tinnitus to be a problem or have problems with tinnitus when going to bed may be in need of intervention to prevent or reduce their depression symptoms so as to ensure that other areas of their life are not negatively influenced.
The Small Retailer and His Problems
ERIC Educational Resources Information Center
Burstinger, Irving
1975-01-01
This study, through personal interviews, collected data on small retailers for three purposes: (1) to provide informative insights into small-scale retailing in New York City, (2) to explore retailers' opinions as to why customers shop at their stores, and (3) to ascertain the more common problems experienced by retailers. (Author/BP)
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on the Singular Value Decomposition (SVD) and the improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the strength of the SVD-based step is that it takes advantage of the image's global information to obtain a background estimate of an infrared frame. A dim target is enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with the Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF technique is adopted to preserve the edges and suppress the noise of the base sample in the KCF algorithm, helping to calculate the classifier parameters for a small target. Finally, the target position is estimated with a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.
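The SVD-based background estimation step can be sketched in a few lines: reconstruct the slowly varying background from the largest singular values of the frame and subtract it so the dim target stands out in the residual. The rank is a tuning assumption here, and the paper's temporal update of the background and its GCF-regularized KCF tracker are not reproduced.

```python
import numpy as np

def svd_background_subtract(frame, rank=3):
    """Estimate the smooth background of an infrared frame as a low-rank
    reconstruction from its leading singular values, then subtract it so a
    dim, small target stands out in the residual. 'rank' is an illustrative
    tuning choice, not a value from the paper."""
    U, s, Vt = np.linalg.svd(frame.astype(float), full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return frame - background
```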
Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.
2014-01-01
Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last nine years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification due to the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass-spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet. PMID:24658804
Pfeuffer, Kevin P; Ray, Steven J; Hieftje, Gary M
2014-05-01
Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.
NASA Astrophysics Data System (ADS)
Pfeuffer, Kevin P.; Ray, Steven J.; Hieftje, Gary M.
2014-05-01
Ambient desorption/ionization mass spectrometry (ADI-MS) has developed into an important analytical field over the last 9 years. The ability to analyze samples under ambient conditions while retaining the sensitivity and specificity of mass spectrometry has led to numerous applications and a corresponding jump in the popularity of this field. Despite the great potential of ADI-MS, problems remain in the areas of ion identification and quantification. Difficulties with ion identification can be solved through modified instrumentation, including accurate-mass or MS/MS capabilities for analyte identification. More difficult problems include quantification because of the ambient nature of the sampling process. To characterize and improve sample volatilization, ionization, and introduction into the mass spectrometer interface, a method of visualizing mass transport into the mass spectrometer is needed. Schlieren imaging is a well-established technique that renders small changes in refractive index visible. Here, schlieren imaging was used to visualize helium flow from a plasma-based ADI-MS source into a mass spectrometer while ion signals were recorded. Optimal sample positions for melting-point capillary and transmission-mode (stainless steel mesh) introduction were found to be near (within 1 mm of) the mass spectrometer inlet. Additionally, the orientation of the sampled surface plays a significant role. More efficient mass transport resulted for analyte deposits directly facing the MS inlet. Different surfaces (glass slide and rough surface) were also examined; for both it was found that the optimal position is immediately beneath the MS inlet.
Vinnars, Bo; Thormählen, Barbro; Gallop, Robert; Norén, Kristina; Barber, Jacques P.
2009-01-01
Studies involving patients with personality disorders (PD) have not focused on improvement of core aspects of the PD. This paper examines changes in quality of object relations, interpersonal problems, psychological mindedness, and personality traits in a sample of 156 patients with DSM-IV PD diagnoses who were randomized to either manualized or non-manualized dynamic psychotherapy. Effect sizes adjusted for symptomatic change and reliable change indices were calculated. We found that both treatments were equally effective at reducing personality pathology. Only on neuroticism did the non-manualized group do better during the follow-up period. The largest improvement was found in quality of object relations. For the remaining variables, only small and clinically insignificant magnitudes of change were found. PMID:20161588
Single-shot detection of bacterial endospores via coherent Raman spectroscopy.
Pestov, Dmitry; Wang, Xi; Ariunbold, Gombojav O; Murawski, Robert K; Sautenkov, Vladimir A; Dogariu, Arthur; Sokolov, Alexei V; Scully, Marlan O
2008-01-15
Recent advances in coherent Raman spectroscopy hold exciting promise for many potential applications. For example, a technique mitigating the nonresonant four-wave-mixing noise while maximizing the Raman-resonant signal has been developed and applied to the problem of real-time detection of bacterial endospores. After a brief review of the technique's essentials, we show how extensions of our earlier experimental work [Pestov D, et al. (2007) Science 316:265-268] yield single-shot identification of a small sample of Bacillus subtilis endospores (approximately 10^4 spores). The results convey the utility of the technique and its potential for "on-the-fly" detection of biohazards, such as Bacillus anthracis. The application of the optimized coherent anti-Stokes Raman scattering scheme to problems requiring chemical specificity and short signal acquisition times is demonstrated.
Methods to estimate lightning activity using WWLLN and RS data
NASA Astrophysics Data System (ADS)
Baranovskiy, Nikolay V.; Belikova, Marina Yu.; Karanina, Svetlana Yu.; Karanin, Andrey V.; Glebova, Alena V.
2017-11-01
The aim of the work is to develop a comprehensive method for assessing thunderstorm activity using WWLLN and RS data. Grouping lightning discharges is necessary to solve practical problems of lightning protection and lightning-caused forest fire danger, as well as climatology problems that use information on the spatial and temporal characteristics of thunderstorms. For grouping lightning discharges, it is proposed to use clustering algorithms. The region covering Timiryazevskiy forestry (Tomsk region, borders (55.93 - 56.86)x(83.94 - 85.07)) was selected for the computational experiment. We used the data on lightning discharges registered by the WWLLN network in this region on July 23, 2014; 273 lightning discharges were sampled. The relatively small number of discharges allowed us to visually analyze the solutions obtained during clustering.
No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.
van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B
2016-11-24
Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small-sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
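A toy Monte Carlo along these lines is easy to set up. The sketch below, with an invented design (one covariate, true coefficient 1, intercept -1), shows the upward bias of small-sample maximum likelihood estimates; runs with separation, where the ML estimate does not exist, are simply skipped, which is itself one of the handling choices the paper warns about.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
true_beta = 1.0
n, n_sim = 60, 500          # small sample -> low EPV
estimates = []
for _ in range(n_sim):
    x = rng.normal(size=n)
    p = 1.0 / (1.0 + np.exp(-(-1.0 + true_beta * x)))
    y = rng.binomial(1, p)
    try:
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        estimates.append(fit.params[1])
    except Exception:       # separation: the ML estimate does not exist
        pass
print("mean ML estimate:", np.mean(estimates), "(true:", true_beta, ")")
```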
Wang, Fu-Wei; Chiu, Yu-Wen; Tu, Ming-Shium; Chou, Ming-Yueh; Wang, Chao-Ling; Chuang, Hung-Yi
2009-07-01
There has been increasing interest in the occupational health of workers in small enterprises, especially in developing countries. This study examines the association between psychosocial job characteristics and fatigue, and attempts to identify risk factors for fatigue among workers of small enterprises in southern Taiwan. A structured questionnaire was administered to workers receiving regular health examinations between August 2005 and January 2006. The questionnaire collected demographic information and data on working conditions, personal health status and lifestyle. It also collected information on psychosocial job characteristics, fatigue and psychological distress using three instruments. A total of 647 workers (mean age 43.7 years) completed the questionnaire. Probable fatigue was found in 34.6% of the sample. Multiple logistic regression showed fatigue to be associated with lack of exercise, shift work, depression score and lack of social support at the workplace. This study found associations between lifestyle, psychosocial job characteristics and fatigue. Given the high prevalence of probable fatigue in these small enterprises, the authors suggest that a short interview with brief questionnaires during health checkups would help detect psychosocial and fatigue problems in small-enterprise workers early.
Wille, Sarah M R; Di Fazio, Vincent; Ramírez-Fernandez, Maria del Mar; Kummer, Natalie; Samyn, Nele
2013-02-01
"Driving under the influence of drugs" (DUID) has a large impact on the worldwide mortality risk. Therefore, DUID legislations based on impairment or analytical limits are adopted. Drug detection in oral fluid is of interest due to the ease of sampling during roadside controls. The prevalence of Δ9-tetrahydrocannabinol (THC) in seriously injured drivers ranges from 0.5% to 7.6% in Europe. For these reasons, the quantification of THC in oral fluid collected with 3 alternative on-site collectors is presented and discussed in this publication. An ultra-performance liquid chromatography-mass spectrometric quantification method for THC in oral fluid samples collected with the StatSure (Diagnostic Systems), Quantisal (Immunalysis), and Certus (Concateno) devices was validated according to the international guidelines. Small sample volumes of 100-200 μL were extracted using hexane. Special attention was paid to factors such as matrix effects, THC adsorption onto the collector, and stability in the collection fluid. A relatively high-throughput analysis was developed and validated according to ISO 17025 requirements. Although the effects of the matrix on the quantification could be minimized using a deuterated internal standard, and stability was acceptable according the validation data, adsorption of THC onto the collectors was a problem. For the StatSure device, THC was totally recovered from the collector pad after storage for 24 hours at room temperature or 7 days at 4°C. A loss of 15%-25% was observed for the Quantisal collector, whereas the recovery from the Certus device was irreproducible (relative standard deviation, 44%-85%) and low (29%-80%). During the roadside setting, a practical problem arose: small volumes of oral fluid (eg, 300 μL) were collected. However, THC was easily detected and concentrations ranged from 8 to 922 ng/mL in neat oral fluid. A relatively high-throughput analysis (40 samples in 4 hours) adapted for routine DUID analysis was developed and validated for THC quantification in oral fluid samples collected from drivers under the influence of cannabis.
Ion beam figuring of small optical components
NASA Astrophysics Data System (ADS)
Drueding, Thomas W.; Fawcett, Steven C.; Wilson, Scott R.; Bifano, Thomas G.
1995-12-01
Ion beam figuring provides a highly deterministic method for the final precision figuring of optical components, with advantages over conventional methods. The process involves bombarding a component with a stable beam of accelerated particles that selectively removes material from the surface. Figure corrections are achieved by rastering the fixed-current beam across the workpiece at appropriate, time-varying velocities. Unlike conventional methods, ion figuring is a noncontact technique and thus avoids such problems as edge rolloff effects, tool wear, and force loading of the workpiece. This work is directed toward the development of the precision ion machining system at NASA's Marshall Space Flight Center. This system is designed for processing small (approximately 10-cm diameter) optical components. Initial experiments were successful in figuring 8-cm-diameter fused silica and chemical-vapor-deposited SiC samples. The experiments, procedures, and results of figuring the sample workpieces to shallow spherical, parabolic (concave and convex), and non-axially-symmetric shapes are discussed. Several difficulties and limitations encountered with the current system are discussed, as is the use of a 1-cm aperture for making finer corrections on optical components.
Hooda, Vinita; Gahlaut, Anjum; Gothwal, Ashish; Hooda, Vikas
2018-04-27
Clinical manifestations of elevated plasma triacylglycerol (TG) include a greater prevalence of atherosclerotic heart disease, acute pancreatitis, diabetes mellitus, hypertension, and ischemic vascular disease. Hence, these significant health problems have attracted scientific attention to the precise detection of TG in biological samples. Numerous techniques have been employed to quantify TG over many decades, but biosensors hold the leading position owing to their superior traits such as highly specific recognition of target molecules, accuracy, miniaturization, small sample requirement and rapid response. Enzyme-based electrochemical biosensors represent a practical route past the foremost bottlenecks that keep laboratory prototypes from real-time bedside application. We highlight the choice of transducers and construction strategies for designing high-performance biosensors for the quantification of triglycerides in sera and the early diagnosis of related health problems. The present review emphasizes the significant role of enzymes, nanostructured metal oxides, graphene, conducting polypyrrole, nanoparticles, porous silicon, EISCAP and ENFET in making TG biosensors more proficient.
NASA technology utilization program: The small business market
NASA Technical Reports Server (NTRS)
Vannoy, J. K.; Garcia-Otero, F.; Johnson, F. D.; Staskin, E.
1980-01-01
Technology transfer programs were studied to determine how they might be more useful to the small business community. The status, needs, and technology use patterns of small firms are reported. Small business problems and failures are considered. Innovation, capitalization, R and D, and market share problems are discussed. Pocket, captive, and new markets are summarized. Small manufacturers and technology acquisition are discussed, covering external and internal sources, and NASA technology. Small business and the technology utilization program are discussed, covering publications and industrial applications centers. Observations and recommendations include small business market development and contracting, and NASA management technology.
Taguchi, Y-h; Iwadate, Mitsuo; Umeyama, Hideaki
2015-04-30
Feature extraction (FE) is difficult, particularly if there are more features than samples, as small sample numbers often result in biased outcomes or overfitting. Furthermore, multiple sample classes often complicate FE because evaluating performance, which is usual in supervised FE, is generally harder than in the two-class problem. Developing sample-classification-independent unsupervised methods would solve many of these problems. Two principal component analysis (PCA)-based FE methods were tested as sample-classification-independent unsupervised FE: variational Bayes PCA (VBPCA), which was extended here to perform unsupervised FE, and conventional PCA (CPCA)-based unsupervised FE. VBPCA- and CPCA-based unsupervised FE both performed well when applied to simulated data and to a posttraumatic stress disorder (PTSD)-mediated heart disease data set that had multiple categorical class observations in mRNA/microRNA expression of stressed mouse heart. A critical set of PTSD miRNAs/mRNAs was identified that shows aberrant expression between treatment and control samples and significant negative correlation with one another. Moreover, greater stability and biological feasibility than conventional supervised FE was also demonstrated. Based on the results obtained, in silico drug discovery was performed as translational validation of the methods. Our two proposed unsupervised FE methods (CPCA- and VBPCA-based) worked well on simulated data and outperformed two conventional supervised FE methods on a real data set. The two methods thus appear equally suitable for FE on categorical multiclass data sets, with potential translational utility for in silico drug discovery.
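As a rough illustration of the unsupervised idea (not the authors' exact VBPCA procedure), one can rank features by the magnitude of their loadings on the leading principal components, with no class labels involved; the synthetic data and the top-10 cutoff below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 500))   # 20 samples, 500 features (features >> samples)
X[:10, :5] += 2.0                # 5 features differ in half of the samples

pca = PCA(n_components=2).fit(X)
# label-free score: total loading magnitude of each feature on the top PCs
scores = np.abs(pca.components_).sum(axis=0)
top = np.argsort(scores)[::-1][:10]
print("selected features:", sorted(top))   # should include features 0..4
```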
Applying Bayesian statistics to the study of psychological trauma: A suggestion for future research.
Yalch, Matthew M
2016-03-01
Several contemporary researchers have noted the virtues of Bayesian methods of data analysis. Although debates continue about whether conventional or Bayesian statistics is the "better" approach for researchers in general, there are reasons why Bayesian methods may be well suited to the study of psychological trauma in particular. This article describes how Bayesian statistics offers practical solutions to the problems of data non-normality, small sample size, and missing data common in research on psychological trauma. After a discussion of these problems and the effects they have on trauma research, this article explains the basic philosophical and statistical foundations of Bayesian statistics and how it provides solutions to these problems using an applied example. Results of the literature review and the accompanying example indicate the utility of Bayesian statistics in addressing problems common in trauma research. Bayesian statistics provides a set of methodological tools and a broader philosophical framework that is useful for trauma researchers. Methodological resources are also provided so that interested readers can learn more. (c) 2016 APA, all rights reserved.
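For concreteness, here is a minimal sketch of the flavor of analysis being advocated: a conjugate normal model applied to a small sample, with missing values simply dropped from the likelihood. The prior settings, the known noise SD, and the data are invented for illustration.

```python
import numpy as np

y = np.array([2.1, 3.4, np.nan, 1.8, 4.0, np.nan, 2.9])  # small sample, missing data
y_obs = y[~np.isnan(y)]

mu0, tau0 = 0.0, 10.0    # vague prior: mu ~ N(mu0, tau0^2)
sigma = 1.0              # noise SD assumed known, for simplicity
n = len(y_obs)
# standard conjugate-normal posterior for the mean
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + y_obs.sum() / sigma**2)
print(f"posterior mean {post_mean:.2f}, posterior sd {post_var**0.5:.2f}")
```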
A life detection problem in a High Arctic microbial community
NASA Astrophysics Data System (ADS)
Rogers, J. D.; Perreault, N. N.; Niederberger, T. D.; Lichten, C.; Whyte, L. G.; Nadeau, J. L.
2010-03-01
Fluorescent labeling of bacterial cell walls, DNA, and metabolic processes demonstrates high (potentially single-molecule) sensitivity, is non-invasive, and in some cases can differentiate strains and species. Robust microscopes such as the custom instruments presented here can provide good image quality in the field and are potentially suitable for flight. However, ambiguous or false-positive results with bacterial stains can occur and can create difficulties in interpretation even on Earth. We present a "real" life detection problem in a sample of biofilms taken from the Canadian High Arctic. The samples consisted of numerous small sulfur-oxidizing bacteria and larger structures resembling fungi or diatoms. The identity of these latter structures remained ambiguous until electron microscopy and X-ray spectroscopy were performed, indicating that they were unusual sulfur minerals probably precipitated by the bacterial communities. While such mineral structures may possibly serve as biosignatures after the cells have disappeared, it is important that they not be mistaken for cells themselves. It is also possible that unusual mineral structures will be formed under extraterrestrial conditions, so great care is needed to differentiate cell structures from minerals.
Supplee, Lauren H; Skuban, Emily Moye; Trentacosta, Christopher J; Shaw, Daniel S; Stoltz, Emilee
2011-01-01
Little longitudinal research has been conducted on changes in children's emotional self-regulation strategy (SRS) use after infancy, particularly for children at risk. In this study, the authors examined changes in boys' emotional SRS from toddlerhood through preschool. Repeated observational assessments using delay of gratification tasks at ages 2, 3, and 4 years were examined with both variable- and person-oriented analyses in a low-income sample of boys (N = 117) at risk for early problem behavior. Results were consistent with theory on emotional SRS development in young children. Children initially used more emotion-focused SRS (e.g., comfort seeking) and transitioned to greater use of planful SRS (e.g., distraction) by 4 years of age. Person-oriented analysis using trajectory analysis found similar patterns from 2 to 4 years, with small groups of boys showing delayed movement away from emotion-focused strategies or delay in the onset of regular use of distraction. The results provide a foundation for future researchers to examine the development of SRS in low-income young children.
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors compared with the traditional unscented Kalman filter (UKF).
Computerized adaptive testing: the capitalization on chance problem.
Olea, Julio; Barrada, Juan Ramón; Abad, Francisco J; Ponsoda, Vicente; Cuevas, Lara
2012-03-01
This paper describes several simulation studies that examine the effects of capitalization on chance in item selection and ability estimation in CAT, employing the 3-parameter logistic model. In order to generate different estimation errors for the item parameters, the calibration sample size was manipulated (N = 500, 1000 and 2000 subjects), as was the ratio of item bank size to test length (banks of 197 and 788 items, test lengths of 20 and 40 items), both in a CAT and in a random test. Results show that capitalization on chance is particularly serious in CAT, as revealed by the large positive bias found in the small-sample calibration conditions. For broad ranges of theta, the overestimation of the precision (asymptotic Se) reaches levels of 40%, something that does not occur with the RMSE(theta). The problem is greater as the item bank size to test length ratio increases. Potential solutions were tested in a second study, where two exposure control methods were incorporated into the item selection algorithm. Some alternative solutions are discussed.
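For reference, the 3-parameter logistic model underlying these simulations gives the probability of a correct response as c + (1 - c)/(1 + exp(-a(theta - b))). A minimal sketch with illustrative parameter values:

```python
import numpy as np

def p_correct(theta: float, a: float, b: float, c: float) -> float:
    """Probability of a correct response under the 3PL model:
    discrimination a, difficulty b, pseudo-guessing c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

print(p_correct(theta=0.5, a=1.2, b=0.0, c=0.2))  # illustrative item
```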
Supplee, Lauren H.; Skuban, Emily Moye; Trentacosta, Christopher J.; Shaw, Daniel S.; Stoltz, Emilee
2011-01-01
Little longitudinal research has been conducted on changes in children's emotional self-regulation strategy (SRS) use after infancy, particularly for children at risk. The current study examined changes in boys' emotional SRS from toddlerhood through preschool. Repeated observational assessments using delay of gratification tasks at ages 2, 3, and 4 were examined with both variable- and person-oriented analyses in a low-income sample of boys (N = 117) at-risk for early problem behavior. Results were consistent with theory on emotional SRS development in young children. Children initially used more emotion-focused SRS (e.g., comfort seeking) and transitioned to greater use of planful SRS (e.g., distraction) by age 4. Person-oriented analysis using trajectory analysis found similar patterns from 2–4, with small groups of boys showing delayed movement away from emotion-focused strategies or delay in the onset of regular use of distraction. The results provide a foundation for future research to examine the development of SRS in low-income young children. PMID:21675542
Probabilistic generation of random networks taking into account information on motifs occurrence.
Bois, Frederic Y; Gayraud, Ghislaine
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli.
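A hedged sketch of the general idea (not the authors' exact formal representation): Metropolis sampling over directed adjacency matrices, with single-edge-flip proposals and an energy that pulls the count of one motif, the feed-forward loop (FFL), toward a target. The energy function, proposal, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes, target_ffl, beta = 30, 40, 0.5

def ffl_count(A: np.ndarray) -> int:
    # i -> k -> j paths that are closed by a direct i -> j edge
    return int(((A @ A) * A).sum())

A = (rng.random((n_nodes, n_nodes)) < 0.05).astype(int)
np.fill_diagonal(A, 0)
energy = abs(ffl_count(A) - target_ffl)
for _ in range(20000):
    i, j = rng.integers(n_nodes, size=2)
    if i == j:
        continue
    A[i, j] ^= 1                    # propose flipping one directed edge
    new_energy = abs(ffl_count(A) - target_ffl)
    if rng.random() >= np.exp(-beta * (new_energy - energy)):
        A[i, j] ^= 1                # reject: undo the flip
    else:
        energy = new_energy
print("final FFL count:", ffl_count(A))
```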
Probabilistic Generation of Random Networks Taking into Account Information on Motifs Occurrence
Bois, Frederic Y.
2015-01-01
Because of the huge number of graphs possible even with a small number of nodes, inference on network structure is known to be a challenging problem. Generating large random directed graphs with prescribed probabilities of occurrences of some meaningful patterns (motifs) is also difficult. We show how to generate such random graphs according to a formal probabilistic representation, using fast Markov chain Monte Carlo methods to sample them. As an illustration, we generate realistic graphs with several hundred nodes mimicking a gene transcription interaction network in Escherichia coli. PMID:25493547
Nanophotonic particle simulation and inverse design using artificial neural networks
Peurifoy, John; Shen, Yichen; Jing, Li; Cano-Renteria, Fidel; DeLacy, Brendan G.; Joannopoulos, John D.; Tegmark, Max
2018-01-01
We propose a method to use artificial neural networks to approximate light scattering by multilayer nanoparticles. We find that the network needs to be trained on only a small sampling of the data to approximate the simulation to high precision. Once the neural network is trained, it can simulate such optical processes orders of magnitude faster than conventional simulations. Furthermore, the trained neural network can be used to solve nanophotonic inverse design problems by using back propagation, where the gradient is analytical, not numerical. PMID:29868640
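The surrogate idea is simple to demonstrate on a stand-in problem: fit a small network to a smooth function from a modest sample, then query it cheaply. The target function, sample size, and network below are assumptions for illustration, not the paper's multilayer-nanoparticle setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(30, 70, size=(200, 2))               # e.g. two shell thicknesses
y = np.sin(0.2 * X[:, 0]) * np.cos(0.15 * X[:, 1])   # stand-in "spectrum" value

# train on a small sampling, evaluate on held-out points
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X[:150], y[:150])
print("held-out R^2:", net.score(X[150:], y[150:]))
```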
Dempster, Robert; Davis, Deborah Winders; Faye Jones, V; Keating, Adam; Wildman, Beth
2015-12-01
Significant numbers of children have diagnosable mental health problems, but only a small proportion of them receive appropriate services. Stigma has been associated with help-seeking for adult mental health problems and for Caucasian parents. The current study aims to understand factors, including stigma, associated with African American parents' help-seeking behavior related to perceived child behavior problems. Participants were a community sample of African American parents and/or legal guardians of children ages 3-8 years recruited from an urban primary care setting (N = 101). Variables included child behavior, stigma (self, friends/family, and public), object of stigma (parent or child), obstacles for engagement, intention to attend parenting classes, and demographics. Self-stigma was the strongest predictor of help-seeking among African American parents. The impact of self-stigma on parents' ratings of the likelihood of attending parenting classes increased when parents considered a situation in which their child's behavior was concerning to them. Findings support the need to consider parent stigma in the design of care models to ensure that children receive needed preventative and treatment services for behavioral/mental health problems in African American families.
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
Bennett, J M; Cortes, P M
1985-09-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.
Anderson, Eric C; Ng, Thomas C
2016-02-01
We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.
On the classification of normally distributed neurons: an application to human dentate nucleus.
Ristanović, Dušan; Milošević, Nebojša T; Marić, Dušica L
2011-03-01
One of the major goals in cellular neurobiology is meaningful cell classification. However, in cell classification there are many unresolved issues that need to be addressed. Neuronal classification usually starts with grouping cells into classes according to their main morphological features, but if one tries to test such a qualitative classification quantitatively, a considerable overlap between cell types often appears. There is little published information on this problem. To address this shortcoming, we undertook the present study with the aim of offering a novel method for solving the class-overlap problem. To illustrate our method, we analyzed a sample of 124 neurons from the adult human dentate nucleus. Among them we qualitatively selected 55 neurons with small dendritic fields (the small neurons) and 69 asymmetrical neurons with large dendritic fields (the large neurons). We showed that these two samples are normally and independently distributed. By measuring the neuronal soma areas of both samples, we observed that the corresponding normal curves intersect. We proved that the abscissa of the point of intersection of the curves could represent the boundary between the two adjacent overlapping neuronal classes, since the classification error produced by such a division is minimal. A statistical evaluation of the division was also performed.
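The proposed boundary has a closed form: equating the two fitted normal log-densities yields a quadratic in x. A minimal sketch, with illustrative soma-area means and SDs (not values from the paper):

```python
import numpy as np

def normal_intersection(m1, s1, m2, s2):
    """Roots x where N(x; m1, s1) == N(x; m2, s2); the root lying
    between the two means is the class boundary."""
    a = 1.0 / s2**2 - 1.0 / s1**2
    b = 2.0 * (m1 / s1**2 - m2 / s2**2)
    c = m2**2 / s2**2 - m1**2 / s1**2 + 2.0 * np.log(s2 / s1)
    if np.isclose(a, 0.0):             # equal SDs: single midpoint boundary
        return np.array([(m1 + m2) / 2.0])
    return np.roots([a, b, c])

print(normal_intersection(m1=250.0, s1=60.0, m2=420.0, s2=90.0))
```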
Small Village Planning Problems in the Netherlands.
ERIC Educational Resources Information Center
Groot, Jacob P.
The problems associated with small villages are among the most difficult in Dutch physical planning, for they encompass the support of minimum social services in small towns and villages; the preservation of areas of ecological and scenic value; the accommodation of a growing population desirous of a home in the country or continued country living;…
Small Business Management Volume III: Curriculum. An Adult Education Program.
ERIC Educational Resources Information Center
Persons, Edgar A.; Swanson, Gordon I.
The small business management adult education program outlined in this curriculum guide is designed to help small business entrepreneurs solve their business management problems and attain the goals they have established for their businesses and their families. (An instructor's manual and practice problems are in separate volumes.) The 3-year…
NASA Technical Reports Server (NTRS)
Sand, F.; Christie, R.
1975-01-01
Extending the crop survey application of remote sensing from small experimental regions to state and national levels requires that a sample of agricultural fields be chosen for remote sensing of crop acreage, and that a statistical estimate be formulated with measurable characteristics. The critical requirements for the success of the application are reviewed in this report. The problem of sampling in the presence of cloud cover is discussed. Integration of remotely sensed information about crops into current agricultural crop forecasting systems is treated on the basis of the USDA multiple frame survey concepts, with an assumed addition of a new frame derived from remote sensing. Evolution of a crop forecasting system which utilizes LANDSAT and future remote sensing systems is projected for the 1975-1990 time frame.
Fast multi-dimensional NMR by minimal sampling
NASA Astrophysics Data System (ADS)
Kupče, Ēriks; Freeman, Ray
2008-03-01
A new scheme is proposed for very fast acquisition of three-dimensional NMR spectra based on minimal sampling, instead of the customary step-wise exploration of all of evolution space. The method relies on prior experiments to determine accurate values for the evolving frequencies and intensities from the two-dimensional 'first planes' recorded by setting t1 = 0 or t2 = 0. With this prior knowledge, the entire three-dimensional spectrum can be reconstructed by an additional measurement of the response at a single location (t1∗,t2∗) where t1∗ and t2∗ are fixed values of the evolution times. A key feature is the ability to resolve problems of overlap in the acquisition dimension. Applied to a small protein, agitoxin, the three-dimensional HNCO spectrum is obtained 35 times faster than systematic Cartesian sampling of the evolution domain. The extension to multi-dimensional spectroscopy is outlined.
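A hedged sketch of the pairing step as described: the two first planes supply candidate frequencies along each dimension, and the single extra sample at (t1*, t2*) selects the frequency pairing whose predicted response matches the measurement. The two-peak example and exhaustive search below are illustrative assumptions, not the authors' reconstruction code.

```python
import numpy as np
from itertools import permutations

w1 = np.array([2.0, 5.0])     # frequencies from the t2 = 0 "first plane"
w2 = np.array([1.0, 3.0])     # frequencies from the t1 = 0 "first plane"
amps = np.array([1.0, 0.7])   # intensities, also from the first planes
t1s, t2s = 0.35, 0.42         # the single extra sampling point (t1*, t2*)

true_pairing = (0, 1)         # ground truth, used here to synthesize "data"
measured = sum(a * np.exp(1j * (w1[i] * t1s + w2[p] * t2s))
               for i, (a, p) in enumerate(zip(amps, true_pairing)))

# choose the pairing whose predicted single-point response best matches
best = min(permutations(range(2)),
           key=lambda perm: abs(measured - sum(
               a * np.exp(1j * (w1[i] * t1s + w2[perm[i]] * t2s))
               for i, a in enumerate(amps))))
print("recovered pairing:", best)
```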
Temperament, insecure attachment, impulsivity, and sexuality in women in jail.
Iliceto, Paolo; Pompili, Maurizio; Candilera, Gabriella; Rosafio, Iole; Erbuto, Denise; Battuello, Michele; Lester, David; Girardi, Paolo
2012-03-01
Women constitute only a small proportion of inmates, but several studies have shown that they have higher rates of psychiatric disturbance than incarcerated men and community samples. Mental health treatment is necessary to prevent severe illness and suicide in these women. The convenience sample consisted of 40 female detainees and 40 controls who were administered self-report questionnaires to assess temperament (TEMPS-A), insecure attachment (ECR), impulsivity (BIS-11), and sexual behavior (SESAMO). The incarcerated women had higher levels of affective temperament (except for hyperthymia), avoidance, anxiety, impulsivity, and psychosexual issues than the female community sample. Many interrelated emotional and affective disturbances affect the physical and psychological well-being of women in jail, and it is possible that these problems may lead to suicide. Health professionals need to develop gender-specific therapeutic interventions for women in jail. © 2012 International Association of Forensic Nurses.
Cooper, Ruth E; Tye, Charlotte; Kuntsi, Jonna; Vassos, Evangelos; Asherson, Philip
2016-01-15
A number of randomised controlled trials report a beneficial effect of omega-3 polyunsaturated fatty acid (n-3 PUFA) supplementation on emotional lability (EL) and related domains (e.g. oppositional behaviour, conduct problems). Given that n-3 PUFA supplementation shows a significant effect on reducing symptoms of attention-deficit/hyperactivity disorder (ADHD), and that EL and related behaviours commonly co-occur with ADHD, it is important to obtain a more conclusive picture of the effect of n-3 PUFA on these co-occurring clinical domains. Databases (Ovid Medline, Embase, Psychinfo) were searched for trials assessing the effects of n-3 PUFA on EL, oppositional behaviour, aggression and conduct problems. We included trials in children who had ADHD or a related neurodevelopmental disorder. Of the 1775 identified studies, 10 were included in the meta-analysis. In the primary analyses, n-3 PUFA supplementation did not show improvements in measures of EL, oppositional behaviour, conduct problems or aggression. However, subgroup analyses of higher-quality studies and those meeting strict inclusion criteria found a significant reduction in EL and oppositional behaviour. A number of treatment effects may have failed to reach statistical significance due to small sample sizes and within- and between-study heterogeneity in design and participants. These results exclude the possibility of moderate to large effects. They provide suggestive evidence of small effects of n-3 PUFA on reducing EL and oppositional behaviour in subgroups of children with ADHD. Copyright © 2015 Elsevier B.V. All rights reserved.
Zahabiun, Farzaneh; Sadjjadi, Seyed Mahmoud; Esfandiari, Farideh
2015-01-01
Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens' margins become scarious. To address this problem, a modified double glass mounting method was developed and compared with the classic method. A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of the nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. This method is cost-effective and fast for mounting small nematodes compared to the classic method.
ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh
2015-01-01
Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens' margins become scarious. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting or the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of the nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost-effective and fast for mounting small nematodes compared to the classic method. PMID:26811729
Translating learning into practice
Armson, Heather; Kinzie, Sarah; Hawes, Dawnelle; Roder, Stefanie; Wakefield, Jacqueline; Elmslie, Tom
2007-01-01
PROBLEM ADDRESSED The need for effective and accessible educational approaches by which family physicians can maintain practice competence in the face of an overwhelming amount of medical information. OBJECTIVE OF PROGRAM The practice-based small group (PBSG) learning program encourages practice changes through a process of small-group peer discussion—identifying practice gaps and reviewing clinical approaches in light of evidence. PROGRAM DESCRIPTION The PBSG uses an interactive educational approach to continuing professional development. In small, self-formed groups within their local communities, family physicians discuss clinical topics using prepared modules that provide sample patient cases and accompanying information that distils the best evidence. Participants are guided by peer facilitators to reflect on the discussion and commit to appropriate practice changes. CONCLUSION The PBSG has evolved over the past 15 years in response to feedback from members and reflections of the developers. The success of the program is evidenced in effect on clinical practice, a large and increasing number of members, and the growth of interest internationally. PMID:17872876
Conflict Management in "Ad Hoc" Problem-Solving Groups: A Preliminary Investigation.
ERIC Educational Resources Information Center
Wallace, Les; Baxter, Leslie
Full study of small group communication must include consideration of task and socio-emotional dimensions, especially in relation to group problem solving. Thirty small groups were tested for their reactions in various "ad hoc" conflict resolution situations. Instructions to the groups were (1) no problem-solving instructions (control),…
An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups
ERIC Educational Resources Information Center
Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi
2012-01-01
The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…
Small Business Management; Business Education: 7739.11.
ERIC Educational Resources Information Center
McCool, Felix J.
This curriculum guide gives a brief review of the relation of business to the community and an introduction to problems in organizing a small business. These problems include basic long-range decisions: type of financing, need for the business, and method of financing. The document also focuses on the more immediate problems of location, housing,…
Backtrack Programming: A Computer-Based Approach to Group Problem Solving.
ERIC Educational Resources Information Center
Scott, Michael D.; Bodaken, Edward M.
Backtrack problem-solving appears to be a viable alternative to current problem-solving methodologies. It appears to have considerable heuristic potential as a conceptual and operational framework for small group communication research, as well as functional utility for the student group in the small group class or the management team in the…
Emergent Leadership in Children's Cooperative Problem Solving Groups
ERIC Educational Resources Information Center
Sun, Jingjng; Anderson, Richard C.; Perry, Michelle; Lin, Tzu-Jung
2017-01-01
Social skills involved in leadership were examined in a problem-solving activity in which 252 Chinese 5th-graders worked in small groups on a spatial-reasoning puzzle. Results showed that students who engaged in peer-managed small-group discussions of stories prior to problem solving produced significantly better solutions and initiated…
Optimal selection of epitopes for TXP-immunoaffinity mass spectrometry.
Planatscher, Hannes; Supper, Jochen; Poetz, Oliver; Stoll, Dieter; Joos, Thomas; Templin, Markus F; Zell, Andreas
2010-06-25
Mass spectrometry (MS) based protein profiling has become one of the key technologies in biomedical research and biomarker discovery. One bottleneck in MS-based protein analysis is sample preparation, in particular an efficient fractionation step to reduce the complexity of biological samples, which are too complex to be analyzed directly with MS. Sample preparation strategies that reduce the complexity of tryptic digests using immunoaffinity-based methods have been shown to substantially increase throughput and sensitivity in proteomic mass spectrometry. The limitation of such immunoaffinity-based approaches is the availability of appropriate peptide-specific capture antibodies. Recent developments, in which subsets of peptides with short identical terminal sequences are enriched using antibodies directed against short terminal epitopes, promise a significant gain in efficiency. We show that the minimal set of terminal epitopes covering a target protein list can be found by formulating the task as a set cover problem, preceded by a filtering pipeline that excludes peptides and target epitopes with undesirable properties. For small datasets (a few hundred proteins) it is possible to solve the problem to optimality with moderate computational effort using commercial or free solvers. Larger datasets, like full proteomes, require the use of heuristics.
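For the large instances where exact solvers give way to heuristics, the standard greedy heuristic for set cover is a natural sketch: each terminal epitope covers the set of target proteins containing it, and one repeatedly picks the epitope covering the most still-uncovered proteins. The epitope and protein names below are invented for illustration.

```python
def greedy_set_cover(universe, candidates):
    """candidates: dict mapping epitope -> set of proteins it covers."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(candidates, key=lambda e: len(candidates[e] & uncovered))
        if not candidates[best] & uncovered:
            break                     # remaining proteins are uncoverable
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

proteins = {"P1", "P2", "P3", "P4", "P5"}
epitopes = {"AYR": {"P1", "P2"}, "GLK": {"P2", "P3", "P4"}, "TWE": {"P4", "P5"}}
print(greedy_set_cover(proteins, epitopes))   # e.g. ['GLK', 'AYR', 'TWE']
```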
Child dental fear and general emotional problems: a pilot study.
Krikken, J B; ten Cate, J M; Veerkamp, J S J
2010-12-01
The aim was to investigate the relation between general emotional and behavioural problems of the child and dental anxiety and dental behavioural management problems. Dental treatment involves many potentially unpleasant stimuli, which all may lead to the development of dental anxiety and behavioural management problems (BMP). It is still unclear why some children become anxious in the dental situation while others, with a comparable dental history, do not. Besides the latent inhibition theory, it has been suggested that this can be explained by differences in child rearing and personality traits. The sample consisted of 50 children (4-12 years old) and their parents. Parents filled out the Child Fear Survey Schedule Dental Subscale (CFSS-DS) and the Child Behaviour Checklist (CBCL) on behalf of their child. Child behaviour during consecutive dental treatments was assessed using the Venham scale. Thirty-nine children (21 boys) were included in the analysis, with a mean CFSS-DS score of 40.4. Children aged 4 and 5 years who had sleeping problems, attention problems and aggressive behaviour, as scored by parents on the CBCL, displayed more disruptive behaviour during dental treatment. Children with emotionally reactive and attention problems were more anxious. This pilot study showed a possible relation between general emotional and behavioural problems of young children and dental anxiety, as well as a relation between emotional and behavioural problems and dental behavioural management problems. Because of the small number of subjects, further research will be needed to confirm these results.
Use of an electronic problem list by primary care providers and specialists.
Wright, Adam; Feblowitz, Joshua; Maloney, Francine L; Henkin, Stanislav; Bates, David W
2012-08-01
Accurate patient problem lists are valuable tools for improving the quality of care, enabling clinical decision support, and facilitating research and quality measurement. However, problem lists are frequently inaccurate and out-of-date and use varies widely across providers. Our goal was to assess provider use of an electronic problem list and identify differences in usage between medical specialties. Chart review of a random sample of 100,000 patients who had received care in the past two years at a Boston-based academic medical center. Counts were collected of all notes and problems added for each patient from 1/1/2002 to 4/30/2010. For each entry, the recording provider and the clinic in which the entry was recorded was collected. We used the Healthcare Provider Taxonomy Code Set to categorize each clinic by specialty. We analyzed the problem list use across specialties, controlling for note volume as a proxy for visits. A total of 2,264,051 notes and 158,105 problems were recorded in the electronic medical record for this population during the study period. Primary care providers added 82.3% of all problems, despite writing only 40.4% of all notes. Of all patients, 49.1% had an assigned primary care provider (PCP) affiliated with the hospital; patients with a PCP had an average of 4.7 documented problems compared to 1.5 problems for patients without a PCP. Primary care providers were responsible for the majority of problem documentation; surgical and medical specialists and subspecialists recorded a disproportionately small number of problems on the problem list.
Developmental changes in coping: situational and methodological influences.
Vierhaus, Marc; Lohaus, Arnold; Ball, Juliane
2007-09-01
Previous studies on the development of coping have shown rather inconsistent findings regarding the developmental trajectories of different coping dimensions. The aim of this study is to search for possible influences that might explain these inconsistencies. The analysis focuses on methodological influences (longitudinal vs. cross-sectional assessments) and situational influences. Two samples of children were traced longitudinally with yearly assessments from grade 2 to 5 (sample 1, N = 432) and from grade 4 to 7 (sample 2, N = 366). A third sample (N = 849) was added with cross-sectional assessments from grade 2 to 7. The assessed coping dimensions were related to (a) problem solving, (b) seeking social support, (c) palliative coping, (d) externalizing emotional coping, and (e) avoidant coping. The use of these coping strategies was assessed for six stress-evoking situations. The results show only small differences between the longitudinal and the cross-sectional coping assessments. There are, however, clear situational influences on the choice of coping strategies and also on the resulting developmental trajectories.
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). Firstly, the positive training samples are clustered into similar entities, each representing a similar viewpoint. Then Principal Component Analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, samples that can be reconstructed with small error by linear approximation from their top-k nearest shared features are regarded as correct detections. No negative samples are required by our method. Histograms of oriented gradients (HOG) are used as the feature descriptors, and the sliding-window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method achieves higher performance than state-of-the-art methods in terms of both effectiveness and efficiency.
Schillaci, Michael A; Schillaci, Mario E
2009-02-01
The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small (n < 10) or very small (n ≤ 5) sample sizes. This method can be used by researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
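Under a normality assumption with a known standard deviation, the probability in question has a closed form: the sample mean lies within a fraction f of sigma from the true mean with probability 2*Phi(f*sqrt(n)) - 1, since the sample mean has standard error sigma/sqrt(n). A minimal sketch (treating sigma as known is a simplification relative to the authors' procedure):

```python
from math import sqrt
from scipy.stats import norm

def prob_within(f: float, n: int) -> float:
    """P(|sample mean - true mean| <= f * sigma) for n iid normal draws."""
    return 2.0 * norm.cdf(f * sqrt(n)) - 1.0

for n in (3, 5, 10):
    print(n, round(prob_within(0.5, n), 3))   # e.g. n=3 gives ~0.614
```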
Ten-year trends in adolescents' self-reported emotional and behavioral problems in the Netherlands.
Duinhof, Elisa L; Stevens, Gonneke W J M; van Dorsselaer, Saskia; Monshouwer, Karin; Vollebergh, Wilma A M
2015-09-01
Changes in social, cultural, economic, and governmental systems over time may affect adolescents' development. The present study examined 10-year trends in self-reported emotional and behavioral problems among 11- to 16-year-old adolescents in the Netherlands. In addition, gender (girls versus boys), ethnic (Dutch versus non-western) and educational (vocational versus academic) differences in these trends were examined. By means of the Strengths and Difficulties Questionnaire, trends in emotional and behavioral problems were studied in adolescents belonging to one of five independent population-representative samples (2003: n = 6,904; 2005: n = 5,183; 2007: n = 6,228; 2009: n = 5,559; 2013: n = 5,478). Structural equation models indicated rather stable levels of emotional and behavioral problems over time. Whereas some small changes were found between different time points, these changes did not represent consistent changes in problem levels. Similarly, gender, ethnic and educational differences in self-reported problems at each time point were highly comparable, indicating stable mental health inequalities between groups of adolescents over time. Future internationally comparative studies using multiple measurement moments are needed to monitor whether these persistent mental health inequalities hold over extended periods of time and in different countries.
NASA Astrophysics Data System (ADS)
Uslu, Faruk Sukru
2017-07-01
Oil spills on the ocean surface cause serious environmental, political, and economic problems. Therefore, these catastrophic threats to marine ecosystems require detection and monitoring. Hyperspectral sensors are powerful optical sensors used for oil spill detection with the help of detailed spectral information of materials. However, huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving the performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed to achieve further gains. The proposed approach also aims to solve the small sample size problem, which is very important for processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets for detection problems. In the first HSI data set, various targets are detected; in the second HSI data set, oil spill detection in situ is realized. The experimental results demonstrate the feasibility and performance improvement of the proposed algorithm for oil spill detection problems.
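The abstract does not spell out the ensemble construction, but the core idea (avoiding a single tuned kernel parameter by combining detectors across a parameter grid) can be sketched as follows. The scikit-learn one-class SVM stands in for SVDD here (the two are closely related for RBF kernels), and the gamma grid and nu value are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def one_class_ensemble_scores(X_train, X_test,
                              gammas=(0.01, 0.1, 1.0, 10.0), nu=0.1):
    """Average normalized decision scores over a grid of RBF widths so that no
    single kernel parameter has to be tuned; higher score = more target-like."""
    scores = []
    for g in gammas:
        model = OneClassSVM(kernel="rbf", gamma=g, nu=nu).fit(X_train)
        s = model.decision_function(X_test)
        s = (s - s.mean()) / (s.std() + 1e-12)  # put members on a common scale
        scores.append(s)
    return np.mean(scores, axis=0)
```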
NASA Astrophysics Data System (ADS)
Rizkallah, Mohammed W.
While Problem-based Learning (PBL) has been established in the literature in different contexts, there remain few studies on how PBL affects students' attitudes towards mathematics and their conceptual understanding of it in Egyptian classrooms. This study was conducted in an international university in Egypt, and the participants were non-science undergraduate students who took a course called "Fun with Problem-Solving" as a core requirement. The study shows that students' attitudes towards mathematics developed throughout the course; this was tested using the Fennema-Sherman Mathematics Attitude Scale, with a pretest and posttest. While the sample size was small, there was statistical significance in the change of the means of how students perceived mathematics as a male domain, and how teachers perceived students' achievements. This was coupled with students' development of conceptual understanding, which was tracked throughout the semester by mapping students' work onto the Lesh Translation Model.
A Solution to Separation and Multicollinearity in Multiple Logistic Regression
Shen, Jianzhao; Gao, Sujuan
2010-01-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither method solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
A Solution to Separation and Multicollinearity in Multiple Logistic Regression.
Shen, Jianzhao; Gao, Sujuan
2008-10-01
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither method solves the problems addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
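In symbols, the double penalized log-likelihood combines the logistic log-likelihood, Firth's Jeffreys-prior penalty, and a ridge penalty. The λ/2 scaling below is one common convention and may differ from the paper's exact parameterization:

```latex
\ell^{**}(\beta)
  = \sum_{i=1}^{n}\Big[y_i\,x_i^{\top}\beta-\log\big(1+e^{x_i^{\top}\beta}\big)\Big]
  + \tfrac{1}{2}\log\big|I(\beta)\big|
  - \tfrac{\lambda}{2}\,\beta^{\top}\beta,
\qquad
I(\beta)=X^{\top}W(\beta)X,\quad
W(\beta)=\mathrm{diag}\big(\pi_i(1-\pi_i)\big),\quad
\pi_i=\frac{1}{1+e^{-x_i^{\top}\beta}}.
```

Setting λ = 0 recovers Firth's estimator (which handles separation), while dropping the log-determinant term recovers ridge logistic regression (which handles multicollinearity); the combination targets both problems at once.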
Buelow, Janice M; Johnson, Cynthia S; Perkins, Susan M; Austin, Joan K; Dunn, David W
2013-04-01
Caregivers of children with both epilepsy and learning problems need assistance to manage their child's complex medical and mental health problems. We tested the cognitive behavioral intervention "Creating Avenues for Parent Partnership" (CAPP) which was designed to help caregivers develop knowledge as well as the confidence and skills to manage their child's condition. The CAPP intervention consisted of a one-day cognitive behavioral program and three follow-up group sessions. The sample comprised 31 primary caregivers. Caregivers reported that the program was useful (mean = 3.66 on a 4-point scale), acceptable (mean = 4.28 on a 5-point scale), and "pretty easy" (mean = 1.97 on a 4-point scale). Effect sizes were small to medium in paired t tests (comparison of intervention to control) and paired analysis of key variables in the pre- and post-tests. The CAPP program shows promise in helping caregivers build skills to manage their child's condition. Copyright © 2013 Elsevier Inc. All rights reserved.
Li, Hongde; Stokes, William; Chater, Emily; Roy, Rajat; de Bruin, Elza; Hu, Yili; Liu, Zhigang; Smit, Egbert F; Heynen, Guus Jje; Downward, Julian; Seckl, Michael J; Wang, Yulan; Tang, Huiru; Pardo, Olivier E
2016-01-01
Epidermal growth factor receptor (EGFR) inhibitors such as erlotinib are novel effective agents in the treatment of EGFR-driven lung cancer, but their clinical impact is often impaired by acquired drug resistance through the secondary T790M EGFR mutation. To overcome this problem, we analysed the metabonomic differences between two independent pairs of erlotinib-sensitive/resistant cells and discovered that glutathione (GSH) levels were significantly reduced in T790M EGFR cells. We also found that increasing GSH levels in erlotinib-resistant cells re-sensitised them, whereas reducing GSH levels in erlotinib-sensitive cells made them resistant. Decreased transcription of the GSH-synthesising enzymes (GCLC and GSS) due to the inhibition of NRF2 was responsible for low GSH levels in resistant cells that was directly linked to the T790M mutation. T790M EGFR clinical samples also showed decreased expression of these key enzymes; increasing intra-tumoural GSH levels with a small-molecule GST inhibitor re-sensitised resistant tumours to erlotinib in mice. Thus, we identified a new resistance pathway controlled by EGFR T790M and a therapeutic strategy to tackle this problem in the clinic.
Small Business Policy for California. Report of the Urban Small Business Employment Project.
ERIC Educational Resources Information Center
California State Dept. of Human Resources Development, Sacramento.
This report contains findings and recommendations of a project to identify problems in California's policies and in the administration of its laws regarding small businesses and to examine alternative solutions to those problems. Part 1 consists of the findings of five statewide Task Forces that concentrated on these aspects of operating a small…
NASA Astrophysics Data System (ADS)
Prasad, S.; Bruce, L. M.
2007-04-01
There is a growing interest in using multiple sources for automatic target recognition (ATR) applications. One approach is to take multiple, independent observations of a phenomenon and perform a feature level or a decision level fusion for ATR. This paper proposes a method to utilize these types of multi-source fusion techniques to exploit hyperspectral data when only a small number of training pixels are available. Conventional hyperspectral image based ATR techniques project the high dimensional reflectance signature onto a lower dimensional subspace using techniques such as Principal Components Analysis (PCA), Fisher's linear discriminant analysis (LDA), subspace LDA and stepwise LDA. While some of these techniques attempt to solve the curse of dimensionality, or small sample size problem, these are not necessarily optimal projections. In this paper, we present a divide and conquer approach to address the small sample size problem. The hyperspectral space is partitioned into contiguous subspaces such that the discriminative information within each subspace is maximized, and the statistical dependence between subspaces is minimized. We then treat each subspace as a separate source in a multi-source multi-classifier setup and test various decision fusion schemes to determine their efficacy. Unlike previous approaches which use correlation between variables for band grouping, we study the efficacy of higher order statistical information (using average mutual information) for a bottom up band grouping. We also propose a confidence measure based decision fusion technique, where the weights associated with various classifiers are based on their confidence in recognizing the training data. To this end, training accuracies of all classifiers are used for weight assignment in the fusion process of test pixels. The proposed methods are tested using hyperspectral data with known ground truth, such that the efficacy can be quantitatively measured in terms of target recognition accuracies.
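A schematic rendering of the two stages, under stated assumptions: the mutual-information floor, the group-size cap, and the use of training accuracy as the classifier confidence are illustrative choices, not the paper's exact criteria.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def group_bands(X, max_group=20, mi_floor=0.3):
    """Bottom-up grouping of contiguous bands: a group keeps growing while the
    average mutual information between the next band and the group stays high."""
    groups, start, n_bands = [], 0, X.shape[1]
    for b in range(1, n_bands + 1):
        done = (b == n_bands or b - start >= max_group or
                mutual_info_regression(X[:, start:b], X[:, b]).mean() < mi_floor)
        if done:
            groups.append(list(range(start, b)))
            start = b
    return groups

def fuse_predict(X_tr, y_tr, X_te, groups):
    """One LDA per band group; soft votes are weighted by each classifier's
    confidence, taken here as its accuracy on the training pixels."""
    votes = 0.0
    for g in groups:
        clf = LinearDiscriminantAnalysis().fit(X_tr[:, g], y_tr)
        votes = votes + clf.score(X_tr[:, g], y_tr) * clf.predict_proba(X_te[:, g])
    return np.argmax(votes, axis=1)  # indices into clf.classes_
```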
... Infants who receive the abnormal gene from both parents do not often live beyond a few months. ... problems from a small upper airway and from pressure on the area of the brain that controls breathing; lung problems from a small ribcage
NASA Astrophysics Data System (ADS)
Nazarzadeh Zare, Mohsen; Dorrani, Kamal; Gholamali Lavasani, Masoud
2012-11-01
Background and purpose: This study examines the views of farmers and extension agents participating in extension education courses in Dezful, Iran, with regard to problems with these courses. It relies upon a descriptive methodology, using a survey as its instrument. Sample: The statistical population consisted of 5060 farmers and 50 extension agents; all extension agents were studied owing to their small population, and a sample of 466 farmers was selected based on the stratified ratio sampling method. For the data analysis, statistical procedures including the t-test and factor analysis were used. Results: The results of factor analysis on the views of farmers indicated that these courses have problems such as inadequate use of instructional materials by extension agents, insufficient employment of knowledgeable and experienced extension agents, bad and inconvenient timing of courses for farmers, lack of logical connection between one curriculum and prior ones, negligence in considering the opinions of farmers in arranging the courses, and lack of information about the time of courses. The findings of factor analysis on the views of extension agents indicated that these courses suffer from problems such as use of consistent methods of instruction for teaching curricula, and lack of continuity between courses and their levels and content. Conclusions: Recommendations include: listening to the views of farmers when planning extension courses; providing audiovisual aids, pamphlets and CDs; arranging courses based on convenient timing for farmers; using incentives to encourage participation; and employing extension agents with knowledge of the latest agricultural issues.
NASA Technical Reports Server (NTRS)
Fleischer, G. E.; Likins, P. W.
1975-01-01
Three computer subroutines designed to solve the vector-dyadic differential equations of rotational motion for systems that may be idealized as a collection of hinge-connected rigid bodies assembled in a tree topology, with an optional flexible appendage attached to each body, are reported. Deformations of the appendages are mathematically represented by modal coordinates and are assumed small. Within these constraints, the subroutines provide equation solutions for (1) the most general case of unrestricted hinge rotations, with appendage base bodies nominally rotating at a constant speed, (2) the case of unrestricted hinge rotations between rigid bodies, with the restriction that those rigid bodies carrying appendages are nominally nonspinning, and (3) the case of small hinge rotations and nominally nonrotating appendages. Sample problems and their solutions are presented to illustrate the utility of the computer programs.
Structural nested mean models for assessing time-varying effect moderation.
Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A
2010-03-01
This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time-varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: the first is a proposed two-stage regression estimator; the second is Robins' G-estimator. The results of a small simulation study, which begins to shed light on the small- versus large-sample performance of the estimators and on the bias-variance trade-off between them, are presented. The methodology is illustrated using longitudinal data from a depression study.
Satisfaction and sense of well being among Medicaid ICF/MR and HCBS recipients in six states.
Stancliffe, Roger J; Lakin, K Charlie; Taub, Sarah; Chiri, Giuseppina; Byun, Soo-Yong
2009-04-01
Self-reported satisfaction and sense of well-being were assessed in a sample of 1,885 adults with intellectual and developmental disabilities receiving Medicaid Home and Community Based Services (HCBS) and Intermediate Care Facility (ICF/MR) services in 6 states. Questions dealt with such topics as loneliness, feeling afraid at home and in one's neighborhood, feeling happy, feeling that staff are nice and polite, and liking one's home and work/day program. Loneliness was the most widespread problem, and there were also small percentages of people who reported negative views in other areas. Few differences were evident by HCBS and ICF/MR status. The findings document consistent benefits of residential support provided in very small settings-with choices of where and with whom to live-and to individuals living with family.
Heinmüller, Stefan; Schneider, Antonius; Linde, Klaus
2016-04-23
Academic infrastructures and networks for clinical research in primary care receive little funding in Germany. We aimed to provide an overview of the quantity, topics, methods and findings of randomised controlled trials published by German university departments of general practice. We searched Scopus (last search done in April 2015), publication lists of institutes and references of included articles. We included randomised trials published between January 2000 and December 2014 with a first or last author affiliated with a German university department of general practice or family medicine. Risk of bias was assessed with the Cochrane tool, and study findings were quantified using standardised mean differences (SMDs). Thirty-three trials met the inclusion criteria. Seventeen were cluster-randomised trials, with a majority investigating interventions aimed at improving processes compared with usual care. Sample sizes varied between 6 and 606 clusters and 168 and 7807 participants. The most frequent methodological problem was risk of selection bias due to recruitment of individuals after randomisation of clusters. Effects of interventions over usual care were mostly small (SMD <0.3). Sixteen trials randomising individual participants addressed a variety of treatment and educational interventions. Sample sizes varied between 20 and 1620 participants. The methodological quality of the trials was highly variable. Again, effects of experimental interventions over controls were mostly small. Despite limited funding, German university institutes of general practice or family medicine are increasingly performing randomised trials. Cluster-randomised trials on practice improvement are a focus, but problems with allocation concealment are frequent.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
Prevalence and onset of comorbidities in the CDKL5 disorder differ from Rett syndrome.
Mangatt, Meghana; Wong, Kingsley; Anderson, Barbara; Epstein, Amy; Hodgetts, Stuart; Leonard, Helen; Downs, Jenny
2016-04-14
Initially described as an early onset seizure variant of Rett syndrome, the CDKL5 disorder is now considered as an independent entity. However, little is currently known about the full spectrum of comorbidities that affect these patients and available literature is limited to small case series. This study aimed to use a large international sample to examine the prevalence in this disorder of comorbidities of epilepsy, gastrointestinal problems including feeding difficulties, sleep and respiratory problems and scoliosis and their relationships with age and genotype. Prevalence and onset were also compared with those occurring in Rett syndrome. Data for the CDKL5 disorder and Rett syndrome were sourced from the International CDKL5 Disorder Database (ICDD), InterRett and the Australian Rett syndrome Database (ARSD). Logistic regression (multivariate and univariate) was used to analyse the relationships between age group, mutation type and the prevalence of various comorbidities. Binary longitudinal data from the ARSD and the equivalent cross-sectional data from ICDD were examined using generalized linear models with generalized estimating equations. The Kaplan-Meier method was used to estimate the failure function for the two disorders and the log-rank test was used to compare the two functions. The likelihood of experiencing epilepsy, GI problems, respiratory problems, and scoliosis in the CDKL5 disorder increased with age and males were more vulnerable to respiratory and sleep problems than females. We did not identify any statistically significant relationships between mutation group and prevalence of comorbidities. Epilepsy, GI problems and sleep abnormalities were more common in the CDKL5 disorder than in Rett syndrome whilst scoliosis and respiratory problems were less prevalent. This study captured a much clearer picture of the CDKL5 disorder than previously possible using the largest sample available to date. There were differences in the presentation of clinical features occurring in the CDKL5 disorder and in Rett syndrome, reinforcing the concept that CDKL5 is an independent disorder with its own distinctive characteristics.
Improved Hybrid Modeling of Spent Fuel Storage Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bibber, Karl van
This work developed a new computational method for improving the ability to calculate the neutron flux in deep-penetration radiation shielding problems that contain areas with strong streaming. The “gold standard” method for radiation transport is Monte Carlo (MC), as it samples the physics exactly and requires few approximations. Historically, however, MC was not useful for shielding problems because of the computational challenge of following particles through dense shields. Instead, deterministic methods, which are superior in terms of computational effort for these problem types but are not as accurate, were used. Hybrid methods, which use deterministic solutions to improve MC calculations through a process called variance reduction, can make it tractable from a computational time and resource use perspective to use MC for deep-penetration shielding. Perhaps the most widespread and accessible of these methods are the Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) methods. For problems containing strong anisotropies, such as power plants with pipes through walls, spent fuel cask arrays, active interrogation, and locations with small air gaps or plates embedded in water or concrete, hybrid methods are still insufficiently accurate. In this work, a new method for generating variance reduction parameters for strongly anisotropic, deep-penetration radiation shielding studies was developed. This method generates an alternate form of the adjoint scalar flux quantity, Φ_Ω, which is used by both CADIS and FW-CADIS to generate variance reduction parameters for local and global response functions, respectively. The new method, called CADIS-Ω, was implemented in the Denovo/ADVANTG software. Results indicate that the flux generated by CADIS-Ω incorporates localized angular anisotropies in the flux more effectively than standard methods. CADIS-Ω outperformed CADIS in several test problems. This initial work indicates that CADIS-Ω may be highly useful for shielding problems with strong angular anisotropies, benefiting the public by increasing accuracy at lower computational effort for many problems of energy, security, and economic importance.
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
Control of discrete time systems based on recurrent Super-Twisting-like algorithm.
Salgado, I; Kamal, S; Bandyopadhyay, B; Chairez, I; Fridman, L
2016-09-01
Most of the research in sliding mode theory has been carried out in continuous time to solve estimation and control problems. However, in discrete time, the results on high-order sliding modes are less developed. In this paper, a discrete-time super-twisting-like algorithm (DSTA) was proposed to solve the problems of control and state estimation. The stability proof was developed in terms of the discrete-time Lyapunov approach and linear matrix inequality theory. The system trajectories were ultimately bounded inside a small region dependent on the sampling period. Simulation results tested the DSTA. The DSTA was applied as a controller for a Furuta pendulum and for a DC motor supplied by a DSTA signal differentiator. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
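The abstract does not give the recursion, but a generic Euler-discretized super-twisting differentiator, one common "super-twisting-like" construction, looks as follows; the gains k1 and k2 are illustrative and must satisfy the usual super-twisting conditions for the signal at hand.

```python
import numpy as np

def dsta_differentiator(y, dt, k1=3.0, k2=2.0):
    """Track y with z0 and estimate its derivative with z1 using an Euler
    discretization of the super-twisting dynamics (a generic sketch, not
    necessarily the paper's exact recursion)."""
    z0, z1 = y[0], 0.0
    dy = np.zeros_like(y)
    for k in range(len(y) - 1):
        e = z0 - y[k]
        z0 += dt * (z1 - k1 * np.sqrt(abs(e)) * np.sign(e))
        z1 += dt * (-k2 * np.sign(e))
        dy[k + 1] = z1
    return dy

# estimate the derivative of a noisy sine; after a transient, dy ~ cos(t)
t = np.linspace(0.0, 10.0, 2001)
y = np.sin(t) + 0.001 * np.random.default_rng(0).normal(size=t.size)
dy = dsta_differentiator(y, t[1] - t[0])
```

Consistent with the quoted result, the estimation error of such a discrete scheme does not vanish but stays ultimately bounded in a small region whose size shrinks with the sampling period dt.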
Black, D; Gates, G; Sanders, S; Taylor, L
2000-05-01
This work provides an overview of standard social science data sources that now allow some systematic study of the gay and lesbian population in the United States. For each data source, we consider how sexual orientation can be defined, and we note the potential sample sizes. We give special attention to the important problem of measurement error, especially the extent to which individuals recorded as gay and lesbian are indeed recorded correctly. Our concern is that because gays and lesbians constitute a relatively small fraction of the population, modest measurement problems could lead to serious errors in inference. In examining gays and lesbians in multiple data sets we also achieve a second objective: We provide a set of statistics about this population that is relevant to several current policy debates.
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers, and in any case demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.
Satellite Fault Diagnosis Using Support Vector Machines Based on a Hybrid Voting Mechanism
Yang, Shuqiang; Zhu, Xiaoqian; Jin, Songchang; Wang, Xiang
2014-01-01
Satellite fault diagnosis has an important role in enhancing the safety, reliability, and availability of the satellite system. However, the problems of enormous parameters and multiple faults pose a challenge to satellite fault diagnosis. The interactions between parameters and misclassifications from multiple faults will increase the false alarm rate and the false negative rate. On the other hand, for each satellite fault there is not enough fault data for training, which degrades the performance of most classification algorithms. In this paper, we propose an improved SVM based on a hybrid voting mechanism (HVM-SVM) to deal with the problems of enormous parameters, multiple faults, and small samples. Many experimental results show that the accuracy of fault diagnosis using HVM-SVM is improved. PMID:25215324
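The abstract leaves the voting rule unspecified; one plausible hybrid combines hard majority voting with soft probabilities as a tie-breaker, over members diversified by bootstrap resampling (useful when per-fault samples are few). A sketch under those assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def train_members(X, y, n_members=7, seed=0):
    """Bootstrap-train several RBF SVMs for diversity. Assumes integer class
    labels 0..C-1 and that every class appears in each bootstrap sample, so
    that predict_proba columns align across members."""
    rng = np.random.default_rng(seed)
    idxs = [rng.choice(len(X), size=len(X), replace=True) for _ in range(n_members)]
    return [SVC(kernel="rbf", probability=True).fit(X[i], y[i]) for i in idxs]

def hybrid_vote(members, X, n_classes):
    """Hard majority vote; ties are broken by the summed soft probabilities."""
    hard = np.stack([m.predict(X) for m in members]).astype(int)  # (m, n)
    soft = sum(m.predict_proba(X) for m in members)               # (n, C)
    out = np.empty(X.shape[0], dtype=int)
    for i in range(X.shape[0]):
        counts = np.bincount(hard[:, i], minlength=n_classes)
        tied = np.flatnonzero(counts == counts.max())
        out[i] = tied[np.argmax(soft[i, tied])]
    return out
```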
Sentse, Miranda; Ormel, Johan; Veenstra, René; Verhulst, Frank C; Oldehinkel, Albertine J
2011-02-01
The potential effect of parental separation during early adolescence on adolescent externalizing and internalizing problems was investigated in a longitudinal sample of adolescents (n = 1274; mean age = 16.27; 52.3% girls). Pre-separation mental health problems were controlled for. Building on a large number of studies that overall showed a small effect of parental separation, it was argued that separation may only or especially have an effect under certain conditions. It was examined whether child temperament (effortful control and fearfulness) moderates the impact of parental separation on specific mental health domains. Hypotheses were derived from a goal-framing theory, with a focus on goals related to satisfying the need for autonomy and the need to belong. Controlling for the overlap between the outcome domains, we found that parental separation led to an increase in externalizing problems but not internalizing problems when interactions with child temperament were ignored. Moreover, child temperament moderated the impact of parental separation, in that it was only related to increased externalizing problems for children low on effortful control, whereas it was only related to increased internalizing problems for children high on fearfulness. The results indicate that person-environment interactions are important for understanding the development of mental health problems and that these interactions can be domain-specific. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Walton, A; Flouri, Eirini
2010-03-01
The objective of this study was to test if emotion regulation mediates the association between mothers' parenting and adolescents' externalizing behaviour problems (conduct problems and hyperactivity). The parenting dimensions were warmth, psychological control and behavioural control (measured with knowledge, monitoring and discipline). Adjustment was made for contextual risk (measured with the number of proximal adverse life events experienced), gender, age and English as an additional language. Data were from a UK community sample of adolescents aged 11-18 from a comprehensive school in a disadvantaged area. At the multivariate level, none of the parenting variables predicted hyperactivity, which was associated only with difficulties in emotion regulation, contextual risk and English as a first language. The parenting variables predicting conduct problems at the multivariate level were warmth and knowledge. Knowledge did not predict emotion regulation. However, warmth predicted emotion regulation, which was negatively associated with conduct problems. Contextual risk was a significant predictor of both difficulties in emotion regulation and externalizing behaviour problems. Its effect on conduct problems was independent of parenting and was not via its association with difficulties in emotion regulation. The findings add to the evidence for the importance of maternal warmth and contextual risk for both regulated emotion and regulated behaviour. The small maternal control effects on both emotion regulation and externalizing behaviour could suggest the importance of paternal control for adolescent outcomes.
Holloway, Edith E; Xie, Jing; Sturrock, Bonnie A; Lamoureux, Ecosse L; Rees, Gwyneth
2015-05-01
To evaluate the effectiveness of problem-solving interventions on psychosocial outcomes in vision impaired adults. A systematic search of randomised controlled trials (RCTs), published between 1990 and 2013, that investigated the impact of problem-solving interventions on depressive symptoms, emotional distress, quality of life (QoL) and functioning was conducted. Two reviewers independently selected and appraised study quality. Data permitting, intervention effects were statistically pooled and meta-analyses were performed, otherwise summarised descriptively. Eleven studies (reporting on eight trials) met inclusion criteria. Pooled analysis showed problem-solving interventions improved vision-related functioning (standardised mean change [SMC]: 0.15; 95% CI: 0.04-0.27) and emotional distress (SMC: -0.36; 95% CI: -0.54 to -0.19). There was no evidence to support improvements in depressive symptoms (SMC: -0.27, 95% CI: -0.66 to 0.12) and insufficient evidence to determine the effectiveness of problem-solving interventions on QoL. The small number of well-designed studies and narrow inclusion criteria limit the conclusions drawn from this review. However, problem-solving skills may be important for nurturing daily functioning and reducing emotional distress for adults with vision impairment. Given the empirical support for the importance of effective problem-solving skills in managing chronic illness, more well-designed RCTs are needed with diverse vision impaired samples. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Detection of small number of Giardia in biological materials prepared from stray dogs.
Esmailikia, Leila; Ebrahimzade, Elahe; Shayan, Parviz; Amininia, Narges
2017-12-20
Giardia lamblia is an intestinal protozoan with intermittent and low shedding, especially in dogs, and its detection is accompanied by problems such as sampling and the choice of diagnostic method. The objective of this study was to detect Giardia in biological materials containing low numbers of the parasite using parasitological and molecular methods, and also to determine whether the examined stray dogs harbor known zoonotic genotypes of Giardia. For this aim, 85 fecal and duodenal samples were studied, of which 1 was positive by Trichrome staining of stool and 4 were positive by staining of duodenal samples. The nested PCR analysis with primers derived from 18S rRNA showed that the specific PCR product could be amplified in 4 stool and 4 duodenal samples. All samples positive in the staining analysis were also positive in nested PCR. No amplification could be observed by nested PCR with primers derived from the β-giardin gene, because it is a single-copy gene. Interestingly, the DNA extracted from old fixed stained Giardia-positive smears could also be amplified with primers derived from the 18S rRNA gene. The sequence analysis of nested PCR products showed that they belong to genotype D. In conclusion, the Trichrome and Giemsa methods were not suitable for the detection of small numbers of this parasite in stool, and nested PCR with primers derived from the 18S rRNA gene can successfully replace the traditional methods. For detection of Giardia in stool, primers derived from β-giardin are not recommended.
Mental and somatic health in a non-clinical sample 10 years after a diagnosis of encopresis.
Hultén, Ib; Jonsson, Jakob; Jonsson, Carl-Otto
2005-12-01
The aim of this study was to assess the relation between the diagnosis of encopresis at 8 and 10 years of age, and mental and somatic health 10 years later. The importance of type of encopresis (primary or secondary) at 8 years was also studied. Subjects were a non-clinical encopretic sample (N=73) and control subjects (N=75) [2]. Seven assessment variables from conscription surveys provided information about mental and somatic health status at 18 years of age. Former encopretics (n=66) did not differ significantly from the controls (n=67) at 18 years of age, although there were consistent, small negative differences. The boys who at 10 years of age had still been encopretic did not differ significantly at 18 years of age from the boys who at 10 years had recovered from encopresis, and the signs indicating the small differences varied. For former primary and secondary encopretic boys, there were two significant differences, the men in the secondary group being more often exempted from conscription than the primary group and the control cases. The results indicate that boys with non-clinical encopresis show only small, if any, mental and somatic disturbances at the beginning of adulthood. Comprehensive investigations of encopretic patients are recommended as important clinical problems, in addition to encopresis, might be present.
McTwo: a two-step feature selection algorithm based on maximal information coefficient.
Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng
2016-03-23
High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features associated with phenotypes, independently of each other, and achieving high classification performance of the nearest neighbor algorithm. Based on the comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for the high-dimensional biomedical datasets.
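The flavor of the two steps (screen by association with the phenotype, then prune mutually redundant features, finally judged by a nearest-neighbor classifier) can be sketched as below. Note the substitutions: scikit-learn's mutual information estimator stands in for MIC, and plain correlation is used for the redundancy check, so this is only in the spirit of McTwo, not the published algorithm.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def two_step_select(X, y, n_keep=10, redundancy_cut=0.8):
    """Keep features strongly associated with y but weakly associated with
    each other; evaluate the subset with 1-nearest-neighbor accuracy."""
    relevance = mutual_info_classif(X, y, random_state=0)
    kept = []
    for j in np.argsort(relevance)[::-1]:
        if len(kept) == n_keep:
            break
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < redundancy_cut
               for k in kept):
            kept.append(j)
    acc = cross_val_score(KNeighborsClassifier(1), X[:, kept], y, cv=5).mean()
    return kept, acc
```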
Holmes, Tyson H; He, Xiao-Song
2016-10-01
Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n, 1 < n < 50, of human participants for the purpose of estimating many parameters p, such that n < p < 1,000.
Holmes, Tyson H.; He, Xiao-Song
2016-01-01
Small, wide data sets are commonplace in human immunophenotyping research. As defined here, a small, wide data set is constructed by sampling a small to modest quantity n, 1 < n < 50, of human participants for the purpose of estimating many parameters p, such that n < p < 1,000. We offer a set of prescriptions that are designed to facilitate low-variance (i.e. stable), low-bias, interpretive regression modeling of small, wide data sets. These prescriptions are distinctive in their especially heavy emphasis on minimizing use of out-of-sample information for conducting statistical inference. That allows the working immunologist to proceed without being encumbered by imposed and often untestable statistical assumptions. Problems of unmeasured confounders, confidence-interval coverage, feature selection, and shrinkage/denoising are defined clearly and treated in detail. We propose an extension of an existing nonparametric technique for improved small-sample confidence-interval tail coverage from the univariate case (single immune feature) to the multivariate (many, possibly correlated immune features). An important role for derived features in the immunological interpretation of regression analyses is stressed. Areas of further research are discussed. Presented principles and methods are illustrated through application to a small, wide data set of adults spanning a wide range in ages and multiple immunophenotypes that were assayed before and after immunization with inactivated influenza vaccine (IIV). Our regression modeling prescriptions identify some potentially important topics for future immunological research. 1) Immunologists may wish to distinguish age-related differences in immune features from changes in immune features caused by aging. 2) A form of the bootstrap that employs linear extrapolation may prove to be an invaluable analytic tool because it allows the working immunologist to obtain accurate estimates of the stability of immune parameter estimates with a bare minimum of imposed assumptions. 3) Liberal inclusion of immune features in phenotyping panels can facilitate accurate separation of biological signal of interest from noise. In addition, through a combination of denoising and potentially improved confidence interval coverage, we identify some candidate immune correlates (frequency of cell subset and concentration of cytokine) with B cell response as measured by quantity of IIV-specific IgA antibody-secreting cells and quantity of IIV-specific IgG antibody-secreting cells. PMID:27196789
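For reference, the plain percentile bootstrap that such refinements start from fits in a few lines; the authors' linear-extrapolation variant for better small-n tail coverage is not reproduced here, so treat this only as the baseline it improves upon.

```python
import numpy as np

def percentile_bootstrap_ci(x, stat=np.mean, n_boot=10000, alpha=0.05, seed=0):
    """Baseline percentile bootstrap CI for stat(x); with very small n its
    tail coverage is known to be optimistic, which motivates refinements."""
    rng = np.random.default_rng(seed)
    boots = np.array([stat(rng.choice(x, size=len(x), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi
```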
Allan, Darcey M.; Lonigan, Christopher J.
2014-01-01
Although both the Continuous Performance Test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (Mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An ADHD-rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across four temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to one type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. PMID:25419645
Allan, Darcey M; Lonigan, Christopher J
2015-06-01
Although both the continuous performance test (CPT) and behavior rating scales are used in both practice and research to assess inattentive and hyperactive/impulsive behaviors, the correlations between performance on the CPT and teachers' ratings are typically only small-to-moderate. This study examined trajectories of performance on a low target-frequency visual CPT in a sample of preschool children and how these trajectories were associated with teacher-ratings of problem behaviors (i.e., inattention, hyperactivity/impulsivity [H/I], and oppositional/defiant behavior). Participants included 399 preschool children (mean age = 56 months; 49.4% female; 73.7% White/Caucasian). An attention deficit/hyperactivity disorder (ADHD) rating scale was completed by teachers, and the CPT was completed by the preschoolers. Results showed that children's performance across 4 temporal blocks on the CPT was not stable across the duration of the task, with error rates generally increasing from initial to later blocks. The predictive relations of teacher-rated problem behaviors to performance trajectories on the CPT were examined using growth curve models. Higher rates of teacher-reported inattention and H/I were uniquely associated with higher rates of initial omission errors and initial commission errors, respectively. Higher rates of teacher-reported overall problem behaviors were associated with increasing rates of omission but not commission errors during the CPT; however, the relation was not specific to 1 type of problem behavior. The results of this study indicate that the pattern of errors on the CPT in preschool samples is complex and may be determined by multiple behavioral factors. These findings have implications for the interpretation of CPT performance in young children. (c) 2015 APA, all rights reserved.
ERIC Educational Resources Information Center
Albert, Lawrence S.
If being a competent small group problem solver is difficult, it is even more difficult to impart those competencies to others. Unlike athletic coaches who are near their players during the real game, teachers of small group communication are not typically present for on-the-spot coaching when their students are doing their problem solving. That…
J.M. Linton; H.M. Barnes; R.D. Seale; P.D. Jones; E. Lowell; S.S. Hummel
2010-01-01
Finding alternative uses for raw material from small-diameter trees is a critical problem throughout the United States. In western states, a lack of markets for small-diameter ponderosa pine (Pinus ponderosa) and lodgepole pine (Pinus contorta) can contribute to problems associated with overstocking. To test the feasibility of...
Zhang, Yong-Tao; Shi, Jing; Shu, Chi-Wang; Zhou, Ye
2003-10-01
A quantitative study is carried out in this paper to investigate the size of numerical viscosities and the resolution power of high-order weighted essentially nonoscillatory (WENO) schemes for solving one- and two-dimensional Navier-Stokes equations for compressible gas dynamics with high Reynolds numbers. A one-dimensional shock tube problem, a one-dimensional example with parameters motivated by supernova and laser experiments, and a two-dimensional Rayleigh-Taylor instability problem are used as numerical test problems. For the two-dimensional Rayleigh-Taylor instability problem, or similar problems with small-scale structures, the details of the small structures are determined by the physical viscosity (therefore, the Reynolds number) in the Navier-Stokes equations. Thus, to obtain faithful resolution to these small-scale structures, the numerical viscosity inherent in the scheme must be small enough so that the physical viscosity dominates. A careful mesh refinement study is performed to capture the threshold mesh for full resolution, for specific Reynolds numbers, when WENO schemes of different orders of accuracy are used. It is demonstrated that high-order WENO schemes are more CPU time efficient to reach the same resolution, both for the one-dimensional and two-dimensional test problems.
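For reference, the reconstruction kernel at the heart of a fifth-order WENO scheme can be written compactly. This is the classic Jiang-Shu form for the left-biased interface value, given as a generic sketch rather than the specific implementation studied in the paper:

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Fifth-order WENO value at the interface i+1/2 from the five cell
    averages v = [v_{i-2}, ..., v_{i+2}] (classic Jiang-Shu weights)."""
    v0, v1, v2, v3, v4 = v
    # smoothness indicators of the three candidate stencils
    b0 = 13/12*(v0 - 2*v1 + v2)**2 + 0.25*(v0 - 4*v1 + 3*v2)**2
    b1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - v3)**2
    b2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(3*v2 - 4*v3 + v4)**2
    # third-order candidate reconstructions
    p0 = (2*v0 - 7*v1 + 11*v2) / 6
    p1 = ( -v1 + 5*v2 +  2*v3) / 6
    p2 = (2*v2 + 5*v3 -   v4) / 6
    # nonlinear weights: near-optimal (0.1, 0.6, 0.3) in smooth regions,
    # biased away from stencils that cross a discontinuity
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

print(weno5_reconstruct(np.arange(5.0)))  # linear data -> exactly 2.5
```

The nonlinear weighting is the source of the scheme's built-in numerical viscosity: where the indicators flag roughness, the weights deviate from their optimal values and the reconstruction becomes more dissipative, which is why the mesh must be refined until the physical viscosity dominates.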
Monte Carlo simulation of induction time and metastable zone width; stochastic or deterministic?
NASA Astrophysics Data System (ADS)
Kubota, Noriaki
2018-03-01
The induction time and metastable zone width (MSZW) measured for small samples (say 1 mL or less) both scatter widely, so these two are observed as stochastic quantities, whereas for large samples (say 1000 mL or more) the induction time and MSZW are observed as deterministic quantities. The reason for such experimental differences is investigated with Monte Carlo simulation. In the simulation, the time (under isothermal conditions) and supercooling (under polythermal conditions) at which a first single crystal is detected are defined as the induction time t and the MSZW ΔT for small samples, respectively. The number of crystals just at the moment of t and ΔT is unity. A first crystal emerges at random due to the intrinsic nature of nucleation; accordingly, t and ΔT become stochastic. For large samples, the time and supercooling at which the number density of crystals N/V reaches a detector sensitivity (N/V)det are defined as t and ΔT for isothermal and polythermal conditions, respectively. The points of t and ΔT are those at which a large number of crystals have accumulated. Consequently, t and ΔT become deterministic according to the law of large numbers. Whether t and ΔT are stochastic or deterministic in actual experiments should not be attributed to a change in nucleation mechanisms at the molecular level; it could be just a matter of differences in the experimental definition of t and ΔT.
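The dichotomy can be reproduced with a few lines of simulation. Under constant-rate Poisson nucleation (rate J per unit volume per unit time), the induction time of a small sample is a first-event time, i.e. exponentially distributed, while for a large sample it is the time for N/V to reach the detector sensitivity, i.e. an m-th event time whose relative spread shrinks like 1/√m. The rate and sensitivity values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
J = 100.0        # nucleation rate per unit volume per unit time (illustrative)
nv_det = 10.0    # detector sensitivity (N/V)det, crystals per unit volume

def induction_times(V, n_runs=8):
    """Isothermal induction time for sample volume V under Poisson nucleation.
    Small V: t = time of the FIRST crystal -> Exp(J*V), widely scattered.
    Large V: t = time of the m-th event with m = nv_det*V -> Gamma(m),
    relative spread 1/sqrt(m), hence effectively deterministic."""
    m = max(1, int(round(nv_det * V)))
    return rng.gamma(shape=m, scale=1.0 / (J * V), size=n_runs)

print(induction_times(1e-3).round(2))  # small sample: strong run-to-run scatter
print(induction_times(1e3).round(4))   # large sample: clustered near nv_det/J = 0.1
```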
NASA Astrophysics Data System (ADS)
Gao, Yuan; Ma, Jiayi; Yuille, Alan L.
2017-05-01
This paper addresses the problem of face recognition when there are only a few, or even a single, labeled examples of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S³RC). This is based on recent work on sparsity where faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have done experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method is able to deliver significantly improved performance over existing methods.
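Stripping away the semi-supervised gallery refinement, the underlying gallery-plus-variation sparse coding step can be sketched as follows. The GMM-based prototype estimation from unlabeled samples, the distinctive part of S³RC, is omitted, and the Lasso penalty weight is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_classify(y, gallery, variation, labels, alpha=0.01):
    """Code the probe y over [gallery | variation] with an l1 penalty, then
    assign the class whose gallery atoms (plus the shared nuisance term)
    leave the smallest residual."""
    D = np.hstack([gallery, variation])            # columns are atoms
    code = Lasso(alpha=alpha, max_iter=10000).fit(D, y).coef_
    a, b = code[:gallery.shape[1]], code[gallery.shape[1]:]
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - gallery @ np.where(labels == c, a, 0.0)
                                - variation @ b) for c in classes]
    return classes[int(np.argmin(residuals))]
```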
Wenzel, Hanne Gro; Oren, Anita; Bakken, Inger Johanne
2008-12-16
Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Men and women 16-74 years old randomly selected from the Norwegian national population database received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Overall, 2.0% of the study population was defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.
Wenzel, Hanne Gro; Øren, Anita; Bakken, Inger Johanne
2008-01-01
Background Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods Men and women 16–74 years old randomly selected from the Norwegian national population database received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results Overall, 2.0% of the study population was defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members. PMID:19087339
Methodological proposal for the remediation of a site affected by phosphogypsum deposits
NASA Astrophysics Data System (ADS)
Martínez-Sanchez, M. J.; Perez-Sirvent, C.; Bolivar, J. P.; Garcia-Tenorio, R.
2012-04-01
The accumulation of phosphogypsum (PY) produces well-known environmental problems. Proposals for the remediation of these sites require multidisciplinary and very specific studies. Since they cover large areas, a sampling design specifically outlined for each case is necessary so that the contaminants, transfer pathways and particular processes can be correctly identified. In addition to suitable sampling of the soil, aquatic medium and biota, appropriate studies of the space-temporal variations by means of control samples are required. Two different stages should be considered: 1.- Diagnostic stage. This stage includes preliminary studies, identification of possible sources of radioisotopes, design of the appropriate sampling plan, hydrogeological study, characterization and study of the space-temporal variability of radioisotopes and other contaminants, as well as the risk assessment for health and ecosystems, which depends on the future use of the site. 2.- Remediation proposal stage. It comprises the evaluation and comparison of the different procedures for decontamination/remediation, including model experiments in the laboratory. In this respect, the preparation and detailed study of a small-scale pilot project is a task of particular relevance. In this way the suitability of the remediating technology can be checked and its performance optimized. These two stages allow a technically well-founded proposal to be presented to the organisms or institutions in charge of the problem and facilitate decision-making. Both stages should be included in a social communication campaign so that the final proposal is accepted by stakeholders.
The effect of ozonization on furniture dust: microbial content and immunotoxicity in vitro.
Huttunen, Kati; Kauhanen, Eeva; Meklin, Teija; Vepsäläinen, Asko; Hirvonen, Maija-Riitta; Hyvärinen, Anne; Nevalainen, Aino
2010-05-01
Moisture and mold problems in buildings also contaminate the furniture and other movable property. If cleaning of the contaminated furniture is neglected, it may continue to cause problems for the occupants even after the moisture-damage repairs. The aim of this study was to determine the effectiveness of high-efficiency ozone treatment in cleaning furniture from moisture-damaged buildings. In addition, the effectiveness of two cleaning methods was compared. Samples were vacuumed from the padded areas before and after the treatment. The microbial flora and concentrations in the dust samples were determined by quantitative cultivation and QPCR methods. The immunotoxic potential of the dust samples was analyzed by measuring effects on cell viability and production of inflammatory mediators in vitro. Concentrations of viable microbes decreased significantly in most of the samples after cleaning. Cleaning with a combined steam wash and ozonisation was more effective than ozonising alone, but the difference was not statistically significant. Detection of fungal species with PCR showed a slight but nonsignificant decrease in concentrations after the cleaning. The immunotoxic potential of the collected dust decreased significantly in most of the samples. However, in a small subgroup of samples, increased concentrations of microbes and immunotoxicological activity were detected. This study shows that a transportable cleaning unit with high-efficiency ozonising is in most cases effective in decreasing the concentrations of viable microbes and the immunotoxicological activity of furniture dust. However, the method does not destroy or remove all fungal material present in the dust, as detected with QPCR analysis, and in some cases the cleaning procedure may increase the microbial concentrations and immunotoxicity of the dust. Copyright 2010 Elsevier B.V. All rights reserved.
Artificial food colors and attention-deficit/hyperactivity symptoms: conclusions to dye for.
Arnold, L Eugene; Lofthouse, Nicholas; Hurt, Elizabeth
2012-07-01
The effect of artificial food colors (AFCs) on child behavior has been studied for more than 35 years, with accumulating evidence from imperfect studies. This article summarizes the history of this controversial topic and testimony to the 2011 Food and Drug Administration Food Advisory Committee convened to evaluate the current status of evidence regarding attention-deficit/hyperactivity disorder (ADHD). Features of ADHD relevant to understanding the AFC literature are explained: ADHD is a quantitative diagnosis, like hypertension, and some individuals near the threshold may be pushed over it by a small symptom increment. The chronicity and pervasiveness make caregiver ratings the most valid measure, albeit subjective. Flaws in many studies include nonstandardized diagnosis, questionable sample selection, imperfect blinding, and nonstandardized outcome measures. Recent data suggest a small but significant deleterious effect of AFCs on children's behavior that is not confined to those with diagnosable ADHD. AFCs appear to be more of a public health problem than an ADHD problem. AFCs are not a major cause of ADHD per se, but seem to affect children regardless of whether or not they have ADHD, and they may have an aggregated effect on classroom climate if most children in the class suffer a small behavioral decrement with additive or synergistic effects. Possible biological mechanisms with published evidence include the effects on nutrient levels, genetic vulnerability, and changes in electroencephalographic beta-band power. A table clarifying the Food and Drug Administration and international naming systems for AFCs, with cross-referencing, is provided.
Student Team Achievement Divisions: Its Effect on Electrical Motor Installation Knowledge Competence
NASA Astrophysics Data System (ADS)
Hanafi, Ahmad; Basuki, Ismet
2018-04-01
Student team achievement division (STAD) is an active learning strategy based on small groups within the classroom. Students work in small heterogeneous groups (of five to six members) and help one another to comprehend the material given. This research aims to determine the effect of STAD on competence in electrical motor installation, with knowledge competence as the outcome of interest. Data were collected from 30 students; the participants were second-year students of electrical installation techniques at SMKN 1 Pungging, Indonesia. The empirical test used a one-shot case study design. The knowledge test results were compared against the minimum competence criterion, which was 75, and analyzed with a one-sample t-test. The analysis yielded an average of 84.93, meaning that average student competence reached the minimum competence criterion. It can therefore be concluded that STAD is effective for electrical motor installation knowledge competence. STAD can foster student motivation to learn better than other models. However, in applying cooperative learning the teacher should prepare carefully before the learning process to avoid problems that can arise during group work, such as students who are less active in their groups. This problem can be minimized by having the teacher circulate to check on each group.
Markov Random Field Based Automatic Image Alignment for Electron Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moussavi, Farshid; Amat, Fernando; Comolli, Luis R.
2007-11-30
Cryo electron tomography (cryo-ET) is the primary method for obtaining 3D reconstructions of intact bacteria, viruses, and complex molecular machines ([7],[2]). It first flash freezes a specimen in a thin layer of ice, and then rotates the ice sheet in a transmission electron microscope (TEM), recording images of different projections through the sample. The resulting images are aligned and then back projected to form the desired 3-D model. The typical resolution of a biological electron microscope is on the order of 1 nm per pixel, which means that small imprecisions in the microscope's stage or lenses can cause large alignment errors. To enable a high precision alignment, biologists add a small number of spherical gold beads to the sample before it is frozen. These beads generate high contrast dots in the image that can be tracked across projections. Each gold bead can be seen as a marker with a fixed location in 3D, which provides the reference points to bring all the images to a common frame, as in the classical structure-from-motion problem. A high accuracy alignment is critical to obtain a high resolution tomogram (usually on the order of 5-15 nm resolution). While some methods try to automate the task of tracking markers and aligning the images ([8],[4]), they require user intervention if the SNR of the image becomes too low. Unfortunately, cryo-ET often has poor SNR, since the samples are relatively thick (for TEM) and the restricted electron dose usually results in projections with SNR under 0 dB. This paper shows that formulating this problem as a maximum-likelihood estimation task yields an approach that is able to automatically align cryo-ET datasets with high precision using inference in graphical models. This approach has been packaged into publicly available software called RAPTOR (Robust Alignment and Projection estimation for Tomographic Reconstruction).
NASA Astrophysics Data System (ADS)
Hatakeyama, Rokuro; Yoshizawa, Masazumi; Moriya, Tadashi
2000-11-01
Precise correction for γ-ray attenuation in skull bone has been a significant problem in obtaining quantitative single photon emission computed tomography (SPECT) images. The correction for γ-ray attenuation is approximately proportional to the density and thickness of the bone under investigation. If the acoustic impedance and the speed of sound in bone are measurable using ultrasonic techniques, then the density and thickness of the bone sample can be calculated. Whole bone usually consists of three layers, and each layer has different ultrasonic characteristics. Thus, the speed of sound must be measured in a small section of each layer in order to determine the overall density of whole bone. It is important to measure the attenuation constant in order to determine the appropriate level for the ultrasonic input signal. We have developed a method for measuring the acoustic impedance, speed of sound, and attenuation constant in a small region of a bone sample using a fused quartz rod as a transmission line. In the present study, we obtained the following results: impedance of compact bone, 5.30 (±0.40) × 10⁶ kg/(m²·s); speed of sound, 3780 ± 250 m/s; and attenuation constant, 2.70 ± 0.50 Np/m. These results were used to obtain the densities of compact bone, spongy bone and bone marrow in a bovine bone sample, as well as the density of pig skull bone, which were found to be 1.40 ± 0.30 g/cm³, 1.19 ± 0.50 g/cm³, 0.90 ± 0.30 g/cm³ and 1.26 ± 0.30 g/cm³, respectively. Using a thin solid transmission line, the proposed method makes it possible to determine the density of a small region of a bone sample. It is expected that the proposed method, which is based on ultrasonic measurement, will be useful for application in brain SPECT.
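The density values above follow from the standard acoustic relation Z = ρc, where Z is the characteristic acoustic impedance, ρ the density, and c the speed of sound. A minimal numerical check of the reported compact-bone figures (a sketch; the variable names are ours):

```python
# Density from acoustic impedance and speed of sound: Z = rho * c  =>  rho = Z / c.
Z = 5.30e6    # measured impedance of compact bone, kg/(m^2*s)
c = 3780.0    # measured speed of sound in compact bone, m/s

rho = Z / c              # density in kg/m^3
print(rho / 1000.0)      # ~1.40 g/cm^3, matching the reported compact-bone density
```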
Future orientation, school contexts, and problem behaviors: a multilevel study.
Chen, Pan; Vazsonyi, Alexander T
2013-01-01
The association between future orientation and problem behaviors has received extensive empirical attention; however, previous work has not considered school contextual influences on this link. Using a sample of N = 9,163 9th to 12th graders (51.0 % females) from N = 85 high schools of the National Longitudinal Study of Adolescent Health, the present study examined the independent and interactive effects of adolescent future orientation and school contexts (school size, school location, school SES, school future orientation climate) on problem behaviors. Results provided evidence that adolescent future orientation was associated independently and negatively with problem behaviors. In addition, adolescents from large-size schools reported higher levels of problem behaviors than their age mates from small-size schools, controlling for individual-level covariates. Furthermore, an interaction effect between adolescent future orientation and school future orientation climate was found, suggesting influences of school future orientation climate on the link between adolescent future orientation and problem behaviors as well as variations in effects of school future orientation climate across different levels of adolescent future orientation. Specifically, the negative association between adolescent future orientation and problem behaviors was stronger at schools with a more positive climate of future orientation, whereas school future orientation climate had a significant and unexpectedly positive relationship with problem behaviors for adolescents with low levels of future orientation. Findings implicate the importance of comparing how the future orientation-problem behaviors link varies across different ecological contexts and the need to understand influences of school climate on problem behaviors in light of differences in psychological processes among adolescents.
Meier-Hellmann, Andreas
2006-01-01
The choice of catecholamines for hemodynamic stabilisation in septic shock patients has been an ongoing debate for several years. Several studies have investigated the regional effects of catecholamines in septic patients. However, because of often very small sample sizes, inconsistent results, and methodological problems in the monitoring techniques used in these studies, it is not possible to provide clear recommendations concerning the use of catecholamines in sepsis. Prospective and adequately sized studies are necessary because outcome data are completely lacking.
NASA Astrophysics Data System (ADS)
Jokisch, D. W.; Rajon, D. A.; Bahadori, A. A.; Bolch, W. E.
2011-11-01
Recoiling hydrogen nuclei are a principal mechanism for energy deposition from incident neutrons. For neutrons incident on the human skeleton, the small sizes of two contrasting media (trabecular bone and marrow) present unique problems due to a lack of charged-particle (proton) equilibrium. Specific absorbed fractions have been computed for protons originating in the human skeletal tissues for use in computing neutron dose response functions. The proton specific absorbed fractions were computed using a pathlength-based range-energy calculation in trabecular skeletal samples of a 40-year-old male cadaver.
Transfer-function-parameter estimation from frequency response data: A FORTRAN program
NASA Technical Reports Server (NTRS)
Seidel, R. C.
1975-01-01
A FORTRAN computer program designed to fit a linear transfer function model to given frequency response magnitude and phase data is presented. A conjugate gradient search is used that minimizes the integral of the absolute value of the error squared between the model and the data. The search is constrained to insure model stability. A scaling of the model parameters by their own magnitude aids search convergence. Efficient computer algorithms result in a small and fast program suitable for a minicomputer. A sample problem with different model structures and parameter estimates is reported.
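The fitting procedure described here is straightforward to sketch in a modern setting. Below is a minimal, hedged illustration of the same idea in Python (not the original FORTRAN program): a conjugate-gradient search minimizing the squared error between a hypothetical first-order transfer-function model and given frequency response data; the stability constraint and parameter scaling of the original program are omitted.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical frequency response data (rad/s and complex response values).
w = np.logspace(-1, 2, 50)
H_data = 1.0 / (1.0 + 1j * w / 5.0)   # stand-in for measured magnitude/phase data

def model(params, w):
    # Assumed first-order structure H(jw) = k / (1 + j*tau*w); the original
    # program supports other model structures.
    k, tau = params
    return k / (1.0 + 1j * tau * w)

def cost(params):
    err = model(params, w) - H_data
    return np.sum(np.abs(err) ** 2)   # squared error in magnitude and phase

res = minimize(cost, x0=[0.5, 1.0], method='CG')   # conjugate-gradient search
print(res.x)   # approaches [1.0, 0.2] for this synthetic data
```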
NASA Astrophysics Data System (ADS)
Girinoto, Sadik, Kusman; Indahwati
2017-03-01
The National Socio-Economic Survey samples are designed to produce estimates of parameters for planned domains (provinces and districts). The estimation of unplanned domains (sub-districts and villages) is limited in its ability to yield reliable direct estimates. One possible solution to this problem is to employ small area estimation techniques. The popular choice for small area estimation is based on linear mixed models. However, such models need strong distributional assumptions and do not easily allow for outlier-robust estimation. An alternative approach for this purpose is M-quantile regression for small area estimation, based on modeling specific M-quantile coefficients of the conditional distribution of the study variable given auxiliary covariates. It yields outlier-robust estimation through an M-estimator-type influence function and requires no strong distributional assumptions. In this paper, the aim is to estimate the poverty indicator at the sub-district level in Bogor District, West Java, using M-quantile models for small area estimation. Using data taken from the National Socioeconomic Survey and Village Potential Statistics, the results provide a detailed description of the pattern of incidence and intensity of poverty within Bogor district. We also compare the results with direct estimates. The results show the framework may be preferable when the direct estimate yields no incidence of poverty at all in a small area.
van der Veen-Mulders, Lianne; van den Hoofdakker, Barbara J; Nauta, Maaike H; Emmelkamp, Paul; Hoekstra, Pieter J
2018-02-01
To compare the effectiveness between parent-child interaction therapy (PCIT) and methylphenidate in preschool children with attention-deficit/hyperactivity disorder (ADHD) symptoms and disruptive behaviors who had remaining significant behavior problems after previous behavioral parent training. We included 35 preschool children, ranging in age between 3.4 and 6.0 years. Participants were randomized to PCIT (n = 18) or methylphenidate (n = 17). Outcome measures were maternal ratings of the intensity and number of behavior problems and severity of ADHD symptoms. Changes from pretreatment to directly posttreatment were compared between groups using two-way mixed analysis of variance. We also made comparisons of both treatments to a nonrandomized care as usual (CAU) group (n = 17) regarding intensity and number of behavior problems. All children who started one of the treatments were included in the analyses. Mothers reported a significantly more decreased intensity of behavior problems after methylphenidate (pre-post effect size d = 1.50) compared with PCIT (d = 0.64). ADHD symptoms reduced significantly over time only after methylphenidate treatment (d = 0.48) and not after PCIT. Changes over time of children in the CAU treatment were nonsignificant. Although methylphenidate was more effective than PCIT, both interventions may be effective in the treatment of preschool children with disruptive behaviors. Our findings are preliminary as our sample size was small and the use of methylphenidate in preschool children lacks profound safety data as reflected by its off-label status. More empirical support is needed from studies with larger sample sizes.
Vogel, J.R.; Brown, G.O.
2003-01-01
Semivariograms of samples of Culebra Dolomite have been determined at two different resolutions for gamma ray computed tomography images. By fitting models to semivariograms, small-scale and large-scale correlation lengths are determined for four samples. Different semivariogram parameters were found for adjacent cores at both resolutions. Relative elementary volume (REV) concepts are related to the stationarity of the sample. A scale disparity factor is defined and is used to determine sample size required for ergodic stationarity with a specified correlation length. This allows for comparison of geostatistical measures and representative elementary volumes. The modifiable areal unit problem is also addressed and used to determine resolution effects on correlation lengths. By changing resolution, a range of correlation lengths can be determined for the same sample. Comparison of voxel volume to the best-fit model correlation length of a single sample at different resolutions reveals a linear scaling effect. Using this relationship, the range of the point value semivariogram is determined. This is the range approached as the voxel size goes to zero. Finally, these results are compared to the regularization theory of point variables for borehole cores and are found to be a better fit for predicting the volume-averaged range.
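As a reference for the kind of quantity being fitted here, the sketch below computes a 1-D empirical semivariogram, the statistic whose model fit yields the correlation lengths discussed above; the transect form and synthetic data are our simplifications of the 3-D CT-image setting.

```python
import numpy as np

def empirical_semivariogram(z, max_lag):
    # gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2] along a 1-D transect.
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diffs = z[h:] - z[:-h]
        gamma[h - 1] = 0.5 * np.mean(diffs ** 2)
    return gamma

# Synthetic spatially correlated series standing in for a row of CT voxels.
rng = np.random.default_rng(0)
z = np.convolve(rng.normal(size=500), np.ones(10) / 10, mode='valid')
print(empirical_semivariogram(z, 20))   # rises with lag, then levels off near the sill
```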
Self-calibration of robot-sensor system
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1990-01-01
The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is herein proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. Under the assumption that the variational parameters are small compared to unity, the resulting problem can be solved more readily with relatively small computational effort.
NASA Astrophysics Data System (ADS)
Becker, Johanna Sabine
2002-12-01
Inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) have been applied as the most important inorganic mass spectrometric techniques having multielemental capability for the characterization of solid samples in materials science. ICP-MS is used for the sensitive determination of trace and ultratrace elements in digested solutions of solid samples or of process chemicals (ultrapure water, acids and organic solutions) for the semiconductor industry with detection limits down to sub-picogram per liter levels. Whereas ICP-MS on solid samples (e.g. high-purity ceramics) sometimes requires time-consuming sample preparation for its application in materials science, and the risk of contamination is a serious drawback, a fast, direct determination of trace elements in solid materials without any sample preparation by LA-ICP-MS is possible. The detection limits for the direct analysis of solid samples by LA-ICP-MS have been determined for many elements down to the nanogram per gram range. A deterioration of detection limits was observed for elements where interferences with polyatomic ions occur. The inherent interference problem can often be solved by applying a double-focusing sector field mass spectrometer at higher mass resolution or by collision-induced reactions of polyatomic ions with a collision gas using an ICP-MS fitted with collision cell. The main problem of LA-ICP-MS is quantification if no suitable standard reference materials with a similar matrix composition are available. The calibration problem in LA-ICP-MS can be solved using on-line solution-based calibration, and different procedures, such as external calibration and standard addition, have been discussed with respect to their application in materials science. The application of isotope dilution in solution-based calibration for trace metal determination in small amounts of noble metals has been developed as a new calibration strategy. This review discusses new analytical developments and possible applications of ICP-MS and LA-ICP-MS for the quantitative determination of trace elements and in surface analysis for materials science.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as 'Method 1'), (2) PCA with MCMC sampling (referred to as 'Method 2'), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
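The core of the PCA-based reduction is easy to illustrate: build the prior covariance of the parameter field, keep its leading eigenvectors, and represent candidate models by a handful of coefficients. A minimal sketch under assumed names and sizes (not the authors' convolution test case):

```python
import numpy as np

rng = np.random.default_rng(0)
n_param, n_keep = 200, 5   # full and reduced parameter dimensions (illustrative)

# Assumed prior covariance with smooth spatial correlation.
idx = np.arange(n_param)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 20.0)

# Leading principal components become the reduced basis.
eigval, eigvec = np.linalg.eigh(C)                        # ascending eigenvalues
basis = eigvec[:, -n_keep:] * np.sqrt(eigval[-n_keep:])   # scaled leading PCs

coeff = rng.normal(size=n_keep)   # only 5 unknowns instead of 200
model = basis @ coeff             # one reduced-order model realization
print(model.shape)                # (200,) -- full-dimensional field from 5 numbers
```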
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering
Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro
2017-01-01
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for parallel detection at MHz modulation rates, able to detect multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel before readout. The generated small SRS signal is extracted and amplified in a pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, together with an in-pixel low-pass filter, a sample-and-hold circuit and a switched-capacitor integrator using a fully differential amplifier. A prototype chip was fabricated using a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358
Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.
Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru
2017-11-09
Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for parallel detection at MHz modulation rates, able to detect multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel before readout. The generated small SRS signal is extracted and amplified in a pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, together with an in-pixel low-pass filter, a sample-and-hold circuit and a switched-capacitor integrator using a fully differential amplifier. A prototype chip was fabricated using a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.
Berk, Lotte; van Boxtel, Martin; van Os, Jim
2017-11-01
An increased need exists to examine factors that protect against age-related cognitive decline. There is preliminary evidence that meditation can improve cognitive function. However, most studies are cross-sectional and examine a wide variety of meditation techniques. This review focuses on the standard eight-week mindfulness-based interventions (MBIs) such as mindfulness-based stress reduction (MBSR) and mindfulness-based cognitive therapy (MBCT). We searched the PsychINFO, CINAHL, Web of Science, COCHRANE, and PubMed databases to identify original studies investigating the effects of MBIs on cognition in older adults. Six reports were included in the review, of which three were randomized controlled trials. Studies reported preliminary positive effects on memory, executive function and processing speed. However, most reports had a high risk of bias and sample sizes were small. The only study with a low risk of bias, a large sample size and an active control group reported no significant findings. We conclude that eight-week MBIs for older adults are feasible, but results on cognitive improvement are inconclusive due to a limited number of studies, small sample sizes, and a high risk of bias. Rather than a narrow focus on cognitive training per se, future research may productively shift to investigate MBI as a tool to alleviate suffering in older adults, and to prevent cognitive problems in later life already in younger target populations.
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
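The "fixed metric" simplification described above amounts to running HMC with a constant mass matrix equal to the Gauss-Newton Hessian at the MAP point. A toy sketch of that idea on a Gaussian target, where the Gauss-Newton Hessian is exact (all names are ours, and the PDE machinery is omitted):

```python
import numpy as np

def hmc_fixed_metric(logp, grad, x0, M, n_samples, eps=0.1, L=20):
    # HMC with a constant metric (mass matrix) M; in the setting above, M
    # would be the Gauss-Newton Hessian evaluated at the MAP point.
    Minv = np.linalg.inv(M)
    rng = np.random.default_rng(1)
    x, samples = x0.copy(), []
    for _ in range(n_samples):
        p = rng.multivariate_normal(np.zeros_like(x), M)   # momentum ~ N(0, M)
        x_new, p_new = x.copy(), p.copy()
        for _ in range(L):                                 # leapfrog integration
            p_new = p_new + 0.5 * eps * grad(x_new)
            x_new = x_new + eps * (Minv @ p_new)
            p_new = p_new + 0.5 * eps * grad(x_new)
        # Metropolis accept/reject on the change in total (negative) energy.
        dH = (logp(x_new) - 0.5 * p_new @ Minv @ p_new) \
           - (logp(x)     - 0.5 * p     @ Minv @ p)
        if np.log(rng.random()) < dH:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Gaussian target with precision A; samples should follow N(0, inv(A)).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
logp = lambda x: -0.5 * x @ A @ x
grad = lambda x: -A @ x
chain = hmc_fixed_metric(logp, grad, np.zeros(2), A, 1000)
print(chain.mean(axis=0), chain.std(axis=0))
```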
An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.
Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying
2013-09-01
Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported on the problem of generating Poisson disks on surfaces due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in ℝⁿ. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
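The priority-based conflict resolution is the heart of the method and is easy to sketch. The toy version below works in the Euclidean plane and processes candidates serially in priority order; the paper's contributions, doing this in parallel and intrinsically on surfaces with geodesic distances, are omitted here.

```python
import numpy as np

def poisson_disk_by_priority(n_candidates, r, rng):
    # Each candidate gets a random, unique priority (lower value = higher
    # priority). A candidate survives only if no surviving higher-priority
    # candidate lies within distance r of it.
    pts = rng.random((n_candidates, 2))
    priority = rng.permutation(n_candidates)
    keep = []
    for i in np.argsort(priority):   # visit candidates in priority order
        if all(np.linalg.norm(pts[i] - pts[j]) >= r for j in keep):
            keep.append(i)
    return pts[keep]

rng = np.random.default_rng(42)
samples = poisson_disk_by_priority(2000, 0.05, rng)
print(len(samples), "disks accepted")   # no two samples closer than r
```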
2018-01-01
Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publicly available datasets, we show that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with a small amount of training data, as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which compares favorably with the state-of-the-art methods. PMID:29304512
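The selection loop can be sketched generically: train on a small seed set, score unlabeled samples by how "fuzzy" their predicted class memberships are, and add the fuzziest ones to the training set. The score below is one common fuzziness measure and stands in for the paper's exact definition; the data and classifier choice are illustrative only.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fuzziness(mu, eps=1e-12):
    # Average binary-entropy-style fuzziness of a membership matrix (n x classes);
    # highest for samples whose memberships sit near the class boundary.
    mu = np.clip(mu, eps, 1 - eps)
    return -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu), axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                         # toy "spectral" features
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

seed, pool = np.arange(20), np.arange(20, 1000)         # tiny labeled seed set
clf = KNeighborsClassifier(n_neighbors=5).fit(X[seed], y[seed])
scores = fuzziness(clf.predict_proba(X[pool]))
picked = pool[np.argsort(scores)[-50:]]                 # 50 fuzziest pool samples
train = np.r_[seed, picked]
clf = KNeighborsClassifier(n_neighbors=5).fit(X[train], y[train])
print(clf.score(X, y))
```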
7 CFR 201.42 - Small containers.
Code of Federal Regulations, 2010 CFR
2010-01-01
§ 201.42 Small containers. In sampling seed in small containers that it is not practical to sample as required in § 201.41, a portion of one unopened container or...
Study of dispersed small wind systems interconnected with a utility distribution system
NASA Astrophysics Data System (ADS)
Curtice, D.; Patton, J.; Bohn, J.; Sechan, N.
1980-03-01
Operating problems for various penetrations of small wind systems connected to a utility distribution system are defined. Protection equipment, safety hazards, feeder voltage regulation, line losses, and voltage flicker problems are studied, assuming different small wind systems connected to an existing distribution system. To identify hardware deficiencies, possible solutions provided by off-the-shelf hardware and equipment are assessed. Results of the study indicate that existing techniques are inadequate for detecting isolated operation of a small wind system. Potential safety hazards posed by small wind systems are adequately handled by present work procedures, although these procedures require a disconnect device at synchronous-generator and self-commutated-inverter small wind systems.
Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang
2007-11-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
Oberacher, Herbert
2013-01-01
The “Critical Assessment of Small Molecule Identification” (CASMI) contest was aimed at testing strategies for small molecule identification that are currently available in the experimental and computational mass spectrometry community. We applied tandem mass spectral library search to solve Category 2 of the CASMI Challenge 2012 (best identification for high resolution LC/MS data). More than 230,000 tandem mass spectra that are part of four well-established libraries (MassBank, the collection of tandem mass spectra of the “NIST/NIH/EPA Mass Spectral Library 2012”, METLIN, and the ‘Wiley Registry of Tandem Mass Spectral Data, MSforID’) were searched. The sample spectra acquired in positive ion mode were processed. Seven out of 12 challenges did not produce putative positive matches, simply because reference spectra were not available for the compounds searched. This suggests that, to some extent, the limited coverage of chemical space with high-quality reference spectra is still a problem encountered in tandem mass spectral library search. Solutions were submitted for five challenges. Three compounds were correctly identified (kanamycin A, benzyldiphenylphosphine oxide, and 1-isopropyl-5-methyl-1H-indole-2,3-dione). In the absence of any reference spectrum, a false positive identification was obtained for 1-aminoanthraquinone by matching the corresponding sample spectrum to the structurally related compounds N-phenylphthalimide and 2-aminoanthraquinone. Another false positive result was submitted for 1H-benz[g]indole; for the 1H-benz[g]indole-specific sample spectra provided, carbazole was listed as the best matching compound. In this case, the quality of the available 1H-benz[g]indole-specific reference spectra was found to hamper unequivocal identification. PMID:24957994
Lifetime Paid Work and Mental Health Problems among Poor Urban 9-to-13-Year-Old Children in Brazil
Pires, Ivens H.; Paula, Cristiane S.
2013-01-01
Objective. To verify if emotional/behavioral problems are associated with lifetime paid work in poor urban children, when taking into account other potential correlates. Methods. Cross-sectional study focused on 9-to-13-year-old children (n = 212). In a probabilistic sample of clusters of eligible households (women 15–49 years and son/daughter <18 years), one mother-child pair was randomly selected per household (n = 813; response rate = 82.4%). CBCL/6-18 identified child emotional/behavioral problems. Potential correlates include child gender and age, socioeconomic status/SES, maternal education, parental working status, and family social isolation, among others. Multivariate analysis examined the relationship between emotional/behavioral problems and lifetime paid work in the presence of significant correlates. Findings. All work activities were non-harmful (e.g., selling fruits, helping parents at their small business, and baby sitting). Children with lower SES and socially isolated were more involved in paid work than less disadvantaged peers. Children ever exposed to paid work were four times more likely to present anxiety/depression symptoms at a clinical level compared to non-exposed children. Multivariate modeling identified three independent correlates: child pure internalizing problems, social isolation, and low SES. Conclusion. There is an association between lifetime exposure to exclusively non-harmful paid work activities and pure internalizing problems even when considering SES variability and family social isolation. PMID:24302872
Lifetime paid work and mental health problems among poor urban 9-to-13-year-old children in Brazil.
Bordin, Isabel A; Pires, Ivens H; Paula, Cristiane S
2013-01-01
To verify if emotional/behavioral problems are associated with lifetime paid work in poor urban children, when taking into account other potential correlates. Cross-sectional study focused on 9-to-13-year-old children (n = 212). In a probabilistic sample of clusters of eligible households (women 15-49 years and son/daughter <18 years), one mother-child pair was randomly selected per household (n = 813; response rate = 82.4%). CBCL/6-18 identified child emotional/behavioral problems. Potential correlates include child gender and age, socioeconomic status/SES, maternal education, parental working status, and family social isolation, among others. Multivariate analysis examined the relationship between emotional/behavioral problems and lifetime paid work in the presence of significant correlates. All work activities were non-harmful (e.g., selling fruits, helping parents at their small business, and baby sitting). Children with lower SES and socially isolated were more involved in paid work than less disadvantaged peers. Children ever exposed to paid work were four times more likely to present anxiety/depression symptoms at a clinical level compared to non-exposed children. Multivariate modeling identified three independent correlates: child pure internalizing problems, social isolation, and low SES. There is an association between lifetime exposure to exclusively non-harmful paid work activities and pure internalizing problems even when considering SES variability and family social isolation.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small number of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a database of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain database can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude fewer samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior.) We apply our method to the Japanese Islands region, where we previously constrained the 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
NASA Technical Reports Server (NTRS)
1973-01-01
The aerodynamic design problems for the Pioneer Venus mission are discussed for a small probe shape that enters the atmosphere, and exhibits good stability for the subsonic portion of the flight. The problems discussed include: heat shield, structures and mechanisms, thermal control, decelerator, probe communication, data handling and command, and electric power.
Implications for the missing low-mass galaxies (satellites) problem from cosmic shear
NASA Astrophysics Data System (ADS)
Jimenez, Raul; Verde, Licia; Kitching, Thomas D.
2018-06-01
The number of observed dwarf galaxies, with dark matter mass ≲ 10¹¹ M⊙, in the Milky Way or the Andromeda galaxy does not agree with predictions from the successful ΛCDM paradigm. To alleviate this problem a suppression of dark matter clustering power on very small scales has been conjectured. However, the abundance of dark matter halos outside our immediate neighbourhood (the Local Group) seems to agree with the ΛCDM-expected abundance. Here we connect these problems to observations of weak lensing cosmic shear, pointing out that cosmic shear can make significant statements about the missing satellites problem in a statistical way. As an example and pedagogical application we use recent constraints on small-scale power suppression from measurements of the CFHTLenS data. We find that, on average, in a region of ~Gpc³ there is no significant small-scale power suppression. This implies that suppression of small-scale power is not a viable solution to the `missing satellites problem' or, alternatively, that on average in this volume there is no `missing satellites problem' for dark matter masses ≳ 5 × 10⁹ M⊙. Further analysis of current and future weak lensing surveys will probe much smaller scales, k > 10 h Mpc⁻¹, corresponding roughly to masses M < 10⁹ M⊙.
Negotiations in Small School Districts.
ERIC Educational Resources Information Center
Freers, Ann M.
Four paradigms of labor-management relations are found in American small schools: paternalism, collective bargaining, collegial problem solving, and community problem solving. Examination of the conditions under which each is likely to exist, and of their unique characteristics, reveals the circumstances that will enhance the effectiveness of each.…
Stresses in adhesively bonded joints: A closed form solution. [plate theory
NASA Technical Reports Server (NTRS)
Delale, F.; Erdogan, F.; Aydinoglu, M. N.
1980-01-01
The plane strain problem of adhesively bonded structures which consist of two different orthotropic adherends is considered. Assuming that the thicknesses of the adherends are constant and are small in relation to the lateral dimensions of the bonded region, the adherends are treated as plates. The transverse shear effects in the adherends and the in-plane normal strain in the adhesive are taken into account. The problem is reduced to a system of differential equations for the adhesive stresses, which is solved in closed form. A single lap joint and a stiffened plate under various loading conditions are considered as examples. To verify the basic trend of the solutions obtained from the plate theory, a sample problem is solved by using the finite element method and by treating the adherends and the adhesive as elastic continua. The plate theory not only predicts the correct trend for the adhesive stresses but also gives rather surprisingly accurate results.
Steroid Assays in Paediatric Endocrinology
2010-01-01
Most steroid disorders of the adrenal cortex come to clinical attention in childhood, and in order to investigate these problems there are many challenges for the laboratory which need to be appreciated to a certain extent by clinicians. The analysis of sex steroids in biological fluids from neonates, and over adrenarche and puberty, presents challenges of specificity and concentration, often in small sample volumes. Different reference ranges are also needed for interpretation. For around 40 years, quantitative assays for the steroids and their regulatory peptide hormones have been possible using immunoassay techniques. Problems are recognised, and this review aims to summarise the benefits and failings of immunoassays and to introduce where tandem mass spectrometry is anticipated to meet the clinical needs for steroid analysis in paediatric endocrine investigations. It is important to keep a dialogue between clinicians and the laboratory, especially when any laboratory result does not make sense in the clinical investigation. PMID:21274330
Specifying the Links Between Household Chaos and Preschool Children’s Development
Martin, Anne; Razza, Rachel; Brooks-Gunn, Jeanne
2011-01-01
Household chaos has been linked to poorer cognitive, behavioral, and self-regulatory outcomes in young children, but the mechanisms responsible remain largely unknown. Using a diverse sample of families in Chicago, the present study tests for the independent contributions made by five indicators of household chaos: noise, crowding, family instability, lack of routine, and television usually on. Chaos was measured at age 2; outcomes measured at age 5 tap receptive vocabulary, attention and behavior problems, and effortful control. Results show that controlling for all other measures of chaos, children with a lack of routine scored lower on receptive vocabulary and delayed gratification, while children whose television was generally on scored higher on aggression and attention problems. The provision of learning materials mediated a small part of the association between television and receptive vocabulary. Family instability, crowding, and noise did not predict any outcomes once other measures of chaos were controlled. PMID:22919120
Statistical analysis of effective singular values in matrix rank determination
NASA Technical Reports Server (NTRS)
Konstantinides, Konstantinos; Yao, Kung
1988-01-01
A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. Results applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix are considered. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
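The thresholding idea can be sketched compactly: keep only singular values that exceed a noise-dependent bound. The bound below, proportional to σ√max(m, n), is a common choice under an i.i.d. noise model and stands in for the paper's exact confidence-region expressions; the alpha multiplier is our assumption, playing the role of the significance level.

```python
import numpy as np

def effective_rank(A, sigma_noise, alpha=3.0):
    # Count singular values exceeding a perturbation bound for i.i.d. noise.
    s = np.linalg.svd(A, compute_uv=False)
    threshold = alpha * sigma_noise * np.sqrt(max(A.shape))
    return int(np.sum(s > threshold)), s

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))   # true rank 3
noisy = low_rank + 0.05 * rng.normal(size=(50, 50))              # perturbed matrix
rank, s = effective_rank(noisy, sigma_noise=0.05)
print(rank)   # recovers the effective rank 3 despite the perturbation
```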
Sampling schemes and parameter estimation for nonlinear Bernoulli-Gaussian sparse models
NASA Astrophysics Data System (ADS)
Boudineau, Mégane; Carfantan, Hervé; Bourguignon, Sébastien; Bazot, Michael
2016-06-01
We address the sparse approximation problem in the case where the data are approximated by a linear combination of a small number of elementary signals, each of these signals depending non-linearly on additional parameters. Sparsity is explicitly expressed through a Bernoulli-Gaussian hierarchical model in a Bayesian framework. Posterior mean estimates are computed using Markov chain Monte Carlo algorithms. We generalize the partially marginalized Gibbs sampler proposed in the linear case in [1], and build a hybrid Hastings-within-Gibbs algorithm in order to account for the nonlinear parameters. All model parameters are then estimated in an unsupervised procedure. The resulting method is evaluated on a sparse spectral analysis problem. It is shown to converge more efficiently than the classical joint estimation procedure, with only a slight increase of the computational cost per iteration, consequently reducing the global cost of the estimation procedure.
[Hypochondria circumscripta (to the problem of coenesthesiopathic paranoia)].
Smulevich, A B; Frolova, V I
2006-01-01
Hypochondria circumscripta manifests in patients with paranoial personality and signs of somatopsychic accentuation. A sample included 11 patients (6 men, 5 women, mean age 54 years) who referred to dermatologists or had been admitted to gastroenterological and psychiatric units. Pathokinesis of hypochondria circumscripta comprises three stages: idiopathic algias, overmastering sensations and possession of pain. In the latter stage, delusional behavior targeted to the elimination of a part of the body, which is perceived as the source of pain, develops. Psychopathological disorders are realized in limits of coenesthesiopathic spectrum without tendency to interpretive delusion manifestation as well as transformation to systematic delusion of persecution during the disease course. As a consequence of above mentioned peculiarities of psychopathological structure, the stage of possession of pain may be designated as coenesthesiopathic paranoia. Because of the small sample, the findings can be considered as preliminary ones.
NASA Technical Reports Server (NTRS)
Cohen, Barbara A.; Coker, Robert F.
2010-01-01
The South Pole Aitken (SPA) basin is the stratigraphically oldest identifiable lunar basin and is therefore one of the most important targets for absolute age-dating to help understand whether ancient lunar bombardment history smoothly declined or was punctuated by a cataclysm. A feasible near-term approach to this problem is to robotically collect a sample from near the center of the basin, where vertical and lateral mixing provided by post-basin impacts ensures that such a sample will be composed of small rock fragments from SPA itself, from local impact craters, and from faraway giant basins. The range of ages, intermediate spikes in the age distribution, and the oldest ages are all part of the definition of the absolute age and impact history recorded within the SPA basin.
[Imaging Mass Spectrometry in Histopathologic Analysis].
Yamazaki, Fumiyoshi; Seto, Mitsutoshi
2015-04-01
Matrix-assisted laser desorption/ionization (MALDI)-imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS identifies a target molecule. In addition, IMS enables global analysis of biomolecules containing unknown molecules by detecting the ratio of the molecular weight to electric charge without any target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we firstly introduce the principle of imaging mass spectrometry and recent advances in the sample preparation method. Secondly, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and clinical application, such as in drug development.
Development of a noise annoyance sensitivity scale
NASA Technical Reports Server (NTRS)
Bregman, H. L.; Pearson, R. G.
1972-01-01
Examining the problem of noise pollution from the psychological rather than the engineering view, a test of human sensitivity to noise was developed against the criterion of noise annoyance. Test development evolved from a previous study in which biographical, attitudinal, and personality data were collected on a sample of 166 subjects drawn from the adult community of Raleigh. Analysis revealed that only a small subset of the data collected was predictive of noise annoyance. Item analysis yielded 74 predictive items that composed the preliminary noise sensitivity test. This was administered to a sample of 80 adults who later rated the annoyance value of six sounds (equated in terms of peak sound pressure level) presented in a simulated home living-room environment. A predictive model involving 20 test items was developed using multiple regression techniques, and an item weighting scheme was evaluated.
Quantization of Gaussian samples at very low SNR regime in continuous variable QKD applications
NASA Astrophysics Data System (ADS)
Daneshgaran, Fred; Mondin, Marina
2016-09-01
The main problem for information reconciliation in continuous variable Quantum Key Distribution (QKD) at low Signal to Noise Ratio (SNR) is the quantization and assignment of labels to the samples of the Gaussian Random Variables (RVs) observed at Alice and Bob. The trouble is that most of the samples, assuming that the Gaussian variable is zero mean, which is de facto the case, tend to have small magnitudes and are easily disturbed by noise. Transmission over longer and longer distances increases the losses, corresponding to a lower effective SNR and exacerbating the problem. This paper looks at the quantization problem of the Gaussian samples in the very low SNR regime from an information theoretic point of view. We look at the problem of two-bit-per-sample quantization of the Gaussian RVs at Alice and Bob and derive expressions for the mutual information between the bit strings that result from this quantization. The quantization threshold for the Most Significant Bit (MSB) should be chosen based on the maximization of the mutual information between the quantized bit strings. Furthermore, while the LSB strings at Alice and Bob are balanced, in the sense that their entropy is close to maximum, this is not the case for the second most significant bit even under the optimal threshold. We show that with two-bit quantization at an SNR of -3 dB we achieve 75.8% of the maximal achievable mutual information between Alice and Bob; hence, as the number of quantization bits increases beyond 2 bits, the number of additional useful bits that can be extracted for secret key generation decreases rapidly. Furthermore, the error rates between the bit strings at Alice and Bob at the same significant bit level are rather high, demanding very powerful error correcting codes. While our calculations and simulations show that the mutual information between the LSBs at Alice and Bob is 0.1044 bits, that at the MSB level is only 0.035 bits. Hence, it is only by looking at the bits jointly that we are able to achieve a mutual information of 0.2217 bits, which is 75.8% of the maximum achievable. The implication is that only by coding both MSB and LSB jointly can we hope to get close to this 75.8% limit. Hence, non-binary codes are essential to achieve acceptable performance.
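A quick Monte Carlo check of this kind of figure is straightforward. The sketch below estimates the mutual information between 2-bit quantized Gaussians for Y = X + N at -3 dB, using a simple sign/magnitude quantizer with a free threshold t; this is our simplified stand-in for the paper's quantizer and threshold optimization, not a reproduction of its exact numbers.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
snr = 10 ** (-3 / 10)                               # -3 dB signal-to-noise ratio
x = rng.normal(size=n)                              # Alice's Gaussian samples
y = x + rng.normal(scale=np.sqrt(1 / snr), size=n)  # Bob's noisy observations

def quantize2(v, t):
    # Two bits per sample: a sign bit and a magnitude bit (|v| > t) -> symbols 0..3.
    return 2 * (v > 0).astype(int) + (np.abs(v) > t).astype(int)

def mutual_info_bits(a, b):
    # Plug-in estimate of I(A;B) in bits from the empirical joint distribution.
    joint = np.histogram2d(a, b, bins=[4, 4])[0] / len(a)
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz]))

t = 0.9 * np.std(y)   # illustrative threshold; the paper optimizes this choice
mi = mutual_info_bits(quantize2(x, t), quantize2(y, t))
print(mi, "bits; Gaussian capacity bound:", 0.5 * np.log2(1 + snr))  # bound ~0.293
```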
Driskell, Jeff; Bradford, Judith
2012-01-01
While we know that minority status differentiates the experience of aging, little research has been done to examine the ways in which patterns of successful aging may differ in diverse subgroups of older adults. In this exploratory study, we investigated and described experiences of successful aging in a sample of lesbian, gay, bisexual and transgender (LGBT) older adults. Directed by a community-based participatory research process, we conducted semi-structured in-depth interviews with 22 LGBT adults, age 60 and older. We took an inductive, grounded theory approach to analyze the taped and transcribed interviews. We coded respondent experiences in four domains: physical health, mental health, emotional state and social engagement. Four gradations of successful aging emerged. Very few in our sample met the bar for “traditional success” characterized by the absence of problems in all four domains of health. Most of the sample was coping to a degree with problems and were categorized in one of two gradations on a continuum of successful aging: “surviving and thriving” and “working at it.” A small number was “ailing”: not coping well with problems. Some of the experiences that respondents described were related to LGBT status; others were related to more general processes of aging. The research suggests that a successful aging framework that is modified to include coping can better describe the experiences of LGBT older adults. The modified conceptual model outlined here may be useful in future research on this population, as well as more broadly for diverse populations of adults, and may be adapted for use in practice to assess and improve health and well-being. PMID:23273552
NASA Astrophysics Data System (ADS)
Avdyushev, Victor A.
2017-12-01
Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems an evaluation of orbital uncertainty due to random observation errors is greatly complicated, since the linear estimates conventionally used are no longer acceptable for describing the uncertainty even as a rough approximation. Nevertheless, if an inverse problem is only weakly intrinsically nonlinear, one can resort to the so-called method of disturbed observations (also known as observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e. the more accurately it enables one to stochastically simulate the orbital uncertainty, while it is strictly exact only when the problem is intrinsically linear. However, as we ascertained experimentally, its efficiency was found to be higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of Celestial Mechanics in which orbits are determined from poorly informative samples of observations, as typically occurs for recently discovered asteroids. To inquire into the question, we introduce an index of intrinsic nonlinearity. In asteroid problems it shows that the intrinsic nonlinearity can be strong enough to appreciably affect probabilistic estimates, especially for the very short observed orbital arcs over which the asteroids travel for about a hundredth of their orbital periods or less. As is known from regression analysis, the source of intrinsic nonlinearity is the nonflatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that when determining asteroid orbits this nonflatness is actually very slight. In the parametric space, however, the effect of intrinsic nonlinearity is exaggerated, mainly by the ill-conditioning of the inverse problem. Even so, we conclude that the method of disturbed observations should in practice still be entirely acceptable for adequately describing the orbital uncertainty since, from a geometrical point of view, the efficiency of the method depends directly only on the nonflatness of the estimation subspace and increases as the nonflatness decreases.
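The method of disturbed observations can be illustrated on a toy nonlinear least-squares problem standing in for orbit determination: fit the nominal parameters, then repeatedly re-fit against noise-perturbed copies of the observations and read the uncertainty off the resulting parameter cloud. The model, noise level, and arc length below are invented for illustration and have nothing to do with the authors' asteroid dynamics.

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    # toy stand-in for an orbit model: amplitude and frequency over a short arc
    return p[0] * np.sin(p[1] * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.3, 12)                 # very short "observed arc"
sigma = 0.02
obs = model([1.0, 7.0], t) + sigma * rng.standard_normal(t.size)

# nominal fit to the actual observations
fit = least_squares(lambda p: model(p, t) - obs, x0=[0.8, 6.0]).x

# disturbed observations: re-fit against noise-perturbed copies of the data
cloud = np.array([
    least_squares(lambda p, d=obs + sigma * rng.standard_normal(t.size):
                  model(p, t) - d, x0=fit).x
    for _ in range(500)
])
print("nominal parameters:", fit)
print("uncertainty (std of the parameter cloud):", cloud.std(axis=0))
```

In an intrinsically linear problem this cloud reproduces the exact parameter distribution; the paper's nonlinearity index quantifies how far a given problem departs from that ideal.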
A fast least-squares algorithm for population inference
2013-01-01
Background Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual’s genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. Results We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. Conclusions The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate. PMID:23343408
A fast least-squares algorithm for population inference.
Parry, R Mitchell; Wang, May D
2013-01-23
Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
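The least-squares simplification lends itself to a compact alternating scheme. The sketch below is a generic alternating least-squares factorization under the model G/2 ≈ QP, with a crude clip-and-renormalize projection onto the constraints; the paper's actual algorithm and its handling of the expected degree of admixture are more careful, so treat this only as an illustration of the model structure.

```python
import numpy as np

def als_admixture(G, k, iters=200, seed=0):
    """G: n x m genotype matrix with entries in {0, 1, 2}; returns Q (n x k
    admixture proportions) and P (k x m allele frequencies) with G/2 ~ Q @ P."""
    rng = np.random.default_rng(seed)
    F = G / 2.0
    Q = rng.dirichlet(np.ones(k), size=G.shape[0])       # rows sum to one
    for _ in range(iters):
        # least-squares update of P, clipped into (0, 1)
        P = np.linalg.lstsq(Q, F, rcond=None)[0].clip(1e-4, 1 - 1e-4)
        # least-squares update of Q, crudely projected back onto the simplex
        Q = np.linalg.lstsq(P.T, F.T, rcond=None)[0].T.clip(1e-4, None)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q, P

# tiny synthetic example: two source populations, admixed individuals
rng = np.random.default_rng(1)
P_true = rng.uniform(0.05, 0.95, size=(2, 500))
Q_true = rng.dirichlet([0.5, 0.5], size=100)
G = rng.binomial(2, Q_true @ P_true)                     # binomial genotype model
Q_hat, P_hat = als_admixture(G, k=2)
```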
Brewer, S.K.; Rabeni, C.F.; Papoulias, D.M.
2008-01-01
We compared gonadosomatic index (GSI) and histological analysis of ovaries for identifying reproductive periods of fishes to determine the validity of using GSI in future studies. Four small-bodied riverine species were examined in our comparison of the two methods. Mean GSI was significantly different between all histological stages for suckermouth minnow and red shiner. Mean GSI was significantly different between most stages for slenderhead darter; whereas stages 3 and 6 were not significantly different, the time period when these stages are present would allow fisheries biologists to distinguish between the two stages. Mean GSI was not significantly different for many histological stages in stonecat. Difficulties in distinguishing between histological stages and GSI associated with stonecat illustrate potential problems obtaining appropriate sample sizes from species that move to alternative habitats to spawn. We suggest that GSI would be a useful tool in identifying mature ovaries in many small-bodied, multiple-spawning fishes. This information could be combined with data from histology during mature periods to pinpoint specific spawning events. © 2007 Blackwell Munksgaard.
Methodology for astronaut reconditioning research.
Beard, David J; Cook, Jonathan A
2017-01-01
Space medicine offers some unique challenges, especially in terms of research methodology. A specific challenge for astronaut reconditioning involves identification of what aspects of terrestrial research methodology hold and which require modification. This paper reviews this area and presents appropriate solutions where possible. It is concluded that spaceflight rehabilitation research should remain question/problem driven and is broadly similar to the terrestrial equivalent on small populations, such as rare diseases and various sports. Astronauts and Medical Operations personnel should be involved at all levels to ensure feasibility of research protocols. There is room for creative and hybrid methodology but careful systematic observation is likely to be more achievable and fruitful than complex trial based comparisons. Multi-space agency collaboration will be critical to pool data from small groups of astronauts with the accepted use of standardised outcome measures across all agencies. Systematic reviews will be an essential component. Most limitations relate to the inherent small sample size available for human spaceflight research. Early adoption of a co-operative model for spaceflight rehabilitation research is therefore advised. Copyright © 2016 Elsevier Ltd. All rights reserved.
Current trends in nanobiosensor technology
Wu, Diana; Langer, Robert S
2014-01-01
The development of tools and processes used to fabricate, measure, and image nanoscale objects has led to a wide range of work devoted to producing sensors that interact with extremely small numbers (or an extremely small concentration) of analyte molecules. These advances are particularly exciting in the context of biosensing, where the demands for low concentration detection and high specificity are great. Nanoscale biosensors, or nanobiosensors, provide researchers with an unprecedented level of sensitivity, often to the single molecule level. The use of biomolecule-functionalized surfaces can dramatically boost the specificity of the detection system, but can also yield reproducibility problems and increased complexity. Several nanobiosensor architectures based on mechanical devices, optical resonators, functionalized nanoparticles, nanowires, nanotubes, and nanofibers have been demonstrated in the lab. As nanobiosensor technology becomes more refined and reliable, it is likely it will eventually make its way from the lab to the clinic, where future lab-on-a-chip devices incorporating an array of nanobiosensors could be used for rapid screening of a wide variety of analytes at low cost using small samples of patient material. PMID:21391305
Rain Erosion Studies of Sapphire, Aluminum Oxynitride, Spinel, Lanthana- Doped Yttria, and TAF Glass
1990-07-01
[Abstract garbled in extraction: only fragments of the report's rain-erosion damage tables survive, recording pitting, cratering, edge fractures, erosion damage, and sample breakage for the tested materials under various test conditions, together with the start of a numbered list of principal conclusions beginning with ALON. The one recoverable statement is that where changes are small, there is little change in average scatter for any material in any test.]
NASA Astrophysics Data System (ADS)
Gries, Katharina Ines; Schlechtweg, Julian; Hille, Pascal; Schörmann, Jörg; Eickhoff, Martin; Volz, Kerstin
2017-10-01
Scanning transmission electron microscopy is an extremely useful method for imaging small features with sizes in the range of a few nanometers and below. But it must be taken into account that such images are projections of the sample and do not necessarily represent the real three-dimensional structure of the specimen. By applying electron tomography this problem can be overcome. In our work GaN nanowires including InGaN nanodisks were investigated. To reduce the effect of the missing wedge, a single nanowire was removed from the underlying silicon substrate using a manipulator needle and attached to a tomography holder. Since this sample exhibits the same thickness of a few tens of nanometers in all directions normal to the tilt axis, this procedure allows a sample tilt of ±90°. Reconstruction of the acquired data reveals a split of the InGaN nanodisks into a horizontal continuation of the (0 0 0 1̄) central facet and an inclined {1 0 1̄ l} facet (with l = -2 or -3).
How effective are expressive writing interventions for adolescents? A meta-analytic review.
Travagin, Gabriele; Margola, Davide; Revenson, Tracey A
2015-03-01
This meta-analysis evaluated the effects of the expressive writing intervention (EW; Pennebaker & Beall, 1986) among adolescents. Twenty-one independent studies that assessed the efficacy of expressive writing on youth samples aged 10-18 years were collected and analyzed. Results indicated an overall mean g-effect size that was positive in direction but relatively small (0.127), as well as significant g-effect sizes ranging from 0.107 to 0.246 for the outcome domains of Emotional Distress, Problem Behavior, Social Adjustment, and School Participation. Few significant effects were found within specific outcome domains for putative moderator variables that included characteristics of the participants, intervention instructions, or research design. Studies involving adolescents with high levels of emotional problems at baseline reported larger effects on school performance. Studies that implemented a higher dosage intervention (i.e., greater number and, to some extent, greater spacing of sessions) reported larger effects on somatic complaints. Overall, the findings suggest that expressive writing tends to produce small yet significant improvements on adolescents' well-being. The findings highlight the importance of modifying the traditional expressive writing protocol to enhance its efficacy and reduce potential detrimental effects. At this stage of research the evidence on expressive writing as a viable intervention for adolescents is promising but not decisive. Copyright © 2015 Elsevier Ltd. All rights reserved.
Condition Number Regularized Covariance Estimation*
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2012-01-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p, small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
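The key idea, eigenvalues shrunk so that the estimator's condition number is bounded by a chosen κ, can be sketched as follows. The paper derives the exact maximum-likelihood solution path and an adaptive choice of regularization; here a simple one-dimensional grid search over the clipping level stands in for that machinery.

```python
import numpy as np

def cond_reg_cov(S, kappa, n_obs, grid=200):
    """Clip the eigenvalues of the sample covariance S into [tau, kappa*tau],
    choosing tau by maximizing the Gaussian log-likelihood on a grid."""
    l, V = np.linalg.eigh(S)
    l = np.maximum(l, 0.0)
    taus = np.geomspace(max(l.max() * 1e-4, 1e-12), l.max(), grid)
    best_d, best_ll = None, -np.inf
    for tau in taus:
        d = np.clip(l, tau, kappa * tau)
        ll = -0.5 * n_obs * np.sum(np.log(d) + l / d)   # Gaussian log-likelihood
        if ll > best_ll:
            best_d, best_ll = d, ll
    return (V * best_d) @ V.T                           # cond <= kappa by construction

# "large p, small n": 50 variables, 20 observations
rng = np.random.default_rng(2)
X = rng.standard_normal((20, 50))
S = np.cov(X, rowvar=False)
Sigma = cond_reg_cov(S, kappa=30.0, n_obs=20)
print("condition number:", np.linalg.cond(Sigma))
```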
Yazdi, Maryam; Roohafza, Hamidreza; Feizi, Awat; Rabiei, Katayoon; Sarafzadegan, Nizal
2018-08-15
Psychological problems affect many employees and their job performance. Although the associations of diet and stress, as modifiable risk factors, with psychological problems have been investigated separately, their simultaneous impact has not been studied. The present study aimed to investigate the association of major dietary patterns and stressful life events with the intensity of psychological problems in a large sample of Iranian industrial employees. In a cross-sectional study, 3063 employees in an industrial unit in Isfahan, Iran were investigated. A psychological problems profile, as a latent construct, was extracted from three common psychological problems: depression, anxiety and psychological distress. Depression and anxiety were measured by the Persian validated version of the Hospital Anxiety and Depression Scale (HADS), and psychological distress by the 12-item General Health Questionnaire (GHQ). Major dietary patterns were derived from a validated short form of a semi-quantitative Food Frequency Questionnaire (FFQ) using exploratory factor analysis. Stressful life events dimensions were extracted, based on factor analysis, from the self-perceived frequency and intensity of the Stressful Life Events (SLE) questionnaire. Associations of the obtained factors were investigated in a latent structural modeling framework. Three dietary patterns, i.e. western, traditional and healthy, and two stressor dimensions, personal life and socioeconomic, were extracted. Greater adherence to a healthy diet was protectively associated with psychological problems profile scores (β = -0.54; 95% CI: -0.74, -0.34). Adherence to the western (β = 0.23; 95% CI: 0.02, 0.45) and Iranian traditional (β = 0.48; 95% CI: 0.28, 0.68) dietary patterns was positively associated with higher psychological problems scores in employees. After adjustment for life stressors, however, only adherence to a healthy diet remained significantly associated with the psychological problems profile (β = -0.43; 95% CI: -0.59, -0.27). Also, personal life stressors (β = 0.81; 95% CI: 0.63, 0.99) and socioeconomic stressors (β = 0.12; 95% CI: 0.08, 0.16) had a significant direct association with psychological problems profile scores. Limitations include assessment of variables by self-reported questionnaires, the inability to infer causality because of the cross-sectional design, the lack of adjustment for nutrient intake in the association analyses, and the relatively small sample size of women. Life stressors, particularly personal stressors, have a negative direct association with the psychological health of employees. Adherence to a healthy diet can be related to improvement of psychological health in employees. The results can be useful in occupational health planning in order to improve mental health and job productivity. Copyright © 2018 Elsevier B.V. All rights reserved.
Doshi, Urmi; Hamelberg, Donald
2015-05-01
Accelerated molecular dynamics (aMD) has been proven to be a powerful biasing method for enhanced sampling of biomolecular conformations on general-purpose computational platforms. Biologically important long timescale events that are beyond the reach of standard molecular dynamics can be accessed without losing the detailed atomistic description of the system in aMD. Over other biasing methods, aMD offers the advantages of tuning the level of acceleration to access the desired timescale without any advance knowledge of the reaction coordinate. Recent advances in the implementation of aMD and its applications to small peptides and biological macromolecules are reviewed here along with a brief account of all the aMD variants introduced in the last decade. In comparison to the original implementation of aMD, the recent variant in which all the rotatable dihedral angles are accelerated (RaMD) exhibits faster convergence rates and significant improvement in statistical accuracy of retrieved thermodynamic properties. RaMD in conjunction with accelerating diffusive degrees of freedom, i.e. dual boosting, has been rigorously tested for the most difficult conformational sampling problem, protein folding. It has been shown that RaMD with dual boosting is capable of efficiently sampling multiple folding and unfolding events in small fast folding proteins. RaMD with the dual boost approach opens exciting possibilities for sampling multiple timescales in biomolecules. While equilibrium properties can be recovered satisfactorily from aMD-based methods, directly obtaining dynamics and kinetic rates for larger systems presents a future challenge. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
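The original aMD bias has a simple closed form: whenever the potential V falls below a boost threshold E, it is raised by ΔV = (E − V)² / (α + E − V), and canonical averages are recovered by reweighting each frame with exp(ΔV/kT). The sketch below implements that form; the per-frame energies are synthetic stand-ins, and the per-rotatable-dihedral boosting of RaMD and the dual-boost variant are not reproduced here.

```python
import numpy as np

def amd_boost(V, E, alpha):
    # dV = (E - V)^2 / (alpha + E - V) where V < E, else 0
    V = np.asarray(V, dtype=float)
    dV = np.zeros_like(V)
    m = V < E
    dV[m] = (E - V[m]) ** 2 / (alpha + (E - V[m]))
    return dV

kT = 0.5961                                   # kcal/mol at 300 K
rng = np.random.default_rng(3)
V = rng.normal(-100.0, 3.0, 10_000)           # synthetic per-frame potentials
dV = amd_boost(V, E=-95.0, alpha=4.0)
w = np.exp(dV / kT)                           # canonical reweighting factors
obs = V                                       # any per-frame observable
print("reweighted mean:", np.sum(w * obs) / np.sum(w))
```

In practice the exponential reweighting is where the statistical-accuracy issues discussed in the review arise: a few highly boosted frames can dominate the weights, which is one motivation for the milder per-dihedral boosts of RaMD.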
Separation in Logistic Regression: Causes, Consequences, and Control.
Mansournia, Mohammad Ali; Geroldinger, Angelika; Greenland, Sander; Heinze, Georg
2018-04-01
Separation is encountered in regression models with a discrete outcome (such as logistic regression) where the covariates perfectly predict the outcome. It is most frequent under the same conditions that lead to small-sample and sparse-data bias, such as presence of a rare outcome, rare exposures, highly correlated covariates, or covariates with strong effects. In theory, separation will produce infinite estimates for some coefficients. In practice, however, separation may be unnoticed or mishandled because of software limits in recognizing and handling the problem and in notifying the user. We discuss causes of separation in logistic regression and describe how common software packages deal with it. We then describe methods that remove separation, focusing on the same penalized-likelihood techniques used to address more general sparse-data problems. These methods improve accuracy, avoid software problems, and allow interpretation as Bayesian analyses with weakly informative priors. We discuss likelihood penalties, including some that can be implemented easily with any software package, and their relative advantages and disadvantages. We provide an illustration of ideas and methods using data from a case-control study of contraceptive practices and urinary tract infection.
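A minimal demonstration of the phenomenon, using scikit-learn as a stand-in for the packages discussed (the `penalty=None` spelling assumes scikit-learn ≥ 1.2; the ridge penalty below is a simple surrogate for the Firth-type penalties the paper recommends, not an implementation of them):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# perfectly separated data: x > 0 predicts y = 1 without error
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]).reshape(-1, 1)
y = np.array([0, 0, 0, 1, 1, 1])

# unpenalized maximum likelihood: the coefficient grows without bound,
# stopping only at the solver's iteration cap
ml = LogisticRegression(penalty=None, max_iter=10_000).fit(x, y)
print("unpenalized coefficient:", ml.coef_.ravel())

# a weak L2 (ridge) penalty keeps the estimate finite
pen = LogisticRegression(penalty="l2", C=1.0).fit(x, y)
print("penalized coefficient:", pen.coef_.ravel())
```

This also illustrates the software point made in the abstract: the unpenalized fit returns a huge but finite number with no warning that the true ML estimate is infinite.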
A model for effective planning of SME support services.
Rakićević, Zoran; Omerbegović-Bijelović, Jasmina; Lečić-Cvetković, Danica
2016-02-01
This paper presents a model for effective planning of support services for small and medium-sized enterprises (SMEs). The idea is to scrutinize and measure the suitability of support services in order to give recommendations for the improvement of the support planning process. We examined the applied support services and matched them with the problems and needs of SMEs, based on a survey conducted in 2013 on a sample of 336 SMEs in Serbia. We defined and analysed five research questions that refer to support services, their consistency with the SMEs' problems and needs, and the relation between the given support and SMEs' success. The survey results have shown a statistically significant connection between them. Based on this result, we proposed an eight-phase model as a method for the improvement of support service planning for SMEs. This model helps SMEs to better plan their support requirements, and helps government and administration bodies at all levels, as well as organizations that provide support services, to better understand SMEs' problems and support needs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVMs have attracted a lot of attention for their discriminative power in dealing with small-sample pattern recognition problems. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values on the real-world data sets adopted to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
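The abstract does not give the Hadamard kernel's closed form, so the sketch below only illustrates the mechanics of plugging a custom kernel into an SVM via scikit-learn's callable-kernel interface; the kernel function itself is a hypothetical placeholder, not the proposed Hadamard kernel.

```python
import numpy as np
from sklearn.svm import SVC

def placeholder_kernel(X, Y):
    # hypothetical kernel for illustration only (an RBF-like form);
    # the Hadamard kernel's formula would be substituted here
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2)

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 100))               # small sample, high dimension
y = (X[:, 0] + 0.3 * rng.normal(size=40) > 0).astype(int)

clf = SVC(kernel=placeholder_kernel).fit(X, y)   # SVC accepts a callable Gram function
print("training accuracy:", clf.score(X, y))
```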
Profeta, Gerson S.; Pereira, Jessica A. S.; Costa, Samara G.; Azambuja, Patricia; Garcia, Eloi S.; Moraes, Caroline da Silva; Genta, Fernando A.
2017-01-01
Glycoside Hydrolases (GHs) are enzymes able to recognize and cleave glycosidic bonds. Insect GHs play decisive roles in digestion, in plant-herbivore interactions, and in host-pathogen interactions. GH activity is normally measured by detecting products released from the substrate, such as sugar units or colored or fluorescent groups. In most cases, the conditions for product release and detection differ, resulting in discontinuous assays. The current protocols require large amounts of reaction mixture to obtain the time points in each experimental replicate. These procedures constrain the analysis of biological materials with limited amounts of protein and, in the case of studies of small insects, imply pooling samples from several individuals. Consequently, most studies do not assess the variability of GH activities across the population of individuals from the same species. The aim of this work is to approach this technical problem and gain a deeper understanding of the variation of GH activities in insect populations, using as models the disease vectors Rhodnius prolixus (Hemiptera: Triatominae) and Lutzomyia longipalpis (Diptera: Phlebotominae). Here we standardized continuous assays using 4-methylumbelliferyl-derived substrates for the detection of α-Glucosidase, β-Glucosidase, α-Mannosidase, N-acetyl-hexosaminidase, β-Galactosidase, and α-Fucosidase in the midgut of R. prolixus and L. longipalpis, with results similar to the traditional discontinuous protocol. The continuous assays allowed us to measure GH activities using minimal sample amounts with a higher number of measurements, resulting in more reliable data and less time and reagent consumption. The continuous assay also allows high-throughput screening of GH activities in small insect samples, which was not feasible with the previous discontinuous protocol. We applied continuous GH measurements to 90 individual samples of R. prolixus anterior midgut homogenates using a high-throughput protocol. α-Glucosidase and α-Mannosidase activities showed a normal distribution in the population. β-Glucosidase, β-Galactosidase, N-acetyl-hexosaminidase, and α-Fucosidase activities showed non-normal distributions. These results indicate that GH fluorescence-based high-throughput assays are applicable to insect samples and that the frequency distribution of digestive activities should be considered in data analysis, especially if a small number of samples is used. PMID:28553236
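In a continuous fluorescent assay the activity of each sample is simply the slope of its fluorescence-versus-time trace, and the population distribution of activities can then be tested for normality, as done above for the 90 individuals. The sketch below uses synthetic traces to show the mechanics; all numbers are invented, not the paper's data.

```python
import numpy as np
from scipy import stats

t = np.arange(0, 600, 30)                        # s, continuous plate-reader reads
rng = np.random.default_rng(5)
# 90 synthetic individual traces: lognormally distributed rates plus read noise
rates_true = 5.0 * rng.lognormal(0.0, 0.4, size=90)
traces = rates_true[:, None] * t + rng.normal(0.0, 20.0, (90, t.size))

rates = np.polyfit(t, traces.T, 1)[0]            # one fitted slope per individual
W, p = stats.shapiro(rates)
print(f"Shapiro-Wilk W = {W:.3f}, p = {p:.3g} ->",
      "consistent with normality" if p > 0.05 else "non-normal distribution")
```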
A Methodological Approach for Training Analysts of Small Business Problems.
ERIC Educational Resources Information Center
Mackness, J. R.
1986-01-01
Steps in a small business analysis are discussed: understand how company activities interact internally and with markets and suppliers; know the relative importance of controllable management variables; understand the social atmosphere within the company; analyze the operations of the company; define main problem areas; identify possible actions…
Enhancing the Values of Intercollegiate Athletics at Small Colleges.
ERIC Educational Resources Information Center
Moulton, Phillips P.
Solutions to specific problems associated with intercollegiate athletics, primarily men's spectator sports and particularly football, are proposed in order to enhance the values of the sports programs at small colleges. After a historical summary of recurrent problems, recent proposals are noted. It is argued that most proposals for dealing with…
Sampling Mars: Analytical requirements and work to do in advance
NASA Technical Reports Server (NTRS)
Koeberl, Christian
1988-01-01
Sending a mission to Mars to collect samples and return them to the Earth for analysis is without doubt one of the most exciting and important tasks for planetary science in the near future. Many scientifically important questions are associated with knowledge of the composition and structure of Martian samples. Among the most exciting is the clarification of the SNC problem: to prove or disprove a possible Martian origin of these meteorites. Since SNC meteorites have been used to infer the chemistry of the planet Mars and its evolution (including the accretion history), it would be important to know whether the whole story is true. But before addressing possible scientific results, we have to deal with the analytical requirements, and with possible pre-return work. It is unrealistic to expect that a Mars sample return mission will bring back anything close to the amount returned by the Apollo missions. It will be more like the amount returned by the Luna missions, or at least of that order of magnitude. This requires very careful sample selection and very precise analytical techniques. These techniques should be able to use minimal sample sizes while optimizing the scientific output. The possibility of working with extremely small samples should not obscure another problem: possible sampling errors. As we know from terrestrial geochemical studies, sampling procedures are complicated and elaborate precisely to avoid sampling errors. The significance of analyzing a milligram or submilligram sized sample and relating the result to the genesis of whole planetary crusts has to be viewed with care. This leaves a dilemma: on the one hand, to minimize the sample size as far as possible in order to return as many different samples as possible, and on the other hand, to take samples large enough to be representative. Whole rock samples are very useful but should not exceed the 20 to 50 g range, except in cases of extreme inhomogeneity, because for larger samples the information tends to become redundant. Soil samples should be in the 2 to 10 g range, permitting the splitting of the returned samples for studies in different laboratories with a variety of techniques.
Carpenter, Danielle; Walker, Susan; Prescott, Natalie; Schalkwijk, Joost; Armour, John Al
2011-08-18
Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion.
2011-01-01
Background Copy number variation (CNV) contributes to the variation observed between individuals and can influence human disease progression, but the accurate measurement of individual copy numbers is technically challenging. In the work presented here we describe a modification to a previously described paralogue ratio test (PRT) method for genotyping the CCL3L1/CCL4L1 copy variable region, which we use to ascertain CCL3L1/CCL4L1 copy number in 1581 European samples. As the products of CCL3L1 and CCL4L1 potentially play a role in autoimmunity we performed case control association studies with Crohn's disease, rheumatoid arthritis and psoriasis clinical cohorts. Results We evaluate the PRT methodology used, paying particular attention to accuracy and precision, and highlight the problems of differential bias in copy number measurements. Our PRT methods for measuring copy number were of sufficient precision to detect very slight but systematic differential bias between results from case and control DNA samples in one study. We find no evidence for an association between CCL3L1 copy number and Crohn's disease, rheumatoid arthritis or psoriasis. Conclusions Differential bias of this small magnitude, but applied systematically across large numbers of samples, would create a serious risk of false positive associations in copy number, if measured using methods of lower precision, or methods relying on single uncorroborated measurements. In this study the small differential bias detected by PRT in one sample set was resolved by a simple pre-treatment by restriction enzyme digestion. PMID:21851606
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed, Abdul Aziz; Al Rashid Megat Ahmad, Megat Harun; Md Idris, Faridah
2010-01-05
The Malaysian Nuclear Agency's (Nuclear Malaysia) Small Angle Neutron Scattering (SANS) facility, MYSANS, utilizes a low flux of thermal neutrons at the agency's 1 MW TRIGA reactor. The design of the 8 m SANS facility allows object resolutions in the range between 5 and 80 nm to be obtained. It can be used to study alloys, ceramics and polymers in problem areas involving samples containing strong scatterers or contrast. The current SANS system at the Malaysian Nuclear Agency is only capable of measuring Q over a limited range, with a PSD (128x128) fixed at 4 m from the sample. The existing reactor hall housing the MYSANS facility has a layout that prohibits rebuilding MYSANS; therefore the distances between the wavelength selector (HOPG), the sample, and the PSD cannot be increased to obtain a wider Q range. The neutron flux at the current sample holder is very low, around 10³ n/cm²/sec. It is thus important to rebuild MYSANS to maximize the utilization of neutrons. Over the years, the facility has undergone maintenance and some changes have been made. The secondary shutter and its control have been modified to improve the safety level of the instrument. A compact micro-focus SANS method can suit this objective, together with an improved cryostat system. This paper explains some design concepts and approaches for achieving higher flux and the modifications needed to establish the micro-focus SANS.
Mapping raised bogs with an iterative one-class classification approach
NASA Astrophysics Data System (ADS)
Mack, Benjamin; Roscher, Ribana; Stenzel, Stefanie; Feilhauer, Hannes; Schmidtlein, Sebastian; Waske, Björn
2016-10-01
Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as its core classifier. In an iterative pre-classification step, a large part of the pixels not belonging to the class of interest is classified. The remaining data are classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state-of-the-art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for efficient and improved mapping of small classes such as raised bogs. Overall, the proposed approach constitutes a feasible and useful modification of a regular one-class classifier.
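The core classifier in the study is a biased Support Vector Machine trained iteratively; as a rough stand-in for that workflow, the sketch below shows the simpler one-class SVM pattern on synthetic data, training on positive pixels only and then classifying a whole scene. Feature values and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(6)
# a few labeled pixels of the class of interest (e.g. raised-bog spectra)
pos = rng.normal(loc=[0.6, 0.3], scale=0.05, size=(60, 2))
# the full image, flattened to pixel feature vectors
scene = rng.uniform(0.0, 1.0, size=(100_000, 2))

occ = OneClassSVM(kernel="rbf", gamma=50.0, nu=0.1).fit(pos)
labels = occ.predict(scene)                   # +1 inside the class, -1 outside
print("pixels assigned to the class:", int((labels == 1).sum()))
```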
Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S
2016-11-01
There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.
Multisystemic Therapy for social, emotional, and behavioral problems in youth aged 10-17.
Littell, J H; Popa, M; Forsythe, B
2005-10-19
Multisystemic Therapy (MST) is an intensive, home-based intervention for families of youth with social, emotional, and behavioral problems. Masters-level therapists engage family members in identifying and changing individual, family, and environmental factors thought to contribute to problem behavior. Intervention may include efforts to improve communication, parenting skills, peer relations, school performance, and social networks. Most MST trials were conducted by program developers in the USA; results of one independent trial are available and others are in progress. To provide unbiased estimates of the impacts of MST on restrictive out-of-home living arrangements, crime and delinquency, and other behavioral and psychosocial outcomes for youth and families. Electronic searches were made of bibliographic databases (including the Cochrane Library, C2-SPECTR, PsycINFO, Science Direct and Sociological Abstracts) as well as government and professional websites, from 1985 to January 2003. Reference lists of articles were examined, and experts were contacted. Studies where youth (age 10-17) with social, emotional, and/or behavioral problems were randomised to licensed MST programs or other conditions (usual services or alternative treatments). Two reviewers independently reviewed 266 titles and abstracts; 95 full-text reports were retrieved, and 35 unique studies were identified. Two reviewers independently read all study reports for inclusion. Eight studies were eligible for inclusion. Two reviewers independently assessed study quality and extracted data from these studies. Significant heterogeneity among studies was identified (assessed using chi-square and I²), hence random effects models were used to pool data across studies. Odds ratios were used in analyses of dichotomous outcomes; standardised mean differences were used with continuous outcomes. Adjustments were made for small sample sizes (using Hedges g). Pooled estimates were weighted with inverse variance methods, and 95% confidence intervals were used. The most rigorous (intent-to-treat) analysis found no significant differences between MST and usual services in restrictive out-of-home placements and arrests or convictions. Pooled results that include studies with data of varying quality tend to favor MST, but these relative effects are not significantly different from zero. The study sample size is small and effects are not consistent across studies; hence, it is not clear whether MST has clinically significant advantages over other services. There is inconclusive evidence of the effectiveness of MST compared with other interventions with youth. There is no evidence that MST has harmful effects.
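The pooling steps named in the abstract (Hedges' g small-sample correction, inverse-variance weights, random-effects models under heterogeneity) follow standard formulas, sketched below; this is a generic illustration with invented study-level numbers, not the review's code or data.

```python
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    # standardized mean difference with small-sample correction
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    g = d * (1 - 3 / (4 * (n1 + n2) - 9))
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def random_effects(gs, vs):
    # DerSimonian-Laird random-effects pooling with inverse-variance weights
    gs, vs = np.asarray(gs, float), np.asarray(vs, float)
    w = 1.0 / vs
    fixed = np.sum(w * gs) / w.sum()
    q = np.sum(w * (gs - fixed) ** 2)
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)      # between-study variance
    w_star = 1.0 / (vs + tau2)
    pooled = np.sum(w_star * gs) / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# illustrative study-level effects (invented numbers)
gs, vs = zip(*[hedges_g(0.4, 0.0, 1.0, 1.0, 30, 30),
               hedges_g(0.1, 0.0, 1.0, 1.0, 50, 50),
               hedges_g(-0.2, 0.0, 1.0, 1.0, 25, 25)])
print(random_effects(gs, vs))
```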
An assessment of stream water quality of the Rio San Juan, Nuevo Leon, Mexico, 1995-1996.
Flores Laureano, José Santos; Návar, José
2002-01-01
Good water quality of the Rio San Juan is critical for economic development of northeastern Mexico. However, water quality of the river has rapidly degraded during the last few decades. Societal concerns include indications of contamination problems and increased water diversions for agriculture, residential, and industrial water supplies. Eight sampling sites were selected along the river where water samples were collected monthly for 10 mo (October 1995-July 1996). The concentration of heavy metals and chemical constituents and measurements of bacteriological and physical parameters were determined on water samples. In addition, river discharge was recorded. Constituent concentrations in 18.7% of all samples exceeded at least one water quality standard. In particular, concentrations of fecal and total coliform bacteria, sulfate, detergent, dissolved solids, Al, Ba, Cr, Fe, and Cd exceeded several water quality standards. Pollution showed spatial and temporal variations and trends. These variations were statistically explained by spatial and temporal changes of constituent inputs and discharge. Samples collected from the site upstream of El Cuchillo reservoir had large constituent concentrations when discharge was small; this reservoir supplies domestic and industrial water to the city of Monterrey.
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
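For context, the expensive operation that standard NNROP solvers repeat is singular value thresholding, the proximal step of the nuclear norm, which requires a full SVD; the active subspace factorization is designed precisely to avoid forming it on large matrices. Below is a sketch of that baseline step only, not of the paper's algorithm.

```python
import numpy as np

def svt(M, tau):
    """Proximal step of the nuclear norm: soft-threshold the singular values.
    This full-SVD step is what becomes the bottleneck at large scale."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    r = int((s > 0).sum())
    # the rank-r result already has the factored form X = Q @ (diag(s) Vt)
    # with Q = U[:, :r] orthonormal, i.e. the "active subspace"
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = svt(np.random.default_rng(5).normal(size=(200, 150)), tau=5.0)
print("rank after thresholding:", np.linalg.matrix_rank(X))
```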
Propulsion engineering study for small-scale Mars missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, J.
1995-09-12
Rocket propulsion options for small-scale Mars missions are presented and compared, particularly for the terminal landing maneuver and for sample return. Mars landing has a low propulsive Δv requirement on a roughly 1-minute time scale, but at a high acceleration. High thrust/weight liquid rocket technologies, or advanced pulse-capable solids, developed during the past decade for missile defense, are therefore more appropriate for small Mars landers than are conventional space propulsion technologies. The advanced liquid systems are characterized by compact lightweight thrusters having high chamber pressures and short lifetimes. Blowdown or regulated pressure-fed operation can satisfy the Mars landing requirement, but hardware mass can be reduced by using pumps. Aggressive terminal landing propulsion designs can enable post-landing hop maneuvers for some surface mobility. The Mars sample return mission requires a small high performance launcher having either solid motors or miniature pump-fed engines. Terminal propulsion for 100 kg Mars landers is within the realm of flight-proven thruster designs, but custom tankage is desirable. Landers on a 10 kg scale also are feasible, using technology that has been demonstrated but not previously flown in space. The number of sources and the selection of components are extremely limited on this smallest scale, so some customized hardware is required. A key characteristic of kilogram-scale propulsion is that gas jets are much lighter than liquid thrusters for reaction control. The mass and volume of tanks for inert gas can be eliminated by systems which generate gas as needed from a liquid or a solid, but these have virtually no space flight history. Mars return propulsion is a major engineering challenge; earth launch is the only previously-solved propulsion problem requiring similar or greater performance.
Zhang, Di; Wang, Xingxiang; Zhou, Zhigao
2017-01-01
Industrialized small-scale pig farming has been rapidly developed in developing regions such as China and Southeast Asia, but the environmental problems accompanying pig farming have not been fully recognized. This study investigated 168 small-scale pig farms and 29 example pig farms in Yujiang County of China to examine current and potential impacts of pig wastes on soil, water and crop qualities in the hilly red soil region, China. The results indicated that the small-scale pig farms produced considerable annual yields of wastes, with medians of 216, 333 and 773 ton yr−1 per pig farm for manure, urine and washing wastewater, respectively, which has had a significant impact on surface water quality. Taking NH4+-N, total nitrogen (TN) or total phosphorus (TP) as a criterion to judge water quality, the proportions of Class III and below Class III waters in the local surface waters were 66.2%, 78.7% and 72.5%. The well water (shallow groundwater) quality near these pig farms met the water quality standards by a wide margin. The annual output of pollutants from pig farms was the most important factor correlated with the nutrients and heavy metals in soils, and the relationship can be described by a linear equation. The impact on croplands was marked by the excessive accumulation of available phosphorus and heavy metals such as Cu and Zn. For crop safety, the over-limit ratio of Zn in vegetable samples reached 60%; the other heavy metals tested in vegetable and rice samples met the food safety standard at present. PMID:29211053
Zhang, Di; Wang, Xingxiang; Zhou, Zhigao
2017-12-06
Industrialized small-scale pig farming has been rapidly developed in developing regions such as China and Southeast Asia, but the environmental problems accompanying pig farming have not been fully recognized. This study investigated 168 small-scale pig farms and 29 example pig farms in Yujiang County of China to examine current and potential impacts of pig wastes on soil, water and crop qualities in the hilly red soil region, China. The results indicated that the small-scale pig farms produced considerable annual yields of wastes, with medians of 216, 333 and 773 ton yr⁻¹ per pig farm for manure, urine and washing wastewater, respectively, which has had a significant impact on surface water quality. Taking NH₄⁺-N, total nitrogen (TN) or total phosphorus (TP) as a criterion to judge water quality, the proportions of Class III and below Class III waters in the local surface waters were 66.2%, 78.7% and 72.5%. The well water (shallow groundwater) quality near these pig farms met the water quality standards by a wide margin. The annual output of pollutants from pig farms was the most important factor correlated with the nutrients and heavy metals in soils, and the relationship can be described by a linear equation. The impact on croplands was marked by the excessive accumulation of available phosphorus and heavy metals such as Cu and Zn. For crop safety, the over-limit ratio of Zn in vegetable samples reached 60%; the other heavy metals tested in vegetable and rice samples met the food safety standard at present.
NASA Astrophysics Data System (ADS)
Yuan, Chao; Chareyre, Bruno; Darve, Félix
2016-09-01
A pore-scale model is introduced for two-phase flow in dense packings of polydisperse spheres. The model is developed as a component of a more general hydromechanical coupling framework based on the discrete element method, which will be elaborated in future papers and will apply to various processes of interest in soil science, in geomechanics and in oil and gas production. Here the emphasis is on the generation of a network of pores mapping the void space between spherical grains, and the definition of local criteria governing the primary drainage process. The pore space is decomposed by Regular Triangulation, from which a set of pores connected by throats are identified. A local entry capillary pressure is evaluated for each throat, based on the balance of capillary pressure and surface tension at equilibrium. The model reflects the possible entrapment of disconnected patches of the receding wetting phase. It is validated by a comparison with drainage experiments. In the last part of the paper, a series of simulations are reported to illustrate size and boundary effects, key questions when studying small samples made of spherical particles, be it in simulations or experiments. Repeated tests on samples of different sizes yield evolutions of water content that are not only scattered but also strongly biased for small sample sizes. More than 20,000 spheres are needed to reduce the bias on saturation below 0.02. Additional statistics are generated by subsampling a large sample of 64,000 spheres. They suggest that the minimal sampling volume for evaluating saturation is one hundred times greater than the sampling volume needed for measuring porosity with the same accuracy. This requirement in terms of sample size induces a need for efficient computer codes. The method described herein has a low algorithmic complexity in order to satisfy this requirement. It will be well suited to further developments toward coupled flow-deformation problems in which evolution of the microstructure requires frequent updates of the pore network.
Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie
2018-01-01
Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km2, 4.50 km2, and 1.87 km2, respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content. PMID:29652811
Zhang, Zhenming; Zhou, Yunchao; Wang, Shijie; Huang, Xianfei
2018-04-13
Karst areas are typical ecologically fragile areas, and stony desertification has become one of the most serious ecological and economic problems in these areas worldwide as well as a source of disasters and poverty. A reasonable sampling scale is of great importance for research on soil science in karst areas. In this paper, the spatial distribution of stony desertification characteristics and its influencing factors in karst areas are studied at different sampling scales using a grid sampling method based on geographic information system (GIS) technology and geo-statistics. The rock exposure obtained through sampling over a 150 m × 150 m grid in the Houzhai River Basin was utilized as the original data, and five grid scales (300 m × 300 m, 450 m × 450 m, 600 m × 600 m, 750 m × 750 m, and 900 m × 900 m) were used as the subsample sets. The results show that the rock exposure does not vary substantially from one sampling scale to another, while the average values of the five subsamples all fluctuate around the average value of the entire set. As the sampling scale increases, the maximum value and the average value of the rock exposure gradually decrease, and there is a gradual increase in the coefficient of variability. At the scale of 150 m × 150 m, the areas of minor stony desertification, medium stony desertification, and major stony desertification in the Houzhai River Basin are 7.81 km², 4.50 km², and 1.87 km², respectively. The spatial variability of stony desertification at small scales is influenced by many factors, and the variability at medium scales is jointly influenced by gradient, rock content, and rock exposure. At large scales, the spatial variability of stony desertification is mainly influenced by soil thickness and rock content.
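The multi-scale subsampling itself is mechanically simple: starting from the full 150 m grid, coarser grids are obtained by striding, and the mean and coefficient of variability are recomputed at each scale. The sketch below reproduces that mechanics on a synthetic field; the values are invented and say nothing about the Houzhai River Basin data.

```python
import numpy as np

# synthetic rock-exposure field on a 150 m grid (values are illustrative)
rng = np.random.default_rng(6)
field = rng.gamma(shape=2.0, scale=10.0, size=(60, 60))   # 60 x 60 cells of 150 m

for step in (1, 2, 3, 4, 5, 6):      # 150, 300, 450, 600, 750, 900 m spacings
    sub = field[::step, ::step]
    cv = sub.std() / sub.mean()
    print(f"{150 * step:>4} m grid: n={sub.size:4d}  mean={sub.mean():6.2f}  CV={cv:.3f}")
```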
An optical fiber-based LSPR aptasensor for simple and rapid in-situ detection of ochratoxin A.
Lee, Bobin; Park, Jin-Ho; Byun, Ju-Young; Kim, Joon Heon; Kim, Min-Gon
2018-04-15
Label-free biosensing methods that rely on the use of localized surface plasmon resonance (LSPR) have attracted great attention as a result of their simplicity, high sensitivity, and relatively low cost. However, in-situ analysis of real samples using these techniques has remained challenging because colloidal nanoparticles (NPs) can be unstable at certain levels of pH and salt concentration. Even in the case of a chip-type LSPR sensor that can resolve the instability problem by employing NPs immobilized on the substrate, loading a sample onto the sensor chip with exact volume control can be difficult for unskilled users. Herein, we report an optical-fiber-based LSPR aptasensor that can avoid these problems and serve as a portable and simple system for sensitive detection of a small mycotoxin, ochratoxin A (OTA), in real samples. The optical fiber coated with aptamer-modified gold nanorods (GNRs) is simply dipped into a solution containing OTA and subjected to LSPR analysis. Quantitative analysis of OTA is performed by measuring the spectral red shift of the LSPR peak of the GNRs. Under optimized conditions, the LSPR peak shift displays a linear response (R² = 0.9887) to OTA in the concentration range from 10 pM to 100 nM, with a limit of detection of 12.0 pM (3S). The developed sensor shows a high selectivity for OTA over other mycotoxins such as zearalenone (ZEN) and ochratoxin B (OTB), and shows an accurate detection capability for OTA in real grape juice samples. Copyright © 2017 Elsevier B.V. All rights reserved.
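A minimal sketch of the calibration arithmetic reported above, assuming a log-linear response and made-up shift readings; the 3×SD-of-blank rule is one common way to derive a detection limit and is not necessarily the authors' exact procedure.

```python
import numpy as np

# Hypothetical calibration data: LSPR peak shift (nm) vs OTA concentration.
conc_m = np.array([1e-11, 1e-10, 1e-9, 1e-8, 1e-7])   # 10 pM .. 100 nM
shift_nm = np.array([0.42, 1.05, 1.61, 2.18, 2.80])    # invented responses

x = np.log10(conc_m)
slope, intercept = np.polyfit(x, shift_nm, 1)
pred = slope * x + intercept
r2 = 1 - np.sum((shift_nm - pred) ** 2) / np.sum((shift_nm - shift_nm.mean()) ** 2)

# 3-sigma limit of detection: smallest shift distinguishable from blank noise.
sd_blank = 0.05                      # assumed SD of repeated blank readings (nm)
lod_shift = 3 * sd_blank
lod_conc = 10 ** ((lod_shift - intercept) / slope)
print(f"R^2 = {r2:.4f}, LOD ~ {lod_conc:.2e} M")
```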
Development and Psychometric Evaluation of the Brief Adolescent Gambling Screen (BAGS)
Stinchfield, Randy; Wynne, Harold; Wiebe, Jamie; Tremblay, Joel
2017-01-01
The purpose of this study was to develop and evaluate the initial reliability, validity and classification accuracy of a new brief screen for adolescent problem gambling. The three-item Brief Adolescent Gambling Screen (BAGS) was derived from the nine-item Gambling Problem Severity Subscale (GPSS) of the Canadian Adolescent Gambling Inventory (CAGI) using a secondary analysis of existing CAGI data. The sample of 105 adolescents included 49 females and 56 males from Canada who completed the CAGI, a self-administered measure of DSM-IV diagnostic criteria for Pathological Gambling, and a clinician-administered diagnostic interview including the DSM-IV diagnostic criteria for Pathological Gambling (both of which were adapted to yield DSM-5 Gambling Disorder diagnosis). A stepwise multivariate discriminant function analysis selected three GPSS items as the best predictors of a diagnosis of Gambling Disorder. The BAGS demonstrated satisfactory estimates of reliability, validity and classification accuracy and was equivalent to the nine-item GPSS of the CAGI and the BAGS was more accurate than the SOGS-RA. The BAGS estimates of classification accuracy include hit rate = 0.95, sensitivity = 0.88, specificity = 0.98, false positive rate = 0.02, and false negative rate = 0.12. Since these classification estimates are preliminary, derived from a relatively small sample size, and based upon the same sample from which the items were selected, it will be important to cross-validate the BAGS with larger and more diverse samples. The BAGS should be evaluated for use as a screening tool in both clinical and school settings as well as epidemiological surveys. PMID:29312064
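For readers unfamiliar with the screening statistics quoted above, the following sketch recomputes them from one hypothetical confusion matrix; the counts are chosen only so that n = 105 and the results match the reported estimates, since the true cell counts are not given in the abstract.

```python
# Hypothetical confusion-matrix counts consistent with the reported figures.
tp, fn = 22, 3      # disordered gamblers flagged / missed by the screen
tn, fp = 78, 2      # non-disordered correctly passed / wrongly flagged

sensitivity = tp / (tp + fn)                  # 0.88 in the study
specificity = tn / (tn + fp)                  # 0.98 in the study
hit_rate = (tp + tn) / (tp + tn + fp + fn)    # overall agreement, 0.95
false_pos_rate = fp / (fp + tn)               # 1 - specificity, 0.02
false_neg_rate = fn / (fn + tp)               # 1 - sensitivity, 0.12
print(sensitivity, specificity, hit_rate, false_pos_rate, false_neg_rate)
```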
Robinson, N J; Dean, R S; Cobb, M; Brennan, M L
2016-09-01
It is currently unclear how frequently a diagnosis is made during small-animal consultations or how much of a role making a diagnosis plays in veterinary decision-making. Understanding more about the diagnostic process will help direct future research towards areas relevant to practicing veterinary surgeons. The aim of this study was to determine the frequency with which a diagnosis was made, classify the types of diagnosis made (and the factors influencing these) and determine which specific diagnoses were made for health problems discussed during small-animal consultations. Data were gathered during real-time direct observation of small-animal consultations in eight practices in the United Kingdom. Data collected included characteristics of the consultation (e.g. consultation type), patient (e.g. breed), and each problem discussed (e.g. new or pre-existing problem). Each problem discussed was classified into one of the following diagnosis types: definitive; working; presumed; open; previous. A three-level multivariable logistic-regression model was developed, with problem (Level 1) nested within patient (Level 2) nested within consulting veterinary surgeon (Level 3). Problems without a previous diagnosis, in cats and dogs only, were included in the model, which had a binary outcome variable of definitive diagnosis versus no definitive diagnosis. Data were recorded for 1901 animals presented, and data on diagnosis were gathered for 3192 health problems. Previous diagnoses were the most common diagnosis type (n=1116/3192; 35.0%), followed by open (n=868/3192; 27.2%) then definitive (n=660/3192; 20.7%). The variables remaining in the final model were patient age, problem history, consultation type, who raised the problem, and body system affected. New problems, problems in younger animals, and problems raised by the veterinary surgeon were more likely to result in a definitive diagnosis than pre-existing problems, problems in older animals, and problems raised by the owner. The most common diagnoses made were overweight/obese and periodontal disease (both n=210; 6.6%). Definitive diagnoses are rarely made during small-animal consultations, with much of the veterinary caseload involving management of ongoing problems or making decisions around new problems prior to a diagnosis being made. This needs to be taken into account when considering future research priorities, and it may be necessary to conduct research focused on the approach to common clinical presentations, rather than purely on the common diagnoses made. Examining how making a diagnosis affects the actions taken during the consultation may shed further light on the role of diagnosis in the clinical decision-making process. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
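A hedged sketch of a random-intercept logistic model of definitive diagnosis, simplified to two levels (problems nested within consulting veterinary surgeon) and fitted on synthetic data; the variable names (`definitive`, `new_problem`, `age_years`, `vet`) are invented for illustration, and the study's actual three-level specification is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "definitive": rng.integers(0, 2, n),     # 1 = definitive diagnosis made
    "new_problem": rng.integers(0, 2, n),    # 1 = new, 0 = pre-existing
    "age_years": rng.uniform(0.5, 15.0, n),  # patient age
    "vet": rng.integers(0, 8, n),            # consulting vet (cluster)
})

# Random intercept per veterinary surgeon; fixed effects echo the study's
# finding that new problems and younger patients predict definitive diagnosis.
model = BinomialBayesMixedGLM.from_formula(
    "definitive ~ new_problem + age_years", {"vet": "0 + C(vet)"}, df)
result = model.fit_vb()                      # variational Bayes fit
print(result.summary())
```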
Radar system components to detect small and fast objects
NASA Astrophysics Data System (ADS)
Hülsmann, Axel; Zech, Christian; Klenner, Mathias; Tessmann, Axel; Leuther, Arnulf; Lopez-Diaz, Daniel; Schlechtweg, Michael; Ambacher, Oliver
2015-05-01
Small, fast objects, for example bullets of caliber 5 to 10 mm fired from guns like the AK-47, can cause serious problems for aircraft in asymmetric warfare. Slow, large aircraft in particular, such as heavy transport helicopters, are an easy target for small-caliber handheld firearms. These aircraft produce so much noise that the crew is not able to recognize an attack until serious problems occur and important systems of the aircraft fail. This is just one of many scenarios where the detection of fast and small objects is desirable. Another scenario is the collision of space debris particles with satellites.
Shi, Zumin; Lien, Nanna; Kumar, Bernadette Nirmal; Dalen, Ingvild; Holmboe-Ottesen, Gerd
2005-10-01
The objective of this article was to describe the relationship between sociodemographic factors and nutritional status (body mass index [BMI], height for age, and anemia) in adolescents. In 2002, a cross-sectional study was conducted in which 824 students aged 12 to 14 years from 8 schools in 2 prefectures in Jiangsu province of China had their height, weight, and hemoglobin levels measured. Self-administered questionnaires were used to collect sociodemographic information. The prevalence of underweight was low in the overall sample (5.2%). The prevalence of stunting also was low (2.9%), and the differences between residential areas and sociodemographic groups were small. The percentage of overweight/obesity was higher among boys (17.9%) than girls (8.9%). Male students having fathers with a high educational level had the highest percentage of overweight and obesity (27.8%). Household socioeconomic status (SES) was associated positively with BMI. Family size, gender, and the father's level of education also were related to BMI. The percentage of anemia was somewhat higher among girls (23.4%) than boys (17.2%). Anemia coexisted with underweight. No urban/rural or SES differences in the percentage of students with anemia were observed in the sample, but differences between regions and schools were very significant. Undernutrition was not a problem in the research area. Nutritional status was associated with SES and region. Overnutrition and anemia in adolescents are important nutritional problems in Jiangsu, China. Intervention programs are needed to address these problems.
Systems Biology and Ratio-Based, Real-Time Disease Surveillance.
Fair, J M; Rivas, A L
2015-08-01
Most infectious disease surveillance methods are not well suited to early detection. To address this limitation, here we evaluated a ratio- and Systems Biology-based method that does not require prior knowledge of the identity of an infective agent. Using a reference group of birds experimentally infected with West Nile virus (WNV) and a problem group of unknown health status (except that they were WNV-negative and displayed inflammation), both groups were followed over 22 days and tested with a system that analyses blood leucocyte ratios. To test the ability of the method to discriminate small data sets, both the reference group (n = 5) and the problem group (n = 4) were small. The questions of interest were as follows: (i) whether individuals presenting inflammation (disease-positive or D+) can be distinguished from non-inflamed (disease-negative or D-) birds, (ii) whether two or more D+ stages can be detected and (iii) whether sample size influences detection. Within the problem group, the ratio-based method distinguished the following: (i) three (one D- and two D+) data classes; (ii) two (early and late) inflammatory stages; (iii) fast versus regular or slow responders; and (iv) individuals that recovered from those that remained inflamed. Because ratios differed in larger magnitudes (up to 48 times larger) than percentages, it is suggested that data patterns are likely to be recognized when disease surveillance methods are designed to measure inflammation and utilize ratios. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
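The following toy computation (not from the paper) illustrates why leucocyte ratios can change by much larger magnitudes than percentages; the differential counts are hypothetical.

```python
# Hypothetical avian differential counts (cells/uL) before and after infection.
baseline = {"lymphocytes": 3500, "monocytes": 500, "heterophils": 5000}
inflamed = {"lymphocytes": 1200, "monocytes": 900, "heterophils": 9000}

def percent(counts, cell):
    return 100 * counts[cell] / sum(counts.values())

# Percentages shift modestly...
p0 = percent(baseline, "monocytes")   # ~5.6%
p1 = percent(inflamed, "monocytes")   # ~8.1%
# ...but the heterophil/lymphocyte ratio jumps severalfold.
r0 = baseline["heterophils"] / baseline["lymphocytes"]   # ~1.4
r1 = inflamed["heterophils"] / inflamed["lymphocytes"]   # 7.5
print(p1 / p0, r1 / r0)   # ratio-based change is several times larger
```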
Microstructural abnormalities of the brain white matter in attention-deficit/hyperactivity disorder
Chen, Lizhou; Huang, Xiaoqi; Lei, Du; He, Ning; Hu, Xinyu; Chen, Ying; Li, Yuanyuan; Zhou, Jinbo; Guo, Lanting; Kemp, Graham J.; Gong, Qiyong
2015-01-01
Background Attention-deficit/hyperactivity disorder (ADHD) is an early-onset neurodevelopmental disorder with multiple behavioural problems and executive dysfunctions for which neuroimaging studies have reported a variety of abnormalities, with inconsistencies partly owing to confounding by medication and concurrent psychiatric disease. We aimed to investigate the microstructural abnormalities of white matter in unmedicated children and adolescents with pure ADHD and to explore the association between these abnormalities and behavioural symptoms and executive functions. Methods We assessed children and adolescents with ADHD and healthy controls using psychiatric interviews. Behavioural problems were rated using the revised Conners’ Parent Rating Scale, and executive functions were measured using the Stroop Colour-Word Test and the Wisconsin Card Sorting test. We acquired diffusion tensor imaging data using a 3 T MRI system, and we compared diffusion parameters, including fractional anisotropy (FA) and mean, axial and radial diffusivities, between the 2 groups. Results Thirty-three children and adolescents with ADHD and 35 healthy controls were included in our study. In patients compared with controls, FA was increased in the left posterior cingulum bundle as a result of both increased axial diffusivity and decreased radial diffusivity. In addition, the averaged FA of the cluster in this region correlated with behavioural measures as well as executive function in patients with ADHD. Limitations This study was limited by its cross-sectional design and small sample size. The cluster size of the significant result was small. Conclusion Our findings suggest that white matter abnormalities within the limbic network could be part of the neural underpinning of behavioural problems and executive dysfunction in patients with ADHD. PMID:25853285
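For reference, the diffusion scalars named above (FA and the mean, axial and radial diffusivities) follow from the tensor eigenvalues by standard formulas; a minimal sketch with hypothetical eigenvalues:

```python
import numpy as np

def dti_metrics(evals):
    """Standard DTI scalars from the three tensor eigenvalues (l1>=l2>=l3)."""
    l1, l2, l3 = np.sort(evals)[::-1]
    md = (l1 + l2 + l3) / 3             # mean diffusivity
    ad = l1                             # axial diffusivity
    rd = (l2 + l3) / 2                  # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
                 / (l1**2 + l2**2 + l3**2))
    return fa, md, ad, rd

# Hypothetical eigenvalues (um^2/ms) for a coherent white-matter voxel;
# note how raising l1 (axial) and lowering l2, l3 (radial) raises FA,
# the pattern reported for the left posterior cingulum bundle.
print(dti_metrics([1.7, 0.4, 0.3]))
```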
Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B
2015-06-01
The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
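A simplified sketch of the processing chain described above (wavelet transform, dimension reduction, shrinkage-regularized LDA as a Bayes plug-in classifier) on synthetic unbalanced data; the paper's specific spatial filtering and mixed-model decomposition are not reproduced, and PCA stands in for the spatial-channel projection.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# Hypothetical EEG epochs: (trials, channels, time samples); rare target class.
X = rng.standard_normal((60, 8, 256))
y = (rng.random(60) < 0.15).astype(int)          # unbalanced labels

# 1) DWT along time; the coarse approximation stands in for the selected
#    "relevant wavelet subspace".
W = np.stack([pywt.wavedec(ep, "db4", level=4, axis=-1)[0] for ep in X])

# 2) Dimension reduction across the flattened channel x coefficient space
#    (a stand-in for the paper's spatial-channel projection).
Z = PCA(n_components=10).fit_transform(W.reshape(len(X), -1))

# 3) Shrinkage LDA gives the robust class-covariance estimates needed for a
#    Bayes plug-in classifier when trials are few.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(Z, y)
print(clf.score(Z, y))
```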
Bovea, María D; Pérez-Belis, Victoria; Quemades-Beltrán, Pilar
2017-07-01
The European legal framework for Electrical and Electronic Equipment (EEE) and Waste Electrical and Electronic Equipment (WEEE) (Directive 2012/19/EU) prioritises reuse strategies over other valorisation options. Along these lines, this paper examines the awareness and perceptions of reusing small household EEE from the viewpoint of the different stakeholders involved in its end-of-life: repair centres, second-hand shops and consumers. Direct interviews were conducted in which a survey, designed specifically for each stakeholder, was answered by a representative sample of each one. The results obtained from repair centres show that small household EEE are rarely repaired, except for minor repairs such as replacing cables, and that heaters, toasters and vacuum cleaners were those most frequently repaired. The difficulty of accessing cheap spare parts and difficulties during the disassembly process are the commonest problems observed by repair technicians. The results obtained from second-hand shops show that irons, vacuum cleaners and heaters are the small household EEE that are mainly received and sold. The results according to consumers indicate that 9.6% of them take their small household EEE to be repaired, while less than 1% has ever bought a second-hand small household EEE. The main arguments for this attitude are that the repair cost was expected to be similar to the price of a new item (for repairs), and hygiene and cleaning concerns (for second-hand purchases). Copyright © 2017 Elsevier Ltd. All rights reserved.
Small Group Teaching: A Trouble-Shooting Guide, Monograph Series/22.
ERIC Educational Resources Information Center
Tiberius, Richard G.
This guidebook focuses on the problems facing teachers in small group teaching. It is organized in three parts, each dealing with a specific common problem. Part One deals with clarifying instructional goals for the group. Part Two concerns interaction within the group and between the students and the teacher. Part Three encompasses the…
Managing Smallness: Promising Fiscal Practices for Rural School District Administrators.
ERIC Educational Resources Information Center
Freitas, Deborah Inman
Based on a mail survey of over 100 rural school administrators in 34 states, this handbook outlines common problems and successful strategies in the financial management of rural, small school districts. Major problems are related to revenue and cash flow, increasing expenditures, providing quality education programs, and staffing to handle the…
Compressive Sensing via Nonlocal Smoothed Rank Function
Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le
2016-01-01
Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683
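A minimal sketch of the smoothed rank surrogate that this family of methods uses in place of the discontinuous rank function, assuming the common Gaussian smoothing of the zero-indicator of singular values; the full reconstruction model and alternating minimization scheme are omitted.

```python
import numpy as np

def smoothed_rank(X, delta):
    """Smooth surrogate for rank(X): counts singular values that are large
    relative to delta, via a Gaussian smoothing of the 0/1 indicator."""
    s = np.linalg.svd(X, compute_uv=False)
    return len(s) - np.sum(np.exp(-s**2 / (2 * delta**2)))

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 50))  # rank 4

for delta in (10.0, 1.0, 0.1, 0.01):
    print(f"delta={delta:5g}  smoothed rank ~ {smoothed_rank(A, delta):.3f}")
# As delta -> 0 the surrogate approaches the true rank (4), while larger
# delta yields the smooth, differentiable objective a solver can minimize.
```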
Natural fracture systems on planetary surfaces: Genetic classification and pattern randomness
NASA Technical Reports Server (NTRS)
Rossbacher, Lisa A.
1987-01-01
One method for classifying natural fracture systems is by fracture genesis. This approach involves the physics of the formation process, and it has been used most frequently in attempts to predict subsurface fractures and petroleum reservoir productivity. This classification system can also be applied to larger fracture systems on any planetary surface. One problem in applying this classification system to planetary surfaces is that it was developed for relatively small-scale fractures that would influence porosity, particularly as observed in a core sample. Planetary studies also require consideration of large-scale fractures. Nevertheless, this system offers some valuable perspectives on fracture systems of any size.
Big assumptions for small samples in crop insurance
Ashley Elaine Hungerford; Barry Goodwin
2014-01-01
The purpose of this paper is to investigate the effects of crop insurance premiums being determined by small samples of yields that are spatially correlated. If spatial autocorrelation and small sample size are not properly accounted for in premium ratings, the premium rates may inaccurately reflect the risk of a loss.
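The core pricing concern can be illustrated numerically (a sketch, not the authors' model): with positively spatially correlated yields, the naive i.i.d. variance formula understates the variance of the sample mean on which a premium rate might be based. The volatility (0.15) and correlation range (5 units) below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20                                   # small sample of yield observations
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Sigma = 0.15**2 * np.exp(-dist / 5.0)    # exponential spatial correlation

# Simulate many small samples and compare the variance of the sample mean
# with the naive i.i.d. formula used when spatial correlation is ignored.
draws = rng.multivariate_normal(np.zeros(n), Sigma, size=20000)
var_true = draws.mean(axis=1).var()
var_iid = 0.15**2 / n
print(var_true, var_iid)   # the true variance of the mean is several times larger
```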
Standard Deviation for Small Samples
ERIC Educational Resources Information Center
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
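One representation consistent with the abstract's description (though not necessarily the authors' exact form) is the pairwise-difference identity for the sample variance, from which an upper bound on the standard deviation is immediate: each squared pairwise difference is at most the squared range R², so s² ≤ R²/2. A quick check:

```python
from itertools import combinations

def variance_pairwise(xs):
    """Sample variance via pairwise differences:
    s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1))."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

# For n = 3 integer observations this is mental arithmetic:
# s^2 of (2, 4, 9) = ((2-4)^2 + (2-9)^2 + (4-9)^2) / 6 = (4 + 49 + 25) / 6 = 13.
print(variance_pairwise([2, 4, 9]))     # 13.0
```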
Challenging Conventional Wisdom for Multivariate Statistical Models with Small Samples
ERIC Educational Resources Information Center
McNeish, Daniel
2017-01-01
In education research, small samples are common because of financial limitations, logistical challenges, or exploratory studies. With small samples, statistical principles on which researchers rely do not hold, leading to trust issues with model estimates and possible replication issues when scaling up. Researchers are generally aware of such…
Zörnig, Peter
2015-08-01
We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
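A standard ILP formulation of the farthest string problem (choose a string maximizing the minimum Hamming distance to the given strings) can be sketched with PuLP as below; this is the generic textbook model, not the paper's more compact formulation.

```python
import pulp

strings = ["ACGT", "AGGT", "ACGA"]          # toy instance
alphabet = sorted(set("".join(strings)))
L = len(strings[0])
prob = pulp.LpProblem("farthest_string", pulp.LpMaximize)

# x[j][c] = 1 if the output string has character c at position j.
x = pulp.LpVariable.dicts("x", (range(L), alphabet), cat="Binary")
d = pulp.LpVariable("min_distance", lowBound=0, cat="Integer")
prob += d                                   # maximise the minimum distance

for j in range(L):                          # exactly one letter per position
    prob += pulp.lpSum(x[j][c] for c in alphabet) == 1
for s in strings:                           # Hamming distance to each string
    prob += pulp.lpSum(1 - x[j][s[j]] for j in range(L)) >= d

prob.solve(pulp.PULP_CBC_CMD(msg=False))
t = "".join(c for j in range(L) for c in alphabet if x[j][c].value() > 0.5)
print(t, int(d.value()))
```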
Illman, Nathan A; Brown, June S L
2016-09-01
Problem anger is frequently experienced by the general population and is known to cause significant problems for the individual and those around them. Whilst psychological treatments for problem anger are becoming increasingly established, this is still an under-researched area of mental health. We present an evaluation of a series of one-day anger management workshops for the public, targeting problem anger with a cognitive-behavioural approach. The main aim was to evaluate the effectiveness of a brief group-based anger intervention in terms of subjectively reported anger provocation levels and of depression and anxiety. Workshop participants completed a number of questionnaire measures at baseline before the intervention and at 1 month follow-up. The key questionnaires measured self-reported anger provocation levels (Novaco Anger Scale-Provocation Inventory), depressive symptomatology (PHQ-9) and symptoms of generalized anxiety (GAD-7). Change scores were analysed using repeated measures analyses. We found a significant reduction in anger provocation among workshop participants at 1 month follow-up (p = .03). Reductions in depression and anxiety were not statistically significant. We conclude that this brief psychoeducational anger intervention was effective in a small community sample and suggest that future work should assess the effectiveness of similar brief interventions using a larger client group and examine outcomes on a broader range of anger measures.
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade and excellent performances have also been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms will degrade dramatically, or they may not even be applicable. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. The FERET database is used to evaluate the proposed method and the results are encouraging.
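A minimal sketch of the sample-generation idea described above: cropping a local facial component and re-cropping it shifted in four directions yields several low-dimensional training samples from a single image. The region coordinates and shift size below are arbitrary.

```python
import numpy as np

def component_bunch(face, top, left, h, w, shift=2):
    """Crop a local facial component plus its four shifted variants
    (up/down/left/right), emulating localization error while training."""
    variants = []
    for dy, dx in [(0, 0), (-shift, 0), (shift, 0), (0, -shift), (0, shift)]:
        y, x = top + dy, left + dx
        variants.append(face[y:y + h, x:x + w].ravel())
    return np.stack(variants)           # 5 samples, each of dimension h*w

face = np.random.default_rng(5).integers(0, 256, (112, 92)).astype(float)
eye_bunch = component_bunch(face, top=30, left=20, h=16, w=24)
print(eye_bunch.shape)                  # (5, 384): more samples, lower dimension
```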
NASA Technical Reports Server (NTRS)
Itoh, T.; Kubo, H.; Honda, H.; Tominaga, T.; Makide, Y.; Yakohata, A.; Sakai, H.
1985-01-01
Measurements of concentrations of chlorofluoromethanes (CFMs), carbon dioxide and the carbon isotope ratio in stratospheric and tropospheric air by grab-sampling systems are reported. The balloon-borne grab-sampling system has been launched from Sanriku Balloon Center three times since 1981. It consists of: (1) six sampling cylinders, (2) eight motor-driven valves, (3) control and monitor circuits, and (4) a pressurized housing. Particular consideration is paid to the problem of contamination. Strict requirements are placed on the choice of materials and components, construction methods, cleaning techniques, vacuum integrity, and sampling procedures. An aluminum pressurized housing and a 4-m long inlet line are employed to prevent the sampled air from contamination by outgassing of the sampling and control devices. The sampling is performed during the descent of the system. Vertical profiles of the mixing ratios of CF2Cl2, CFCl3 and CH4 are given. Mixing ratios of CF2Cl2 and CFCl3 in the stratosphere do not show a discernible effect of the increase in their ground-level background values, and decrease with altitude. The rate of decrease of CFCl3 is larger than that of CF2Cl2. The CH4 mixing ratio, on the other hand, shows diffusive equilibrium, as the photodissociation cross section of CH4 is small and the concentrations of the OH radical and O(¹D) are low.
Smartphone-enabled optofluidic exosome diagnostic for concussion recovery
NASA Astrophysics Data System (ADS)
Ko, Jina; Hemphill, Matthew A.; Gabrieli, David; Wu, Leon; Yelleswarapu, Venkata; Lawrence, Gladys; Pennycooke, Wesley; Singh, Anup; Meaney, Dave F.; Issadore, David
2016-08-01
A major impediment to improving the treatment of concussion is our current inability to identify patients that will experience persistent problems after the injury. Recently, brain-derived exosomes, which cross the blood-brain barrier and circulate following injury, have shown great potential as a noninvasive biomarker of brain recovery. However, clinical use of exosomes has been constrained by their small size (30-100 nm) and the extensive sample preparation (>24 hr) needed for traditional exosome measurements. To address these challenges, we developed a smartphone-enabled optofluidic platform to measure brain-derived exosomes. Sample-to-answer on our chip is 1 hour, 10x faster than conventional techniques. The key innovation is an optofluidic device that can detect enzyme amplified exosome biomarkers, and is read out using a smartphone camera. Using this approach, we detected and profiled GluR2+ exosomes in the post-injury state using both in vitro and murine models of concussion.
Directional view interpolation for compensation of sparse angular sampling in cone-beam CT.
Bertram, Matthias; Wiegert, Jens; Schafer, Dirk; Aach, Til; Rose, Georg
2009-07-01
In flat detector cone-beam computed tomography and related applications, sparse angular sampling frequently leads to characteristic streak artifacts. To overcome this problem, it has been suggested to generate additional views by means of interpolation. The practicality of this approach is investigated in combination with a dedicated method for angular interpolation of 3-D sinogram data. For this purpose, a novel dedicated shape-driven directional interpolation algorithm based on a structure tensor approach is developed. Quantitative evaluation shows that this method clearly outperforms conventional scene-based interpolation schemes. Furthermore, the image quality trade-offs associated with the use of interpolated intermediate views are systematically evaluated for simulated and clinical cone-beam computed tomography data sets of the human head. It is found that utilization of directionally interpolated views significantly reduces streak artifacts and noise, at the expense of small introduced image blur.
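A sketch of the structure-tensor orientation estimate that drives such shape-adaptive interpolation, using standard gradient and Gaussian smoothing operators; the paper's full directional interpolation scheme for 3-D sinogram data is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(img, sigma=2.0):
    """Local orientation from the smoothed structure tensor; a directional
    interpolator would then average along the estimated edge direction."""
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    J11 = gaussian_filter(gx * gx, sigma)
    J22 = gaussian_filter(gy * gy, sigma)
    J12 = gaussian_filter(gx * gy, sigma)
    # Dominant gradient orientation (mod pi); edges run perpendicular to it.
    return 0.5 * np.arctan2(2 * J12, J11 - J22)

img = np.fromfunction(lambda y, x: np.sin(0.2 * x + 0.1 * y), (64, 64))
theta = structure_tensor_orientation(img)
print(theta.shape, float(theta.mean()))
```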
Smartphone-enabled optofluidic exosome diagnostic for concussion recovery.
Ko, Jina; Hemphill, Matthew A; Gabrieli, David; Wu, Leon; Yelleswarapu, Venkata; Lawrence, Gladys; Pennycooke, Wesley; Singh, Anup; Meaney, Dave F; Issadore, David
2016-08-08
A major impediment to improving the treatment of concussion is our current inability to identify patients that will experience persistent problems after the injury. Recently, brain-derived exosomes, which cross the blood-brain barrier and circulate following injury, have shown great potential as a noninvasive biomarker of brain recovery. However, clinical use of exosomes has been constrained by their small size (30-100 nm) and the extensive sample preparation (>24 hr) needed for traditional exosome measurements. To address these challenges, we developed a smartphone-enabled optofluidic platform to measure brain-derived exosomes. Sample-to-answer on our chip is 1 hour, 10x faster than conventional techniques. The key innovation is an optofluidic device that can detect enzyme amplified exosome biomarkers, and is read out using a smartphone camera. Using this approach, we detected and profiled GluR2+ exosomes in the post-injury state using both in vitro and murine models of concussion.
Thermoluminescence properties of CaO powder obtained from chicken eggshells
NASA Astrophysics Data System (ADS)
Nagabhushana, K. R.; Lokesha, H. S.; Satyanarayana Reddy, S.; Prakash, D.; Veerabhadraswamy, M.; Bhagyalakshmi, H.; Jayaramaiah, J. R.
2017-09-01
Eggshell waste has created a serious disposal problem for the food processing industry, which has prompted researchers to use waste eggshells as a good source of calcium. In the present work, calcium oxide (CaO) was synthesized from chicken eggshells by a combustion process in a furnace (F-CaO) and in a microwave oven (M-CaO). The obtained F-CaO and M-CaO are characterized by XRD, SEM with EDX and the thermoluminescence (TL) technique. The XRD patterns of both samples show a cubic phase with crystallite sizes of 45-52 nm. TL glow curves are recorded for various gamma radiation doses (300-4000 Gy). Two TL glow peaks, a small one at 424 K and a stronger one at 597 K, are observed. The TL response of M-CaO is 2.67 times higher than that of the F-CaO sample. TL kinetic parameters are calculated by computerized curve deconvolution analysis (CCDA) and discussed.
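For readers unfamiliar with glow-curve modelling, a single first-order (Randall-Wilkins) TL peak can be computed as below and two such peaks summed to approximate the reported curve; the trap parameters (E, s) and weights are hypothetical stand-ins chosen to place peaks near 424 K and 597 K, not values fitted by the authors.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def first_order_glow(T, E, s, n0=1.0, beta=1.0):
    """Randall-Wilkins first-order TL glow peak: E in eV, s in 1/s,
    beta = linear heating rate (K/s), T in kelvin."""
    k = 8.617e-5                                  # Boltzmann constant (eV/K)
    boltz = np.exp(-E / (k * T))
    integral = cumulative_trapezoid(boltz, T, initial=0.0)
    return n0 * s * boltz * np.exp(-(s / beta) * integral)

T = np.linspace(300, 700, 2000)
# Hypothetical trap parameters: small peak near 424 K, stronger near 597 K.
glow = (first_order_glow(T, E=0.95, s=1.2e10, n0=0.3)
        + first_order_glow(T, E=1.30, s=4.0e9, n0=1.0))
print(T[np.argmax(glow)])                         # temperature of the main peak
```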
Warth, Arne; Muley, Thomas; Meister, Michael; Weichert, Wilko
2015-01-01
Preanalytic sampling techniques and the preparation of tissue specimens strongly influence analytical results in lung tissue diagnostics, on both the morphological and the molecular level. However, in contrast to analytics, where tremendous achievements in the last decade have led to a whole new portfolio of test methods, developments in preanalytics have been minimal. This is especially unfortunate in lung cancer, where usually only small amounts of tissue are at hand and optimization of all processing steps is mandatory in order to increase the diagnostic yield. In the following, we provide a comprehensive overview of some aspects of preanalytics in lung cancer, from the method of sampling through tissue processing to its impact on analytical test results. We specifically discuss the role of preanalytics in novel technologies like next-generation sequencing and in state-of-the-art cytology preparations. In addition, we point out specific problems in preanalytics which hamper further developments in the field of lung tissue diagnostics.
Zarei, S.; Mortazavi, S. M. J.; Mehdizadeh, A. R.; Jalalipour, M.; Borzou, S.; Taeb, S.; Haghani, M.; Mortazavi, S. A. R.; Shojaei-fard, M. B.; Nematollahi, S.; Alighanbari, N.; Jarideh, S.
2015-01-01
Background Nowadays, mothers are continuously exposed to different sources of electromagnetic fields before and even during pregnancy. It has recently been shown that exposure to mobile phone radiation during pregnancy may lead to adverse effects on brain development in offspring and cause hyperactivity. Researchers have shown that behavioral problems resembling ADHD in laboratory animals are caused by intrauterine exposure to mobile phones. Objective The purpose of this study was to investigate whether maternal exposure to different sources of electromagnetic fields affects the rate and severity of speech problems in offspring. Methods In this study, mothers of 35 healthy 3-5 year old children (control group) and of 77 children diagnosed with speech problems who had been referred to a speech treatment center in Shiraz, Iran were interviewed. These mothers were asked whether they had exposure to different sources of electromagnetic fields such as mobile phones, mobile base stations, Wi-Fi, cordless phones, laptops and power lines. Results We found a significant association between either the call time (P=0.002) or the history of mobile phone use (months used) and speech problems in the offspring (P=0.003). However, other exposures had no effect on the occurrence of speech problems. To the best of our knowledge, this is the first study to investigate a possible association between maternal exposure to electromagnetic fields and speech problems in the offspring. Although a major limitation of our study is the relatively small sample size, this study indicates that maternal exposure to common sources of electromagnetic fields such as mobile phones can affect the occurrence of speech problems in the offspring. PMID:26396971
Zarei, S; Mortazavi, S M J; Mehdizadeh, A R; Jalalipour, M; Borzou, S; Taeb, S; Haghani, M; Mortazavi, S A R; Shojaei-Fard, M B; Nematollahi, S; Alighanbari, N; Jarideh, S
2015-09-01
Nowadays, mothers are continuously exposed to different sources of electromagnetic fields before and even during pregnancy. It has recently been shown that exposure to mobile phone radiation during pregnancy may lead to adverse effects on brain development in offspring and cause hyperactivity. Researchers have shown that behavioral problems resembling ADHD in laboratory animals are caused by intrauterine exposure to mobile phones. The purpose of this study was to investigate whether maternal exposure to different sources of electromagnetic fields affects the rate and severity of speech problems in offspring. In this study, mothers of 35 healthy 3-5 year old children (control group) and of 77 children diagnosed with speech problems who had been referred to a speech treatment center in Shiraz, Iran were interviewed. These mothers were asked whether they had exposure to different sources of electromagnetic fields such as mobile phones, mobile base stations, Wi-Fi, cordless phones, laptops and power lines. We found a significant association between either the call time (P=0.002) or the history of mobile phone use (months used) and speech problems in the offspring (P=0.003). However, other exposures had no effect on the occurrence of speech problems. To the best of our knowledge, this is the first study to investigate a possible association between maternal exposure to electromagnetic fields and speech problems in the offspring. Although a major limitation of our study is the relatively small sample size, this study indicates that maternal exposure to common sources of electromagnetic fields such as mobile phones can affect the occurrence of speech problems in the offspring.
Knowledge and beliefs regarding agricultural pesticides in rural Guatemala
NASA Astrophysics Data System (ADS)
Popper, Roger; Andino, Karla; Bustamante, Mario; Hernandez, Beatriz; Rodas, Luis
1996-03-01
Throughout Central America, the United States Agency for International Development (USAID) and the Zamorano Pan-American Agricultural School support a Safe Pesticide Use program. In 1993, a study of results was carried out among farmers and housewives in eastern Guatemala. Aspects of the methodology included: (1) participation of extension workers in all aspects of the study; (2) small, region-focused samples (eight cells, 30 interviews per cell); (3) comparison to control groups of untrained farmers and housewives; (4) a traditional questionnaire for studying acquisition of specific knowledge; and (5) a flexible instrument for building a cognitive map of knowledge and beliefs regarding pesticides. The cognitive map is a step toward applying modern psychocultural scaling, an approach already well developed for medicine and public health, to environmental problems. Positive results detected include progress in learning the meaning of the colors on containers that denote toxicity and where to store pesticides. Pesticide application problems detected included farmers naming highly toxic, restricted pesticides as appropriate for most pest problems, citing insecticides as the correct solution to fungus problems, and the widespread belief that the correct pesticide dosage depends on the number of pests seen rather than on land or foliage surface area. Health-related problems detected included the admission by a vast majority of housewives that they apply highly toxic pesticides to combat children's head-lice, low awareness that pesticides cause health problems more serious than nausea, dizziness, and headaches, and a common belief that lemonade and coffee are effective medicines for pesticide poisoning.
Working with women prisoners who seriously harm themselves: ratings of staff expressed emotion (EE).
Moore, Estelle; Andargachew, Sara; Taylor, Pamela J
2011-02-01
Prison staff are repeatedly exposed to prisoners' suicidal behaviours; this may impair their capacity to care. Expressed emotion (EE), as a descriptor of the 'emotional climate' between people, has been associated with challenging behaviour in closed environments, but not previously applied to working alliances in a prison. To investigate the feasibility of rating EE between staff and suicidal women in prison; to test the hypothesis that most such staff-inmate alliances would be rated high EE. All regular staff on two small UK prison units with high rates of suicidal behaviour were invited to participate. An audiotaped five-minute speech sample (FMSS) about work with one nominated suicidal prisoner was embedded in a longer research interview, then rated by two trained raters, independent of the interview process and the prison. Seven prison officers and eight clinically qualified staff completed interviews; three refused, but 17 others were not interviewed, for reasons including not having worked long enough with any one such prisoner. Participants and non-participants had similar relevant backgrounds. Contrary to our hypothesis, EE ratings were generally 'low'. As predicted, critical comments were directed at high-frequency oppositional behaviour. EE assessments with prison staff are feasible, but our sample was small and turnover of prisoners high, so the study needs replication. Attributions of problem behaviour to illness, and/or traumatic life experience, tend to confirm generally supportive working relationships in this sample. Copyright © 2010 John Wiley & Sons, Ltd.
Li, Yulong; Zhang, Rui; Peng, Rongxue; Ding, Jiansheng; Han, Yanxi; Wang, Guojing; Zhang, Kuo; Lin, Guigao; Li, Jinming
2016-06-01
Currently, several approaches are being used to detect echinoderm microtubule associated protein like 4 gene (EML4)-anaplastic lymphoma receptor tyrosine kinase gene (ALK) rearrangement, but the performance of laboratories in China is unknown. To evaluate the proficiency of different laboratories in detecting EML4-ALK rearrangement, we organized a proficiency test (PT). We prepared formalin-fixed, paraffin-embedded samples derived from the xenograft tumor tissue of three non-small cell lung cancer cell lines with different EML4-ALK rearrangements and used PTs to evaluate the detection performance of laboratories in China. We received results from 94 laboratories that used different methods. Of the participants, 75.53% correctly identified all samples in the PT panel. Among the errors made by participants, false-negative errors were the most likely to occur. According to the methodology applied, 82.86%, 76.67%, 77.78%, and 66.67% of laboratories using reverse transcriptase polymerase chain reaction, fluorescence in situ hybridization, next-generation sequencing, and immunohistochemical analysis, respectively, could analyze all the samples correctly. Moreover, we found that the laboratories' genotyping capacity is high, especially for variant 3. Our PT survey revealed that the performance and methodological problems of laboratories must be addressed to further increase the reproducibility and accuracy of detection of EML4-ALK rearrangement, to ensure reliable results for selection of appropriate patients. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.