NASA Astrophysics Data System (ADS)
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
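A minimal sketch of the selection logic described above, combining an information criterion with a prior hydrologic-consistency screen. The candidate structures, residual sums of squares, and consistency flags are hypothetical stand-ins, not values from the study:

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a least-squares model
    (Gaussian errors): n*log(RSS/n) + 2k."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical candidate structures: (name, n_params, rss, passes_consistency),
# where passes_consistency stands in for tests such as reproducing the
# runoff ratio or baseflow behaviour of the catchment.
candidates = [
    ("bucket-1", 3, 120.0, True),
    ("bucket-2", 5, 95.0, True),
    ("bucket-3", 8, 93.0, False),  # slightly better fit, fails consistency
]
n_obs = 365

admissible = [c for c in candidates if c[3]]          # consistency screen first
best = min(admissible, key=lambda c: aic(c[2], n_obs, c[1]))
print("selected structure:", best[0])
```

The point of the screen is that the best-fitting structure by likelihood alone ("bucket-3" here) is excluded before the information criterion is applied.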
Initial evaluation of discrete orthogonal basis reconstruction of ECT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E.B.; Donohue, K.D.
1996-12-31
Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrant further investigation of DOBR as a means of ECT image reconstruction.
Network-based statistical comparison of citation topology of bibliographic databases
Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko
2014-01-01
Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions of their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or as a scientific evaluation guideline for governments and research agencies. PMID:25263231
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk
We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
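The variance-reduction idea can be illustrated with a two-level Monte Carlo sketch. The two functions below are hypothetical stand-ins for the high-fidelity HDG model and the cheap reduced basis surrogate, not the paper's PDE solvers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two models: s_hf plays the "high-fidelity" output,
# s_rb a cheap, correlated "reduced basis" surrogate (both hypothetical).
def s_hf(x):
    return np.sin(x) + 0.05 * x**2

def s_rb(x):
    return np.sin(x)          # cheaper, slightly biased, strongly correlated

N_rb, N_corr = 100_000, 500   # many cheap samples, few expensive ones
x_rb = rng.normal(size=N_rb)
x_corr = rng.normal(size=N_corr)

# Two-level estimator: E[s_hf] = E[s_rb] + E[s_hf - s_rb], where the
# correction term has small variance because the models are correlated.
estimate = s_rb(x_rb).mean() + (s_hf(x_corr) - s_rb(x_corr)).mean()
print(estimate)
```

Because the correction term's variance is small, only a few expensive evaluations are needed, which is the burden-shifting effect the abstract describes.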
NASA Technical Reports Server (NTRS)
1990-01-01
Structural Reliability Consultants' computer program creates graphic plots showing the statistical parameters of glue laminated timbers, or 'glulam.' The company president, Dr. Joseph Murphy, read in NASA Tech Briefs about work related to analysis of Space Shuttle surface tile strength performed for Johnson Space Center by Rockwell International Corporation. Analysis led to a theory of 'consistent tolerance bounds' for statistical distributions, applicable in industrial testing where statistical analysis can influence product development and use. Dr. Murphy then obtained the Tech Support Package that covers the subject in greater detail. The TSP became the basis for Dr. Murphy's computer program PC-DATA, which he is marketing commercially.
Homologues of insulinase, a new superfamily of metalloendopeptidases.
Rawlings, N D; Barrett, A J
1991-01-01
On the basis of a statistical analysis of an alignment of the amino acid sequences, a new superfamily of metalloendopeptidases is proposed, consisting of human insulinase, Escherichia coli protease III and mitochondrial processing endopeptidases from Saccharomyces and Neurospora. These enzymes do not contain the 'HEXXH' consensus sequence found in all previously recognized zinc metalloendopeptidases. PMID:2025223
Detecting changes in dynamic and complex acoustic environments
Boubenec, Yves; Lawlor, Jennifer; Górska, Urszula; Shamma, Shihab; Englitz, Bernhard
2017-01-01
Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found in a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual-timescale statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments. DOI: http://dx.doi.org/10.7554/eLife.24910.001 PMID:28262095
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, B.; Erni, W.; Krusche, B.; ...
2016-10-28
Simulation results for future measurements of electromagnetic proton form factors at $$\\overline{\\rm P}$$ANDA (FAIR) within the PandaRoot software framework are reported. The statistical precision with which the proton form factors can be determined is estimated. The signal channel p¯p → e+e– is studied on the basis of two different but consistent procedures. The suppression of the main background channel, i.e. p¯p → π+π–, is studied. Furthermore, the background versus signal efficiency, and the statistical and systematic uncertainties on the extracted proton form factors, are evaluated using two different procedures. The results are consistent with those of a previous simulation study using an older, simplified framework. Furthermore, a slightly better precision is achieved in the PandaRoot study over a large range of momentum transfer, assuming the nominal beam conditions and detector performance.
Climate Considerations Of The Electricity Supply Systems In Industries
NASA Astrophysics Data System (ADS)
Asset, Khabdullin; Zauresh, Khabdullina
2014-12-01
The study focuses on the analysis of climate aspects of electricity supply systems in a pellet industry. The developed analysis model consists of two modules: a statistical module for evaluating active power losses and a module for evaluating climate aspects. The statistical module is built on a universal mathematical model of electrical systems and components of industrial load, which forms a basis for detailed accounting of power losses by voltage level. On the basis of the universal model, a set of programs is designed to perform the calculations and experimental research. It helps to obtain the statistical characteristics of the power losses and loads of the electricity supply systems and to define the nature of changes in these characteristics. Within the module, several methods and algorithms are developed for calculating the parameters of equivalent circuits of low- and high-voltage ADC and SD with a massive smooth rotor with laminated poles. The climate aspects module includes an analysis of the experimental data of the power supply system in pellet production. It allows identification of GHG emission reduction parameters: operation hours, type of electrical motors, values of load factor, and deviation from the standard value of voltage.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
2002-01-01
The work presented here formulates the rigorous statistical basis for the correct estimation of communication link SNR of a BPSK, QPSK, and for that matter, any M-ary phase-modulated digital signal from what is known about its statistical behavior at the output of the receiver demodulator. Many methods to accomplish this have been proposed and implemented in the past but all of them are based on tacit and unwarranted assumptions and are thus defective. However, the basic idea is well founded, i.e., the signal at the output of a communications demodulator has convolved within it the prevailing SNR characteristic of the link. The acquisition of the SNR characteristic is of the utmost importance to a communications system that must remain reliable in adverse propagation conditions. This work provides a correct and consistent mathematical basis for the proper statistical 'deconvolution' of the output of a demodulator to yield a measure of the SNR. The use of such techniques will alleviate the need and expense for a separate propagation link to assess the propagation conditions prevailing on the communications link. Furthermore, they are applicable for every situation involving the digital transmission of data over planetary and space communications links.
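As an illustration of recovering SNR from demodulator-output statistics, the sketch below uses the well-known moment-based "M2M4" estimator on simulated BPSK. This estimator is an assumption chosen for demonstration, not necessarily the estimator derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated BPSK at complex baseband with a known SNR (illustration only).
snr_true = 4.0                         # linear, i.e. about 6 dB
n = 200_000
symbols = rng.choice([-1.0, 1.0], size=n)
noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * snr_true)
y = symbols + noise

# M2M4 estimator: for a constant-modulus signal in complex AWGN,
# E|y|^2 = S + N and E|y|^4 = S^2 + 4SN + 2N^2, which can be inverted:
m2 = np.mean(np.abs(y) ** 2)
m4 = np.mean(np.abs(y) ** 4)
S = np.sqrt(max(2 * m2**2 - m4, 0.0))
N = m2 - S
print("estimated SNR:", S / N, "true:", snr_true)
```

The estimator works purely from received-signal moments, which is the sense in which the SNR is "deconvolved" from the demodulator output without a separate propagation link.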
Computation of statistical secondary structure of nucleic acids.
Yamamoto, K; Kitamura, Y; Yoshikura, H
1984-01-01
This paper presents a computer analysis of the statistical secondary structure of nucleic acids. For a given single-stranded nucleic acid, we generated a "structure map" which included all the annealing structures in the sequence. The map was transformed into an "energy map" by rough approximation; here, the energy level of every pairing structure consisting of more than 2 successive nucleic acid pairs was calculated. By using the "energy map", the probability of occurrence of each annealed structure was computed, i.e., the structure was computed statistically. The computation was based on the eight-queens problem from chess. The validity of our computer programme was checked by computing the tRNA structure, which has been well established. Successful application of this programme to small nuclear RNAs of various origins is demonstrated. PMID:6198622
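A toy sketch of the core idea: enumerate mutually compatible base pairs by backtracking, in the spirit of the eight-queens search, then weight each structure statistically. The sequence, pairing rules, and one-term energy model are hypothetical simplifications of the paper's method:

```python
from itertools import combinations
import math

SEQ = "GGGAAACCC"
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}

def structures(seq):
    """Enumerate all sets of mutually compatible base pairs by backtracking,
    each base used at most once (analogous to queens not attacking)."""
    cand = [(i, j) for i, j in combinations(range(len(seq)), 2)
            if (seq[i], seq[j]) in PAIRS and j - i > 3]   # minimal loop size
    out = []
    def extend(k, used, cur):
        out.append(list(cur))
        for m in range(k, len(cand)):
            i, j = cand[m]
            if i not in used and j not in used:
                extend(m + 1, used | {i, j}, cur + [(i, j)])
    extend(0, frozenset(), [])
    return out

# Toy Boltzmann weighting: each pair contributes a fixed stabilisation energy.
kT, e_pair = 0.6, -1.0
structs = structures(SEQ)
w = [math.exp(-e_pair * len(s) / kT) for s in structs]
Z = sum(w)
p_paired = sum(wi for s, wi in zip(structs, w) if s) / Z
print(f"{len(structs)} structures; P(at least one pair) = {p_paired:.3f}")
```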
The (mis)reporting of statistical results in psychology journals.
Bakker, Marjan; Wicherts, Jelte M
2011-09-01
In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
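The consistency check described above can be sketched as follows: recompute the p value from the reported test statistic and degrees of freedom and compare it with the reported p value. The tolerance handling here is a deliberately crude assumption (a real check must respect the rounding of reported values):

```python
from scipy import stats

def check_t_report(t, df, p_reported, alpha=0.05, tol=0.0005):
    """Recompute the two-sided p value from a reported t statistic and
    degrees of freedom, then flag (in)consistencies, statcheck-style."""
    p = 2 * stats.t.sf(abs(t), df)
    inconsistent = abs(p - p_reported) > tol       # crude tolerance
    gross = inconsistent and ((p < alpha) != (p_reported < alpha))
    return p, inconsistent, gross

# Hypothetical reported result: t(28) = 2.20, p = .04
p, bad, gross = check_t_report(2.20, 28, 0.04)
print(f"recomputed p = {p:.4f}; inconsistent: {bad}; decision error: {gross}")
```

A "gross" inconsistency in this sense is one that flips the significance decision, the kind of error the abstract reports in around 15% of articles.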
Scout trajectory error propagation computer program
NASA Technical Reports Server (NTRS)
Myler, T. R.
1982-01-01
Since 1969, flight experience has been used as the basis for predicting Scout orbital accuracy. The data used for calculating the accuracy consists of errors in the trajectory parameters (altitude, velocity, etc.) at stage burnout as observed on Scout flights. Approximately 50 sets of errors are used in Monte Carlo analysis to generate error statistics in the trajectory parameters. A covariance matrix is formed which may be propagated in time. The mechanization of this process resulted in computer program Scout Trajectory Error Propagation (STEP) and is described herein. Computer program STEP may be used in conjunction with the Statistical Orbital Analysis Routine to generate accuracy in the orbit parameters (apogee, perigee, inclination, etc.) based upon flight experience.
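A minimal sketch of the covariance step: form a covariance matrix from sampled burnout errors and propagate it through a linearised mapping to orbit-parameter errors. The error scales and the mapping matrix A are hypothetical, not STEP's actual values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical burnout errors from ~50 past flights: columns are
# (altitude [m], velocity [m/s], flight-path angle [rad]).
flight_errors = rng.normal(scale=[150.0, 2.0, 0.001], size=(50, 3))

# Covariance matrix of the trajectory-parameter errors ...
P = np.cov(flight_errors, rowvar=False)

# ... propagated through a (hypothetical) linearised mapping A from
# burnout errors to orbit-parameter errors: P_orbit = A P A^T.
A = np.array([[1.0, 30.0, 5.0e3],
              [0.5, 20.0, 2.0e3]])
P_orbit = A @ P @ A.T
print(P_orbit)
```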
Geboy, Nicholas J.; Engle, Mark A.; Hower, James C.
2013-01-01
Several standard methods require coal to be ashed prior to geochemical analysis. Researchers, however, are commonly interested in the compositional nature of the whole-coal, not its ash. Coal geochemical data for any given sample can, therefore, be reported in the ash basis on which it is analyzed or the whole-coal basis to which the ash basis data are back calculated. Basic univariate (mean, variance, distribution, etc.) and bivariate (correlation coefficients, etc.) measures of the same suite of samples can be very different depending which reporting basis the researcher uses. These differences are not real, but an artifact resulting from the compositional nature of most geochemical data. The technical term for this artifact is subcompositional incoherence. Since compositional data are forced to a constant sum, such as 100% or 1,000,000 ppm, they possess curvilinear properties which make the Euclidean principles on which most statistical tests rely inappropriate, leading to erroneous results. Applying the isometric logratio (ilr) transformation to compositional data allows them to be represented in Euclidean space and evaluated using traditional tests without fear of producing mathematically inconsistent results. When applied to coal geochemical data, the issues related to differences between the two reporting bases are resolved as demonstrated in this paper using major oxide and trace metal data from the Pennsylvanian-age Pond Creek coal of eastern Kentucky, USA. Following ilr transformation, univariate statistics, such as mean and variance, still differ between the ash basis and whole-coal basis, but in predictable and calculated manners. Further, the stability between two different components, a bivariate measure, is identical, regardless of the reporting basis. The application of ilr transformations addresses both the erroneous results of Euclidean-based measurements on compositional data as well as the inconsistencies observed on coal geochemical data reported on different bases.
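A minimal sketch of the ilr transformation, using one standard orthonormal (pivot) basis. The compositions below are hypothetical, and the example demonstrates only the key property used in the paper: ilr coordinates are invariant to the reporting basis, since the two bases differ by a scalar factor absorbed by closure:

```python
import numpy as np

def ilr(x):
    """Isometric logratio transform of a composition x (all parts > 0),
    using a standard pivot (Helmert-type) orthonormal basis."""
    x = np.asarray(x, dtype=float)
    x = x / x.sum()                           # closure to a constant sum
    D = x.size
    z = np.empty(D - 1)
    for i in range(1, D):
        gm = np.exp(np.mean(np.log(x[:i])))   # geometric mean of first i parts
        z[i - 1] = np.sqrt(i / (i + 1.0)) * np.log(gm / x[i])
    return z

ash_basis = np.array([55.0, 25.0, 15.0, 5.0])        # hypothetical wt% in ash
whole_coal = ash_basis * 0.12                        # back-calculated basis
print(np.allclose(ilr(ash_basis), ilr(whole_coal)))  # True
```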
Anharmonic vibrational spectra and mode-mode couplings analysis of 2-aminopyridine
NASA Astrophysics Data System (ADS)
Faizan, Mohd; Alam, Mohammad Jane; Afroz, Ziya; Bhat, Sheeraz Ahmad; Ahmad, Shabbir
2018-01-01
Vibrational spectra of 2-aminopyridine (2AP) have been analyzed using vibrational self-consistent field (VSCF) theory, correlation-corrected vibrational self-consistent field (CC-VSCF) theory, and second-order vibrational perturbation theory (VPT2) within the B3LYP/6-311G(d,p) framework. The mode-mode couplings affect the vibrational frequencies and intensities. The coupling integrals between pairs of normal modes have been obtained on the basis of the two-mode-representation quartic force field (2MR-QFF) approximation. The overtone and combination bands are also assigned in the FTIR spectrum with the help of anharmonic calculations at the VPT2 level. A statistical analysis of deviations shows that the estimated anharmonic frequencies are closer to experiment than those from the harmonic approximation. Furthermore, the anharmonic correction has also been carried out for the dimeric structure of 2AP. The fundamental vibration bands have been assigned on the basis of the potential energy distribution (PED) and visual inspection of the animated modes. Other important molecular properties such as frontier molecular orbitals and the molecular electrostatic potential map have also been analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Roekel, Luke
We have conducted a suite of Large Eddy Simulations (LES) to form the basis of a multi-model comparison. The results have led to proposed model improvements. We have verified that Eulerian-Lagrangian effective diffusivity estimates of mesoscale mixing are consistent with traditional particle statistics metrics. LES and Lagrangian particles will be utilized to better represent the movement of water into and out of the mixed layer.
Nascimento, Monikelly do Carmo Chagas; Boscolo, Solange Maria de Almeida; Haiter-Neto, Francisco; Santos, Emanuela Carla Dos; Lambrichts, Ivo; Pauwels, Ruben; Jacobs, Reinhilde
2017-06-01
The aim of this study was to assess the influence of the number of basis images and the orientation of the skull on the evaluation of cortical alveolar bone in cone beam computed tomography (CBCT). Eleven skulls with a total of 59 anterior teeth were selected. CBCT images were acquired by using 4 protocols, by varying the rotation of the tube-detector arm and the orientation of the skull (protocol 1: 360°/0°; protocol 2: 180°/0°; protocol 3: 180°/90°; protocol 4: 180°/180°). Observers evaluated cortical bone as absent, thin, or thick. Direct observation of the skulls was used as the gold standard. Intra- and interobserver agreement, as well as agreement of scoring between the 3 bone thickness classifications, were calculated by using the κ statistic. The Wilcoxon signed-rank test was used to compare the 4 protocols. For lingual cortical bone, protocol 1 showed no statistical difference from the gold standard. Higher reliability was found in protocol 3 for absent (κ = 0.80) and thin (κ = 0.47) cortices, whereas for thick cortical bone, protocol 2 was more consistent (κ = 0.60). In buccal cortical bone, protocol 1 obtained the highest agreement for absent cortices (κ = 0.61), whereas protocol 4 was better for thin cortical plates (κ = 0.38) and protocol 2 for thick cortical plates (κ = 0.40). No consistent effect of the number of basis images or head orientation for visual detection of alveolar bone was detected, except for lingual cortical bone, for which full rotation scanning showed improved visualization. Copyright © 2017 Elsevier Inc. All rights reserved.
Insights into Corona Formation through Statistical Analyses
NASA Technical Reports Server (NTRS)
Glaze, L. S.; Stofan, E. R.; Smrekar, S. E.; Baloga, S. M.
2002-01-01
Statistical analysis of an expanded database of coronae on Venus indicates that the populations of Type 1 (with fracture annuli) and Type 2 (without fracture annuli) corona diameters are statistically indistinguishable, and therefore we have no basis for assuming different formation mechanisms. Analysis of the topography and diameters of coronae shows that coronae that are depressions, rimmed depressions, and domes tend to be significantly smaller than those that are plateaus, rimmed plateaus, or domes with surrounding rims. This is consistent with the model of Smrekar and Stofan and inconsistent with predictions of the spreading drop model of Koch and Manga. The diameter range for domes, the initial stage of corona formation, provides a broad constraint on the buoyancy of corona-forming plumes. Coronae are only slightly more likely to be topographically raised than depressions, with Type 1 coronae most frequently occurring as rimmed depressions and Type 2 coronae most frequently occurring with flat interiors and raised rims. Most Type 1 coronae are located along chasmata systems or fracture belts, while Type 2 coronae are found predominantly as isolated features in the plains. Coronae at hotspot rises tend to be significantly larger than coronae in other settings, consistent with a hotter upper mantle at hotspot rises and their active state.
Kappa statistic for clustered matched-pair data.
Yang, Zhao; Zhou, Ming
2014-07-10
Kappa statistic is widely used to assess the agreement between two procedures in the independent matched-pair data. For matched-pair data collected in clusters, on the basis of the delta method and sampling techniques, we propose a nonparametric variance estimator for the kappa statistic without within-cluster correlation structure or distributional assumptions. The results of an extensive Monte Carlo simulation study demonstrate that the proposed kappa statistic provides consistent estimation and the proposed variance estimator behaves reasonably well for at least a moderately large number of clusters (e.g., K ≥50). Compared with the variance estimator ignoring dependence within a cluster, the proposed variance estimator performs better in maintaining the nominal coverage probability when the intra-cluster correlation is fair (ρ ≥0.3), with more pronounced improvement when ρ is further increased. To illustrate the practical application of the proposed estimator, we analyze two real data examples of clustered matched-pair data. Copyright © 2014 John Wiley & Sons, Ltd.
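For illustration, the sketch below computes kappa from pooled cluster tables and obtains a variance by cluster bootstrap. The resampling approach is an assumption chosen for simplicity; it is an alternative to, not the paper's, closed-form delta-method variance estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

def kappa(tab):
    """Cohen's kappa for a 2x2 table of paired binary ratings."""
    n = tab.sum()
    po = np.trace(tab) / n                      # observed agreement
    pe = (tab.sum(0) @ tab.sum(1)) / n**2       # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical clustered matched pairs: clusters[k] is a 2x2 count table
# for cluster k (e.g., sites within a patient rated by two procedures).
clusters = [np.array([[rng.integers(2, 6), rng.integers(0, 2)],
                      [rng.integers(0, 2), rng.integers(2, 6)]])
            for _ in range(60)]

k_hat = kappa(sum(clusters))

# Cluster bootstrap: resample whole clusters so that within-cluster
# correlation is respected in the variance estimate.
boot = []
for _ in range(500):
    idx = rng.choice(len(clusters), size=len(clusters), replace=True)
    boot.append(kappa(sum(clusters[i] for i in idx)))
print(f"kappa = {k_hat:.3f}, bootstrap SE = {np.std(boot, ddof=1):.3f}")
```

Resampling units rather than pairs is what distinguishes a clustered analysis from one that ignores the dependence, the comparison made in the paper's simulations.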
Artificial neural network study on organ-targeting peptides
NASA Astrophysics Data System (ADS)
Jung, Eunkyoung; Kim, Junhyoung; Choi, Seung-Hoon; Kim, Minkyoung; Rhee, Hokyoung; Shin, Jae-Min; Choi, Kihang; Kang, Sang-Kee; Lee, Nam Kyung; Choi, Yun-Jaie; Jung, Dong Hyun
2010-01-01
We report a new approach to studying organ targeting of peptides on the basis of peptide sequence information. The positive control data sets consist of organ-targeting peptide sequences identified by the peroral phage-display technique for four organs, and the negative control data are prepared from random sequences. The capacity of our models to make appropriate predictions is validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). VHSE descriptor produces statistically significant training models and the models with simple neural network architectures show slightly greater predictive power than those with complex ones. The training and test set statistics indicate that our models could discriminate between organ-targeting and random sequences. We anticipate that our models will be applicable to the selection of organ-targeting peptides for generating peptide drugs or peptidomimetics.
Camps; Prevot
1996-08-09
The statistical characteristics of the local magnetic field of Earth during paleosecular variation, excursions, and reversals are described on the basis of a database that gathers the cleaned mean direction and average remanent intensity of 2741 lava flows that have erupted over the last 20 million years. A model consisting of a normally distributed axial dipole component plus an independent isotropic set of vectors with a Maxwellian distribution that simulates secular variation fits the range of geomagnetic fluctuations, in terms of both direction and intensity. This result suggests that the magnitude of secular variation vectors is independent of the magnitude of Earth's axial dipole moment and that the amplitude of secular variation is unchanged during reversals.
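The two-component model can be simulated directly: a normally distributed axial dipole plus iid Gaussian vector components, whose resultant magnitude is then Maxwellian. All numerical parameters below are hypothetical, chosen only to show the construction:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Axial dipole contribution: normally distributed magnitude along z.
dipole = np.zeros((n, 3))
dipole[:, 2] = rng.normal(loc=40.0, scale=5.0, size=n)   # hypothetical units

# Secular variation: iid Gaussian components give an isotropic set of
# vectors whose magnitude follows a Maxwellian distribution.
sv = rng.normal(scale=10.0, size=(n, 3))

field = dipole + sv
intensity = np.linalg.norm(field, axis=1)
inclination = np.degrees(np.arcsin(field[:, 2] / intensity))
print(intensity.mean(), inclination.mean())
```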
Evaluation of ARCAM Deposited Ti-6Al-4V
NASA Technical Reports Server (NTRS)
Slattery, Kevin; Slaughter, Blake; Speorl, Emily; Good, James; Gilley, Scott; McLemore, Carole
2008-01-01
A wide range of Metal Additive Manufacturing (MAM) technologies are becoming available. One of the challenges in using new technologies for aerospace systems is demonstrating that the process and system have the ability to manufacture components that meet the high quality requirements on a statistically significant basis. The most widely used system for small- to medium-sized components is the ARCAM system manufactured in Gothenburg, Sweden. This system features a 4 kW electron-beam gun and has a chamber volume of 250 mm long × 250 mm wide × 250 to 400 mm tall. This paper will describe the basis for the quality and consistency requirements, the experimental and evaluation procedures used for the evaluation, and an analysis of the results for Ti-6Al-4V.
Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K
2006-01-01
Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
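A sketch of the titration idea: fit a nonlinear mixing model to signals measured across known mixtures of two RNA samples. The model form and all numbers are illustrative assumptions, not the paper's exact regression:

```python
import numpy as np
from scipy.optimize import curve_fit

# Titration design: pure samples A and B plus intermediate mixtures.
frac_A = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def model(p, a, b):
    """Expected log2 intensity of one probe when fraction p of sample A is
    mixed with (1 - p) of sample B; a, b are the pure-sample log intensities."""
    return np.log2(p * 2.0**a + (1 - p) * 2.0**b)

# Hypothetical measurements for one probe across the titration series:
rng = np.random.default_rng(5)
y = model(frac_A, 9.0, 7.0) + rng.normal(scale=0.15, size=frac_A.size)

(a_hat, b_hat), _ = curve_fit(model, frac_A, y, p0=(8.0, 8.0))
print(a_hat, b_hat)   # responsiveness ~ |a_hat - b_hat|; residual sd ~ precision
```

Because the fit uses only the probe's own titration response, no cross-platform probe matching is needed, which is the whole-array property the abstract emphasizes.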
Kampen, Jarl K
2010-06-01
We study the empirical consistency of survey based (micro level) indicators of social capital and local government performance on the one, and municipality based (aggregate level) measures of these two concepts on the other hand. Knowledge about the behavior of these indicators is helpful for evaluating the value of studies carried out in isolated contexts, that is, with access to data on either, but not both, levels. The method is by comparing data collected by Statistics Belgium on Flemish municipalities, to data collected at citizen level by means of a face-to-face survey. The available evidence supplies at best a meager basis for presupposing a shared component of the indicators under study.
Quality of reporting statistics in two Indian pharmacology journals.
Jaykaran; Yadav, Preeti
2011-04-01
To evaluate the reporting of the statistical methods in articles published in two Indian pharmacology journals. All original articles published since 2002 were downloaded from the journals' (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) website. These articles were evaluated on the basis of appropriateness of descriptive statistics and inferential statistics. Descriptive statistics was evaluated on the basis of reporting of method of description and central tendencies. Inferential statistics was evaluated on the basis of fulfilling of assumption of statistical methods and appropriateness of statistical tests. Values are described as frequencies, percentage, and 95% confidence interval (CI) around the percentages. Inappropriate descriptive statistics was observed in 150 (78.1%, 95% CI 71.7-83.3%) articles. Most common reason for this inappropriate descriptive statistics was use of mean ± SEM at the place of "mean (SD)" or "mean ± SD." Most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of assumption of statistical test was mentioned in only two articles. Inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6-38.6%) articles. Most common reason for inappropriate statistical test was the use of two group test for three or more groups. Articles published in two Indian pharmacology journals are not devoid of statistical errors.
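The most common error found, reporting mean ± SEM in place of mean ± SD, is easy to illustrate with hypothetical data:

```python
import numpy as np

x = np.array([4.2, 5.1, 3.8, 4.9, 5.5, 4.4])   # hypothetical group data
sd = x.std(ddof=1)
sem = sd / np.sqrt(x.size)

# SD describes the spread of the observations; SEM describes the
# uncertainty of the mean. Reporting mean ± SEM as if it were variability
# understates the spread by a factor of sqrt(n).
print(f"mean = {x.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
```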
Timmermans, Catherine; Doffagne, Erik; Venet, David; Desmet, Lieven; Legrand, Catherine; Burzykowski, Tomasz; Buyse, Marc
2016-01-01
Data quality may impact the outcome of clinical trials; hence, there is a need to implement quality control strategies for the data collected. Traditional approaches to quality control have primarily used source data verification during on-site monitoring visits, but these approaches are hugely expensive as well as ineffective. There is growing interest in central statistical monitoring (CSM) as an effective way to ensure data quality and consistency in multicenter clinical trials. CSM with SMART™ uses advanced statistical tools that help identify centers with atypical data patterns which might be the sign of an underlying quality issue. This approach was used to assess the quality and consistency of the data collected in the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, involving 1495 patients across 232 centers in Japan. In the Stomach Cancer Adjuvant Multi-institutional Trial Group Trial, very few atypical data patterns were found among the participating centers, and none of these patterns were deemed to be related to a quality issue that could significantly affect the outcome of the trial. CSM can be used to provide a check of the quality of the data from completed multicenter clinical trials before analysis, publication, and submission of the results to regulatory agencies. It can also form the basis of a risk-based monitoring strategy in ongoing multicenter trials. CSM aims at improving data quality in clinical trials while also reducing monitoring costs.
Correlation consistent basis sets for the atoms In–Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahler, Andrew; Wilson, Angela K., E-mail: akwilson@unt.edu
In this work, the correlation consistent family of Gaussian basis sets has been expanded to include all-electron basis sets for In–Xe. The methodology for developing these basis sets is described, and several examples of the performance and utility of the new sets have been provided. Dissociation energies and bond lengths for both homonuclear and heteronuclear diatomics demonstrate the systematic convergence behavior with respect to increasing basis set quality expected by the family of correlation consistent basis sets in describing molecular properties. Comparison with recently developed correlation consistent sets designed for use with the Douglas-Kroll Hamiltonian is provided.
Handbook of satellite pointing errors and their statistical treatment
NASA Astrophysics Data System (ADS)
Weinberger, M. C.
1980-03-01
This handbook aims to provide both satellite payload and attitude control system designers with a consistent, unambiguous approach to the formulation, definition and interpretation of attitude pointing and measurement specifications. It reviews and assesses the current terminology and practices, and from them establishes a set of unified terminology, giving the user a sound basis to understand the meaning and implications of various specifications and requirements. Guidelines are presented for defining the characteristics of the error sources influencing satellite pointing and attitude measurement, and their combination in performance verification.
Reliability analysis of structural ceramics subjected to biaxial flexure
NASA Technical Reports Server (NTRS)
Chao, Luen-Yuan; Shetty, Dinesh K.
1991-01-01
The reliability of alumina disks subjected to biaxial flexure is predicted on the basis of statistical fracture theory using a critical strain energy release rate fracture criterion. Results on a sintered silicon nitride are consistent with reliability predictions based on pore-initiated penny-shaped cracks with preferred orientation normal to the maximum principal stress. Assumptions with regard to flaw types and their orientations in each ceramic can be justified by fractography. It is shown that there are no universal guidelines for selecting fracture criteria or assuming flaw orientations in reliability analyses.
Grosz, R; Stephanopoulos, G
1983-09-01
The need for the determination of the free energy of formation of biomass in bioreactor second law balances is well established. A statistical mechanical method for the calculation of the free energy of formation of E. coli biomass is introduced. In this method, biomass is modelled to consist of a system of biopolymer networks. The partition function of this system is proposed to consist of acoustic and optical modes of vibration. Acoustic modes are described by Tarasov's model, the parameters of which are evaluated with the aid of low-temperature calorimetric data for the crystalline protein bovine chymotrypsinogen A. The optical modes are described by considering the low-temperature thermodynamic properties of biological monomer crystals such as amino acid crystals. Upper and lower bounds are placed on the entropy to establish the maximum error associated with the statistical method. The upper bound is determined by endowing the monomers in biomass with ideal gas properties. The lower bound is obtained by limiting the monomers to complete immobility. On this basis, the free energy of formation is fixed to within 10%. Proposals are made with regard to experimental verification of the calculated value and extension of the calculation to other types of biomass.
NASA Astrophysics Data System (ADS)
Trostyansky, S. N.; Kalach, A. V.; Lavlinsky, V. V.; Lankin, O. V.
2018-03-01
Based on the analysis of a dynamic panel-data model by region, incorporating fire statistics for surveillance sites, statistics for a set of regional socio-economic indicators, and the rapid-response time of the state fire service to fires, the probability of fires at the surveillance sites and the risk of human death resulting from such fires are estimated from the previous year's values of the corresponding indicators: the set of regional socio-economic factors and the regional indicators of the rapid-response time of the state fire service. The results obtained are consistent with the results of applying the rational-offender model to fire risks. The estimate of the economic equivalent of human life from data on surveillance sites in Russia, calculated on the basis of the presented dynamic model of fire risks, agrees with known published data. The results obtained on the basis of the econometric approach to fire risks make it possible to forecast fire risks at the surveillance sites in the regions of Russia and to develop management solutions to minimize such risks.
NASA Astrophysics Data System (ADS)
Kushnir, A. F.; Troitsky, E. V.; Haikin, L. M.; Dainty, A.
1999-06-01
A semi-automatic procedure has been developed to achieve statistically optimum discrimination between earthquakes and explosions at local or regional distances based on a learning set specific to a given region. The method is used for step-by-step testing of candidate discrimination features to find the optimum (combination) subset of features, with the decision taken on a rigorous statistical basis. Linear (LDF) and Quadratic (QDF) Discriminant Functions based on Gaussian distributions of the discrimination features are implemented and statistically grounded; the features may be transformed by the Box-Cox transformation z = (1/α)(y^α − 1) to make them more Gaussian. Tests of the method were successfully conducted on seismograms from the Israel Seismic Network using features consisting of spectral ratios between and within phases. Results showed that the QDF was more effective than the LDF and required five features out of 18 candidates for the optimum set. It was found that discrimination improved with increasing distance within the local range, and that eliminating transformation of the features and failing to correct for noise led to degradation of discrimination.
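A compact sketch of the transform-then-discriminate pipeline using scipy and scikit-learn. The simulated features stand in for the spectral ratios, and the per-feature Box-Cox fit is an assumption of this illustration:

```python
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(6)

# Hypothetical spectral-ratio features for explosions (0) and quakes (1);
# positive-valued and skewed, so a Box-Cox transform z = (y**a - 1)/a
# makes them more Gaussian before discriminant analysis.
y0 = rng.lognormal(mean=0.0, sigma=0.4, size=(200, 2))
y1 = rng.lognormal(mean=0.5, sigma=0.4, size=(200, 2))
y = np.vstack([y0, y1])
labels = np.r_[np.zeros(200), np.ones(200)]

# Box-Cox with the transform parameter fitted per feature:
z = np.column_stack([stats.boxcox(y[:, j])[0] for j in range(y.shape[1])])

qdf = QuadraticDiscriminantAnalysis().fit(z, labels)   # the "QDF"
print("training accuracy:", qdf.score(z, labels))
```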
NASA Astrophysics Data System (ADS)
Oberlack, Martin; Rosteck, Andreas; Avsarkisov, Victor
2013-11-01
Text-book knowledge proclaims that Lie symmetries such as the Galilean transformation lie at the heart of fluid dynamics. These important properties also carry over to the statistical description of turbulence, i.e. to the Reynolds stress transport equations and their generalization, the multi-point correlation equations (MPCE). Interestingly, the MPCE admit a much larger, in fact infinite-dimensional, set of symmetries, subsequently named statistical symmetries. Most important, these new symmetries have important consequences for our understanding of turbulent scaling laws. The symmetries form the essential foundation for constructing exact solutions to the infinite set of MPCE, which in turn are identified as classical and new turbulent scaling laws. Examples of various classical and new shear flow scaling laws, including higher-order moments, will be presented. New scaling laws have even been forecast from these symmetries and in turn validated by DNS. Turbulence modellers have implicitly recognized at least one of the statistical symmetries, as this is the basis for the usual log-law which has been employed for calibrating essentially all engineering turbulence models. An obvious conclusion is to generally make turbulence models consistent with the new statistical symmetries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, F.A.; Khaleel, M.A.
This paper describes a statistical evaluation of the through-thickness copper variation for welds in reactor pressure vessels, and reviews the historical basis for the static and arrest fracture toughness (K_Ic and K_Ia) equations used in the VISA-II code. Copper variability in welds is due to fabrication procedures, with copper contents being randomly distributed and variable from one location to another through the thickness of the vessel. The VISA-II procedure of sampling the copper content from a statistical distribution for every 6.35- to 12.7-mm (1/4- to 1/2-in.) layer through the thickness was found to be consistent with the statistical observations. However, the parameters of the VISA-II distribution and statistical limits required further investigation. Copper contents at a few locations through the thickness were found to exceed the 0.4% upper limit of the VISA-II code. The data also suggest that the mean copper content varies systematically through the thickness. While the assumption of normality is not clearly supported by the available data, a statistical evaluation based on all the available data results in mean and standard deviations within the VISA-II code limits.
Quality of reporting statistics in two Indian pharmacology journals
Jaykaran; Yadav, Preeti
2011-01-01
Objective: To evaluate the reporting of the statistical methods in articles published in two Indian pharmacology journals. Materials and Methods: All original articles published since 2002 were downloaded from the journals’ (Indian Journal of Pharmacology (IJP) and Indian Journal of Physiology and Pharmacology (IJPP)) website. These articles were evaluated on the basis of appropriateness of descriptive statistics and inferential statistics. Descriptive statistics was evaluated on the basis of reporting of method of description and central tendencies. Inferential statistics was evaluated on the basis of fulfilling of assumption of statistical methods and appropriateness of statistical tests. Values are described as frequencies, percentage, and 95% confidence interval (CI) around the percentages. Results: Inappropriate descriptive statistics was observed in 150 (78.1%, 95% CI 71.7–83.3%) articles. Most common reason for this inappropriate descriptive statistics was use of mean ± SEM at the place of “mean (SD)” or “mean ± SD.” Most common statistical method used was one-way ANOVA (58.4%). Information regarding checking of assumption of statistical test was mentioned in only two articles. Inappropriate statistical test was observed in 61 (31.7%, 95% CI 25.6–38.6%) articles. Most common reason for inappropriate statistical test was the use of two group test for three or more groups. Conclusion: Articles published in two Indian pharmacology journals are not devoid of statistical errors. PMID:21772766
Giambartolomei, Claudia; Vukcevic, Damjan; Schadt, Eric E; Franke, Lude; Hingorani, Aroon D; Wallace, Chris; Plagnol, Vincent
2014-05-01
Genetic association studies, in particular the genome-wide association study (GWAS) design, have provided a wealth of novel insights into the aetiology of a wide range of human diseases and traits, in particular cardiovascular diseases and lipid biomarkers. The next challenge consists of understanding the molecular basis of these associations. The integration of multiple association datasets, including gene expression datasets, can contribute to this goal. We have developed a novel statistical methodology to assess whether two association signals are consistent with a shared causal variant. An application is the integration of disease scans with expression quantitative trait locus (eQTL) studies, but any pair of GWAS datasets can be integrated in this framework. We demonstrate the value of the approach by re-analysing a gene expression dataset in 966 liver samples with a published meta-analysis of lipid traits including >100,000 individuals of European ancestry. Combining all lipid biomarkers, our re-analysis supported 26 out of 38 reported colocalisation results with eQTLs and identified 14 new colocalisation results, hence highlighting the value of a formal statistical test. In three cases of reported eQTL-lipid pairs (SYPL2, IFT172, TBKBP1) for which our analysis suggests that the eQTL pattern is not consistent with the lipid association, we identify alternative colocalisation results with SORT1, GCKR, and KPNB1, indicating that these genes are more likely to be causal in these genomic intervals. A key feature of the method is the ability to derive the output statistics from single SNP summary statistics, hence making it possible to perform systematic meta-analysis type comparisons across multiple GWAS datasets (implemented online at http://coloc.cs.ucl.ac.uk/coloc/). Our methodology provides information about candidate causal genes in associated intervals and has direct implications for the understanding of complex diseases as well as the design of drugs to target disease pathways.
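A stripped-down sketch of a colocalisation posterior computed from single-SNP summary statistics, using Wakefield approximate Bayes factors. The priors, the effect-size prior variance W, and the simulated region are illustrative assumptions, and the real coloc implementation handles several subtleties omitted here:

```python
import numpy as np

def abf(beta, se, W=0.04):
    """Wakefield's approximate Bayes factor against beta = 0 for one SNP;
    W is a prior variance for the true effect size (hypothetical default)."""
    r = W / (se**2 + W)
    return np.sqrt(1 - r) * np.exp(r * (beta / se) ** 2 / 2)

def coloc_pp4(b1, se1, b2, se2, p1=1e-4, p2=1e-4, p12=1e-5):
    """Posterior probability that two association signals share one causal
    variant (H4); a simplified sketch of the coloc-style calculation."""
    bf1, bf2 = abf(b1, se1), abf(b2, se2)
    s1, s2, s12 = bf1.sum(), bf2.sum(), (bf1 * bf2).sum()
    h = np.array([1.0,                        # H0: no association
                  p1 * s1,                    # H1: trait 1 only
                  p2 * s2,                    # H2: trait 2 only
                  p1 * p2 * (s1 * s2 - s12),  # H3: two distinct variants
                  p12 * s12])                 # H4: one shared variant
    return h[4] / h.sum()

# Hypothetical region of 100 SNPs in which SNP 42 drives both traits:
rng = np.random.default_rng(7)
se = np.full(100, 0.05)
z1, z2 = rng.normal(size=(2, 100))
z1[42] += 8.0
z2[42] += 8.0
print(coloc_pp4(z1 * se, se, z2 * se, se))
```

Only per-SNP effect sizes and standard errors enter the calculation, which is the property that makes systematic comparisons across many GWAS datasets feasible.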
Rapid urban malaria appraisal (RUMA) in sub-Saharan Africa
Wang, Shr-Jie; Lengeler, Christian; Smith, Thomas A; Vounatsou, Penelope; Cissé, Guéladio; Diallo, Diadie A; Akogbeto, Martin; Mtasiwa, Deo; Teklehaimanot, Awash; Tanner, Marcel
2005-01-01
Background The rapid urban malaria appraisal (RUMA) methodology aims to provide a cost-effective tool to conduct rapid assessments of the malaria situation in urban sub-Saharan Africa and to improve the understanding of urban malaria epidemiology. Methods This work was done in Yopougon municipality (Abidjan), Cotonou, Dar es Salaam and Ouagadougou. The study design consists of six components: 1) a literature review, 2) the collection of available health statistics, 3) a risk mapping, 4) school parasitaemia surveys, 5) health facility-based surveys and 6) a brief description of the health care system. These formed the basis of a multi-country evaluation of RUMA's feasibility, consistency and usefulness. Results A substantial amount of literature (including unpublished theses and statistics) was found at each site, providing a good overview of the malaria situation. School and health facility-based surveys provided an overview of local endemicity and the overall malaria burden in different city areas. This helped to identify important problems for in-depth assessment, especially the extent to which malaria is over-diagnosed in health facilities. Mapping health facilities and breeding sites allowed the visualization of the complex interplay between population characteristics, health services and malaria risk. However, the latter task was very time-consuming and required special expertise. RUMA is inexpensive, costing around 8,500–13,000 USD for a six to ten-week period. Conclusion RUMA was successfully implemented in four urban areas with different endemicity and proved to be a cost-effective first approach to study the features of urban malaria and provide an evidence basis for planning control measures. PMID:16153298
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
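Lewis's law, one of the size-shape relations named above, can be fitted directly from segmented cell data. The side counts and areas below are hypothetical:

```python
import numpy as np

# Hypothetical granule data: number of sides n and cell area A for each
# cell, as would be extracted from a segmented granulation image.
n_sides = np.array([4, 5, 5, 6, 6, 6, 6, 7, 7, 8])
area = np.array([0.55, 0.74, 0.80, 1.00, 0.97, 1.05, 0.99, 1.22, 1.30, 1.46])

# Lewis's law: mean normalised area grows linearly with side number,
# A_n / <A> ~ a + b (n - 6). Fit by least squares:
b, a = np.polyfit(n_sides - 6, area / area.mean(), 1)
print(f"Lewis fit: A_n/<A> = {a:.2f} + {b:.2f} (n - 6)")
```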
Trends in stratospheric ozone profiles using functional mixed models
NASA Astrophysics Data System (ADS)
Park, A.; Guillas, S.; Petropavlovskikh, I.
2013-11-01
This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First we approximate ozone profiles employing an appropriate basis. To capture primary modes of ozone variations without losing essential information, a functional principal component analysis is performed. It penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions (empirical basis functions) are obtained. The coefficients (principal component scores) corresponding to the empirical basis functions represent dominant temporal evolution in the shape of ozone profiles. We use those time series coefficients in the second statistical step to reveal the important sources of the patterns and variations in the profiles. We estimate the effects of covariates - month, year (trend), quasi-biennial oscillation, the solar cycle, the Arctic oscillation, the El Niño/Southern Oscillation cycle and the Eliassen-Palm flux - on the principal component scores of ozone profiles using additive mixed effects models. The effects are represented as smooth functions and the smooth functions are estimated by penalized regression splines. We also impose a heteroscedastic error structure that reflects the observed seasonality in the errors. The more complex error structure enables us to provide more accurate estimates of influences and trends, together with enhanced uncertainty quantification. Also, we are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder and Arosa, as well as for total column ozone. There are great variations in the trends across altitudes, which highlights the benefits of modeling ozone profiles.
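A minimal sketch of the first statistical stage: empirical basis functions and principal component scores via an SVD of the centred profiles. The roughness penalty used in the paper is omitted here, and the profile data are simulated:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical Umkehr-like data: 400 monthly profiles sampled at 20
# altitude layers, with variance decreasing with altitude.
profiles = rng.normal(size=(400, 20)) @ np.diag(np.linspace(2.0, 0.2, 20))
anom = profiles - profiles.mean(axis=0)

# Functional PCA via SVD: rows of Vt are the empirical basis functions,
# U*S are the principal-component scores fed to the second-stage
# additive mixed-effects regression.
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
scores = U * S
explained = S**2 / np.sum(S**2)
print("variance explained by first 3 modes:", explained[:3].sum())
```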
The Evolution of Random Number Generation in MUVES
2017-01-01
This report documents random number generation in MUVES, including the mathematical basis and statistical justification for the algorithms used in the code. The working code provided produces results identical to the current... questionable numerical and statistical properties. The development of the modern system is traced through software change requests, resulting in a random number...
The Statistical Basis of Chemical Equilibria.
ERIC Educational Resources Information Center
Hauptmann, Siegfried; Menger, Eva
1978-01-01
Describes a machine which demonstrates the statistical bases of chemical equilibrium, and in doing so conveys insight into the connections among statistical mechanics, quantum mechanics, Maxwell Boltzmann statistics, statistical thermodynamics, and transition state theory. (GA)
Classical Electrodynamics: Lecture notes
NASA Astrophysics Data System (ADS)
Likharev, Konstantin K.
2018-06-01
Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. This volume, Classical Electrodynamics: Lecture notes is intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics charged point particles, but also properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.
34 CFR 668.46 - Institutional security policies and crime statistics.
Code of Federal Regulations, 2010 CFR
2010-07-01
... a voluntary, confidential basis for inclusion in the annual disclosure of crime statistics, and, if... procedures to report crimes on a voluntary, confidential basis for inclusion in the annual disclosure of... the victim's actual or perceived race, gender, religion, sexual orientation, ethnicity, or disability...
Properties of different selection signature statistics and a new strategy for combining them.
Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H
2015-11-01
Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics such as the composite likelihood ratio and cross-population extended haplotype homozygosity have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. Then, we apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
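A sketch of a DCMS-style combination: convert each statistic to an empirical p value and down-weight statistics by their total absolute correlation with the others. The details (one-sided ranking, weighting) are simplified assumptions relative to the published definition:

```python
import numpy as np
from scipy import stats

def dcms(stat_matrix):
    """De-correlated composite of multiple signals: combine several
    selection-signature statistics (rows = loci, columns = statistics)
    while down-weighting mutually correlated statistics."""
    X = np.asarray(stat_matrix, dtype=float)
    n, m = X.shape
    # Empirical one-sided p-values from genome-wide ranks:
    p = 1.0 - (stats.rankdata(X, axis=0) - 0.5) / n
    r = np.corrcoef(X, rowvar=False)        # statistic-statistic correlation
    w = 1.0 / np.abs(r).sum(axis=1)         # de-correlation weights
    return (np.log((1 - p) / p) * w).sum(axis=1)

# Hypothetical scan: 1000 loci, 3 partially redundant statistics, with a
# selection signal injected at locus 500.
rng = np.random.default_rng(9)
base = rng.normal(size=1000)
X = np.column_stack([base + rng.normal(scale=0.5, size=1000) for _ in range(3)])
X[500] += 4.0
print(np.argmax(dcms(X)))   # ideally 500
```

The weighting means that three near-duplicate statistics together count roughly like one, which is the "de-correlation" that distinguishes DCMS from a naive sum.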
Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies
Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung
2017-01-01
A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear which methods, out of a large number of options, are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses and simulations guided by experimental data as a means to investigate different analysis methods’ performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data. PMID:27993672
NASA Astrophysics Data System (ADS)
Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.
2018-05-01
A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and its computational efficiency relative to the conventional hybrid finite element-statistical energy analysis method.
Basic statistics with Microsoft Excel: a review.
Divisi, Duilio; Di Leonardo, Gabriella; Zaccagna, Gino; Crisci, Roberto
2017-06-01
The scientific world is enriched daily with new knowledge, due to new technologies and continuous discoveries. The mathematical functions explain the statistical concepts, particularly those of mean, median and mode, along with those of frequency and frequency distribution associated with histograms and graphical representations, and determine the elaborative processes carried out through spreadsheet operations. The aim of the study is to highlight the mathematical basis of the statistical models that regulate the operation of spreadsheets in Microsoft Excel.
Basic statistics with Microsoft Excel: a review
Di Leonardo, Gabriella; Zaccagna, Gino; Crisci, Roberto
2017-01-01
The scientific world is enriched daily with new knowledge, due to new technologies and continuous discoveries. The mathematical functions explain the statistical concepts, particularly those of mean, median and mode, along with those of frequency and frequency distribution associated with histograms and graphical representations, and determine the elaborative processes carried out through spreadsheet operations. The aim of the study is to highlight the mathematical basis of the statistical models that regulate the operation of spreadsheets in Microsoft Excel. PMID:28740690
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for the associated sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
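The report's variance-component machinery is not reproduced in the abstract; the sketch below illustrates the textbook two-stage (areas, then residents) variance and cost-optimal allocation formulas that this kind of design analysis rests on. The symbols and cost figures are illustrative assumptions, not values from the report.

    import math

    def var_two_stage(sigma2_between, sigma2_within, n_areas, m_per_area):
        """Variance of an estimated mean (or, to first order, a regression
        effect) under a two-stage design: n_areas study areas with
        m_per_area respondents each. Classical components-of-variance form."""
        return sigma2_between / n_areas + sigma2_within / (n_areas * m_per_area)

    def optimal_m(sigma2_between, sigma2_within, cost_area, cost_resident):
        """Respondents per area that minimize variance for a fixed total cost
        C = n_areas*cost_area + n_areas*m*cost_resident (classical result)."""
        return math.sqrt((sigma2_within * cost_area) /
                         (sigma2_between * cost_resident))

    # Example: strong between-area variance favours fewer interviews per area.
    print(optimal_m(sigma2_between=4.0, sigma2_within=25.0,
                    cost_area=200.0, cost_resident=10.0))   # ~11.2
    print(var_two_stage(4.0, 25.0, n_areas=30, m_per_area=11))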
Safety design approach for external events in Japan sodium-cooled fast reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamano, H.; Kubo, S.; Tani, A.
2012-07-01
This paper describes a safety design approach for external events in the design study of the Japan sodium-cooled fast reactor. An emphasis is the introduction of a design extension external condition (DEEC). In addition to seismic design, other external events such as tsunami, strong wind, abnormal temperature, etc. were addressed in this study. From a wide variety of external events consisting of natural hazards and human-induced ones, a screening method was developed in terms of siting, consequence, and frequency to select representative events. Design approaches for these events were categorized on a probabilistic, statistical, or deterministic basis. External hazard conditions were considered mainly for DEECs. In the probabilistic approach, the DEECs of earthquake, tsunami and strong wind were defined at 1/10 of the exceedance probability of the external design bases. The other representative DEECs were also defined based on statistical or deterministic approaches. (authors)
Hopkins, William D; Gardner, Molly; Mingle, Morgan; Reamer, Lisa; Schapiro, Steven J
2013-11-01
There remain considerable questions regarding the evidence for population-level handedness in nonhuman primates when compared with humans. One challenge in comparing human and nonhuman primate handedness involves the procedures used to characterize individual handedness. Studies of human handedness use consistency in hand use within and between tasks as a basis for hand preference classification. In contrast, studies of handedness in nonhuman primates use statistical criteria for classifying handedness. In this study, we examined within- and between-task consistency in hand use as a means of characterizing individual handedness in a sample of 300 captive chimpanzees (Pan troglodytes). Chimpanzees showed population-level right-handedness for both within- and between-task consistency, though the proportion of right-handed chimpanzees was lower than what has typically been reported for humans. We further found that there were small, but significant, associations in hand use between measures. There were no significant sex or colony effects on the distribution of handedness. The results are discussed in the context of theories on the evolution of handedness in nonhuman primates.
Hopkins, William D.; Gardner, Molly; Mingle, Morgan; Reamer, Lisa; Schapiro, Steven J.
2013-01-01
There remain considerable questions regarding the evidence for population-level handedness in nonhuman primates when compared with humans. One challenge in comparing human and nonhuman primate handedness involves the procedures used to characterize individual handedness. Studies of human handedness use consistency in hand use within and between tasks as a basis for hand preference classification. In contrast, studies of handedness in nonhuman primates use statistical criteria for classifying handedness. In this study, we examined within- and between-task consistency in hand use as a means of characterizing individual handedness in a sample of 300 captive chimpanzees (Pan troglodytes). Chimpanzees showed population-level right-handedness for both within- and between-task consistency, though the proportion of right-handed chimpanzees was lower than what has typically been reported for humans. We further found that there were small, but significant, associations in hand use between measures. There were no significant sex or colony effects on the distribution of handedness. The results are discussed in the context of theories on the evolution of handedness in nonhuman primates. PMID:23356440
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
NASA Astrophysics Data System (ADS)
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
[The evaluation of costs: standards of medical care and clinical statistic groups].
Semenov, V Iu; Samorodskaia, I V
2014-01-01
The article presents a comparative analysis of techniques for evaluating the costs of hospital treatment using medical economic standards of medical care and clinical statistical groups. The technique of cost evaluation on the basis of clinical statistical groups was developed almost fifty years ago and is widely applied in a number of countries. Nowadays, in Russia the payment for a completed case of treatment on the basis of medical economic standards is the main mode of payment for medical care in hospital; it is only loosely a Russian analogue of the internationally prevalent system of diagnosis-related groups. The tariffs for these cases of treatment, as opposed to clinical statistical groups, are calculated on the basis of the standards of provision of medical care approved by the Minzdrav of Russia; information derived from generalizing the treatment of real patients is not used.
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
Fitting multidimensional splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
This report demonstrates the successful application of statistical variable selection techniques to fit splines. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs using the B-spline basis were developed, and the one for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
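As a rough illustration of the backward-elimination strategy both reports describe (though not their FORTRAN implementations), the Python sketch below fits a B-spline basis and repeatedly deletes the interior knot whose removal increases the residual sum of squares least, stopping when the associated F statistic exceeds a threshold. The F-to-remove value, knot grid, and test function are assumptions.

    import numpy as np
    from scipy.interpolate import BSpline

    def bspline_design(x, interior_knots, k=3, lo=0.0, hi=1.0):
        """Design matrix of B-spline basis functions with clamped end knots."""
        t = np.r_[[lo] * (k + 1), np.sort(interior_knots), [hi] * (k + 1)]
        n_basis = len(t) - k - 1
        B = np.empty((len(x), n_basis))
        for j in range(n_basis):
            c = np.zeros(n_basis)
            c[j] = 1.0
            B[:, j] = BSpline(t, c, k)(x)
        return B

    def backward_knot_elimination(x, y, knots, k=3, f_to_remove=4.0):
        """Drop interior knots one at a time while the 1-df F statistic for
        the least damaging removal stays below f_to_remove."""
        knots = list(knots)
        while knots:
            B = bspline_design(x, knots, k)
            rss = np.linalg.lstsq(B, y, rcond=None)[1].item()
            best = None
            for i in range(len(knots)):
                trial = knots[:i] + knots[i + 1:]
                Bt = bspline_design(x, trial, k)
                rss_t = np.linalg.lstsq(Bt, y, rcond=None)[1].item()
                df = len(x) - B.shape[1]
                f = (rss_t - rss) / (rss / df)       # F statistic for removal
                if best is None or f < best[0]:
                    best = (f, i)
            if best[0] >= f_to_remove:
                break
            knots.pop(best[1])
        return knots

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(size=200))
    y = np.sin(6 * x) + rng.normal(scale=0.1, size=200)
    print(backward_knot_elimination(x, y, knots=np.linspace(0.1, 0.9, 9)))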
Lagrangian single-particle turbulent statistics through the Hilbert-Huang transform.
Huang, Yongxiang; Biferale, Luca; Calzavarini, Enrico; Sun, Chao; Toschi, Federico
2013-04-01
The Hilbert-Huang transform is applied to analyze single-particle Lagrangian velocity data from numerical simulations of hydrodynamic turbulence. The velocity trajectory is described in terms of a set of intrinsic mode functions C(i)(t) and of their instantaneous frequency ω(i)(t). On the basis of this decomposition we define the ω-conditioned statistical moments of the C(i) modes, named q-order Hilbert spectra (HS). We show that such quantities have enhanced scaling properties as compared to traditional Fourier transform- or correlation-based (structure functions) statistical indicators, thus providing better insights into the turbulent energy transfer process. We present clear empirical evidence that the energylike quantity, i.e., the second-order HS, displays a linear scaling in time in the inertial range, as expected from a dimensional analysis. We also measure high-order moment scaling exponents in a direct way, without resorting to the extended self-similarity procedure. This leads to an estimate of the Lagrangian structure function exponents which are consistent with the multifractal prediction in the Lagrangian frame as proposed by Biferale et al. [Phys. Rev. Lett. 93, 064502 (2004)].
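A minimal Python sketch of the ω-conditioned moments described above, assuming the intrinsic mode functions have already been obtained from an EMD step (not shown here): instantaneous amplitude and frequency come from the analytic signal, and the q-order Hilbert spectrum is the q-th amplitude moment conditioned on frequency bins. The binning choices and toy modes are illustrative assumptions.

    import numpy as np
    from scipy.signal import hilbert

    def q_order_hilbert_spectrum(modes, dt, q=2.0, n_bins=50):
        """q-th moment of instantaneous amplitude conditioned on
        instantaneous frequency, pooled over intrinsic mode functions.
        `modes` is (n_modes, n_samples)."""
        amps, freqs = [], []
        for c in modes:
            a = hilbert(c)                         # analytic signal
            amp = np.abs(a)
            phase = np.unwrap(np.angle(a))
            freq = np.gradient(phase, dt) / (2 * np.pi)
            keep = freq > 0                        # physical frequencies only
            amps.append(amp[keep])
            freqs.append(freq[keep])
        amp = np.concatenate(amps)
        freq = np.concatenate(freqs)
        edges = np.logspace(np.log10(freq.min()), np.log10(freq.max()),
                            n_bins + 1)
        idx = np.digitize(freq, edges) - 1
        spec = np.array([np.mean(amp[idx == i] ** q) if np.any(idx == i)
                         else np.nan for i in range(n_bins)])
        centers = np.sqrt(edges[:-1] * edges[1:])
        return centers, spec

    # Toy usage with two synthetic "modes" (slightly noised sinusoids).
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 1e-3)
    modes = np.array([np.sin(2*np.pi*5*t) + 0.01*rng.normal(size=t.size),
                      0.5*np.sin(2*np.pi*20*t) + 0.01*rng.normal(size=t.size)])
    w, L2 = q_order_hilbert_spectrum(modes, dt=1e-3, q=2.0)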
Insights into the sequence parameters for halophilic adaptation.
Nath, Abhigyan
2016-03-01
The sequence parameters for halophilic adaptation are still not fully understood. To understand the molecular basis of protein hypersaline adaptation, a detailed analysis was carried out to investigate the likely association of protein sequence attributes with halophilic adaptation. A two-stage strategy was implemented: in the first stage a supervised machine learning classifier was built, giving an overall accuracy of 86 % on stratified tenfold cross validation and 90 % on a blind testing set, which are better than previously reported results. The second stage consists of statistical analysis of sequence features and possible extraction of halophilic molecular signatures. The results of this study showed that halophilic proteins are characterized by lower average charge, lower K content, and lower S content. A statistically significant preference/avoidance list of sequence parameters is also reported, giving insights into the molecular basis of halophilic adaptation. D, Q, E, H, P, T, V are significantly preferred while N, C, I, K, M, F, S are significantly avoided. Among amino acid physicochemical groups, small, polar, charged, acidic and hydrophilic groups are preferred over other groups. The halophilic proteins also showed a preference for higher average flexibility and higher average polarity, and avoidance of higher average positive charge, average bulkiness and average hydrophobicity. Some interesting trends observed in dipeptide counts are also reported. Further, a systematic statistical comparison was undertaken to gain insights into the sequence feature distribution in different residue structural states. The current analysis may facilitate a clearer understanding of the mechanism of halophilic adaptation, which can be further used for the rational design of halophilic proteins.
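The abstract does not name the classifier or feature set, so the following is only a stage-one sketch under assumed choices: amino-acid composition features and a random forest, with toy stand-in sequences. Real use would substitute curated halophilic and non-halophilic sequences and the stratified tenfold cross validation mentioned above.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def composition(seq):
        """Fraction of each of the 20 standard amino acids in a sequence."""
        return np.array([seq.count(a) / len(seq) for a in AA])

    # Toy stand-ins: acidic-rich vs lysine-rich sequences (hypothetical).
    halophilic = ["MDDEEDQVTTEDGDDEAV", "MEEDDTQDPEGDVDDAEE"]
    mesophilic = ["MKKLLSIKSFMAKNIKLS", "MSSIKKFLKMANSKIKSL"]
    X = np.array([composition(s) for s in halophilic + mesophilic])
    y = np.array([1] * len(halophilic) + [0] * len(mesophilic))

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    # Feature importances point at composition signals such as D/E enrichment
    # and K avoidance, echoing the preference/avoidance lists above.
    for aa, imp in sorted(zip(AA, clf.feature_importances_),
                          key=lambda t: -t[1])[:5]:
        print(aa, round(imp, 3))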
Marginal evidence for cosmic acceleration from Type Ia supernovae
NASA Astrophysics Data System (ADS)
Nielsen, J. T.; Guffanti, A.; Sarkar, S.
2016-10-01
The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion.
Classical Electrodynamics: Problems with solutions
NASA Astrophysics Data System (ADS)
Likharev, Konstantin K.
2018-06-01
Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. This volume, Classical Electrodynamics: Problems with solutions, accompanies the Lecture notes volume intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics of charged point particles, but also properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.
Over ten thousand cases and counting: acidbase.org is serving the critical care community.
Elbers, Paul W G; Van Regenmortel, Niels; Gatz, Rainer
2015-01-01
Acidbase.org has been serving the critical care community for over a decade. The backbone of this online resource consists of Peter Stewart's original text "How to understand Acid-Base", which is freely available to everyone. In addition, Stewart's Textbook of Acid Base, which puts the theory in today's clinical context, is available for purchase from the website. However, many intensivists use acidbase.org on a daily basis for its educational content and in particular for its analysis module. This review provides an overview of the history of the website, a tutorial, and descriptive statistics of over 10,000 queries submitted to the analysis module.
Marginal evidence for cosmic acceleration from Type Ia supernovae
Nielsen, J. T.; Guffanti, A.; Sarkar, S.
2016-01-01
The ‘standard’ model of cosmology is founded on the basis that the expansion rate of the universe is accelerating at present — as was inferred originally from the Hubble diagram of Type Ia supernovae. There exists now a much bigger database of supernovae so we can perform rigorous statistical tests to check whether these ‘standardisable candles’ indeed indicate cosmic acceleration. Taking account of the empirical procedure by which corrections are made to their absolute magnitudes to allow for the varying shape of the light curve and extinction by dust, we find, rather surprisingly, that the data are still quite consistent with a constant rate of expansion. PMID:27767125
Innovative intelligent technology of distance learning for visually impaired people
NASA Astrophysics Data System (ADS)
Samigulina, Galina; Shayakhmetova, Assem; Nuysuppov, Adlet
2017-12-01
The aim of the study is to develop innovative intelligent technology and information systems of distance education for people with impaired vision (PIV). To solve this problem a comprehensive approach has been proposed, which consists in the combined application of artificial intelligence methods and statistical analysis. Creating an accessible learning environment and identifying the intellectual, physiological, and psychophysiological characteristics of perception and awareness of information by this category of people is based on a cognitive approach. On the basis of fuzzy logic, an individually oriented learning path for PIV is constructed with the aim of providing high-quality engineering education with modern equipment in joint-use laboratories.
... on National Statistics (CNSTAT) to examine conceptual and methodological issues surrounding survey statistics on rape and sexual assault and to recommend to BJS the best methods for obtaining such statistics on an ongoing basis. ...
Augmented burst-error correction for UNICON laser memory. [digital memory
NASA Technical Reports Server (NTRS)
Lim, R. S.
1974-01-01
A single-burst-error correction system is described for data stored in the UNICON laser memory. In the proposed system, a long Fire code with code length n greater than 16,768 bits was used as an outer code to augment an existing inner, shorter Fire code for burst error correction. The inner Fire code is an (80,64) code shortened from the (630,614) code, and it is used to correct a single burst error on a per-word basis with burst length b less than or equal to 6. The outer code, with b less than or equal to 12, would be used to correct a single burst error on a per-page basis, where a page consists of 512 32-bit words. In the proposed system, the encoding and error detection processes are implemented by hardware. A minicomputer, currently used as the UNICON memory management processor, is used on a time-demanding basis for error correction. Based upon existing error statistics, this combination of an inner code and an outer code would enable the UNICON system to attain a very low error rate in spite of flaws affecting the recorded data.
Comparison of fMRI analysis methods for heterogeneous BOLD responses in block design studies.
Liu, Jia; Duffy, Ben A; Bernal-Casas, David; Fang, Zhongnan; Lee, Jin Hyung
2017-02-15
A large number of fMRI studies have shown that the temporal dynamics of evoked BOLD responses can be highly heterogeneous. Failing to model heterogeneous responses in statistical analysis can lead to significant errors in signal detection and characterization and alter the neurobiological interpretation. However, to date it is not clear which methods, out of a large number of options, are robust against variability in the temporal dynamics of BOLD responses in block-design studies. Here, we used rodent optogenetic fMRI data with heterogeneous BOLD responses and simulations guided by experimental data as a means to investigate different analysis methods' performance against heterogeneous BOLD responses. Evaluations are carried out within the general linear model (GLM) framework and consist of standard basis sets as well as independent component analysis (ICA). Analyses show that, in the presence of heterogeneous BOLD responses, conventionally used GLM with a canonical basis set leads to considerable errors in the detection and characterization of BOLD responses. Our results suggest that the 3rd and 4th order gamma basis sets, the 7th to 9th order finite impulse response (FIR) basis sets, the 5th to 9th order B-spline basis sets, and the 2nd to 5th order Fourier basis sets are optimal for good balance between detection and characterization, while the 1st order Fourier basis set (coherence analysis) used in our earlier studies shows good detection capability. ICA has mostly good detection and characterization capabilities, but detects a large volume of spurious activation with the control fMRI data.
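As an illustration of one of the basis sets compared above, here is a minimal Python sketch of an FIR design matrix for a block design, fitted by ordinary least squares. The bin width (one TR per bin), onsets, and synthetic response are assumptions; a real analysis would span the full block/response duration and include nuisance regressors.

    import numpy as np

    def fir_design_matrix(onsets, n_scans, order, tr=1.0):
        """FIR basis set: one regressor per post-onset time bin. Column j
        is 1 for scans j bins after any onset, so OLS against these columns
        estimates the response shape with no assumed hemodynamic model."""
        X = np.zeros((n_scans, order))
        for onset in onsets:
            start = int(round(onset / tr))
            for j in range(order):
                if start + j < n_scans:
                    X[start + j, j] = 1.0
        return X

    # Toy block design: onsets every 60 s, 300 scans at TR = 1 s, 8th-order FIR.
    onsets = np.arange(10, 300, 60)
    X = fir_design_matrix(onsets, n_scans=300, order=8)
    X = np.column_stack([X, np.ones(300)])             # intercept column
    rng = np.random.default_rng(0)
    y = X[:, :8] @ rng.uniform(0.5, 1.5, 8) + rng.normal(scale=0.2, size=300)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # estimated response shape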
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
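A minimal sketch of the eigenvector-based approximation idea, under assumed details: the inverse correlation matrix is expanded in basis matrices v_i v_i^T from the spectral decomposition, and the number of basis matrices is chosen by a simple penalized discrepancy. The paper's actual penalized objective is only described in general terms in the abstract, so the criterion below is illustrative.

    import numpy as np

    def eigenbasis_inverse_approx(R, m):
        """Approximate R^{-1} by the first m eigenvector basis matrices
        v_i v_i^T; with coefficients 1/lambda_i this is the best rank-m
        spectral approximation."""
        lam, V = np.linalg.eigh(R)
        order = np.argsort(lam)[::-1]
        lam, V = lam[order], V[:, order]
        return sum((1.0 / lam[i]) * np.outer(V[:, i], V[:, i])
                   for i in range(m))

    def select_m(R, n_obs, penalty=2.0):
        """Pick the number of basis matrices by penalizing the Frobenius
        discrepancy between R^{-1} and its approximation (illustrative)."""
        p = R.shape[0]
        Rinv = np.linalg.inv(R)
        losses = [np.linalg.norm(Rinv - eigenbasis_inverse_approx(R, m),
                                 "fro") ** 2 + penalty * m / n_obs
                  for m in range(1, p + 1)]
        return int(np.argmin(losses)) + 1

    # AR(1)-style working correlation for clusters of size 5.
    rho = 0.6
    R = rho ** np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
    print(select_m(R, n_obs=200))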
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
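For background (this is not stated in the abstract itself), the (c,d)-entropies referred to here were introduced in Hanel and Thurner's earlier classification work and, up to normalization and additive constants, can be written as

    S_{c,d}[p] \;\propto\; \sum_i \Gamma\!\left(1 + d,\; 1 - c \ln p_i\right),

where \Gamma(a, b) is the incomplete gamma function; the choice (c,d) = (1,1) recovers Boltzmann-Gibbs-Shannon entropy and (c,d) = (q,0) the Tsallis entropy.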
Teaching the Meaning of Statistical Techniques with Microcomputer Simulation.
ERIC Educational Resources Information Center
Lee, Motoko Y.; And Others
Students in an introductory statistics course are often preoccupied with learning the computational routines of specific summary statistics and thereby fail to develop an understanding of the meaning of those statistics or their conceptual basis. To help students develop a better understanding of the meaning of three frequently used statistics,…
49 CFR 1248.1 - Freight commodity statistics.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 9 2011-10-01 2011-10-01 false Freight commodity statistics. 1248.1 Section 1248... STATISTICS § 1248.1 Freight commodity statistics. All class I railroads, as described in § 1240.1 of this... statistics on the basis of the commodity codes named in § 1248.101. Carriers shall report quarterly on the...
49 CFR 1248.1 - Freight commodity statistics.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 9 2010-10-01 2010-10-01 false Freight commodity statistics. 1248.1 Section 1248... STATISTICS § 1248.1 Freight commodity statistics. All class I railroads, as described in § 1240.1 of this... statistics on the basis of the commodity codes named in § 1248.101. Carriers shall report quarterly on the...
NASA Astrophysics Data System (ADS)
Camilo, Joseph A.; Malof, Jordan M.; Torrione, Peter A.; Collins, Leslie M.; Morton, Kenneth D.
2015-05-01
Forward-looking ground penetrating radar (FLGPR) is a remote sensing modality that has recently been investigated for buried threat detection. FLGPR offers greater standoff than other downward-looking modalities such as electromagnetic induction and downward-looking GPR, but it suffers from high false alarm rates due to surface and ground clutter. A stepped-frequency FLGPR system consists of multiple radars with varying polarizations and bands, each of which interacts differently with subsurface materials and therefore might potentially be able to discriminate clutter from true buried targets. However, it is unclear which combinations of bands and polarizations would be most useful for discrimination or how to fuse them. This work applies sparse structured basis pursuit, a supervised statistical model which searches for sets of bands that are collectively effective for discriminating clutter from targets. The algorithm works by trying to minimize the number of selected items in a dictionary of signals; in this case the separate bands and polarizations make up the dictionary elements. A structured basis pursuit algorithm is employed to gather groups of modes together in collections so as to eliminate whole polarizations or sensors. The approach is applied to a large collection of FLGPR data recorded around emplaced targets and non-target clutter. The results show that sparse structured basis pursuit outperforms a conventional CFAR anomaly detector while also pruning out unnecessary bands of the FLGPR sensor.
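A generic sketch of the group-sparse selection idea (not the paper's exact solver or dictionary): proximal gradient descent on a least-squares fit with a group-lasso penalty, where each group stands in for one band/polarization and groups driven to zero are pruned. Problem sizes, the penalty weight, and the data are toy assumptions.

    import numpy as np

    def group_lasso(X, y, groups, lam, n_iter=500):
        """Minimize 0.5*||y - Xw||^2 + lam * sum_g ||w_g||_2 by ISTA with
        block soft-thresholding; zeroed groups are dropped dictionary parts."""
        w = np.zeros(X.shape[1])
        step = 1.0 / np.linalg.norm(X, 2) ** 2         # 1/L, L = sigma_max^2
        for _ in range(n_iter):
            z = w - step * (X.T @ (X @ w - y))         # gradient step
            for g in set(groups):
                idx = np.flatnonzero(groups == g)
                nrm = np.linalg.norm(z[idx])
                scale = max(0.0, 1.0 - step * lam / nrm) if nrm > 0 else 0.0
                z[idx] *= scale                        # block soft-threshold
            w = z
        return w

    # Toy problem: 4 "bands" of 5 features; only band 0 is informative.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    groups = np.repeat(np.arange(4), 5)
    w_true = np.zeros(20)
    w_true[:5] = 1.0
    y = X @ w_true + rng.normal(scale=0.1, size=100)
    print(np.round(group_lasso(X, y, groups, lam=5.0), 2))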
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition, have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
Brannigan, V M; Bier, V M; Berg, C
1992-09-01
Toxic torts are product liability cases dealing with alleged injuries due to chemical or biological hazards such as radiation, thalidomide, or Agent Orange. Toxic tort cases typically rely more heavily than other product liability cases on indirect or statistical proof of injury. There have been numerous theoretical analyses of statistical proof of injury in toxic tort cases. However, there have been only a handful of actual legal decisions regarding the use of such statistical evidence, and most of those decisions have been inconclusive. Recently, a major case from the Fifth Circuit, involving allegations that Bendectin (a morning sickness drug) caused birth defects, was decided entirely on the basis of statistical inference. This paper examines both the conceptual basis of that decision, and also the relationships among statistical inference, scientific evidence, and the rules of product liability in general.
“Plateau”-related summary statistics are uninformative for comparing working memory models
van den Berg, Ronald; Ma, Wei Ji
2014-01-01
Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon. Zhang and Luck (2008) and Anderson, Vogel, and Awh (2011) noticed that as more items need to be remembered, “memory noise” seems to first increase and then reach a “stable plateau.” They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided, at most, 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. At realistic numbers of trials, plateau-related summary statistics are completely unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was, at most, 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. These findings call into question claims about working memory that are based on summary statistics. PMID:24719235
Stochastic approach and fluctuation theorem for charge transport in diodes
NASA Astrophysics Data System (ADS)
Gu, Jiayin; Gaspard, Pierre
2018-05-01
A stochastic approach for charge transport in diodes is developed in consistency with the laws of electricity, thermodynamics, and microreversibility. In this approach, the electron and hole densities are ruled by diffusion-reaction stochastic partial differential equations and the electric field generated by the charges is determined with the Poisson equation. These equations are discretized in space for the numerical simulations of the mean density profiles, the mean electric potential, and the current-voltage characteristics. Moreover, the full counting statistics of the carrier current and the measured total current including the contribution of the displacement current are investigated. On the basis of local detailed balance, the fluctuation theorem is shown to hold for both currents.
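As a toy illustration of the current fluctuation theorem in a setting far simpler than the paper's diode model: for a bidirectional Poisson process with forward rate k+ and backward rate k-, the net transferred charge n satisfies P(n)/P(-n) = (k+/k-)^n exactly, which the Python snippet below checks by simulation. The rates and observation time are arbitrary assumptions.

    import numpy as np

    # Net charge n = N+ - N- for independent Poisson jump counts; the
    # fluctuation relation P(n)/P(-n) = (kp/km)^n holds exactly here.
    rng = np.random.default_rng(0)
    kp, km, t, trials = 2.0, 1.0, 5.0, 200_000
    n = rng.poisson(kp * t, trials) - rng.poisson(km * t, trials)
    vals, counts = np.unique(n, return_counts=True)
    p = dict(zip(vals, counts / trials))
    for k in range(1, 5):
        if k in p and -k in p:
            print(k, p[k] / p[-k], (kp / km) ** k)   # the two should agree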
Horizon Entropy from Quantum Gravity Condensates.
Oriti, Daniele; Pranzetti, Daniele; Sindoni, Lorenzo
2016-05-27
We construct condensate states encoding the continuum spherically symmetric quantum geometry of a horizon in full quantum gravity, i.e., without any classical symmetry reduction, in the group field theory formalism. Tracing over the bulk degrees of freedom, we show how the resulting reduced density matrix manifestly exhibits a holographic behavior. We derive a complete orthonormal basis of eigenstates for the reduced density matrix of the horizon and use it to compute the horizon entanglement entropy. By imposing consistency with the horizon boundary conditions and semiclassical thermodynamical properties, we recover the Bekenstein-Hawking entropy formula for any value of the Immirzi parameter. Our analysis supports the equivalence between the von Neumann (entanglement) entropy interpretation and the Boltzmann (statistical) one.
Nurses' assessment of patients' cognitive orientation in a rehabilitation setting.
Alverzo, J P; Galski, T
1999-01-01
Orientation is a critical determinant of a patient's neurological status and an indicator of change in condition during hospitalization. The ways rehabilitation nurses assess orientation and the manner in which findings are interpreted and reported can have significant implications for the care of neurologically compromised patients. This study used a questionnaire to examine how 52 nurses appraised and reported the results of orientation evaluations. Analyses produced descriptive statistics and correlational measures for determining nurses' tendencies and consistency in evaluating orientation. Most respondents, regardless of their education and experience, used a clinical interview, rather than psychometric tests, as a basis for forming opinions about orientation. Although most evaluations included assessments in terms of person, time, place, and circumstance, no consistent pattern emerged regarding questioning or in the ways results were reported. Findings revealed a significant lack of consensus in terms of assessing and reporting orientation results, which could reflect insufficient awareness about the importance of maintaining consistency in evaluations, the relevance of using standardized evaluations and comparing measures over time, and the necessity of agreeing on how to report cognitive disturbances.
Perceptual basis of evolving Western musical styles
Rodriguez Zivic, Pablo H.; Shifres, Favio; Cecchi, Guillermo A.
2013-01-01
The brain processes temporal statistics to predict future events and to categorize perceptual objects. These statistics, called expectancies, are found in music perception, and they span a variety of different features and time scales. Specifically, there is evidence that music perception involves strong expectancies regarding the distribution of a melodic interval, namely, the distance between two consecutive notes within the context of another. The recent availability of a large Western music dataset, consisting of the historical record condensed as melodic interval counts, has opened new possibilities for data-driven analysis of musical perception. In this context, we present an analytical approach that, based on cognitive theories of music expectation and machine learning techniques, recovers a set of factors that accurately identifies historical trends and stylistic transitions between the Baroque, Classical, Romantic, and Post-Romantic periods. We also offer a plausible musicological and cognitive interpretation of these factors, allowing us to propose them as data-driven principles of melodic expectation. PMID:23716669
Brenčič, Mihael
2016-01-01
Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899–2012, are analysed with statistical methods as continuous categorical time series. The classification consists of 41 elementary circulation mechanisms (ECM), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and the periodicity effect were investigated with moving dispersion filters and a randomisation procedure on the ECM categories, as well as with time analyses of the ECM mode. The time series were determined to be non-stationary, with strong time-dependent trends. During the investigated period, periodicity interchanges with periods when no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECM has significantly changed. This change is a result of a change in the frequency of ECM categories; before 1986, the appearance of ECMs was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to these categorical climatic time series opens up potential new insights for the climate variability and change studies that have to be performed in the future. PMID:27116375
Brenčič, Mihael
2016-01-01
Northern hemisphere elementary circulation mechanisms, defined with the Dzerdzeevski classification and published on a daily basis from 1899-2012, are analysed with statistical methods as continuous categorical time series. The classification consists of 41 elementary circulation mechanisms (ECM), which are assigned to calendar days. Empirical marginal probabilities of each ECM were determined. Seasonality and the periodicity effect were investigated with moving dispersion filters and a randomisation procedure on the ECM categories, as well as with time analyses of the ECM mode. The time series were determined to be non-stationary, with strong time-dependent trends. During the investigated period, periodicity interchanges with periods when no seasonality is present. In the time series structure, the strongest division is visible at the milestone of 1986, showing that the atmospheric circulation pattern reflected in the ECM has significantly changed. This change is a result of a change in the frequency of ECM categories; before 1986, the appearance of ECMs was more diverse, and afterwards fewer ECMs appear. The statistical approach applied to these categorical climatic time series opens up potential new insights for the climate variability and change studies that have to be performed in the future.
Texture as a basis for acoustic classification of substrate in the nearshore region
NASA Astrophysics Data System (ADS)
Dennison, A.; Wattrus, N. J.
2016-12-01
Segmentation and classification of substrate type from two locations in Lake Superior are predicted using multivariate statistical processing of textural measures derived from shallow-water, high-resolution multibeam bathymetric data. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical character of a sonar backscatter mosaic depends on substrate type. While classifying the bottom type on the basis of backscatter alone can accurately predict and map bottom type, it lacks the ability to resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing can capture pertinent details about the bottom type that are rich in textural information. Further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. Preliminary results from an analysis of bathymetric data and ground-truth samples collected from the Amnicon River, Superior, Wisconsin, and the Lester River, Duluth, Minnesota, demonstrate the ability to develop a novel classification scheme of the bottom type in two geomorphologically distinct areas.
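Texture measures of the kind alluded to above are often computed from gray-level co-occurrence matrices; the Python sketch below derives two common ones (contrast and homogeneity) from a quantized grid, as one might in a moving window over bathymetry. The specific texture set used in the study is not spelled out in the abstract, so this is a generic illustration.

    import numpy as np

    def glcm_features(grid, levels=16, dy=0, dx=1):
        """Contrast and homogeneity from a gray-level co-occurrence matrix
        at offset (dy, dx), computed on a quantized version of `grid`."""
        g = np.digitize(grid,
                        np.linspace(grid.min(), grid.max(), levels + 1)[1:-1])
        h, w = g.shape
        a = g[:h - dy, :w - dx]            # reference pixels
        b = g[dy:, dx:]                    # neighbour pixels at the offset
        P = np.zeros((levels, levels))
        np.add.at(P, (a.ravel(), b.ravel()), 1)
        P /= P.sum()                       # normalize to a joint distribution
        i, j = np.indices(P.shape)
        contrast = np.sum(P * (i - j) ** 2)
        homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
        return contrast, homogeneity

    # A smoother surface yields lower contrast than white noise.
    rng = np.random.default_rng(0)
    smooth = rng.normal(size=(64, 64)).cumsum(axis=1)
    rough = rng.normal(size=(64, 64))
    print(glcm_features(smooth), glcm_features(rough))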
On the Bimodality of ENSO Cycle Extremes
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2000-01-01
On the basis of sea surface temperature in the El Nino 3.4 region (5 deg. N.,-5 deg. S., 120-170 deg. W.) during the interval of 1950-1997, Kevin Trenberth previously has identified some 16 El Nino and 10 La Nina, these 26 events representing the extremes of the quasi-periodic El Nino-Southern Oscillation (ENSO) cycle. Runs testing shows that the duration, recurrence period, and sequencing of these extremes vary randomly. Hence, the decade of the 1990's, especially for El Nino, is not significantly different from that of previous decadal epochs, at least, on the basis of the frequency of onsets of ENSO extremes. Additionally, the distribution of duration for both El Nino and La Nina looks strikingly bimodal, each consisting of two preferred modes, about 8- and 16-mo long for El Nino and about 9- and 18-mo long for La Nina, as does the distribution of the recurrence period for El Nino, consisting of two preferred modes about 21- and 50-mo long. Scatterplots of the recurrence period versus duration for El Nino are found to be statistically important, displaying preferential associations that link shorter (longer) duration with shorter (longer) recurrence periods. Because the last onset of El Nino occurred in April 1997 and the event was of longer than average duration, onset of the next anticipated El Nino is not expected until February 2000 or later.
On The Bimodality of ENSO Cycle Extremes
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2000-01-01
On the basis of sea surface temperature in the El Nino 3.4 region (5N.-5S., 120-170W.) during the interval of 1950-1997, Kevin Trenberth previously has identified some 16 El Nino and 10 La Nina, these 26 events representing the extremes of the quasi-periodic El Nino-Southern Oscillation (ENSO) cycle. Runs testing shows that the duration, recurrence period, and sequencing of these extremes vary randomly. Hence, the decade of the 1990's, especially for El Nino, is not significantly different from that of previous decadal epochs, at least, on the basis of the frequency of onsets of ENSO extremes. Additionally, the distribution of duration for both El Nino and La Nina looks strikingly bimodal, each consisting of two preferred modes, about 8- and 16-months long for El Nino and about 9- and 18-months long for La Nina, as does the distribution of the recurrence period for El Nino, consisting of two preferred modes about 21- and 50-months long. Scatterplots of the recurrence period versus duration for El Nino are found to be statistically important, displaying preferential associations that link shorter (longer) duration with shorter (longer) recurrence periods. Because the last onset of El Nino occurred in April 1997 and the event was of longer than average duration, onset of the next anticipated El Nino is not expected until February 2000 or later.
Loxley, P N
2017-10-01
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
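For concreteness, here is a standard parameterization of the two-dimensional Gabor function with the size (sigma), spatial frequency (1/wavelength), and aspect ratio (gamma) parameters discussed above; the numeric values in the usage lines are arbitrary, and this is the textbook form rather than the paper's exact convention.

    import numpy as np

    def gabor_2d(size, sigma, wavelength, theta=0.0, gamma=1.0, psi=0.0):
        """Sinusoid of the given wavelength windowed by a Gaussian of width
        sigma and aspect ratio gamma, rotated by theta (phase psi)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / wavelength + psi))

    # A multiscale pair with fixed aspect ratio and size-dependent frequency,
    # the kind of correlated-parameter basis described above.
    g_small = gabor_2d(size=16, sigma=3.0, wavelength=6.0)
    g_large = gabor_2d(size=32, sigma=6.0, wavelength=12.0)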
Hyvärinen, A
1985-01-01
The main purpose of the present study was to describe the statistical behaviour of daily analytical errors in the dimensions of place and time, providing a statistical basis for realistic estimates of the analytical error, and hence allowing the importance of the error and the relative contributions of its different sources to be re-evaluated. The observation material consists of creatinine and glucose results for control sera measured in daily routine quality control in five laboratories for a period of one year. The observation data were processed and computed by means of an automated data processing system. Graphic representations of time series of daily observations, as well as their means and dispersion limits when grouped over various time intervals, were investigated. For partition of the total variation several two-way analyses of variance were done with laboratory and various time classifications as factors. Pooled sets of observations were tested for normality of distribution and for consistency of variances, and the distribution characteristics of error variation in different categories of place and time were compared. Errors were found from the time series to vary typically between days. Due to irregular fluctuations in general and particular seasonal effects in creatinine, stable estimates of means or of dispersions for errors in individual laboratories could not be easily obtained over short periods of time but only from data sets pooled over long intervals (preferably at least one year). Pooled estimates of proportions of intralaboratory variation were relatively low (less than 33%) when the variation was pooled within days. However, when the variation was pooled over longer intervals this proportion increased considerably, even to a maximum of 89-98% (95-98% in each method category) when an outlying laboratory in glucose was omitted, with a concomitant decrease in the interaction component (representing laboratory-dependent variation with time). This indicates that a substantial part of the variation comes from intralaboratory variation with time rather than from constant interlaboratory differences. Normality and consistency of statistical distributions were best achieved in the long-term intralaboratory sets of the data, under which conditions the statistical estimates of error variability were also most characteristic of the individual laboratories rather than necessarily being similar to one another. Mixing of data from different laboratories may give heterogeneous and nonparametric distributions and hence is not advisable.(ABSTRACT TRUNCATED AT 400 WORDS)
Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-08-16
Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G
2011-01-01
Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695
14 CFR 234.6 - Baggage-handling statistics.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Baggage-handling statistics. 234.6 Section 234.6 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION... statistics. Each reporting carrier shall report monthly to the Department on a domestic system basis...
Code of Federal Regulations, 2010 CFR
2010-10-01
..., business or operation; (3) The name and address of each United States carrier alleged to be adversely... to petitioner: (i) Statistical data documenting present or prospective cargo loss by United States... on that basis, and the sources of the statistical data; (ii) Statistical data or other information...
14 CFR 234.6 - Baggage-handling statistics.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Baggage-handling statistics. 234.6 Section 234.6 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION... statistics. Each reporting carrier shall report monthly to the Department on a domestic system basis...
19 CFR 351.308 - Determinations on the basis of the facts available.
Code of Federal Regulations, 2010 CFR
2010-04-01
... facts available. (a) Introduction. The Secretary may make determinations on the basis of the facts... limited to, published price lists, official import statistics and customs data, and information obtained...
On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris
NASA Technical Reports Server (NTRS)
Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt
2007-01-01
A convenient and powerful method is used to determine whether radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models, but the statistical basis of these models had not been clearly demonstrated empirically until now. Interestingly, during the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, no statistically significant deviations were observed from what is expected under Poisson statistics, either independent of or dependent on altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
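A minimal Python sketch of the inter-arrival check described above, run on synthetic event times rather than Haystack data: under Poisson statistics the gaps follow p(t) = λe^{-λt}, and a one-sample Kolmogorov-Smirnov test against the fitted exponential quantifies the agreement. (Strictly, estimating λ from the same sample calls for a Lilliefors-type correction; this is a sketch, not the paper's procedure.)

    import numpy as np
    from scipy import stats

    # Synthetic Poisson-process detections at a mean gap of 30 s.
    rng = np.random.default_rng(0)
    event_times = np.cumsum(rng.exponential(scale=30.0, size=2000))
    gaps = np.diff(event_times)

    lam_hat = 1.0 / gaps.mean()                       # fitted event rate
    D, p_value = stats.kstest(gaps, "expon", args=(0, 1.0 / lam_hat))
    print(f"rate = {lam_hat:.4f}/s, KS D = {D:.4f}, p = {p_value:.3f}")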
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2010 CFR
2010-01-01
... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...
15 CFR 50.30 - Fee structure for foreign trade and shipping statistics.
Code of Federal Regulations, 2011 CFR
2011-01-01
... shipping statistics. 50.30 Section 50.30 Commerce and Foreign Trade Regulations Relating to Commerce and... THE CENSUS § 50.30 Fee structure for foreign trade and shipping statistics. (a) The Bureau of the Census is willing to furnish on a cost basis foreign trade and shipping statistics provided there is no...
15 CFR 50.30 - Fee structure for foreign trade and shipping statistics.
Code of Federal Regulations, 2010 CFR
2010-01-01
... shipping statistics. 50.30 Section 50.30 Commerce and Foreign Trade Regulations Relating to Commerce and... THE CENSUS § 50.30 Fee structure for foreign trade and shipping statistics. (a) The Bureau of the Census is willing to furnish on a cost basis foreign trade and shipping statistics provided there is no...
"Plateau"-related summary statistics are uninformative for comparing working memory models.
van den Berg, Ronald; Ma, Wei Ji
2014-10-01
Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience, 17, 347-356, 2014). Zhang and Luck (Nature, 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics, 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15% of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials was required to achieve 99% correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (2011), we found that the evidence in the summary statistics was at most 0.12% of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.
28 CFR 22.24 - Information transfer agreement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STATISTICAL INFORMATION § 22.24 Information transfer agreement. Prior to the transfer of any identifiable... identifiable to a private person will be used only for research and statistical purposes. (b) Information...-know basis for research or statistical purposes, provided that such transfer is approved by the person...
Statistically based material properties: A military handbook-17 perspective
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Vangel, Mark G.
1990-01-01
The statistical procedures and their importance in obtaining composite material property values for designing structures for aircraft and military combat systems are described. The property value is such that the strength exceeds this value with a prescribed probability, with 95 percent confidence in the assertion. The survival probabilities are 99 percent and 90 percent for the A and B basis values, respectively. The basis values for strain-to-failure measurements are defined in a similar manner. The B value is the primary concern.
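For a normal strength model, the A and B basis values defined above reduce to one-sided lower tolerance bounds; a sketch using the standard noncentral-t tolerance factor (the handbook prescribes additional procedures for non-normal data, so treat this as illustrative only):

    import numpy as np
    from scipy import stats

    def basis_value(x, coverage, confidence=0.95):
        """Lower tolerance bound: strength exceeds this value with probability
        `coverage`, asserted with `confidence`, assuming normally distributed data."""
        n = len(x)
        z_p = stats.norm.ppf(coverage)
        k = stats.nct.ppf(confidence, df=n - 1, nc=z_p * np.sqrt(n)) / np.sqrt(n)
        return x.mean() - k * x.std(ddof=1)

    rng = np.random.default_rng(1)
    strength = rng.normal(100.0, 5.0, size=30)       # hypothetical coupon test data
    print("A basis:", basis_value(strength, 0.99))   # 99% survival, 95% confidence
    print("B basis:", basis_value(strength, 0.90))   # 90% survival, 95% confidence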
Thermodynamic constraints on fluctuation phenomena
NASA Astrophysics Data System (ADS)
Maroney, O. J. E.
2009-12-01
The relationships among reversible Carnot cycles, the absence of perpetual motion machines, and the existence of a nondecreasing globally unique entropy function form the starting point of many textbook presentations of the foundations of thermodynamics. However, the thermal fluctuation phenomena associated with statistical mechanics have been argued to restrict the domain of validity of this basis of the second law of thermodynamics. Here we demonstrate that fluctuation phenomena can be incorporated into the traditional presentation, extending rather than restricting the domain of validity of the phenomenologically motivated second law. Consistency conditions lead to constraints upon the possible spectrum of thermal fluctuations. In a special case this uniquely selects the Gibbs canonical distribution and more generally incorporates the Tsallis distributions. No particular model of microscopic dynamics need be assumed.
Borehole geophysical logs at Naval Weapons Industrial Reserve Plant, Dallas, Texas
Braun, Christopher L.; Anaya, Roberto; Kuniansky, Eve L.
2000-01-01
A shallow alluvial aquifer at the Naval Weapons Industrial Reserve Plant near Dallas, Texas, has been contaminated by organic solvents used in the fabrication and assembly of aircraft and aircraft parts. Natural gamma-ray and electromagnetic-induction borehole geophysical logs were obtained from 162 polyvinyl-chloride-cased wells at the plant and were integrated with existing lithologic data to improve site characterization of the subsurface alluvium. Software was developed for filtering and classifying the log data and for processing, analyzing, and creating graphical output of the digital data. The alluvium consists of mostly fine-grained, low-permeability sediments; however, for this study, the alluvium was classified into low, intermediate, and high clay-content sediments on the basis of the gamma-ray logs. The low clay-content sediments were interpreted as being relatively permeable, whereas the high clay-content sediments were interpreted as being relatively impermeable. Simple statistics were used to identify zones of potentially contaminated sediments on the basis of the gamma-ray log classifications and the electromagnetic-induction log conductivity data.
Development and evaluation of the Internalized Racism in Asian Americans Scale (IRAAS).
Choi, Andrew Young; Israel, Tania; Maeda, Hotaka
2017-01-01
This article presents the development and psychometric evaluation of the Internalized Racism in Asian Americans Scale (IRAAS), which was designed to measure the degree to which Asian Americans have internalized hostile attitudes and negative messages targeted toward their racial identity. Items were developed on the basis of prior literature, vetted through expert feedback and cognitive interviews, and administered to 655 Asian American participants through Amazon Mechanical Turk. Exploratory factor analysis with a random subsample (n = 324) yielded a psychometrically robust preliminary measurement model consisting of 3 factors: Self-Negativity, Weakness Stereotypes, and Appearance Bias. Confirmatory factor analysis with a separate subsample (n = 331) indicated that the proposed correlated factors model was strongly consistent with the observed data. Factor determinacies were high and demonstrated that the specified items adequately measured their intended factors. Bifactor modeling further indicated that this multidimensionality could be univocally represented for the purpose of measurement, including the use of a mean total score representing a single continuum of internalized racism on which individuals vary. The IRAAS statistically predicted depressive symptoms and demonstrated statistically significant correlations in theoretically expected directions with four dimensions of collective self-esteem. These results provide initial validity evidence supporting the use of the IRAAS to measure aspects of internalized racism in this population. Limitations and research implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Negative values of quasidistributions and quantum wave and number statistics
NASA Astrophysics Data System (ADS)
Peřina, J.; Křepelka, J.
2018-04-01
We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-20
... rent and the core-based statistical area (CBSA) rent as applied to the 40th percentile FMR for that..., calculated on the basis of the core-based statistical area (CBSA) or the metropolitan Statistical Area (MSA... will be ranked according to each of the statistics specified above, and then a weighted average ranking...
The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison
Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth
2006-01-01
Background Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filter, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results The Expresso analysis using MIV data consistently identifies more genes as differentially expressed when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per-microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497
Steganalysis of recorded speech
NASA Astrophysics Data System (ADS)
Johnson, Micah K.; Lyu, Siwei; Farid, Hany
2005-03-01
Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
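A schematic of the classification stage described above. The feature extractor here is a stand-in (coarse log-spectrum band averages rather than the authors' linear basis representation), and the "stego" perturbation is a crude placeholder, so this sketch only illustrates the feature-vector-plus-SVM pipeline:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    def audio_features(signal, n_bands=16):
        """Stand-in feature vector: band-averaged log magnitude spectrum."""
        spec = np.abs(np.fft.rfft(signal))
        bands = np.array_split(np.log(spec + 1e-9), n_bands)
        return np.array([b.mean() for b in bands])

    rng = np.random.default_rng(2)
    clean = rng.normal(size=(200, 4096))                              # placeholder "audio" clips
    stego = clean + rng.choice([-1.0, 1.0], size=clean.shape) * 3e-4  # crude LSB-like noise
    X = np.array([audio_features(s) for s in np.vstack([clean, stego])])
    y = np.array([0] * 200 + [1] * 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)    # non-linear SVM classifier
    print("held-out accuracy:", clf.score(X_te, y_te))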
de Lima, Júlio C.; de Costa, Fernanda; Füller, Thanise N.; Rodrigues-Corrêa, Kelly C. da Silva; Kerber, Magnus R.; Lima, Mariano S.; Fett, Janette P.; Fett-Neto, Arthur G.
2016-01-01
Pine oleoresin is a major source of terpenes, consisting of turpentine (mono- and sesquiterpenes) and rosin (diterpenes) fractions. Higher oleoresin yields are of economic interest, since oleoresin derivatives make up a valuable source of materials for chemical industries. Oleoresin can be extracted from living trees, often by the bark streak method, in which bark removal is done periodically, followed by application of stimulant paste containing sulfuric acid and other chemicals on the freshly wounded exposed surface. To better understand the molecular basis of chemically stimulated and wound-induced oleoresin production, we evaluated the stability of 11 putative reference genes for the purpose of normalization in studying Pinus elliottii gene expression during oleoresinosis. Samples for RNA extraction were collected from field-grown adult trees under tapping operations using stimulant pastes with different compositions and at various time points after paste application. Statistical methods implemented in the geNorm, NormFinder, and BestKeeper software packages consistently pointed to HISTO3 and UBI as adequate reference genes. To confirm expression stability of the candidate reference genes, expression profiles of putative P. elliottii orthologs of resin biosynthesis-related genes encoding Pinus contorta β-pinene synthase [PcTPS-(−)β-pin1], P. contorta levopimaradiene/abietadiene synthase (PcLAS1), Pinus taeda α-pinene synthase [PtTPS-(+)αpin], and P. taeda α-farnesene synthase (PtαFS) were examined following stimulant paste application. Increased oleoresin yields observed in stimulated treatments using phytohormone-based pastes were consistent with higher expression of pinene synthases. Overall, the expression of all genes examined matched the expected profiles of oleoresin-related transcript changes reported for previously examined conifers. PMID:27379135
A comparison of certified and noncertified pet foods.
Brown, R G
1997-11-01
The market presents the buyer with a wide array of pet food choices. Marketing pet foods has changed in the last decade, and today foods may be bought at a variety of outlets. The present study compares nutrient composition, digestibility, and effect on urine pH (cat foods only) of selected certified and noncertified pet foods from different outlets. The selected foods were considered analogous in terms of declared ingredients and macronutrient profiles. The analytical methods used were those of the Association of Official Analytical Chemists as described in the Pet Food Certification Protocol of the Canadian Veterinary Medical Association. The test foods were sampled 4 times from August 1994 to July 1995. Both certified and noncertified products met the nutritional requirements on a consistent basis, although 1 of the noncertified dog foods consistently failed to meet the zinc requirements. This same product also failed to meet the Canadian Veterinary Medical Association's standards for concentrations of protein, calcium, and phosphorus. One of the noncertified cat foods failed to meet the recommended calcium level. With the exception of fat digestion in 1 noncertified food, there were no statistically significant differences in major nutrient digestibility between certified and noncertified pet foods. There were some statistically significant differences in digestibility within both the certified and noncertified groups of foods. The practical significance of any of the statistical differences in digestibility is uncertain. Urine pH observed in cats fed noncertified test diets was variable, with some values greater than 7.0 as a maximum or 6.5 as an average. The general conclusion of this study was that the commonly available certified products were the nutritional equal of those foods that position themselves as "premium." PMID:9360790
Deleanu, Ioana Sorina
2017-01-01
Indicators of compliance and efficiency in combatting money laundering, collected by EUROSTAT, are plagued with shortcomings. In this paper, I have carried out a forensic analysis on a 2003-2010 dataset of indicators of compliance and efficiency in combatting money laundering, which European Union member states self-reported to EUROSTAT, and on the basis of which their efforts were evaluated. I used Benford's law to detect any anomalous statistical patterns and found that statistical anomalies were also consistent with strategic manipulation. According to Benford's law, if we pick a random sample of numbers representing natural processes and look at the distribution of the first digits of these numbers, we see that, contrary to popular belief, digit 1 occurs most often, then digit 2, and so on, with digit 9 occurring in less than 5% of the sample. Since people are not intuitively good at creating truly random numbers, figures fabricated without prior knowledge of Benford's law tend to deviate from it, and such deviations can capture strategic alterations. In order to eliminate other sources of deviation, I have compared deviations in situations where incentives and opportunities for manipulation existed and in situations where they did not. While my results are not conclusive proof of strategic manipulation, they signal that countries that faced incentives and opportunities to misinform the international community about their efforts to combat money laundering may have manipulated these indicators. Finally, my analysis points to the high potential for disruption that the manipulation of national statistics has, and calls for the acknowledgment that strategic manipulation can be an unintended consequence of the international community's pressure on countries to put combatting money laundering at the top of their national agenda. PMID:28122058
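A small sketch of the first-digit screening described here, assuming the indicators are positive numbers (the lognormal sample is only a stand-in for real EUROSTAT figures); it compares observed leading-digit frequencies with the Benford distribution via a chi-square goodness-of-fit test:

    import numpy as np
    from scipy import stats

    def benford_test(values):
        """Chi-square test of leading digits against Benford's law."""
        v = np.asarray(values, dtype=float)
        v = v[v > 0]
        first = (v / 10 ** np.floor(np.log10(v))).astype(int)    # leading digit, 1-9
        observed = np.bincount(first, minlength=10)[1:10]
        expected = np.log10(1 + 1 / np.arange(1, 10)) * len(v)   # Benford proportions
        return stats.chisquare(observed, expected)

    rng = np.random.default_rng(3)
    sample = rng.lognormal(mean=5, sigma=2, size=2000)   # broad natural data follow Benford well
    print(benford_test(sample))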
Transportation statistics annual report 2000
DOT National Transportation Integrated Search
2001-01-01
The Transportation Statistics Annual Report (TSAR) is a Congressionally mandated publication with wide distribution. The TSAR provides the most comprehensive overview of U.S. transportation that is done on an annual basis. TSAR examines the extent of...
Consistency errors in p-values reported in Spanish psychology journals.
Caperos, José Manuel; Pardo, Antonio
2013-01-01
Recent reviews have drawn attention to frequent consistency errors when reporting statistical results. We have reviewed the statistical results reported in 186 articles published in four Spanish psychology journals. Of these articles, 102 contained at least one of the statistics selected for our study: Fisher's F, Student's t, and Pearson's χ². Out of the 1,212 complete statistics reviewed, 12.2% presented a consistency error, meaning that the reported p-value did not correspond to the reported value of the statistic and its degrees of freedom. In 2.3% of the cases, the correct calculation would have led to a different conclusion than the reported one. In terms of articles, 48% included at least one consistency error, and 17.6% would have to change at least one conclusion. In meta-analytical terms, with a focus on effect size, consistency errors can be considered substantial in 9.5% of the cases. These results imply a need to improve the quality and precision with which statistical results are reported in Spanish psychology journals.
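The consistency check described here is easy to automate: recompute the p-value from the reported statistic and its degrees of freedom, then compare with the reported p. A minimal sketch for the three tests named above (two-sided for t; the example numbers are made up):

    from scipy import stats

    def recomputed_p(test, value, df):
        """Recompute a p-value from a reported statistic and degrees of freedom."""
        if test == "F":
            df1, df2 = df
            return stats.f.sf(value, df1, df2)
        if test == "t":
            return 2 * stats.t.sf(abs(value), df)   # two-sided
        if test == "chi2":
            return stats.chi2.sf(value, df)
        raise ValueError(test)

    # Example: an article reports F(2, 57) = 4.21, p = .02
    p = recomputed_p("F", 4.21, (2, 57))
    print(f"recomputed p = {p:.4f}")   # flag a consistency error if it mismatches the reported p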
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.
ERIC Educational Resources Information Center
Ojeda, Mario Miguel; Sahai, Hardeo
2002-01-01
Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…
42 CFR 417.568 - Adequate financial records, statistical data, and cost finding.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Cost Basis § 417.568 Adequate... definitions and accounting, statistics, and reporting practices that are widely accepted in the health care... 42 Public Health 3 2010-10-01 2010-10-01 false Adequate financial records, statistical data, and...
42 CFR 417.568 - Adequate financial records, statistical data, and cost finding.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Cost Basis § 417.568 Adequate... definitions and accounting, statistics, and reporting practices that are widely accepted in the health care... 42 Public Health 3 2011-10-01 2011-10-01 false Adequate financial records, statistical data, and...
Statistical Annex: National Report on Schooling in Australia, 1991.
ERIC Educational Resources Information Center
Australian Education Council, Melbourne.
This report enlarges upon the tables and figures in the National Report on Schooling in Australia 1991 and provides a basis for the continuing cooperative development of educational statistics in Australia and better quality statistical information about Australian schooling. The following categories organize the series of figures and tables: (1)…
Using Microsoft Excel to Generate Usage Statistics
ERIC Educational Resources Information Center
Spellman, Rosemary
2011-01-01
At the Libraries Service Center, statistics are generated on a monthly, quarterly, and yearly basis by using four Microsoft Excel workbooks. These statistics provide information about what materials are being requested and by whom. They also give details about why certain requests may not have been filled. Utilizing Excel allows for a shallower…
NASA Astrophysics Data System (ADS)
Wang, Audrey; Price, David T.
2007-03-01
A simple integrated algorithm was developed to relate global climatology to distributions of tree plant functional types (PFT). Multivariate cluster analysis was performed to analyze the statistical homogeneity of the climate space occupied by individual tree PFTs. Forested regions identified from the satellite-based GLC2000 classification were separated into tropical, temperate, and boreal sub-PFTs for use in the Canadian Terrestrial Ecosystem Model (CTEM). Global data sets of monthly minimum temperature, growing degree days, an index of climatic moisture, and estimated PFT cover fractions were then used as variables in the cluster analysis. The statistical results for individual PFT clusters were found to be consistent with other global-scale classifications of dominant vegetation. Improving on earlier quantifications of the climatic limitations on PFT distributions, the results also demonstrated overlap of PFT cluster boundaries, reflecting vegetation transitions, for example, between tropical and temperate biomes. The resulting global database should provide a better basis for simulating the interaction of climate change and terrestrial ecosystem dynamics using global vegetation models.
Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective
Barker, Richard J.; Link, William A.
2015-01-01
Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
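For reference, the AIC weights discussed above follow from a simple transformation of AIC differences; a sketch under the usual definition:

    import numpy as np

    def aic_weights(aic):
        """Akaike weights: relative likelihoods of models, normalized to sum to 1."""
        aic = np.asarray(aic, dtype=float)
        delta = aic - aic.min()        # AIC differences from the best model
        rel = np.exp(-0.5 * delta)     # relative likelihood of each model
        return rel / rel.sum()

    print(aic_weights([1002.1, 1003.4, 1010.9]))   # -> approx [0.65, 0.34, 0.01]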
Enhanced Bio-Ethanol Production from Industrial Potato Waste by Statistical Medium Optimization.
Izmirlioglu, Gulten; Demirci, Ali
2015-10-15
Industrial wastes are of great interest as a substrate in production of value-added products to reduce cost, while managing the waste economically and environmentally. Bio-ethanol production from industrial wastes has gained attention because of its abundance, availability, and rich carbon and nitrogen content. In this study, industrial potato waste was used as a carbon source and a medium was optimized for ethanol production by using statistical designs. The effect of various medium components on ethanol production was evaluated. Yeast extract, malt extract, and MgSO₄·7H₂O showed significantly positive effects, whereas KH₂PO₄ and CaCl₂·2H₂O had a significantly negative effect (p-value<0.05). Using response surface methodology, a medium consisting of 40.4 g/L (dry basis) industrial waste potato, 50 g/L malt extract, and 4.84 g/L MgSO₄·7H₂O was found optimal and yielded 24.6 g/L ethanol at 30 °C, 150 rpm, and 48 h of fermentation. In conclusion, this study demonstrated that industrial potato waste can be used effectively to enhance bioethanol production.
Mineral Facilities of Latin America and Canada
Bernstein, Rachel; Eros, Mike; Quintana-Velazquez, Meliany
2006-01-01
This data set consists of records for over 900 mineral facilities in Latin America and Canada. The mineral facilities include mines, plants, smelters, or refineries of aluminum, cement, coal, copper, diamond, gold, iron and steel, nickel, platinum-group metals, salt, and silver, among others. Records include attributes such as commodity, country, location, company name, facility type and capacity if applicable, and generalized coordinates. The data were compiled from multiple sources, including the 2003 and 2004 USGS Minerals Yearbooks (Latin America and Canada volume), data to be published in the 2005 Minerals Yearbook Latin America and Canada volume, minerals statistics and information from the USGS minerals information Web site (minerals.usgs.gov/minerals), and data collected by USGS minerals information country specialists. Data reflect the most recent published table of industry structure for each country. Other sources include statistical publications of individual countries, annual reports and press releases of operating companies, and trade journals. Due to the sensitivity of some energy commodity data, the quality of these data should be evaluated on a country-by-country basis. Additional information and explanation is available from the country specialists.
Research on Some Bus Transport Networks with Random Overlapping Clique Structure
NASA Astrophysics Data System (ADS)
Yang, Xu-Hua; Wang, Bo; Wang, Wan-Liang; Sun, You-Xian
2008-11-01
On the basis of investigating the statistical data of bus transport networks of three big cities in China, we propose that each bus route is a clique (maximal complete subgraph) and a bus transport network (BTN) consists of a lot of cliques, which intensively connect and overlap with each other. We study the network properties, which include the degree distribution, the multiple edges' overlapping time distribution, the distribution of the overlap size between any two overlapping cliques, and the distribution of the number of cliques that a node belongs to. Naturally, the cliques also constitute a network, with the overlapping nodes being their multiple links. We also research its network properties, such as degree distribution, clustering, average path length, and so on. We propose that a BTN has the properties of random clique increment and random clique overlap; at the same time, a BTN is a small-world network that is highly clique-clustered and highly clique-overlapped. Finally, we introduce a BTN evolution model, whose simulation results agree well with the statistical laws that emerge in real BTNs.
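To make the route-as-clique construction concrete, this sketch (with made-up routes) builds the derived network in which each route is a node and shared stops are weighted links, then reports a few of the statistics mentioned above:

    import networkx as nx

    # Hypothetical bus routes, each a set of stops (a clique over its stops)
    routes = {
        "R1": {"a", "b", "c", "d"},
        "R2": {"c", "d", "e"},
        "R3": {"e", "f", "g"},
        "R4": {"a", "g", "h"},
    }

    # Derived network: one node per route, edges weighted by overlap size
    G = nx.Graph()
    G.add_nodes_from(routes)
    names = list(routes)
    for i, r in enumerate(names):
        for s in names[i + 1:]:
            overlap = routes[r] & routes[s]
            if overlap:
                G.add_edge(r, s, weight=len(overlap))

    print(dict(G.degree()))                    # degree distribution of the clique network
    print(nx.average_clustering(G))            # clustering coefficient
    print(nx.average_shortest_path_length(G))  # average path length (this toy graph is connected)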
Nätti, Jouko; Kinnunen, Ulla; Mäkikangas, Anne; Mauno, Saija
2009-04-01
The study investigated the relationship between the type of employment contract (permanent/temporary) and mortality. Factors through which temporary employment was expected to be associated with increased mortality were the degree of satisfaction with the uncertainty of the temporary work situation (Study 1) and the voluntary/involuntary basis for temporary work (Study 2). In Study 1 the data consisted of a representative survey of Finnish employees in 1984 (n = 4502), which was merged with register-based follow-up data from Statistics Finland covering the years 1985-2000. In Study 2 the data consisted of a representative survey of Finnish employees in 1990 (n = 3502) with register-based follow-up data covering the years 1991-2000. The relative risk of death was examined by conducting Cox proportional hazards analyses for the permanent and the two temporary employment groups, respectively. In Study 1, temporary employees who found their insecure situation unsatisfactory had a 1.95-fold higher risk of mortality than permanent employees (95% CI 1.13-3.35) after adjusting for background, health- and work-related factors. In Study 2, employees holding a temporary job on an involuntary basis had a 2.59-fold higher risk of mortality than permanent employees (95% CI 1.16-5.80). The present study confirmed that temporary employees are not a homogeneous group, which holds true even for mortality. Those temporary employees who either felt the insecure situation unsatisfactory or who worked in temporary work involuntarily had a higher risk of mortality than permanent employees.
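A Cox proportional hazards analysis of this shape takes only a few lines with the lifelines package; a sketch on a made-up data frame, where years is the follow-up time, died the event indicator, and involuntary_temp the contract covariate (all column names and values are hypothetical):

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical follow-up data: one row per employee
    df = pd.DataFrame({
        "years":            [10.0, 9.5, 4.2, 10.0, 7.8, 3.1, 10.0, 6.4],
        "died":             [0, 1, 1, 0, 1, 0, 0, 1],
        "involuntary_temp": [0, 0, 1, 0, 1, 1, 0, 1],
        "age":              [34, 41, 50, 29, 57, 61, 38, 45],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="died")  # adjusts for all other columns
    cph.print_summary()                                  # hazard ratios with 95% CIs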
Semistochastic approach to many electron systems
NASA Astrophysics Data System (ADS)
Grossjean, M. K.; Grossjean, M. F.; Schulten, K.; Tavan, P.
1992-08-01
A Pariser-Parr-Pople (PPP) Hamiltonian of the 8π-electron system of the molecule octatetraene, represented in a configuration-interaction basis (CI basis), is analyzed with respect to the statistical properties of its matrix elements. Based on this analysis we develop an effective Hamiltonian, which represents virtual excitations by a Gaussian orthogonal ensemble (GOE). We also examine numerical approaches which replace the original Hamiltonian by a semistochastically generated CI matrix. In that CI matrix, the matrix elements of high energy excitations are chosen randomly according to distributions reflecting the statistics of the original CI matrix.
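For reference, a GOE block like the one used here for virtual excitations can be sampled directly; this sketch uses the standard convention in which off-diagonal elements have half the diagonal variance:

    import numpy as np

    def sample_goe(n, sigma=1.0, seed=None):
        """Draw an n x n matrix from the Gaussian orthogonal ensemble."""
        rng = np.random.default_rng(seed)
        a = rng.normal(scale=sigma, size=(n, n))
        return (a + a.T) / 2.0   # diagonal ~ N(0, sigma^2), off-diagonal ~ N(0, sigma^2 / 2)

    H = sample_goe(500, seed=4)
    eig = np.linalg.eigvalsh(H)
    # The eigenvalue density should approach Wigner's semicircle for large n
    print(eig.min(), eig.max())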
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
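The synthetic-likelihood construction reduces to a few lines once a simulator and a summary-statistic function are fixed; a sketch with placeholder functions (the toy exponential model and both summaries are assumptions for illustration, not Wood's blowfly model):

    import numpy as np
    from scipy.stats import multivariate_normal

    def synthetic_loglik(theta, s_obs, simulate, summaries, n_rep=500):
        """Gaussian log-likelihood of the observed summary statistics under the model."""
        S = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
        mu = S.mean(axis=0)
        cov = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])  # regularize
        return multivariate_normal.logpdf(s_obs, mean=mu, cov=cov)

    # Toy example: estimate the rate of an exponential from two summaries
    rng = np.random.default_rng(5)
    simulate = lambda theta: rng.exponential(1 / theta, size=200)
    summaries = lambda y: np.array([y.mean(), np.log(y).std()])
    y_obs = rng.exponential(1 / 3.0, size=200)                 # true rate = 3
    s_obs = summaries(y_obs)
    grid = np.linspace(1.0, 6.0, 21)
    ll = [synthetic_loglik(t, s_obs, simulate, summaries) for t in grid]
    print("synthetic-likelihood estimate:", grid[int(np.argmax(ll))])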
Petrone, Maria Chiara; Terracciano, Fulvia; Perri, Francesco; Carrara, Silvia; Cavestro, Giulia Martina; Mariani, Alberto; Testoni, Pier Alberto; Arcidiacono, Paolo Giorgio
2014-01-01
The prevalence of nine EUS features of chronic pancreatitis (CP) according to the standard Wiersema classification has been investigated in 489 patients undergoing EUS for an indication not related to pancreatico-biliary disease. We showed that 82 subjects (16.8%) had at least one ductular or parenchymal abnormality. Among them, 18 (3.7% of the study population) had ≥3 Wiersema criteria suggestive of CP. Recently, a new classification (Rosemont) of EUS findings consistent, suggestive, or indeterminate for CP has been proposed. Our aims were to stratify healthy subjects into different subgroups on the basis of EUS features of CP according to the Wiersema and Rosemont classifications and to evaluate the agreement in the diagnosis of CP between the two scoring systems. A weighted kappa statistic was computed to evaluate the strength of agreement between the two scoring systems. Univariate and multivariate analyses of the association between any EUS abnormality and habits were performed. Eighty-two EUS videos were reviewed. Using the Wiersema classification, 18 subjects showed ≥3 EUS features suggestive of CP. The EUS diagnosis of CP in these 18 subjects was considered consistent in only one patient according to the Rosemont classification. The weighted kappa statistic was 0.34, showing that the strength of agreement was 'fair'. Alcohol use and smoking were identified as risk factors for having pancreatic abnormalities on EUS. The prevalence of EUS features consistent or suggestive of CP in healthy subjects according to the Rosemont classification is lower than that assessed by Wiersema criteria. In that regard, the Rosemont classification seems to be more accurate in excluding clinically relevant CP. Overall agreement between the two classifications is fair. Copyright © 2014 IAP and EPC. Published by Elsevier B.V. All rights reserved.
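The weighted kappa used above for agreement between two ordinal classifications is available off the shelf; a sketch with made-up gradings on a three-level scale (0 = normal, 1 = indeterminate, 2 = consistent with CP):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical per-subject gradings by the two scoring systems
    wiersema = [0, 0, 1, 2, 1, 0, 2, 1, 0, 2]
    rosemont = [0, 0, 1, 1, 0, 0, 2, 1, 0, 1]

    # Linear weights penalize disagreements by their distance on the ordinal scale
    kappa = cohen_kappa_score(wiersema, rosemont, weights="linear")
    print(f"weighted kappa = {kappa:.2f}")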
40 CFR 91.512 - Request for public hearing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...
Statistical Measures, Hypotheses, and Tests in Applied Research
ERIC Educational Resources Information Center
Saville, David J.; Rowarth, Jacqueline S.
2008-01-01
This article reviews and discusses the use of statistical concepts in a natural resources and life sciences journal on the basis of a census of the articles published in a recent issue of the "Agronomy Journal" and presents a flow chart and a graph that display the inter-relationships between the most commonly used statistical terms. It also…
NASA Astrophysics Data System (ADS)
van de Giesen, Nicolaas; Hut, Rolf; ten Veldhuis, Marie-claire
2017-04-01
If one can assume that drop size distributions can be effectively described by a generalized gamma function [1], one can estimate this function on the basis of the distribution of time intervals between drops hitting a certain area. The arrival of a single drop is relatively easy to measure with simple consumer devices such as cameras or piezoelectric elements. Here we present an open-hardware design for the electronics and statistical processing of an intervalometer that measures time intervals between drop arrivals. The specific hardware in this case is a piezoelectric element in an appropriate housing, combined with an instrumentation op-amp and an Arduino processor. Although it would not be too difficult to simply register the arrival times of all drops, it is more practical to only report the main statistics. For this purpose, all intervals below a certain threshold during a reporting interval are summed and counted. We also sum the scaled squares, cubes, and fourth powers of the intervals. On the basis of the first four moments, one can estimate the corresponding generalized gamma function and obtain some sense of the accuracy of the underlying assumptions. Special attention is needed to determine the lower threshold of the drop sizes that can be measured. This minimum size often varies over the area being monitored, such as is the case for piezoelectric elements. We describe a simple method to determine these (distributed) minimal drop sizes and present a bootstrap method to make the necessary corrections. Reference [1] Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
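The on-device bookkeeping described above (running sums of the first four powers of the intervals) can be mirrored in a few lines; a sketch of the moment accumulator, with the generalized-gamma fitting step left out:

    import numpy as np

    class Intervalometer:
        """Accumulate the first four raw moments of drop inter-arrival times."""

        def __init__(self):
            self.n = 0
            self.sums = np.zeros(4)   # running sums of t, t^2, t^3, t^4

        def record(self, interval):
            self.n += 1
            self.sums += interval ** np.arange(1, 5)

        def moments(self):
            return self.sums / self.n   # raw moments m1..m4

    meter = Intervalometer()
    rng = np.random.default_rng(6)
    for t in rng.gamma(shape=2.0, scale=0.05, size=10000):   # synthetic drop gaps
        meter.record(t)
    print(meter.moments())   # basis for fitting the generalized gamma distribution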
Galaxy bias and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assassi, Valentin; Baumann, Daniel; Schmidt, Fabian, E-mail: assassi@ias.edu, E-mail: D.D.Baumann@uva.nl, E-mail: fabians@MPA-Garching.MPG.DE
2015-12-01
We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation.
Nian, Rui; Liu, Fang; He, Bo
2013-07-16
Underwater vision is one of the dominant senses and has shown great prospects in ocean investigations. In this paper, a hierarchical Independent Component Analysis (ICA) framework has been established to explore and understand the functional roles of the higher order statistical structures towards the visual stimulus in the underwater artificial vision system. The model is inspired by characteristics such as the modality, the redundancy reduction, the sparseness and the independence in the early human vision system, which seem to respectively capture the Gabor-like basis functions, the shape contours or the complicated textures in the multiple layer implementations. The simulation results have shown good performance in the effectiveness and the consistency of the approach proposed for the underwater images collected by autonomous underwater vehicles (AUVs). PMID:23863855
QSAR and 3D QSAR of inhibitors of the epidermal growth factor receptor
NASA Astrophysics Data System (ADS)
Pinto-Bazurco, Mariano; Tsakovska, Ivanka; Pajeva, Ilza
This article reports quantitative structure-activity relationships (QSAR) and 3D QSAR models of 134 structurally diverse inhibitors of the epidermal growth factor receptor (EGFR) tyrosine kinase. Free-Wilson analysis was used to derive the QSAR model. It identified the substituents in aniline, the polycyclic system, and the substituents at the 6- and 7-positions of the polycyclic system as the most important structural features. Comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) were used in the 3D QSAR modeling. The steric and electrostatic interactions proved the most important for the inhibitory effect. Both QSAR and 3D QSAR models led to consistent results. On the basis of the statistically significant models, new structures were proposed and their inhibitory activities were predicted.
de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino
2018-05-01
This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
On the performance of large Gaussian basis sets for the computation of total atomization energies
NASA Technical Reports Server (NTRS)
Martin, J. M. L.
1992-01-01
The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation consistent valence triple zeta plus polarization (cc-pVTZ) and correlation consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of ANO and correlation consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed ΣD_e values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings down the mean absolute error to less than 1 kcal/mol for the spdf basis sets, and about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
Review of a bituminous concrete statistical specification : final report.
DOT National Transportation Integrated Search
1971-01-01
The statistically oriented specification for bituminous concrete production reviewed in this report was used as a basis for acceptance of more than one million tons of bituminous concrete in 1970. Data obtained from this system were analyzed for grad...
40 CFR 90.712 - Request for public hearing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...
Transportation statistics annual report 1994
DOT National Transportation Integrated Search
1994-01-01
The Transportation Statistics Annual Report (TSAR) provides the most comprehensive overview of U.S. transportation that is done on an annual basis. TSAR examines the extent of the system, how it is used, how well it works, how it affects people and t...
Demir, Aslan; Türker, Polat; Bozkurt, Suheyla Uyar; İlker, Yalcin Nazmi
2015-01-01
In this animal study, we reviewed the histomorphological findings in rabbit kidneys after a high number of high-energy shock wave applications and determined whether there were any cumulative effects after repeated sessions. We formed 2 groups, each consisting of 8 rabbits. Group 1 received 1 session and group 2 received 3 sessions of ESWL with a 7-day interval between sessions, each session consisting of 3500 beats to the left kidney and 5500 beats to the right kidney. The specimens of kidneys were examined histomorphologically after bilateral nephrectomy was performed. For statistical analysis, 4 groups of specimens were formed. The first and second groups received 1 session, of 3500 and 5500 beats, respectively. The third and fourth groups received 3 sessions, at 3500 and 5500 beats per session, respectively. The sections were evaluated under a light microscope to determine subcapsular thickening; subcapsular, intratubular and parenchymal hemorrhage; subcapsular, interstitial, perivascular and proximal ureteral fibrosis; parenchymal necrosis; tubular epithelial vacuolization; tubular atrophy; glomerular destruction and calcification. In histopathological examinations, capsular thickening, subcapsular hematoma, tubuloepithelial vacuolization, glomerular destruction, parenchymal hemorrhage, interstitial fibrosis, and perivascular fibrosis were observed in all groups. In the statistical analysis, perivascular fibrosis and tubular atrophy both showed a beats-per-session-dependent increase. The detrimental effects from ESWL are dose dependent but not cumulative for up to 3 sessions. Histopathological experimental animal studies will aid in understanding local and, by means of these local effects, perhaps systemic effects.
Uncertainty Analysis for DAM Projects.
1987-09-01
overwhelming majority of articles published on the use of statistical methodology for geotechnical engineering focus on performance predictions and design ... Results of the present study do not support the adoption of more esoteric statistical procedures except on a special-case basis or in research ... influence that recommended statistical procedures might have had on the Carters Project, had they been applied during planning and design phases
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
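The positive-semidefiniteness requirement stated above is easy to check numerically for a candidate similarity function; a sketch using a centered linear kernel on made-up genotype dosages (0/1/2 minor-allele counts):

    import numpy as np

    rng = np.random.default_rng(7)
    G = rng.integers(0, 3, size=(50, 200)).astype(float)   # 50 subjects x 200 variants

    # Centered linear kernel: similarity of genotype profiles
    Gc = G - G.mean(axis=0)
    K = Gc @ Gc.T / G.shape[1]

    eigvals = np.linalg.eigvalsh(K)          # symmetric matrix -> real spectrum
    print("min eigenvalue:", eigvals.min())  # >= -1e-10 means PSD in practice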
Annual statistical report 2008 : based on data from CARE/EC
DOT National Transportation Integrated Search
2008-10-31
This Annual Statistical Report provides the basic characteristics of road accidents in 19 member states of the European Union for the period 1997-2006, on the basis of data collected and processed in the CARE database, the Community Road Accident...
Magnetic resonance spectra and statistical geometry
USDA-ARS?s Scientific Manuscript database
Methods of statistical geometry are introduced which allow one to estimate, on the basis of computable criteria, the conditions under which maximally informative data may be collected. We note the important role of constraints that introduce curvature into parameter space and discuss the appropriate...
On basis set superposition error corrected stabilization energies for large n-body clusters.
Walczak, Katarzyna; Friedrich, Joachim; Dolg, Michael
2011-10-07
In this contribution, we propose an approximate basis set superposition error (BSSE) correction scheme for the site-site function counterpoise and for the Valiron-Mayer function counterpoise correction of second order to account for the basis set superposition error in clusters with a large number of subunits. The accuracy of the proposed scheme has been investigated for a water cluster series at the CCSD(T), CCSD, MP2, and self-consistent field levels of theory using Dunning's correlation consistent basis sets. The BSSE corrected stabilization energies for a series of water clusters are presented. A study regarding the possible savings with respect to computational resources has been carried out as well as a monitoring of the basis set dependence of the approximate BSSE corrections. © 2011 American Institute of Physics
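For orientation, the two-body (Boys-Bernardi) function counterpoise correction that schemes such as SSFC and VMFC generalize to n bodies can be written as follows; this is standard background, not the approximate correction proposed in the paper:

```latex
% Counterpoise-corrected interaction energy of a dimer AB: each monomer
% energy is recomputed in the full dimer basis (superscript = basis).
\Delta E_{\mathrm{int}}^{\mathrm{CP}}
  = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB}
```

The BSSE estimate is then the difference between the monomer energies evaluated in the dimer basis and in their own monomer basis.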
Centre of pressure patterns in the golf swing: individual-based analysis.
Ball, Kevin; Best, Russell
2012-06-01
Weight transfer has been identified as important in group-based analyses. The aim of this study was to extend this work by examining the importance of weight transfer in the golf swing on an individual basis. Five professional and amateur golfers performed 50 swings with the driver, hitting a ball into a net. The golfer's centre of pressure position and velocity, parallel with the line of shot, were measured by two force plates at eight swing events that were identified from high-speed video. The relationships between these parameters and club head velocity at ball contact were examined using regression statistics. The results did support the use of group-based analysis, with all golfers returning significant relationships. However, results were also individual-specific, with golfers returning different combinations of significant factors. Furthermore, factors not identified in group-based analysis were significant on an individual basis. The most consistent relationship was a larger weight transfer range associated with a larger club head velocity (p < 0.05). All golfers also returned at least one significant relationship with rate of weight transfer at swing events (p < 0.01). Individual-based analysis should form part of performance-based biomechanical analysis of sporting skills.
NASA Astrophysics Data System (ADS)
Xu, M. L.; Yu, Y.; Ramaswamy, H. S.; Zhu, S. M.
2017-01-01
Chinese liquor aroma components were characterized during the aging process using gas chromatography (GC). Principal component analysis and cluster analysis (PCA, CA) were used to discriminate Chinese liquor age, which has great economic value. Of a total of 21 major aroma components identified and quantified, 13 components, which included several acids, alcohols, esters, aldehydes and furans, decreased significantly in the first year of aging, maintained the same levels (p > 0.05) for the next three years and decreased again (p < 0.05) in the fifth year. On the contrary, a significant increase was observed in propionic acid, furfural and phenylethanol. Ethyl lactate was found to be the most stable aroma component during the aging process. Results of PCA and CA demonstrated that young (fresh) and aged liquors were well separated from each other, which is consistent with the evolution of aroma components along the aging process. These findings provide a quantitative basis for discriminating Chinese liquor age, a scientific basis for further research on elucidating the liquor aging process, and a possible tool to guard against counterfeit and defective products.
Enhanced Component Performance Study: Emergency Diesel Generators 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of emergency diesel generators (EDGs) at U.S. commercial nuclear power plants. This report evaluates component performance over time using (1) Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES) data from 1998 through 2014 and (2) maintenance unavailability (UA) performance data from Mitigating Systems Performance Index (MSPI) Basis Document data from 2002 through 2014. The objective is to show estimates of current failure probabilities and rates related to EDGs, trend these data on an annual basis, determine if the current data are consistent with the probability distributions currently recommended for use in NRC probabilistic risk assessments, show how the reliability data differ for different EDG manufacturers and for EDGs with different ratings, and summarize the subcomponents, causes, detection methods, and recovery associated with each EDG failure mode. Engineering analyses were performed with respect to time period and failure mode without regard to the actual number of EDGs at each plant. The factors analyzed are: sub-component, failure cause, detection method, recovery, manufacturer, and EDG rating. Six trends with varying degrees of statistical significance were identified in the data.
DNA-DNA interaction beyond the ground state
NASA Astrophysics Data System (ADS)
Lee, D. J.; Wynveen, A.; Kornyshev, A. A.
2004-11-01
The electrostatic interaction potential between DNA duplexes in solution is a basis for the statistical mechanics of columnar DNA assemblies. It may also play an important role in recombination of homologous genes. We develop a theory of this interaction that includes thermal torsional fluctuations of DNA using field-theoretical methods and Monte Carlo simulations. The theory extends and rationalizes the earlier suggested variational approach which was developed in the context of a ground state theory of interaction of nonhomologous duplexes. It shows that the heuristic variational theory is equivalent to the Hartree self-consistent field approximation. By comparison of the Hartree approximation with an exact solution based on the QM analogy of path integrals, as well as Monte Carlo simulations, we show that this easily analytically-tractable approximation works very well in most cases. Thermal fluctuations do not remove the ability of DNA molecules to attract each other at favorable azimuthal conformations, neither do they wash out the possibility of electrostatic “snap-shot” recognition of homologous sequences, considered earlier on the basis of ground state calculations. At short distances DNA molecules undergo a “torsional alignment transition,” which is first order for nonhomologous DNA and weaker order for homologous sequences.
Hagberg, James M
2011-09-01
Cardiovascular disease (CVD) and CVD risk factors are highly heritable, and numerous lines of evidence indicate they have a strong genetic basis. While there is nothing known about the interactive effects of genetics and exercise training on CVD itself, there is at least some literature addressing their interactive effect on CVD risk factors. There is some evidence indicating that CVD risk factor responses to exercise training are also heritable and, thus, may have a genetic basis. While roughly 100 studies have reported significant effects of genetic variants on CVD risk factor responses to exercise training, no definitive conclusions can be generated at the present time, because of the lack of consistent and replicated results and the small sample sizes evident in most studies. There is some evidence supporting "possible" candidate genes that may affect these responses to exercise training: APO E and CETP for plasma lipoprotein-lipid profiles; eNOS, ACE, EDN1, and GNB3 for blood pressure; PPARG for type 2 diabetes phenotypes; and FTO and BAR genes for obesity-related phenotypes. However, while genotyping technologies and statistical methods are advancing rapidly, the primary limitation in this field is the need to generate what in terms of exercise intervention studies would be almost incomprehensible sample sizes. Most recent diabetes, obesity, and blood pressure genetic studies have utilized populations of 10,000-250,000 subjects, which result in the necessary statistical power to detect the magnitude of effects that would probably be expected for the impact of an individual gene on CVD risk factor responses to exercise training. Thus at this time it is difficult to see how this field will advance in the future to the point where robust, consistent, and replicated data are available to address these issues. However, the results of recent large-scale genomewide association studies for baseline CVD risk factors may drive future hypothesis-driven exercise training intervention studies in smaller populations addressing the impact of specific genetic variants on well-defined physiological phenotypes.
Li, Jun; Xie, Changjian; Guo, Hua
2017-08-30
A full-dimensional accurate potential energy surface (PES) for the C(3P) + H2O reaction is developed based on ∼34 000 data points calculated at the level of the explicitly correlated unrestricted coupled cluster method with single, double, and perturbative triple excitations with the augmented correlation-consistent polarized triple zeta basis set (CCSD(T)-F12a/AVTZ). The PES is invariant with respect to the permutation of the two hydrogen atoms, and the total root mean square error (RMSE) of the fit is only 0.31 kcal/mol. The PES features two barriers in the entrance channel and several potential minima, as well as multiple product channels. The rate coefficients of this reaction calculated using transition-state theory and the quasi-classical trajectory (QCT) method are small near room temperature, consistent with experiments. The reaction dynamics is also investigated with QCT on the new PES, which shows that the reactivity is constrained by the entrance barriers and that the final product branching is not statistical.
Diesel exposure and mortality among railway workers: results of a pilot study.
Schenker, M B; Smith, T; Muñoz, A; Woskie, S; Speizer, F E
1984-01-01
A pilot study of the mortality of railway workers was undertaken to evaluate the feasibility of studying the association of exposure to diesel exhaust and cause specific mortality. The cohort consisted of 2519 white male subjects aged 45-64 with at least 10 years of railway service by 1967. Subjects were selected on the basis of job classification, and cause specific mortality was ascertained for subjects who died (n = 501) up to 1979. The total follow-up period was 28,400 person-years. The standardised mortality ratio (SMR) for the cohort, based on United States national rates, was 87 (95% confidence limits 80, 95), and there were no significant differences from the expected number of deaths for any specific neoplasm. The directly standardised rate ratio for respiratory cancer among diesel exposed subjects relative to unexposed subjects was 1.42 ± 0.50 (mean ± SE). A proportional hazards model was consistent with the findings of the standardised rate ratio, but in neither analysis was the increased risk of respiratory cancer in diesel exposed subjects statistically significant. PMID:6743578
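The reported confidence limits can be reproduced with a short sketch (numbers taken from the abstract; the large-sample Poisson approximation used here is a standard device, not necessarily the authors' exact method):

```python
import math

observed = 501                       # deaths in the cohort
smr = 87.0                           # reported SMR = observed/expected * 100
expected = observed / (smr / 100)    # ~576 expected deaths

# Poisson approximation: observed counts have SD ~ sqrt(observed).
half_width = 1.96 * math.sqrt(observed) / expected * 100
print(f"expected deaths ~ {expected:.0f}")
print(f"95% CI ~ ({smr - half_width:.0f}, {smr + half_width:.0f})")
# -> roughly (79, 95), close to the reported limits (80, 95)
```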
NASA Astrophysics Data System (ADS)
Delle Site, Luigi
2018-01-01
A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential equal to that of the corresponding (large) bulk system treated at the full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with the corresponding definition of numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.
OPERATIONAL EXPERIENCE WITH BEAM ABORT SYSTEM FOR SUPERCONDUCTING UNDULATOR QUENCH MITIGATION*
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harkay, Katherine C.; Dooling, Jeffrey C.; Sajaev, Vadim
A beam abort system has been implemented in the Advanced Photon Source storage ring. The abort system works in tandem with the existing machine protection system (MPS), and its purpose is to control the beam loss location and, thereby, minimize beam loss-induced quenches at the two superconducting undulators (SCUs). The abort system consists of a dedicated horizontal kicker designed to kick out all the bunches in a few turns after being triggered by MPS. The abort system concept was developed on the basis of single- and multi-particle tracking simulations using elegant and bench measurements of the kicker pulse. Performance of the abort system—kick amplitudes and loss distributions of all bunches—was analyzed using beam position monitor (BPM) turn histories, and agrees reasonably well with the model. Beam loss locations indicated by the BPMs are consistent with the fast fiber-optic beam loss monitor (BLM) diagnostics described elsewhere [1,2]. Operational experience with the abort system, various issues that were encountered, limitations of the system, and quench statistics are described.
NASA Astrophysics Data System (ADS)
Martin, Jan M. L.; Sundermann, Andreas
2001-02-01
We propose large-core correlation-consistent (cc) pseudopotential basis sets for the heavy p-block elements Ga-Kr and In-Xe. The basis sets are of cc-pVTZ and cc-pVQZ quality, and have been optimized for use with the large-core (valence-electrons only) Stuttgart-Dresden-Bonn (SDB) relativistic pseudopotentials. Validation calculations on a variety of third-row and fourth-row diatomics suggest them to be comparable in quality to the all-electron cc-pVTZ and cc-pVQZ basis sets for lighter elements. Especially the SDB-cc-pVQZ basis set in conjunction with a core polarization potential (CPP) yields excellent agreement with experiment for compounds of the later heavy p-block elements. For accurate calculations on Ga (and, to a lesser extent, Ge) compounds, explicit treatment of 13 valence electrons appears to be desirable, while it seems inevitable for In compounds. For Ga and Ge, we propose correlation consistent basis sets extended for (3d) correlation. For accurate calculations on organometallic complexes of interest to homogeneous catalysis, we recommend a combination of the standard cc-pVTZ basis set for first- and second-row elements, the presently derived SDB-cc-pVTZ basis set for heavier p-block elements, and for transition metals, the small-core [6s5p3d] Stuttgart-Dresden basis set-relativistic effective core potential combination supplemented by (2f1g) functions with exponents given in the Appendix to the present paper.
Institutional Image Indicators of Three Universities: Basis for Attracting Prospective Entrants
ERIC Educational Resources Information Center
Bringula, Rex P.; Basa, Roselle S.
2011-01-01
This study determined the student profile and enrollment of the three Universities in the University Belt. It also found out the respondents' level of consideration concerning the institutional image indicators that served as basis for attracting prospective entrants. Descriptive statistics revealed the following: most of the respondents belonged…
Directory of Agencies Collecting Statistical Data from College & University Libraries.
ERIC Educational Resources Information Center
LaBrake, Lynn B., Ed.
This directory of organizations and agencies that survey academic libraries for statistical information on a regular basis includes 104 organizations representing state and federal agencies, college and university administrative bodies, accrediting organizations, all types of library organizations and associations, and publishers. The directory…
The validity of multiphase DNS initialized on the basis of single--point statistics
NASA Astrophysics Data System (ADS)
Subramaniam, Shankar
1999-11-01
A study of the point-process statistical representation of a spray reveals that single-point statistical information contained in the droplet distribution function (ddf) is related to a sequence of single surrogate-droplet pdf's, which are in general different from the physical single-droplet pdf's. The results of this study have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the average number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets.
Linking consistency with object/thread semantics - An approach to robust computation
NASA Technical Reports Server (NTRS)
Chen, Raymond C.; Dasgupta, Partha
1989-01-01
This paper presents an object/thread based paradigm that links data consistency with object/thread semantics. The paradigm can be used to achieve a wide range of consistency semantics from strict atomic transactions to standard process semantics. The paradigm supports three types of data consistency. Object programmers indicate the type of consistency desired on a per-operation basis and the system performs automatic concurrency control and recovery management to ensure that those consistency requirements are met. This allows programmers to customize consistency and recovery on a per-application basis without having to supply complicated, custom recovery management schemes. The paradigm allows robust and nonrobust computation to operate concurrently on the same data in a well defined manner. The operating system needs to support only one vehicle of computation - the thread.
ERIC Educational Resources Information Center
Nicholson, James; Ridgway, Jim
2017-01-01
White and Gorard make important and relevant criticisms of some of the methods commonly used in social science research, but go further by criticising the logical basis for inferential statistical tests. This paper comments briefly on matters where we broadly agree with them and more fully on matters where we disagree. We agree that too little…
Appraising the self-assessed support needs of Turkish women with breast cancer.
Erci, B; Karabulut, N
2007-03-01
The purposes of this study were to establish the range of needs of women with breast cancer and to examine how women's needs might form clusters that could provide the basis for developing a standardized scale of needs for use by local breast care nurses in the evaluation of care. The sample consisted of 143 women with breast cancer who were admitted to the outpatient and inpatient oncology clinics in a university hospital in Erzurum, Turkey. The data were collected by questionnaire, and included demographic characteristics and the self-assessed support needs of women with breast cancer. Statistical analyses have shown that the standardized scale of needs has statistically acceptable levels of reliability and validity. The women's support needs mostly clustered in Family and Friends (79%) and After Care (78.3%). The most frequently required support category was Family and Friends; however, the women were in need of support in all categories. In terms of age ranges, there were statistically significant differences in two of the seven categories: Femininity and Body Image, and Family and Friends. Women experienced a high level of needs associated with a diagnosis of breast cancer. The results of this study should increase awareness among cancer care professionals about a range of psychosocial needs and may help them target particular patient groups for particular support interventions.
MRI textures as outcome predictor for Gamma Knife radiosurgery on vestibular schwannoma
NASA Astrophysics Data System (ADS)
Langenhuizen, P. P. J. H.; Legters, M. J. W.; Zinger, S.; Verheul, H. B.; Leenstra, S.; de With, P. H. N.
2018-02-01
Vestibular schwannomas (VS) are benign brain tumors that can be treated with high-precision focused radiation with the Gamma Knife in order to stop tumor growth. Outcome prediction of Gamma Knife radiosurgery (GKRS) treatment can help in determining whether GKRS will be effective on an individual patient basis. However, at present, prognostic factors of tumor control after GKRS for VS are largely unknown, and only clinical factors, such as size of the tumor at treatment and pre-treatment growth rate of the tumor, have been considered thus far. This research aims at outcome prediction of GKRS by means of quantitative texture feature analysis on conventional MRI scans. We compute first-order statistics and features based on gray-level co-occurrence (GLCM) and run-length matrices (RLM), and employ support vector machines and decision trees for classification. In a clinical dataset, consisting of 20 tumors showing treatment failure and 20 tumors exhibiting treatment success, we have discovered that the second-order statistical metrics distilled from GLCM and RLM are suitable for describing texture, but are slightly outperformed by simple first-order statistics, like mean, standard deviation and median. The obtained prediction accuracy is about 85%, but a final choice of the best feature can only be made after performing more extensive analyses on larger datasets. In any case, this work provides suitable texture measures for successful prediction of GKRS treatment outcome for VS.
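The GLCM features mentioned above can be computed with scikit-image; the following is a generic sketch (random stand-in data, not the authors' pipeline, and assuming scikit-image >= 0.19 for the graycomatrix/graycoprops names):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Stand-in for an MRI tumor patch: an 8-bit grayscale image.
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Gray-level co-occurrence matrix at distance 1 in four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Second-order texture statistics of the kind compared in the study,
# next to simple first-order ones (mean, standard deviation, median).
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
print("first-order:", patch.mean(), patch.std(), np.median(patch))
```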
NASA Technical Reports Server (NTRS)
Kim, Hakil; Swain, Philip H.
1990-01-01
An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, representation of statistical evidence and combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated. It entails more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. Then the method is applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on the global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
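Combination of multiple bodies of evidence can be illustrated with Dempster's rule, the classical operation that IV-probability approaches refine; the sketch below (hypothetical sensor masses) is generic background, not the paper's formulation:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule for two mass functions over a frame of
    discernment; focal sets are frozensets, masses sum to 1."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q          # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical evidence from two sources about a pixel's ground cover.
F, W = frozenset({"forest"}), frozenset({"water"})
FW = F | W                             # ignorance: "forest or water"
m_radar = {F: 0.6, FW: 0.4}
m_scanner = {F: 0.5, W: 0.2, FW: 0.3}
print(combine(m_radar, m_scanner))
```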
Advanced microwave soil moisture studies. [Big Sioux River Basin, Iowa
NASA Technical Reports Server (NTRS)
Dalsted, K. J.; Harlan, J. C.
1983-01-01
Comparisons of low level L-band brightness temperature (TB) and thermal infrared (TIR) data as well as the following data sets: soil map and land cover data; direct soil moisture measurement; and a computer generated contour map were statistically evaluated using regression analysis and linear discriminant analysis. Regression analysis of footprint data shows that statistical groupings of ground variables (soil features and land cover) hold promise for qualitative assessment of soil moisture and for reducing variance within the sampling space. Dry conditions appear to be more conducive to producing meaningful statistics than wet conditions. Regression analysis using field averaged TB and TIR data did not approach the higher R^2 values obtained using within-field variations. The linear discriminant analysis indicates some capacity to distinguish categories with the results being somewhat better on a field basis than a footprint basis.
Molecular Dynamics Simulation of the Antiamoebin Ion Channel: Linking Structure and Conductance
NASA Technical Reports Server (NTRS)
Wilson, Michael A.; Wei, Chenyu; Bjelkmar, Paer; Wallace, B. A.; Pohorille, Andrew
2011-01-01
Molecular dynamics simulations were carried out in order to ascertain which of the potential multimeric forms of the transmembrane peptaibol channel, antiamoebin, is consistent with its measured conductance. Estimates of the conductance obtained through counting ions that cross the channel and by solving the Nernst-Planck equation yield consistent results, indicating that the motion of ions inside the channel can be satisfactorily described as diffusive. The calculated conductance of octameric channels is markedly higher than the conductance measured in single channel recordings, whereas the tetramer appears to be non-conducting. The conductance of the hexamer was estimated to be 115 ± 34 pS and 74 ± 20 pS, at 150 mV and 75 mV, respectively, in satisfactory agreement with the value of 90 pS measured at 75 mV. On this basis we propose that the antiamoebin channel consists of six monomers. Its pore is large enough to accommodate K(+) and Cl(-) with their first solvation shells intact. The free energy barrier encountered by K(+) is only 2.2 kcal/mol whereas Cl(-) encounters a substantially higher barrier of nearly 5 kcal/mol. This difference makes the channel selective for cations. Ion crossing events are shown to be uncorrelated and follow Poisson statistics. Keywords: ion channels, peptaibols, channel conductance, molecular dynamics.
Duning, Thomas; Kellinghaus, Christoph; Mohammadi, Siawoosh; Schiffbauer, Hagen; Keller, Simon; Ringelstein, E Bernd; Knecht, Stefan; Deppe, Michael
2010-02-01
Conventional structural MRI fails to identify a cerebral lesion in 25% of patients with cryptogenic partial epilepsy (CPE). Diffusion tensor imaging is an MRI technique sensitive to microstructural abnormalities of cerebral white matter (WM) by quantification of fractional anisotropy (FA). The objectives of the present study were to identify focal FA abnormalities in patients with CPE who were deemed MRI negative during routine presurgical evaluation. Diffusion tensor imaging at 3 T was performed in 12 patients with CPE and normal conventional MRI and in 67 age matched healthy volunteers. WM integrity was compared between groups on the basis of automated voxel-wise statistics of FA maps using an analysis of covariance. Volumetric measurements from high resolution T1-weighted images were also performed. Significant FA reductions in WM regions encompassing diffuse areas of the brain were observed when all patients as a group were compared with controls. On an individual basis, voxel based analyses revealed widespread symmetrical FA reduction in CPE patients. Furthermore, asymmetrical temporal lobe FA reduction was consistently ipsilateral to the electroclinical focus. No significant correlations were found between FA alterations and clinical data. There were no differences in brain volumes of CPE patients compared with controls. Despite normal conventional MRI, WM integrity abnormalities in CPE patients extend far beyond the epileptogenic zone. Given that unilateral temporal lobe FA abnormalities were consistently observed ipsilateral to the seizure focus, analysis of temporal FA may provide an informative in vivo investigation into the localisation of the epileptogenic zone in MRI negative patients.
Application of a truncated normal failure distribution in reliability testing
NASA Technical Reports Server (NTRS)
Groves, C., Jr.
1968-01-01
Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
Reactive intermediates in 4He nanodroplets: Infrared laser Stark spectroscopy of dihydroxycarbene
NASA Astrophysics Data System (ADS)
Broderick, Bernadette M.; McCaslin, Laura; Moradi, Christopher P.; Stanton, John F.; Douberly, Gary E.
2015-04-01
Singlet dihydroxycarbene (HOC̈OH) is produced via pyrolytic decomposition of oxalic acid, captured by helium nanodroplets, and probed with infrared laser Stark spectroscopy. Rovibrational bands in the OH stretch region are assigned to either trans,trans- or trans,cis-rotamers on the basis of symmetry type, nuclear spin statistical weights, and comparisons to electronic structure theory calculations. Stark spectroscopy provides the inertial components of the permanent electric dipole moments for these rotamers. The dipole components for trans,trans- and trans,cis-rotamers are (μa, μb) = (0.00, 0.68(6)) and (1.63(3), 1.50(5)), respectively. The infrared spectra lack evidence for the higher energy cis,cis-rotamer, which is consistent with a previously proposed pyrolytic decomposition mechanism of oxalic acid and computations of HOC̈OH torsional interconversion and tautomerization barriers.
Zuend, Stephan J; Jacobsen, Eric N
2009-10-28
An experimental and computational investigation of amido-thiourea promoted imine hydrocyanation has revealed a new and unexpected mechanism of catalysis. Rather than direct activation of the imine by the thiourea, as had been proposed previously in related systems, the data are consistent with a mechanism involving catalyst-promoted proton transfer from hydrogen isocyanide to imine to generate diastereomeric iminium/cyanide ion pairs that are bound to catalyst through multiple noncovalent interactions; these ion pairs collapse to form the enantiomeric alpha-aminonitrile products. This mechanistic proposal is supported by the observation of a statistically significant correlation between experimental and calculated enantioselectivities induced by eight different catalysts (P < 0.01). The computed models reveal a basis for enantioselectivity that involves multiple stabilizing and destabilizing interactions between substrate and catalyst, including thiourea-cyanide and amide-iminium interactions.
Jones, Lyndon; MacDougall, Nancy; Sorbara, L Gina
2002-12-01
To compare subjective symptoms and signs in a group of individuals who wear silicone-hydrogel lenses on a daily wear basis while they sequentially used two differing care regimens. Fifty adapted soft-lens wearers were fitted with a silicone-hydrogel lens material (PureVision, Bausch & Lomb). The lenses were worn on a daily wear basis for two consecutive 1-month periods, during which the subjects used either a Polyquad (polyquaternium-1) -based system or a polyaminopropyl biguanide (PHMB) -based system, using a double-masked, randomized, crossover experimental design. Significant levels of relatively asymptomatic corneal staining were observed when subjects used the PHMB-based system, with 37% of subjects demonstrating a level of staining consistent with a classical solution-based toxicity reaction. Only 2% of the subjects exhibited such staining when using the Polyquad-based system. These results were significantly different (p < 0.001). Significant symptoms were not correlated with the degree of staining, with no differences in lens comfort or overall preference being reported between the regimens (p = NS). The only statistically significant difference in symptoms related to minor differences in stinging after lens insertion being reported, with the Polyquad-based system demonstrating less stinging (p < 0.008). Practitioners who fit silicone-hydrogel contact lenses on a daily wear basis should be wary of the potential for certain PHMB-containing multipurpose care systems to invoke corneal staining. Switching to non-PHMB based regimens will eliminate this complication in most instances.
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
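The Jeffreys interval preferred above is straightforward to compute: it is the equal-tailed credible interval under the Beta(1/2, 1/2) prior. A minimal sketch (SciPy, with made-up counts):

```python
from scipy.stats import beta

def jeffreys_interval(k, n, alpha=0.05):
    """Jeffreys interval for a binomial proportion from k successes
    in n trials, with the usual boundary modification at k=0 or k=n."""
    lo = beta.ppf(alpha / 2, k + 0.5, n - k + 0.5) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 0.5, n - k + 0.5) if k < n else 1.0
    return lo, hi

# Example: 7 of 60 equal-area cells contain a high-concentration sample.
print(jeffreys_interval(7, 60))
```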
Statistical Inference for Big Data Problems in Molecular Biophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Savol, Andrej; Burger, Virginia
2012-01-01
We highlight the role of statistical inference techniques in providing biological insights from analyzing long time-scale molecular simulation data. Technological and algorithmic improvements in computation have brought molecular simulations to the forefront of techniques applied to investigating the basis of living systems. While these longer, increasingly complex simulations, presently reaching petabyte scales, promise a detailed view into microscopic behavior, teasing out the important information has now become a true challenge on its own. Mining this data for important patterns is critical to automating therapeutic intervention discovery, improving protein design, and fundamentally understanding the mechanistic basis of cellular homeostasis.
Fatigue of graphite/epoxy buffer strip panels with center cracks
NASA Technical Reports Server (NTRS)
Bigelow, C. A.
1985-01-01
The effects of fatigue loading on the behavior of graphite/epoxy panels with either S-Glass or Kevlar-49 buffer strips are studied. Buffer strip panels are fatigued and tested in tension to measure their residual strength with crack-like damage. Panels are made with a [45/0/-45/90_2]_s layup with either S-Glass or Kevlar-49 buffer strip material. The buffer strips are parallel to the loading direction and made by replacing narrow strips of the 0-degree graphite plies with strips of either 0-degree S-Glass/epoxy or Kevlar-49/epoxy on a one-for-one basis. The panels are subjected to a fatigue loading spectrum MINITWIST, the shortened version of the standardized load program for the wing lower surface of a transport aircraft. Two levels of maximum strain are used in the spectrum with three durations of the fatigue spectrum. One group of panels is preloaded prior to the application of the fatigue cycling. The preload consists of statically loading the specimens in tension until the crack-tip damage zone reaches the adjacent buffer strips. After fatigue loading, all specimens are statically loaded in tension to failure to determine their residual strengths.
Enhanced Bio-Ethanol Production from Industrial Potato Waste by Statistical Medium Optimization
Izmirlioglu, Gulten; Demirci, Ali
2015-01-01
Industrial wastes are of great interest as a substrate in production of value-added products to reduce cost, while managing the waste economically and environmentally. Bio-ethanol production from industrial wastes has gained attention because of its abundance, availability, and rich carbon and nitrogen content. In this study, industrial potato waste was used as a carbon source and a medium was optimized for ethanol production by using statistical designs. The effect of various medium components on ethanol production was evaluated. Yeast extract, malt extract, and MgSO4·7H2O showed significantly positive effects, whereas KH2PO4 and CaCl2·2H2O had a significantly negative effect (p-value < 0.05). Using response surface methodology, a medium consisting of 40.4 g/L (dry basis) industrial waste potato, 50 g/L malt extract, and 4.84 g/L MgSO4·7H2O was found optimal and yielded 24.6 g/L ethanol at 30 °C, 150 rpm, and 48 h of fermentation. In conclusion, this study demonstrated that industrial potato waste can be used effectively to enhance bioethanol production. PMID:26501261
Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23
NASA Technical Reports Server (NTRS)
Willson, Robert M.; Hathaway, David H.
2004-01-01
On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (±4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (±4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
Gridded national inventory of U.S. methane emissions
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...
2016-11-16
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
NASA Technical Reports Server (NTRS)
Bell, Thomas
2007-01-01
Every week the U.S. population carries out a climate-change experiment by varying their activities with the day of the week. It is well documented that pollution levels vary on a weekly basis. Particulate aerosol pollution is generally a maximum in the middle of the week and a minimum on weekends. It is also well known that aerosols can affect precipitation, although whether they suppress or enhance storm development depends on many factors. The Tropical Rainfall Measuring Mission (TRMM) satellite has provided evidence that rain statistics change with the day of the week over the southeast U.S. and neighboring waters during the summer months (JJA) of 1998-2005. There is a midweek increase in both rain area and intensity over land, and a midweek decrease over the nearby Atlantic and perhaps the Gulf of Mexico. Statistical tests suggest that the weekly variations are very unlikely to be due to the random behavior of weather. We will discuss the TRMM evidence. Wind data from model reanalysis, rain-gauge data, and TRMM radar data all appear to be consistent with the picture that aerosols are causing summertime storms to grow more vigorously and to produce more rainfall.
Gridded National Inventory of U.S. Methane Emissions.
Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L
2016-12-06
We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
CAPACITY BUILDING PROCESS IN ENVIRONMENTAL AND HEALTH IMPACT ASSESSMENT FOR A THAI COMMUNITY.
Chaithui, Suthat; Sithisarankul, Pornchai; Hengpraprom, Sarunya
2017-03-01
This research aimed at exploring the development of the capacity-building process in environmental and health impact assessment, including the consideration of subsequent capacity-building achievements. Data were gathered through questionnaires, participatory observations, in-depth interviews, focus group discussions, and capacity-building checklist forms. These data were analyzed using content analysis, descriptive statistics, and inferential statistics. Our study used the components of the final draft for capacity-building processes consisting of ten steps that were formulated by synthesis from each respective process. Additionally, the evaluation of capacity-building levels was performed using 10-item evaluation criteria for nine communities. The results indicated that the communities performed well under these criteria. Finally, exploration of the factors influencing capacity building in environmental and health impact assessment indicated that the learning of community members by knowledge exchange via activities and study visits was the most influential factor in the capacity-building processes in environmental and health impact assessment. The final revised version of the capacity-building process in environmental and health impact assessment could serve as a basis for the consideration of interventions in similar areas, thereby increasing capacity in environmental and health impact assessments.
A Novel Signal Modeling Approach for Classification of Seizure and Seizure-Free EEG Signals.
Gupta, Anubha; Singh, Pushpendra; Karlekar, Mandar
2018-05-01
This paper presents a new signal-modeling-based methodology for automatic seizure detection in EEG signals. The proposed method consists of three stages. First, a multirate filterbank structure is proposed that is constructed using the basis vectors of the discrete cosine transform. The proposed filterbank decomposes EEG signals into the respective brain rhythms: delta, theta, alpha, beta, and gamma. Second, these brain rhythms are statistically modeled with the class of self-similar Gaussian random processes, namely, fractional Brownian motion and fractional Gaussian noise. The statistics of these processes are modeled using a single parameter called the Hurst exponent. In the last stage, the value of the Hurst exponent and autoregressive moving average parameters are used as features to design a binary support vector machine classifier to classify pre-ictal, inter-ictal (epileptic with seizure-free interval), and ictal (seizure) EEG segments. The performance of the classifier is assessed via extensive analysis on two widely used data sets and is observed to provide good accuracy on both. Thus, this paper proposes a novel signal model for EEG data that best captures the attributes of these signals and hence helps boost the classification accuracy of seizure and seizure-free epochs.
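For the self-similarity stage, the Hurst exponent can be estimated in several standard ways; a sketch of the aggregated-variance estimator (one common choice, not necessarily the authors') follows:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: for fractional Gaussian
    noise, Var(block means) ~ m**(2H - 2), so the log-log slope of
    variance against block size m equals 2H - 2."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(2)
print(hurst_aggvar(rng.standard_normal(4096)))  # white noise: H ~ 0.5
```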
A statistical investigation into the stability of iris recognition in diverse population sets
NASA Astrophysics Data System (ADS)
Howard, John J.; Etter, Delores M.
2014-05-01
Iris recognition is increasingly being deployed on population-wide scales for important applications such as border security, social service administration, criminal identification and general population management. The error rates for this incredibly accurate form of biometric identification are established using well known, laboratory quality datasets. However, it has long been acknowledged in biometric theory that not all individuals have the same likelihood of being correctly serviced by a biometric system. Typically, techniques for identifying clients that are likely to experience a false non-match or a false match error are carried out on a per-subject basis. This research makes the novel hypothesis that certain ethnic groups are more or less likely to experience a biometric error. Through established statistical techniques, we demonstrate this hypothesis to be true and document the notable effect that the ethnicity of the client has on iris similarity scores. Understanding the expected impact of ethnic diversity on iris recognition accuracy is crucial to the future success of this technology as it is deployed in areas where the target population consists of clientele from a range of geographic backgrounds, such as border crossings and immigration check points.
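A difference in similarity-score distributions between two demographic groups can be probed with a standard nonparametric test; the sketch below uses synthetic scores and is a generic illustration, not the study's data or exact technique:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)
# Synthetic genuine-match similarity scores for two cohorts, with a
# small shift standing in for a group effect.
scores_a = rng.normal(0.30, 0.05, size=500)
scores_b = rng.normal(0.32, 0.05, size=500)

stat, p = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.2g}")  # small p: distributions differ
```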
Mattfeldt, Torsten
2011-04-01
Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
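A percentile-bootstrap confidence interval of the kind described here takes only a few lines; a generic sketch (not tied to point-pattern data):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.exponential(scale=2.0, size=40)   # stand-in observations

# Resample with replacement and recompute the statistic many times.
boots = np.array([rng.choice(data, size=data.size, replace=True).mean()
                  for _ in range(10_000)])

# 95% percentile interval for the mean.
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```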
31 CFR 9.5 - Applications for investigation.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., both past and current. (c) Statistical material presented should be on a calendar-year basis for... domestic industry concerned with the article in question. (4) Pertinent statistics showing the quantities... competition created by imports of the article in question. (6) The effect, if any, of imports of the article...
40 CFR 51.364 - Enforcement against contractors, stations and inspectors.
Code of Federal Regulations, 2010 CFR
2010-07-01
... suspend or revoke the station or inspector license within three station business days of the finding. (2..., revocations, and violations and shall compile statistics on violations and penalties on an annual basis. (d... approved by the Administrator. Statistical process control shall be used whenever possible to demonstrate...
Farkle Fundamentals and Fun. Activities for Students
ERIC Educational Resources Information Center
Hooley, Donald E.
2014-01-01
The dice game Farkle provides an excellent basis for four activities that reinforce probability and expected value concepts for students in an introductory statistics class. These concepts appear in the increasingly popular AP statistics course (Peck 2011) and are used in analyzing ethical issues from insurance and gambling (COMAP 2009; Woodward…
The Forest Survey Organization Central States Forest Experiment Station
1956-01-01
This report contains forest area and timber volume statistics for the State of Iowa. The information presented here was gathered and compiled according to three different geographical units, the divisions being made on the basis of similar forest, soil, and economic conditions (frontispiece). So, for the benefit of those who might find such localized information useful...
ERIC Educational Resources Information Center
Foster, Emily M.
1942-01-01
The U.S. Office of Education is required by law to collect statistics to show the condition and progress of education. Statistics can be made available, on a national scale, to the extent that school administrators, principals, and college officials cooperate on a voluntary basis with the Office of Education in making the facts available. This…
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 Commerce and Foreign Trade 1 2011-01-01 2011-01-01 false Fee structure for statistics for city... SERVICES AND STUDIES BY THE BUREAU OF THE CENSUS § 50.40 Fee structure for statistics for city blocks in... for each city block, drawn from the subjects which are being covered on a 100-percent basis. For these...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Fee structure for statistics for city... SERVICES AND STUDIES BY THE BUREAU OF THE CENSUS § 50.40 Fee structure for statistics for city blocks in... for each city block, drawn from the subjects which are being covered on a 100-percent basis. For these...
1982-06-01
usefulness to the United States Antarctic mission as managed by the National Science Foundation. Various statistical measures were applied to the reported... statistical procedures that would evolve a general meteorological picture of each of these remote sites. Primary texts used as a basis for...processed by station for monthly, seasonal and annual statistics, as appropriate. The following outlines the evaluations completed for both
The basis function approach for modeling autocorrelation in ecological data
Hefley, Trevor J.; Broms, Kristin M.; Brost, Brian M.; Buderman, Frances E.; Kay, Shannon L.; Scharf, Henry; Tipton, John; Williams, Perry J.; Hooten, Mevin B.
2017-01-01
Analyzing ecological data often requires modeling the autocorrelation created by spatial and temporal processes. Many seemingly disparate statistical methods used to account for autocorrelation can be expressed as regression models that include basis functions. Basis functions also enable ecologists to modify a wide range of existing ecological models in order to account for autocorrelation, which can improve inference and predictive accuracy. Furthermore, understanding the properties of basis functions is essential for evaluating the fit of spatial or time-series models, detecting a hidden form of collinearity, and analyzing large data sets. We present important concepts and properties related to basis functions and illustrate several tools and techniques ecologists can use when modeling autocorrelation in ecological data.
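A minimal sketch of the idea (Gaussian radial basis functions standing in for the many basis choices the paper reviews; hypothetical data): smooth basis columns added to an ordinary regression design matrix absorb temporal autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 200)                      # sampling times
y = np.sin(t) + 0.3 * rng.standard_normal(t.size)    # smooth signal + noise

# Gaussian radial basis functions centered on a grid of knots.
knots = np.linspace(0.0, 10.0, 12)
B = np.exp(-0.5 * ((t[:, None] - knots[None, :]) / 1.0) ** 2)

# Design matrix: intercept plus basis columns; ordinary least squares.
X = np.column_stack([np.ones_like(t), B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("residual SD:", (y - X @ coef).std())
```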
Basis sets for the calculation of core-electron binding energies
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-05-01
Core-electron binding energies (CEBEs) computed within a Δ self-consistent field approach require large basis sets to achieve convergence with respect to the basis set limit. It is shown that supplementing a basis set with basis functions from the corresponding basis set for the element with the next highest nuclear charge (Z + 1) provides basis sets that give CEBEs close to the basis set limit. This simple procedure provides relatively small basis sets that are well suited for calculations where the description of a core-ionised state is important, such as time-dependent density functional theory calculations of X-ray emission spectroscopy.
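A sketch of the (Z+1) supplementation recipe in PySCF follows; this is an assumed illustration (the function names are PySCF's, but the paper does not prescribe an implementation), with oxygen's basis augmented by the fluorine functions of the same family:

```python
# Sketch only: assumes PySCF is installed and accepts a concatenation
# of two basis definitions in its internal list format.
from pyscf import gto

# cc-pVDZ functions for O, supplemented with cc-pVDZ functions of the
# Z+1 element (F), per the recipe described above.
o_plus_zp1 = gto.basis.load("cc-pvdz", "O") + gto.basis.load("cc-pvdz", "F")

mol = gto.M(
    atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24",  # rough water geometry
    basis={"O": o_plus_zp1, "H": "cc-pvdz"},
)
print("number of basis functions:", mol.nao)
```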
Curtivo, Cátia Panizzon Dal; Funghi, Nathália Bitencourt; Tavares, Guilherme Diniz; Barbosa, Sávio Fujita; Löbenberg, Raimar; Bou-Chacra, Nádia Araci
2015-05-01
In this work, a near-infrared spectroscopy (NIRS) method was used to evaluate the uniformity of dosage units in three commercial batches of captopril 25 mg tablets. The performance of the calibration method was assessed by determination of the Q value (0.9986), the standard error of estimation (C-set SEE = 1.956), the standard error of prediction (V-set SEP = 2.076), and the consistency (106.1%). These results indicated the adequacy of the selected model. The method validation revealed agreement between the reference high-pressure liquid chromatography (HPLC) method and the NIRS method. The process evaluation using the NIRS method showed that the variability was due to common causes and delivered predictable results consistently. Cp and Cpk values were, respectively, 2.05 and 1.80. These results revealed a non-centered process in relation to the target average (100% w/w) within the specified range (85-115%). The probability of failure was 21 in 100 million captopril tablets. NIRS, in combination with multivariate calibration by partial least squares (PLS) regression, allowed the development of a methodology for evaluating the uniformity of dosage units of captopril 25 mg tablets. The statistical process control strategy associating the NIRS method with PAT played a critical role in understanding the sources and degree of variation and their impact on the process. This approach led towards better process understanding and provided a sound scientific basis for its continuous improvement.
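For reference, the quoted capability indices follow their standard definitions; in the sketch below the mean and standard deviation are hypothetical values chosen to reproduce figures of the same magnitude, not numbers taken from the study:

```python
# Process-capability indices with the study's specification limits (85-115%).
# mu and sigma are illustrative, not figures from the paper.
def capability(mu, sigma, lsl=85.0, usl=115.0):
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # penalizes off-center processes
    return cp, cpk

cp, cpk = capability(mu=101.8, sigma=2.44)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # ~2.05 and ~1.80: capable but off-center
```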
Molnar, Petra
2006-01-01
The skeletal remains from the Middle Neolithic (2750-2300 BC) burial ground at Ajvide, Gotland, are analyzed in order to explore musculoskeletal patterns and to attempt to trace general as well as three specific prehistoric activities (archery, harpooning, and kayaking) that are likely to have been performed in this marine setting of fishing, hunting, and gathering. Muscular and ligament attachments are scored using the method of Hawkey and Merbs ([1995] Int. J. Osteoarchaeol. 5:324-338) for musculoskeletal stress markers (MSM). The skeletal material consists of 24 male and 15 female adult individuals divided into three age groups: young (<24 years), middle (25-39 years), and old (>40 years). Thirty upper body MSM sites, on both the left and right sides, are scored and form the basis of the study. Results show that males most frequently have higher mean MSM scores than females. Bilateral asymmetry was low in both sexes. Age proved to be a contributing factor to increased MSM scores, with a greater age-related increase in females. MSM patterns were analyzed statistically in muscle groups associated with the three investigated activities. Significant positive correlations were observed in male individuals in muscle groups associated with archery and, to some extent, harpooning, an indication that these activities would mainly have been performed by men. Correlations in kayaking muscles were not evidently consistent with the kayaking motion. Furthermore, the costoclavicular ligament, often referred to in connection with "kayaker's clavicle," showed no positive statistical correlation with the kayaking muscles.
Examining the effects of birth order on personality
Rohrer, Julia M.; Egloff, Boris; Schmukle, Stefan C.
2015-01-01
This study examined the long-standing question of whether a person’s position among siblings has a lasting impact on that person’s life course. Empirical research on the relation between birth order and intelligence has convincingly documented that performances on psychometric intelligence tests decline slightly from firstborns to later-borns. By contrast, the search for birth-order effects on personality has not yet resulted in conclusive findings. We used data from three large national panels from the United States (n = 5,240), Great Britain (n = 4,489), and Germany (n = 10,457) to resolve this open research question. This database allowed us to identify even very small effects of birth order on personality with sufficiently high statistical power and to investigate whether effects emerge across different samples. We furthermore used two different analytical strategies by comparing siblings with different birth-order positions (i) within the same family (within-family design) and (ii) between different families (between-family design). In our analyses, we confirmed the expected birth-order effect on intelligence. We also observed a significant decline of a 10th of a SD in self-reported intellect with increasing birth-order position, and this effect persisted after controlling for objectively measured intelligence. Most important, however, we consistently found no birth-order effects on extraversion, emotional stability, agreeableness, conscientiousness, or imagination. On the basis of the high statistical power and the consistent results across samples and analytical designs, we must conclude that birth order does not have a lasting effect on broad personality traits outside of the intellectual domain. PMID:26483461
ERIC Educational Resources Information Center
Byrne, Eileen M.
This volume is to be used in conjunction with volume I (Final Research Report) of the Women in Science and Technology in Australia (WISTA) research project. This document contains the main statistical tables of grade 12 and higher education enrollments used as the basis for the statistical element of the WISTA research report. The document is…
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR, applied to dilutions of an integration standard and to samples from 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need for a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
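A minimal sketch of the underlying Poisson logic for all-or-none replicate data (the paper's exact estimator and CI procedure may differ; the well counts below are made up):

```python
# If targets distribute over replicates as a Poisson process, the mean copies
# per reaction is lambda = -ln(fraction negative). Requires at least one
# negative replicate. The CI uses a normal approximation on that fraction.
import math

def poisson_quant(n_replicates, n_positive, z=1.96):
    f_neg = (n_replicates - n_positive) / n_replicates
    lam = -math.log(f_neg)                       # mean targets per reaction
    se = math.sqrt(f_neg * (1 - f_neg) / n_replicates)
    lo = -math.log(min(f_neg + z * se, 1.0))
    hi = -math.log(max(f_neg - z * se, 1e-12))
    return lam, (lo, hi)

lam, ci = poisson_quant(42, 18)  # e.g., 18 positive wells out of 42 replicates
print(f"lambda = {lam:.3f} copies/reaction, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```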
Code of Federal Regulations, 2011 CFR
2011-04-01
... Christmas trees. (b) Amortizable basis. The term amortizable basis means that portion of the basis of... United States which will contain trees in significant commercial quantities. The property may be a woodlot or other site but must consist of at least one acre which is planted with tree seedlings in the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... Christmas trees. (b) Amortizable basis. The term amortizable basis means that portion of the basis of... United States which will contain trees in significant commercial quantities. The property may be a woodlot or other site but must consist of at least one acre which is planted with tree seedlings in the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... Christmas trees. (b) Amortizable basis. The term amortizable basis means that portion of the basis of... United States which will contain trees in significant commercial quantities. The property may be a woodlot or other site but must consist of at least one acre which is planted with tree seedlings in the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... Christmas trees. (b) Amortizable basis. The term amortizable basis means that portion of the basis of... United States which will contain trees in significant commercial quantities. The property may be a woodlot or other site but must consist of at least one acre which is planted with tree seedlings in the...
42 CFR 417.568 - Adequate financial records, statistical data, and cost finding.
Code of Federal Regulations, 2012 CFR
2012-10-01
... ORGANIZATIONS, COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Cost Basis § 417... health care industry. (b) Provision of data. (1) The HMO or CMP must provide adequate cost and... 42 Public Health 3 2012-10-01 2012-10-01 false Adequate financial records, statistical data, and...
Statistical Learning as a Basis for Social Understanding in Children
ERIC Educational Resources Information Center
Ruffman, Ted; Taumoepeau, Mele; Perkins, Chris
2012-01-01
Many authors have argued that infants understand goals, intentions, and beliefs. We posit that infants' success on such tasks might instead reveal an understanding of behaviour, that infants' proficient statistical learning abilities might enable such insights, and that maternal talk scaffolds children's learning about the social world as well. We…
Statistical basis and outputs of stable isotope mixing models: Comment on Fry (2013)
A recent article by Fry (2013; Mar Ecol Prog Ser 472:1−13) reviewed approaches to solving underdetermined stable isotope mixing systems, and presented a new graphical approach and set of summary statistics for the analysis of such systems. In his review, Fry (2013) mis-characteri...
Statistical Bulletin: Annual Report On Economic Indicators, 1979.
ERIC Educational Resources Information Center
American Samoa Development Planning Office, Pago Pago.
Designed to serve as the basis for systematic collection of statistical information for government and the private sector, this bulletin presents a wide variety of economic indicators in tabular form. The data, selected to facilitate government and private planning efforts, are displayed in 25 tables and 27 graphs. Information is organized under…
Varandas, A J C
2009-02-01
The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction show consistency with each other, supporting that extrapolation without such a correction provides a reliable scheme to elude the basis-set-superposition error. Converged attributes are obtained for the C(20)-He interaction, which are used to predict the fullerene dimer ones. Time requirements show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.
Landau's statistical mechanics for quasi-particle models
NASA Astrophysics Data System (ADS)
Bannur, Vishnu M.
2014-04-01
Landau's formalism of statistical mechanics [following L. D. Landau and E. M. Lifshitz, Statistical Physics (Pergamon Press, Oxford, 1980)] is applied to the quasi-particle model of quark-gluon plasma. Here, one starts from the expression for pressure and develops all of thermodynamics. It is a general formalism and is consistent with our earlier studies [V. M. Bannur, Phys. Lett. B647, 271 (2007)] based on Pathria's formalism [following R. K. Pathria, Statistical Mechanics (Butterworth-Heinemann, Oxford, 1977)]. In Pathria's formalism, one starts from the expression for energy density and develops thermodynamics. Both formalisms are consistent with thermodynamics and statistical mechanics. Under certain conditions, wrongly called the thermodynamic consistency relation, we recover other formalisms for quasi-particle systems, such as that of M. I. Gorenstein and S. N. Yang, Phys. Rev. D52, 5206 (1995), widely studied in quark-gluon plasma.
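As a minimal worked sketch of "starting from pressure" (textbook relations at zero chemical potential, not the paper's specific quasi-particle expressions):

```latex
% Given p(T), entropy and energy densities follow, so all thermodynamics
% is fixed by the pressure alone.
\begin{align}
  s(T) &= \frac{\partial p}{\partial T}, \\
  \varepsilon(T) &= T\,s(T) - p(T) = T\,\frac{\partial p}{\partial T} - p(T).
\end{align}
% A Pathria-style treatment instead starts from \varepsilon(T) and recovers
% p(T) by integrating the same relation.
```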
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mezhov, E.A.; Reimarov, G.A.; Rubisov, V.N.
1987-05-01
On the basis of a statistical treatment of the entire set of published data on anion exchange extraction constants, the authors have refined and expanded the scale of the hydration parameters for the anions, ΔG_hydr (the effective free energies of hydration of the anions). The authors have estimated the parameters ΔG for 93 anions and the coefficients for 94 series of extraction systems, which are distinguished within each series only by the nature of the exchanging anions. The series are distinguished from one another by the nature of the cation extraction agent and the diluent.
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
2005-07-01
as an access graft is addressed using statistical methods below. Graft consistency can be defined statistically as the variance associated with the sample of grafts tested in... measured using a refractometer (Brix % method). The equilibration data are shown in Graph 1. The results suggest the following equilibration scheme: 40% v/v
Downscaling Thermal Infrared Radiance for Subpixel Land Surface Temperature Retrieval
Liu, Desheng; Pu, Ruiliang
2008-01-01
Land surface temperature (LST) retrieved from satellite thermal sensors often consists of mixed temperature components. Retrieving subpixel LST is therefore needed in various environmental and ecological studies. In this paper, we developed two methods for downscaling coarse resolution thermal infrared (TIR) radiance for the purpose of subpixel temperature retrieval. The first method was developed on the basis of a scale-invariant physical model on TIR radiance. The second method was based on a statistical relationship between TIR radiance and land cover fraction at high spatial resolution. The two methods were applied to downscale simulated 990-m ASTER TIR data to 90-m resolution. When validated against the original 90-m ASTER TIR data, the results revealed that both downscaling methods were successful in capturing the general patterns of the original data and resolving considerable spatial details. Further quantitative assessments indicated a strong agreement between the true values and the estimated values by both methods. PMID:27879844
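A minimal sketch of the second, statistical approach (synthetic data; the linear form, array shapes, and per-class radiances are illustrative assumptions):

```python
# Learn a linear relation between coarse-pixel TIR radiance and land-cover
# fractions, then apply it to fine-resolution fractions to downscale.
import numpy as np

rng = np.random.default_rng(1)
n_coarse, n_classes = 400, 3
frac_coarse = rng.dirichlet(np.ones(n_classes), size=n_coarse)  # cover fractions
true_radiance = np.array([9.0, 11.5, 10.2])                     # per-class radiance
L_coarse = frac_coarse @ true_radiance + rng.normal(0, 0.1, n_coarse)

coef, *_ = np.linalg.lstsq(frac_coarse, L_coarse, rcond=None)   # fitted relation

frac_fine = rng.dirichlet(np.ones(n_classes), size=10)          # fine-pixel fractions
L_fine = frac_fine @ coef                                       # downscaled radiance
print(L_fine)
```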
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
A New Standard for Assessing the Performance of High Contrast Imaging Systems
NASA Astrophysics Data System (ADS)
Jensen-Clem, Rebecca; Mawet, Dimitri; Gomez Gonzalez, Carlos A.; Absil, Olivier; Belikov, Ruslan; Currie, Thayne; Kenworthy, Matthew A.; Marois, Christian; Mazoyer, Johan; Ruane, Garreth; Tanner, Angelle; Cantalloube, Faustine
2018-01-01
As planning for the next generation of high contrast imaging instruments (e.g., WFIRST, HabEx, and LUVOIR, TMT-PFI, EELT-EPICS) matures and second-generation ground-based extreme adaptive optics facilities (e.g., VLT-SPHERE, Gemini-GPI) finish their principal surveys, it is imperative that the performance of different designs, post-processing algorithms, observing strategies, and survey results be compared in a consistent, statistically robust framework. In this paper, we argue that the current industry standard for such comparisons—the contrast curve—falls short of this mandate. We propose a new figure of merit, the “performance map,” that incorporates three fundamental concepts in signal detection theory: the true positive fraction, the false positive fraction, and the detection threshold. By supplying a theoretical basis and recipe for generating the performance map, we hope to encourage the widespread adoption of this new metric across subfields in exoplanet imaging.
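A toy sketch of the three ingredients named above, with synthetic detection statistics standing in for real post-processed data: sweep a threshold and record the true positive fraction from injected-signal trials and the false positive fraction from signal-free trials:

```python
# Gaussian toy scores stand in for a real detection statistic; a performance
# map would repeat this sweep over injected companion flux and separation.
import numpy as np

rng = np.random.default_rng(2)
scores_signal = rng.normal(4.0, 1.0, 1000)  # detection statistic, planet present
scores_noise = rng.normal(0.0, 1.0, 1000)   # detection statistic, no planet

for tau in (1.0, 3.0, 5.0):                 # detection threshold
    tpf = np.mean(scores_signal > tau)      # true positive fraction
    fpf = np.mean(scores_noise > tau)       # false positive fraction
    print(f"tau={tau:.1f}  TPF={tpf:.3f}  FPF={fpf:.4f}")
```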
Reactive intermediates in 4He nanodroplets: Infrared laser Stark spectroscopy of dihydroxycarbene
Broderick, Bernadette M.; McCaslin, Laura; Moradi, Christopher P.; ...
2015-04-14
Singlet dihydroxycarbene (HO$\ddot{C}$OH) is produced via pyrolytic decomposition of oxalic acid, captured by helium nanodroplets, and probed with infrared laser Stark spectroscopy. Rovibrational bands in the OH stretch region are assigned to either trans,trans- or trans,cis-rotamers on the basis of symmetry type, nuclear spin statistical weights, and comparisons to electronic structure theory calculations. Stark spectroscopy provides the inertial components of the permanent electric dipole moments for these rotamers. The dipole components for trans,trans- and trans,cis-rotamers are (μ_a, μ_b) = (0.00, 0.68(6)) and (1.63(3), 1.50(5)), respectively. The infrared spectra lack evidence for the higher-energy cis,cis-rotamer, which is consistent with a previously proposed pyrolytic decomposition mechanism of oxalic acid and computations of HO$\ddot{C}$OH torsional interconversion and tautomerization barriers.
Zuend, Stephan J.
2009-01-01
An experimental and computational investigation of amido-thiourea promoted imine hydrocyanation has revealed a new and unexpected mechanism of catalysis. Rather than direct activation of the imine by the thiourea, as had been proposed previously in related systems, the data are consistent with a mechanism involving catalyst-promoted proton transfer from hydrogen isocyanide to imine to generate diastereomeric iminium/cyanide ion pairs that are bound to catalyst through multiple non-covalent interactions; these ion pairs collapse to form the enantiomeric α-aminonitrile products. This mechanistic proposal is supported by the observation of a statistically significant correlation between experimental and calculated enantioselectivities induced by eight different catalysts (P ≪ 0.01). The computed models reveal a basis for enantioselectivity that involves multiple stabilizing and destabilizing interactions between substrate and catalyst, including thiourea-cyanide and amide-iminium interactions. PMID:19778044
Automated three-dimensional quantification of myocardial perfusion and brain SPECT.
Slomka, P J; Radau, P; Hurwitz, G A; Dey, D
2001-01-01
To allow automated and objective reading of nuclear medicine tomography, we have developed a set of tools for clinical analysis of myocardial perfusion tomography (PERFIT) and Brain SPECT/PET (BRASS). We exploit algorithms for image registration and use three-dimensional (3D) "normal models" for individual patient comparisons to composite datasets on a "voxel-by-voxel basis" in order to automatically determine the statistically significant abnormalities. A multistage, 3D iterative inter-subject registration of patient images to normal templates is applied, including automated masking of the external activity before final fit. In separate projects, the software has been applied to the analysis of myocardial perfusion SPECT, as well as brain SPECT and PET data. Automatic reading was consistent with visual analysis; it can be applied to the whole spectrum of clinical images, and aid physicians in the daily interpretation of tomographic nuclear medicine images.
Experimental quantum compressed sensing for a seven-qubit system
Riofrío, C. A.; Gross, D.; Flammia, S. T.; Monz, T.; Nigg, D.; Blatt, R.; Eisert, J.
2017-01-01
Well-controlled quantum devices with their increasing system size face a new roadblock hindering further development of quantum technologies. The effort of quantum tomography—the reconstruction of states and processes of a quantum device—scales unfavourably: state-of-the-art systems can no longer be characterized. Quantum compressed sensing mitigates this problem by reconstructing states from incomplete data. Here we present an experimental implementation of compressed tomography of a seven-qubit system—a topological colour code prepared in a trapped ion architecture. We are in the highly incomplete—127 Pauli basis measurement settings—and highly noisy—100 repetitions each—regime. Originally, compressed sensing was advocated for states with few non-zero eigenvalues. We argue that low-rank estimates are appropriate in general since statistical noise enables reliable reconstruction of only the leading eigenvectors. The remaining eigenvectors behave consistently with a random-matrix model that carries no information about the true state. PMID:28513587
Shock-wave structure for a polyatomic gas with large bulk viscosity
NASA Astrophysics Data System (ADS)
Kosuge, Shingo; Aoki, Kazuo
2018-02-01
The structure of a standing plane shock wave in a polyatomic gas is investigated on the basis of kinetic theory, with special interest in gases with large bulk viscosities, such as CO2 gas. The ellipsoidal statistical model for a polyatomic gas is employed. First, the shock structure is computed numerically for various upstream Mach numbers and for various (large) values of the ratio of the bulk viscosity to the shear viscosity, and different types of profiles, such as the double-layer structure consisting of a thin upstream layer with a steep change and a much thicker downstream layer with a mild change, are obtained. Then, an asymptotic analysis for large values of the ratio is carried out, and an analytical solution that describes the different types of profiles obtained by the numerical analysis, such as the double-layer structure, correctly is obtained.
Catalog of Oroville, California, earthquakes; June 7, 1975 to July 31, 1976
Mantis, Constance; Lindh, Allan; Savage, William; Marks, Shirley
1979-01-01
On August 1, 1975, at 2020 GMT a magnitude 5.7 (ML) earthquake occurred 15 km south of Oroville, California, in the western foothills of the Sierra Nevada. It was preceded by 61 foreshocks that began on June 7, 1975, and was followed by thousands of aftershocks. Several studies have reported locations or analyses of various subsets of the Oroville sequence, including Morrison and others (1975), Savage and others (1975), Lester and others (1975), Toppozada and others (1975), Ryall and others (1975), Bufe and others (1976), Morrison and others (1976), and Lahr and others (1976). In this report arrival time data have been compiled from the original records at several institutions to produce a single catalog of the Oroville sequence from June 7, 1975, through July 31, 1976. This study has four objectives: to compile a list of earthquakes in the Oroville sequence that is as complete as possible above the minimum magnitude threshold of approximately 1.0; to determine accurate and uniform hypocentral coordinates for the earthquakes; to determine reliable and consistent magnitude values for the sequence; and to provide a statistically uniform basis for further investigation of the physical processes involved in the Oroville sequence as revealed by the parameters of the foreshocks and aftershocks. The basis and procedures for the data analysis are described in this report.
Kopacz, Malgorzata
2005-01-01
The purpose of this study was to determine how personality traits, as classified by Cattell, influence preferences regarding musical elements. The subject group consisted of 145 students, male and female, chosen at random from different Polish universities. To determine their personality traits, the participants completed the 16PF Questionnaire (Cattell, Saunders, & Stice, 1957; Russel & Karol, 1993), in its Polish adaptation by Choynowski (Nowakowska, 1970). The participants' musical preferences were determined by their completing a Questionnaire of Musical Preferences (created specifically for this research), in which respondents indicated their favorite piece of music. Next, on the basis of the Questionnaire of Musical Preferences, a list of the works of music chosen by the participants was compiled. All pieces were collected on CDs and analyzed to separate out their basic musical elements. The statistical analysis shows that several personality traits, namely Liveliness (Factor F), Social Boldness (Factor H), Vigilance (Factor L), Openness to Change (Factor Q1), and Extraversion (a general factor), influence preferences regarding musical elements. The musical elements found to be important in the subjects' preferences were those having stimulative value and the ability to regulate the need for stimulation: tempo, rhythm in relation to metrical basis, number of melodic themes, sound voluminosity, and meter.
Future Needs and Recommendations in the Development of ...
A species sensitivity distribution (SSD) is a probability model of the variation of species sensitivities to a stressor, in particular chemical exposure. The SSD approach has been used as a decision support tool in environmental protection and management since the 1980s, and its ecotoxicological, statistical, and regulatory basis and applications continue to evolve. This article summarizes the findings of a 2014 workshop held by ECETOC (the European Centre for Ecotoxicology and Toxicology of Chemicals) and the UK Environment Agency in Amsterdam, the Netherlands, on the ecological relevance, statistical basis, and regulatory applications of SSDs. An array of research recommendations categorized under the topical areas of Use of SSDs, Ecological Considerations, Guideline Considerations, Method Development and Validation, Toxicity Data, Mechanistic Understanding, and Uncertainty were identified and prioritized. A rationale for the most critical research needs identified in the workshop is provided. The workshop reviewed the technical basis and historical development and application of SSDs, described approaches to estimating generic and scenario-specific SSD-based thresholds, evaluated the utility and application of SSDs as diagnostic tools, and presented new statistical approaches to formulate SSDs. Collectively, these address many of the research needs to expand and improve their application. The highest priority work, from a pragmatic regulatory point of view, is to develop a guidance of best practices that could act as a basis for global harmonization and discussions regarding the SSD methodology and tools.
Decoy-state quantum key distribution with biased basis choice
Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng
2013-01-01
We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From the simulation result taking statistical fluctuations into account, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% compared with the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states. PMID:23948999
Polarized atomic orbitals for self-consistent field electronic structure calculations
NASA Astrophysics Data System (ADS)
Lee, Michael S.; Head-Gordon, Martin
1997-12-01
We present a new self-consistent field approach which, given a large "secondary" basis set of atomic orbitals, variationally optimizes molecular orbitals in terms of a small "primary" basis set of distorted atomic orbitals, which are simultaneously optimized. If the primary basis is taken as a minimal basis, the resulting functions are termed polarized atomic orbitals (PAO's) because they are valence (or core) atomic orbitals which have distorted or polarized in an optimal way for their molecular environment. The PAO's derive their flexibility from the fact that they are formed from atom-centered linear-combinations of the larger set of secondary atomic orbitals. The variational conditions satisfied by PAO's are defined, and an iterative method for performing a PAO-SCF calculation is introduced. We compare the PAO-SCF approach against full SCF calculations for the energies, dipoles, and molecular geometries of various molecules. The PAO's are potentially useful for studying large systems that are currently intractable with larger than minimal basis sets, as well as offering potential interpretative benefits relative to calculations in extended basis sets.
Testing for voter rigging in small polling stations
Jimenez, Raúl; Hidalgo, Manuel; Klimek, Peter
2017-01-01
Nowadays, a large number of countries combine formal democratic institutions with authoritarian practices. Although in these countries the ruling elites may receive considerable voter support, they often use several manipulation tools to control election outcomes. A common practice of these regimes is the coercion and mobilization of large numbers of voters. This electoral irregularity is known as voter rigging, distinguishing it from vote rigging, which involves ballot stuffing or stealing. We develop a statistical test to quantify the extent to which the results of a particular election display traces of voter rigging. Our key hypothesis is that small polling stations are more susceptible to voter rigging because it is easier to identify opposing individuals, there are fewer eyewitnesses, and interested parties might reasonably expect fewer visits from election observers. We devise a general statistical method for testing whether voting behavior in small polling stations is significantly different from the behavior in their neighbor stations in a way that is consistent with the widespread occurrence of voter rigging. On the basis of a comparative analysis, the method enables third parties to conclude that an explanation other than simple variability is needed to explain geographic heterogeneities in vote preferences. We analyze 21 elections in 10 countries and find significant statistical anomalies compatible with voter rigging in Russia from 2007 to 2011, in Venezuela from 2006 to 2013, and in Uganda in 2011. Particularly disturbing is the case of Venezuela, where the smallest polling stations were decisive to the outcome of the 2013 presidential elections. PMID:28695193
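The core comparison can be sketched as a paired permutation test (synthetic vote shares; the pairing of each small station with a neighboring larger station is an assumption about the data structure):

```python
# Sign-flip permutation test: under the null of no systematic small-station
# effect, (small - neighbor) differences are symmetric around zero.
import numpy as np

rng = np.random.default_rng(3)
small = rng.normal(0.58, 0.08, 500)     # leader's share, small stations
neighbor = rng.normal(0.55, 0.05, 500)  # leader's share, paired neighbors
diff = small - neighbor

observed = diff.mean()
perms = np.array([
    (diff * rng.choice([-1, 1], size=diff.size)).mean() for _ in range(10000)
])
p_value = np.mean(np.abs(perms) >= abs(observed))
print(f"mean difference = {observed:.4f}, p = {p_value:.4f}")
```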
Snow, Stephanie L; Panton, Rachel L; Butler, Lorna J; Wilke, Derek R; Rutledge, Robert D H; Bell, David G; Rendon, Ricardo A
2007-05-01
To determine whether there is a gap between what patients know about early-stage prostate cancer and what they need to know to make treatment decisions, and whether the information patients receive varies depending on their treating physician. A needs assessment was performed using a questionnaire consisting of 41 statements about early-stage prostate cancer. Statements were divided into six thematic subsets. Participants used a 5-point Likert scale to rate statements in terms of knowledge of the information and importance to a treatment decision. Information gaps were defined as a significant difference between the importance and knowledge of an item. Descriptive statistics were used to describe demographic subscale scores. The information gap was analyzed by a paired t test for each thematic subset. One-way analyses of variance were used to detect any differences on the basis of treating physician. Questionnaires were distributed to 270 men (135 treated by radical prostatectomy, 135 by external beam radiotherapy). The return rate was 51% (138 questionnaires). A statistically significant information gap was found among all six thematic subsets, with five of the six P values less than 0.0001. Statistically significant variation was observed in the amount of information patients received from their treating physicians among four of the thematic subsets. There is an information gap between what early-stage prostate cancer patients need to know and the information they receive. Additionally, there is a difference in the amount of information provided by different physicians.
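The gap analysis itself is a standard paired t test; a minimal illustration with synthetic Likert ratings (not study data):

```python
# Paired t test of importance vs. knowledge ratings within one thematic
# subset; 138 synthetic respondents mirror the study's return count.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
importance = rng.integers(3, 6, size=138).astype(float)            # rated 1-5
knowledge = np.clip(importance - rng.integers(0, 3, size=138), 1, 5)

t, p = stats.ttest_rel(importance, knowledge)
print(f"paired t = {t:.2f}, p = {p:.2g}")  # a significant gap if p < alpha
```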
Steendahl, U; Prescott, E; Damsgaard, M T
1992-05-11
The substance methylmethacrylate (MMA) is an organic solvent which is employed, inter alia, for prostheses and which is suspected of being neurotoxic. To investigate whether there is a connection between exposure to MMA and symptoms of organic dementia, a cross-sectional investigation was carried out on a population consisting of occupationally active dental technicians and opticians (n = 528) and a group of dental technicians who were no longer occupationally active (n = 173). No noteworthy difference in the background variables, apart from age, was observed. Age was taken into consideration in the analysis. The results show a statistically significant increase in the prevalence of the chronic symptoms with increasing exposure. Where the acute symptoms are concerned, the connection is not statistically significant, but a tendency is observed. A chronic symptom index constructed on the basis of 13 questions concerning chronic symptoms was compared with lifetime exposure and age. A statistically significant increase in the index was found with exposure to MMA, although not for the oldest age group. The pattern of symptoms, the presence of bias, and other forms of exposure are discussed. It is concluded that this investigation confirms the hypothesis that symptoms of organic dementia are connected with exposure to MMA. The results support the presumption that MMA causes acute and chronic damage to the central nervous system even with exposure below the safety limits. It is recommended that the occupational environment of dental technicians, including the present safety limits, should be revised.
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
Applying the health promotion model to development of a worksite intervention.
Lusk, S L; Kerr, M J; Ronis, D L; Eakin, B L
1999-01-01
Consistent use of hearing protection devices (HPDs) decreases noise-induced hearing loss; however, many workers do not use them consistently. Past research has supported the need to use a conceptual framework to understand behaviors and guide intervention programs; however, few reports have specified a process to translate a conceptual model into an intervention. The strongest predictors from the Health Promotion Model were used to design a training program to increase HPD use among construction workers. Carpenters (n = 118), operating engineers (n = 109), and plumber/pipefitters (n = 129) in the Midwest were recruited to participate in the study. Written questionnaires including scales measuring the components of the Health Promotion Model were completed in classroom settings at worker trade group meetings. All items from scales predicting HPD use were reviewed to determine the basis for the content of a program to promote the use of HPDs. Three selection criteria were developed: (1) correlation with use of hearing protection (at least .20), (2) amenability to change, and (3) room for improvement (mean score not at ceiling). Linear regression and Pearson's correlation were used to assess the components of the model as predictors of HPD use. Five predictors had statistically significant regression coefficients: perceived noise exposure, self-efficacy, value of use, barriers to use, and modeling of use of hearing protection. Using items meeting the selection criteria, a 20-minute videotape with written handouts was developed as the core of an intervention. A clearly defined practice session was also incorporated in the training intervention. Determining salient factors for worker populations and specific protective equipment prior to designing an intervention is essential. These predictors provided the basis for a training program that addressed the specific needs of construction workers. Results of tests of the effectiveness of the program will be available in the near future.
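The three selection criteria amount to a simple filter; a toy sketch with illustrative values (the 0.5-point ceiling margin is an assumption):

```python
# Retain a scale as intervention content if (1) its correlation with HPD use
# is at least .20, (2) it is judged amenable to change, and (3) its mean
# leaves room for improvement. All numbers below are made up.
candidates = {
    # name: (corr_with_use, amenable_to_change, mean_score, scale_ceiling)
    "perceived noise exposure": (0.28, True, 3.1, 5.0),
    "self-efficacy":            (0.35, True, 3.6, 5.0),
    "value of use":             (0.30, True, 4.9, 5.0),  # near ceiling -> drop
}

selected = [
    name for name, (r, amenable, mean, ceiling) in candidates.items()
    if r >= 0.20 and amenable and mean < ceiling - 0.5
]
print(selected)
```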
NASA Astrophysics Data System (ADS)
Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen
2016-07-01
The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble and those that should not, as well as providing a simple, objective, and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
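A minimal sketch of the gridpointwise scoring this describes (the score threshold and failure fraction are illustrative, not the calibrated POP-ECT values):

```python
# Standardize a new run against the ensemble at every grid point, then flag
# the run if too many points exceed a score threshold.
import numpy as np

rng = np.random.default_rng(5)
ensemble = rng.normal(15.0, 0.5, size=(30, 64, 128))  # 30 members, lat-lon grid
new_run = rng.normal(15.0, 0.5, size=(64, 128))

mean = ensemble.mean(axis=0)
std = ensemble.std(axis=0, ddof=1)
z = (new_run - mean) / std                            # standard score per point

exceed = np.mean(np.abs(z) > 3.0)                     # fraction of flagged points
print(f"{100 * exceed:.2f}% of grid points exceed the threshold")
print("consistent" if exceed < 0.005 else "statistically distinguishable")
```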
Closer look at time averages of the logistic map at the edge of chaos
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Tsallis, Constantino; Beck, Christian
2009-05-01
The probability distribution of sums of iterates of the logistic map at the edge of chaos has been recently shown [U. Tirnakli, Phys. Rev. E 75, 040106(R) (2007)] to be numerically consistent with a q-Gaussian, the distribution which, under appropriate constraints, maximizes the nonadditive entropy Sq, which is the basis of nonextensive statistical mechanics. This analysis was based on a study of the tails of the distribution. We now check the entire distribution, in particular, its central part. This is important in view of a recent q-generalization of the central limit theorem, which states that for certain classes of strongly correlated random variables the rescaled sum approaches a q-Gaussian limit distribution. We numerically investigate for the logistic map with a parameter in a small vicinity of the critical point under which conditions there is convergence to a q-Gaussian both in the central region and in the tail region and find a scaling law involving the Feigenbaum constant δ. Our results are consistent with a large number of already available analytical and numerical evidences that the edge of chaos is well described in terms of the entropy Sq and its associated concepts.
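A small numerical sketch of the quantity being studied: centered, rescaled sums of logistic-map iterates near the edge of chaos (ensemble and sum sizes here are far smaller than in the cited studies):

```python
# Sum N iterates of x -> a*x*(1-x) near the Feigenbaum point, over many
# random initial conditions, and inspect the distribution of rescaled sums.
import numpy as np

a = 3.5699456            # near the edge-of-chaos (Feigenbaum) point
N = 2**12                # iterates summed per realization
rng = np.random.default_rng(6)

sums = []
for _ in range(1000):
    x = rng.uniform(0.0, 1.0)
    for _ in range(1000):            # discard the transient
        x = a * x * (1.0 - x)
    s = 0.0
    for _ in range(N):
        x = a * x * (1.0 - x)
        s += x
    sums.append(s)

y = (np.array(sums) - np.mean(sums)) / np.std(sums)   # centered, rescaled sums
hist, edges = np.histogram(y, bins=60, density=True)  # compare to a q-Gaussian
```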
An algorithm for separation of mixed sparse and Gaussian sources.
Akkalkotkar, Ameya; Brown, Kevin Scott
2017-01-01
Independent component analysis (ICA) is a ubiquitous method for decomposing complex signal mixtures into a small set of statistically independent source signals. However, in cases in which the signal mixture consists of both nongaussian and Gaussian sources, the Gaussian sources will not be recoverable by ICA and will pollute estimates of the nongaussian sources. Therefore, it is desirable to have methods for mixed ICA/PCA which can separate mixtures of Gaussian and nongaussian sources. For mixtures of purely Gaussian sources, principal component analysis (PCA) can provide a basis for the Gaussian subspace. We introduce a new method for mixed ICA/PCA which we call Mixed ICA/PCA via Reproducibility Stability (MIPReSt). Our method uses a repeated estimations technique to rank sources by reproducibility, combined with decomposition of multiple subsamplings of the original data matrix. These multiple decompositions allow us to assess component stability as the size of the data matrix changes, which can be used to determine the dimension of the nongaussian subspace in a mixture. We demonstrate the utility of MIPReSt for signal mixtures consisting of simulated sources and real-world (speech) sources, as well as mixtures of unknown composition.
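A rough sketch of the reproducibility idea (not the authors' implementation): decompose a subsample of the mixture and ask how stably each reference component reappears:

```python
# ICA on a random subsample; reproducibility is scored as the best absolute
# correlation against components from a reference decomposition. A Gaussian
# source is expected to match poorly across runs.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
t = np.linspace(0, 8, 4000)
S = np.c_[np.sign(np.sin(3 * t)),    # sparse/nongaussian source
          rng.laplace(size=4000),    # nongaussian source
          rng.normal(size=4000)]     # Gaussian source (not ICA-recoverable)
X = S @ rng.normal(size=(3, 3)).T    # mixed observations

ref = FastICA(n_components=3, random_state=0).fit_transform(X)
idx = rng.choice(len(X), size=len(X) // 2, replace=False)
sub = FastICA(n_components=3, random_state=1).fit_transform(X[idx])

for k in range(3):                   # reproducibility score per component
    r = max(abs(np.corrcoef(ref[idx, k], sub[:, j])[0, 1]) for j in range(3))
    print(f"component {k}: best |r| across subsample run = {r:.2f}")
```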
ERIC Educational Resources Information Center
Stemler, Steven E.; Grigorenko, Elena L.; Jarvin, Linda; Sternberg, Robert J.
2006-01-01
Sternberg's theory of successful intelligence was used to create augmented exams in Advanced Placement Psychology and Statistics. Participants included 1895 high school students from 19 states and 56 schools throughout the U.S. The psychometric results support the validity of creating examinations that assess memory, analytical, creative, and…
Belanger, Scott; Barron, Mace; Craig, Peter; Dyer, Scott; Galay-Burgos, Malyka; Hamer, Mick; Marshall, Stuart; Posthuma, Leo; Raimondo, Sandy; Whitehouse, Paul
2017-07-01
A species sensitivity distribution (SSD) is a probability model of the variation of species sensitivities to a stressor, in particular chemical exposure. The SSD approach has been used as a decision support tool in environmental protection and management since the 1980s, and the ecotoxicological, statistical, and regulatory basis and applications continue to evolve. This article summarizes the findings of a 2014 workshop held by the European Centre for Ecotoxicology and Toxicology of Chemicals and the UK Environment Agency in Amsterdam, The Netherlands, on the ecological relevance, statistical basis, and regulatory applications of SSDs. An array of research recommendations categorized under the topical areas of use of SSDs, ecological considerations, guideline considerations, method development and validation, toxicity data, mechanistic understanding, and uncertainty were identified and prioritized. A rationale for the most critical research needs identified in the workshop is provided. The workshop reviewed the technical basis and historical development and application of SSDs, described approaches to estimating generic and scenario-specific SSD-based thresholds, evaluated utility and application of SSDs as diagnostic tools, and presented new statistical approaches to formulate SSDs. Collectively, these address many of the research needs to expand and improve their application. The highest priority work, from a pragmatic regulatory point of view, is to develop a guidance of best practices that could act as a basis for global harmonization and discussions regarding the SSD methodology and tools. Integr Environ Assess Manag 2017;13:664-674. © 2016 SETAC.
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Peterson, D. A.; Curtis, C. A.; Schmidt, C. C.; Hoffman, J.; Prins, E. M.
2014-12-01
The Fire Locating and Monitoring of Burning Emissions (FLAMBE) system converts satellite observations of thermally anomalous pixels into spatially and temporally continuous estimates of smoke release from open biomass burning. This system currently processes data from a constellation of 5 geostationary and 2 polar-orbiting sensors. Additional sensors, including NPP VIIRS and the imager on the Korea COMS-1 geostationary satellite, will soon be added. This constellation experiences schedule changes and outages of various durations, making the set of available scenes for fire detection highly variable on an hourly and daily basis. Adding to the complexity, the latency of the satellite data is variable between and within sensors. FLAMBE shares with many fire detection systems the goal of detecting as many fires as possible as early as possible, but the FLAMBE system must also produce a consistent estimate of smoke production with minimal artifacts from the changing constellation. To achieve this, NRL has developed a system of asynchronous processing and cross-calibration that permits satellite data to be used as it arrives, while preserving the consistency of the smoke emission estimates. This talk describes the asynchronous data ingest methodology, including latency statistics for the constellation. We also provide an overview and show results from the system we have developed to normalize multi-sensor fire detection for consistency.
On the optimization of Gaussian basis sets
NASA Astrophysics Data System (ADS)
Petersson, George A.; Zhong, Shijun; Montgomery, John A.; Frisch, Michael J.
2003-01-01
A new procedure for the optimization of the exponents, $\alpha_j$, of Gaussian basis functions, $Y_{lm}(\vartheta,\varphi)\, r^l e^{-\alpha_j r^2}$, is proposed and evaluated. The direct optimization of the exponents is hindered by the very strong coupling between these nonlinear variational parameters. However, expansion of the logarithms of the exponents in the orthonormal Legendre polynomials, $P_k$, of the index, $j$: $\ln \alpha_j = \sum_{k=0}^{k_{\max}} A_k P_k\!\left(\frac{2j-2}{N_{\mathrm{prim}}-1} - 1\right)$, yields a new set of well-conditioned parameters, $A_k$, and a complete sequence of well-conditioned exponent optimizations proceeding from the even-tempered basis set ($k_{\max}=1$) to a fully optimized basis set ($k_{\max}=N_{\mathrm{prim}}-1$). The error relative to the exact numerical self-consistent field limit for a six-term expansion is consistently no more than 25% larger than the error for the completely optimized basis set. Thus, there is no need to optimize more than six well-conditioned variational parameters, even for the largest sets of Gaussian primitives.
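To make the parameterization concrete, the sketch below maps a few coefficients A_k to exponent sets; the coefficient values are invented for illustration, and standard rather than orthonormal Legendre polynomials are used (the difference is a constant factor absorbed into the A_k). Setting k_max = 1 reproduces an even-tempered geometric progression.

```python
# Sketch: generate Gaussian primitive exponents from Legendre coefficients
# A_k via ln(alpha_j) = sum_k A_k P_k(x_j), x_j = (2j-2)/(Nprim-1) - 1.
# Coefficient values below are made up for illustration.
import numpy as np
from numpy.polynomial import legendre

def exponents(A, n_prim):
    j = np.arange(1, n_prim + 1)
    x = (2 * j - 2) / (n_prim - 1) - 1.0   # maps j = 1..Nprim onto [-1, 1]
    return np.exp(legendre.legval(x, A))

# kmax = 1 (two coefficients): an even-tempered set, alpha_j = a * b**j.
print(exponents([2.0, -3.0], n_prim=6))
# kmax = 3: two extra well-conditioned parameters refine the progression.
print(exponents([2.0, -3.0, 0.3, -0.1], n_prim=6))
```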
Addeh, Abdoljalil; Khormali, Aminollah; Golilarz, Noorbakhsh Amiri
2018-05-04
The control chart patterns are the most commonly used statistical process control (SPC) tools to monitor process changes. When a control chart produces an out-of-control signal, this means that the process has changed. In this study, a new method based on an optimized radial basis function neural network (RBFNN) is proposed for control chart pattern (CCP) recognition. The proposed method consists of four main modules: feature extraction, feature selection, classification, and learning. In the feature extraction module, shape and statistical features are used; various shape and statistical features have recently been proposed for CCP recognition. In the feature selection module, the association rules (AR) method is employed to select the best subset of the shape and statistical features. In the classification module, an RBFNN is used; because the learning algorithm strongly affects network performance, a new learning algorithm based on the bees algorithm is applied in the learning module. Most studies have considered only six patterns: Normal, Cyclic, Increasing Trend, Decreasing Trend, Upward Shift, and Downward Shift. The patterns Normal, Stratification, and Systematic are very similar to one another and difficult to distinguish, so Stratification and Systematic have usually been omitted. To support continuous monitoring and control of the production process and exact identification of the type of problem encountered, all eight patterns are investigated in this study. The proposed method is tested on a dataset containing 1600 samples (200 samples from each pattern), and the results show that it performs very well. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
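A generic RBF-network baseline for this kind of task can be sketched briefly. Here k-means centers and a ridge readout stand in for the paper's association-rule feature selection and bees-algorithm training (neither is shown in the abstract), and only four of the eight patterns are simulated.

```python
# Minimal RBF-network classifier sketch for CCP-style time-series windows.
# Centers come from k-means and the readout from ridge regression; this
# stands in for the paper's bees-algorithm training, which is not shown.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(1)

def window(kind, n=32):
    t = np.arange(n)
    base = rng.standard_normal(n) * 0.3
    if kind == "up_trend":
        base += 0.05 * t
    elif kind == "up_shift":
        base += (t > n // 2) * 1.5
    elif kind == "cyclic":
        base += np.sin(2 * np.pi * t / 8)
    return base  # "normal" falls through with noise only

kinds = ["normal", "up_trend", "up_shift", "cyclic"]
X = np.array([window(k) for k in kinds for _ in range(200)])
y = np.repeat(np.arange(len(kinds)), 200)

# Hidden layer: Gaussian RBF activations around k-means centers.
centers = KMeans(n_clusters=20, n_init=5, random_state=0).fit(X).cluster_centers_
dists = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
width = np.median(dists)
Phi = np.exp(-(dists / width) ** 2)

clf = RidgeClassifier().fit(Phi, y)
print("training accuracy:", clf.score(Phi, y))
```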
Low-flow statistics of selected streams in Chester County, Pennsylvania
Schreffler, Curtis L.
1998-01-01
Low-flow statistics for many streams in Chester County, Pa., were determined on the basis of data from 14 continuous-record streamflow stations in Chester County and data from 1 station in Maryland and 1 station in Delaware. The stations in Maryland and Delaware are on streams that drain large areas within Chester County. Streamflow data through the 1994 water year were used in the analyses. The low-flow statistics summarized are the 1Q10, 7Q10, 30Q10, and harmonic mean. Low-flow statistics were estimated at 34 partial-record stream sites throughout Chester County.
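For readers unfamiliar with the notation, nQ10 denotes the annual minimum n-day mean flow with a 10-year recurrence interval. A minimal sketch of a 7Q10 estimate follows, using synthetic data and an empirical quantile in place of the log-Pearson Type III fit typically used in USGS practice.

```python
# Sketch: estimate 7Q10 from a daily streamflow series using annual minima
# of the 7-day moving mean and an empirical 10-year (0.1) quantile.
# A production analysis would fit a log-Pearson Type III distribution.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
days = pd.date_range("1964-10-01", "1994-09-30", freq="D")
flow = pd.Series(np.exp(rng.normal(2.0, 0.6, len(days))), index=days)  # synthetic, cfs

seven_day = flow.rolling(7).mean()
annual_min = seven_day.groupby(seven_day.index.year).min().dropna()

q7_10 = annual_min.quantile(0.1)          # nonexceedance prob. 0.1 = 10-yr low flow
harmonic_mean = len(flow) / (1.0 / flow).sum()
print(f"7Q10 ≈ {q7_10:.2f} cfs, harmonic mean ≈ {harmonic_mean:.2f} cfs")
```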
On real statistics of relaxation in gases
NASA Astrophysics Data System (ADS)
Kuzovlev, Yu. E.
2016-02-01
Using the example of a particle interacting with an ideal gas, it is shown that the statistics of collisions in statistical mechanics, at any value of the gas rarefaction parameter, differ qualitatively from the statistics associated with Boltzmann's hypothetical molecular chaos and his kinetic equation. In reality, the collision probability of the particle is itself random. Because of this, the relaxation of the particle velocity acquires a power-law asymptotic behavior. An estimate of its exponent is suggested on the basis of simple kinematic arguments.
The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model
NASA Astrophysics Data System (ADS)
Verkley, Wim; Severijns, Camiel
2014-05-01
Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).
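For concreteness, the sketch below integrates the Lorenz (1996) model and extracts the time-mean total energy that serves as the constraint in the entropy maximization. Parameter values are the customary ones for this system, assumed rather than taken from the paper, and the maximization step itself is not shown.

```python
# Sketch: integrate the Lorenz (1996) model dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1}
# - x_i + F and estimate the mean total energy E = (1/2) sum_i x_i^2, the
# quantity constrained in the maximum-entropy analysis described above.
import numpy as np

def l96_rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt):
    k1 = l96_rhs(x)
    k2 = l96_rhs(x + 0.5 * dt * k1)
    k3 = l96_rhs(x + 0.5 * dt * k2)
    k4 = l96_rhs(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

n, dt = 36, 0.01
x = 8.0 + 0.01 * np.random.default_rng(3).standard_normal(n)
energies = []
for step in range(60000):
    x = rk4_step(x, dt)
    if step > 10000:                      # discard spin-up transient
        energies.append(0.5 * np.sum(x * x))

print("time-mean total energy:", np.mean(energies))
```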
Spectral likelihood expansions for Bayesian inference
NASA Astrophysics Data System (ADS)
Nagel, Joseph B.; Sudret, Bruno
2016-03-01
A spectral approach to Bayesian inference is presented. It pursues the emulation of the posterior probability density. The starting point is a series expansion of the likelihood function in terms of orthogonal polynomials. From this spectral likelihood expansion all statistical quantities of interest can be calculated semi-analytically. The posterior is formally represented as the product of a reference density and a linear combination of polynomial basis functions. Both the model evidence and the posterior moments are related to the expansion coefficients. This formulation avoids Markov chain Monte Carlo simulation and allows one to make use of linear least squares instead. The pros and cons of spectral Bayesian inference are discussed and demonstrated on the basis of simple applications from classical statistics and inverse modeling.
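A one-dimensional toy version of the approach may help fix ideas. In this sketch (our construction, not the authors' implementation), the reference density is a standard normal prior, the orthogonal basis consists of normalized probabilists' Hermite polynomials, and the likelihood is fit by linear least squares; the evidence and posterior mean then follow directly from the first two coefficients. A low truncation order gives only a rough emulator.

```python
# Sketch: spectral likelihood expansion in 1-D. Prior N(0,1) is the
# reference density; the likelihood of data y ~ N(theta, 0.5) is fit by
# linear least squares in normalized probabilists' Hermite polynomials.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(4)
y = rng.normal(0.7, 0.5, size=20)                 # synthetic data
def likelihood(theta):
    return np.prod(np.exp(-0.5 * ((y[:, None] - theta) / 0.5) ** 2), axis=0)

# Regression points drawn from the reference (prior) density.
theta = rng.standard_normal(2000)
P = 8                                             # truncation order
V = hermevander(theta, P)                         # He_0..He_P at each point
V /= np.sqrt([factorial(k) for k in range(P + 1)])  # orthonormalize columns
a, *_ = np.linalg.lstsq(V, likelihood(theta), rcond=None)

evidence = a[0]                 # E_prior[L] = coefficient of the constant
post_mean = a[1] / a[0]         # E_prior[theta * L] / Z, since He_1 = theta
print(f"evidence ≈ {evidence:.3e}, posterior mean ≈ {post_mean:.3f}")
```

The key property exploited here is orthonormality under the reference density: the zeroth coefficient is the model evidence, and low-order moment ratios give posterior moments without any Markov chain Monte Carlo.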
Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe
2010-01-01
Environmental time series are often affected by missing data, and when such data are analyzed statistically, the gaps must often be filled by estimating the missing values. At present, a large number of statistical techniques are available for this purpose, ranging from very simple methods, such as substituting the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing-data estimation is proposed that aims to merge the main advantage of the simplest techniques (their ease of implementation) with the strength of more recent ones. The proposed method consists of two consecutive stages: once a specific monitoring station has been found to be affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied to estimate the missing data. In this paper, four different regressive methods are applied and compared in order to determine which is the most reliable for filling in the gaps, using rainfall data series measured in the Candelaro River Basin in southern Italy.
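The two-stage scheme is simple enough to sketch directly. Station names and rainfall values below are invented, Pearson correlation stands in for whatever similarity coefficient is adopted, and ordinary least squares stands in for the four regressive methods compared in the paper.

```python
# Sketch of the two-stage scheme described above: (1) rank neighbouring
# stations by a similarity coefficient (here, Pearson correlation on the
# overlap), (2) impute gaps by linear regression on the most similar station.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
t = pd.date_range("2000-01-01", periods=365, freq="D")
base = rng.gamma(2.0, 2.0, len(t))
df = pd.DataFrame({
    "target": base + rng.normal(0, 0.5, len(t)),
    "north":  base + rng.normal(0, 0.3, len(t)),   # strongly similar
    "east":   rng.gamma(2.0, 2.0, len(t)),         # unrelated
}, index=t)
df.loc[df.sample(40, random_state=0).index, "target"] = np.nan  # make gaps

# Stage 1: most similar neighbour by correlation over jointly observed days.
corr = df.corr()["target"].drop("target")
donor = corr.idxmax()

# Stage 2: regress target on donor over the overlap, predict into the gaps.
obs = df.dropna(subset=["target"])
slope, intercept = np.polyfit(obs[donor], obs["target"], 1)
gaps = df["target"].isna()
df.loc[gaps, "target"] = intercept + slope * df.loc[gaps, donor]
print(f"filled {gaps.sum()} gaps using station '{donor}' (r = {corr[donor]:.2f})")
```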
Characterizing interactions in online social networks during exceptional events
NASA Astrophysics Data System (ADS)
Omodei, Elisa; De Domenico, Manlio; Arenas, Alex
2015-08-01
Nowadays, millions of people interact on a daily basis on online social media like Facebook and Twitter, where they share and discuss information about a wide variety of topics. In this paper, we focus on a specific online social network, Twitter, and analyze multiple datasets, each consisting of individuals' online activity before, during, and after an exceptional event, as measured by the volume of communications registered. We consider important events that occurred in arenas ranging from policy to culture and science. For each dataset, users' online activities are modeled by a multilayer network in which each layer conveys a different kind of interaction: retweeting, mentioning, and replying. This representation allows us to show that these distinct types of interaction produce networks with different statistical properties, in particular concerning the degree distribution and the clustering structure. These results suggest that models of online activity cannot discard the information carried by this multilayer representation of the system and should account for the different processes generated by the different kinds of interaction. Second, our analysis unveils the presence of statistical regularities among the different events, suggesting that the non-trivial topological patterns we observe may represent universal features of social dynamics on online social networks during exceptional events.
Influencing Factors of the Body Mass Index of Elementary Students in Southern Taiwan
Chou, Li-Na; Chen, Min-Li
2017-01-01
The body mass index (BMI) of school children in Taiwan is markedly increasing. According to statistical data from the Taiwan Ministry of Education, the prevalence of obesity in school children from the southern part of the country is the highest in Taiwan. Thus, exploring the factors influencing BMI in elementary school children from southern Taiwan is crucial. This study investigated the influencing factors, including physical activity levels, sedentary behaviors, dietary habits, and perceived body shape, on the BMIs of elementary school children from southern Taiwan. A cross-sectional design was used, and the participants consisted of 3251 fifth-grade students (1628 boys, 50.1%; 1623 girls, 49.9%). The average BMI values for boys and girls were 19.69 and 18.70 kg/m², respectively. Statistically significant associations were observed between BMI and sex, 31–60 min of daily vigorous or moderate physical activity, length of time spent watching television, time spent on video games or the computer, and intake of vegetable or meat gravy with rice (p < 0.001). Perceived body shape also affected the BMI of school children. The results of this study enable educational institutions in Taiwan to understand the factors affecting the BMI of school children and to use this information as the basis for future healthy body weight policies. PMID:28241506
NASA Astrophysics Data System (ADS)
Mitchell, Bruce C.; Chakraborty, Jayajit
2015-11-01
Heat waves cause more mortality in the US than any other natural hazard. Prior studies have found increased heat exposure for individuals of lower socioeconomic status in several US cities, but few comparative analyses of the social distribution of urban heat have been conducted. To address this gap, our paper examines and compares the environmental justice consequences of urban heat risk in the three largest US cities: New York City, Los Angeles, and Chicago. Risk from urban heat is estimated on the basis of three characteristics of the urban thermal landscape: land surface temperature, vegetation abundance, and structural density of the built environment. These variables are combined into an urban heat risk index, which is then statistically compared with social vulnerability indicators representing socioeconomic status, age, disability, race/ethnicity, and linguistic isolation. The results indicate a consistent and significant statistical association between lower socioeconomic and minority status and greater urban heat risk in all three cities. Our findings support a growing body of environmental justice literature that indicates the presence of a landscape of thermal inequity in US cities and underscore the need for comparative analyses of social inequities in exposure to urban heat.
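A composite index of this kind can be illustrated in a few lines. The sketch below uses synthetic tract-level data and equal z-score weights, both illustrative assumptions rather than the paper's exact construction, and tests the association with a socioeconomic indicator.

```python
# Sketch of a composite heat-risk index: z-score the three thermal-landscape
# variables (vegetation entering with a negative sign, since greenery
# mitigates heat), sum them, and test the association with a social
# vulnerability indicator. Data and weights are invented for illustration.
import numpy as np
from scipy.stats import spearmanr, zscore

rng = np.random.default_rng(6)
n_tracts = 500
ses = rng.normal(0, 1, n_tracts)                          # socioeconomic status
lst = 30 - 1.2 * ses + rng.normal(0, 1.5, n_tracts)       # land surface temp (°C)
ndvi = 0.3 + 0.05 * ses + rng.normal(0, 0.05, n_tracts)   # vegetation abundance
density = 0.5 - 0.1 * ses + rng.normal(0, 0.1, n_tracts)  # built-up fraction

heat_risk = zscore(lst) - zscore(ndvi) + zscore(density)
rho, p = spearmanr(heat_risk, ses)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")  # negative: low SES, high risk
```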
Li, Cen; Yang, Hongxia; Du, Yuzhi; Xiao, Yuancan; Zhandui; Sanglao; Wang, Zhang; Ladan, Duojie; Bi, Hongtao; Wei, Lixin
2016-01-01
Zuotai (gTso thal) is one of the famous mercury-containing drugs of Tibetan medicine. However, little is known so far about the chemical basis of its pharmacodynamics or about the intrinsic links among samples from different sources. Accordingly, energy dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), atomic force microscopy (AFM), and powder X-ray diffraction (XRD) were used to assay the elements, micromorphology, and phase composition of nine Zuotai samples from different regions, and the XRD fingerprint features of Zuotai were analyzed by multivariate statistical analysis. EDX shows that Zuotai contains Hg, S, O, Fe, Al, Cu, and other elements. SEM and AFM observations suggest that Zuotai is a kind of ancient nanodrug: its particles are mainly in the range of 100–800 nm and commonly aggregate further into 1–30 μm loosely amorphous particles. XRD shows that β-HgS, S8, and α-HgS are its main phases. XRD fingerprint analysis indicates that the similarity among the nine samples is very high, and the results of the multivariate statistical analysis are broadly consistent with the sample sources. The present research reveals the physicochemical characteristics of Zuotai and should play a positive role in interpreting this mysterious Tibetan drug. PMID:27738409
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. Such problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier, and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be chosen automatically on the basis of an F-test and confidence region. The interpretation of the latter, and of error estimates based on the covariance matrix of the constrained regularized solution, are discussed. The strategies, methods, and options in CONTIN are outlined. The program itself is described in the following paper.
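The flavor of the inversion CONTIN performs can be sketched generically. The toy below is not CONTIN's algorithm (the regularization parameter is fixed by hand rather than by CONTIN's F-test, and the kernel and data are invented): it recovers a relaxation-time distribution from noisy Laplace-transform data using a smoothness regularizor and a non-negativity constraint.

```python
# Sketch of a CONTIN-style regularized inversion: recover a distribution x
# from noisy data b = A x + noise by minimizing
#   ||A x - b||^2 + lam^2 ||D x||^2  subject to  x >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
tau = np.logspace(-1, 2, 60)              # grid for the sought distribution
t = np.linspace(0.01, 20, 100)            # measurement times
A = np.exp(-np.outer(t, 1.0 / tau))       # Laplace-transform kernel

x_true = np.exp(-0.5 * ((np.log(tau) - 1.0) / 0.3) ** 2)  # one broad peak
b = A @ x_true + rng.normal(0, 0.01, len(t))

lam = 0.1
D = np.diff(np.eye(len(tau)), 2, axis=0)  # second-difference smoothness operator
A_aug = np.vstack([A, lam * D])           # augmented system encodes the penalty
b_aug = np.concatenate([b, np.zeros(D.shape[0])])
x_reg, _ = nnls(A_aug, b_aug)             # non-negativity as an absolute prior

print("peak location recovered near tau =", tau[np.argmax(x_reg)])
```

Stacking the penalty rows under the kernel turns the regularized problem into a single non-negative least-squares solve, which mirrors CONTIN's combination of a quadratic regularizor with inequality constraints.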
Optoelectronics-related competence building in Japanese and Western firms
NASA Astrophysics Data System (ADS)
Miyazaki, Kumiko
1992-05-01
In this paper, an analysis is made of how different firms in Japan and the West have developed optoelectronics-related competence on the basis of their previous experience and corporate strategies. The sample consists of seven Japanese and four Western firms in the industrial, consumer electronics, and materials sectors. Optoelectronics is divided into subfields including optical communications systems, optical fibers, optoelectronic key components, liquid crystal displays, optical disks, and others. The relative strengths and weaknesses of the companies in the various subfields are determined using the INSPEC database for 1976 to 1989. Parallel data from OTAF U.S. patent statistics are analyzed, and the two sets of data are compared. The statistical analysis from the database is summarized for each firm in each subfield in the form of an intra-firm technology index (IFTI), a new technique introduced to assess the revealed technology advantage of firms. The quantitative evaluation is complemented by results from intensive interviews with the management and scientists of the firms involved. The findings show marked variation in how the firms' technological trajectories have evolved, giving rise to strength in some subfields and weakness in others; these differences are related to accumulated core competencies, previous core business activities, and organizational, marketing, and competitive factors.
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim
2009-11-01
Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs; hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh and a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (~0.1 mEh). The latter were obtained through the use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable in accuracy to the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations). It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed-amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
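For reference, the conventional power-law variant of a two-point CBS extrapolation takes the form E_X = E_CBS + A X^(-3), so that E_CBS = (X³E_X − Y³E_Y)/(X³ − Y³) for cardinal numbers X < Y. A minimal sketch with invented energies follows; the paper's Schwenke-style scheme replaces the fixed power with optimized linear coefficients.

```python
# Sketch: two-point power-law CBS extrapolation of correlation energies,
# assuming E_X = E_CBS + A * X**-3. The energies below are invented
# placeholders, not values from the paper.
def cbs_two_point(e_x, e_y, x, y):
    """Extrapolate the correlation energy from cardinal numbers x < y."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Example with made-up MP2 correlation energies (hartree) for X = 3, 4:
e_tz, e_qz = -0.38210, -0.39150
print(f"E_CBS ≈ {cbs_two_point(e_tz, e_qz, 3, 4):.5f} Eh")
```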
Assessing the Milky Way Satellites Associated with the Sagittarius Dwarf Spheroidal Galaxy
NASA Astrophysics Data System (ADS)
Law, David R.; Majewski, Steven R.
2010-08-01
Numerical models of the tidal disruption of the Sagittarius (Sgr) dwarf galaxy have recently been developed that for the first time simultaneously satisfy most observational constraints on the angular position, distance, and radial velocity trends of both leading and trailing tidal streams emanating from the dwarf. We use these dynamical models in combination with extant three-dimensional position and velocity data for Galactic globular clusters and dSph galaxies to identify those Milky Way satellites that are likely to have originally formed in the gravitational potential well of the Sgr dwarf, and have been stripped from Sgr during its extended interaction with the Milky Way. We conclude that the globular clusters Arp 2, M 54, NGC 5634, Terzan 8, and Whiting 1 are almost certainly associated with the Sgr dwarf, and that Berkeley 29, NGC 5053, Pal 12, and Terzan 7 are likely to be as well (albeit at lower confidence). The initial Sgr system therefore may have contained five to nine globular clusters, corresponding to a specific frequency SN = 5-9 for an initial Sgr luminosity MV = -15.0. Our result is consistent with the 8 ± 2 genuine Sgr globular clusters expected on the basis of statistical modeling of the Galactic globular cluster distribution and the corresponding false-association rate due to chance alignments with the Sgr streams. The globular clusters identified as most likely to be associated with Sgr are consistent with previous reconstructions of the Sgr age-metallicity relation, and show no evidence for a second-parameter effect shaping their horizontal branch morphologies. We find no statistically significant evidence to suggest that any of the recently discovered population of ultrafaint dwarf galaxies are associated with the Sgr tidal streams, but are unable to rule out this possibility conclusively for all systems.
Jednoróg, Katarzyna; Marchewka, Artur; Altarelli, Irene; Monzalvo Lopez, Ana Karla; van Ermingen-Marbach, Muna; Grande, Marion; Grabowska, Anna; Heim, Stefan; Ramus, Franck
2015-05-01
The neural basis of specific reading disability (SRD) remains only partly understood. A dozen studies have used voxel-based morphometry (VBM) to investigate gray matter volume (GMV) differences between SRD and control children, however, recent meta-analyses suggest that few regions are consistent across studies. We used data collected across three countries (France, Poland, and Germany) with the aim of both increasing sample size (236 SRD and controls) to obtain a clearer picture of group differences, and of further assessing the consistency of the findings across languages. VBM analysis reveals a significant group difference in a single cluster in the left thalamus. Furthermore, we observe correlations between reading accuracy and GMV in the left supramarginal gyrus and in the left cerebellum, in controls only. Most strikingly, we fail to replicate all the group differences in GMV reported in previous studies, despite the superior statistical power. The main limitation of this study is the heterogeneity of the sample drawn from different countries (i.e., speaking languages with varying orthographic transparencies) and selected based on different assessment batteries. Nevertheless, analyses within each country support the conclusions of the cross-linguistic analysis. Explanations for the discrepancy between the present and previous studies may include: (1) the limited suitability of VBM to reveal the subtle brain disruptions underlying SRD; (2) insufficient correction for multiple statistical tests and flexibility in data analysis, and (3) publication bias in favor of positive results. Thus the study echoes widespread concerns about the risk of false-positive results inherent to small-scale VBM studies. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g., numerical methods using grids, basis functions, or particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]; the error in computed output statistics contains contributions from both realization error and the error from evaluating the statistics integrals numerically. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. Just as CFD calculations that include error bounds but omit uncertainty modeling are of limited value, so are calculations that include uncertainty modeling but omit error bounds. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
45 CFR 265.3 - What reports must the State file on a quarterly basis?
Code of Federal Regulations, 2010 CFR
2010-10-01
...) must collect on a monthly basis, and file on a quarterly basis, the data specified in the SSP-MOE Data... provided at § 264.85 of this chapter, in lieu of the TANF Financial Report. (d) SSP-MOE Data Report. The SSP-MOE Data Report consists of four sections. Two sections contain disaggregated data elements and...
Are some BL Lac objects artefacts of gravitational lensing?
NASA Technical Reports Server (NTRS)
Ostriker, J. P.; Vietri, M.
1985-01-01
It is proposed here that a significant fraction of BL Lac objects are optically violently variable quasars whose continuum emission has been greatly amplified, relative to the line emission, by pointlike gravitational lenses in intervening galaxies. Several anomalous physical and statistical properties of BL Lacs can be understood on the basis of this model, which is immediately testable on the basis of absorption line studies and by direct imaging.
Fractional superstatistics from a kinetic approach
NASA Astrophysics Data System (ADS)
Ourabah, Kamel; Tribeche, Mouloud
2018-03-01
Through a kinetic approach, in which temperature fluctuations are taken into account, we obtain generalized fractional statistics interpolating between Fermi-Dirac and Bose-Einstein statistics. The latter correspond to the superstatistical analogues of the Polychronakos and Haldane-Wu statistics. The virial coefficients corresponding to these statistics are worked out and compared to those of an ideal two-dimensional anyon gas. It is shown that the obtained statistics reproduce correctly the second and third virial coefficients of an anyon gas. On this basis, a link is established between the statistical parameter and the strength of fluctuations. A further generalization is suggested by allowing the statistical parameter to fluctuate. As a by-product, superstatistics of ewkons, introduced recently to deal with dark energy [Phys. Rev. E 94, 062115 (2016), 10.1103/PhysRevE.94.062115], are also obtained within the same method.
NASA Astrophysics Data System (ADS)
Beketskaya, Olga
2010-05-01
In Russia, quality standards for contaminant concentrations in the environment consist of ecological and sanitary rate-setting. Sanitary assessment is based on the potential risk that contaminants pose to human beings, whereas the main purpose of ecological risk assessment is to protect ecosystems. To quantify negative effects on living organisms, sanitary risk assessment in Russia uses maximum permissible concentrations (MPC), values that reflect how substances affect different parts of the environment, biological activity, and soil processes. Ecological risk assessment, by contrast, is based on comparing compound concentrations with background concentrations for particular territories. Given the wide range of trace element concentrations in soils, we suggest using a statistical method to determine concentration levels of chemical elements in the soils of Russia. The method is based on determining mean element contents under natural conditions: the upper limit of the natural concentration range is defined as the regional background mean plus three standard deviations (x̄ + 3σ), and concentrations exceeding this limit are interpreted as anthropogenic impact. We first studied variations in mean trace element contents in soils of geographic regions in the European part of Russia by cartographic analysis, which showed that soils of mountainous regions and their surroundings are enriched in trace elements. Across the plains of European Russia, the concentrations of most trace elements in soils increase from north to south, in the same direction in which soil clay content increases for the majority of soils; elsewhere, a clear connection with the distribution of sandy sediments was observed. From our own investigations and from data in the scientific literature, a database was created containing soil texture, organic matter content, trace element concentrations, and pH. On this basis, datasets for the Forest-Steppe and Steppe regions were compiled and subdivided by texture, and the upper limit of natural trace element content (x̄ + 3σ) was calculated for soils of differing texture, with sandy loam as the conventional boundary between sandy and clayey classes. The calculations show that, both for the European part of Russia as a whole and for the Forest-Steppe and Steppe regions separately, mean contents and upper natural limits are higher in clayey soils than in sandy soils, and that soils of similar texture in different regions differ less than sandy and clayey soils of the same region. We conclude that background means and upper background limits derived by this statistical method can be used for ecological risk assessment: the proposed approach allows upper background limits to be calculated for sandy and clayey soils of large geographic regions, exceedance of which is evidence of anthropogenic soil contamination.
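The screening rule reduces to a few lines of code. The sketch below uses synthetic concentrations for a hypothetical element (the station data and factor of 1.6 are invented) and computes the per-texture upper background limit, flagging exceedances.

```python
# Sketch of the proposed screening statistic: per texture class, the upper
# background limit is the mean element concentration plus three standard
# deviations; samples above it are flagged as potentially contaminated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
df = pd.DataFrame({
    "texture": rng.choice(["sandy", "clayey"], 400),
    "Zn_ppm": rng.lognormal(3.5, 0.4, 400),
})
df.loc[df.texture == "clayey", "Zn_ppm"] *= 1.6   # clay soils run higher

limits = df.groupby("texture")["Zn_ppm"].agg(["mean", "std"])
limits["upper_background"] = limits["mean"] + 3 * limits["std"]
print(limits.round(1))

df["flagged"] = df["Zn_ppm"] > df["texture"].map(limits["upper_background"])
print("samples flagged as anthropogenic impact:", int(df["flagged"].sum()))
```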
Derivation of a formula for the resonance integral for a nonorthogonal basis set
Yim, Yung-Chang; Eyring, Henry
1981-01-01
In a self-consistent field calculation, a formula for the off-diagonal matrix elements of the core Hamiltonian is derived for a nonorthogonal basis set by a polyatomic approach. A set of parameters is then introduced for the repulsion integral formula of Mataga-Nishimoto to fit the experimental data. The matrix elements computed for the nonorthogonal basis set in the π-electron approximation are transformed to those for an orthogonal basis set by the Löwdin symmetrical orthogonalization. PMID:16593009
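The Löwdin symmetric orthogonalization mentioned here has a compact numerical form: transform with S^(-1/2), built from the eigendecomposition of the overlap matrix. A small generic sketch follows (example matrices of our own choosing, not the paper's π-electron matrices).

```python
# Sketch: Löwdin symmetric orthogonalization. Given a Hamiltonian H and an
# overlap matrix S in a nonorthogonal basis, H' = S^(-1/2) H S^(-1/2) is
# the corresponding matrix in the orthogonalized basis.
import numpy as np
from scipy.linalg import eigh

S = np.array([[1.0, 0.4], [0.4, 1.0]])        # example overlap matrix
H = np.array([[-1.0, -0.3], [-0.3, -0.8]])    # example core Hamiltonian

w, U = np.linalg.eigh(S)                      # S = U diag(w) U^T, w > 0
S_inv_half = U @ np.diag(w ** -0.5) @ U.T
H_orth = S_inv_half @ H @ S_inv_half

print(np.round(H_orth, 4))
# Check: eigenvalues of H' match the generalized problem H c = E S c.
print(np.round(np.linalg.eigvalsh(H_orth), 4))
print(np.round(eigh(H, S, eigvals_only=True), 4))
```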
Theoretical Investigation Leading to Energy Storage in Atomic and Molecular Systems
1990-12-01
... can be calculated in a single run. j) Non-gradient optimization of basis function exponents is possible. The source code can be modified to carry ... basis. The 10s3p/5s3p basis consists of the 9s/4s contraction of Siegbahn and Liu (Reference 91) augmented by a diffuse s-type function (exponent ...) ... vibrational modes. Introduction of diffuse basis functions and optimization of the d-orbital exponents have a small but important effect on the ...
NASA Astrophysics Data System (ADS)
Hübener, H.; Pérez-Osorio, M. A.; Ordejón, P.; Giustino, F.
2012-09-01
We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of F. Giustino et al. [Phys. Rev. B 81, 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices of silicon and diamond calculated using our method fall within 1% of reference plane-wave calculations, demonstrating that this method is promising. We find that polarization orbitals are critical for achieving good agreement with plane-wave calculations, and that only a few additional ζ's are required for obtaining converged results, provided the split norm is properly optimized. Our present work establishes the validity of local orbital basis sets and the self-consistent Sternheimer approach for the calculation of dielectric matrices in extended systems, and prepares the ground for future studies of electronic excitations using these methods.
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
Cascading process in the flute-mode turbulence of a plasma
NASA Technical Reports Server (NTRS)
Gonzalez, R.; Gomez, D.; Fontan, C. F.; Schifino, A. C. S.; Montagne, R.
1993-01-01
The cascades of ideal invariants in flute-mode turbulence are analyzed by considering statistics based on an elementary three-mode coupling process. The statistical dynamics of the system is investigated on the basis of the existence of the physically most important (PMI) triad. When finite ion Larmor radius effects are considered, the PMI triad describes the formation of zonal flows.
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
ERIC Educational Resources Information Center
Soule, Margaret
This survey of the current status of public school libraries in Maine was intended to provide statistical data as a basis for improving the school library media center program in these schools. Information was gathered that detailed how resources and delivery of services differed across grade level; across variation in size of school; between…
Miranda de Sá, Antonio Mauricio F L; Infantosi, Antonio Fernando C; Lazarev, Vladimir V
2007-01-01
In the present work, a commonly used index for evaluating Event-Related Synchronization and Desynchronization (ERS/ERD) in the EEG was expressed as a function of the Spectral F-Test (SFT), a statistical test for assessing whether two sample spectra come from populations with identical theoretical spectra. The sampling distribution of the SFT has been derived, hence allowing ERS/ERD to be evaluated on a statistical basis. The technique is illustrated with EEG signals from 10 normal subjects during intermittent photic stimulation.
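A schematic of the underlying test may be useful; this is a generic construction on our part, not the paper's exact derivation. Under the null hypothesis of identical spectra, the ratio of two periodograms each averaged over M epochs is approximately F-distributed with 2M and 2M degrees of freedom at a given frequency.

```python
# Sketch: Spectral F-Test flavor of ERS/ERD assessment. Average periodograms
# over M epochs in two conditions; their ratio at the stimulation frequency
# is compared against an F(2M, 2M) threshold under the null of equal spectra.
import numpy as np
from scipy.signal import periodogram
from scipy.stats import f as f_dist

rng = np.random.default_rng(9)
fs, n, M = 256, 512, 30
freqs = np.fft.rfftfreq(n, 1 / fs)

def avg_spectrum(amp):  # M epochs of noise plus an 8 Hz component
    t = np.arange(n) / fs
    epochs = rng.standard_normal((M, n)) + amp * np.sin(2 * np.pi * 8 * t)
    return np.mean([periodogram(e, fs)[1] for e in epochs], axis=0)

S_rest, S_stim = avg_spectrum(0.3), avg_spectrum(0.9)  # synchronization at 8 Hz
k = np.argmin(np.abs(freqs - 8.0))

ratio = S_stim[k] / S_rest[k]
threshold = f_dist.ppf(0.95, 2 * M, 2 * M)
print(f"spectral ratio = {ratio:.2f}, F threshold = {threshold:.2f}",
      "-> significant ERS" if ratio > threshold else "-> not significant")
```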
NASA Astrophysics Data System (ADS)
Svirina, Anna; Shindor, Olga; Tatmyshevsky, Konstantin
2014-12-01
The paper addresses the main problems of Russian energy system development, which demonstrate the need for educational programs in the field of renewable and alternative energy. The process of developing curricula and defining teaching techniques on the basis of expert opinion is described, and a competence model for master's students in renewable and alternative energy processing is suggested. Data for statistical analysis were obtained from a distributed questionnaire and in-depth interviews. On the basis of these data, the curricula structure was optimized, and three models for optimizing teaching techniques were developed. The resulting educational program structure, which was endorsed by employers, is presented in the paper. The findings include quantitative estimates of the importance of systemic thinking and of professional skills and knowledge as core competences of the program's graduates; statistical confirmation of the need for a practice-based learning approach; and optimization models for structuring curricula in renewable and alternative energy processing. These findings provide a platform for the development of educational programs.
Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beggs, W.J.
1981-02-01
This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.
Lederer, David J; Bradford, Williamson Z; Fagan, Elizabeth A; Glaspole, Ian; Glassberg, Marilyn K; Glasscock, Kenneth F; Kardatzke, David; King, Talmadge E; Lancaster, Lisa H; Nathan, Steven D; Pereira, Carlos A; Sahn, Steven A; Swigris, Jeffrey J; Noble, Paul W
2015-07-01
FVC outcomes in clinical trials on idiopathic pulmonary fibrosis (IPF) can be substantially influenced by the analytic methodology and the handling of missing data. We conducted a series of sensitivity analyses to assess the robustness of the statistical finding and the stability of the estimate of the magnitude of treatment effect on the primary end point of FVC change in a phase 3 trial evaluating pirfenidone in adults with IPF. Source data included all 555 study participants randomized to treatment with pirfenidone or placebo in the Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis (ASCEND) study. Sensitivity analyses were conducted to assess whether alternative statistical tests and methods for handling missing data influenced the observed magnitude of treatment effect on the primary end point of change from baseline to week 52 in FVC. The distribution of FVC change at week 52 was systematically different between the two treatment groups and favored pirfenidone in each analysis. The method used to impute missing data due to death had a marked effect on the magnitude of change in FVC in both treatment groups; however, the magnitude of treatment benefit was generally consistent on a relative basis, with an approximate 50% reduction in FVC decline observed in the pirfenidone group in each analysis. Our results confirm the robustness of the statistical finding on the primary end point of change in FVC in the ASCEND trial and corroborate the estimated magnitude of the pirfenidone treatment effect in patients with IPF. ClinicalTrials.gov; No.: NCT01366209; URL: www.clinicaltrials.gov.
Self-consistent mean-field approach to the statistical level density in spherical nuclei
NASA Astrophysics Data System (ADS)
Kolomietz, V. M.; Sanzhur, A. I.; Shlomo, S.
2018-06-01
A self-consistent mean-field approach within the extended Thomas-Fermi approximation with Skyrme forces is applied to the calculation of the statistical level density in spherical nuclei. Landau's concept of quasiparticles with the nucleon effective mass and the correct description of continuum states for finite-depth potentials are taken into consideration. The A dependence and the temperature dependence of the statistical inverse level-density parameter K are obtained in good agreement with experimental data.
An Investigation of Attitude Consistency.
ERIC Educational Resources Information Center
Leonard, Wilbert M., II
The author explores some germane implications of cognitive consistency theory. An "affective-cognitive consistency" theory, which specifies the relationship between the affective and cognitive components of the attitude structure, was taken as the theoretical basis of this study. The theory suggests that by knowing what a person values, it should…
Facts and Figures. A Compendium of Statistics on Ontario Universities. Volume 4.
ERIC Educational Resources Information Center
Council of Ontario Universities, Toronto.
The purpose of this compendium is to provide consistent and accurate statistical and graphical information on the Ontario (Canada) university system. The compendium consists of seven sections: (1) Ontario population data with population projections 1986-2021, median income by educational attainment 1985-1994, and unemployment rates by educational…
WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D; Williamson, J; Siebers, J
2014-06-15
Purpose: To extend the two-parameter separable basis-vector model (BVM) to the estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We implemented an idealized post-processing dual-energy imaging (pDECT) simulation consisting of monoenergetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT, and the corresponding ρe, Iex, and stopping power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping power values at 175 MeV show average/maximum errors of 0.8%/1.4%. For adipose, muscle, and bone, these errors correspond to range prediction accuracies of better than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric-fit DECT models, BVM has comparable prediction accuracy without requiring iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.
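Per voxel or sample, the basis-vector idea reduces to a small linear solve. The sketch below uses made-up attenuation values, and the linear mixing of ρe and Iex is shown only schematically (the abstract states both are modeled by BVM but does not give the mixing rule).

```python
# Sketch of the two-parameter basis-vector idea: at two beam energies, the
# measured linear attenuation of an unknown material is modeled as a linear
# combination of two basis materials, giving a 2x2 system for the weights
# (c1, c2). All numerical values are illustrative, not tabulated data.
import numpy as np

# Rows: energies (45, 80 keV); columns: basis materials (water, CaCl2 soln).
mu_basis = np.array([[0.25, 0.55],
                     [0.18, 0.33]])      # 1/cm, made-up numbers
mu_unknown = np.array([0.31, 0.21])      # measured at the two energies

c = np.linalg.solve(mu_basis, mu_unknown)
print("basis weights c1, c2:", np.round(c, 3))

# The same weights then interpolate electron density and mean excitation
# energy from basis-material values (again illustrative), after which the
# stopping power follows from the Bethe-Bloch formula.
rho_e = c @ np.array([3.34e23, 3.87e23])   # electrons/cm^3
I_ex = c @ np.array([75.0, 110.0])         # eV
print(f"rho_e ≈ {rho_e:.3e} /cm^3, I_ex ≈ {I_ex:.1f} eV")
```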
Brown, James G; Joyce, Kerry E; Stacey, Dawn; Thomson, Richard G
2015-05-01
Efficacy of patient decision aids (PtDAs) may be influenced by trial participants' identity either as patients seeking to benefit personally from involvement or as volunteers supporting the research effort. To determine if study characteristics indicative of participants' trial identity might influence PtDA efficacy. We undertook exploratory subgroup meta-analysis of the 2011 Cochrane review of PtDAs, including trials that compared PtDA with usual care for treatment decisions. We extracted data on whether participants initiated the care pathway, setting, practitioner interactions, and 6 outcome variables (knowledge, risk perception, decisional conflict, feeling informed, feeling clear about values, and participation). The main subgroup analysis categorized trials as "volunteerism" or "patienthood" on the basis of whether participants initiated the care pathway. A supplementary subgroup analysis categorized trials on the basis of whether any volunteerism factors were present (participants had not initiated the care pathway, had attended a research setting, or had a face-to-face interaction with a researcher). Twenty-nine trials were included. Compared with volunteerism trials, pooled effect sizes were higher in patienthood trials (where participants initiated the care pathway) for knowledge, decisional conflict, feeling informed, feeling clear, and participation. The subgroup difference was statistically significant for knowledge only (P = 0.03). When trials were compared on the basis of whether volunteerism factors were present, knowledge was significantly greater in patienthood trials (P < 0.001), but there was otherwise no consistent pattern of differences in effects across outcomes. There is a tendency toward greater PtDA efficacy in trials in which participants initiate the pathway of care. Knowledge acquisition appears to be greater in trials where participants are predominantly patients rather than volunteers. © The Author(s) 2015.
Liu, Xin; Zhang, Na; Fan, Zhaoyang; Feng, Fei; Yang, Qi; Zheng, Hairong; Liu, Pengcheng; Li, Debiao
2013-01-01
Purpose: To evaluate the diagnostic performance of a newly developed noncontrast-enhanced MR angiography (NCE-MRA) technique using flow-sensitive dephasing (FSD)-prepared steady-state free precession (SSFP) for detecting calf arterial disease in patients with diabetes. Materials and Methods: Forty-five patients with diabetes who underwent routine CE-MRA of the lower extremities were recruited for NCE-MRA of the calf on a 1.5T MR system. Image quality, evaluated on a four-point scale, and diagnostic performance for detecting more than 50% arterial stenosis were statistically analyzed, using CE-MRA as the reference standard. Results: A total of 264 calf arterial segments were obtained in the 45 patients (88 legs). The percentage of diagnostic arterial segments was 98% for both NCE- and CE-MRA. The image quality, SNR, and CNR were 3.3, 177, and 138 for NCE-MRA and 3.5, 103, and 99 for CE-MRA, respectively. The average sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of NCE-MRA were 97%, 96%, 90%, 99%, and 96%, respectively, on a per-segment basis and 90%, 84%, 82%, 91%, and 87%, respectively, on a per-patient basis. Conclusion: The NCE-MRA technique demonstrates adequate image quality in the delineation of calf arteries and diagnostic performance consistent with CE-MRA for detecting significant stenosis in patients with diabetes. PMID:24925770
Application of artificial neural networks to chemostratigraphy
NASA Astrophysics Data System (ADS)
Malmgren, BjöRn A.; Nordlund, Ulf
1996-08-01
Artificial neural networks, a branch of artificial intelligence, are computer systems formed by a number of simple, highly interconnected processing units that have the ability to learn a set of target vectors from a set of associated input signals. Neural networks learn by self-adjusting a set of parameters, using some pertinent algorithm to minimize the error between the desired output and the network output. We explore the potential of this approach for solving a problem involving classification of geochemical data. The data, taken from the literature, are derived from four late Quaternary zones of volcanic ash of basaltic and rhyolitic origin from the Norwegian Sea. These ash layers span oxygen isotope zones 1, 5, 7, and 11, respectively (the last 420,000 years). The data consist of nine geochemical variables (oxides) determined in each of 183 samples. We employed a three-layer back propagation neural network to assess its efficiency in optimally differentiating samples from the four ash zones on the basis of their geochemical composition. For comparison, three statistical pattern recognition techniques, linear discriminant analysis, the k-nearest neighbor (k-NN) technique, and SIMCA (soft independent modeling of class analogy), were applied to the same data. All of these showed considerably higher error rates than the artificial neural network, indicating that the back propagation network was indeed more powerful in correctly assigning the ash particles to the appropriate zone on the basis of their geochemical composition.
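A minimal stand-in for the classification experiment is sketched below, with synthetic oxide data in place of the published measurements and sklearn's MLPClassifier in place of the authors' back-propagation code.

```python
# Sketch of the back-propagation classification task described above: a
# three-layer network mapping oxide compositions to ash zones. Data here
# are synthetic stand-ins for the nine-oxide, four-zone dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n_per_zone, n_oxides, n_zones = 46, 9, 4
centers = rng.normal(0, 1, (n_zones, n_oxides))          # zone mean compositions
X = np.vstack([c + 0.6 * rng.standard_normal((n_per_zone, n_oxides))
               for c in centers])
y = np.repeat(np.arange(n_zones), n_per_zone)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

# One hidden layer gives the classic three-layer (input-hidden-output) net.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(scaler.transform(X_tr), y_tr)
print("held-out error rate:", 1 - net.score(scaler.transform(X_te), y_te))
```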
NASA Systems Engineering Research Consortium: Defining the Path to Elegance in Systems
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Farrington, Phillip A.
2016-01-01
The NASA Systems Engineering Research Consortium was formed at the end of 2010 to study the approaches to producing elegant systems on a consistent basis. This has been a transformative study looking at the engineering and organizational basis of systems engineering. The consortium has engaged in a variety of research topics to determine the path to elegant systems. In the second year of the consortium, a systems engineering framework emerged which structured the approach to systems engineering and guided our research. This led in the third year to a set of systems engineering postulates that the consortium is continuing to refine. The consortium has conducted several research projects that have contributed significantly to the understanding of systems engineering. The consortium has surveyed the application of NASA's 17 systems engineering processes, explored the physics and statistics of systems integration, and considered organizational aspects of systems engineering discipline integration. The systems integration methods have included system exergy analysis, Akaike Information Criteria (AIC), State Variable Analysis, Multidisciplinary Coupling Analysis (MCA), Multidisciplinary Design Optimization (MDO), System Cost Modelling, System Robustness, and Value Modelling. Organizational studies have included the variability of processes in change evaluations, margin management within the organization, information theory of board structures, social categorization of unintended consequences, and initial looks at applying cognitive science to systems engineering. Consortium members have also studied the bidirectional influence of policy and law with systems engineering.
Sun, Zhoutong; Lonsdale, Richard; Li, Guangyue; Reetz, Manfred T
2016-10-04
Saturation mutagenesis at sites lining the binding pockets of enzymes constitutes a viable protein engineering technique for enhancing or inverting stereoselectivity. Statistical analysis shows that oversampling in the screening step (the bottleneck) increases astronomically as the number of residues in the randomization site increases, which is the reason why reduced amino acid alphabets have been employed, in addition to splitting large sites into smaller ones. Limonene epoxide hydrolase (LEH) has previously served as the experimental platform in these methodological efforts, enabling comparisons between single-code saturation mutagenesis (SCSM) and triple-code saturation mutagenesis (TCSM); these employ either only one or three amino acids, respectively, as building blocks. In this study the comparative platform is extended by exploring the efficacy of double-code saturation mutagenesis (DCSM), in which the reduced amino acid alphabet consists of two members, chosen according to the principles of rational design on the basis of structural information. The hydrolytic desymmetrization of cyclohexene oxide is used as the model reaction, with formation of either (R,R)- or (S,S)-cyclohexane-1,2-diol. DCSM proves to be clearly superior to the likewise tested SCSM, affording both R,R- and S,S-selective mutants. These variants are also good catalysts in reactions of further substrates. Docking computations reveal the basis of enantioselectivity. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
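The screening (oversampling) statistics referred to above can be sketched with a coupon-collector-style estimate: a site of n randomized residues and an amino acid alphabet of size k yields k^n protein variants, and roughly k^n * ln(1/(1 - F)) transformants must be screened for coverage F. This is a standard estimate of the kind used in directed evolution, not necessarily the authors' exact formula, and it ignores codon degeneracy, which would inflate the numbers further.

```python
import math

def transformants_needed(n_positions, alphabet_size, coverage=0.95):
    """Screening effort for a saturation-mutagenesis library.

    V = alphabet_size ** n_positions protein variants; sampling with
    replacement needs about V * ln(1 / (1 - coverage)) transformants to
    observe each variant at least once with the stated probability.
    """
    V = alphabet_size ** n_positions
    return math.ceil(V * math.log(1.0 / (1.0 - coverage)))

# A 5-residue site with the full 20-amino-acid alphabet vs reduced alphabets:
for label, k in [("full", 20), ("TCSM", 3), ("DCSM", 2), ("SCSM", 1)]:
    print(label, transformants_needed(5, k))
```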
Monitoring and Evaluation: Statistical Support for Life-cycle Studies, Annual Report 2003.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skalski, John
2003-11-01
The ongoing mission of this project is the development of statistical tools for analyzing fisheries tagging data in the most precise and appropriate manner possible. This mission also includes providing statistical guidance on the best ways to design large-scale tagging studies. This mission continues because the technologies for conducting fish tagging studies continuously evolve. In just the last decade, fisheries biologists have seen the evolution from freeze-brands and coded wire tags (CWT) to passive integrated transponder (PIT) tags, balloon-tags, radiotelemetry, and now, acoustic-tags. With each advance, the technology holds the promise of more detailed and precise information. However, the technology for analyzing and interpreting the data also becomes more complex as the tagging techniques become more sophisticated. The goal of the project is to develop the analytical tools in parallel with the technical advances in tagging studies, so that maximum information can be extracted on a timely basis. Associated with this mission is the transfer of these analytical capabilities to the field investigators to assure consistency and the highest levels of design and analysis throughout the fisheries community. Consequently, this project provides detailed technical assistance on the design and analysis of tagging studies to groups requesting assistance throughout the fisheries community. Ideally, each project and each investigator would invest in the statistical support needed for the successful completion of their study. However, this is an ideal that is rarely if ever attained. Furthermore, there is only a small pool of highly trained scientists in this specialized area of tag analysis here in the Northwest. Project 198910700 provides the financial support to sustain this local expertise on the statistical theory of tag analysis at the University of Washington and make it available to the fisheries community. Piecemeal and fragmented support from various agencies and organizations would be incapable of maintaining a center of expertise. The mission of the project is to help assure that tagging studies are designed and analyzed from the onset to extract the best available information using state-of-the-art statistical methods. The overarching goal of the project is to assure statistically sound survival studies so that fish managers can focus on the management implications of their findings and not be distracted by concerns about whether the studies are statistically reliable. Specific goals and objectives of the study include the following: (1) Provide consistent application of statistical methodologies for survival estimation across all salmon life cycle stages to assure comparable performance measures and assessment of results through time, to maximize learning and adaptive management opportunities, and to improve and maintain the ability to responsibly evaluate the success of implemented Columbia River FWP salmonid mitigation programs and identify future mitigation options. (2) Improve analytical capabilities to conduct research on survival processes of wild and hatchery chinook and steelhead during smolt outmigration, to improve monitoring and evaluation capabilities and assist in-season river management to optimize operational and fish passage strategies to maximize survival. (3) Extend statistical support to estimate ocean survival and in-river survival of returning adults. Provide statistical guidance in implementing a river-wide adult PIT-tag detection capability.
(4) Develop statistical methods for survival estimation for all potential users and make this information available through peer-reviewed publications, statistical software, and technology transfers to organizations such as NOAA Fisheries, the Fish Passage Center, US Fish and Wildlife Service, US Geological Survey (USGS), US Army Corps of Engineers (USACE), Public Utility Districts (PUDs), the Independent Scientific Advisory Board (ISAB), and other members of the Northwest fisheries community. (5) Provide and maintain statistical software for tag analysis and user support. (6) Provide improvements in statistical theory and software as requested by user groups. These improvements include extending software capabilities to address new research issues, adapting tagging techniques to new study designs, and extending the analysis capabilities to new technologies such as radio-tags and acoustic-tags.
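As a minimal illustration of the kind of tag-based survival estimation the project supports, the sketch below applies the standard single-release estimator to PIT-tag detections at two downstream arrays; all counts are hypothetical and the estimator is a textbook simplification, not the project's software.

```python
def survival_to_first_array(released, det1, det2, det_both):
    """Single-release survival estimate with two detection arrays.

    Fish detected at the downstream array must have passed the first
    array alive, so p1 = det_both / det2 estimates the first array's
    detection probability; survival to array 1 is det1 / (p1 * released).
    """
    p1 = det_both / det2
    return det1 / (p1 * released)

# Hypothetical counts: 1000 smolts released; 450 detected at array 1,
# 300 at array 2, 270 at both arrays.
print(survival_to_first_array(1000, 450, 300, 270))  # 0.5
```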
The space of ultrametric phylogenetic trees.
Gavryushkin, Alex; Drummond, Alexei J
2016-08-21
The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space and formulate several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis, and justify them. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce and that the choice between metric spaces requires additional properties to be considered. In particular, the summary tree minimising the square distance to the trees in the sample might be different for different parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Type-curve estimation of statistical heterogeneity
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica
2004-04-01
The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
Laboratory performance in the Sediment Laboratory Quality-Assurance Project, 1996-98
Gordon, John D.; Newland, Carla A.; Gagliardi, Shane T.
2000-01-01
Analytical results from all sediment quality-control samples are compiled and statistically summarized by the USGS, Branch of Quality Systems, both on an intra- and interlaboratory basis. When evaluating these data, the reader needs to keep in mind that every measurement has an error component associated with it. It is premature to use the data from the first five SLQA studies to judge any of the laboratories as performing in an unacceptable manner. There were, however, some notable differences in the results for the 12 laboratories that participated in the five SLQA studies. For example, the overall median percent difference for suspended-sediment concentration on an individual laboratory basis ranged from –18.04 to –0.33 percent. Five of the 12 laboratories had an overall median percent difference for suspended-sediment concentration of –2.02 to –0.33 percent. There was less variability in the median difference for the measured fine-size material mass. The overall median percent difference for fine-size material mass ranged from –10.11 to –4.27 percent. Except for one laboratory, the median difference for fine-size material mass was within a fairly narrow range of –6.76 to –4.27 percent. The median percent difference for sand-size material mass differed among laboratories more than any other physical sediment property measured in the study. The overall median percent difference for the sand-size material mass ranged from –1.49 percent to 26.39 percent. Five of the nine laboratories that do sand/fine separations had overall median percent differences that ranged from –1.49 to 2.98 percent for sand-size material mass. Careful review of the data reveals that certain laboratories consistently produced data within statistical control limits for some or all of the physical sediment properties measured in this study, whereas other laboratories occasionally produced data that exceeded the control limits.
Uenohara, Seiji; Mitsui, Takahito; Hirata, Yoshito; Morie, Takashi; Horio, Yoshihiko; Aihara, Kazuyuki
2013-06-01
We experimentally study strange nonchaotic attractors (SNAs) and chaotic attractors by using a nonlinear integrated circuit driven by a quasiperiodic input signal. An SNA is a geometrically strange attractor for which typical orbits have nonpositive Lyapunov exponents. It is a difficult problem to distinguish between SNAs and chaotic attractors experimentally. If a system has an SNA as a unique attractor, the system produces an identical response to a repeated quasiperiodic signal, regardless of the initial conditions, after a certain transient time. Such reproducibility of response outputs is called consistency. On the other hand, if the attractor is chaotic, the consistency is low owing to the sensitive dependence on initial conditions. In this paper, we analyze the experimental data for distinguishing between SNAs and chaotic attractors on the basis of the consistency.
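The consistency test described above can be imitated numerically with the classic GOPY map, a quasiperiodically forced system known to exhibit an SNA for forcing amplitude sigma > 1: two copies started from different initial conditions receive an identical drive and their outputs are correlated after a transient. Parameters and run lengths below are illustrative, not the paper's circuit settings.

```python
import numpy as np

def gopy_response(x0, sigma=1.5, n=6000, omega=(np.sqrt(5) - 1) / 2):
    """Response of the GOPY map; the theta sequence is the same for
    every call, so every run receives an identical quasiperiodic drive."""
    x, theta = x0, 0.0
    out = np.empty(n)
    for i in range(n):
        x = 2.0 * sigma * np.tanh(x) * np.cos(2.0 * np.pi * theta)
        theta = (theta + omega) % 1.0
        out[i] = x
    return out

# Two copies with different (same-sign, to respect the map's odd symmetry)
# initial conditions and identical drive.
a = gopy_response(0.5)[3000:]    # discard the transient
b = gopy_response(1.3)[3000:]
print(np.corrcoef(a, b)[0, 1])   # close to 1: high consistency (SNA)
```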
NASA Astrophysics Data System (ADS)
Halbach, Heiner; Chatterjee, Niranjan D.
1984-11-01
The technique of linear parametric programming has been applied to derive sets of internally consistent thermodynamic data for 21 condensed phases of the quaternary system CaO-Al2O3-SiO2-H2O (CASH) (Table 4). This was achieved by simultaneously processing: a) calorimetric data for 16 of these phases (Table 1), and b) experimental phase equilibria reversal brackets for 27 reactions (Table 3) involving these phases. Calculation of equilibrium P-T curves of several arbitrarily picked reactions employing the preferred set of internally consistent thermodynamic data from Table 4 shows that the input brackets are invariably satisfied by the calculations (Fig. 2a). By contrast, the same equilibria calculated on the basis of a set of thermodynamic data derived by applying statistical methods to a large body of comparable input data (Haas et al. 1981; Hemingway et al. 1982) do not necessarily agree with the experimental reversal brackets. Prediction of some experimentally investigated phase relations not included in the linear programming input database also appears to be remarkably successful. Indications are, therefore, that the thermodynamic data listed in Table 4 may be used with confidence to predict geologic phase relations in the CASH system with considerable accuracy. For such calculated phase diagrams and their petrological implications, the reader's attention is drawn to the paper by Chatterjee et al. (1984).
The vortex street as a statistical-mechanical phenomenon
NASA Technical Reports Server (NTRS)
Montgomery, D.
1974-01-01
An explanation of the Karman vortex street is presented on the basis of the two-temperature canonical distribution for inviscid two-dimensional flows in Navier-Stokes fluids or guiding-center plasmas.
Characterizing Sub-Daily Flow Regimes: Implications of Hydrologic Resolution on Ecohydrology Studies
Bevelhimer, Mark S.; McManamay, Ryan A.; O'Connor, B.
2014-05-26
Natural variability in flow is a primary factor controlling geomorphic and ecological processes in riverine ecosystems. Within the hydropower industry, there is growing pressure from environmental groups and natural resource managers to change reservoir releases from daily peaking to run-of-river operations on the basis of the assumption that downstream biological communities will improve under a more natural flow regime. In this paper, we discuss the importance of assessing sub-daily flows for understanding the physical and ecological dynamics within river systems. We present a variety of metrics for characterizing sub-daily flow variation and use these metrics to evaluate general trends among streams affected by peaking hydroelectric projects, run-of-river projects and streams that are largely unaffected by flow altering activities. Univariate and multivariate techniques were used to assess similarity among different stream types on the basis of these sub-daily metrics. For comparison, similar analyses were performed using analogous metrics calculated with mean daily flow values. Our results confirm that sub-daily flow metrics reveal variation among and within streams that is not captured by daily flow statistics. Using sub-daily flow statistics, we were able to quantify the degree of difference between unaltered and peaking streams and the amount of similarity between unaltered and run-of-river streams. The sub-daily statistics were largely uncorrelated with daily statistics of similar scope. Furthermore, on short temporal scales, sub-daily statistics reveal the relatively constant nature of unaltered stream reaches and the highly variable nature of hydropower-affected streams, whereas daily statistics show just the opposite over longer temporal scales.
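One simple sub-daily metric of the kind discussed is the Richards-Baker flashiness index applied to hourly flows; the sketch below contrasts a hypothetical peaking-release series with its daily means, where the index collapses toward zero. The series and block pattern are invented for illustration.

```python
import numpy as np

def rb_flashiness(q):
    """Richards-Baker flashiness index: sum of |dq| over total flow."""
    q = np.asarray(q, dtype=float)
    return np.abs(np.diff(q)).sum() / q[1:].sum()

# Hypothetical hourly series for a peaking project: 6-hour release blocks
# alternating between 5 and 50 cms over 30 days.
hours = np.arange(24 * 30)
hourly = np.where(((hours % 24) // 6) % 2 == 0, 5.0, 50.0)
daily = hourly.reshape(30, 24).mean(axis=1)   # mean daily flows

print(rb_flashiness(hourly))  # large: sub-daily ramping is visible
print(rb_flashiness(daily))   # ~0: daily means hide the peaking signal
```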
ERIC Educational Resources Information Center
Hill, Dave
2009-01-01
In this paper, the author critiques what he analyses as the misuse of statistics in arguments put forward by some Critical Race Theorists in Britain showing that "Race" "trumps" Class in terms of underachievement at 16+ exams in England and Wales. At a theoretical level, using Marxist work the author argues for a notion of…
Statistics of Low-Mass Companions to Stars: Implications for Their Origin
NASA Technical Reports Server (NTRS)
Stepinski, T. F.; Black, D. C.
2001-01-01
One of the more significant results from observational astronomy over the past few years has been the detection, primarily via radial velocity studies, of low-mass companions (LMCs) to solar-like stars. The commonly held interpretation of these is that the majority are "extrasolar planets" whereas the rest are brown dwarfs, the distinction made on the basis of an apparent discontinuity in the distribution of M sin i for LMCs as revealed by a histogram. We report here results from statistical analysis of M sin i, as well as of the orbital elements data for available LMCs, to test the assertion that the LMC population is heterogeneous. The outcome is mixed. Solely on the basis of the distribution of M sin i, a heterogeneous model is preferable. Overall, we find that a definitive statement asserting that the LMC population is heterogeneous is, at present, unjustified. In addition we compare statistics of LMCs with a comparable sample of stellar binaries. We find a remarkable statistical similarity between these two populations. This similarity, coupled with a marked populational dissimilarity between LMCs and acknowledged planets, motivates us to suggest a common origin hypothesis for LMCs and stellar binaries as an alternative to the prevailing interpretation. We discuss merits of such a hypothesis and indicate a possible scenario for the formation of LMCs.
A statistical method for lung tumor segmentation uncertainty in PET images based on user inference.
Zheng, Chaojie; Wang, Xiuying; Feng, Dagan
2015-01-01
PET has been widely accepted as an effective imaging modality for lung tumor diagnosis and treatment. However, standard criteria for delineating tumor boundaries from PET have yet to be developed, largely due to the relatively low quality of PET images, uncertain tumor boundary definition, and the variety of tumor characteristics. In this paper, we propose a statistical solution to segmentation uncertainty on the basis of user inference. We first define the uncertainty segmentation band on the basis of a segmentation probability map constructed from the Random Walks (RW) algorithm; then, based on the extracted features of the user inference, we use Principal Component Analysis (PCA) to formulate the statistical model for labeling the uncertainty band. We validated our method on 10 lung PET-CT phantom studies from the public RIDER collections [1] and 16 clinical PET studies where tumors were manually delineated by two experienced radiologists. The methods were validated using the Dice similarity coefficient (DSC) to measure the spatial volume overlap. Our method achieved an average DSC of 0.878 ± 0.078 on phantom studies and 0.835 ± 0.039 on clinical studies.
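A minimal sketch of the uncertainty-band construction, assuming it amounts to thresholding the RW segmentation probability map; the band limits 0.2 and 0.8 are illustrative choices, not the paper's values.

```python
import numpy as np

def uncertainty_band(prob_map, lo=0.2, hi=0.8):
    """Split a segmentation probability map into confident foreground,
    confident background, and the uncertainty band left for the
    user-inference/PCA labeling step."""
    fg = prob_map >= hi
    bg = prob_map <= lo
    return fg, bg, ~(fg | bg)

# Toy 1-D profile across a tumor boundary:
p = np.array([0.02, 0.10, 0.35, 0.60, 0.90, 0.97])
fg, bg, band = uncertainty_band(p)
print(band)   # [False False  True  True False False]
```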
Ensembles of radial basis function networks for spectroscopic detection of cervical precancer
NASA Technical Reports Server (NTRS)
Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.
1998-01-01
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.
An Automated System for Chromosome Analysis
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Melnyk, J. H.
1976-01-01
The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples, and to provide a basis for statistical analysis of quantitative chromosome measurement data are described.
Implementation of Discovery Projects in Statistics
ERIC Educational Resources Information Center
Bailey, Brad; Spence, Dianna J.; Sinn, Robb
2013-01-01
Researchers and statistics educators consistently suggest that students will learn statistics more effectively by conducting projects through which they actively engage in a broad spectrum of tasks integral to statistical inquiry, in the authentic context of a real-world application. In keeping with these findings, we share an implementation of…
NASA Technical Reports Server (NTRS)
1980-01-01
The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.
NASA Astrophysics Data System (ADS)
Yu, Yong; Wang, Jun
Wheat, pretreated by 60Co gamma irradiation, was dried by hot air with irradiation dosages of 0-3 kGy, drying temperatures of 40-60 °C, and initial moisture contents of 19-25% (dry basis). The drying characteristics and dried qualities of wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design with three variables (at five levels) and five response functions, together with the corresponding analysis method, was employed to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second-order polynomials consisting of linear, quadratic and interaction terms. A high correlation coefficient indicated the suitability of the second-order polynomial to predict these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.
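The second-order polynomial response surface described (linear, interaction and quadratic terms in three coded variables) corresponds to an ordinary least-squares fit on degree-2 polynomial features, as in this sketch; the data are synthetic stand-ins for the drying experiments, and the coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
# Three coded variables: irradiation dose, drying temperature, initial moisture.
X = rng.uniform(-1, 1, size=(23, 3))
# Synthetic response (e.g. drying time) with linear, interaction and
# quadratic contributions plus a little noise.
y = 3 + X @ np.array([0.8, -1.2, 0.5]) + 0.6 * X[:, 0] * X[:, 1] \
    + 0.9 * X[:, 2] ** 2 + rng.normal(0, 0.05, 23)

# Degree-2 polynomial features are exactly the linear, interaction and
# quadratic terms of the second-order model.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)
print(model.score(X, y))   # high R^2: the quadratic form fits
```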
[Data supporting quality circle management of inpatient depression treatment].
Brand, S; Härter, M; Sitta, P; van Calker, D; Menke, R; Heindl, A; Herold, K; Kudling, R; Luckhaus, C; Rupprecht, U; Sanner, Dirk; Schmitz, D; Schramm, E; Berger, M; Gaebel, W; Schneider, F
2005-07-01
Several quality assurance initiatives in health care have been undertaken during the past years. The next step consists of systematically combining single initiatives in order to build up strategic quality management. In a German multicenter study, the quality of inpatient depression treatment was measured in ten psychiatric hospitals. Half of the hospitals received comparative feedback on their individual results in comparison to the other hospitals (benchmarking). This benchmarking was used by each hospital as a statistical basis for in-house quality work to improve the quality of depression treatment. According to hospital differences concerning procedure and outcome, different goals were chosen. There were also differences with respect to structural characteristics, strategies, and outcome. The feedback from participants about data-based quality circles in general and the availability of benchmarking data was positive. The necessity of carefully choosing quality circle members and of professional moderation became obvious. Data-based quality circles including benchmarking have proven to be useful for quality management in inpatient depression care.
NASA Astrophysics Data System (ADS)
Kim, C.-H.; Kreiner, J. M.; Zakrzewski, B.; Ogłoza, W.; Kim, H.-W.; Jeong, M.-J.
2018-04-01
A comprehensive catalog of 623 galactic eclipsing binary (EB) systems with eccentric orbits is presented with more than 2830 times of minima determined from the archived photometric data by various sky-survey projects and new photometric measurements. The systems are divided into two groups according to whether the individual system has a GCVS name or not. All the systems in both groups are further classified into three categories (D, A, and A+III) on the basis of their eclipse timing diagrams: 453 D systems showing just constantly displaced secondary minima, 139 A systems displaying only apsidal motion (AM), and 31 A+III systems exhibiting both AM and light-time effects. AM parameters for 170 systems (A and A+III systems) are consistently calculated and cataloged with basic information for all systems. Some important statistics for the AM parameters are discussed and compared with those derived for the eccentric EB systems in the Large and Small Magellanic Clouds.
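For the D systems, the constant displacement of the secondary minimum measures the orbital eccentricity through the standard first-order relation phase_II ≈ 0.5 + (2 e cos ω)/π (valid for small e and near-edge-on inclination); a small helper with an illustrative phase value follows.

```python
import math

def e_cos_omega(phase_secondary):
    """First-order estimate of e*cos(omega) from the orbital phase of
    the secondary minimum (0.5 for a circular orbit):
        phase_II ~ 0.5 + (2 / pi) * e * cos(omega)
    """
    return (phase_secondary - 0.5) * math.pi / 2.0

print(e_cos_omega(0.56))  # ~0.094 for a secondary displaced to phase 0.56
```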
The generalizability of the Youth Self-Report syndrome structure in 23 societies.
Ivanova, Masha Y; Achenbach, Thomas M; Rescorla, Leslie A; Dumenci, Levent; Almqvist, Fredrik; Bilenberg, Niels; Bird, Hector; Broberg, Anders G; Dobrean, Anca; Döpfner, Manfred; Erol, Nese; Forns, Maria; Hannesdottir, Helga; Kanbayashi, Yasuko; Lambert, Michael C; Leung, Patrick; Minaei, Asghar; Mulatu, Mesfin S; Novik, Torunn; Oh, Kyung Ja; Roussos, Alexandra; Sawyer, Michael; Simsek, Zeynep; Steinhausen, Hans-Christoph; Weintraub, Sheila; Winkler Metzke, Christa; Wolanczyk, Tomasz; Zilber, Nelly; Zukauskiene, Rita; Verhulst, Frank C
2007-10-01
As a basis for theories of psychopathology, clinical psychology and related disciplines need sound taxonomies that are generalizable across diverse populations. To test the generalizability of a statistically derived 8-syndrome taxonomic model for youth psychopathology, confirmatory factor analyses (CFAs) were performed on the Youth Self-Report (T. M. Achenbach & L. A. Rescorla, 2001) completed by 30,243 youths 11-18 years old from 23 societies. The 8-syndrome taxonomic model met criteria for good fit to the data from each society. This was consistent with findings for the parent-completed Child Behavior Checklist (Achenbach & Rescorla, 2001) and the teacher-completed Teacher's Report Form (Achenbach & Rescorla, 2001) from many societies. Separate CFAs by gender and age group supported the 8-syndrome model for boys and girls and for younger and older youths within individual societies. The findings provide initial support for the taxonomic generalizability of the 8-syndrome model across very diverse societies, both genders, and 2 age groups. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
Multi-charge-state molecular dynamics and self-diffusion coefficient in the warm dense matter regime
NASA Astrophysics Data System (ADS)
Fu, Yongsheng; Hou, Yong; Kang, Dongdong; Gao, Cheng; Jin, Fengtao; Yuan, Jianmin
2018-01-01
We present a multi-ion molecular dynamics (MIMD) simulation and apply it to calculating the self-diffusion coefficients of ions with different charge-states in the warm dense matter (WDM) regime. First, the method is used for the self-consistent calculation of electron structures of different charge-state ions in the ion sphere, with the ion-sphere radii being determined by the plasma density and the ion charges. The ionic fraction is then obtained by solving the Saha equation, taking account of interactions among different charge-state ions in the system, and ion-ion pair potentials are computed using the modified Gordon-Kim method in the framework of temperature-dependent density functional theory on the basis of the electron structures. Finally, MIMD is used to calculate ionic self-diffusion coefficients from the velocity correlation function according to the Green-Kubo relation. A comparison with the results of the average-atom model shows that different statistical processes will influence the ionic diffusion coefficient in the WDM regime.
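The final step, the self-diffusion coefficient from the Green-Kubo relation D = (1/3) ∫ <v(0)·v(t)> dt, can be sketched as follows for the trajectory of ions of one charge state; the velocity array here is a random placeholder, and the lag window and time step are illustrative.

```python
import numpy as np

def self_diffusion(velocities, dt):
    """Green-Kubo self-diffusion: D = (1/3) * integral of <v(0).v(t)>.

    velocities: array (n_steps, n_ions, 3) for ions of one charge state.
    """
    n_steps = velocities.shape[0]
    n_lag = n_steps // 2
    vacf = np.empty(n_lag)
    for lag in range(n_lag):
        # dot products averaged over time origins and over ions
        dots = np.sum(velocities[:n_steps - lag] * velocities[lag:], axis=-1)
        vacf[lag] = dots.mean()
    # trapezoidal integration of the VACF
    integral = dt * (0.5 * vacf[0] + vacf[1:-1].sum() + 0.5 * vacf[-1])
    return integral / 3.0

rng = np.random.default_rng(2)
v = rng.normal(size=(2000, 64, 3))    # placeholder trajectory
print(self_diffusion(v, dt=1e-3))
```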
ISO9000 and the quality management system in the digital hospital.
Liu, Yalan; Yao, Bin; Zhang, Zigang
2002-01-01
The ISO9000 quality management system (ISO9000QMS) emphasizes customer orientation, managers' leadership and the involvement of all staff; adopts the process method and system management; promotes making decisions on the basis of facts and improving consistently; and establishes win-win relations with suppliers. The digital hospital can therefore adopt the ISO9000QMS. In order to establish the ISO9000QMS, the digital hospital should: (1) design integrally, including analyzing the operation procedure, clarifying the job duties, setting up the implementation team and setting the quality policy and objectives; (2) learn the ISO9000 quality standards; (3) draw up the documents, including the quality manual, program files and operation guiding files; (4) train staff according to the documents; (5) execute the quality standard, including the service quality auditing, quality record auditing and quality system auditing; (6) improve continually. With the establishment of the ISO9000QMS, the digital hospital can appraise more accurately, analyze quality matters statistically and avoid the interference of artificial factors.
NASA Astrophysics Data System (ADS)
Drossel, Welf-Guntram; Schubert, Andreas; Putz, Matthias; Koriath, Hans-Joachim; Wittstock, Volker; Hensel, Sebastian; Pierer, Alexander; Müller, Benedikt; Schmidt, Marek
2018-01-01
The technique of joining by forming allows the structural integration of piezoceramic fibers into locally microstructured metal sheets without any elastic interlayers. High-volume production of the joining partners results in statistical deviations from the nominal dimensions. A numerical simulation of geometric process sensitivity shows that these deviations have a highly significant influence on the resulting fiber stresses after the joining-by-forming operation and demonstrates the necessity of a monitoring concept. On this basis, the electromechanical behavior of piezoceramic array transducers is investigated experimentally before, during and after the joining process. The piezoceramic array transducer consists of an arrangement of five electrically interconnected piezoceramic fibers. The findings show that the impedance spectrum depends on the fiber stresses and can be used for in-process monitoring during the joining process. Based on the impedance values, the preload state of the interconnected piezoceramic fibers can be specifically controlled and a fiber overload avoided.
Andrade, Hector; Renaud, Paul E
2011-12-01
Benthic faunal data are regularly collected worldwide to assess the ecological quality of marine environments. Recently, there has been renewed interest in developing biological indices able to identify environmental status and potential anthropogenic impacts. In this paper we evaluate the performance of a general polychaete/amphipod ratio along the Norwegian continental shelf as an environmental indicator for offshore oil and gas impacts. Two main trends are apparent: first, a contamination gradient is discernible from the locations where production takes place out to stations 10,000 m away. Second, the quality of the marine environment has improved over time. These results are consistent with monitoring reports employing a combination of uni- and multi-variate statistics. Thus, we consider this ratio a relatively simple, useful and potentially cost-effective complement to other more demanding assessment techniques. Because of its strong theoretical basis, it may also be useful for detecting ecological change as a result of other activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Beyond statistical inference: A decision theory for science
KILLEEN, PETER R.
2008-01-01
Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests—which place all value on the replicability of an effect and none on its magnitude—as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute. PMID:17201351
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
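A minimal sketch of the ensemble Kalman filter analysis step mentioned above, in its perturbed-observation form with a linear observation operator; the dimensions, noise levels and random data are illustrative, not the presentation's examples.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Perturbed-observation ensemble Kalman filter analysis step.

    ensemble: (n_state, n_members) forecast ensemble
    H:        (n_obs, n_state) linear observation operator
    y:        (n_obs,) observations with error covariance R
    """
    n_mem = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = anomalies @ anomalies.T / (n_mem - 1)          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    # Each member assimilates its own perturbed copy of the observations.
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_mem).T
    return ensemble + K @ (y_pert - H @ ensemble)

rng = np.random.default_rng(3)
ens = rng.normal(size=(4, 50))        # 4 state variables, 50 members
H = np.eye(2, 4)                      # observe the first two components
updated = enkf_update(ens, H, np.array([1.0, -0.5]), 0.1 * np.eye(2), rng)
print(updated.mean(axis=1))           # posterior ensemble mean
```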
Practicable group testing method to evaluate weight/weight GMO content in maize grains.
Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi
2011-07-13
Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
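The statistical evaluation behind this group testing design can be sketched with the standard pooled-prevalence estimator: if x of m pools of k kernels test positive, the per-kernel GM proportion is estimated as 1 − (1 − x/m)^(1/k). Equating this kernel-count proportion with weight/weight content assumes roughly uniform kernel weights, and the pool counts below are invented.

```python
def gmo_proportion(positive_pools, n_pools, pool_size=20):
    """Pooled-prevalence estimate of the per-kernel GM proportion.

    A pool tests negative only if all its kernels are non-GM, so
    P(pool negative) = (1 - p) ** pool_size, which inverts to
    p = 1 - (negative fraction) ** (1 / pool_size).
    """
    neg_fraction = 1.0 - positive_pools / n_pools
    return 1.0 - neg_fraction ** (1.0 / pool_size)

# e.g. 3 positive pools out of 10 pools of 20 kernels each:
print(gmo_proportion(3, 10))  # ~0.018, i.e. about 1.8% of kernels
```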
Diagnostic articulation tables
NASA Astrophysics Data System (ADS)
Mikhailov, V. G.
2002-09-01
In recent years, considerable progress has been made in the development of instrumental methods for evaluating general speech quality and intelligibility on the basis of modeling the auditory perception of speech and measuring the signal-to-noise ratio. Despite certain advantages (fast, low-labor measurement procedures), these methods are not universal and are, in essence, secondary, because they rely on calibration against subjective statistical measurements. At the same time, some specific problems of speech quality evaluation, such as the diagnostics of the factors responsible for the deviation of speech quality from the standard (e.g., accent features of a speaker or individual voice distortions), can be solved by psycholinguistic methods. This paper considers different kinds of diagnostic articulation tables: tables of minimal pairs of monosyllabic words (DRT) based on the Jacobson differential features, tables consisting of multisyllabic quartets of Russian words (the choice method), and tables of incomplete monosyllables of the _VC/CV_ type (the supplementary note method). Comparative estimates of the tables are presented along with recommendations concerning their application.
Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng
2017-03-21
Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration with sensitivity as low as 8.7 × 10 -2 copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide useful guideline for future development of dLAMP devices.
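The Poisson component of the model is the standard digital-assay relation: with f the fraction of positive chambers, the mean copy number per chamber is λ = −ln(1 − f), and concentration follows from the 9.6 nL chamber volume. A sketch, using the chip dimensions quoted above:

```python
import math

def dlamp_concentration(n_positive, n_chambers=1200, chamber_nl=9.6):
    """Copies per microliter from digital-assay Poisson statistics.

    f = fraction of positive chambers; lambda = -ln(1 - f) is the mean
    copy number per chamber; dividing by the chamber volume (in uL)
    gives the concentration.
    """
    f = n_positive / n_chambers
    lam = -math.log(1.0 - f)
    return lam / (chamber_nl * 1e-3)   # 1 nL = 1e-3 uL

print(dlamp_concentration(300))  # 300/1200 positives -> ~30 copies/uL
```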
The efficacy of E.P.D., a new immunotherapy, in the treatment of allergic diseases in children.
Caramia, G; Franceschini, F; Cimarelli, Z A; Ciucchi, M S; Gagliardini, R; Ruffini, E
1996-11-01
A double-blind study was conducted on a group of 35 children, 8 of whom were allergic to grass pollen and 27 allergic to Dermatophagoides pteronyssinus and Dermatophagoides farinae. We verified the efficacy and tolerability of a new immunotherapy called E.P.D. (Enzyme Potentiated Desensitization). This particular immunotherapy consists of an intradermal injection of a mix made up of an allergenic solution at extremely low doses and an enzyme, beta-glucuronidase. The vaccine is administered once a year, two weeks before pollen peaks, for children with seasonal allergies and twice a year, in February and November, for children with non-seasonal allergies (Dermatophagoides). The results, statistically analyzed on the basis of a symptoms score, showed good clinical efficacy in patients affected by both seasonal and non-seasonal allergies. Due to the clinical effectiveness, easy administration and excellent tolerability of the immunotherapy, E.P.D. is particularly suited for treating or reducing allergic symptoms in allergic children.
Di Stanislao, C; Di Berardino, L; Bianchi, I; Bologna, G
1997-02-01
Control of seasonal symptoms by means of a preventive and easy-to-use immunotherapy (only one intradermal injection eight weeks before the pollen peak) is recommended nowadays. We verified the clinical efficacy of E.P.D. (Enzyme Potentiated Desensitization) in a double-blind, placebo-controlled study. This particular immunotherapy consists of an intradermal injection mix made up of allergenic extracts at extremely low doses and an enzyme called beta-glucuronidase. The vaccine is administered once a year, eight weeks before pollen peaks. We studied a group of 40 patients allergic to grass pollen. The results, analysed statistically on the basis of a symptoms score, showed good clinical efficacy and a significant reduction of drug consumption during the high pollen period. Due to the clinical effectiveness, easy administration (only one injection) and excellent tolerance of the immunotherapy, E.P.D. is particularly suited for the prevention of seasonal symptoms in patients allergic to grass pollen.
Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne
2012-01-01
In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882
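Of the non-linear specifications above, RKHS regression with a Gaussian kernel can be sketched as kernel ridge regression; the marker matrix and phenotype below are random placeholders shaped like the study's 306 lines by 1717 DArT markers, and the bandwidth and ridge parameters are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(306, 1717)).astype(float)  # DArT markers (0/1)
y = X[:, :50].sum(axis=1) + rng.normal(0, 1, 306)       # placeholder trait

# Gaussian-kernel RKHS regression as kernel ridge; gamma (bandwidth) and
# alpha (ridge penalty) would normally be tuned by cross-validation.
model = KernelRidge(kernel="rbf", gamma=1.0 / X.shape[1], alpha=1.0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```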
The clinician's guide to autism.
Harrington, John W; Allen, Korrie
2014-02-01
On the basis of the most recent epidemiologic research, Autism Spectrum Disorder (ASD) affects approximately 1% to 2% of all children. (1)(2) On the basis of some research evidence and consensus, the Modified Checklist for Autism in Toddlers is a helpful tool to screen for autism in children between ages 16 and 30 months. (11) The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), changes from the 3-symptom category of the Fourth Edition to a 2-symptom category: deficits in social communication and social interaction are combined with repetitive and restrictive behaviors, and more criteria are required per category. The DSM-5 subsumes all the previous diagnoses of autism (classic autism, Asperger syndrome, and pervasive developmental disorder not otherwise specified) into just ASDs. On the basis of moderate to strong evidence, the use of applied behavioral analysis and intensive behavioral programs has a beneficial effect on language and the core deficits of children with autism. (16) Currently, minimal or no evidence is available to endorse most complementary and alternative medicine therapies used by parents, such as dietary changes (gluten free), vitamins, chelation, and hyperbaric oxygen. (16) On the basis of consensus and some studies, pediatric clinicians should improve their capacity to provide children with ASD a medical home that is accessible and provides family-centered, continuous, comprehensive and coordinated, compassionate, and culturally sensitive care. (20)
Senior Computational Scientist | Center for Cancer Research
The Basic Science Program (BSP) pursues independent, multidisciplinary research in basic and applied molecular biology, immunology, retrovirology, cancer biology, and human genetics. Research efforts and support are an integral part of the Center for Cancer Research (CCR) at the Frederick National Laboratory for Cancer Research (FNLCR). The Cancer & Inflammation Program (CIP), Basic Science Program, HLA Immunogenetics Section, under the leadership of Dr. Mary Carrington, studies the influence of human leukocyte antigens (HLA) and specific KIR/HLA genotypes on risk of and outcomes to infection, cancer, autoimmune disease, and maternal-fetal disease. Recent studies have focused on the impact of HLA gene expression in disease, the molecular mechanism regulating expression levels, and the functional basis for the effect of differential expression on disease outcome. The lab’s further focus is on the genetic basis for resistance/susceptibility to disease conferred by immunogenetic variation.
KEY ROLES/RESPONSIBILITIES
The Senior Computational Scientist will provide research support to the CIP-BSP-HLA Immunogenetics Section, performing bio-statistical design, analysis and reporting of research projects conducted in the lab. This individual will be involved in the implementation of statistical models and data preparation. The successful candidate should have:
- 5 or more years of competent, innovative biostatistics/bioinformatics research experience beyond doctoral training
- Considerable experience with statistical software, such as SAS, R and S-Plus
- Sound knowledge and demonstrated experience of theoretical and applied statistics
- The ability to write program code to analyze data using statistical analysis software
- The ability to contribute to the interpretation and publication of research results
[Bayesian statistics in medicine -- part II: main applications and inference].
Montomoli, C; Nichelatti, M
2008-01-01
Bayesian statistics is not only used when one is dealing with 2-way tables, but it can be used for inferential purposes. Using the basic concepts presented in the first part, this paper aims to give a simple overview of Bayesian methods by introducing its foundation (Bayes' theorem) and then applying this rule to a very simple practical example; whenever possible, the elementary processes at the basis of analysis are compared to those of frequentist (classical) statistical analysis. The Bayesian reasoning is naturally connected to medical activity, since it appears to be quite similar to a diagnostic process.
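The diagnostic analogy can be made concrete with Bayes' theorem for a test of given sensitivity and specificity; the prevalence and test characteristics below are illustrative numbers, not the paper's example.

```python
def posterior_disease(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    return sensitivity * prior / p_positive

# 2% prevalence, 90% sensitivity, 95% specificity:
print(posterior_disease(0.02, 0.90, 0.95))  # ~0.27
```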
Dissecting the genetics of complex traits using summary association statistics.
Pasaniuc, Bogdan; Price, Alkes L
2017-02-01
During the past decade, genome-wide association studies (GWAS) have been used to successfully identify tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyse summary association statistics. Here, we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases.
[The concept "a case in outpatient treatment" in military policlinic activity].
Vinogradov, S N; Vorob'ev, E G; Shklovskiĭ, B L
2014-04-01
The article substantiates the necessity of transitioning military polyclinics to a system of recording and evaluating their activity on the basis of completed cases of outpatient treatment. Only automation of medical-statistical data processing can solve this problem. On the basis of an analysis of the literature, the requirements of the guidance documents and observational results, the author concludes that the existing concepts of medical statistics should first be revised (formalized) from the standpoint of the information environment in use, namely electronic databases. In this context, the main features of the outpatient treatment case as a unit of medical-statistical record are specified, and its definition is formulated.
A Framework for Assessing High School Students' Statistical Reasoning.
Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang
2016-01-01
Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.
NASA Technical Reports Server (NTRS)
Holland, S. Douglas (Inventor); Steele, Glen F. (Inventor); Romero, Denise M. (Inventor); Koudelka, Robert David (Inventor)
2008-01-01
A data multiplexer that accommodates both industry-standard CCSDS data packets and bit streams and standard IEEE 1394 data is described. The multiplexer provides a statistical allotment of bandwidth to its input channels, preferably four in number but expandable in increments of four up to sixteen. A microcontroller determines the bandwidth requested by the plurality of channels, as well as the bandwidth available, and meters out the available bandwidth on a statistical basis, employing flow control on the input channels.
Atmospheric Visibility Monitoring for planetary optical communications
NASA Technical Reports Server (NTRS)
Cowles, Kelly
1991-01-01
The Atmospheric Visibility Monitoring project endeavors to improve current atmospheric models and generate visibility statistics relevant to prospective earth-satellite optical communications systems. Three autonomous observatories are being used to measure atmospheric conditions on the basis of observed starlight; these data will yield clear-sky and transmission statistics for three sites with high clear-sky probabilities. Ground-based data will be compared with satellite imagery to determine the correlation between satellite data and ground-based observations.
Frequency and longitudinal trends of household care product use
NASA Astrophysics Data System (ADS)
Moran, Rebecca E.; Bennett, Deborah H.; Tancredi, Daniel J.; Wu, Xiangmei (May); Ritz, Beate; Hertz-Picciotto, Irva
2012-08-01
The use of household cleaning products and air fresheners exposes people to a variety of chemicals, including some that have been shown to be irritants, potential carcinogens and endocrine disrupting compounds. In addition, some react with ambient ozone infiltrating to the indoor environment to form potentially toxic secondary pollutants. Although realistic estimates of usage patterns are necessary for modeling potential exposures in risk assessments, few studies have documented cleaning habits and product usage to characterize how they vary between households and over time. In addition, understanding within-household temporal variability of use is important to assess the reliability of exposure questionnaires used in epidemiological surveys and improve the cost-efficiency of data collection. In the SUPERB (Study of Use of Products and Exposure-Related Behavior) study, frequencies of use of eight types of household cleaning products and air fresheners and the performance of different types of cleaning tasks are collected in three annual telephone and six quarterly web-based surveys. All-purpose and glass cleaners were the products most frequently used among all products surveyed. Use frequencies differed by demographic and other household characteristics for some products. Product usage was internally consistent, with over 75% of pairwise cross-sectional correlations between product types statistically significantly different from zero. In addition, each product type was correlated with at least one cleaning habit. Frequency of cleaning product use and performing cleaning tasks did not vary by season. An examination of intra-household variability showed moderately to highly consistent usage patterns over time, with lower temporal consistency observed among products used more frequently, such as all-purpose cleaners. Frequency of household care product usage was consistent enough that in epidemiologic studies, participants can be classified, for example, into three categories on the basis of a single assessment, with only minimal misclassification.
Transportation Safety Information Report : Second Quarter 1984
DOT National Transportation Integrated Search
1984-01-01
The "Transportation Safety Information Report" is a compendium of selected national-level transportation safety statistics for all modes of transportation. The report presents and compares data on a monthly and quarterly basis for transportation fata...
Transportation Safety Information Report : Second Quarter 1985
DOT National Transportation Integrated Search
1985-10-01
The "Transportation Safety Information Report" is a compendium of selected national-level transportation safety statistics for all modes of transportation. The report presents and compares data on a monthly and quarterly basis for transportation fata...
Clayson, Peter E; Miller, Gregory A
2017-01-01
Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
Ortega, A O L; Dos Santos, M T B R; Mendes, F M; Ciamponi, A L
2014-09-01
The relation between teeth-grinding and the use of drugs acting on the central nervous system of cerebral palsy (CP) patients has not yet been described. The aim of this research was to evaluate the presence or absence of teeth-grinding (during sleep and/or awake periods) in normal and in CP children and adolescents, as well as the association between teeth-grinding and the use of anticonvulsant drugs. The sample consisted of 207 children and adolescents, divided into three groups: G1, individuals with CP who did not take anticonvulsant drugs; G2, individuals with CP who were administered such medications on a regular basis; and CG, normal individuals. Logistic regression analyses were performed to evaluate the association of teeth-grinding with selected variables. No statistically significant differences were observed regarding the presence or absence of teeth-grinding when G1 and G2 were compared. However, compared with the CG, a statistically significant difference was found, with the CG showing fewer children presenting teeth-grinding (P < 0.001). Among those children/adolescents prescribed drug therapy, the barbiturate group showed the greatest frequency of teeth-grinding. CP children and adolescents show a significantly greater presence of teeth-grinding compared with normal individuals, and subjects taking barbiturates showed a greater presence of teeth-grinding than those taking other groups of anticonvulsant drugs. © 2014 John Wiley & Sons Ltd.
Mezzasalma, Stefano A
2007-03-15
The theoretical basis of a recent theory of Brownian relativity for polymer solutions is deepened and reexamined. After the problem of relative diffusion in polymer solutions is addressed, its two postulates are formulated in all generality. The former builds a statistical equivalence between (uncorrelated) timelike and shapelike reference frames, that is, among dynamical trajectories of liquid molecules and static configurations of polymer chains. The latter defines the "diffusive horizon" as the invariant quantity to work with in the special version of the theory. Particularly, the concept of universality in polymer physics corresponds in Brownian relativity to that of covariance in the Einstein formulation. Here, a "universal" law consists of a privileged observation, performed from the laboratory rest frame and agreeing with any diffusive reference system. From the joint lack of covariance and simultaneity implied by the Brownian Lorentz-Poincaré transforms, a relative uncertainty arises, in a certain analogy with quantum mechanics. It is driven by the difference between local diffusion coefficients in the liquid solution. The same transformation class can be used to infer Fick's second law of diffusion, playing here the role of a gauge invariance preserving covariance of the spacetime increments. An overall, noteworthy conclusion emerging from this view concerns the statistics of (i) static macromolecular configurations and (ii) the motion of liquid molecules, which would be much more related than expected.
Ohtsuka, Masahiro; Muto, Shunsuke; Tatsumi, Kazuyoshi; Kobayashi, Yoshinori; Kawata, Tsunehiro
2016-04-01
The occupation sites and the occupancies of trace dopants in La/Co co-doped Sr-M-type ferrite, SrFe12O19, were quantitatively and precisely determined by beam-rocking energy-dispersive X-ray spectroscopy (EDXS) on the basis of electron-channeling effects. Because the Co atoms, in particular, should be partially substituted for the five crystallographically inequivalent sites, which could be key parameters in improving the magneto-crystalline anisotropy, it is difficult yet intriguing to discover their occupation sites and occupancies without using the methods of large-scale facilities, such as neutron diffraction and synchrotron radiation. In the present study, we tackled this problem by applying an extended statistical atom location by channeling enhanced microanalysis method, using conventional transmission electron microscopy, EDXS and dynamical electron elastic/inelastic scattering theories. The results show that the key occupation sites of Co were the 2a, 4f1 and 12k sites. The quantified occupancies of Co were consistent with those of the previous study, which involved a combination of neutron diffraction and extended X-ray absorption fine structure analysis, as well as energetics considerations based on first-principles calculations. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering, as process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is particularly important in continuous hot-dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed process variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures, and the bath level. The entire data set, consisting of 48 galvanized steel coils, was divided into two subsets: a training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications, the dependent variable is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
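The model class described here is easy to reproduce. Below is a minimal sketch, in Python with scikit-learn, of fitting a logistic regression to classify coils as conforming or nonconforming from process variables; the synthetic data, feature layout and seed are hypothetical stand-ins, not the study's measurements.

```python
# Illustrative sketch: logistic regression for coil quality classification.
# Features and data are synthetic placeholders, not the paper's data set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 48                                  # total coils reported in the abstract
X = rng.normal(size=(n, 6))             # e.g. strip velocity, bath temperatures, bath level
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = nonconforming

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("P(nonconforming):", model.predict_proba(X_te)[:5, 1])
print("accuracy:", model.score(X_te, y_te))
```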
Status and trends of land change in the United States--1973 to 2000
2012-01-01
U.S. Geological Survey (USGS) Professional Paper 1794 is a four-volume series on the status and trends of the Nation’s land use and land cover, providing an assessment of the rates and causes of land-use and land-cover change in the United States between 1973 and 2000. Volumes A, B, C, and D provide analyses for the Western United States, the Great Plains, the Midwest–South Central United States, and the Eastern United States, respectively. The assessments of land-use and land-cover trends are conducted on an ecoregion-by-ecoregion basis, and each ecoregion assessment is guided by a nationally consistent study design that includes mapping, statistical methods, field studies, and analysis. Individual assessments provide a picture of the characteristics of land change occurring in a given ecoregion; in combination, they provide a framework for understanding the complex national mosaic of change and also the causes and consequences of change. Thus, each volume in this series provides a regional assessment of how (and how fast) land use and land cover are changing, and why. The four volumes together form the first comprehensive picture of land change across the Nation. This report is only one of the products produced by USGS on land-use and land-cover change in the United States. Other reports and land-cover statistics are available online at http://landcovertrends.usgs.gov.
Martín, Josune; Torre, Fernando; Padierna, Angel; Aguirre, Urko; González, Nerea; Matellanes, Begoña; Quintana, José M
2014-11-01
To assess whether an interdisciplinary intervention is more effective than usual care for improving the health-related quality of life (HRQoL) among patients with fibromyalgia (FM), and to identify variables that were predictors of improvement in HRQoL. In a randomized controlled clinical trial carried out on an outpatient basis in a hospital pain management unit, 153 patients with FM were randomly allocated to an experimental group (EG) or a control group (CG). Participants completed the Fibromyalgia Impact Questionnaire (FIQ) at baseline and 6 months after the intervention. The EG received an interdisciplinary treatment (12 sessions for 6 weeks) which consisted of coordinated psychological, medical, educational, and physiotherapeutic interventions while the CG received standard-of-care pharmacologic treatment. Descriptive statistics, ANOVA, Chi square and Fisher tests and generalized linear models were used for data analysis. Six months after the intervention, statistically significant improvements in HRQoL were observed in physical functioning (P = 0.01), pain (P = 0.03) and total FIQ score (P = 0.04) in the EG compared to the CG. The number of physical illnesses was identified as a predictor for improvement. This interdisciplinary intervention has shown effectiveness in improving the HRQoL of this sample of patients with FM. The number of physical illnesses was identified as a predictor of that improvement. © 2013 World Institute of Pain.
Sex determination by three-dimensional geometric morphometrics of craniofacial form.
Chovalopoulou, Maria-Eleni; Valakos, Efstratios D; Manolis, Sotiris K
The purpose of the present study is to define which regions of the cranium (the upper face, the orbits, and the nasal region) are the most sexually dimorphic, using three-dimensional geometric morphometric methods, and to investigate the effectiveness of this method in determining sex from the shape of these regions. The study sample consisted of 176 crania of known sex (94 males, 82 females) belonging to individuals who lived in Greece during the 20th century. The three-dimensional co-ordinates of 31 ecto-cranial landmarks were digitized using a MicroScribe 3DX contact digitizer. Goodall's F-test was performed to compare statistical differences in shape between males and females. Generalized Procrustes Analysis (GPA) was used to obtain size and shape variables for statistical analysis. Shape, size, and form analyses were carried out by logistic regression and discriminant function analysis. The results indicate that there are shape differences between the sexes in the upper face and the orbits. The highest shape classification rate was obtained from the upper-face region. The centroid size of the craniofacial and orbital regions was smaller in females than in males. Moreover, size was found to be significant for sexual dimorphism in the upper-face region. As anticipated, classification accuracy improves when both size and shape are combined. The findings presented here constitute a firm basis upon which further research can be conducted.
NASA Astrophysics Data System (ADS)
Youngman, M.; Weil, C.; Salisbury, T.; Villarreal, C.
2015-12-01
The U.S. National Geodetic Survey is collecting airborne gravity with the Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project to produce a geoid supporting heights accurate to 2 centimeters, where possible, with a modernized U.S. vertical datum in 2022. Targeting 15.6 million square kilometers, the GRAV-D project is unprecedented in its scope of consistently collected airborne gravity data across the entire U.S. and its holdings. Currently over 42% of data collection has been completed by 42 surveys (field campaigns) covering 34 completed blocks (data collection areas). The large amount of data available offers a unique opportunity to evaluate the causes of data quality variation from survey to survey. Two metrics were chosen to use as a basis for comparing the quality of each survey/block: 1. total crossover error (i.e. difference in gravity recorded at all locations of crossing flight lines) and 2. the statistical difference of the airborne gravity from the EGM2008 global model. We have determined that the aircraft used for surveying contributes significantly to the variation in data quality. This paper will further expand upon that recent work, using statistical analysis to determine the contribution of aircraft selection to data quality taking into account other variables such as differences in survey setup or weather conditions during surveying.
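As a rough illustration of the first metric, a crossover difference is simply the disagreement in recorded gravity where two flight lines intersect. The sketch below, with hypothetical values in mGal and an assumed pre-matched input format, computes the differences and their RMS.

```python
# Minimal sketch of the crossover-error metric: difference the gravity
# recorded by the two lines at each intersection, then summarize.
import numpy as np

# each row: (gravity on line A, gravity on line B) at one crossover point, in mGal
crossovers = np.array([
    [980123.4, 980121.9],
    [980098.7, 980100.2],
    [980150.1, 980149.5],
])

diffs = crossovers[:, 0] - crossovers[:, 1]
rms = np.sqrt(np.mean(diffs ** 2))
print(f"crossover differences (mGal): {diffs}")
print(f"RMS crossover error: {rms:.2f} mGal")
```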
10 CFR Appendix C to Part 73 - Nuclear Power Plant Safeguards Contingency Plans
Code of Federal Regulations, 2013 CFR
2013-01-01
... command and delegation of authority as these apply to safeguards contingencies. b. Physical Layout—(i..., up to and including the design basis threat of radiological sabotage. The goals of licensee... general description of how the response is organized. a. Perceived Danger—Consistent with the design basis...
16 CFR 306.5 - Automotive fuel rating.
Code of Federal Regulations, 2011 CFR
2011-01-01
... fuels other than biodiesel blends and biomass-based diesel blends, you must possess a reasonable basis... alternative liquid automotive fuel that you must disclose. In the case of biodiesel blends, you must possess a reasonable basis, consisting of competent and reliable evidence, for the percentage of biodiesel contained in...
16 CFR 306.5 - Automotive fuel rating.
Code of Federal Regulations, 2010 CFR
2010-01-01
... fuels other than biodiesel blends and biomass-based diesel blends, you must possess a reasonable basis... the fuel, and in the case of biomass-based diesel blends, you must possess a reasonable basis, consisting of competent and reliable evidence, for the percentage of biomass-based diesel contained in the...
16 CFR 260.5 - Interpretation and substantiation of environmental marketing claims.
Code of Federal Regulations, 2011 CFR
2011-01-01
... reasonable basis substantiating the claim. A reasonable basis consists of competent and reliable evidence. In... reliable scientific evidence, defined as tests, analyses, research, studies or other evidence based on the... qualified to do so, using procedures generally accepted in the profession to yield accurate and reliable...
Evaluating the consistency of gene sets used in the analysis of bacterial gene expression data.
Tintle, Nathan L; Sitarik, Alexandra; Boerema, Benjamin; Young, Kylie; Best, Aaron A; Dejongh, Matthew
2012-08-08
Statistical analyses of whole genome expression data require functional information about genes in order to yield meaningful biological conclusions. The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) are common sources of functionally grouped gene sets. For bacteria, the SEED and MicrobesOnline provide alternative, complementary sources of gene sets. To date, no comprehensive evaluation of the data obtained from these resources has been performed. We define a series of gene set consistency metrics directly related to the most common classes of statistical analyses for gene expression data, and then perform a comprehensive analysis of 3581 Affymetrix® gene expression arrays across 17 diverse bacteria. We find that gene sets obtained from GO and KEGG demonstrate lower consistency than those obtained from the SEED and MicrobesOnline, regardless of gene set size. Despite the widespread use of GO and KEGG gene sets in bacterial gene expression data analysis, the SEED and MicrobesOnline provide more consistent sets for a wide variety of statistical analyses. Increased use of the SEED and MicrobesOnline gene sets in the analysis of bacterial gene expression data may improve statistical power and utility of expression data.
The Chinese version of the Outcome Expectations for Exercise scale: validation study.
Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger
2011-06-01
Estimates of the reliability and validity of the English nine-item Outcome Expectations for Exercise (OEE) scale have been tested and found to be valid for use in various settings, particularly among older people, with good internal consistency and validity. Data on the use of the OEE scale among older Chinese people living in the community and how cultural differences might affect the administration of the OEE scale are limited. To test the validity and reliability of the Chinese version of the Outcome Expectations for Exercise scale among older people. A cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency for the overall scale and the squared multiple correlation coefficient for the single item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate if there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). There was acceptable internal consistency (alpha=.85) and model fit in the scale. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). An analysis of the Mokken Scaling Procedure found that nine items of the scale were all retained in the analysis and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provided acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. Future testing of the OEE-C scale needs to be carried out to see whether these results are generalisable to older Chinese people living in urban areas. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1992-01-01
This technique for electronically rotating the polarization basis of an orthogonal-linear polarization radiometer is based on the measurement of the first three feedhorn Stokes parameters, along with the subsequent transformation of this measured Stokes vector into a rotated coordinate frame. The technique requires an accurate measurement of the cross-correlation between the two orthogonal feedhorn modes, for which an innovative polarized calibration load was developed. The experimental portion of this investigation consisted of a proof-of-concept demonstration of the technique of electronic polarization basis rotation (EPBR) using a ground-based 90-GHz dual orthogonal-linear polarization radiometer. Practical calibration algorithms for ground-, aircraft-, and space-based instruments were identified and tested. The theoretical effort consisted of radiative transfer modeling using the planar-stratified numerical model described in Gasiewski and Staelin (1990).
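The coordinate transformation underlying EPBR is the textbook rotation of the linear Stokes parameters. A minimal sketch (not the instrument's calibration code): rotating the basis by an angle theta leaves I unchanged and mixes Q and U through the angle 2·theta.

```python
# Standard rotation of the first three Stokes parameters (I, Q, U)
# into a polarization basis rotated by theta radians.
import numpy as np

def rotate_stokes(I, Q, U, theta_rad):
    """Rotate the linear-polarization basis by theta (radians)."""
    c, s = np.cos(2 * theta_rad), np.sin(2 * theta_rad)
    return I, Q * c + U * s, -Q * s + U * c

I, Q, U = 100.0, 12.0, -3.0   # brightness temperatures; hypothetical values
print(rotate_stokes(I, Q, U, np.deg2rad(30.0)))
```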
Incidence of Osteoporosis in Patients with Urolithiasis
Bijelic, Radojka; Milicevic, Snjezana; Balaban, Jagoda
2014-01-01
ABSTRACT Introduction. Clinical research has shown increased bone disintegration and lower bone mass in patients with calcium urolithiasis. Goal. The goal of our research was to establish the incidence of osteoporosis in adult patients with calcium urolithiasis, on the basis of mineral bone density measured using the DEXA method, with particular attention to age subgroups. Material and methods. This prospective clinical study was implemented at the University Clinical Center of Banja Luka, at the Clinic for Endocrinology, Diabetes and Metabolic Diseases and at the Urology Clinic. The material consisted of patients divided into two groups, a working and a control group. One hundred and twenty (120) patients were included in both these groups, divided into three age subgroups: 20-40, 40-60, and over 60 years. The working group consisted of patients with calcium urolithiasis and the control group of patients without calcium urolithiasis. Mineral bone density was measured at lumbar spine vertebrae L2-L4 and the hip for patients in both groups, using the DEXA method. Results. Analysis of mineral bone density by age subgroup showed that patients of the working group over 60 had decreased mineral bone density (30% osteopenia and 15% osteoporosis) significantly more often than the other two age subgroups (12.5% in the subgroup 20-40 and 17.5% in the subgroup 40-60), a statistically significant difference (p<0.05). In the control group, osteopenia and osteoporosis were found in 37.5% and 2.5%, respectively, of patients over 60, whereas in the youngest subgroup 5% osteopenia was found, also a statistically significant difference (p<0.05). Comparing the total samples of the working and control groups, there was a statistically significant difference (p<0.01); the incidence of osteoporosis was 7.5% in the working group and 0.8% in the control group. Conclusion. Urolithiasis and osteoporosis are two multifactorial diseases that are evidently reciprocal. We therefore suggest that educating the population about the risk factors for these diseases, as well as about preventive measures that may contribute to their decrease, should begin as early as possible. PMID:25568567
Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin
2011-06-07
The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics
Consistency of extreme flood estimation approaches
NASA Astrophysics Data System (ADS)
Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf
2017-04-01
Estimations of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested on two different Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of these methods.
A New Potential Energy Surface for N+O2: Is There an NOO Minimum?
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1995-01-01
We report a new calculation of the N+O2 potential energy surface using complete active space self-consistent field internally contracted configuration interaction with the Dunning correlation consistent basis sets. The peroxy isomer of NO2 is found to be a very shallow minimum separated from NO+O by a barrier of only 0.3 kcal/mol (excluding zero-point effects). The entrance channel barrier height is estimated to be 8.6 kcal/mol for ICCI+Q calculations correlating all but the O 1s and N 1s electrons with a cc-pVQZ basis set.
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
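A toy sketch of the pooling idea follows: generate several data sets consistent with a reported mean and error bar, infer the parameter from each with simple rejection ABC, and pool the posterior samples. The Gaussian data model, flat prior, and tolerance are illustrative assumptions; the authors' maximum-entropy construction is not reproduced here.

```python
# Toy illustration of pooling posteriors over data sets consistent with
# reported summary statistics (a mean and the error bar of the mean).
import numpy as np

rng = np.random.default_rng(1)
reported_mean, reported_err, n_obs = 2.0, 0.3, 20

pooled = []
for _ in range(10):                                  # 10 consistent data sets
    data = rng.normal(reported_mean, reported_err * np.sqrt(n_obs), n_obs)
    prior = rng.uniform(0, 5, 20000)                 # flat prior on the mean
    sims = rng.normal(prior[:, None], 1.0, (20000, n_obs))
    accept = np.abs(sims.mean(axis=1) - data.mean()) < 0.05  # rejection ABC
    pooled.append(prior[accept])

samples = np.concatenate(pooled)                     # pooled, averaged posterior
print(f"posterior mean ~ {samples.mean():.2f} +/- {samples.std():.2f}")
```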
Statistical methods used in articles published by the Journal of Periodontal and Implant Science.
Choi, Eunsil; Lyu, Jiyoung; Park, Jinyoung; Kim, Hae-Young
2014-12-01
The purposes of this study were to assess the trend of use of statistical methods including parametric and nonparametric methods and to evaluate the use of complex statistical methodology in recent periodontal studies. This study analyzed 123 articles published in the Journal of Periodontal & Implant Science (JPIS) between 2010 and 2014. Frequencies and percentages were calculated according to the number of statistical methods used, the type of statistical method applied, and the type of statistical software used. Most of the published articles considered (64.4%) used statistical methods. Since 2011, the percentage of JPIS articles using statistics has increased. On the basis of multiple counting, we found that the percentage of studies in JPIS using parametric methods was 61.1%. Further, complex statistical methods were applied in only 6 of the published studies (5.0%), and nonparametric statistical methods were applied in 77 of the published studies (38.9% of a total of 198 studies considered). We found an increasing trend towards the application of statistical methods and nonparametric methods in recent periodontal studies and thus, concluded that increased use of complex statistical methodology might be preferred by the researchers in the fields of study covered by JPIS.
GAMA/H-ATLAS: The Dust Opacity-Stellar Mass Surface Density Relation for Spiral Galaxies
NASA Astrophysics Data System (ADS)
Grootes, M. W.; Tuffs, R. J.; Popescu, C. C.; Pastrav, B.; Andrae, E.; Gunawardhana, M.; Kelvin, L. S.; Liske, J.; Seibert, M.; Taylor, E. N.; Graham, Alister W.; Baes, M.; Baldry, I. K.; Bourne, N.; Brough, S.; Cooray, A.; Dariush, A.; De Zotti, G.; Driver, S. P.; Dunne, L.; Gomez, H.; Hopkins, A. M.; Hopwood, R.; Jarvis, M.; Loveday, J.; Maddox, S.; Madore, B. F.; Michałowski, M. J.; Norberg, P.; Parkinson, H. R.; Prescott, M.; Robotham, A. S. G.; Smith, D. J. B.; Thomas, D.; Valiante, E.
2013-03-01
We report the discovery of a well-defined correlation between the B-band face-on central optical depth due to dust, τ_B^f, and the stellar mass surface density, μ*, of nearby (z <= 0.13) spiral galaxies: log(τ_B^f) = 1.12(±0.11) · log(μ*/(M⊙ kpc⁻²)) - 8.6(±0.8). This relation was derived from a sample of spiral galaxies taken from the Galaxy and Mass Assembly (GAMA) survey, which were detected in the FIR/submillimeter (submm) in the Herschel-ATLAS science demonstration phase field. Using a quantitative analysis of the NUV attenuation-inclination relation for complete samples of GAMA spirals categorized according to stellar mass surface density, we demonstrate that this correlation can be used to statistically correct for dust attenuation purely on the basis of optical photometry and Sérsic-profile morphological fits. Considered together with previously established empirical relationships of stellar mass to metallicity and gas mass, the near linearity and high constant of proportionality of the τ_B^f-μ* relation disfavor a stellar origin for the bulk of refractory grains in spiral galaxies, instead being consistent with the existence of a ubiquitous and very rapid mechanism for the growth of dust in the interstellar medium. We use the τ_B^f-μ* relation in conjunction with the radiation transfer model for spiral galaxies of Popescu & Tuffs to derive intrinsic scaling relations between specific star formation rate (SFR), stellar mass, and stellar surface density, in which attenuation of the UV light used for the measurement of SFR is corrected on an object-to-object basis. A marked reduction in scatter in these relations is achieved, which we demonstrate is due to correction of both the inclination-dependent and face-on components of attenuation. Our results are consistent with a general picture of spiral galaxies in which most of the submm emission originates from grains residing in translucent structures, exposed to UV in the diffuse interstellar radiation field.
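For a concrete sense of the quoted relation, the short sketch below evaluates the central coefficients (ignoring the quoted uncertainties) at an arbitrary, illustrative surface density.

```python
# Worked use of the quoted tau-mu relation; the example surface density
# is a hypothetical, typical disk value chosen only for illustration.
import math

def tau_B_face_on(mu_star_msun_per_kpc2):
    return 10 ** (1.12 * math.log10(mu_star_msun_per_kpc2) - 8.6)

mu_star = 3e7  # M_sun kpc^-2, illustrative
print(f"tau_B^f ~ {tau_B_face_on(mu_star):.2f}")  # ~0.59
```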
Stroup, Caleb N.; Welhan, John A.; Davis, Linda C.
2008-01-01
The statistical stationarity of distributions of sedimentary interbed thicknesses within the southwestern part of the Idaho National Laboratory (INL) was evaluated within the stratigraphic framework of Quaternary sediments and basalts at the INL site, eastern Snake River Plain, Idaho. The thicknesses of 122 sedimentary interbeds observed in 11 coreholes were documented from lithologic logs and independently inferred from natural-gamma logs. Lithologic information was grouped into composite time-stratigraphic units based on correlations with existing composite-unit stratigraphy near these holes. The assignment of lithologic units to an existing chronostratigraphy on the basis of nearby composite stratigraphic units may introduce error where correlations with nearby holes are ambiguous or the distance between holes is great, but we consider this the best technique for grouping stratigraphic information in this geologic environment at this time. Nonparametric tests of similarity were used to evaluate temporal and spatial stationarity in the distributions of sediment thickness. The following statistical tests were applied to the data: (1) the Kolmogorov-Smirnov (K-S) two-sample test to compare distribution shape, (2) the Mann-Whitney (M-W) test for similarity of two medians, (3) the Kruskal-Wallis (K-W) test for similarity of multiple medians, and (4) Levene's (L) test for the similarity of two variances. Results of these analyses corroborate previous work that concluded the thickness distributions of Quaternary sedimentary interbeds are locally stationary in space and time. The data set used in this study was relatively small, so the results presented should be considered preliminary, pending incorporation of data from more coreholes. Statistical tests also demonstrated that natural-gamma logs consistently fail to detect interbeds less than about 2-3 ft thick, although these interbeds are observable in lithologic logs. This should be taken into consideration when modeling aquifer lithology or hydraulic properties based on lithology.
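All four similarity tests used in the study are available in scipy.stats. A minimal sketch on two hypothetical lognormal samples of interbed thicknesses (the distributional choice and sample sizes are illustrative only):

```python
# The four nonparametric similarity tests named in the study, applied to
# two synthetic samples of interbed thicknesses (feet).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
unit_a = rng.lognormal(mean=1.0, sigma=0.6, size=40)   # thicknesses, unit A
unit_b = rng.lognormal(mean=1.1, sigma=0.6, size=35)   # thicknesses, unit B

print("K-S:", stats.ks_2samp(unit_a, unit_b))          # distribution shape
print("M-W:", stats.mannwhitneyu(unit_a, unit_b))      # two medians
print("K-W:", stats.kruskal(unit_a, unit_b))           # multiple medians
print("Levene:", stats.levene(unit_a, unit_b))         # two variances
```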
Crashworthiness evaluation of light rail vehicle interiors.
DOT National Transportation Integrated Search
2011-12-01
Statistically, light rail transit (LRT) systems have higher injury rates on a per-passenger-mile basis than heavy rail and commuter rail systems, because in most cities, light rail vehicles (LRVs) operate on city streets. Passenger safety is depe...
20 CFR 609.14 - Payments to States.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., either in advance or by way of reimbursement, as may be determined by the Department, the sum that the.... An estimate may be made on the basis of a statistical, sampling, or other method agreed on by the...
20 CFR 614.15 - Payments to States.
Code of Federal Regulations, 2010 CFR
2010-04-01
... State shall be paid, either in advance or by way of reimbursement, as may be determined by the... been paid to the State. An estimate may be made on the basis of a statistical, sampling, or other...
42 CFR 412.22 - Excluded hospitals and hospital units: General rules.
Code of Federal Regulations, 2010 CFR
2010-10-01
... must meet the governance and control requirements at paragraphs (e)(1)(i) through (e)(1)(iv) of this... allocates costs and maintains adequate statistical data to support the basis of allocation. (G) It reports...
Fault detection and diagnosis using neural network approaches
NASA Technical Reports Server (NTRS)
Kramer, Mark A.
1992-01-01
Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
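A minimal sketch of the second approach: estimate the density of observables under normal operation with Gaussian radial basis functions centered on the training points, then flag new readings whose density falls below a threshold. The data, kernel width, and threshold are all hypothetical.

```python
# Sketch of normal-mode statistical characterization with a Gaussian
# radial-basis-function density estimate; low density -> possible fault.
import numpy as np

rng = np.random.default_rng(3)
normal_data = rng.normal(0, 1, size=(500, 3))      # observables in normal mode

def rbf_density(x, centers, width=0.5):
    d2 = ((centers - x) ** 2).sum(axis=1)          # squared distances to centers
    return np.exp(-d2 / (2 * width ** 2)).mean()   # mean Gaussian kernel response

threshold = 1e-3                                   # would be tuned on held-out normal data
new_sample = np.array([4.0, -3.5, 0.2])            # an abnormal-looking reading
score = rbf_density(new_sample, normal_data)
print("density score:", score, "-> fault!" if score < threshold else "-> normal")
```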
NASA Astrophysics Data System (ADS)
Petrov, A. I.; Petrova, D. A.
2017-10-01
This article considers one of the topical problems of road safety management at the federal level: the heterogeneity of road traffic accident rates across Russian cities. Actual statistical data on road traffic accident rates in the administrative centers of Russia are analyzed. Histograms of the distributions of the two most important accident characteristics recorded in 2016 in the administrative centers of Russia, Social Risk (HR) and the Severity Rate of Road Accidents, are presented. On the basis of a regression model of the statistical connection between the Severity Rate of Road Accidents and Social Risk (HR), a classification of Russian cities by level of actual road traffic accident rate was developed. On the basis of this classification, a differentiated system of priority methods for organizing the safe functioning of transport systems in the cities of Russia is proposed.
Burr, Tom; Hamada, Michael S.; Ticknor, Larry; ...
2015-01-01
The aim of nuclear safeguards is to ensure that special nuclear material is used for peaceful purposes. Historically, nuclear material accounting (NMA) has provided the quantitative basis for monitoring for nuclear material loss or diversion, and process monitoring (PM) data is collected by the operator to monitor the process. PM data typically support NMA in various ways, often by providing a basis to estimate some of the in-process nuclear material inventory. We develop options for combining PM residuals and NMA residuals (residual = measurement - prediction), using a hybrid of period-driven and data-driven hypothesis testing. The modified statistical tests can be used on time series of NMA residuals (the NMA residual is the familiar material balance), or on a combination of PM and NMA residuals. The PM residuals can be generated on a fixed time schedule or as events occur.
Ganymede - A relationship between thermal history and crater statistics
NASA Technical Reports Server (NTRS)
Phillips, R. J.; Malin, M. C.
1980-01-01
An approach for factoring the effects of a planetary thermal history into a predicted set of crater statistics for an icy satellite is developed and forms the basis for subsequent data inversion studies. The key parameter is a thermal evolution-dependent critical time; craters of a particular size that formed earlier than this time do not contribute to present-day statistics. An example is given for the satellite Ganymede, and the effect of the thermal history is easily seen in the resulting predicted crater statistics. A preliminary comparison with the data, subject to the uncertainties in ice rheology and impact flux history, suggests a surface age of 3.8 × 10^9 years and a radionuclide abundance of 0.3 times the chondritic value.
Statistical Policy Working Paper 25. Data Editing Workshop and Exposition
DOT National Transportation Integrated Search
1996-12-01
Statistical Policy Working Paper 25 is the written record of the Data Editing Workshop and Exposition held March 22, 1996, at the Bureau of Labor Statistics (BLS) Conference and Training Center. The program consisted of 44 oral presentations and 19 s...
Experimental statistics for biological sciences.
Bang, Heejung; Davidian, Marie
2010-01-01
In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression," which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in statistical sciences. We hope that from this chapter, readers would understand the key statistical concepts and terminologies, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inference), and how to interpret the results. This text would be most useful if it is used as a supplemental material, while the readers take their own statistical courses or it would serve as a great reference text associated with a manual for any statistical software as a self-teaching guide.
Serum MMP 2 and TIMP 2 in patients with inguinal hernias.
Smigielski, Jacek; Brocki, Marian; Kuzdak, Krzysztof; Kołomecki, Krzysztof
2011-06-01
More than sixty thousand inguinal hernia operations are performed every year in Poland. Despite many years of related research, the exact pathologic mechanism of this condition is still not fully understood. Recent studies suggested a pronounced relationship between the molecular structure of collagen fibers and the activity of metalloproteinases, the enzymes taking part in the degradation of collagen, as well as their tissue inhibitors. A prospective study was established to measure serum levels of matrix metalloproteinase 2 (MMP-2) and matrix metalloproteinase tissue inhibitor 2 (TIMP-2) in 150 males between the ages of 26 and 70. The control group (CG) consisted of thirty healthy male volunteers of a similar age distribution. Our results indicate that MMP-2 was highest in the direct hernia group, a statistically very significant elevation (P<0.05) of 1562 ng/mL against the CG value of 684 ng/mL. The highest level of TIMP-2, 78 ng/mL, was found in the group with recurrent hernia, against 49.5 ng/mL in the CG (statistical significance of P<0.05). The MMP-2 and TIMP-2 levels were concurrently elevated only in the recurrent hernia group. Patients with inguinal hernia have a statistically significant increase in serum levels of MMP-2. Our finding that MMP-2 and TIMP-2 were distinctly higher in the patients suffering from recurrence of direct inguinal hernia (reflecting a previous surgical failure) may support the theory that an extracellular matrix defect lies at the basis of this disorder. © 2011 The Authors. European Journal of Clinical Investigation © 2011 Stichting European Society for Clinical Investigation Journal Foundation.
Bradford, Williamson Z.; Fagan, Elizabeth A.; Glaspole, Ian; Glassberg, Marilyn K.; Glasscock, Kenneth F.; King, Talmadge E.; Lancaster, Lisa H.; Nathan, Steven D.; Pereira, Carlos A.; Sahn, Steven A.; Swigris, Jeffrey J.; Noble, Paul W.
2015-01-01
BACKGROUND: FVC outcomes in clinical trials on idiopathic pulmonary fibrosis (IPF) can be substantially influenced by the analytic methodology and the handling of missing data. We conducted a series of sensitivity analyses to assess the robustness of the statistical finding and the stability of the estimate of the magnitude of treatment effect on the primary end point of FVC change in a phase 3 trial evaluating pirfenidone in adults with IPF. METHODS: Source data included all 555 study participants randomized to treatment with pirfenidone or placebo in the Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis (ASCEND) study. Sensitivity analyses were conducted to assess whether alternative statistical tests and methods for handling missing data influenced the observed magnitude of treatment effect on the primary end point of change from baseline to week 52 in FVC. RESULTS: The distribution of FVC change at week 52 was systematically different between the two treatment groups and favored pirfenidone in each analysis. The method used to impute missing data due to death had a marked effect on the magnitude of change in FVC in both treatment groups; however, the magnitude of treatment benefit was generally consistent on a relative basis, with an approximate 50% reduction in FVC decline observed in the pirfenidone group in each analysis. CONCLUSIONS: Our results confirm the robustness of the statistical finding on the primary end point of change in FVC in the ASCEND trial and corroborate the estimated magnitude of the pirfenidone treatment effect in patients with IPF. TRIAL REGISTRY: ClinicalTrials.gov; No.: NCT01366209; URL: www.clinicaltrials.gov PMID:25856121
Predicting Rotator Cuff Tears Using Data Mining and Bayesian Likelihood Ratios
Lu, Hsueh-Yi; Huang, Chen-Yuan; Su, Chwen-Tzeng; Lin, Chen-Chiang
2014-01-01
Objectives Rotator cuff tear is a common cause of shoulder diseases. Correct diagnosis of rotator cuff tears can save patients from further invasive, costly and painful tests. This study used predictive data mining and Bayesian theory to improve the accuracy of diagnosing rotator cuff tears by clinical examination alone. Methods In this retrospective study, 169 patients who had a preliminary diagnosis of rotator cuff tear on the basis of clinical evaluation followed by confirmatory MRI between 2007 and 2011 were identified. MRI was used as a reference standard to classify rotator cuff tears. The predictor variable was the clinical assessment results, which consisted of 16 attributes. This study employed 2 data mining methods (ANN and the decision tree) and a statistical method (logistic regression) to classify the rotator cuff diagnosis into "tear" and "no tear" groups. Likelihood ratio and Bayesian theory were applied to estimate the probability of rotator cuff tears based on the results of the prediction models. Results Our proposed data mining procedures outperformed the classic statistical method. The correct classification rate, sensitivity, specificity and area under the ROC curve for predicting a rotator cuff tear were statistically better in the ANN and decision tree models than in logistic regression. Based on likelihood ratios derived from our prediction models, Fagan's nomogram could be constructed to assess the probability that a patient has a rotator cuff tear using a pretest probability and a prediction result (tear or no tear). Conclusions Our predictive data mining models, combined with likelihood ratios and Bayesian theory, appear to be good tools to classify rotator cuff tears as well as to determine the probability of the presence of the disease, enhancing diagnostic decision making for rotator cuff tears. PMID:24733553
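The Bayesian step behind Fagan's nomogram is compact enough to state directly: convert the pretest probability to odds, multiply by the likelihood ratio attached to the prediction result, and convert back. The likelihood ratio in the sketch is illustrative, not one fitted in the study.

```python
# Post-test probability from pretest probability and a likelihood ratio,
# i.e. the computation Fagan's nomogram performs graphically.
def posttest_probability(pretest_p, likelihood_ratio):
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. pretest probability 0.50 and a hypothetical positive LR of 4.0
print(f"{posttest_probability(0.50, 4.0):.2f}")   # -> 0.80
```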
A new approach to process control using Instability Index
NASA Astrophysics Data System (ADS)
Weintraub, Jeffrey; Warrick, Scott
2016-03-01
The merits of a robust Statistical Process Control (SPC) methodology have long been established. In response to the numerous SPC rule combinations, the number of processes, and the high cost of containment, the Instability Index (ISTAB) is presented as a tool for managing these complexities. ISTAB focuses limited resources on key issues and provides a window into the stability of manufacturing operations. ISTAB takes advantage of the statistical nature of processes by comparing the observed average run length (OARL) to the expected average run length (ARL), resulting in a gap value called the ISTAB index. The ISTAB index has three characteristic behaviors that are indicative of defects in an SPC instance. Case 1: the observed average run length is excessively long relative to expectation; ISTAB > 0 indicates the possibility that the limits are too wide. Case 2: the observed average run length is consistent with expectation; ISTAB near zero indicates that the process is stable. Case 3: the observed average run length is inordinately short relative to expectation; ISTAB < 0 indicates that the limits are too tight, the process is unstable, or both. The probability distribution of run length is the basis for establishing an ARL. We demonstrate that the geometric distribution is a good approximation to run length across a wide variety of rule sets. Excessively long run lengths are associated with one kind of defect in an SPC instance; inordinately short run lengths are associated with another. A sampling distribution is introduced as a way to quantify excessively long and inordinately short observed run lengths. This paper provides detailed guidance for action limits on these run lengths. ISTAB as a statistical method of review facilitates automated instability detection. This paper proposes a management system based on ISTAB as an enhancement to more traditional SPC approaches.
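The abstract does not give the exact ISTAB formula, so the sketch below assumes a plausible form: the gap is taken as the log-ratio of observed to expected average run length, with the expected ARL from the geometric run-length model (ARL = 1/p for per-sample signal probability p under a stable process). Treat this strictly as an illustration of the comparison, not the authors' definition.

```python
# Assumed illustrative form of the OARL-vs-ARL comparison; the actual
# ISTAB index definition is not given in the abstract.
import numpy as np

def istab_index(run_lengths, p_signal):
    oarl = np.mean(run_lengths)      # observed average run length
    arl = 1.0 / p_signal             # expected ARL under the geometric model
    return np.log(oarl / arl)        # >0: limits too wide; <0: too tight/unstable

runs = [310, 280, 450, 150, 390]     # samples between SPC signals (hypothetical)
print(f"ISTAB ~ {istab_index(runs, p_signal=1/370):.2f}")  # near zero -> stable
```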
Skin hydration, microrelief and greasiness of normal skin in Antarctica.
Tsankov, N; Mateev, D; Darlenski, R
2018-03-01
The skin is the primary defence of the human body against external factors of physical, chemical, mechanical and biologic origin. Climatic factors, including low temperature and sun radiation, affect the skin. The effect of climatic conditions in Antarctica on healthy skin has not been previously addressed. The aim of this study was to evaluate the changes in skin hydration, greasiness and microrelief due to the extreme climatic environmental factors during the stay of the members of the Bulgarian Antarctic expedition. Fifty-nine Caucasian healthy subjects, 42 men and 17 women with mean age 50.9 years (range 27-68), were enrolled. The study was performed in five consecutive years from 2011 to 2016 at the Bulgarian Antarctic base camp on Livingston Island. The study protocol consisted of two parts: study A, with a duration of 15 days and measurement of skin physiology parameters on a daily basis, and study B, with five measurements at baseline and at days 14, 30, 45 and 50 upon arrival in Antarctica. We measured three biophysical parameters related to skin physiology at cheek skin with an impedance measuring device. There was no statistically significant difference between parameters at the different measurement points. Skin hydration varied, reaching its lowest point at day 11 and then returning to values similar to baseline. Initially, an increase in skin greasiness was witnessed, with a sharp depression at day 11 and final values at day 15 resembling those at baseline. An increase, although not statistically significant, in skin roughness was observed in the first 15 days of the study. Study B showed no statistically significant variation in the values of the three parameters. Our study presents the first results on the effect of the Antarctic climate on human skin physiology. © 2017 European Academy of Dermatology and Venereology.
Statistical characteristics of mechanical heart valve cavitation in accelerated testing.
Wu, Changfu; Hwang, Ned H C; Lin, Yu-Kweng M
2004-07-01
Cavitation damage has been observed on mechanical heart valves (MHVs) undergoing accelerated testing. Cavitation itself can be modeled as a stochastic process, as it varies from beat to beat of the testing machine. This in-vitro study was undertaken to investigate the statistical characteristics of MHV cavitation. A 25-mm St. Jude Medical bileaflet MHV (SJM 25) was tested in an accelerated tester at various pulse rates, ranging from 300 to 1,000 bpm, with stepwise increments of 100 bpm. A miniature pressure transducer was placed near a leaflet tip on the inflow side of the valve, to monitor regional transient pressure fluctuations at instants of valve closure. The pressure trace associated with each beat was passed through a 70 kHz high-pass digital filter to extract the high-frequency oscillation (HFO) components resulting from the collapse of cavitation bubbles. Three intensity-related measures were calculated for each HFO burst: its time span; its local root-mean-square (LRMS) value; and the area enveloped by the absolute value of the HFO pressure trace and the time axis, referred to as cavitation impulse. These were treated as stochastic processes, of which the first-order probability density functions (PDFs) were estimated for each test rate. Both the LRMS value and cavitation impulse were log-normal distributed, and the time span was normal distributed. These distribution laws were consistent at different test rates. The present investigation was directed at understanding MHV cavitation as a stochastic process. The results provide a basis for establishing further the statistical relationship between cavitation intensity and time-evolving cavitation damage on MHV surfaces. These data are required to assess and compare the performance of MHVs of different designs.
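The three intensity measures are straightforward to compute from a high-pass filtered trace. A sketch with a synthetic signal follows (the sampling rate, burst model, and detection threshold are assumptions; only the 70 kHz cutoff comes from the study):

```python
# Sketch of the three HFO intensity measures: burst time span, local RMS
# (LRMS), and cavitation impulse, computed from a synthetic pressure trace.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 1_000_000                                   # 1 MHz sampling rate, assumed
t = np.arange(0, 0.001, 1 / fs)
trace = 0.2 * np.sin(2 * np.pi * 5e3 * t)        # low-frequency closure transient
trace[300:400] += 0.5 * np.random.default_rng(4).normal(size=100)  # synthetic HFO burst

sos = butter(4, 70e3, btype="highpass", fs=fs, output="sos")  # 70 kHz high-pass
hfo = sosfilt(sos, trace)

burst = np.abs(hfo) > 0.05                       # crude burst mask, assumed threshold
time_span = burst.sum() / fs                     # (1) time span of the burst
lrms = np.sqrt(np.mean(hfo[burst] ** 2)) if burst.any() else 0.0  # (2) LRMS value
impulse = np.sum(np.abs(hfo[burst])) / fs        # (3) cavitation impulse (area vs time)
print(time_span, lrms, impulse)
```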
Brown, Dorothy Cimino; Bell, Margie; Rhodes, Linda
2013-12-01
To determine the optimal method for use of the Canine Brief Pain Inventory (CBPI) to quantitate responses of dogs with osteoarthritis to treatment with carprofen or placebo. 150 dogs with osteoarthritis. Data were analyzed from 2 studies with identical protocols in which owner-completed CBPIs were used. Treatment for each dog was classified as a success or failure by comparing the pain severity score (PSS) and pain interference score (PIS) on day 0 (baseline) with those on day 14. Treatment success or failure was defined on the basis of various combinations of reduction in the 2 scores when inclusion criteria were set as a PSS and PIS ≥ 1, 2, or 3 at baseline. Statistical analyses were performed to select the definition of treatment success that had the greatest statistical power to detect differences between carprofen and placebo treatments. Defining treatment success as a reduction of ≥ 1 in PSS and ≥ 2 in PIS in each dog had consistently robust power. Power was 62.8% in the population that included only dogs with baseline scores ≥ 2 and 64.7% in the population that included only dogs with baseline scores ≥ 3. The CBPI had robust statistical power to evaluate the treatment effect of carprofen in dogs with osteoarthritis when protocol success criteria were predefined as a reduction of ≥ 1 in PSS and ≥ 2 in PIS. Results indicated the CBPI can be used as an outcome measure in clinical trials to evaluate new pain treatments when it is desirable to evaluate success in individual dogs rather than overall mean or median scores in a test population.
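The winning success criterion is a two-line predicate; a sketch with hypothetical scores:

```python
# Direct encoding of the success criterion the study found most robust:
# treatment succeeds when PSS drops by >= 1 and PIS drops by >= 2
# between day 0 and day 14.
def cbpi_success(pss_day0, pis_day0, pss_day14, pis_day14):
    return (pss_day0 - pss_day14) >= 1 and (pis_day0 - pis_day14) >= 2

print(cbpi_success(6.0, 7.0, 4.5, 4.0))  # True: drops of 1.5 and 3.0
print(cbpi_success(6.0, 7.0, 5.5, 6.5))  # False: drops of 0.5 and 0.5
```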
Hardiman, S; Miller, K; Murphy, M
1993-01-01
Safety observations during the clinical development of Mentane (velnacrine maleate) have included the occurrence of generally asymptomatic liver enzyme elevations confined to patients with Alzheimer's disease (AD). The clinical presentation of this reversible hepatocellular injury is analogous to that reported for tetrahydroaminoacridine (THA). Direct liver injury, possibly associated with the production of a toxic metabolite, would be consistent with reports of aberrant xenobiotic metabolism in Alzheimer's disease patients. Since a patient-related aberration in drug metabolism was suspected, a biostatistical strategy was developed with the objective of predicting hepatotoxicity in individual patients prior to exposure to velnacrine maleate. The method used logistic regression techniques, with variable selection restricted to those items which could be routinely and inexpensively assessed at screening evaluation of potential candidates for treatment. The model was intended to be predictive (a marker for eventual hepatotoxicity) rather than causative, and the techniques employed "goodness of fit", percentage correct, and positive and negative predictive values. On the basis of demographic and baseline laboratory data from 942 patients, the PROPP statistic (the Physician Reference Of Predicted Probabilities) was developed. Main-effect variables included age, gender, and nine hematological and serum chemistry variables. The sensitivity of the current model is approximately 49%, and its specificity approximately 88%. Using prior probability estimates in which the patient's likelihood of liver toxicity is presumed to be at least 30%, however, the positive predictive value ranged from 64% to 77%. Although the clinical utility of this statistic will require refinement and additional prospective confirmation, its potential existence speaks to the possibility of markers for idiosyncratic drug metabolism in patients with Alzheimer's disease.
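As an illustration of the modeling strategy (not the PROPP model itself), the sketch below fits a logistic regression to synthetic stand-ins for the eleven screening variables and reports sensitivity, specificity, and positive predictive value; every name and number in it is hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for age, gender, and nine baseline laboratory variables.
X, y = make_classification(n_samples=942, n_features=11, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

tp = np.sum((pred == 1) & (y_te == 1))
fp = np.sum((pred == 1) & (y_te == 0))
fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0))
print("sensitivity:", tp / (tp + fn))  # cf. the ~49% reported above
print("specificity:", tn / (tn + fp))  # cf. the ~88% reported above
print("PPV:", tp / (tp + fp))
```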
Markowski, Alycia; Watkins, Maureen K; Burnett, Todd; Ho, Melissa; Ling, Michael
2018-04-01
Often, physical therapy students struggle with the skill and the confidence to perform manual techniques for musculoskeletal examination. Current teaching methods lack concurrent objective feedback. Real-time ultrasound imaging (RTUI) has the advantage of visualizing anatomical structures in real time in an efficient and safe manner. We hypothesized that using RTUI to augment teaching with concurrent objective visual feedback would improve students' ability to create a change in joint space when performing manual knee traction, as well as their confidence scores. Eighty-six students were randomly allocated to a control or an experimental group. All participants received baseline instruction on how to perform knee traction. The control group received standardized lab instruction (visual, video, and instructor/partner feedback). The experimental group received standardized lab instruction augmented with RTUI feedback. Pre- and post-intervention data collection consisted of measuring participants' ability to create changes in joint space when performing knee traction, a confidence survey evaluating perceived ability, and a reflection paper. Joint space changes between groups were compared using a paired t-test; surveys were analyzed with descriptive statistics and compared using the Wilcoxon rank-sum test; and for the reflection papers, themes were identified and descriptive statistics reported. Although there were no statistically significant differences between the control and experimental groups, overall scores improved. Qualitative data suggest that students found the use of ultrasound imaging beneficial and would like more exposure. This novel approach to teaching knee traction with RTUI has potential and may be a basis for further studies. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ab Initio Density Fitting: Accuracy Assessment of Auxiliary Basis Sets from Cholesky Decompositions.
Boström, Jonas; Aquilante, Francesco; Pedersen, Thomas Bondo; Lindh, Roland
2009-06-09
The accuracy of auxiliary basis sets derived by Cholesky decompositions of the electron repulsion integrals is assessed in a series of benchmarks on total ground state energies and dipole moments of a large test set of molecules. The test set includes molecules composed of atoms from the first three rows of the periodic table as well as transition metals. The accuracy of the auxiliary basis sets is tested for the 6-31G**, correlation consistent, and atomic natural orbital basis sets at the Hartree-Fock, density functional theory, and second-order Møller-Plesset levels of theory. By decreasing the decomposition threshold, a hierarchy of auxiliary basis sets is obtained, with accuracies ranging from that of standard auxiliary basis sets to that of conventional integral treatments.
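The core numerical idea can be demonstrated with a threshold-controlled pivoted Cholesky decomposition of a positive semi-definite matrix. The sketch below is a generic illustration under that assumption (production construction of auxiliary basis sets from the ERI matrix involves considerably more machinery), with a random matrix standing in for the integrals.

```python
import numpy as np

def pivoted_cholesky(M, tau=1e-4):
    """Incomplete Cholesky of a PSD matrix with diagonal pivoting, stopped
    when the largest remaining diagonal element falls below tau."""
    R = M.astype(float).copy()
    cols, piv = [], []
    while np.diag(R).max() > tau:
        p = int(np.argmax(np.diag(R)))
        col = R[:, p] / np.sqrt(R[p, p])
        cols.append(col)
        piv.append(p)
        R -= np.outer(col, col)
    return np.column_stack(cols), piv  # M ~ L @ L.T; piv marks retained columns

# Example with a random PSD matrix standing in for the ERI matrix:
A = np.random.rand(8, 8)
M = A @ A.T
L, piv = pivoted_cholesky(M, tau=1e-8)
print(np.abs(M - L @ L.T).max())  # residual shrinks as tau is lowered
```

Lowering tau retains more pivots, mirroring the hierarchy of increasingly accurate auxiliary sets described above.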
Rivoirard, Romain; Duplay, Vianney; Oriol, Mathieu; Tinquaut, Fabien; Chauvin, Franck; Magne, Nicolas; Bourmaud, Aurelie
2016-01-01
Quality of reporting for Randomized Clinical Trials (RCTs) in oncology has been analyzed in several systematic reviews, but, in this setting, there is a paucity of data on outcome definitions and on the consistency of reporting of statistical tests in RCTs and Observational Studies (OBS). The objective of this review was to describe those two reporting aspects for OBS and RCTs in oncology. From a list of 19 medical journals, three were retained for analysis after a random selection: British Medical Journal (BMJ), Annals of Oncology (AoO) and British Journal of Cancer (BJC). All original articles published between March 2009 and March 2014 were screened. Only studies whose main outcome was accompanied by a corresponding statistical test were included in the analysis. Studies based on censored data were excluded. The primary outcome was to assess quality of reporting for the description of the primary outcome measure in RCTs and of the variables of interest in OBS. A logistic regression was performed to identify study covariates potentially associated with concordance of tests between the Methods and Results sections. 826 studies were included in the review, and 698 were OBS. Variables were described in the Methods section for all OBS studies, and the primary endpoint was clearly detailed in the Methods section for 109 RCTs (85.2%). 295 OBS (42.2%) and 43 RCTs (33.6%) had perfect agreement for the reported statistical test between the Methods and Results sections. In multivariable analysis, the variable "number of included patients in study" was associated with test consistency: the aOR (adjusted Odds Ratio) for the third group compared to the first group was aOR Grp3 = 0.52 [0.31-0.89] (P value = 0.009). Variables in OBS and the primary endpoint in RCTs are reported and described with high frequency. However, consistency of statistical tests between the Methods and Results sections of OBS is not always achieved. Therefore, we encourage authors and peer reviewers to verify the consistency of statistical tests in oncology studies.
The Statistics Teaching Inventory: A Survey on Statistics Teachers' Classroom Practices and Beliefs
ERIC Educational Resources Information Center
Zieffler, Andrew; Park, Jiyoon; Garfield, Joan; delMas, Robert; Bjornsdottir, Audbjorg
2012-01-01
This paper reports on an instrument designed to assess the practices and beliefs of instructors of introductory statistics courses across the disciplines. Funded by a grant from the National Science Foundation, this project developed, piloted, and gathered validity evidence for the Statistics Teaching Inventory (STI). The instrument consists of 50…
Gorobets, Yu I; Gorobets, O Yu
2015-01-01
A statistical model is proposed in this paper for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field, connected with their metabolism. The statistical model is applicable when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e., randomizing motion of non-thermal nature, for example, movement by means of flagella. The energy of this randomizing active self-motion is characterized by a new statistical parameter for biological objects, which replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.
10 CFR Appendix C to Part 73 - Nuclear Power Plant Safeguards Contingency Plans
Code of Federal Regulations, 2011 CFR
2011-01-01
... command and delegation of authority as these apply to safeguards contingencies. b. Physical Layout—(i..., up to and including the design basis threat of radiological sabotage. The goals of licensee.... Perceived Danger—Consistent with the design basis threat specified in § 73.1(a)(1), licensees shall identify...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-11
... consistent with recommendations of the HHS Secretary's Advisory Committee on Genetics, Health, and Society... molecular basis, including, for example, information about what the test detects and what methods the test... and providing information on the molecular basis of genetic tests, such as detailed information about...
The Neural Basis of Syntactic Deficits in Primary Progressive Aphasia
ERIC Educational Resources Information Center
Wilson, Stephen M.; Galantucci, Sebastiano; Tartaglia, Maria Carmela; Gorno-Tempini, Maria Luisa
2012-01-01
Patients with primary progressive aphasia (PPA) vary considerably in terms of which brain regions are impacted, as well as in the extent to which syntactic processing is impaired. Here we review the literature on the neural basis of syntactic deficits in PPA. Structural and functional imaging studies have most consistently associated syntactic…
NASA Astrophysics Data System (ADS)
Petersson, George A.; Malick, David K.; Frisch, Michael J.; Braunstein, Matthew
2006-07-01
Examination of the convergence of full valence complete active space self-consistent-field configuration interaction including all single and double excitations (CASSCF-CISD) energies with expansion of the one-electron basis set reveals a pattern very similar to the convergence of single-determinant energies. Calculations on the lowest four singlet states and the lowest four triplet states of N2 with the sequence of n-tuple-ζ augmented polarized (nZaP) basis sets (n = 2, 3, 4, 5, and 6) are used to establish the complete basis set limits. Full configuration-interaction (CI) and core electron contributions must be included for very accurate potential energy surfaces. However, a simple extrapolation scheme that has no adjustable parameters and requires nothing more demanding than CAS(10e⁻,8orb)-CISD/3ZaP calculations gives the Re, ωe, ωexe, Te, and De for these eight states with rms errors of 0.0006 Å, 4.43 cm⁻¹, 0.35 cm⁻¹, 0.063 eV, and 0.018 eV, respectively.
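For orientation, a generic two-point inverse-cubic extrapolation is sketched below; it is not necessarily the parameter-free scheme of this paper, and the example energies are hypothetical.

```python
def cbs_two_point(e_n, e_np1, n):
    """Assume E(n) = E_CBS + A / n**3 and solve for E_CBS from two
    consecutive cardinal numbers n and n + 1."""
    a = (e_n - e_np1) / (n ** -3 - (n + 1) ** -3)
    return e_n - a * n ** -3

# Hypothetical correlation energies (hartree) at n = 3 (3ZaP) and n = 4 (4ZaP):
print(cbs_two_point(-0.3921, -0.4083, 3))
```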
Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.
Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J
2016-03-01
Although first impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumption can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants were asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block, and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about the predictable vs. unpredictable nature of each tone within the first encountered context, weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal stability of the transition statistics are violated. Both studies provide compelling evidence that the auditory system extrapolates patterns on multiple timescales to adjust its response to prediction errors, while profoundly distorting the effects of transition statistics through the assumptions formed on the basis of first impressions. Copyright © 2016 Elsevier B.V. All rights reserved.
Upward Flame Propagation and Wire Insulation Flammability: 2006 Round Robin Data Analysis
NASA Technical Reports Server (NTRS)
Hirsch, David B.
2007-01-01
This viewgraph document reviews results from flame propagation and flammability tests of different wire insulation materials. The presentation focused on investigating data variability both within and between laboratories; it evaluated between-laboratory consistency through the consistency statistic h, which indicates how one laboratory's cell average compares with the averages from the other labs; it evaluated within-laboratory consistency through the consistency statistic k, which indicates how one laboratory's within-laboratory variability compares with the combined variability of the other labs; and it tested extreme results to determine whether they arose by chance or from nonrandom causes (human error, instrument calibration shift, non-adherence to procedures, etc.).
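The h and k statistics mentioned above have a standard formulation (e.g., ASTM E691); the sketch below computes both for one material from per-laboratory replicates, with illustrative numbers.

```python
import numpy as np

def consistency_stats(results):
    """results: dict mapping laboratory -> replicate measurements of one material.
    Returns per-lab between-lab (h) and within-lab (k) consistency statistics."""
    means = {lab: np.mean(v) for lab, v in results.items()}
    sds = {lab: np.std(v, ddof=1) for lab, v in results.items()}
    grand = np.mean(list(means.values()))
    s_xbar = np.std(list(means.values()), ddof=1)          # SD of cell averages
    s_r = np.sqrt(np.mean(np.square(list(sds.values()))))  # pooled repeatability SD
    h = {lab: (m - grand) / s_xbar for lab, m in means.items()}
    k = {lab: s / s_r for lab, s in sds.items()}
    return h, k

labs = {"A": [3.1, 3.3, 3.2], "B": [3.6, 3.8, 3.7], "C": [2.9, 3.0, 3.1]}
h, k = consistency_stats(labs)
print(h, k)
```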
Ao, Xiaoping; Stenken, Julie A
2003-09-01
Microdialysis relative recovery (RR) enhancement using different water-soluble, epichlorohydrin-based cyclodextrin polymers (CD-EPS) was studied in vitro for several analytes: amitriptyline, carbamazepine, hydroquinone, ibuprofen, and 4-nitrophenol. When compared to the native CDs (alpha, beta, and gamma) on a per-mole basis, the CD-EPS-enhanced microdialysis RR was either statistically greater or the same. beta-CD-EPS was more highly retained than native beta-CD by a 20,000 Da molecular weight cutoff (MWCO) polycarbonate (PC) membrane, but showed no statistical difference in loss across a 100,000 Da MWCO polyethersulfone (PES) membrane. When the same weight percent of beta-CD or beta-CD-EPS was included in the microdialysis perfusion fluid, the beta-CD-EPS produced a higher microdialysis RR than native beta-CD for all analytes across the PES membrane. However, enhancements for the PC membrane were statistically insignificant when beta-CD and beta-CD-EPS were compared on a per-mole basis. These results suggest that CD-EPS may be used as effective enhancement agents during microdialysis sampling and, for some membranes, provide the additional advantage of being retained more than the native CDs.
Simplified DFT methods for consistent structures and energies of large systems
NASA Astrophysics Data System (ADS)
Caldeweyher, Eike; Gerit Brandenburg, Jan
2018-05-01
Kohn–Sham density functional theory (DFT) is routinely used for the fast electronic structure computation of large systems and will most likely continue to be the method of choice for the generation of reliable geometries in the foreseeable future. Here, we present a hierarchy of simplified DFT methods designed for consistent structures and non-covalent interactions of large systems, with a particular focus on molecular crystals. The covered methods are a minimal basis set Hartree–Fock method (HF-3c), a small basis set screened exchange hybrid functional (HSE-3c), and a generalized gradient approximated functional evaluated in a medium-sized basis set (B97-3c), all augmented with semi-classical correction potentials. We give an overview of the methods' design and a comprehensive evaluation on established benchmark sets for geometries and lattice energies of molecular crystals, and we highlight some realistic applications on large organic crystals with several hundred atoms in the primitive unit cell.
42 CFR 413.20 - Financial data and reports.
Code of Federal Regulations, 2010 CFR
2010-10-01
... costs payable under the program. Standardized definitions, accounting, statistics, and reporting...; OPTIONAL PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Accounting Records and... providers on an annual basis with reporting periods based on the provider's accounting year. In the...
MARYLAND AGRICULTURE AND YOUR WATERSHED
Using primarily 1995 State of Maryland agricultural statistics data, a new methodology was demonstrated with which State natural resource managers can analyze the areal extent of agricultural lands and production data on a watershed basis. The report organized major crop ...
Vehicular headways on signalized intersections: theory, models, and reality
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Šleis, Jiří
2015-01-01
We discuss statistical properties of vehicular headways measured on signalized crossroads. On the basis of mathematical approaches, we formulate theoretical and empirically inspired criteria for the acceptability of theoretical headway distributions. Sequentially, the multifarious families of statistical distributions (commonly used to fit real-road headway statistics) are confronted with these criteria, and with original empirical time clearances gauged among neighboring vehicles leaving signal-controlled crossroads after a green signal appears. Using three different numerical schemes, we demonstrate that an arrangement of vehicles on an intersection is a consequence of the general stochastic nature of queueing systems, rather than a consequence of traffic rules, driver estimation processes, or decision-making procedures.
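In the spirit of confronting candidate headway laws with data, the sketch below fits one commonly used family (a gamma distribution) to clearance times and applies a Kolmogorov-Smirnov test; the clearances are synthetic placeholders, and the gamma family is only one of the distributions such studies examine.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for measured time clearances (seconds) at a crossroad.
rng = np.random.default_rng(0)
clearances = rng.gamma(shape=2.2, scale=1.1, size=500)

# Fit a gamma law with the location fixed at zero, then test the fit.
shape, loc, scale = stats.gamma.fit(clearances, floc=0.0)
ks_stat, p_value = stats.kstest(clearances, "gamma", args=(shape, loc, scale))
print(f"fitted shape={shape:.2f}, scale={scale:.2f}, KS p-value={p_value:.3f}")
```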
Electric dipole moment of diatomic molecules by configuration interaction. IV.
NASA Technical Reports Server (NTRS)
Green, S.
1972-01-01
The theory of basis set dependence in configuration interaction calculations is discussed, taking into account a perturbation model which is valid for small changes in the self-consistent field orbitals. It is found that basis set corrections are essentially additive through first order. It is shown that an error found in a previously published dipole moment calculation by Green (1972) for the metastable first excited state of CO was indeed due to an inadequate basis set as claimed.
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values or computed initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminous United States.
Thermosensitivity of growth is determined by chaperone-mediated proteome reallocation
Chen, Ke; Gao, Ye; Mih, Nathan; O’Brien, Edward J.; Yang, Laurence; Palsson, Bernhard O.
2017-01-01
Maintenance of a properly folded proteome is critical for bacterial survival at notably different growth temperatures. Understanding the molecular basis of thermoadaptation has progressed in two main directions, the sequence and structural basis of protein thermostability and the mechanistic principles of protein quality control assisted by chaperones. Yet we do not fully understand how structural integrity of the entire proteome is maintained under stress and how it affects cellular fitness. To address this challenge, we reconstruct a genome-scale protein-folding network for Escherichia coli and formulate a computational model, FoldME, that provides statistical descriptions of multiscale cellular response consistent with many datasets. FoldME simulations show (i) that the chaperones act as a system when they respond to unfolding stress rather than achieving efficient folding of any single component of the proteome, (ii) how the proteome is globally balanced between chaperones for folding and the complex machinery synthesizing the proteins in response to perturbation, (iii) how this balancing determines growth rate dependence on temperature and is achieved through nonspecific regulation, and (iv) how thermal instability of the individual protein affects the overall functional state of the proteome. Overall, these results expand our view of cellular regulation, from targeted specific control mechanisms to global regulation through a web of nonspecific competing interactions that modulate the optimal reallocation of cellular resources. The methodology developed in this study enables genome-scale integration of environment-dependent protein properties and a proteome-wide study of cellular stress responses. PMID:29073085
Pupillary transient responses to within-task cognitive load variation.
Wong, Hoe Kin; Epps, Julien
2016-12-01
Changes in physiological signals due to task-evoked cognitive load have been reported extensively. However, pupil size based approaches for estimating cognitive load on a moment-to-moment basis are not as well understood as estimating cognitive load on a task-to-task basis, despite the appeal these approaches have for continuous load estimation. In particular, the pupillary transient response to instantaneous changes in induced load has not been experimentally quantified, and within-task changes in pupil dilation have not been investigated in a manner that allows their consistency to be quantified with a view to biomedical system design. In this paper, a variation of the digit span task is developed which reliably induces rapid changes of cognitive load to generate task-evoked pupillary responses (TEPRs) associated with large, within-task load changes. Linear modelling and one-way ANOVA reveal that increasing the rate of cognitive loading, while keeping task demands constant, results in a steeper pupillary response. Instantaneous drops in cognitive load are shown to produce statistically significantly different transient pupillary responses relative to sustained load, and when characterised using an exponential decay response, the task-evoked pupillary response time constant is on the order of 1-5 s. Within-task test-retest analysis confirms the reliability of the moment-to-moment measurements. Based on these results, estimates of pupil diameter can be employed with considerably more confidence in moment-to-moment cognitive load estimation systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
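The exponential characterisation of the transient can be reproduced with a simple curve fit; the sketch below recovers a time constant from synthetic pupil data, with all signal parameters assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, baseline, amp, tau):
    """Exponential decay model for the pupil response after a load drop."""
    return baseline + amp * np.exp(-t / tau)

# Synthetic pupil-diameter trace with a tau in the reported 1-5 s range.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
pupil = decay(t, 3.0, 0.8, 2.5) + rng.normal(0.0, 0.02, t.size)

(baseline, amp, tau), _ = curve_fit(decay, t, pupil, p0=(3.0, 0.5, 1.0))
print(f"estimated time constant: {tau:.2f} s")
```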
Brand, Bethany L; Lanius, Ruth; Vermetten, Eric; Loewenstein, Richard J; Spiegel, David
2012-01-01
This article provides an overview of the process of developing the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association with a focus on issues related to the trauma-related disorders, particularly the dissociative disorders (DD). We also discuss the highlights of research within the past 5 years in the assessment, treatment, and neurobiological basis of trauma disorders. Recent research shows that DD are associated with severe symptoms as well as a higher rate of utilization of mental health treatment compared with other psychiatric disorders. As a result, DD, like other complex posttraumatic disorders, exact a high economic as well as personal burden for patients and society. The latest research indicates that DD patients show a suboptimal response to standard exposure-based treatments for posttraumatic stress disorder as well as high levels of attrition from treatment. An emerging body of research on DD treatment, primarily of naturalistic and open trials, indicates that patients who receive specialized treatment that addresses their trauma-based, dissociative symptoms show improved functioning and reduced symptoms. Recent studies of the underlying neurobiological basis for dissociation support a model of excessive limbic inhibition in DD that is consistent with the phenomenology and clinical presentation of these patients. We are optimistic that the forthcoming DSM-5 will stimulate research on dissociation and the DD and suggest areas for future studies.
Trends in stratospheric ozone profiles using functional mixed models
NASA Astrophysics Data System (ADS)
Park, A. Y.; Guillas, S.; Petropavlovskikh, I.
2013-05-01
This paper is devoted to the modeling of altitude-dependent patterns of ozone variations over time. Umkehr ozone profiles (quarter of Umkehr layer) from 1978 to 2011 are investigated at two locations: Boulder (USA) and Arosa (Switzerland). The study consists of two statistical stages. First, we approximate ozone profiles employing an appropriate basis. To capture the primary modes of ozone variation without losing essential information, a functional principal component analysis is performed, as it penalizes roughness of the function and smooths excessive variations in the shape of the ozone profiles. As a result, data-driven basis functions are obtained. Second, we estimate the effects of covariates - month, year (trend), the quasi-biennial oscillation, the solar cycle, the Arctic Oscillation, and the El Niño/Southern Oscillation cycle - on the principal component scores of the ozone profiles over time using generalized additive models. The effects are smooth functions of the covariates and are represented by knot-based cubic regression splines. Finally, we employ generalized additive mixed-effects models incorporating a more complex error structure that reflects the observed seasonality in the data. The analysis provides more accurate estimates of influences and trends, together with enhanced uncertainty quantification. We are able to capture fine variations in the time evolution of the profiles, such as the semi-annual oscillation. We conclude by showing the trends by altitude over Boulder. The strongly declining trends over 2003-2011 for altitudes of 32-64 hPa show that stratospheric ozone is not yet fully recovering.
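The first stage can be illustrated with a plain (unpenalized) principal component analysis via the SVD, standing in for the smoothed functional PCA used in the paper; the profile matrix below is a random placeholder.

```python
import numpy as np

# Hypothetical (n_times x n_levels) matrix of Umkehr ozone profiles.
rng = np.random.default_rng(2)
profiles = rng.random((400, 20))

centered = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
basis = Vt[:3]               # leading data-driven basis functions
scores = centered @ basis.T  # per-profile PC scores (the GAM responses)
explained = s[:3] ** 2 / np.sum(s ** 2)
print(explained)
```

In the second stage, these scores would be modeled as smooth functions of the covariates.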
Capuchins, space, time and memory: an experimental test of what-where-when memory in wild monkeys
2016-01-01
There is considerable controversy about the existence, extent and adaptive value of integrated multimodal memory in non-human animals. Building on prior results showing that wild capuchin monkeys in Argentina appear to recall both the location and amount of food at patches they had previously visited, I tested whether they also track and use elapsed time as a basis for decisions about which feeding patches to visit. I presented them with an experimental array of eight feeding sites, at each of which food rewards increased with increasing elapsed time since the previous visit, similar to the pattern of ripe fruit accumulation in natural feeding trees. Over the course of 68 days, comprising two distinct renewal rate treatments, one group repeatedly visited sites in the feeding array, generating 212 valid choices between sites. Comparison of observations against simulated movements and multinomial statistical models shows that the monkeys' choices were most consistent with dynamic memory for elapsed time specific to each of the eight sites. Thus, it appears that capuchin monkeys possess and use integrated memories of prior food patch use, including where the patch is relative to their current location, how productive the patch is and how long it has been since they last visited the patch. Natural selection to use such integrated memories in foraging tasks may provide an ecologically relevant basis for the evolution of complex intelligence in primates. PMID:27708145
NASA Astrophysics Data System (ADS)
Jiang, Ying; Chen, Jeff Z. Y.
2013-10-01
This paper concerns establishing a theoretical basis and numerical scheme for studying the phase behavior of AB diblock copolymers made of wormlike chains. The general idea of a self-consistent field theory is the combination of the mean-field approach with a statistical weight that describes the configurational properties of a polymer chain. In recent years, this approach has been extensively used for structural prediction of block copolymers, based on the Gaussian-model description of a polymer chain. The wormlike-chain model has played an important role in the description of polymer systems, covering the semiflexible-to-rod crossover of polymer properties and the highly stretched regime, which the Gaussian-chain model has difficulty describing. Although the idea of developing a self-consistent field theory for wormlike chains can be traced back to early developments in polymer physics, the solution of such a theory has been limited by technical difficulties. In particular, a challenge has been to develop a numerical algorithm enabling the calculation of a phase diagram containing three-dimensional structures for wormlike AB diblock copolymers. This paper describes a computational algorithm, combining a number of numerical techniques, that can be used for such a calculation. A phase diagram covering the major parameter areas was constructed for the wormlike-chain system and reported by us previously; there, the ratio between the total length and the persistence length of a constituent polymer is suggested as another tuning parameter for the microphase-separated structures. All detailed technical issues are carefully addressed in the current paper.
Condom use self-efficacy: effect on intended and actual condom use in adolescents.
Baele, J; Dusseldorp, E; Maes, S
2001-05-01
To investigate aspects of adolescents' condom use self-efficacy that affect their intended and actual condom use. Four hundred twenty-four sexually experienced and inexperienced male and female adolescents with a mean age of 17.0 years filled out a questionnaire concerning condom use self-efficacy and intended and actual condom use. Specific condom use self-efficacy scales were constructed from 37 items on the basis of a principal component analysis. The effect of self-efficacy, both as a global measure and in terms of the specific scales, on condom use intention and consistency was assessed using hierarchical multiple regression analyses. Six specific self-efficacy scales were constructed: Technical Skills, Image Confidence, Emotion Control, Purchase, Assertiveness, and Sexual Control. In sexually inexperienced adolescents, global self-efficacy explained 48%, the six self-efficacy scales 30%, and both together 51% of the variance in intention, after statistical control for gender, age, and education level. In the sexually experienced sample, the corresponding figures were 40%, 50%, and 57% for intention, and 23%, 29%, and 33% for consistency of condom use. Significant predictors of intention in the final model were gender, age, global self-efficacy, and purchasing skills in the inexperienced sample, and global self-efficacy, emotion control, assertiveness, image confidence, and sexual control in the experienced sample, whereas gender, age, global self-efficacy, emotion control, assertiveness, and purchase predicted consistency of condom use in the experienced sample. Condom use self-efficacy is a multidimensional construct. Intended and actual condom use in adolescents are best predicted by self-efficacy measures that include both global and relevant specific aspects of condom use.
Planck 2015 results. XVI. Isotropy and statistics of the CMB
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Akrami, Y.; Aluri, P. K.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Casaponsa, B.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Contreras, D.; Couchot, F.; Coulais, A.; Crill, B. P.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Fergusson, J.; Fernandez-Cobos, R.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Liu, H.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mikkelsen, K.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Pant, N.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Rotti, A.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Souradeep, T.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.
2016-09-01
We test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The "Cold Spot" is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.
Infants with Williams syndrome detect statistical regularities in continuous speech.
Cashon, Cara H; Ha, Oh-Ryeong; Graf Estes, Katharine; Saffran, Jenny R; Mervis, Carolyn B
2016-09-01
Williams syndrome (WS) is a rare genetic disorder associated with delays in language and cognitive development. The reasons for the language delay are unknown. Statistical learning is a domain-general mechanism recruited for early language acquisition. In the present study, we investigated whether infants with WS were able to detect the statistical structure in continuous speech. Eighteen 8- to 20-month-olds with WS were familiarized with 2 min of a continuous stream of synthesized nonsense words; the statistical structure of the speech was the only cue to word boundaries. They were tested on their ability to discriminate statistically defined "words" and "part-words" (which crossed word boundaries) in the artificial language. Despite significant cognitive and language delays, infants with WS were able to detect the statistical regularities in the speech stream. These findings suggest that an inability to track the statistical properties of speech is unlikely to be the primary basis for the delays in the onset of language observed in infants with WS. These results provide the first evidence of statistical learning by infants with developmental delays. Copyright © 2016 Elsevier B.V. All rights reserved.
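The word-boundary cue in such artificial languages is the transitional probability between syllables, TP(x -> y) = count(xy) / count(x); the sketch below computes it for a hypothetical Saffran-style stream (not the study's actual stimuli).

```python
import random
from collections import Counter

# Hypothetical nonsense "words"; the stream concatenates them at random.
words = ["golabu", "padoti", "tupiro", "bidaku"]
random.seed(0)
stream = "".join(random.choice(words) for _ in range(200))
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pairs = Counter(zip(syllables, syllables[1:]))
firsts = Counter(syllables[:-1])

def tp(x, y):
    """Transitional probability TP(x -> y) = count(xy) / count(x)."""
    return pairs[(x, y)] / firsts[x] if firsts[x] else 0.0

print(tp("go", "la"))  # within a word: 1.0
print(tp("bu", "pa"))  # across a word boundary: ~0.25 with four words
```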
Physical concepts in the development of constitutive equations
NASA Technical Reports Server (NTRS)
Cassenti, B. N.
1985-01-01
Proposed viscoplastic material models include in their formulation observed material response but do not generally incorporate principles from thermodynamics, statistical mechanics, and quantum mechanics. Numerous hypotheses about material response have been made on the basis of first principles, and many of these hypotheses have been tested experimentally. The proposed viscoplastic theories must therefore be checked against these hypotheses and their experimental basis. The physics of thermodynamics, statistical mechanics, and quantum mechanics, and the effects of defects, are reviewed for their application to the development of constitutive laws.
Structure of wind-shear turbulence
NASA Technical Reports Server (NTRS)
Trevino, G.; Laituri, T. R.
1989-01-01
The statistical characteristics of wind shear turbulence are modelled. Isotropic turbulence serves as the basis of comparison for the anisotropic turbulence which exists in wind shear. The question of turbulence scales in wind shear is addressed from the perspective of power spectral density.
34 CFR 647.3 - Who is eligible to participate in a McNair project?
Code of Federal Regulations, 2010 CFR
2010-07-01
... statistical references or other national survey data submitted to and accepted by the Secretary on a case-by-case basis. (d) Has not enrolled in doctoral level study at an institution of higher education...
Pre-crash scenario framework for crash avoidance systems based on vehicle-to-vehicle communications
DOT National Transportation Integrated Search
2011-06-13
This paper prioritizes and statistically describes pre-crash scenarios as a basis for the identification of crash avoidance functions enhanced or enabled by vehicle-to-vehicle (V2V) communication technology. Pre-crash scenarios depict vehicle ...
Women in Physics in Lithuania: Challenges and Actions
NASA Astrophysics Data System (ADS)
Šatkovskienė, Dalia
2009-04-01
The gender equality problem in physics is discussed on the basis of Lithuanian statistics and results of the project, "Baltic States Network: Women in Sciences and High Technology" (BASNET), initiated by Lithuanian women physicists and financed by the European Commission.
29 CFR 794.124 - Computations on a fiscal year basis.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS ACT Exemption From Overtime Pay Requirements Under Section 7(b)(3) of the Act Annual Gross Volume of Sales § 794.124 Computations on a fiscal year basis. Some enterprises operate on a fiscal year, consisting of an annual period different from the calendar year, for income tax or sales or other accounting...
Principals' Conceptions of Their Current Power Basis Revealed through Phenomenography
ERIC Educational Resources Information Center
Özaslan, Gökhan
2018-01-01
Purpose: The purpose of this paper is to describe the variations in the ways that principals conceptualize their basis of power in schools. Design/methodology/approach: Phenomenography was used as the research method of this study. The interviewees consisted of 16 principals, eight from public schools and eight from private schools. Findings: The…
USDA-ARS?s Scientific Manuscript database
The purpose of this study was to characterize the genetic basis underlying variation in feed efficiency in mid-lactation Holstein dairy cows. A genome-wide association study was performed for residual feed intake (RFI) and related traits using a large data set, consisting of nearly 5,000 cows. It wa...
Tipireddy, R.; Stinis, P.; Tartakovsky, A. M.
2017-09-04
In this paper, we present a novel approach for solving steady-state stochastic partial differential equations (PDEs) with high-dimensional random parameter space. The proposed approach combines spatial domain decomposition with basis adaptation for each subdomain. The basis adaptation is used to address the curse of dimensionality by constructing an accurate low-dimensional representation of the stochastic PDE solution (probability density function and/or its leading statistical moments) in each subdomain. Restricting the basis adaptation to a specific subdomain affords finding a locally accurate solution. Then, the solutions from all of the subdomains are stitched together to provide a global solution. We support our construction with numerical experiments for a steady-state diffusion equation with a random spatially dependent coefficient. Lastly, our results show that highly accurate global solutions can be obtained with significantly reduced computational costs.
Hunting statistics: what data for what use? An account of an international workshop
Nichols, J.D.; Lancia, R.A.; Lebreton, J.D.
2001-01-01
Hunting interacts with the underlying dynamics of game species in several different ways and is, at the same time, a source of valuable information not easily obtained from populations that are not subjected to hunting. Specific questions, including the sustainability of hunting activities, can be addressed using hunting statistics. Such investigations will frequently require that hunting statistics be combined with data from other sources of population-level information. Such reflections served as a basis for the meeting "Hunting Statistics: What Data for What Use?", held on January 15-18, 2001, in Saint-Benoist, France. We review here the 20 talks given during the workshop and the contribution of hunting statistics to our knowledge of the population dynamics of game species. Three specific topics (adaptive management, catch-effort models, and dynamics of exploited populations) were highlighted as important themes and are presented more extensively as boxes.
NASA Technical Reports Server (NTRS)
Lo, C. F.; Wu, K.; Whitehead, B. A.
1993-01-01
Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in SSME turbopump vibration. Anomalies are detected based on the amplitudes of peaks of fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods proved feasible for detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank; this method is applicable for on-line operation. The neural network method likewise requires enough data to train the neural networks. The testing procedure can be utilized at any time, so long as the characteristics of the components remain unchanged.
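A minimal version of the statistical screening described above is a distribution-based threshold on spectral peak amplitudes; the sketch below flags a firing as anomalous when a tracked peak departs from the baseline bank by more than three standard deviations (the threshold and all names are assumptions, not the NASA implementation).

```python
import numpy as np

def is_anomalous(baseline_peaks, new_peak, n_sigma=3.0):
    """baseline_peaks: amplitudes of one PSD peak (e.g., at the synchronous
    frequency) across normal firings; new_peak: same peak in the test firing."""
    mu = np.mean(baseline_peaks)
    sigma = np.std(baseline_peaks, ddof=1)
    return abs(new_peak - mu) > n_sigma * sigma

baseline = [0.92, 1.05, 0.98, 1.10, 1.01, 0.95]  # illustrative amplitudes
print(is_anomalous(baseline, 1.65))  # True: outside the baseline distribution
```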
Polish Adaptation of the Psychache Scale by Ronald Holden and Co-workers.
Chodkiewicz, Jan; Miniszewska, Joanna; Strzelczyk, Dorota; Gąsior, Krzysztof
2017-04-30
The study was aimed at producing a Polish adaptation of the Psychache Scale by Ronald Holden and co-workers. The scale is a self-assessment method which comprises 13 statements and is designed to assess subjectively experienced psychological pain. 300 persons were examined: undergraduates and postgraduates of the University of Lodz and the Technical University of Lodz. The group of study participants consisted of 185 women and 115 men. In addition, 150 alcohol-addicted men, 50 co-addicted women, and 50 major depressive episode (MDE) patients were examined. The Polish version of the Scale is a reliable and valid tool. Exploratory and confirmatory factor analyses confirmed the existence of one factor. The internal consistency, assessed on the basis of Cronbach's alpha, equalled 0.93. The method displays positive and statistically significant relationships to levels of depression, hopelessness, anxiety, and anhedonia, and negative relations to levels of optimism, life satisfaction, and positive orientation. Alcohol-addicted men with currently diagnosed suicidal thoughts were characterised by a significantly higher level of psychological pain compared to alcoholics without such thoughts. A higher level of psychache was also reported in people with depression who had a history of attempted suicide compared with those who had not attempted suicide. The outcome of the adaptation work on the Psychache Scale supports recommending the method for scientific research and for use in therapeutic practice.
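Cronbach's alpha, used above to assess internal consistency, has a closed form; the sketch below computes it for a hypothetical respondents-by-items score matrix (the function name and data are illustrative).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents x n_items) score matrix.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
scores = rng.integers(1, 6, size=(300, 13))  # hypothetical 13-item responses
# Independent random items give alpha near 0; correlated items push it toward 1.
print(cronbach_alpha(scores))
```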
Cai, Chunhua; Zhang, Liangshun; Lin, Jiaping; Wang, Liquan
2008-10-09
We investigated, both experimentally and theoretically, the self-assembly behaviors of pH- and thermosensitive poly(L-glutamic acid)-b-poly(propylene oxide)-b-poly(L-glutamic acid) (PLGA-b-PPO-b-PLGA) triblock copolymers in aqueous solution by means of transmission electron microscopy (TEM), scanning electron microscopy (SEM), dynamic light scattering (DLS), circular dichroism (CD), and self-consistent field theory (SCFT) simulations. Vesicles were observed when the hydrophilic PLGA block length is shorter or the pH value of the solution is lower. The vesicles were found to transform into spherical micelles when the PLGA block length increases or its conformation changes from helix to coil with increasing pH value. In addition, increasing temperature gives rise to a decrease in the size of the aggregates, which is related to the dehydration of the PPO segments at higher temperatures. The SCFT simulation results show that the vesicles transform into spherical micelles with increasing fraction or statistical length of the A block in a model ABA triblock copolymer, which corresponds, respectively, to the increase in the PLGA length or its conformation change from helix to coil in the experiments. The SCFT calculations also provide chain distribution information in the aggregates. On the basis of both the experimental and SCFT results, a mechanism for the structural change of the PLGA-b-PPO-b-PLGA aggregates was proposed.
d'Amato, T; Waksman, G; Martinez, M; Laurent, C; Gorwood, P; Campion, D; Jay, M; Petit, C; Savoye, C; Bastard, C
1994-05-01
In a previous study, we reported a nonrandom segregation between schizophrenia and the pseudoautosomal locus DXYS14 in a sample of 33 sibships. That study has been extended by the addition of 16 new sibships from 16 different families. Data from six other loci of the pseudoautosomal region and of the immediately adjacent part of the X specific region have also been analyzed. Two methods of linkage analysis were used: the affected sibling pair (ASP) method and the lod-score method. Lod-score analyses were performed on the basis of three different models--A, B, and C--all shown to be consistent with the epidemiological data on schizophrenia. No clear evidence for linkage was obtained with any of these models. However, whatever the genetic model and the disease classification, maximum lod scores were positive with most of the markers, with the highest scores generally being obtained for the DXYS14 locus. When the ASP method was used, the earlier finding of nonrandom segregation between schizophrenia and the DXYS14 locus was still supported in this larger data set, at an increased level of statistical significance. Findings of ASP analyses were not significant for the other loci. Thus, findings obtained from analyses using the ASP method, but not the lod-score method, were consistent with the pseudoautosomal hypothesis for schizophrenia.
Kusche, Daniel; Kuhnt, Katrin; Ruebesam, Karin; Rohrer, Carsten; Nierop, Andreas F M; Jahreis, Gerhard; Baars, Ton
2015-02-01
Intensification of organic dairy production leads to the question of whether the implementation of intensive feeding incorporating maize silage and concentrates is altering milk quality. Therefore the fatty acid (FA) and antioxidant (AO) profiles of milk on 24 farms divided into four system groups in three replications (n = 71) during the outdoor period were analyzed. In this system comparison, a differentiation of the system groups and the effects of the main system factors 'intensification level' (high-input versus low-input) and 'origin' (organic versus conventional) were evaluated in a multivariate statistical approach. Consistent differentiation of milk from the system groups due to feeding-related impacts was possible in general and on the basis of 15 markers. The prediction of the main system factors was based on four or five markers. The prediction of 'intensification level' was based mainly on CLA c9,t11 and C18:1 t11, whereas that of 'origin' was based on n-3 PUFA. It was possible to demonstrate consistent differences in the FA and AO profiles of organic and standard conventional milk samples. Highest concentrations of nutritionally beneficial compounds were found in the low-input organic system. Adapted grass-based feeding strategies including pasture offer the potential to produce a distinguishable organic milk product quality. © 2014 Society of Chemical Industry.
NASA Technical Reports Server (NTRS)
Hough, D. H.; Readhead, A. C. S.
1989-01-01
A complete, flux-density-limited sample of double-lobed radio quasars is defined, with nuclei bright enough to be mapped with the Mark III VLBI system. It is shown that the statistics of linear size, nuclear strength, and curvature are consistent with the assumption of random source orientations and simple relativistic beaming in the nuclei. However, these statistics are also consistent with the effects of interaction between the beams and the surrounding medium. The distribution of jet velocities in the nuclei, as measured with VLBI, will provide a powerful test of physical theories of extragalactic radio sources.
Kwiatkowska, Małgorzata; Walczak, Zbigniew
2016-01-01
Nutrition plays an important role in the elderly stage of life. A proper proportion of the individual nutritional ingredients in a diet may positively affect the ageing body. This positive influence consists in slowing down the undesired and unfavourable physiological alterations that lead inevitably to general weakening of the body. The aim of the paper was to perform a qualitative analysis, using the Starzyńska scoring system, of the diets (daily food rations, DFR) of students of the University of the Third Age at the Koszalin University of Technology (Poland). The studied materials consisted of 7-day current records made by 79 students (16 males and 63 females) of the University of the Third Age at the Koszalin University of Technology, together with measurements of body weight, height, and waistline. The records were qualitatively evaluated with Starzyńska's test. Approximately half of the students were found to be overweight or obese. The majority consumed the recommended number of meals. About 44% of the students consumed animal protein with all meals. Milk and cheese were ingested daily with at least two meals by approximately 11% of the students. Fruit and vegetables were eaten on a daily basis by about 60% of the students. Almost 40% ate wholegrain bread, groats, and dried legumes. Statistical analysis of the mean scores for the individual indicators did not reveal any statistically significant difference between women and men (p>0.05). Approximately three-quarters of the evaluated diets were incorrectly formulated and required radical modification. The low frequency of consumption of animal protein, milk and cheese, wholegrain bread, groats, and dried legumes may result in deficiencies in certain nutrients. Nutritional education is recommended, focusing on the correct way to formulate meals. The recorded level of overweight and obesity in the students indicates a need for a quantitative assessment of consumption considering, among other factors, the energy input of their diets.
Evaluation of a seismic quiescence pattern in southeastern Sicily
NASA Astrophysics Data System (ADS)
Mulargia, F.; Broccio, F.; Achilli, V.; Baldi, P.
1985-07-01
Southeastern Sicily experienced very peculiar seismic activity in historical times, with a long series of ruinous earthquakes. The last large event, with magnitude probably in excess of 7.5, occurred on January 11, 1693, totally destroying the city of Catania and killing 60,000 people. Only a few moderate events have been reported since then, and a seismic gap hypothesis has been proposed on this basis. A close scrutiny of the available data further shows that all significant seismic activity ceased after 1850, suggesting one of the largest quiescence patterns ever encountered. This is examined together with the complex tectonic setting of the region, characterized by a wrenching mechanism with the most significant seismicity located in its northern graben structure. An attempt to ascertain the imminence and size of a future earthquake through commonly accepted empirical relations based on the size and duration of the quiescence pattern did not provide any feasible result. A precision levelling survey which we recently completed yielded a relative subsidence of ~3 mm/yr, consistent with aseismic slip on the northern graben structure at a rate of ~15 mm/yr. Comparing these results with sedimentological and tidal data suggests that the area is undergoing an accelerated deformation process; this conclusion is further supported by Rikitake's ultimate strain statistics. While the imminence of a damaging (M = 5.4) event is strongly favoured by Weibull statistics applied to the time series of occurrence of large events, the accumulated strain does not appear sufficient for a large earthquake (M ≳ 7.0). Within the limits of reliability of present semi-empirical approaches, we conclude that the available evidence is consistent with the occurrence of a moderate-to-large (M ≅ 6.0) event in the near future. Several questions regarding the application of simple models to real (and complex) tectonic settings nevertheless remain unanswered.
Intermediate/Advanced Research Design and Statistics
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2009-01-01
The purpose of this module is to provide Institutional Researchers (IRs) with an understanding of the principles of advanced research design and of the intermediate/advanced statistical procedures consistent with such designs.
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, although the choice of the wavelet family appears less critical. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
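A minimal scikit-learn sketch of the kind of kernel-parameter sensitivity sweep described above; the synthetic features stand in for the wavelet/Lyapunov EEG features, and the six-point gamma grid is illustrative, not the paper's 26-value grid.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Placeholder data standing in for 21-dimensional EEG feature vectors
    X, y = make_classification(n_samples=200, n_features=21, random_state=0)

    # Exponential RBF kernel exp(-gamma * ||x - z||), passed as a callable Gram matrix
    def exponential_rbf(gamma):
        def kernel(A, B):
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
            return np.exp(-gamma * d)
        return kernel

    for gamma in np.logspace(-3, 2, 6):  # sweep of kernel radii
        acc_g = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
        acc_e = cross_val_score(SVC(kernel=exponential_rbf(gamma)), X, y, cv=5).mean()
        print(f"gamma={gamma:g}  gaussian={acc_g:.3f}  exponential={acc_e:.3f}")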
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salazar, Keith D., E-mail: Salazar.keith@epa.gov; Brinkerhoff, Christopher J., E-mail: Brinkerhoff.Chris@epa.gov; Lee, Janice S., E-mail: Lee.JaniceS@epa.gov
Subchronic and chronic studies in rats of the gasoline oxygenates ethyl tert-butyl ether (ETBE) and tert-butanol (TBA) report similar noncancer kidney and liver effects but differing results with respect to kidney and liver tumors. Because TBA is a major metabolite of ETBE, it is possible that TBA is the active toxic moiety in all these studies, with reported differences due simply to differences in the internal dose. To test this hypothesis, a physiologically-based pharmacokinetic (PBPK) model was developed for ETBE and TBA to calculate internal dosimetrics of TBA following either TBA or ETBE exposure. This model, based on earlier PBPK models of methyl tert-butyl ether (MTBE), was used to evaluate whether kidney and liver effects are consistent across routes of exposure, as well as between ETBE and TBA studies, on the basis of estimated internal dose. The results demonstrate that noncancer kidney effects, including kidney weight changes, urothelial hyperplasia, and chronic progressive nephropathy (CPN), yielded consistent dose–response relationships across routes of exposure and across ETBE and TBA studies using TBA blood concentration as the dose metric. Relative liver weights were also consistent across studies on the basis of TBA metabolism, which is proportional to TBA liver concentrations. However, kidney and liver tumors were not consistent using any dose metric. These results support the hypothesis that TBA mediates the noncancer kidney and liver effects following ETBE administration; however, additional factors besides internal dose are necessary to explain the induction of liver and kidney tumors. - Highlights: • We model two metabolically-related fuel oxygenates to address toxicity data gaps. • Kidney and liver effects are compared on an internal dose basis. • Noncancer kidney effects are consistent using TBA blood concentration. • Liver weight changes are consistent using TBA metabolic rate. • Kidney and liver tumors are not consistent using any internal dose metric.
Gymnastics Safety and The Law.
ERIC Educational Resources Information Center
Dailey, Bob
Data collected from the National Electronic Injury Surveillance System (NEISS) and 26 tort liability cases are examined as a basis for recommendations for gymnastics instructors, supervisors, and administrators. Tables supply supportive statistics for a discussion of gymnastics injuries classified by sex, body part injured, severity, and…
Inverse modeling with RZWQM2 to predict water quality
USDA-ARS?s Scientific Manuscript database
Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...
Solutions to Some Nonlinear Equations from Nonmetric Data.
ERIC Educational Resources Information Center
Rule, Stanley J.
1979-01-01
A method to provide estimates of parameters of specified nonlinear equations from ordinal data generated from a crossed design is presented. The statistical basis for the method, called NOPE (nonmetric parameter estimation), as well as examples using artificial data, are presented. (Author/JKS)
Structure of wind-shear turbulence
NASA Technical Reports Server (NTRS)
Trevino, G.; Laituri, T. R.
1988-01-01
The statistical characteristics of wind-shear turbulence are modelled. Isotropic turbulence serves as the basis of comparison for the anisotropic turbulence which exists in wind shear. The question of how turbulence scales in a wind shear is addressed from the perspective of power spectral density.
On the optical search for Centaurus X-3.
NASA Technical Reports Server (NTRS)
Brucato, R. J.; Kristian, J.; Westphal, J. A.
1972-01-01
Elimination of the optical eclipsing binary LR Cen as a candidate for Cen X-3 on the basis of a real discrepancy of orbital periods. It is believed that the position coincidence of Wray 795 with Cen X-3 is not statistically significant.
Improving mobility for Wisconsin's elderly : brief.
DOT National Transportation Integrated Search
2011-10-01
By 2035, the number of elderly residents in Wisconsin is expected to nearly double, and one in four drivers on Wisconsin roads will be elderly. According to national statistics, the elderly are more likely to be involved in crashes on a per-mile basi...
DEVELOPING MEANINGFUL COHORTS FOR HUMAN EXPOSURE MODELS
This paper summarizes numerous statistical analyses focused on the U.S. Environmental Protection Agency's Consolidated Human Activity Database (CHAD), used by many exposure modelers as the basis for data on what people do and where they spend their time. In doing so, modelers ...
Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems
Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1996-01-01
The purpose of the propagation studies within the ACTS Project Office is to acquire 20 and 30 GHz rain fade statistics using the ACTS beacon links received at the NGS (NASA Ground Station) in Cleveland. Other than the raw, statistically unprocessed rain fade events that occur in real time, relevant rain fade statistics derived from such events are the cumulative rain fade statistics as well as fade duration statistics (beyond given fade thresholds) over monthly and yearly time intervals. Concurrent with the data logging exercise, monthly maximum rainfall levels recorded at the US Weather Service at Hopkins Airport are appended to the database to facilitate comparison of observed fade statistics with those predicted by the ACTS Rain Attenuation Model. Also, the raw fade data will be in a format, complete with documentation, for use by other investigators who require realistic fade event evolution in time for simulation purposes or further analysis for comparisons with other rain fade prediction models, etc. The raw time series data from the 20 and 30 GHz beacon signals is purged of nonrelevant data intervals where no rain fading has occurred. All other data intervals which contain rain fade events are archived with the accompanying time stamps. The definition of just what constitutes a rain fade event will be discussed later. The archived data serves two purposes. First, all rain fade event data is recombined into a contiguous data series every month and every year; this will represent an uninterrupted record of the actual (i.e., not statistically processed) temporal evolution of rain fade at 20 and 30 GHz at the location of the NGS. The second purpose of the data in such a format is to enable a statistical analysis of prevailing propagation parameters such as cumulative distributions of attenuation on a monthly and yearly basis as well as fade duration probabilities below given fade thresholds, also on a monthly and yearly basis. In addition, various subsidiary statistics such as attenuation rate probabilities are derived. The purged raw rain fade data as well as the results of the analyzed data will be made available for use by parties in the private sector upon their request. The process which will be followed in this dissemination is outlined in this paper.
Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A
2008-12-01
It is vital to forecast gas and particle matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relative important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network would result in the reduction of the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The obtained results of RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. The good results indicated the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
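A minimal sketch, under simplified assumptions, of the pipeline this abstract describes: principal component reduction of correlated inputs followed by a radial basis function network (k-means centers, Gaussian hidden units, linear readout). The four inputs and the response are synthetic placeholders, not the Iowa measurements.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    # Stand-ins for outdoor temperature, indoor temperature, animal units, ventilation
    X = rng.normal(size=(365, 4))
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=365)

    def rbf_features(Z, centers, width):
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * width ** 2))  # Gaussian hidden-layer activations

    # Two principal components, echoing the environment/animal factors reported
    Zs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Zs).cluster_centers_
    H = rbf_features(Zs, centers, width=1.0)
    model = Ridge(alpha=1e-3).fit(H, y)           # linear output layer
    r = np.corrcoef(model.predict(H), y)[0, 1]    # correlation coefficient
    print(f"r = {r:.3f}")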
NASA Astrophysics Data System (ADS)
Gentilucci, Matteo; Bisci, Carlo; Fazzini, Massimiliano; Tognetti, Danilo
2016-04-01
The analysis is focused on more than 100 meteorological recording stations located in the Province of Macerata (Marche region, Adriatic side of Central Italy) and in its surroundings; it aims to check the time series of their climatological data (temperatures and precipitations), covering about one century of observations, in order to remove or rectify any errors. This small area (about 2,800 km²) features many different climate types because of its varied topography, ranging, moving westward, from the Adriatic coast to the Apennines (over 2,100 m of altitude). In this irregular context, it is difficult to establish a common procedure for each sector; therefore, the general guidelines of the WMO have been followed, with some important differences (mostly in method). Data are classified on the basis of validation codes (VC): missing datum (VC=-1), correct or verified datum (VC=0), datum under investigation (VC=1), datum removed after the analysis (VC=2), datum reconstructed through interpolation or by estimating the errors of digitization (VC=3). The first step was the "Logical Control", consisting of the investigation of gross errors of digitization: the data found in this phase of the analysis have been removed without any other control (VC=2). The second step, the "Internal Consistency Check", leads to the elimination (VC=2) of all the data out of range, estimated on the basis of the climate zone for each investigated variable. The third is the "Tolerance Test", carried out by comparing each datum with the historical record it belongs to; in order to apply this test, the normal distribution of the data has been evaluated. The "Tolerance Test" usually defines only suspect data (VC=1) to be verified with further tests, such as the "Temporal Consistency" and the "Spatial Consistency". The "Temporal Consistency" allows an evaluation of the time sequence of data, setting a specified range for each station based upon its historical records. Data out of range have been considered under investigation (VC=1). Data are finally compared with the ones contemporaneously recorded in a set of neighboring meteorological stations through the "Spatial Consistency" test, thus eliminating every suspicious datum (recoded VC=2 or VC=0, depending upon the results of this analysis). This procedure uses a series of different statistical steps to avoid uncertainties: at its end, all the investigated data are either accepted (VC=0) or refused (VC=2). Refused and missing data (VC=-1 and VC=2) have been reconstructed through interpolation using co-kriging techniques (assigning VC=3), when necessary, in the final stage of the process. All the above procedure has been developed using database management software in a GIS (ESRI ArcGIS®) environment. The refused data amount to 1,286 of 77,021 (1.67%) for precipitation and 375 of 1,821,054 (0.02%) for temperature.
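A minimal pandas sketch of the first screening steps using the validation codes (VC) defined above; the physical and climate-zone bounds are hypothetical, not those used for the Province of Macerata.

    import numpy as np
    import pandas as pd

    # VC: -1 missing, 0 verified, 1 under investigation, 2 removed, 3 reconstructed
    df = pd.DataFrame({"tmax_C": [12.4, 210.0, np.nan, -8.1, 31.0]})  # toy record
    df["VC"] = 0
    df.loc[df["tmax_C"].isna(), "VC"] = -1

    # "Logical Control": gross digitization errors (hypothetical physical bounds)
    df.loc[df["tmax_C"].abs() > 60, "VC"] = 2

    # "Internal Consistency Check": out of climate-zone range (hypothetical bounds)
    ok = df["VC"] == 0
    df.loc[ok & ~df["tmax_C"].between(-25.0, 45.0), "VC"] = 2

    # "Tolerance Test": flag values beyond 3 sigma of the station record as suspect
    m, s = df.loc[df["VC"] == 0, "tmax_C"].agg(["mean", "std"])
    df.loc[(df["VC"] == 0) & ((df["tmax_C"] - m).abs() > 3 * s), "VC"] = 1
    print(df)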
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okada, S.; Shinada, M.; Matsuoka, O.
1990-10-01
A systematic calculation of new relativistic Gaussian basis sets is reported. The new basis sets are similar to the previously reported ones (J. Chem. Phys. 91, 4193 (1989)), but, in the calculation, the Breit interaction has been explicitly included besides the Dirac-Coulomb Hamiltonian. They have been adopted for the calculation of the self-consistent field effect on the Breit interaction energies and are expected to be useful for studies of higher-order effects such as electron correlations and other quantum electrodynamical effects.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
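The paper's implementation is the R package bmem; purely as an illustration of the underlying idea, here is a minimal numpy sketch of Monte Carlo power estimation for the indirect effect a*b in a simple mediation model, using a percentile bootstrap on each simulated data set.

    import numpy as np

    rng = np.random.default_rng(42)

    def mediation_power(n, a, b, n_rep=200, n_boot=500, alpha=0.05):
        hits = 0
        for _ in range(n_rep):                      # Monte Carlo replications
            x = rng.normal(size=n)
            m = a * x + rng.normal(size=n)          # mediator
            y = b * m + rng.normal(size=n)          # outcome
            ab = np.empty(n_boot)
            for i in range(n_boot):                 # percentile bootstrap
                idx = rng.integers(0, n, n)
                xb, mb, yb = x[idx], m[idx], y[idx]
                a_hat = np.polyfit(xb, mb, 1)[0]
                # b path controls for x: regress y on (m, x, 1), keep m's coefficient
                Xmat = np.column_stack([mb, xb, np.ones(n)])
                b_hat = np.linalg.lstsq(Xmat, yb, rcond=None)[0][0]
                ab[i] = a_hat * b_hat
            lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
            hits += (lo > 0) or (hi < 0)            # CI excludes zero -> detected
        return hits / n_rep

    print(mediation_power(n=100, a=0.3, b=0.3))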
Artificial Intelligence Approach to Support Statistical Quality Control Teaching
ERIC Educational Resources Information Center
Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno
2006-01-01
Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…
Web-Based Statistical Sampling and Analysis
ERIC Educational Resources Information Center
Quinn, Anne; Larson, Karen
2016-01-01
Consistent with the Common Core State Standards for Mathematics (CCSSI 2010), the authors write that they have asked students to do statistics projects with real data. To obtain real data, their students use the free Web-based app, Census at School, created by the American Statistical Association (ASA) to help promote civic awareness among school…
The Necessity of the Hippocampus for Statistical Learning
Covington, Natalie V.; Brown-Schmidt, Sarah; Duff, Melissa C.
2018-01-01
Converging evidence points to a role for the hippocampus in statistical learning, but open questions about its necessity remain. Evidence for necessity comes from Schapiro and colleagues, who report that a single patient with damage to the hippocampus and broader medial temporal lobe cortex was unable to discriminate new from old sequences in several statistical learning tasks. The aim of the current study was to replicate these methods in a larger group of patients who have either damage localized to the hippocampus or broader medial temporal lobe damage, to ascertain the necessity of the hippocampus in statistical learning. Patients with hippocampal damage consistently showed less learning overall compared with healthy comparison participants, consistent with an emerging consensus for hippocampal contributions to statistical learning. Interestingly, lesion size did not reliably predict performance. However, patients with hippocampal damage were not uniformly at chance and demonstrated above-chance performance in some task variants. These results suggest that the hippocampus is necessary for the statistical learning levels achieved by most healthy comparison participants, but that significant hippocampal pathology alone does not abolish such learning. PMID:29308986
Magnet Marketing: Drawing Prospects to Your Center Until They Enroll.
ERIC Educational Resources Information Center
Wassom, Julie
1999-01-01
The key to achieving effective marketing of centers and services is to make messages creative, consistent, and continual. Creative marketing involves creating an awareness of the program. A consistent, identifiable image will create program recognition. Effective, continual marketing utilizes a mix of marketing methods on an ongoing basis to…
40 CFR 63.2872 - What definitions apply to this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... NESHAP General Provisions. (c) In this section as follows: Accounting month means a time interval defined... consistent and regular basis. An accounting month will consist of approximately 4 to 5 calendar weeks and each accounting month will be of approximately equal duration. An accounting month may not correspond...
Standards Handbook. Version 4.0. What Works Clearinghouse™
ERIC Educational Resources Information Center
What Works Clearinghouse, 2017
2017-01-01
The What Works Clearinghouse (WWC) systematic review process is the basis of many of its products, enabling the WWC to use consistent, objective, and transparent standards and procedures in its reviews, while also ensuring comprehensive coverage of the relevant literature. The WWC systematic review process consists of five steps: (1) Developing…
Ferenczy, György G
2013-04-05
The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable for deriving local basis nonorthogonal orbitals that minimize the energy of the system, and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate for use in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent to those of a related approach [G. G. Ferenczy, previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, deformation of the wave function near the boundary is observed without bond charges; this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two-layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.
50 CFR 403.04 - Determinations and hearings under section 109(c) of the MMPA.
Code of Federal Regulations, 2010 CFR
2010-10-01
... management program the state must provide for a process, consistent with section 109(c) of the Act, to... must include the elements set forth below. (b) Basis, purpose, and scope. The process set forth in this... made solely on the basis of the record developed at the hearing. The state agency in making its final...
Analysis of Statistical Methods Currently used in Toxicology Journals.
Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min
2014-09-01
Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample size less than 10, with the median and the mode being 6 and 3 & 6, respectively. Mean (105/113, 93%) was dominantly used to measure central tendency, and standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provide justifications regarding why the methods being selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance test. These results suggest that more consistent and appropriate use of statistical method is necessary which may enhance the role of toxicology in public health.
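As a minimal illustration of the practice the authors find lacking, the scipy sketch below runs normality and equal-variance checks before choosing between a one-way ANOVA and a nonparametric alternative (simulated groups of n = 6, echoing the modal sample size reported).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    groups = [rng.normal(loc=mu, scale=1.0, size=6) for mu in (0.0, 0.5, 1.5)]

    # Normality per group (Shapiro-Wilk) and homogeneity of variance (Levene)
    for i, g in enumerate(groups):
        print(f"group {i}: Shapiro-Wilk p = {stats.shapiro(g).pvalue:.3f}")
    print(f"Levene p = {stats.levene(*groups).pvalue:.3f}")

    # If assumptions hold, one-way ANOVA; otherwise, e.g., Kruskal-Wallis
    f, p = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
    print(f"Kruskal-Wallis p = {stats.kruskal(*groups).pvalue:.4f}")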
3D QSAR based design of novel oxindole derivative as 5HT7 inhibitors.
Chitta, Aparna; Sivan, Sree Kanth; Manga, Vijjulatha
2014-06-01
To understand the structural requirements of 5-hydroxytryptamine (5HT7) receptor inhibitors and to design new ligands against the 5HT7 receptor with enhanced inhibitory potency, a three-dimensional quantitative structure-activity relationship study with comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) was performed for a data set of 56 molecules consisting of oxindole, tetrahydronaphthalene, and aryl ketone substituted arylpiperazinealkylamide derivatives. The derived models showed good statistical reliability in predicting the 5HT7 inhibitory activity of the molecules, based on molecular property fields such as steric, electrostatic, hydrophobic, hydrogen bond donor, and hydrogen bond acceptor fields. This is evident from statistical parameters such as conventional r2 and cross-validated q2 values of 0.985 and 0.743 for CoMFA and 0.970 and 0.608 for CoMSIA, respectively. The predictive ability of the models to determine 5HT7 antagonistic activity was validated using a test set of 16 molecules that were not included in the training set. The predictive r2 obtained for the test set was 0.560 and 0.619 for CoMFA and CoMSIA, respectively. Steric and electrostatic fields contributed most toward activity, forming the basis for the design of new molecules. Absorption, distribution, metabolism and elimination (ADME) calculations using QikProp 2.5 (Schrodinger 2010, Portland, OR) reveal that the molecules conform to Lipinski's rule of five in the majority of cases.
Kong, Grace; Tsai, Jack; Krishnan-Sarin, Suchitra; Cavallo, Dana A.; Hoff, Rani A.; Steinberg, Marvin A.; Rugle, Loreen; Potenza, Marc N.
2015-01-01
Objectives To identify subtypes of adolescent gamblers based on the 10 Diagnostic and Statistical Manual of Mental Disorders, fourth edition criteria for pathological gambling and the 9 Diagnostic and Statistical Manual of Mental Disorders, fifth edition criteria for gambling disorder and to examine associations between identified subtypes with gambling, other risk behaviors, and health/functioning characteristics. Methods Using cross-sectional survey data from 10 high schools in Connecticut (N = 3901), we conducted latent class analysis to classify adolescents who reported past-year gambling into gambling groups on the basis of items from the Massachusetts Gambling Screen. Adolescents also completed questions assessing demographic information, substance use (cigarette, marijuana, alcohol, and other drugs), gambling behaviors (relating to gambling formats, locations, motivations, and urges), and health/functioning characteristics (eg, extracurricular activities, mood, aggression, and body mass index). Results The optimal solution consisted of 4 classes that we termed low-risk gambling (86.4%), at-risk chasing gambling (7.6%), at-risk negative consequences gambling (3.7%), and problem gambling (PrG) (2.3%). At-risk and PrG classes were associated with greater negative functioning and more gambling behaviors. Different patterns of associations between at-risk and PrG classes were also identified. Conclusions Adolescent gambling classifies into 4 classes, which are differentially associated with demographic, gambling patterns, risk behaviors, and health/functioning characteristics. Early identification and interventions for adolescent gamblers should be sensitive to the heterogeneity of gambling subtypes. PMID:25275877
Uterine Fibroid Embolization for Symptomatic Fibroids: Study at a Teaching Hospital in Kenya
Mutai, John Kiprop; Vinayak, Sudhir; Stones, William; Hacking, Nigel; Mariara, Charles
2015-01-01
Objective: Characterization of magnetic resonance imaging (MRI) features in women undergoing uterine fibroid embolization (UFE) and identification of clinical correlates in an African population. Materials and Methods: Patients with symptomatic fibroids selected to undergo UFE at the hospital formed the study population. The baseline MRI features, baseline symptom score, short-term imaging outcome, and mid-term symptom scores were analyzed for interval changes. Assessment of potential associations between short-term imaging features and mid-term symptom scores was also done. Results: UFE resulted in statistically significant reductions (P < 0.001) in dominant fibroid volume, uterine volume, and symptom severity scores of 43.7%, 40.1%, and 37.8%, respectively. Also, 59% of respondents had more than 10 fibroids. The predominant location of the dominant fibroid was intramural. No statistically significant association was found between clinical and radiological outcome. Conclusion: The response of uterine fibroids to embolization in the African population is not different from the findings reported in other studies from the West. The presence of multiple and large fibroids in this study is consistent with the case mix described in other studies of African-American populations. Patient counseling should emphasize the independence of volume reduction and symptom improvement. Though volume changes are relevant for the radiologist in understanding the evolution of the condition and identifying potential technical treatment failures, they should not be the main basis of evaluation of treatment success. PMID:25883858
COSMIC MICROWAVE BACKGROUND LIKELIHOOD APPROXIMATION FOR BANDED PROBABILITY DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gjerløw, E.; Mikkelsen, K.; Eriksen, H. K.
We investigate sets of random variables that can be arranged sequentially such that a given variable only depends conditionally on its immediate predecessor. For such sets, we show that the full joint probability distribution may be expressed exclusively in terms of uni- and bivariate marginals. Under the assumption that the cosmic microwave background (CMB) power spectrum likelihood only exhibits correlations within a banded multipole range, Δl_C, we apply this expression to two outstanding problems in CMB likelihood analysis. First, we derive a statistically well-defined hybrid likelihood estimator, merging two independent (e.g., low- and high-l) likelihoods into a single expression that properly accounts for correlations between the two. Applying this expression to the Wilkinson Microwave Anisotropy Probe (WMAP) likelihood, we verify that the effect of correlations on cosmological parameters in the transition region is negligible in terms of cosmological parameters for WMAP; the largest relative shift seen for any parameter is 0.06σ. However, because this may not hold for other experimental setups (e.g., for different instrumental noise properties or analysis masks), but must rather be verified on a case-by-case basis, we recommend our new hybridization scheme for future experiments for statistical self-consistency reasons. Second, we use the same expression to improve the convergence rate of the Blackwell-Rao likelihood estimator, reducing the required number of Monte Carlo samples by several orders of magnitude, and thereby extend it to high-l applications.
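Written out in our notation (assuming each variable depends conditionally only on its immediate predecessor), the identity underlying this construction is

    p(x_1,\dots,x_n) \;=\; p(x_1)\prod_{i=2}^{n} p(x_i \mid x_{i-1})
                     \;=\; \frac{\prod_{i=2}^{n} p(x_{i-1},x_i)}{\prod_{i=2}^{n-1} p(x_i)},

so the full joint distribution is indeed fixed by the uni- and bivariate marginals alone.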
Bae, Jong-Myon
2016-01-01
A common method for conducting a quantitative systematic review (QSR) for observational studies related to nutritional epidemiology is the "highest versus lowest intake" method (HLM), in which only the information concerning the effect size (ES) of the highest category of a food item is collected on the basis of its lowest category. However, in the interval collapsing method (ICM), a method suggested to enable a maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. A QSR for evaluating the citrus fruit intake and risk of pancreatic cancer and calculating the SES by using the HLM was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. No significant differences were observed in the directionality of SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more-consistent ES and SES, and narrower confidence intervals than the HLM. The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
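A minimal numpy sketch of the fixed-effect (inverse-variance) pooling used to compute the SES; the per-study log relative risks and standard errors below are hypothetical, not the values from the citrus fruit QSR.

    import numpy as np

    def fixed_effect_pool(log_es, se):
        # Inverse-variance weighted mean of log effect sizes
        w = 1.0 / np.asarray(se) ** 2
        pooled = np.sum(w * log_es) / np.sum(w)
        return pooled, np.sqrt(1.0 / np.sum(w))

    log_rr = np.log([0.82, 0.95, 0.74, 1.05])   # hypothetical per-study RRs
    se = np.array([0.15, 0.20, 0.25, 0.30])
    pooled, pse = fixed_effect_pool(log_rr, se)
    lo, hi = pooled - 1.96 * pse, pooled + 1.96 * pse
    print(f"SES (RR) = {np.exp(pooled):.2f}, 95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f}")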
Schramke, H; Meisgen, T J; Tewes, F J; Gomm, W; Roemer, E
2006-10-29
The mouse lymphoma thymidine kinase assay (MLA) has been optimized to quantitatively determine the in vitro mutagenicity of cigarette mainstream smoke particulate phase. To test whether the MLA is able to discriminate between different cigarette types, specially constructed cigarettes each containing a single tobacco type - Bright, Burley, or Oriental - were investigated. The mutagenic activity of the Burley cigarette was statistically significantly lower, up to approximately 40%, than that of the Bright and Oriental cigarettes. To determine the impact of two different sets of smoking conditions, American-blend cigarettes were smoked under US Federal Trade Commission/International Organisation for Standardisation conditions and under Massachusetts Department of Public Health (MDPH) conditions. Conventional cigarettes - eight from the US commercial market plus the Reference Cigarettes 1R4F and 2R4F - and an electrically heated cigarette smoking system (EHCSS) prototype were tested. There were no statistically significant differences between the two sets of smoking conditions on a per mg total particulate matter basis, although there was a consistent trend towards slightly lower mutagenic activity under MDPH conditions. The mutagenic activity of the EHCSS prototype was distinctly lower than that of the conventional cigarettes under both sets of smoking conditions. These results show that the MLA can be used to assess and compare the mutagenic activity of cigarette mainstream smoke particulate phase in the comprehensive toxicological assessment of cigarette smoke.
Self-assessed performance improves statistical fusion of image labels
Bryan, Frederick W.; Xu, Zhoubing; Asman, Andrew J.; Allen, Wade M.; Reich, Daniel S.; Landman, Bennett A.
2014-01-01
Purpose: Expert manual labeling is the gold standard for image segmentation, but this process is difficult, time-consuming, and prone to inter-individual differences. While fully automated methods have successfully targeted many anatomies, automated methods have not yet been developed for numerous essential structures (e.g., the internal structure of the spinal cord as seen on magnetic resonance imaging). Collaborative labeling is a new paradigm that offers a robust alternative that may realize both the throughput of automation and the guidance of experts. Yet, distributing manual labeling expertise across individuals and sites introduces potential human factors concerns (e.g., training, software usability) and statistical considerations (e.g., fusion of information, assessment of confidence, bias) that must be further explored. During the labeling process, it is simple to ask raters to self-assess the confidence of their labels, but this is rarely done and has not been previously quantitatively studied. Herein, the authors explore the utility of self-assessment in relation to automated assessment of rater performance in the context of statistical fusion. Methods: The authors conducted a study of 66 volumes manually labeled by 75 minimally trained human raters recruited from the university undergraduate population. Raters were given 15 min of training during which they were shown examples of correct segmentation, and the online segmentation tool was demonstrated. The volumes were labeled 2D slice-wise, and the slices were unordered. A self-assessed quality metric was produced by raters for each slice by marking a confidence bar superimposed on the slice. Volumes produced by both voting and statistical fusion algorithms were compared against a set of expert segmentations of the same volumes. Results: Labels for 8825 distinct slices were obtained. Simple majority voting resulted in statistically poorer performance than voting weighted by self-assessed performance. Statistical fusion resulted in statistically indistinguishable performance from self-assessed weighted voting. The authors developed a new theoretical basis for using self-assessed performance in the framework of statistical fusion and demonstrated that the combined sources of information (both statistical assessment and self-assessment) yielded statistically significant improvement over the methods considered separately. Conclusions: The authors present the first systematic characterization of self-assessed performance in manual labeling. The authors demonstrate that self-assessment and statistical fusion yield similar, but complementary, benefits for label fusion. Finally, the authors present a new theoretical basis for combining self-assessments with statistical label fusion. PMID:24593721
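A minimal numpy sketch contrasting simple majority voting with voting weighted by self-assessed performance, on toy binary labels; for simplicity the self-assessments are idealized as equal to each rater's true accuracy, an assumption the study does not make.

    import numpy as np

    rng = np.random.default_rng(3)
    truth = rng.integers(0, 2, size=1000)               # ground-truth labels
    skill = np.array([0.95, 0.85, 0.70, 0.60, 0.55])    # per-rater accuracy

    # Each rater reports the true label with probability equal to their skill
    labels = np.where(rng.random((5, 1000)) < skill[:, None], truth, 1 - truth)
    conf = np.broadcast_to(skill[:, None], labels.shape)  # idealized self-assessment

    majority = (labels.mean(axis=0) > 0.5).astype(int)
    # Weighted vote: each rater's label counts in proportion to self-assessment
    weighted = ((conf * labels).sum(axis=0) / conf.sum(axis=0) > 0.5).astype(int)

    print("majority accuracy:", (majority == truth).mean())
    print("weighted accuracy:", (weighted == truth).mean())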
NASA Astrophysics Data System (ADS)
Bae, Minja; Park, Jihyun; Kim, Jongju; Xue, Dandan; Park, Kyu-Chil; Yoon, Jong Rak
2016-07-01
The bit error rate of an underwater acoustic communication system is related to multipath fading statistics, which determine the signal-to-noise ratio. The amplitude and delay of each path depend on sea surface roughness, propagation medium properties, and source-to-receiver range as a function of frequency. Therefore, received signals will show frequency-dependent fading. A shallow-water acoustic communication channel generally shows a few strong multipaths that interfere with each other and the resulting interference affects the fading statistics model. In this study, frequency-selective fading statistics are modeled on the basis of the phasor representation of the complex path amplitude. The fading statistics distribution is parameterized by the frequency-dependent constructive or destructive interference of multipaths. At a 16 m depth with a muddy bottom, a wave height of 0.2 m, and source-to-receiver ranges of 100 and 400 m, fading statistics tend to show a Rayleigh distribution at a destructive interference frequency, but a Rice distribution at a constructive interference frequency. The theoretical fading statistics well matched the experimental ones.
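A minimal numpy simulation of the phasor model behind these statistics: a diffuse complex-Gaussian multipath sum alone yields a Rayleigh envelope (the destructive-frequency case), while adding a dominant component yields a Rice envelope (the constructive-frequency case). The amplitudes are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 100_000, 1.0
    # Diffuse multipath: complex Gaussian phasor sum
    diffuse = rng.normal(0, sigma, n) + 1j * rng.normal(0, sigma, n)

    for A in (0.0, 3.0):                     # A = 0 -> Rayleigh; A > 0 -> Rice
        env = np.abs(A + diffuse)            # envelope of dominant + diffuse paths
        K = A**2 / (2 * sigma**2)            # Rice K-factor (dominant/diffuse power)
        print(f"A={A}: K={K:.1f}, mean={env.mean():.2f}, std={env.std():.2f}")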
NASA Astrophysics Data System (ADS)
Girard, L.; Weiss, J.; Molines, J. M.; Barnier, B.; Bouillon, S.
2009-08-01
Sea ice drift and deformation from models are evaluated on the basis of statistical and scaling properties. These properties are derived from two observation data sets: the RADARSAT Geophysical Processor System (RGPS) and buoy trajectories from the International Arctic Buoy Program (IABP). Two simulations obtained with the Louvain-la-Neuve Ice Model (LIM) coupled to a high-resolution ocean model and a simulation obtained with the Los Alamos Sea Ice Model (CICE) were analyzed. Model ice drift compares well with observations in terms of large-scale velocity field and distributions of velocity fluctuations, although a significant bias on the mean ice speed is noted. On the other hand, the statistical properties of ice deformation are not well simulated by the models: (1) The distributions of strain rates are incorrect: RGPS distributions of strain rates are power law tailed, i.e., exhibit "wild randomness," whereas model distributions remain in the Gaussian attraction basin, i.e., exhibit "mild randomness." (2) The models are unable to reproduce the spatial and temporal correlations of the deformation fields: In the observations, ice deformation follows spatial and temporal scaling laws that express the heterogeneity and the intermittency of deformation. These relations do not appear in simulated ice deformation. Mean deformation in models is almost scale independent. The statistical properties of ice deformation are a signature of the ice mechanical behavior. The present work therefore suggests that the mechanical framework currently used by models is inappropriate. A different modeling framework based on elastic interactions could improve the representation of the statistical and scaling properties of ice deformation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawnsley, K.; Swaby, P.
1996-08-01
It is increasingly acknowledged that in order to understand and forecast the behavior of fracture influenced reservoirs we must attempt to reproduce the fracture system geometry and use this as a basis for fluid flow calculation. This article aims to present a recently developed fracture modelling prototype designed specifically for use in hydrocarbon reservoir environments. The prototype "FRAME" (FRActure Modelling Environment) aims to provide a tool which will allow the generation of realistic 3D fracture systems within a reservoir model, constrained to the known geology of the reservoir by both mechanical and statistical considerations, and which can be used as a basis for fluid flow calculation. Two newly developed modelling techniques are used. The first is an interactive tool which allows complex fault surfaces and their associated deformations to be reproduced. The second is a "genetic" model which grows fracture patterns from seeds using conceptual models of fracture development. The user defines the mechanical input and can retrieve all the statistics of the growing fractures to allow comparison to assumed statistical distributions for the reservoir fractures. Input parameters include growth rate, fracture interaction characteristics, orientation maps and density maps. More traditional statistical stochastic fracture models are also incorporated. FRAME is designed to allow the geologist to input hard or soft data including seismically defined surfaces, well fractures, outcrop models, analogue or numerical mechanical models or geological "feeling". The geologist is not restricted to "a priori" models of fracture patterns that may not correspond to the data.
PV System Component Fault and Failure Compilation and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
Photon-number statistics of twin beams: Self-consistent measurement, reconstruction, and properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peřina, Jan Jr.; Haderka, Ondřej; Michálek, Václav
2014-12-04
A method for the determination of photon-number statistics of twin beams using the joint signal-idler photocount statistics obtained by an iCCD camera is described. It also provides the absolute quantum detection efficiency of the camera. Using the measured photocount statistics, quasi-distributions of integrated intensities are obtained. They attain negative values occurring in characteristic strips as a consequence of the pairing of photons in twin beams.
2016-06-01
Table 2. Summary of Statistics from GGSS Data. Table 3. Summary of Statistics from ... similar approach are unsurprisingly quite consistent in outcomes within statistical variance. The model is used to estimate the effects of exogenous ... of German residents (~82 million), excluding diplomats, foreign military and homeless persons (German Federal Office of Statistics, 2013, p. 475).
Mohammed A. Kalkhan; Robin M. Reich; Raymond L. Czaplewski
1996-01-01
A Monte Carlo simulation was used to evaluate the statistical properties of measures of association and the Kappa statistic under double sampling with replacement. Three error matrices were used, representing three levels of classification accuracy of Landsat TM data consisting of four forest cover types in North Carolina. The overall accuracy of the five indices ranged from 0.35...
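A minimal numpy sketch of the Kappa computation from an error (confusion) matrix, with a multinomial Monte Carlo resampling step in the spirit of the simulation described; the 4-class matrix is hypothetical, not the North Carolina data.

    import numpy as np

    def kappa(cm):
        # Cohen's kappa from a square error matrix of counts
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                                   # observed agreement
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
        return (po - pe) / (1 - pe)

    cm = np.array([[50, 3, 2, 1],     # hypothetical counts for four cover types
                   [4, 40, 5, 2],
                   [2, 6, 35, 4],
                   [1, 2, 3, 45]])
    print(f"kappa = {kappa(cm):.3f}")

    # Monte Carlo: resample cell counts multinomially to estimate kappa's spread
    rng = np.random.default_rng(0)
    p = cm.flatten() / cm.sum()
    sims = [kappa(rng.multinomial(int(cm.sum()), p).reshape(4, 4)) for _ in range(2000)]
    print(f"simulated SE = {np.std(sims):.3f}")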
Langford, Jim; Fitzharris, Michael; Koppel, Sjaanie; Newstead, Stuart
2004-12-01
Most licensing jurisdictions in Australia maintain mandatory assessment programs targeting older drivers, whereby a driver reaching a specified age is required to prove his or her fitness to drive through medical assessment and/or on-road testing. Previous studies both in Australia and elsewhere have consistently failed to demonstrate that age-based mandatory assessment results in reduced crash involvement for older drivers. However studies that have based their results upon either per-population or per-driver crash rates fail to take into account possible differences in driving activity. Because some older people maintain their driving licenses but rarely if ever drive, the proportion of inactive license-holders might be higher in jurisdictions without mandatory assessment relative to jurisdictions with periodic license assessment, where inactive drivers may more readily either surrender or lose their licenses. The failure to control for possible differences in driving activity across jurisdictions may be disguising possible safety benefits associated with mandatory assessment. The current study compared the crash rates of drivers in Melbourne, Australia, where there is no mandatory assessment and Sydney, Australia, where there is regular mandatory assessment from 80 years of age onward. The crash rate comparisons were based on four exposure measures: per population, per licensed driver, per distance driven, and per time spent driving. Poisson regression analysis incorporating an offset to control for inter-jurisdictional road safety differences indicated that there was no difference in crash risk for older drivers based on population. However drivers aged 80 years and older in the Sydney region had statistically higher rates of casualty crash involvement than their Melbourne counterparts on a per license issued basis (RR: 1.15, 1.02-1.29, p=0.02) and time spent driving basis (RR: 1.19, 1.06-1.34, p=0.03). A similar trend was apparent based on distance travelled but was of borderline statistical significance (RR: 1.11, 0.99-1.25, p=0.07). Collectively, it can be inferred from these findings that mandatory license re-testing schemes of the type evaluated have no demonstrable road safety benefits overall. Further research to resolve this on-going policy debate is discussed and recommended.
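A minimal statsmodels sketch of the analysis style described: Poisson regression of crash counts with a log-exposure offset, giving a rate ratio between jurisdictions. The counts and exposures are toy values, not the study's data.

    import numpy as np
    import statsmodels.api as sm

    # Toy data: crash counts, exposure (e.g., million km driven), jurisdiction flag
    crashes = np.array([120, 135, 80, 95])
    exposure = np.array([1000.0, 950.0, 700.0, 720.0])
    sydney = np.array([0.0, 1.0, 0.0, 1.0])   # 1 = mandatory-assessment jurisdiction

    X = sm.add_constant(sydney)
    fit = sm.GLM(crashes, X, family=sm.families.Poisson(),
                 offset=np.log(exposure)).fit()
    rr = np.exp(fit.params[1])                # rate ratio, Sydney vs Melbourne
    lo, hi = np.exp(fit.conf_int()[1])        # 95% CI on the rate ratio
    print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")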
Trace Element Study of H Chondrites: Evidence for Meteoroid Streams.
NASA Astrophysics Data System (ADS)
Wolf, Stephen Frederic
1993-01-01
Multivariate statistical analyses, both linear discriminant analysis and logistic regression, of the volatile trace elemental concentrations in H4-6 chondrites reveal compositionally distinguishable subpopulations. The observed difference in volatile trace element composition between Antarctic and non-Antarctic H4-6 chondrites (Lipschutz and Samuels, 1991) can be explained by a compositionally distinct subpopulation found in Victoria Land, Antarctica. This population of H4-6 chondrites is compositionally distinct from non-Antarctic H4-6 chondrites and from Antarctic H4-6 chondrites from Queen Maud Land. Comparisons of Queen Maud Land H4-6 chondrites with non-Antarctic H4-6 chondrites do not give reason to believe that these two populations are distinguishable from each other on the basis of the ten volatile trace element concentrations measured. ANOVA indicates that these differences are not the result of trivial causes such as weathering and analytical bias. Thermoluminescence properties of these populations parallel the results of volatile trace element comparisons. Given the differences in terrestrial age between Victoria Land, Queen Maud Land, and modern H4-6 chondrite falls, these results are consistent with a variation in H4-6 chondrite flux on a 300 ky timescale. This conclusion requires the existence of co-orbital meteoroid streams. Statistical analyses of the volatile trace elemental concentrations in non-Antarctic modern falls of H4-6 chondrites also demonstrate that a group of 13 H4-6 chondrites, Cluster 1, selected exclusively for their distinct fall parameters (Dodd, 1992), is compositionally distinguishable from a control group of 45 non-Antarctic modern H4-6 chondrites on the basis of the ten volatile trace element concentrations measured. Model-independent randomization-simulations based on both linear discriminant analysis and logistic regression verify these results. While ANOVA identifies two possible causes for this difference, analytical bias and group classification, a test validation experiment verifies that group classification is the more significant cause of compositional difference between Cluster 1 and non-Cluster 1 modern H4-6 chondrite falls. Thermoluminescence properties of these populations parallel the results of volatile trace element comparisons. This suggests that these meteorites are fragments of a co-orbital meteorite stream derived from a single parent body.
A Synergy Cropland of China by Fusing Multiple Existing Maps and Statistics.
Lu, Miao; Wu, Wenbin; You, Liangzhi; Chen, Di; Zhang, Li; Yang, Peng; Tang, Huajun
2017-07-12
Accurate information on cropland extent is critical for scientific research and resource management. Several cropland products from remotely sensed datasets are available. Nevertheless, significant inconsistency exists among these products and the cropland areas estimated from these products differ considerably from statistics. In this study, we propose a hierarchical optimization synergy approach (HOSA) to develop a hybrid cropland map of China, circa 2010, by fusing five existing cropland products, i.e., GlobeLand30, Climate Change Initiative Land Cover (CCI-LC), GlobCover 2009, MODIS Collection 5 (MODIS C5), and MODIS Cropland, and sub-national statistics of cropland area. HOSA simplifies the widely used method of score assignment into two steps, including determination of optimal agreement level and identification of the best product combination. The accuracy assessment indicates that the synergy map has higher accuracy of spatial locations and better consistency with statistics than the five existing datasets individually. This suggests that the synergy approach can improve the accuracy of cropland mapping and enhance consistency with statistics.
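A toy version of the two-step idea, hedged heavily: the real HOSA works with sub-national statistics and validation samples, whereas this sketch simply searches product combinations and agreement levels for the best match to a single illustrative statistical area figure:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
names = ["GlobeLand30", "CCI-LC", "GlobCover", "MODIS_C5", "MODIS_Cropland"]
# Five invented binary cropland maps on a shared 100 x 100 grid (1 = cropland).
maps = {nm: (rng.random((100, 100)) < p).astype(int)
        for nm, p in zip(names, [0.35, 0.30, 0.40, 0.25, 0.30])}
cell_area = 1.0      # km^2 per cell (illustrative)
stat_area = 3000.0   # cropland area reported by statistics (illustrative)

best = None
for k in range(1, len(names) + 1):
    for combo in combinations(names, k):         # candidate product combinations
        agreement = sum(maps[nm] for nm in combo)  # votes per cell
        for level in range(1, k + 1):            # candidate agreement levels
            area = (agreement >= level).sum() * cell_area
            err = abs(area - stat_area)
            if best is None or err < best[0]:
                best = (err, combo, level, area)
print("best combination:", best[1], "agreement >=", best[2], "area:", best[3])
```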
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, H.; Wang, M.; Elgowainy, A.
Greenhouse gas (CO2, CH4, and N2O, hereinafter GHG) and criteria air pollutant (CO, NOx, VOC, PM10, PM2.5, and SOx, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional, and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs) that describe the relative likelihood of the emission factors and EGU efficiencies, treated as random variables, taking on given values are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.
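One way such best-fit PDFs can be produced is shown below, only as an assumption-laden sketch with simulated emission factors; the GREET update itself may use different candidate families and fitting criteria:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Invented NOx emission factors (g/kWh) sampled across generating units.
samples = rng.lognormal(mean=0.0, sigma=0.4, size=500)

# Fit a few candidate families and keep the best by the Kolmogorov-Smirnov
# statistic, one simple stand-in for choosing a "best-fit" curve.
best_name, best_ks, best_params = None, np.inf, None
for name in ["lognorm", "gamma", "weibull_min"]:
    dist = getattr(stats, name)
    params = dist.fit(samples)
    ks = stats.kstest(samples, name, args=params).statistic
    if ks < best_ks:
        best_name, best_ks, best_params = name, ks, params
print(f"best family: {best_name}, KS statistic: {best_ks:.3f}")
```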
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Annotated Bibliography of Relative Sea Level Change
1991-09-01
Millennial Basis, Mörner and Karlén, eds., D. Reidel Publishing Company, pp. 571-604. Over the last few years, considerable attention has been given to...historic erosion rates. Curiously, the ability to model statistically the historic shore erosion rate is best on those reaches already substantially
Network Motif Basis of Threshold Responses
There has been a long-running debate over the existence of thresholds for adverse effects. The difficulty stems from two fundamental challenges: (i) statistical analysis by itself cannot prove the existence of a threshold, i.e., a dose below which there is no effect; and (ii) the...
Sensing system development for HOV/HOT (high occupancy vehicle) lane monitoring.
DOT National Transportation Integrated Search
2011-02-01
With continued interest in the efficient use of roadways, the ability to monitor the use of HOV/HOT lanes is essential for management, planning, and operation. A system to reliably monitor these lanes on a continuous basis and provide usage statistics ...
Topics in Measurement: Reliability and Validity.
ERIC Educational Resources Information Center
Dick, Walter; Hagerty, Nancy
This text was developed on an autoinstructional basis to familiarize the reader with the various interpretations of reliability and validity, their measurement and evaluation, and factors influencing their measurement. The text enables those with prior knowledge of statistics to increase their understanding of variance and correlation. Review…
Tsatsarelis, Thomas; Antonopoulos, Ioannis; Karagiannidis, Avraam; Perkoulidis, George
2007-10-01
This study presents an assessment of the current status of open dumps in Laconia prefecture of Peloponnese in southern Greece, where all open dumps are targeted for closure by 2008. An extensive field survey was conducted in 2005 to register existing sites in the prefecture. The data collected included the site area and age, waste depth, type of disposed waste, distance from nearest populated area, local geographical features and observed practices of open burning and soil coverage. On the basis of the collected data, a GIS database was developed, and the above parameters were statistically analysed. Subsequently, a decision tool for the restoration of open dumps was implemented, which led to the prioritization of site restorations and specific decisions about appropriate restoration steps for each site. The sites requiring restoration were then further classified using Principal Component Analysis, in order to categorize them into groups suitable for similar restoration work, thus facilitating fund allocation and subsequent restoration project management.
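A compressed sketch of the final classification step, with invented survey variables standing in for the registered site attributes; PCA reduces the correlated measurements before grouping sites into similar restoration classes:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
sites = 60
# Invented survey table: area (m^2), age (yr), waste depth (m), distance (km).
X = np.column_stack([
    rng.lognormal(8.0, 1.0, sites),
    rng.integers(5, 40, sites),
    rng.gamma(2.0, 2.0, sites),
    rng.gamma(2.0, 1.5, sites),
])
Z = StandardScaler().fit_transform(X)          # put variables on a common scale
scores = PCA(n_components=2).fit_transform(Z)  # compress correlated measurements
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print("sites per restoration group:", np.bincount(groups))
```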
NASA Astrophysics Data System (ADS)
Goyal, Sandeep K.; Singh, Rajeev; Ghosh, Sibasish
2016-01-01
Mixed states of a quantum system, represented by density operators, can be decomposed as a statistical mixture of pure states in a number of ways, where each decomposition can be viewed as a different preparation recipe. However, the fact that the density matrix contains full information about the ensemble makes it impossible to estimate the preparation basis for the quantum system. Here we present a measurement scheme to (seemingly) improve the performance of unsharp measurements. We argue that in some situations this scheme is capable of providing statistics from a single copy of the quantum system, thus making it possible to perform state tomography from a single copy. One of the by-products of the scheme is a way to distinguish between different preparation methods used to prepare the state of the quantum system. However, our numerical simulations disagree with our intuitive predictions. We show that a counterintuitive property of a biased classical random walk is responsible for the proposed mechanism not working.
Basis function models for animal movement
Hooten, Mevin B.; Johnson, Devin S.
2017-01-01
Advances in satellite-based data collection techniques have served as a catalyst for new statistical methodology to analyze these data. In wildlife ecological studies, satellite-based data and methodology have provided a wealth of information about animal space use and the investigation of individual-based animal–environment relationships. With the technology for data collection improving dramatically over time, we are left with massive archives of historical animal telemetry data of varying quality. While many contemporary statistical approaches for inferring movement behavior are specified in discrete time, we develop a flexible continuous-time stochastic integral equation framework that is amenable to reduced-rank second-order covariance parameterizations. We demonstrate how the associated first-order basis functions can be constructed to mimic behavioral characteristics in realistic trajectory processes using telemetry data from mule deer and mountain lion individuals in western North America. Our approach is parallelizable and provides inference for heterogeneous trajectories using nonstationary spatial modeling techniques that are feasible for large telemetry datasets. Supplementary materials for this article are available online.
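The reduced-rank idea can be illustrated with a far simpler stand-in than the paper's stochastic integral equation framework: here a noisy two-dimensional track is smoothed by regressing positions on a small set of Gaussian time-basis functions (the basis choice and all values are illustrative):

```python
import numpy as np

# A trajectory as a reduced-rank basis expansion: position(t) = sum_k beta_k * phi_k(t).
# Gaussian radial basis functions in time are one common choice (an assumption here).
def rbf_design(t, knots, scale):
    return np.exp(-0.5 * ((t[:, None] - knots[None, :]) / scale) ** 2)

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 400)                        # observation times
truth = np.column_stack([np.sin(2 * np.pi * t), np.cos(3 * np.pi * t)])
obs = truth + rng.normal(scale=0.05, size=truth.shape)  # noisy telemetry fixes

knots = np.linspace(0, 1, 15)                     # 15 basis functions << 400 obs
Phi = rbf_design(t, knots, scale=0.06)
beta, *_ = np.linalg.lstsq(Phi, obs, rcond=None)  # reduced-rank smooth of the path
path_hat = Phi @ beta
print("RMSE:", np.sqrt(((path_hat - truth) ** 2).mean()))
```

The rank reduction (15 coefficients per coordinate instead of 400 observations) is what makes such models tractable for large telemetry archives.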
Kapalková, Svetlana; Slančová, Daniela
2017-01-01
This study compared a sample of children with primary language impairment (PLI) with typically developing age-matched children using the crosslinguistic lexical tasks (CLT-SK). We also compared the PLI children with typically developing language-matched younger children who were matched on the basis of receptive vocabulary. Overall, statistical testing showed that the vocabulary of the PLI children was significantly different from the vocabulary of the age-matched children, but not statistically different from the younger children who were matched on the basis of their receptive vocabulary size. Qualitative analysis of the correct answers revealed that the PLI children showed higher rigidity compared to the younger language-matched children, who were able to use more synonyms or derivations across word class in naming tasks. Similarly, an examination of the children's naming errors indicated that the language-matched children exhibited more semantic errors, whereas PLI children showed more associative errors.
NETWORK ASSISTED ANALYSIS TO REVEAL THE GENETIC BASIS OF AUTISM
Liu, Li; Lei, Jing; Roeder, Kathryn
2016-01-01
While studies show that autism is highly heritable, the nature of the genetic basis of this disorder remains elusive. Based on the idea that highly correlated genes are functionally interrelated and more likely to affect risk, we develop a novel statistical tool to find more potential autism risk genes by combining the genetic association scores with gene co-expression in specific brain regions and periods of development. The gene dependence network is estimated using a novel partial neighborhood selection (PNS) algorithm, where node-specific properties are incorporated into network estimation for improved statistical and computational efficiency. Then we adopt a hidden Markov random field (HMRF) model to combine the estimated network and the genetic association scores in a systematic manner. The proposed modeling framework can be naturally extended to incorporate additional structural information concerning the dependence between genes. Using currently available genetic association data from whole exome sequencing studies and brain gene expression levels, the proposed algorithm successfully identified 333 genes that plausibly affect autism risk. PMID:27134692
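The PNS algorithm itself is specific to the paper, but its starting point, neighborhood selection for a gene network, can be sketched in its classical lasso form; the node-specific weighting that defines PNS is deliberately omitted here:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
n, p = 200, 30                    # expression samples, genes (synthetic)
X = rng.normal(size=(n, p))
X[:, 1] += 0.8 * X[:, 0]          # induce a few co-expression edges
X[:, 2] += 0.8 * X[:, 0]

# Classical neighborhood selection: regress each gene on all others with a
# lasso; nonzero coefficients propose edges in the dependence network.
edges = set()
for j in range(p):
    others = np.delete(np.arange(p), j)
    coef = LassoCV(cv=5).fit(X[:, others], X[:, j]).coef_
    for k, c in zip(others, coef):
        if abs(c) > 1e-3:
            edges.add(tuple(sorted((j, k))))
print(len(edges), "candidate co-expression edges")
```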
International Observe the Moon Night: An Effective Model for Public Engagement with NASA Content
NASA Technical Reports Server (NTRS)
Bleacher, L. V.; Jones, A. J. P.; Shaner, A.; Day, B.; Buxner, S.; Wegner, M.
2015-01-01
International Observe the Moon Night (InOMN) is an annual worldwide public engagement event designed with the goal of inspiring the public to want to learn more about NASA's contributions to planetary science and exploration, using the Earth's Moon as an entryway, and to provide connections to do so [1,2,3]. InOMN will celebrate its 6th anniversary on September 19, 2015. Registration statistics from the past five years show an average of 500 InOMN events are held in 50 countries and 45 U.S. states per year (Figure 1), with over half of the events occurring outside the U.S. Host survey data indicate that approximately 55,000 to 75,000 people participate in InOMN events each year. The consistent hosting of InOMN events across the U.S. and around the world indicates an interest by hosts in sharing lunar and planetary science with their local communities, as well as connecting with a larger international group of fellow space enthusiasts on an annual basis.
On the problem of boundaries and scaling for urban street networks
Masucci, A. Paolo; Arcaute, Elsa; Hatna, Erez; Stanilov, Kiril; Batty, Michael
2015-01-01
Urban morphology has presented significant intellectual challenges to mathematicians and physicists ever since the eighteenth century, when Euler first explored the famous Königsberg bridges problem. Many important regularities and scaling laws have been observed in urban studies, including Zipf's law and Gibrat's law, rendering cities attractive systems for analysis within statistical physics. Nevertheless, a broad consensus on how cities and their boundaries are defined is still lacking. Applying an elementary clustering technique to the street intersection space, we show that growth curves for the maximum cluster size of the largest cities in the UK and in California collapse to a single curve, namely the logistic. Subsequently, by introducing the concept of the condensation threshold, we show that natural boundaries of cities can be well defined in a universal way. This allows us to study and discuss systematically some of the regularities that are present in cities. We show that some scaling laws present consistent behaviour in space and time, thus suggesting the presence of common principles at the basis of the evolution of urban systems. PMID:26468071
Quantum interference in plasmonic circuits.
Heeres, Reinier W; Kouwenhoven, Leo P; Zwiller, Valery
2013-10-01
Surface plasmon polaritons (plasmons) are a combination of light and a collective oscillation of the free electron plasma at metal/dielectric interfaces. This interaction allows subwavelength confinement of light beyond the diffraction limit inherent to dielectric structures. As a result, the intensity of the electromagnetic field is enhanced, with the possibility to increase the strength of the optical interactions between waveguides, light sources and detectors. Plasmons maintain non-classical photon statistics and preserve entanglement upon transmission through thin, patterned metallic films or weakly confining waveguides. For quantum applications, it is essential that plasmons behave as indistinguishable quantum particles. Here we report on a quantum interference experiment in a nanoscale plasmonic circuit consisting of an on-chip plasmon beamsplitter with integrated superconducting single-photon detectors to allow efficient single plasmon detection. We demonstrate a quantum-mechanical interaction between pairs of indistinguishable surface plasmons by observing Hong-Ou-Mandel (HOM) interference, a hallmark non-classical interference effect that is the basis of linear optics-based quantum computation. Our work shows that it is feasible to shrink quantum optical experiments to the nanoscale and offers a promising route towards subwavelength quantum optical networks.
Multidisciplinary chronic pain management in a rural Canadian setting.
Burnham, Robert; Day, Jeremiah; Dudley, Wallace
2010-01-01
Chronic pain is prevalent, complex and most effectively treated by a multidisciplinary team, particularly if psychosocial issues are dominant. The limited access to and high costs of such services are often prohibitive for the rural patient. We describe the development and 18-month outcomes of a small multidisciplinary chronic pain management program run out of a physician's office in rural Alberta. The multidisciplinary team consisted of a family physician, physiatrist, psychologist, physical therapist, kinesiologist, nurse and dietician. The allied health professionals were involved on a part-time basis. The team triaged referral information and patients underwent either a spine or medical care assessment. Based on the findings of the assessment, the team managed the care of patients using 1 of 4 methods: consultation only, interventional spine care, supervised medication management or full multidisciplinary management. We prospectively and serially recorded self-reported measures of pain and disability for the supervised medication management and full multidisciplinary components of the program. Patients achieved clinically and statistically significant improvements in pain and disability. Successful multidisciplinary chronic pain management services can be provided in a rural setting.
Computational modeling of cardiovascular response to orthostatic stress
NASA Technical Reports Server (NTRS)
Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.
2002-01-01
The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (≤ 5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
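For readers unfamiliar with lumped-parameter circulation models, a deliberately tiny fragment of the genre is sketched below: a two-element Windkessel integrated with forward Euler. The study's closed-loop model with baroreflex control is far more elaborate; all parameter values here are illustrative:

```python
import numpy as np

# Two-element Windkessel: arterial compliance C and peripheral resistance R
# driven by a pulsatile cardiac outflow Q(t); C dP/dt = Q - P/R.
C, R = 1.5, 1.0            # mL/mmHg and mmHg*s/mL (invented values)
hr = 70 / 60               # heart rate in beats per second
dt, T = 1e-3, 10.0
t = np.arange(0, T, dt)

def cardiac_outflow(tt):
    phase = (tt * hr) % 1.0
    # Half-sine ejection during the first 30% of each beat, zero otherwise.
    return np.where(phase < 0.3, 350.0 * np.sin(np.pi * phase / 0.3), 0.0)

P = np.empty_like(t)
P[0] = 80.0
for i in range(1, len(t)):
    dP = (cardiac_outflow(t[i - 1]) - P[i - 1] / R) / C
    P[i] = P[i - 1] + dt * dP

tail = P[len(t) // 2:]     # discard the initial transient
print(f"systolic ~{tail.max():.0f} mmHg, diastolic ~{tail.min():.0f} mmHg")
```

Stacking many such compartments in a loop and closing it with reflex control laws gives models of the kind described in the abstract.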
Optical recognition of statistical patterns
NASA Astrophysics Data System (ADS)
Lee, S. H.
1981-12-01
Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from large dimensional feature space to small dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer generated hologram. The output of the optical processor, consisting of the squared magnitude of the F-K coefficients, was detected by a T.V. camera, digitized, and fed into a micro-computer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real time processing.
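The digital core of the method, the Fukunaga-Koontz transform itself, can be sketched in a few lines of linear algebra: whiten the summed class covariance, then diagonalize one class in the whitened space, where eigenvalues near 1 or 0 mark the most discriminative basis functions (the data below are synthetic stand-ins for the bird and fish images):

```python
import numpy as np

rng = np.random.default_rng(7)
# Two hypothetical image classes as flattened 16-dimensional feature vectors.
X1 = rng.normal(size=(100, 16)) @ rng.normal(size=(16, 16))
X2 = rng.normal(size=(100, 16)) @ rng.normal(size=(16, 16))

S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)

# Whiten the summed covariance S1 + S2 ...
vals, vecs = np.linalg.eigh(S1 + S2)
W = vecs @ np.diag(vals ** -0.5)

# ... then diagonalize class 1 in the whitened space. The whitened class
# covariances sum to the identity, so an eigenvalue l for class 1 implies
# 1 - l for class 2: the same eigenvectors serve both classes, and those
# with l far from 0.5 are the most discriminative F-K basis functions.
lam, U = np.linalg.eigh(W.T @ S1 @ W)
basis = W @ U
order = np.argsort(np.abs(lam - 0.5))[::-1]
print("top two F-K eigenvalues:", lam[order[:2]])
```

In the optical implementation described above, the dominant basis functions were encoded in a computer generated hologram rather than applied digitally.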
Job attitudes, job satisfaction, and job affect: A century of continuity and of change.
Judge, Timothy A; Weiss, Howard M; Kammeyer-Mueller, John D; Hulin, Charles L
2017-03-01
Over the past 100 years, research on job attitudes has improved in the sophistication of methods and in the productive use of theory as a basis for fundamental research into questions of work psychology. Early research incorporated a diversity of methods for measuring potential predictors and outcomes of job attitudes. Over time, methods for statistically assessing these relationships became more rigorous, but the field also became narrower. In recent years, developments in theory and methodology have reinvigorated research, which now addresses a rich panoply of topics related to the daily flow of affect, the complexity of personal motives and dispositions, and the complex interplay of attitude objects and motivation in shaping behavior. Despite these apparent changes, the concepts and substantive arguments that underpin this literature have remained remarkably consistent. We conclude by discussing how we expect that these major themes will be addressed in the future, emphasizing topics that have proven to be enduring guides for understanding the ways that people construe and react to their appraisals of their work. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Zhang, Juwei; Tan, Xiaojiang; Zheng, Pengbo
2017-01-01
Electromagnetic methods are commonly employed to detect wire rope discontinuities. However, determining the residual strength of wire rope based on the quantitative recognition of discontinuities remains problematic. We have designed a prototype device based on the residual magnetic field (RMF) of ferromagnetic materials, which overcomes the disadvantages associated with in-service inspections, such as large volume, inconvenient operation, low precision, and poor portability by providing a relatively small and lightweight device with improved detection precision. A novel filtering system consisting of the Hilbert-Huang transform and compressed sensing wavelet filtering is presented. Digital image processing was applied to achieve the localization and segmentation of defect RMF images. The statistical texture and invariant moment characteristics of the defect images were extracted as the input of a radial basis function neural network. Experimental results show that the RMF device can detect defects in various types of wire rope and prolong the service life of test equipment by reducing the friction between the detection device and the wire rope by accommodating a high lift-off distance. PMID:28300790
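A compact sketch of the recognition stage under stated assumptions: Hu's invariant moments (one common choice of invariant-moment features; the paper's exact texture features are not reproduced) are extracted from synthetic defect patches and fed to a small radial basis function network with k-means centres and a linear readout:

```python
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)

def hu_features(img):
    # Hu's seven invariant moments of a segmented defect image.
    return moments_hu(moments_normalized(moments_central(img)))

def patch(defect):
    img = rng.random((32, 32)) * 0.1
    if defect:                       # bright blob standing in for a rope flaw
        r, c = rng.integers(8, 24, 2)
        img[r - 3:r + 3, c - 3:c + 3] += 1.0
    return img

X = np.array([hu_features(patch(i % 2 == 0)) for i in range(200)])
y = (np.arange(200) % 2 == 0).astype(float)
X = StandardScaler().fit_transform(X)

# A small RBF network: Gaussian hidden units at k-means centres, linear readout.
centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_

def hidden(Z, width=1.0):
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)
print("training accuracy:", ((hidden(X) @ w > 0.5) == (y > 0.5)).mean())
```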
QTL Mapping of Genome Regions Controlling Manganese Uptake in Lentil Seed.
Ates, Duygu; Aldemir, Secil; Yagmur, Bulent; Kahraman, Abdullah; Ozkan, Hakan; Vandenberg, Albert; Tanyolac, Muhammed Bahattin
2018-05-04
This study evaluated Mn concentration in the seeds of 120 RILs of lentil developed from the cross "CDC Redberry" × "ILL7502". Micronutrient analysis using atomic absorption spectrometry indicated mean seed manganese (Mn) concentrations ranging from 8.5 to 26.8 mg/kg, based on replicated field trials grown at three locations in Turkey in 2012 and 2013. A linkage map of lentil was constructed and consisted of seven linkage groups with 5,385 DNA markers. The total map length was 973.1 cM, with an average distance between markers of 0.18 cM. A total of 6 QTL for Mn concentration were identified using composite interval mapping (CIM). All QTL were statistically significant and explained 15.3-24.1% of the phenotypic variation, with LOD scores ranging from 3.00 to 4.42. The high-density genetic map reported in this study will increase fundamental knowledge of the genome structure of lentil, and will be the basis for the development of micronutrient-enriched lentil genotypes to support biofortification efforts. Copyright © 2018 Ates et al.
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Prata, A. J.
1996-03-01
Previous work suggests that general circulation (global climate) models have excess net radiation at land surfaces, apparently due to overestimates in downwelling shortwave flux and underestimates in upwelling longwave flux. Part of this excess, however, may be compensated for by an underestimate in downwelling longwave flux. Long-term observations of the downwelling longwave component at several land stations in Europe, the United States, Australia, and Antarctica suggest that climate models (four are used, as in previous studies) underestimate this flux component on an annual basis by up to 10 W m^-2, yet with low statistical significance. It is probable that the known underestimate in boundary-layer air temperature contributes to this, as would low model cloudiness and neglect of minor gases such as methane, nitrous oxide, and the freons. The bias in downwelling longwave flux, together with those found earlier for downwelling shortwave and upwelling longwave fluxes, is consistent with the model bias found previously for net radiation. All annually averaged fluxes and biases are deduced for global land as a whole.
Analysis of the performance of a wireless optical multi-input to multi-output communication system.
Bushuev, Denis; Arnon, Shlomi
2006-07-01
We investigate robust optical wireless communication in a highly scattering propagation medium using multielement optical detector arrays. The communication setup consists of synchronized multiple transmitters that send information to a receiver array and an atmospheric propagation channel. The mathematical model that best describes this scenario is multi-input to multi-output communication through stochastic, slowly changing channels. In this model, signals from m transmitters are received by n receiver-detectors. The channel transfer function matrix is G, of size n x m, where G(i,j) is the transfer function from transmitter j to detector i, and m ≥ n. We adopt a quasi-stationary approach in which the channel time variation has a negligible effect on communication performance over a burst. The G matrix is calculated on the basis of the optical transfer function of the atmospheric channel (composed of aerosol and turbulence elements) and the receiver's optics. In this work we derive a performance model using environmental data, such as documented turbulence and aerosol models and noise statistics. We also present the results of simulations conducted for the proposed detection algorithm.
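A minimal numerical rendering of the channel model, with an invented gain matrix and on-off keyed symbols; the detection step shown is a plain maximum-likelihood search under Gaussian noise, not the paper's proposed algorithm:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(9)
m, n = 6, 4                                 # m transmitters, n detectors, G is n x m
G = np.abs(rng.normal(0.5, 0.2, size=(n, m)))   # hypothetical channel gains

x = rng.integers(0, 2, m)                   # on-off keyed symbols, one per transmitter
y = G @ x + rng.normal(scale=0.05, size=n)  # received burst plus detector noise

# Maximum-likelihood detection over the binary alphabet: with Gaussian noise,
# pick the symbol vector minimizing the Euclidean residual (exhaustive search
# is feasible for small m).
best = min(product([0, 1], repeat=m),
           key=lambda s: np.linalg.norm(y - G @ np.array(s)))
print("sent:", x, "detected:", np.array(best))
```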
A method for estimating cost savings for population health management programs.
Murphy, Shannon M E; McGready, John; Griswold, Michael E; Sylvia, Martha L
2013-04-01
To develop a quasi-experimental method for estimating Population Health Management (PHM) program savings that mitigates common sources of confounding, supports regular updates for continued program monitoring, and estimates model precision. Administrative, program, and claims records from January 2005 through June 2009. Data are aggregated by member and month. Study participants include chronically ill adult commercial health plan members. The intervention group consists of members currently enrolled in PHM, stratified by intensity level. Comparison groups include (1) members never enrolled, and (2) PHM participants not currently enrolled. Mixed model smoothing is employed to regress monthly medical costs on time (in months), a history of PHM enrollment, and monthly program enrollment by intensity level. Comparison group trends are used to estimate expected costs for intervention members. Savings are realized when PHM participants' costs are lower than expected. This method mitigates many of the limitations faced using traditional pre-post models for estimating PHM savings in an observational setting, supports replication for ongoing monitoring, and performs basic statistical inference. This method provides payers with a confident basis for making investment decisions. © Health Research and Educational Trust.
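The regression backbone of the method can be sketched with statsmodels' mixed-effects API on a simulated member-month panel (variable names, effect sizes, and the single random intercept are all simplifying assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
members, months = 300, 24
df = pd.DataFrame({
    "member": np.repeat(np.arange(members), months),
    "month": np.tile(np.arange(months), members),
})
# First 150 members enroll in the program from month 6 onward (invented design).
df["enrolled"] = ((df["member"] < 150) & (df["month"] >= 6)).astype(int)
member_effect = np.repeat(rng.normal(0, 50, members), months)
df["cost"] = (500 + 5 * df["month"] - 40 * df["enrolled"]
              + member_effect + rng.normal(0, 30, len(df)))

# Regress monthly cost on time and enrollment with a per-member random
# intercept; a negative enrollment coefficient is read as savings relative
# to the expected cost trajectory.
fit = smf.mixedlm("cost ~ month + enrolled", df, groups=df["member"]).fit()
print(fit.params[["month", "enrolled"]])
```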
NASA Astrophysics Data System (ADS)
Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.
2003-03-01
Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, tissue and non-tissue regions of a digitized X-ray mammogram are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research work, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparison with estimates provided by a radiologist employing previously established methods.
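The second step's logic, fixing the false-positive rate on one class and thresholding, can be illustrated in a few lines; the intensity distributions below are invented and the paper's constrained formulation is more involved:

```python
import numpy as np

rng = np.random.default_rng(11)
# Invented pixel intensities: radiolucent (fat) vs radiodense tissue.
lucent = rng.normal(0.35, 0.08, 20000)
dense = rng.normal(0.65, 0.10, 5000)

# Neyman-Pearson flavour: fix the false-positive rate alpha on the radiolucent
# class, take the corresponding intensity threshold, then count radiodense pixels.
alpha = 0.01
threshold = np.quantile(lucent, 1 - alpha)
pixels = np.concatenate([lucent, dense])
radiodense_fraction = (pixels > threshold).mean()
power = (dense > threshold).mean()
print(f"threshold={threshold:.3f}, radiodense fraction={radiodense_fraction:.3f}, "
      f"power={power:.3f}")
```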
Megabase-Scale Inversion Polymorphism in the Wild Ancestor of Maize
Fang, Zhou; Pyhäjärvi, Tanja; Weber, Allison L.; Dawe, R. Kelly; Glaubitz, Jeffrey C.; González, José de Jesus Sánchez; Ross-Ibarra, Claudia; Doebley, John; Morrell, Peter L.; Ross-Ibarra, Jeffrey
2012-01-01
Chromosomal inversions are thought to play a special role in local adaptation, through dramatic suppression of recombination, which favors the maintenance of locally adapted alleles. However, relatively few inversions have been characterized in population genomic data. On the basis of single-nucleotide polymorphism (SNP) genotyping across a large panel of Zea mays, we have identified an ∼50-Mb region on the short arm of chromosome 1 where patterns of polymorphism are highly consistent with a polymorphic paracentric inversion that captures >700 genes. Comparison to other taxa in Zea and Tripsacum suggests that the derived, inverted state is present only in the wild Z. mays subspecies parviglumis and mexicana and is completely absent in domesticated maize. Patterns of polymorphism suggest that the inversion is ancient and geographically widespread in parviglumis. Cytological screens find little evidence for inversion loops, suggesting that inversion heterozygotes may suffer few crossover-induced fitness consequences. The inversion polymorphism shows evidence of adaptive evolution, including a strong altitudinal cline, a statistical association with environmental variables and phenotypic traits, and a skewed haplotype frequency spectrum for inverted alleles. PMID:22542971
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehra, J.
1987-05-01
In this paper, the main outlines of the discussions of Niels Bohr with Albert Einstein, Werner Heisenberg, and Erwin Schroedinger during 1920-1927 are treated. From the formulation of quantum mechanics in 1925-1926 and wave mechanics in 1926, there emerged Born's statistical interpretation of the wave function in summer 1926, and on the basis of the quantum mechanical transformation theory - formulated in fall 1926 by Dirac, London, and Jordan - Heisenberg formulated the uncertainty principle in early 1927. At the Volta Conference in Como in September 1927 and at the fifth Solvay Conference in Brussels the following month, Bohr publicly enunciated his complementarity principle, which had been developing in his mind for several years. The Bohr-Einstein discussions about the consistency and completeness of quantum mechanics and of physical theory as such - formally begun in October 1927 at the fifth Solvay Conference and carried on at the sixth Solvay Conference in October 1930 - were continued during the next decades. All these aspects are briefly summarized.
The difference between a dynamic and mechanical approach to stroke treatment.
Helgason, Cathy M
2007-06-01
The current classification of stroke is based on causation, also called pathogenesis, and relies on binary logic faithful to the Aristotelian tradition. Accordingly, a pathology is or is not the cause of the stroke, is considered independent of others, and is the target for treatment. It is the subject for large double-blind randomized clinical therapeutic trials. The scientific view behind clinical trials is the fundamental concept that information is statistical, and causation is determined by probabilities. Therefore, the cause and effect relation will be determined by probability-theory-based statistics. This is the basis of evidence-based medicine, which calls for the results of such trials to be the basis for physician decisions regarding diagnosis and treatment. However, there are problems with the methodology behind evidence-based medicine. Calculations using probability-theory-based statistics regarding cause and effect are performed within an automatic system where there are known inputs and outputs. This method of research provides a framework of certainty with no surprise elements or outcomes. However, it is not a system or method that will come up with previously unknown variables, concepts, or universal principles; it is not a method that will give a new outcome; and it is not a method that allows for creativity, expertise, or new insight for problem solving.
NASA Astrophysics Data System (ADS)
Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel
2017-04-01
The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Carbon Column Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different from others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical approaches through an Observing System Simulation Experiment (OSSE) on a global scale. By changing the size of the random and systematic errors in the OSSE, we can determine the corresponding spatial and temporal resolutions at which useful flux signals could be detected from the OCO-2 data.
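The basis-function-plus-VAR construction for the flux field can be shown in miniature, with Gaussian latitude bumps standing in for the physical basis functions and every number invented:

```python
import numpy as np

rng = np.random.default_rng(12)
# Flux over latitude expressed in r spatial basis functions; the coefficients
# evolve as a first-order vector autoregression (all values illustrative).
lat = np.linspace(-90, 90, 45)
r = 8
basis = np.array([np.exp(-0.5 * ((lat - c) / 20) ** 2)
                  for c in np.linspace(-80, 80, r)])  # Gaussian bumps in latitude

A = 0.8 * np.eye(r)                  # VAR(1) transition matrix
alpha = np.zeros(r)
fluxes = []
for t in range(24):                  # two years of monthly coefficients
    alpha = A @ alpha + rng.normal(scale=0.1, size=r)
    fluxes.append(basis.T @ alpha)   # flux(lat, t) = sum_k alpha_k(t) phi_k(lat)
fluxes = np.array(fluxes)
print(fluxes.shape)                  # (24, 45): months x latitude bands
```

In the full approach the coefficients and the VAR parameters are inferred jointly with the error-model parameters inside the MCMC, rather than simulated forward as here.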