ERIC Educational Resources Information Center
Montoya, Isaac D.
2008-01-01
Three classification techniques (Chi-square Automatic Interaction Detection [CHAID], Classification and Regression Tree [CART], and discriminant analysis) were tested to determine their accuracy in predicting Temporary Assistance for Needy Families program recipients' future employment. Technique evaluation was based on proportion of correctly…
Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science
ERIC Educational Resources Information Center
Ju, Boryung; Jin, Tao
2013-01-01
Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…
ERIC Educational Resources Information Center
Karadag, Engin
2010-01-01
To assess the research methods and statistical analysis techniques employed by educational researchers, this study surveyed unpublished doctoral dissertations from 2003 to 2007. Frequently used research methods included experimental research, surveys, correlational studies, and case studies. Descriptive statistics, t-test, ANOVA, factor…
USDA-ARS?s Scientific Manuscript database
Cover: The electrospinning technique was employed to obtain conducting nanofibers based on polyaniline and poly(lactic acid). A statistical model was employed to describe how the process factors (solution concentration, applied voltage, and flow rate) govern the fiber dimensions. Nanofibers down to ...
A Study on Predictive Analytics Application to Ship Machinery Maintenance
2013-09-01
Looking at the nature of the time series forecasting method, it would be better applied to offline analysis. The application for real-time online...other system attributes in future. Two techniques of statistical analysis, mainly time series models and cumulative sum control charts, are discussed in...statistical tool employed for the two techniques of statistical analysis. Both time series forecasting as well as CUSUM control charts are shown to be
Using Statistical Process Control to Make Data-Based Clinical Decisions.
ERIC Educational Resources Information Center
Pfadt, Al; Wheeler, Donald J.
1995-01-01
Statistical process control (SPC), which employs simple statistical tools and problem-solving techniques such as histograms, control charts, flow charts, and Pareto charts to implement continual product improvement procedures, can be incorporated into human service organizations. Examples illustrate use of SPC procedures to analyze behavioral data…
A Review of Statistical Disclosure Control Techniques Employed by Web-Based Data Query Systems.
Matthews, Gregory J; Harel, Ofer; Aseltine, Robert H
We systematically reviewed the statistical disclosure control techniques employed for releasing aggregate data in Web-based data query systems listed in the National Association for Public Health Statistics and Information Systems (NAPHSIS). Each Web-based data query system was examined to see whether (1) it employed any type of cell suppression, (2) it used secondary cell suppression, and (3) suppressed cell counts could be calculated. No more than 30 minutes was spent on each system. Of the 35 systems reviewed, no suppression was observed in more than half (n = 18); observed counts below the threshold were observed in 2 sites; and suppressed values were recoverable in 9 sites. Six sites effectively suppressed small counts. This inquiry has revealed substantial weaknesses in the protective measures used in data query systems containing sensitive public health data. Many systems utilized no disclosure control whatsoever, and the vast majority of those that did deployed it inconsistently or inadequately.
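As a rough illustration of the cell-suppression rules surveyed above, the sketch below applies primary suppression to counts under a threshold and a simple complementary (secondary) suppression pass so that a masked value cannot be recovered from row or column totals. The threshold of 5, the example table, and the greedy secondary-suppression rule are assumptions for illustration, not the procedure of any reviewed system.

```python
import numpy as np

def suppress(table, threshold=5):
    """Primary suppression: mask cells below the threshold. Secondary
    suppression: if a row/column has exactly one masked cell, also mask
    the smallest unmasked cell so the value cannot be back-calculated."""
    table = table.astype(float)
    masked = table < threshold  # primary suppression
    for axis in (0, 1):
        counts = masked.sum(axis=axis)
        for i in np.where(counts == 1)[0]:
            line = table[:, i] if axis == 0 else table[i, :]
            line_mask = masked[:, i] if axis == 0 else masked[i, :]
            candidates = np.where(~line_mask)[0]
            if candidates.size:
                j = candidates[np.argmin(line[candidates])]
                if axis == 0:
                    masked[j, i] = True
                else:
                    masked[i, j] = True
    out = table.copy()
    out[masked] = np.nan  # released table shows suppressed cells as missing
    return out

counts = np.array([[12, 3, 40], [25, 18, 7], [9, 30, 22]])
print(suppress(counts))
```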
Techniques in teaching statistics : linking research production and research use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Moyano, I.; Smith, A.; Univ. of Massachusetts at Boston
In the spirit of closing the 'research-practice gap,' the authors extend evidence-based principles to statistics instruction in social science graduate education. The authors employ a Delphi method to survey experienced statistics instructors to identify teaching techniques to overcome the challenges inherent in teaching statistics to students enrolled in practitioner-oriented master's degree programs. Among the teaching techniques identified as essential are using real-life examples, requiring data collection exercises, and emphasizing interpretation rather than results. Building on existing research, preliminary interviews, and the findings from the study, the authors develop a model describing antecedents to the strength of the link between research and practice.
Application of one-way ANOVA in completely randomized experiments
NASA Astrophysics Data System (ADS)
Wahid, Zaharah; Izwan Latiff, Ahmad; Ahmad, Kartini
2017-12-01
This paper describes an application of a statistical technique, one-way ANOVA, in completely randomized experiments with three replicates. The technique was applied to a single factor with four levels and multiple observations at each level. The aim of this study is to investigate the relationship between the chemical oxygen demand index and on-site location. Two different approaches are employed for the analyses: critical value and p-value. The paper also presents key assumptions of the technique that must be satisfied by the data in order to obtain valid results. Pairwise comparisons by the Tukey method are also considered and discussed to determine where the significant differences among the means are after the ANOVA has been performed. The results revealed that a statistically significant relationship exists between the chemical oxygen demand index and on-site location.
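A minimal sketch of the workflow the abstract describes, one-way ANOVA on a single factor with four levels and three replicates followed by Tukey pairwise comparisons, is given below. The chemical-oxygen-demand readings and site names are invented for illustration; the p-value approach is used for the ANOVA decision.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical chemical-oxygen-demand readings at four sites, three replicates each
cod = {
    "site_A": [112.0, 118.0, 115.0],
    "site_B": [140.0, 137.0, 144.0],
    "site_C": [121.0, 119.0, 125.0],
    "site_D": [139.0, 142.0, 138.0],
}

# One-way ANOVA: compare the F statistic's p-value against alpha (p-value approach)
f_stat, p_value = stats.f_oneway(*cod.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD pairwise comparisons to locate which site means differ
values = np.concatenate(list(cod.values()))
groups = np.repeat(list(cod.keys()), [len(v) for v in cod.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```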
A Statistical Decision Model for Periodical Selection for a Specialized Information Center
ERIC Educational Resources Information Center
Dym, Eleanor D.; Shirey, Donald L.
1973-01-01
An experiment is described which attempts to define a quantitative methodology for the identification and evaluation of all possibly relevant periodical titles containing toxicological-biological information. A statistical decision model was designed and employed, along with yes/no criteria questions, a training technique and a quality control…
Measurement of the relationship between perceived and computed color differences
NASA Astrophysics Data System (ADS)
García, Pedro A.; Huertas, Rafael; Melgosa, Manuel; Cui, Guihua
2007-07-01
Using simulated data sets, we have analyzed some mathematical properties of different statistical measurements that have been employed in previous literature to test the performance of different color-difference formulas. Specifically, the properties of the combined index PF/3 (performance factor obtained as average of three terms), widely employed in current literature, have been considered. A new index named standardized residual sum of squares (STRESS), employed in multidimensional scaling techniques, is recommended. The main difference between PF/3 and STRESS is that the latter is simpler and allows inferences on the statistical significance of two color-difference formulas with respect to a given set of visual data.
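For reference, the sketch below computes the STRESS index as it is commonly defined in the colour-difference literature (an optimal scaling factor F1 followed by a normalised residual sum of squares); the exact formulation and the sample ΔE/ΔV values here are assumptions for illustration, not data from the study.

```python
import numpy as np

def stress(delta_e, delta_v):
    """Standardized residual sum of squares between computed colour
    differences (delta_e) and visual differences (delta_v); lower is better."""
    delta_e = np.asarray(delta_e, dtype=float)
    delta_v = np.asarray(delta_v, dtype=float)
    f1 = np.sum(delta_e ** 2) / np.sum(delta_e * delta_v)  # optimal scaling factor
    return 100.0 * np.sqrt(np.sum((delta_e - f1 * delta_v) ** 2)
                           / np.sum(f1 ** 2 * delta_v ** 2))

# Illustrative values only: formula predictions vs. visual data for five pairs
print(stress([1.2, 0.8, 2.5, 1.9, 0.4], [1.0, 0.9, 2.2, 2.1, 0.5]))
```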
Antweiler, Ronald C.; Taylor, Howard E.
2008-01-01
The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
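A minimal sketch of the two substitution treatments discussed above (one-half the detection limit, and a uniform random value below the detection limit) is shown below; the concentrations and detection limit are invented, and the Kaplan-Meier and ROS treatments would require dedicated survival-analysis or regression-on-order-statistics routines not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concentrations; np.nan marks results reported only as "< DL"
detection_limit = 0.5
observed = np.array([0.8, np.nan, 1.2, np.nan, 0.6, 2.3, np.nan, 0.9])
censored = np.isnan(observed)

# Substitution method 1: assign one-half the detection limit to censored data
half_dl = np.where(censored, detection_limit / 2.0, observed)

# Substitution method 2: assign a uniform random value between 0 and the DL
random_sub = np.where(censored,
                      rng.uniform(0.0, detection_limit, size=observed.size),
                      observed)

for name, data in [("DL/2 substitution", half_dl), ("random substitution", random_sub)]:
    print(f"{name}: mean = {data.mean():.3f}, median = {np.median(data):.3f}")
```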
Cooperative Learning in Virtual Environments: The Jigsaw Method in Statistical Courses
ERIC Educational Resources Information Center
Vargas-Vargas, Manuel; Mondejar-Jimenez, Jose; Santamaria, Maria-Letica Meseguer; Alfaro-Navarro, Jose-Luis; Fernandez-Aviles, Gema
2011-01-01
This document sets out a novel teaching methodology as used in subjects with statistical content, traditionally regarded by students as "difficult". In a virtual learning environment, instructional techniques little used in mathematical courses were employed, such as the Jigsaw cooperative learning method, which had to be adapted to the…
Simultaneous ocular and muscle artifact removal from EEG data by exploiting diverse statistics.
Chen, Xun; Liu, Aiping; Chen, Qiang; Liu, Yu; Zou, Liang; McKeown, Martin J
2017-09-01
Electroencephalography (EEG) recordings are frequently contaminated by both ocular and muscle artifacts. These are normally dealt with separately, by employing blind source separation (BSS) techniques relying on either second-order or higher-order statistics (SOS & HOS respectively). When HOS-based methods are used, it is usually in the setting of assuming artifacts are statistically independent to the EEG. When SOS-based methods are used, it is assumed that artifacts have autocorrelation characteristics distinct from the EEG. In reality, ocular and muscle artifacts do not completely follow the assumptions of strict temporal independence to the EEG nor completely unique autocorrelation characteristics, suggesting that exploiting HOS or SOS alone may be insufficient to remove these artifacts. Here we employ a novel BSS technique, independent vector analysis (IVA), to jointly employ HOS and SOS simultaneously to remove ocular and muscle artifacts. Numerical simulations and application to real EEG recordings were used to explore the utility of the IVA approach. IVA was superior in isolating both ocular and muscle artifacts, especially for raw EEG data with low signal-to-noise ratio, and also integrated usually separate SOS and HOS steps into a single unified step. Copyright © 2017 Elsevier Ltd. All rights reserved.
The effects of estimation of censoring, truncation, transformation and partial data vectors
NASA Technical Reports Server (NTRS)
Hartley, H. O.; Smith, W. B.
1972-01-01
The purpose of this research was to attack statistical problems concerning the estimation of distributions for purposes of predicting and measuring assembly performance as it appears in biological and physical situations. Various statistical procedures were proposed to attack problems of this sort, that is, to produce the statistical distributions of the outcomes of biological and physical situations which employ characteristics measured on constituent parts. The techniques are described.
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
ERIC Educational Resources Information Center
Bowman, William R.
A study examined the feasibility of using a "nonexperimental" technique to evaluate Job Training Partnership Act (JTPA) programs for economically disadvantaged adults. New statistical techniques were applied to data about a sample of Utah JTPA participants and data about Employment Security registrants linked with their individual…
An unsupervised classification technique for multispectral remote sensing data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Cummings, R. E.
1973-01-01
Description of a two-part clustering technique consisting of (a) a sequential statistical clustering, which is essentially a sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum-likelihood classification techniques.
Mali, Matilda; Dell'Anna, Maria Michela; Mastrorilli, Piero; Damiani, Leonardo; Ungaro, Nicola; Belviso, Claudia; Fiore, Saverio
2015-11-01
Sediment contamination by metals poses significant risks to coastal ecosystems and is considered to be problematic for dredging operations. The determination of the background values of metal and metalloid distribution based on site-specific variability is fundamental in assessing pollution levels in harbour sediments. The novelty of the present work consists of addressing the scope and limitation of analysing port sediments through the use of conventional statistical techniques (such as: linear regression analysis, construction of cumulative frequency curves and the iterative 2σ technique), that are commonly employed for assessing Regional Geochemical Background (RGB) values in coastal sediments. This study ascertained that although the tout court use of such techniques in determining the RGB values in harbour sediments seems appropriate (the chemical-physical parameters of port sediments fit well with statistical equations), it should nevertheless be avoided because it may be misleading and can mask key aspects of the study area that can only be revealed by further investigations, such as mineralogical and multivariate statistical analyses. Copyright © 2015 Elsevier Ltd. All rights reserved.
Generic results of the space physics community survey
NASA Technical Reports Server (NTRS)
Sharma, Rikhi R.; Cohen, Nathaniel B.
1993-01-01
This report summarizes the results of a survey of the members of the space physics research community conducted in 1990-1991 to ascertain demographic information on the respondents and information on their views on a number of facets of their space physics research. The survey was conducted by questionnaire and the information received was compiled in a database and analyzed statistically. The statistical results are presented for the respondent population as a whole and by four different respondent cross sections: individual disciplines of space physics, type of employers, age groups, and research techniques employed. Data from a brief corresponding survey of the graduate students of respondents are also included.
Unsupervised classification of earth resources data.
NASA Technical Reports Server (NTRS)
Su, M. Y.; Jayroe, R. R., Jr.; Cummings, R. E.
1972-01-01
A new clustering technique is presented. It consists of two parts: (a) a sequential statistical clustering which is essentially a sequential variance analysis and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that of existing supervised maximum likelihood classification techniques.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1984-01-01
Single and joint terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and autocorrelation function dependence on site spacing and elevation angles were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well with the radar-derived single and joint terminal cumulative fade distributions at various elevation angles.
Valuing Eastern Visibility: A Field Test of the Contingent Valuation Method (1993)
The report describes the Eastern visibility survey design in detail, presents the implementation of and data obtained from the surveys, provides summary statistics on the overall response and discusses the econometric techniques employed to value benefits.
Inverse Problems in Geodynamics Using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.
2018-01-01
During the past few decades numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, in geodynamical studies many properties from mineral physics, geochemistry, and petrology enter these numerical models. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has rapidly emerged in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms in predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques in solving an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at midmantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms for putting constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
Harris, Michael; Radtke, Arthur S.
1976-01-01
Linear regression and discriminant analyses techniques were applied to gold, mercury, arsenic, antimony, barium, copper, molybdenum, lead, zinc, boron, tellurium, selenium, and tungsten analyses from drill holes into unoxidized gold ore at the Carlin gold mine near Carlin, Nev. The statistical treatments employed were used to judge proposed hypotheses on the origin and geochemical paragenesis of this disseminated gold deposit.
Geospatial methods and data analysis for assessing distribution of grazing livestock
USDA-ARS?s Scientific Manuscript database
Free-ranging livestock research must begin with a well conceived problem statement and employ appropriate data acquisition tools and analytical techniques to accomplish the research objective. These requirements are especially critical in addressing animal distribution. Tools and statistics used t...
ERIC Educational Resources Information Center
Kostadinov, Boyan
2013-01-01
This article attempts to introduce the reader to computational thinking and solving problems involving randomness. The main technique being employed is the Monte Carlo method, using the freely available software "R for Statistical Computing." The author illustrates the computer simulation approach by focusing on several problems of…
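The article works in R; the sketch below illustrates the same Monte Carlo idea in Python on a classic randomness problem (the shared-birthday probability), estimating a probability by repeated simulation. The problem choice and parameters are assumptions for illustration, not taken from the article.

```python
import random

def shared_birthday_prob(group_size=23, trials=100_000, seed=1):
    """Monte Carlo estimate of the probability that at least two people
    in a group share a birthday (uniform 365-day year assumed)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        birthdays = [rng.randrange(365) for _ in range(group_size)]
        if len(set(birthdays)) < group_size:  # a collision occurred
            hits += 1
    return hits / trials

print(shared_birthday_prob())  # approx. 0.507; the exact value is about 0.5073
```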
Modelling the effect of structural QSAR parameters on skin penetration using genetic programming
NASA Astrophysics Data System (ADS)
Chung, K. K.; Do, D. Q.
2010-09-01
In order to model relationships between chemical structures and biological effects in quantitative structure-activity relationship (QSAR) data, an alternative technique of artificial intelligence computing—genetic programming (GP)—was investigated and compared to traditional statistical methods. GP, with the primary advantage of generating mathematical equations, was employed to model QSAR data and to identify the most important molecular descriptors in the QSAR data. The models predicted by GP agreed with the statistical results, and the most predictive models of GP were significantly improved when compared to the statistical models using ANOVA. Recently, artificial intelligence techniques have been applied widely to analyse QSAR data. With the capability of generating mathematical equations, GP can be considered as an effective and efficient method for modelling QSAR data.
Logistic regression for risk factor modelling in stuttering research.
Reed, Phil; Wu, Yaqionq
2013-06-01
To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed are demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.
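A minimal sketch of a logistic-regression risk-factor model of the kind described above is given below, using statsmodels. The predictors (onset age, family history), the outcome (persistence vs. recovery), and the simulated data are hypothetical stand-ins, not variables from any cited study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: predict persistence (1) vs. recovery (0) from two risk factors
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "onset_age": rng.normal(3.5, 1.0, n),        # age at onset in years
    "family_history": rng.integers(0, 2, n),     # 1 = positive family history
})
logit = -3.0 + 0.5 * df["onset_age"] + 1.2 * df["family_history"]
df["persist"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["onset_age", "family_history"]])
model = sm.Logit(df["persist"].astype(int), X).fit(disp=False)
print(model.summary())
print("Odds ratios:\n", np.exp(model.params))  # exponentiated coefficients
```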
NASA Astrophysics Data System (ADS)
Zhou, Weimin; Anastasio, Mark A.
2018-03-01
It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
Urban environmental health applications of remote sensing, summary report
NASA Technical Reports Server (NTRS)
Rush, M.; Goldstein, J.; Hsi, B. P.; Olsen, C. B.
1975-01-01
Health and its association with the physical environment was studied based on the hypothesis that there is a relationship between the man-made physical environment and health status of a population. The statistical technique of regression analysis was employed to show the degree of association and aspects of physical environment which accounted for the greater variation in health status. Mortality, venereal disease, tuberculosis, hepatitis, meningitis, shigella/salmonella, hypertension and cardiac arrest/myocardial infarction were examined. The statistical techniques were used to measure association and variation, not necessarily cause and effect. Conclusions drawn show that the association still exists in the decade of the 1970's and that it can be successfully monitored with the methodology of remote sensing.
Digital Forensics Using Local Signal Statistics
ERIC Educational Resources Information Center
Pan, Xunyu
2011-01-01
With the rapid growth of the Internet and the popularity of digital imaging devices, digital imagery has become our major information source. Meanwhile, the development of digital manipulation techniques employed by most image editing software brings new challenges to the credibility of photographic images as the definite records of events. We…
The composite sequential clustering technique for analysis of multispectral scanner data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
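A rough sketch of the composite idea, an initial one-pass, threshold-based sequential clustering whose centres then seed an iterative K-means refinement, is shown below. The distance threshold, the synthetic two-band "spectral" data, and the use of scikit-learn's KMeans for the second stage are assumptions for illustration, not the original composite sequential clustering program.

```python
import numpy as np
from sklearn.cluster import KMeans

def sequential_clusters(data, threshold):
    """One-pass clustering: assign each sample to the nearest existing cluster
    if it lies within `threshold`, otherwise start a new cluster."""
    centers, members = [], []
    for x in data:
        if centers:
            d = np.linalg.norm(np.array(centers) - x, axis=1)
            k = int(np.argmin(d))
            if d[k] < threshold:
                members[k].append(x)
                centers[k] = np.mean(members[k], axis=0)  # running centroid
                continue
        centers.append(x.copy())
        members.append([x])
    return np.array(centers)

rng = np.random.default_rng(0)
# Synthetic "multispectral" samples drawn around three spectral signatures
signatures = np.array([[40.0, 60.0], [80.0, 30.0], [55.0, 90.0]])
data = np.vstack([rng.normal(s, 4.0, size=(50, 2)) for s in signatures])

init_centers = sequential_clusters(data, threshold=15.0)
kmeans = KMeans(n_clusters=len(init_centers), init=init_centers, n_init=1).fit(data)
print("initial centers:\n", init_centers)
print("refined centers:\n", kmeans.cluster_centers_)
```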
Remote sensing for urban planning
NASA Technical Reports Server (NTRS)
Davis, Bruce A.; Schmidt, Nicholas; Jensen, John R.; Cowen, Dave J.; Halls, Joanne; Narumalani, Sunil; Burgess, Bryan
1994-01-01
Utility companies are challenged to provide services to a highly dynamic customer base. With factory closures and shifts in employment becoming a routine occurrence, the utility industry must develop new techniques to maintain records and plan for expected growth. BellSouth Telecommunications, the largest of the Bell telephone companies, currently serves over 13 million residences and 2 million commercial customers. Tracking the movement of customers and scheduling the delivery of service are major tasks for BellSouth that require intensive manpower and sophisticated information management techniques. Through NASA's Commercial Remote Sensing Program Office, BellSouth is investigating the utility of remote sensing and geographic information system techniques to forecast residential development. This paper highlights the initial results of this project, which indicate a high correlation between the U.S. Bureau of Census block group statistics and statistics derived from remote sensing data.
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
NASA Astrophysics Data System (ADS)
Zheng, Q.; Dickson, S.; Guo, Y.
2007-12-01
A good understanding of the physico-chemical processes (i.e., advection, dispersion, attachment/detachment, straining, sedimentation etc.) governing colloid transport in fractured media is imperative in order to develop appropriate bioremediation and/or bioaugmentation strategies for contaminated fractured aquifers, form management plans for groundwater resources to prevent pathogen contamination, and identify suitable radioactive waste disposal sites. However, research in this field is still in its infancy due to the complex heterogeneous nature of fractured media and the resulting difficulty in characterizing this media. The goal of this research is to investigate the effects of aperture field variability, flow rate and ionic strength on colloid transport processes in well characterized single fractures. A combination of laboratory-scale experiments, numerical simulations, and imaging techniques was employed to achieve this goal. Transparent replicas were cast from natural rock fractures, and a light transmission technique was employed to measure their aperture fields directly. The surface properties of the synthetic fractures were characterized by measuring the zeta-potential under different ionic strengths. A 3³ factorial experiment was implemented to investigate the influence of aperture field variability, flow rate, and ionic strength on different colloid transport processes in the laboratory-scale fractures, specifically dispersion and attachment/detachment. A fluorescent stain technique was employed to photograph the colloid transport processes, and an analytical solution to the one-dimensional transport equation was fit to the colloid breakthrough curves to calculate the average transport velocity, dispersion coefficient, and attachment/detachment coefficient. The Reynolds equation was solved to obtain the flow field in the measured aperture fields, and the random walk particle tracking technique was employed to model the colloid transport experiments. The images clearly show the development of preferential pathways for colloid transport in the different aperture fields and under different flow conditions. Additionally, a correlation between colloid deposition and fracture wall topography was identified. This presentation will demonstrate (1) differential transport between colloid and solute in single fractures, and the relationship between differential transport and aperture field statistics; (2) the relationship between the colloid dispersion coefficient and aperture field statistics; and (3) the relationship between attachment/detachment, aperture field statistics, fracture wall topography, flow rate, and ionic strength. In addition, this presentation will provide insight into the application of the random walk particle tracking technique for modeling colloid transport in variable-aperture fractures.
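As a rough illustration of the random walk particle tracking stage mentioned above, the sketch below advects particles with a uniform velocity and adds a Gaussian step with variance 2DΔt to represent dispersion. In the study the velocity field would come from solving the Reynolds equation over the measured aperture field; the uniform 1-D field and all parameter values here are assumptions for illustration.

```python
import numpy as np

def random_walk_particles(n_particles, n_steps, dt, velocity, dispersion, seed=0):
    """1-D random-walk particle tracking: advection by the local velocity plus
    a Gaussian step with variance 2*D*dt representing dispersion."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += velocity * dt + np.sqrt(2.0 * dispersion * dt) * rng.standard_normal(n_particles)
    return x

# Illustrative parameters only (not calibrated values from the study)
positions = random_walk_particles(n_particles=5000, n_steps=400,
                                  dt=1.0, velocity=1.0e-3, dispersion=2.0e-5)
elapsed = 400 * 1.0  # total simulated time in seconds
print(f"mean transport distance = {positions.mean():.3f} m, "
      f"apparent dispersion = {positions.var() / (2 * elapsed):.2e} m^2/s")
```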
ERIC Educational Resources Information Center
Karakaya-Ozyer, Kubra; Aksu-Dunya, Beyza
2018-01-01
Structural equation modeling (SEM) is one of the most popular multivariate statistical techniques in Turkish educational research. This study elaborates the SEM procedures employed by 75 educational research articles which were published from 2010 to 2015 in Turkey. After documenting and coding 75 academic papers, categorical frequencies and…
The Conceptualization and Measurement of Equity in School Finance in Virginia.
ERIC Educational Resources Information Center
Verstegen, Deborah A.; Salmon, Richard G.
1989-01-01
Employed various statistical techniques to measure fiscal equity in Virginia. The new state aid system for financing education was unable to mitigate large and increasing disparities in education revenues between more and less affluent localities and a strong and growing linkage between revenue and wealth. Includes 34 footnotes. (MLH)
Comparability of a Paper-Based Language Test and a Computer-Based Language Test.
ERIC Educational Resources Information Center
Choi, Inn-Chull; Kim, Kyoung Sung; Boo, Jaeyool
2003-01-01
Utilizing the Test of English Proficiency, developed by Seoul National University (TEPS), examined comparability between the paper-based language test and the computer-based language test based on content and construct validation employing content analyses based on corpus linguistic techniques in addition to such statistical analyses as…
Modeling Success: Using Preenrollment Data to Identify Academically At-Risk Students
ERIC Educational Resources Information Center
Gansemer-Topf, Ann M.; Compton, Jonathan; Wohlgemuth, Darin; Forbes, Greg; Ralston, Ekaterina
2015-01-01
Improving student success and degree completion is one of the core principles of strategic enrollment management. To address this principle, institutional data were used to develop a statistical model to identify academically at-risk students. The model employs multiple linear regression techniques to predict students at risk of earning below a…
ERIC Educational Resources Information Center
Subramaniam, Maithreyi; Hanafi, Jaffri; Putih, Abu Talib
2016-01-01
This study adopted 30 first year graphic design students' artwork, with critical analysis using Feldman's model of art criticism. Data were analyzed quantitatively; descriptive statistical techniques were employed. The scores were viewed in the form of mean score and frequencies to determine students' performances in their critical ability.…
New Directions for the Study of Within-Individual Variability in Development: The Power of "N = 1"
ERIC Educational Resources Information Center
Barbot, Baptiste; Perchec, Cyrille
2015-01-01
This article provides an introduction to the idiographic approach ("N = 1" research) in developmental psychology and an overview of methodological and statistical techniques employed to address the study of within-individual variability in development. Through a popularization of the idiographic approach and associated statistical…
"Dealing" with Incidence, Prevalence, and Odds Concepts in Undergraduate Epidemiology
ERIC Educational Resources Information Center
Senchina, David S.; Laurson, Kelly R.
2009-01-01
Concepts and associated statistical formulae of incidence, prevalence, and odds/odds ratios are core knowledge in epidemiology yet can be confusing for students. The purpose of this project was to develop, validate, and share one possible pedagogical technique using playing cards that could be employed to improve undergraduate understanding of…
Load balancing for massively-parallel soft-real-time systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hailperin, M.
1988-09-01
Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communication, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to equal that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
Mallineni, S K; Anthonappa, R P; King, N M
2016-12-01
To assess the reliability of the vertical tube shift technique (VTST) and horizontal tube shift technique (HTST) for the localisation of unerupted supernumerary teeth (ST) in the anterior region of the maxilla. A convenience sample of 83 patients who attended a major teaching hospital because of unerupted ST was selected. Only non-syndromic patients with ST and who had complete clinical and radiographic and surgical records were included in the study. Ten examiners independently rated the paired set of radiographs for each technique. Chi-square test, paired t test and kappa statistics were employed to assess the intra- and inter-examiner reliability. Paired sets of 1660 radiographs (830 pairs for each technique) were available for the analysis. The overall sensitivity for VTST and HTST was 80.6 and 72.1% respectively, with slight inter-examiner and good intra-examiner reliability. Statistically significant differences were evident between the two localisation techniques (p < 0.05). Localisation of unerupted ST using VTST was more successful than HTST in the anterior region of the maxilla.
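A minimal sketch of the agreement and accuracy measures named above (kappa statistics and sensitivity) is given below; the localisation calls and the buccal/palatal coding are invented for illustration and are not data from the study.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical localisation calls (1 = palatal, 0 = buccal) for ten unerupted ST
surgical_truth = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
examiner_a     = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
examiner_b     = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# Inter-examiner reliability between two raters
print("kappa(A, B) =", round(cohen_kappa_score(examiner_a, examiner_b), 2))

# Sensitivity of examiner A against the surgical findings
tn, fp, fn, tp = confusion_matrix(surgical_truth, examiner_a).ravel()
print("sensitivity(A) =", round(tp / (tp + fn), 2))
```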
Structural equation modeling in pediatric psychology: overview and review of applications.
Nelson, Timothy D; Aylward, Brandon S; Steele, Ric G
2008-08-01
To describe the use of structural equation modeling (SEM) in the Journal of Pediatric Psychology (JPP) and to discuss the usefulness of SEM applications in pediatric psychology research. The use of SEM in JPP between 1997 and 2006 was examined and compared to leading journals in clinical psychology, clinical child psychology, and child development. SEM techniques were used in <4% of the empirical articles appearing in JPP between 1997 and 2006. SEM was used less frequently in JPP than in other clinically relevant journals over the past 10 years. However, results indicated a recent increase in JPP studies employing SEM techniques. SEM is an under-utilized class of techniques within pediatric psychology research, although investigations employing these methods are becoming more prevalent. Despite its infrequent use to date, SEM is a potentially useful tool for advancing pediatric psychology research with a number of advantages over traditional statistical methods.
Descriptive and inferential statistical methods used in burns research.
Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars
2010-05-01
Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11(22%) were randomised controlled trials, 18(35%) were cohort studies, 11(22%) were case control studies and 11(22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49(96%) articles. Data dispersion was calculated by standard deviation in 30(59%). Standard error of the mean was quoted in 19(37%). The statistical software product was named in 33(65%). Of the 49 articles that used inferential statistics, the tests were named in 47(96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), chi(2) test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43(88%) and the exact significance levels were reported in 28(57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals in the fields of biostatistics and epidemiology when using more advanced statistical techniques. Copyright 2009 Elsevier Ltd and ISBI. All rights reserved.
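For orientation, the sketch below runs the tests most frequently reported in the review (Student's t-test, ANOVA, chi-squared, Mann-Whitney, Fisher's exact) on invented data with SciPy; it is illustrative only and does not reproduce any analysis from the surveyed articles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(14.0, 3.0, 30)   # e.g. length of stay, treatment A
group_b = rng.normal(16.5, 3.0, 30)   # e.g. length of stay, treatment B

# Student's t-test and its nonparametric counterpart (Mann-Whitney U)
print(stats.ttest_ind(group_a, group_b))
print(stats.mannwhitneyu(group_a, group_b))

# One-way analysis of variance across three groups
group_c = rng.normal(15.0, 3.0, 30)
print(stats.f_oneway(group_a, group_b, group_c))

# Chi-squared and Fisher's exact tests on a 2x2 contingency table
table = np.array([[18, 7], [9, 21]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print("chi2:", chi2, "p:", p)
print("Fisher exact:", stats.fisher_exact(table))
```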
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone -selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single-exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S
2018-03-01
Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset and apply our technique to the data and compare the derived trajectories and the original. Finally, we present spatiotemporal trend analysis for statistical datasets including twitter data, maritime search and rescue events, and syndromic surveillance.
Spatial Analysis on Future Housing Markets: Economic Development and Housing Implications
Liu, Xin; Wang, Lizhe
2014-01-01
A coupled projection method combining formal modelling and other statistical techniques was developed to delineate the relationship between economic and social drivers for net new housing allocations. Using the example of employment growth in Tyne and Wear, UK, until 2016, the empirical analysis yields housing projections at the macro- and microspatial levels (e.g., region to subregion to elected ward levels). The results have important implications for the strategic planning of locations for housing and employment, demonstrating both intuitively and quantitatively how local economic developments affect housing demand. PMID:24892097
Investigation of the Use of Statistics in Counseling Students.
ERIC Educational Resources Information Center
Hewes, Robert F.
The objective was to employ techniques of profile analysis to develop the joint probability of selecting a suitable subject major and of assuring, to a high degree, graduation from college with that major. The sample included 1,197 MIT freshmen students in 1952-53, and the validation group included 699 entrants in 1954. Data included secondary…
ERIC Educational Resources Information Center
Luna-Torres, Maria; McKinney, Lyle; Horn, Catherine; Jones, Sara
2018-01-01
This study examined a sample of community college students from a diverse, large urban community college system in Texas. To gain a deeper understanding about the effects of background characteristics on student borrowing behaviors and enrollment outcomes, the study employed descriptive statistics and regression techniques to examine two separate…
Testing and Evaluating C3I Systems That Employ AI. Volume 1. Handbook for Testing Expert Systems
1991-01-31
Nonequivalent Control Group Design ...does not receive the system; and (c) nonequivalent (and nonrandomized) control group designs that rely on statistical techniques like analysis of...implementation); (b) multiple time-series designs using a control group; and (c) nonequivalent control group designs that obtain pretest and
Neutron/Gamma-ray discrimination through measures of fit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek
2015-07-01
Statistical tests and their underlying measures of fit can be utilized to separate neutron/gamma-ray pulses in a mixed radiation field. In this article, first the application of a sample statistical test is explained. Fit measurement-based methods require true pulse shapes to be used as reference for discrimination. This requirement makes practical implementation of these methods difficult; typically another discrimination approach should be employed to capture samples of neutrons and gamma-rays before running the fit-based technique. In this article, we also propose a technique to eliminate this requirement. These approaches are applied to several sets of mixed neutron and gamma-ray pulses obtained through different digitizers using stilbene scintillator in order to analyze them and measure their discrimination quality. (authors)
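A minimal sketch of the fit-measure idea described above, scoring a normalised pulse against neutron and gamma-ray reference shapes and assigning it to the better fit, is shown below. The exponential reference shapes, decay constants, and use of a plain residual sum of squares are assumptions for illustration; the article applies formal statistical tests rather than this simplified measure.

```python
import numpy as np

def classify_pulse(pulse, neutron_ref, gamma_ref):
    """Assign a pulse to the reference shape it fits best, using the residual
    sum of squares of the amplitude-normalised pulse as the measure of fit."""
    p = pulse / pulse.max()
    sse_n = np.sum((p - neutron_ref) ** 2)
    sse_g = np.sum((p - gamma_ref) ** 2)
    return ("neutron" if sse_n < sse_g else "gamma"), sse_n, sse_g

# Toy reference shapes: gamma pulses decay faster than neutron pulses in stilbene
t = np.arange(64)
gamma_ref = np.exp(-t / 6.0)
neutron_ref = 0.7 * np.exp(-t / 6.0) + 0.3 * np.exp(-t / 30.0)

test_pulse = 850 * (0.68 * np.exp(-t / 6.0) + 0.32 * np.exp(-t / 30.0))
print(classify_pulse(test_pulse, neutron_ref, gamma_ref))
```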
White, H; Racine, J
2001-01-01
We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for prediction. The proposed techniques are developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its smoothed extension (SEMD). To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
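The Holt-Winters stage of the proposed coupling can be sketched as below with statsmodels' exponential smoothing; the EMD/SEMD decomposition is not reproduced here, and in the proposed techniques the forecast would be applied to selected decomposed components rather than to a raw synthetic series as in this illustration.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic stand-in for a decomposed price component: trend + short cycle + noise
rng = np.random.default_rng(3)
t = np.arange(250)
series = 1500 + 0.8 * t + 10 * np.sin(2 * np.pi * t / 5) + rng.normal(0, 3, t.size)

model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=5).fit()
forecast = model.forecast(10)  # ten-step-ahead Holt-Winters forecast
print(forecast)
```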
NASA Technical Reports Server (NTRS)
Barkstrom, B. R.
1983-01-01
The measurement of the earth's radiation budget has been chosen to illustrate the technique of objective system design. The measurement process is an approximately linear transformation of the original field of radiant exitances, so that linear statistical techniques may be employed. The combination of variability, measurement strategy, and error propagation is presently made with the help of information theory, as suggested by Kondratyev et al. (1975) and Peckham (1974). Covariance matrices furnish the quantitative statement of field variability.
NASA Technical Reports Server (NTRS)
Coggeshall, M. E.; Hoffer, R. M.
1973-01-01
Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.
Toward Robust and Efficient Climate Downscaling for Wind Energy
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Rife, D.; Pinto, J. O.; Monaghan, A. J.; Davis, C. A.
2011-12-01
This presentation describes a more accurate and economical (less time, money and effort) wind resource assessment technique for the renewable energy industry, that incorporates innovative statistical techniques and new global mesoscale reanalyzes. The technique judiciously selects a collection of "case days" that accurately represent the full range of wind conditions observed at a given site over a 10-year period, in order to estimate the long-term energy yield. We will demonstrate that this new technique provides a very accurate and statistically reliable estimate of the 10-year record of the wind resource by intelligently choosing a sample of ±120 case days. This means that the expense of downscaling to quantify the wind resource at a prospective wind farm can be cut by two thirds from the current industry practice of downscaling a randomly chosen 365-day sample to represent winds over a "typical" year. This new estimate of the long-term energy yield at a prospective wind farm also has far less statistical uncertainty than the current industry standard approach. This key finding has the potential to reduce significantly market barriers to both onshore and offshore wind farm development, since insurers and financiers charge prohibitive premiums on investments that are deemed to be high risk. Lower uncertainty directly translates to lower perceived risk, and therefore far more attractive financing terms could be offered to wind farm developers who employ this new technique.
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
Statistical variances of diffusional properties from ab initio molecular dynamics simulations
NASA Astrophysics Data System (ADS)
He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei
2018-12-01
Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that can be accessible by AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
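A sketch of the standard post-processing implied above, diffusivity from the slope of mean squared displacement versus time and ionic conductivity via the Nernst-Einstein relation, is given below; this is the textbook procedure, not the authors' code, and the MSD data, carrier count, cell volume, and temperature are invented for illustration.

```python
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
e   = 1.602176634e-19  # elementary charge, C

def diffusivity_from_msd(time_s, msd_m2, dim=3):
    """D from the slope of MSD vs. time: MSD = 2*dim*D*t (+ intercept)."""
    slope, _ = np.polyfit(time_s, msd_m2, 1)
    return slope / (2 * dim)

def nernst_einstein_conductivity(D, n_carriers, volume_m3, T, charge=1):
    """sigma = N q^2 D / (V k_B T), ignoring correlation effects (Haven ratio = 1)."""
    q = charge * e
    return n_carriers * q ** 2 * D / (volume_m3 * k_B * T)

# Illustrative numbers only (picosecond-scale run, monovalent carriers)
t = np.linspace(0, 100e-12, 200)     # 100 ps of simulated time
msd = 6 * 1e-11 * t + 1e-22          # fabricated MSD corresponding to D = 1e-11 m^2/s
D = diffusivity_from_msd(t, msd)
sigma = nernst_einstein_conductivity(D, n_carriers=48, volume_m3=1.0e-27, T=900)
print(f"D = {D:.2e} m^2/s, sigma = {sigma:.2f} S/m")
```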
Recommendations for research design of telehealth studies.
Chumbler, Neale R; Kobb, Rita; Brennan, David M; Rabinowitz, Terry
2008-11-01
Properly designed randomized controlled trials (RCTs) are the gold standard to use when examining the effectiveness of telehealth interventions on clinical outcomes. Some published telehealth studies have employed well-designed RCTs. However, such methods are not always feasible and practical in particular settings. This white paper not only addresses the need for properly designed RCTs, but also offers alternative research designs, such as quasi-experimental designs, and statistical techniques that can be employed to rigorously assess the effectiveness of telehealth interventions. This paper further offers design and measurement recommendations aimed at and relevant to administrative decision-makers, policymakers, and practicing clinicians.
Application of Taguchi methods to infrared window design
NASA Astrophysics Data System (ADS)
Osmer, Kurt A.; Pruszynski, Charles J.
1990-10-01
Dr. Genichi Taguchi, a prominent quality consultant, reduced a branch of statistics known as "Design of Experiments" to a cookbook methodology that can be employed by any competent engineer. This technique has been extensively employed by Japanese manufacturers, and is widely credited with helping them attain their current level of success in low cost, high quality product design and fabrication. Although this technique was originally put forth as a tool to streamline the determination of improved production processes, it can also be applied to a wide range of engineering problems. As part of an internal research project, this method of experimental design has been adapted to window trade studies and materials research. Two of these analyses are presented herein, and have been chosen to illustrate the breadth of applications to which the Taguchi method can be applied.
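As a hedged illustration of the cookbook workflow described above, the sketch below runs a standard L4(2^3) orthogonal array and ranks factor main effects by a larger-is-better signal-to-noise ratio; the factor names and responses are invented for illustration and are not from the paper:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors (levels coded 0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
factors = ["coating", "thickness", "anneal"]      # hypothetical window design factors
response = np.array([62.0, 70.0, 55.0, 58.0])     # hypothetical figure of merit per run

# "Larger-is-better" signal-to-noise ratio for each run (single replicate here).
sn = -10.0 * np.log10(1.0 / response**2)

# Main effect of each factor: mean S/N at level 1 minus mean S/N at level 0.
for j, name in enumerate(factors):
    effect = sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean()
    print(f"{name:10s} main effect on S/N: {effect:+.2f} dB")
```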
Rapid analysis of pharmaceutical drugs using LIBS coupled with multivariate analysis.
Tiwari, P K; Awasthi, S; Kumar, R; Anand, R K; Rai, P K; Rai, A K
2018-02-01
Type 2 diabetes drug tablets containing voglibose at dose strengths of 0.2 and 0.3 mg from various brands have been examined using the laser-induced breakdown spectroscopy (LIBS) technique. Statistical methods such as principal component analysis (PCA) and partial least squares regression (PLSR) have been employed on the LIBS spectral data for classifying the drug samples and developing calibration models. We have developed a ratio-based calibration model applying PLSR, in which the relative spectral intensity ratios H/C, H/N and O/N are used. Further, the developed model has been employed to predict the relative concentration of elements in unknown drug samples. The experiment has been performed in air and in an argon atmosphere, and the obtained results have been compared. The present model provides a rapid spectroscopic method for drug analysis with high statistical significance for online control and measurement processes in a wide variety of pharmaceutical industrial applications.
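A minimal sketch of the two statistical steps named above (unsupervised PCA exploration followed by a PLSR calibration on intensity ratios), using scikit-learn and synthetic data; the ratio values, group sizes and dose strengths are placeholders, not the paper's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
# Hypothetical intensity ratios (H/C, H/N, O/N) for 30 tablets of two dose strengths.
X = rng.normal(loc=[1.2, 0.8, 0.5], scale=0.05, size=(30, 3))
X[15:] += 0.15                                    # shift the second group of samples
y = np.r_[np.full(15, 0.2), np.full(15, 0.3)]     # dose strength in mg

# Unsupervised exploration: how much structure do two principal components capture?
pca = PCA(n_components=2).fit(X)
print("variance explained by 2 PCs:", round(pca.explained_variance_ratio_.sum(), 3))

# Supervised calibration: PLSR relating the ratios to dose strength.
pls = PLSRegression(n_components=2).fit(X, y)
print("predicted dose of first sample:", round(float(np.ravel(pls.predict(X[:1]))[0]), 3))
```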
Reproducibility of ZrO2-based freeze casting for biomaterials.
Naleway, Steven E; Fickas, Kate C; Maker, Yajur N; Meyers, Marc A; McKittrick, Joanna
2016-04-01
The processing technique of freeze casting has been intensely researched for its potential to create porous scaffold and infiltrated composite materials for biomedical implants and structural materials. However, in order for this technique to be employed medically or commercially, it must be able to reliably produce materials in great quantities with similar microstructures and properties. Here we investigate the reproducibility of the freeze casting process by independently fabricating three sets of eight ZrO2-epoxy composite scaffolds with the same processing conditions but varying solid loading (10, 15 and 20 vol.%). Statistical analyses (one-way ANOVA and Tukey's HSD tests) run upon measurements of the microstructural dimensions of these composite scaffold sets show that, while the majority of microstructures are similar, in all cases the composite scaffolds display statistically significant variability. In addition, composite scaffolds were mechanically compressed and statistically analyzed. Similar to the microstructures, almost all of their resultant properties displayed significant variability, though most composite scaffolds were similar. These results suggest that additional research to improve control of the freeze casting technique is required before scaffolds and composite scaffolds can reliably be reproduced for commercial or medical applications. Copyright © 2015 Elsevier B.V. All rights reserved.
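The statistical comparison described above can be sketched as follows; this is a generic one-way ANOVA plus Tukey HSD workflow on invented wall-thickness data, not the study's measurements:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Hypothetical lamellar wall thicknesses (um) for three solid loadings, 8 scaffolds each.
groups = {"10%": rng.normal(12.0, 1.0, 8),
          "15%": rng.normal(14.0, 1.0, 8),
          "20%": rng.normal(14.5, 1.0, 8)}

# One-way ANOVA across the three loading groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's HSD post-hoc test for pairwise group differences.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))
```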
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times, on the fly, by a statistical learning technique, multi-level Gaussian process regression; this has been demonstrated in previous work [1]. Based on the previous work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
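A minimal sketch of the fill-in idea using Gaussian process regression (scikit-learn, single-fidelity, one spatial dimension); the field, the "failed-processor" gap and the kernel choices are illustrative assumptions, not the paper's multi-level implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 50)[:, None]
u_true = np.sin(2 * np.pi * x).ravel()            # "fine" field we wish to recover

# Pretend a processor failed: only a sparse, noisy subset survives, standing in
# for the coarse/auxiliary information shared by the other processors.
keep = rng.choice(50, size=12, replace=False)
x_obs, u_obs = x[keep], u_true[keep] + rng.normal(0, 0.05, 12)

kernel = 1.0 * RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_obs, u_obs)
u_fill, u_std = gpr.predict(x, return_std=True)   # filled-in field with uncertainty

print("max abs fill-in error:", float(np.max(np.abs(u_fill - u_true))))
print("max posterior std:    ", float(u_std.max()))
```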
Design and analysis issues in quantitative proteomics studies.
Karp, Natasha A; Lilley, Kathryn S
2007-09-01
Quantitative proteomics is the comparison of distinct proteomes which enables the identification of protein species which exhibit changes in expression or post-translational state in response to a given stimulus. Many different quantitative techniques are being utilized and generate large datasets. Independent of the technique used, these large datasets need robust data analysis to ensure valid conclusions are drawn from such studies. Approaches to address the problems that arise with large datasets are discussed to give insight into the types of statistical analyses of data appropriate for the various experimental strategies that can be employed by quantitative proteomic studies. This review also highlights the importance of employing a robust experimental design and highlights various issues surrounding the design of experiments. The concepts and examples discussed within will show how robust design and analysis will lead to confident results that will ensure quantitative proteomics delivers.
Modeling And Detecting Anomalies In Scada Systems
NASA Astrophysics Data System (ADS)
Svendsen, Nils; Wolthusen, Stephen
The detection of attacks and intrusions based on anomalies is hampered by the limits of specificity underlying the detection techniques. However, in the case of many critical infrastructure systems, domain-specific knowledge and models can impose constraints that potentially reduce error rates. At the same time, attackers can use their knowledge of system behavior to mask their manipulations, causing adverse effects to be observed only after a significant period of time. This paper describes elementary statistical techniques that can be applied to detect anomalies in critical infrastructure networks. A SCADA system employed in liquefied natural gas (LNG) production is used as a case study.
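One elementary statistical technique of the kind referred to above is a trailing-window Shewhart-style test; the sketch below flags readings outside mean ± k·sigma of recent history, with the LNG pressure values and thresholds invented for illustration:

```python
import numpy as np

def shewhart_anomalies(readings, window=50, k=3.0):
    """Flag samples that fall outside mean +/- k*sigma of a trailing window."""
    readings = np.asarray(readings, dtype=float)
    flags = np.zeros(readings.size, dtype=bool)
    for i in range(window, readings.size):
        ref = readings[i - window:i]
        mu, sigma = ref.mean(), ref.std(ddof=1)
        flags[i] = abs(readings[i] - mu) > k * sigma
    return flags

rng = np.random.default_rng(4)
pressure = rng.normal(55.0, 0.5, 500)       # hypothetical LNG line pressure (bar)
pressure[400:] += 3.0                        # a manipulation or fault begins here
print("first flagged sample:", int(np.argmax(shewhart_anomalies(pressure))))
```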
Mann, Michael P.; Rizzardo, Jule; Satkowski, Richard
2004-01-01
Accurate streamflow statistics are essential to water resource agencies involved in both science and decision-making. When long-term streamflow data are lacking at a site, estimation techniques are often employed to generate streamflow statistics. However, procedures for accurately estimating streamflow statistics often are lacking. When estimation procedures are developed, they often are not evaluated properly before being applied. Use of unevaluated or underevaluated flow-statistic estimation techniques can result in improper water-resources decision-making. The California State Water Resources Control Board (SWRCB) uses two key techniques, a modified rational equation and drainage basin area-ratio transfer, to estimate streamflow statistics at ungaged locations. These techniques have been implemented to varying degrees, but have not been formally evaluated. For estimating peak flows at the 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals, the SWRCB uses the U.S. Geological Survey's (USGS) regional peak-flow equations. In this study, done cooperatively by the USGS and SWRCB, the SWRCB estimated several flow statistics at 40 USGS streamflow gaging stations in the north coast region of California. The SWRCB estimates were made without reference to USGS flow data. The USGS used the streamflow data provided by the 40 stations to generate flow statistics that could be compared with SWRCB estimates for accuracy. While some SWRCB estimates compared favorably with USGS statistics, results were subject to varying degrees of error over the region. Flow-based estimation techniques generally performed better than rain-based methods, especially for estimation of December 15 to March 31 mean daily flows. The USGS peak-flow equations also performed well, but tended to underestimate peak flows. The USGS equations performed within reported error bounds, but will require updating in the future as peak-flow data sets grow larger. Little correlation was discovered between estimation errors and geographic locations or various basin characteristics. However, for 25-percentile year mean-daily-flow estimates for December 15 to March 31, the greatest estimation errors were at east San Francisco Bay area stations with mean annual precipitation less than or equal to 30 inches, and estimated 2-year/24-hour rainfall intensity less than 3 inches.
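A hedged sketch of the drainage-basin area-ratio transfer mentioned above; the exponent and the example flows and areas are assumptions for illustration, not values from the SWRCB or USGS procedures:

```python
def area_ratio_transfer(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Drainage-basin area-ratio transfer of a flow statistic.

    q_gaged       flow statistic at the index (gaged) site
    area_*        drainage areas in consistent units
    exponent      often taken near 1.0; regional studies may fit other values
    """
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# Hypothetical example: transfer a 2-year peak flow from a 120 mi^2 gaged basin
# to a nearby 85 mi^2 ungaged basin.
print(f"{area_ratio_transfer(2400.0, 120.0, 85.0):.0f} cfs")
```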
The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bihn T. Pham; Jeffrey J. Einerson
2010-06-01
This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objectives of this work include (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
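Of the three techniques named above, control charting is the simplest to sketch; the following tabular CUSUM example (invented thermocouple readings and chart parameters, not NDMAS code) shows how a slow drift would raise an alarm:

```python
import numpy as np

def tabular_cusum(x, target, k=0.5, h=5.0):
    """One-sided tabular CUSUM chart; returns the index of the first upper-limit alarm.

    k and h are the allowance and decision limit, here in absolute reading units.
    """
    s_hi = 0.0
    for i, xi in enumerate(np.asarray(x, dtype=float)):
        s_hi = max(0.0, s_hi + (xi - target) - k)
        if s_hi > h:
            return i
    return None

rng = np.random.default_rng(5)
tc = rng.normal(1000.0, 2.0, 300)       # hypothetical thermocouple readings (deg C)
tc[200:] += 4.0                          # drift, e.g. incipient thermocouple degradation
print("CUSUM alarm at sample:", tabular_cusum(tc, target=1000.0, k=1.0, h=8.0))
```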
Predicting Macroscale Effects Through Nanoscale Features
2012-01-01
errors become incorrectly computed by the basic OLS technique. To test for the presence of heteroscedasticity, the Breusch-Pagan / Cook-Weisberg test ... is employed, with the test statistic distributed as χ² with degrees of freedom equal to the number of regressors. The Breusch-Pagan / Cook ... between shock sensitivity and Sm does not exhibit any heteroscedasticity. The Breusch-Pagan / Cook-Weisberg test provides χ²(1) = 1.73, which
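The test described in this excerpt can be reproduced generically with statsmodels; the regressor, response and data below are invented stand-ins, not the report's dataset:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(6)
feature_sm = rng.uniform(1.0, 5.0, 60)                       # hypothetical "Sm" feature
sensitivity = 2.0 + 0.8 * feature_sm + rng.normal(0, 0.3, 60)  # hypothetical response

X = sm.add_constant(feature_sm)
ols_fit = sm.OLS(sensitivity, X).fit()

# Breusch-Pagan / Cook-Weisberg test: the LM statistic is chi-squared distributed
# with degrees of freedom equal to the number of regressors (one here).
lm_stat, lm_pvalue, _, _ = het_breuschpagan(ols_fit.resid, X)
print(f"chi2(1) = {lm_stat:.2f}, p = {lm_pvalue:.3f}")
```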
Sayago, Ana; González-Domínguez, Raúl; Beltrán, Rafael; Fernández-Recamales, Ángeles
2018-09-30
This work explores the potential of multi-element fingerprinting in combination with advanced data mining strategies to assess the geographical origin of extra virgin olive oil samples. For this purpose, the concentrations of 55 elements were determined in 125 oil samples from multiple Spanish geographic areas. Several unsupervised and supervised multivariate statistical techniques were used to build classification models and investigate the relationship between mineral composition of olive oils and their provenance. Results showed that Spanish extra virgin olive oils exhibit characteristic element profiles, which can be differentiated on the basis of their origin in accordance with three geographical areas: Atlantic coast (Huelva province), Mediterranean coast and inland regions. Furthermore, statistical modelling yielded high sensitivity and specificity, principally when random forest and support vector machines were employed, thus demonstrating the utility of these techniques in food traceability and authenticity research. Copyright © 2018 Elsevier Ltd. All rights reserved.
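A minimal sketch of the supervised step described above, fitting random forest and support vector machine classifiers to synthetic element concentrations with cross-validation; the element values and class sizes are placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# Hypothetical concentrations of five elements for oils from three regions.
X = np.vstack([rng.normal(loc, 1.0, size=(40, 5))
               for loc in ([5, 3, 1, 2, 4], [6, 3, 2, 2, 4], [5, 5, 1, 3, 3])])
y = np.repeat(["Atlantic", "Mediterranean", "Inland"], 40)

models = [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
          ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf")))]
for name, model in models:
    acc = cross_val_score(model, X, y, cv=5).mean()   # 5-fold cross-validated accuracy
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```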
Artificial neural networks in evaluation and optimization of modified release solid dosage forms.
Ibrić, Svetlana; Djuriš, Jelena; Parojčić, Jelena; Djurić, Zorica
2012-10-18
Implementation of the Quality by Design (QbD) approach in pharmaceutical development has compelled researchers in the pharmaceutical industry to employ Design of Experiments (DoE) as a statistical tool, in product development. Among all DoE techniques, response surface methodology (RSM) is the one most frequently used. Progress of computer science has had an impact on pharmaceutical development as well. Simultaneous with the implementation of statistical methods, machine learning tools took an important place in drug formulation. Twenty years ago, the first papers describing application of artificial neural networks in optimization of modified release products appeared. Since then, a lot of work has been done towards implementation of new techniques, especially Artificial Neural Networks (ANN) in modeling of production, drug release and drug stability of modified release solid dosage forms. The aim of this paper is to review artificial neural networks in evaluation and optimization of modified release solid dosage forms.
Statistical mechanics of broadcast channels using low-density parity-check codes.
Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David
2003-03-01
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
Zapater, E; Moreno, S; Fortea, M A; Campos, A; Armengot, M; Basterra, J
2000-11-01
Many studies have investigated prognostic factors in laryngeal carcinoma, with sometimes conflicting results. Apart from the importance of environmental factors, the different statistical methods employed may have influenced such discrepancies. A program based on artificial intelligence techniques is designed to determine the prognostic factors in a series of 122 laryngeal carcinomas. The results obtained are compared with those derived from two classical statistical methods (Cox regression and mortality tables). Tumor location was found to be the most important prognostic factor by all methods. The proposed intelligent system is found to be a sound method capable of detecting exceptional cases.
van der Wel, Kjetil A; Dahl, Espen; Thielen, Karsten
2011-12-01
The aim of this paper is to examine educational inequalities in the risk of non-employment among people with illnesses and how they vary between European countries with different welfare state characteristics. In doing so, the paper adds to the growing literature on welfare states and social inequalities in health by studying the often overlooked 'sickness'-dimension of health, namely employment behaviour among people with illnesses. We use European Union Statistics on Income and Living Conditions (EU-SILC) data from 2005 covering 26 European countries linked to country characteristics derived from Eurostat and OECD that include spending on active labour market policies, benefit generosity, income inequality, and employment protection. Using multilevel techniques we find that comprehensive welfare states have lower absolute and relative social inequalities in sickness, as well as more favourable general rates of non-employment. Hence, regarding sickness, welfare resources appear to trump welfare disincentives. Copyright © 2011 Elsevier Ltd. All rights reserved.
Ensembles of radial basis function networks for spectroscopic detection of cervical precancer
NASA Technical Reports Server (NTRS)
Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.
1998-01-01
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.
Estimation of urban runoff and water quality using remote sensing and artificial intelligence.
Ha, S R; Park, S Y; Park, D H
2003-01-01
Water quality and quantity of runoff are strongly dependent on the landuse and landcover (LULC) criteria. In this study, we developed an improved parameter estimation procedure for the environmental model using remote sensing (RS) and artificial intelligence (AI) techniques. Landsat TM multi-band (7 bands) and Korea Multi-Purpose Satellite (KOMPSAT) panchromatic data were selected for input data processing. We employed two kinds of artificial intelligence techniques, RBF-NN (radial-basis-function neural network) and ANN (artificial neural network), to classify LULC of the study area. A bootstrap resampling method, a statistical technique, was employed to generate the confidence intervals and distribution of the unit load. SWMM was used to simulate the urban runoff and water quality and applied to the study watershed. The condition of urban flow and non-point contamination was simulated with rainfall-runoff and measured water quality data. The estimated total runoff, peak time, and pollutant generation varied considerably according to the classification accuracy and percentile unit load applied. The proposed procedure could be efficiently applied to water quality and runoff simulation in a rapidly changing urban area.
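The bootstrap step mentioned above can be sketched generically as follows; the per-event unit-load values are invented for illustration:

```python
import numpy as np

def bootstrap_ci(sample, stat=np.mean, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic of a sample."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    reps = np.array([stat(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Hypothetical per-event unit loads (kg/ha) for one land-use class.
unit_load = [1.8, 2.4, 2.1, 3.0, 1.6, 2.7, 2.2, 2.9, 1.9, 2.5]
lo, hi = bootstrap_ci(unit_load)
print(f"95% CI for mean unit load: [{lo:.2f}, {hi:.2f}] kg/ha")
```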
Decision rules for unbiased inventory estimates
NASA Technical Reports Server (NTRS)
Argentiero, P. D.; Koch, D.
1979-01-01
An efficient and accurate procedure for estimating inventories from remote sensing scenes is presented. In place of the conventional and expensive full dimensional Bayes decision rule, a one-dimensional feature extraction and classification technique was employed. It is shown that this efficient decision rule can be used to develop unbiased inventory estimates and that for large sample sizes typical of satellite derived remote sensing scenes, resulting accuracies are comparable or superior to more expensive alternative procedures. Mathematical details of the procedure are provided in the body of the report and in the appendix. Results of a numerical simulation of the technique using statistics obtained from an observed LANDSAT scene are included. The simulation demonstrates the effectiveness of the technique in computing accurate inventory estimates.
Strategies for Fermentation Medium Optimization: An In-Depth Review
Singh, Vineeta; Haque, Shafiul; Niwas, Ram; Srivastava, Akansha; Pasupuleti, Mukesh; Tripathi, C. K. M.
2017-01-01
Optimization of the production medium is required to maximize metabolite yield. This can be achieved using a wide range of techniques, from classical "one-factor-at-a-time" approaches to modern statistical and mathematical techniques, viz. artificial neural networks (ANN), genetic algorithms (GA), etc. Every technique comes with its own advantages and disadvantages, and despite their drawbacks some techniques are applied to obtain the best results. Using various optimization techniques in combination can also provide the desired results. In this article an attempt has been made to review the media optimization techniques currently applied during the fermentation process of metabolite production. A comparative analysis of the merits and demerits of various conventional as well as modern optimization techniques has been done, and a logical basis for selecting the design of the fermentation medium is given in the present review. Overall, this review provides the rationale for selecting a suitable optimization technique for media design employed during the fermentation process of metabolite production. PMID:28111566
Sutlive, Thomas G; Mabry, Lance M; Easterling, Emmanuel J; Durbin, Jose D; Hanson, Stephen L; Wainner, Robert S; Childs, John D
2009-07-01
To determine whether military health care beneficiaries with low back pain (LBP) who are likely to respond successfully to spinal manipulation experience a difference in short-term clinical outcomes based on the manipulation technique that is used. Sixty patients with LBP identified as likely responders to manipulation underwent a standardized clinical examination and were randomized to receive a lumbopelvic (LP) or lumbar neutral gap (NG) manipulation technique. Outcome measures were a numeric pain rating scale and the modified Oswestry Disability Questionnaire. Both the LP and NG groups experienced statistically significant reductions in pain and disability at 48 hours postmanipulation. The improvements seen in each group were small because of the short follow-up. There were no statistically significant or clinically meaningful differences in pain or disability between the two groups. The two manipulation techniques used in this study were equally effective at reducing pain and disability when compared at 48 hours posttreatment. Clinicians may employ either technique for the treatment of LBP and can expect similar outcomes in those who satisfy the clinical prediction rule (CPR). Further research is required to determine whether differences exist at longer-term follow-up periods, after multiple treatment sessions, or in different clinical populations.
May, Philip A.; Tabachnick, Barbara G.; Gossage, J. Phillip; Kalberg, Wendy O.; Marais, Anna-Susan; Robinson, Luther K.; Manning, Melanie A.; Blankenship, Jason; Buckley, David; Hoyme, H. Eugene; Adnams, Colleen M.
2013-01-01
Objective: To provide an analysis of multiple predictors of cognitive and behavioral traits for children with fetal alcohol spectrum disorders (FASD). Method: Multivariate correlation techniques were employed with maternal and child data from epidemiologic studies in a community in South Africa. Data on 561 first grade children with fetal alcohol syndrome (FAS), partial FAS (PFAS), and not FASD and their mothers were analyzed by grouping 19 maternal variables into categories (physical, demographic, childbearing, and drinking) and employed in structural equation models (SEM) to assess correlates of child intelligence (verbal and non-verbal) and behavior. Results: A first SEM utilizing only seven maternal alcohol use variables to predict cognitive/behavioral traits was statistically significant (B = 3.10, p < .05), but explained only 17.3% of the variance. The second model incorporated multiple maternal variables and was statistically significant, explaining 55.3% of the variance. Significantly correlated with low intelligence and problem behavior were demographic (B = 3.83, p < .05) (low maternal education, low socioeconomic status (SES), and rural residence) and maternal physical characteristics (B = 2.70, p < .05) (short stature, small head circumference, and low weight). Childbearing history and alcohol use composites were not statistically significant in the final complex model, and were overpowered by SES and maternal physical traits. Conclusions: While other analytic techniques have amply demonstrated the negative effects of maternal drinking on intelligence and behavior, this highly-controlled analysis of multiple maternal influences reveals that maternal demographics and physical traits make a significant enabling or disabling contribution to child functioning in FASD. PMID:23751886
Carney, Timothy Jay; Morgan, Geoffrey P.; Jones, Josette; McDaniel, Anna M.; Weaver, Michael; Weiner, Bryan; Haggstrom, David A.
2014-01-01
Our conceptual model demonstrates our goal to investigate the impact of clinical decision support (CDS) utilization on cancer screening improvement strategies in the community health care (CHC) setting. We employed a dual modeling technique using both statistical and computational modeling to evaluate impact. Our statistical model used the Spearman's Rho test to evaluate the strength of the relationship between our proximal outcome measures (CDS utilization) and our distal outcome measure (provider self-reported cancer screening improvement). Our computational model relied on network evolution theory and made use of a tool called Construct-TM to model the use of CDS measured by the rate of organizational learning. We used previously collected survey data from community health centers in the Cancer Health Disparities Collaborative (HDCC). Our intent is to demonstrate the added value gained by using a computational modeling tool in conjunction with a statistical analysis when evaluating the impact of a health information technology, in the form of CDS, on health care quality process outcomes such as facility-level screening improvement. Significant simulated disparities in organizational learning over time were observed between community health centers beginning the simulation with high and low clinical decision support capability. PMID:24953241
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
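The report's exact reinversion procedure is not spelled out in this abstract, so the sketch below uses a standard Newton-Schulz (Hotelling) iteration as a plausible stand-in, combined with the small-tolerance rounding described above:

```python
import numpy as np

def refine_inverse(A, X, sweeps=1, tol=1e-13):
    """Improve an approximate inverse X of A by Newton-Schulz iteration,
    then round entries below a small tolerance to exact zero."""
    n = A.shape[0]
    for _ in range(sweeps):
        X = X @ (2.0 * np.eye(n) - A @ X)   # quadratically convergent refinement
    X[np.abs(X) < tol] = 0.0                # tolerance-factor rounding
    return X

rng = np.random.default_rng(8)
A = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)                 # well-conditioned test matrix
X0 = np.linalg.inv(A) + rng.normal(scale=1e-6, size=(6, 6))   # perturbed approximate inverse

for label, X in [("before refinement", X0), ("after refinement ", refine_inverse(A, X0))]:
    print(label, "residual ||I - AX|| =", np.linalg.norm(np.eye(6) - A @ X))
```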
Prevention of the Posttraumatic Fibrotic Response in Joints
2015-10-01
surgical procedures and subsequent collection of tissues have been developed and are currently used on a regular basis. Major Task 4: Evaluating the ... needed to evaluate the utility of the inhibitory antibody to reduce the flexion contracture of injured knee joints. The employed techniques include ... second surgery to remove a pin, and it did not change by the end of the 32nd week. Major Task 5: Data analysis and statistical evaluation
Stellar photometry with the Wide Field/Planetary Camera of the Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Holtzman, Jon A.
1990-07-01
Simulations of Wide Field/Planetary Camera (WF/PC) images are analyzed in order to discover the most effective techniques for stellar photometry and to evaluate the accuracy and limitations of these techniques. The capabilities and operation of the WF/PC and the simulations employed in the study are described. The basic techniques of stellar photometry and methods to improve these techniques for the WF/PC are discussed. The correct parameters for star detection, aperture photometry, and point-spread function (PSF) fitting with the DAOPHOT software of Stetson (1987) are determined. Consideration is given to undersampling of the stellar images by the detector; variations in the PSF; and the crowding of the stellar images. It is noted that, with some changes, DAOPHOT is able to generate photometry almost to the level of photon statistics.
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
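A minimal sketch of the randomization (permutation) test proposed as an alternative above; the two groups of scores are invented for illustration:

```python
import numpy as np

def randomization_test(a, b, n_perm=10000, seed=0):
    """Two-sample randomization test on the difference of means.

    The p-value is the fraction of label permutations whose absolute mean
    difference is at least as large as the observed one; no random sampling
    from a wider population is assumed.
    """
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled = np.concatenate([a, b])
    observed = abs(a.mean() - b.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        count += abs(pooled[:a.size].mean() - pooled[a.size:].mean()) >= observed
    return count / n_perm

# Hypothetical convenience-sample scores from two wards.
print("p =", randomization_test([7, 5, 6, 8, 6, 7], [5, 4, 6, 5, 4, 5]))
```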
Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses
Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.
2014-01-01
The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) numbers of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations on the applicability of standard methods of statistical inference, which leads to under- or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches, are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
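As a compact illustration of the ARMA-type approach mentioned above, the sketch below fits an ARIMA(1,1,1) model to synthetic annual fatality counts with statsmodels; the series, model order and forecast horizon are illustrative choices, not the paper's case studies:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
# Hypothetical annual fatality counts with a downward trend and serial dependence.
fatalities = 900.0 - 12.0 * np.arange(25) + rng.normal(0, 25, 25).cumsum()

# ARIMA(1,1,1): the first difference removes the trend, while the AR and MA terms
# absorb the serial dependency that ordinary regression would ignore.
fit = ARIMA(fatalities, order=(1, 1, 1)).fit()
print(fit.params)               # estimated AR, MA and innovation-variance terms
print(fit.forecast(steps=3))    # forecast for the next three years
```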
Factors contributing to academic achievement: a Bayesian structure equation modelling study
NASA Astrophysics Data System (ADS)
Payandeh Najafabadi, Amir T.; Omidi Najafabadi, Maryam; Farid-Rohani, Mohammad Reza
2013-06-01
In Iran, high school graduates enter university after taking a very difficult entrance exam called the Konkoor. Therefore, only the top-performing students are admitted by universities to continue their bachelor's education in statistics. Surprisingly, most such students fall into the following categories: (1) do not succeed in their education despite their excellent performance on the Konkoor and in high school; (2) graduate with a grade point average (GPA) that is considerably lower than their high school GPA; (3) continue their master's education in majors other than statistics; and (4) try to find jobs unrelated to statistics. This article employs a well-known and powerful statistical technique, Bayesian structural equation modelling (SEM), to study the academic success of recent graduates who have studied statistics at Shahid Beheshti University in Iran. This research: (i) considered academic success as a latent variable, measured by GPA and other indicators of academic success (see below) of students in the target population; (ii) employed Bayesian SEM, which works properly for small sample sizes and ordinal variables; (iii) developed, from the literature, five main factors that affect academic success; and (iv) considered several standard psychological tests and measured characteristics such as 'self-esteem' and 'anxiety'. We then study the impact of such factors on the academic success of the target population. Six factors that positively impact student academic success were identified in the following order of relative impact (from greatest to least): 'Teaching-Evaluation', 'Learner', 'Environment', 'Family', 'Curriculum' and 'Teaching Knowledge'. Particularly influential variables within each factor have also been noted.
Segmentation of prostate boundaries from ultrasound images using statistical shape model.
Shen, Dinggang; Zhan, Yiqiang; Davatzikos, Christos
2003-04-01
This paper presents a statistical shape model for the automatic prostate segmentation in transrectal ultrasound images. A Gabor filter bank is first used to characterize the prostate boundaries in ultrasound images in both multiple scales and multiple orientations. The Gabor features are further reconstructed to be invariant to the rotation of the ultrasound probe and incorporated in the prostate model as image attributes for guiding the deformable segmentation. A hierarchical deformation strategy is then employed, in which the model adaptively focuses on the similarity of different Gabor features at different deformation stages using a multiresolution technique, i.e., coarse features first and fine features later. A number of successful experiments validate the algorithm.
Australian Oceanographic Data Centre Bulletin 16.
1983-05-01
It is inevitable that, with the quantities of data involved, some bad data will be archived. In order to exclude this, various filtering techniques will be employed. ... analysed for statistical properties (e.g. burst mean, variance, exceedance and spectral properties) and certain values are correlated with relevant forcing ... [The remainder of this excerpt is an instrument data listing (current-meter deployment details, axis bearing, and mean resolved current magnitude of about 7.1 cm/s) that is not recoverable as prose.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mestrovic, Ante; Clark, Brenda G.; Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia
2005-11-01
Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
Contact thermal shock test of ceramics
NASA Technical Reports Server (NTRS)
Rogers, W. P.; Emery, A. F.
1992-01-01
A novel quantitative thermal shock test of ceramics is described. The technique employs contact between a metal-cooling rod and hot disk-shaped specimen. In contrast with traditional techniques, the well-defined thermal boundary condition allows for accurate analyses of heat transfer, stress, and fracture. Uniform equibiaxial tensile stresses are induced in the center of the test specimen. Transient specimen temperature and acoustic emission are monitored continuously during the thermal stress cycle. The technique is demonstrated with soda-lime glass specimens. Experimental results are compared with theoretical predictions based on a finite-element method thermal stress analysis combined with a statistical model of fracture. Material strength parameters are determined using concentric ring flexure tests. Good agreement is found between experimental results and theoretical predictions of failure probability as a function of time and initial specimen temperature.
Comparison Of The Performance Of Hybrid Coders Under Different Configurations
NASA Astrophysics Data System (ADS)
Gunasekaran, S.; Raina J., P.
1983-10-01
Picture bandwidth reduction employing DPCM and Orthogonal Transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive, retaining the special advantages of both techniques. In recent times, interest has shifted to hybrid coding, and in the absence of a report on the relative performance of hybrid coders at different configurations, an attempt has been made to collate the information. Fourier, Hadamard, Slant, Sine, Cosine and Haar transforms have been considered for the present work.
Classifiers utilized to enhance acoustic based sensors to identify round types of artillery/mortar
NASA Astrophysics Data System (ADS)
Grasing, David; Desai, Sachi; Morcos, Amir
2008-04-01
Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during the launch events. Acoustic sensors are used to exploit the sound waveform generated by the blast, identifying mortar and artillery variants (as type A, etcetera) through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges emphasize varying size events at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, frequency spectrum, and higher frequency details found within different energy bands. The processes described herein extend current technologies, which emphasize acoustic sensor systems to provide such situational awareness.
Artillery/mortar type classification based on detected acoustic transients
NASA Astrophysics Data System (ADS)
Morcos, Amir; Grasing, David; Desai, Sachi
2008-04-01
Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during the launch events. Acoustic sensors are used to exploit the sound waveform generated by the blast, identifying mortar and artillery variants (as type A, etcetera) through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges emphasize varying size events at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, frequency spectrum, and higher frequency details found within different energy bands. The processes described herein extend current technologies, which emphasize acoustic sensor systems to provide such situational awareness.
Artillery/mortar round type classification to increase system situational awareness
NASA Astrophysics Data System (ADS)
Desai, Sachi; Grasing, David; Morcos, Amir; Hohil, Myron
2008-04-01
Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during the launch events. Acoustic sensors are used to exploit the sound waveform generated by the blast, identifying mortar and artillery variants (as type A, etcetera) through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges emphasize varying size events at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, frequency spectrum, and higher frequency details found within different energy bands. The processes described herein extend current technologies, which emphasize acoustic sensor systems to provide such situational awareness.
Surface and Flow Field Measurements on the FAITH Hill Model
NASA Technical Reports Server (NTRS)
Bell, James H.; Heineck, James T.; Zilliac, Gregory; Mehta, Rabindra D.; Long, Kurtis R.
2012-01-01
A series of experimental tests, using both qualitative and quantitative techniques, was conducted to characterize both surface and off-surface flow characteristics of an axisymmetric, modified-cosine-shaped, wall-mounted hill named "FAITH" (Fundamental Aero Investigates The Hill). Two separate models were employed: a 6" high, 18" base diameter machined aluminum model that was used for wind tunnel tests and a smaller scale (2" high, 6" base diameter) sintered nylon version that was used in the water channel facility. Wind tunnel and water channel tests were conducted at mean test section speeds of 165 fps (Reynolds Number based on height = 500,000) and 0.1 fps (Reynolds Number of 1000), respectively. The ratio of model height to boundary layer height was approximately 3 for both tests. Qualitative techniques that were employed to characterize the complex flow included surface oil flow visualization for the wind tunnel tests, and dye injection for the water channel tests. Quantitative techniques that were employed to characterize the flow included Cobra Probe to determine point-wise steady and unsteady 3D velocities, Particle Image Velocimetry (PIV) to determine 3D velocities and turbulence statistics along specified planes, Pressure Sensitive Paint (PSP) to determine mean surface pressures, and Fringe Imaging Skin Friction (FISF) to determine surface skin friction (magnitude and direction). This initial report summarizes the experimental set-up, techniques used, and data acquired, and describes some details of the dataset that is being constructed for use by other researchers, especially the CFD community. Subsequent reports will discuss the data and their interpretation in more detail.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
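To make the sampling machinery above concrete, the sketch below implements a basic adaptive Metropolis sampler in Python; it is not the DRAM or DREAM algorithms cited in the dissertation, and the Gaussian toy posterior, tuning constants, and function name `adaptive_metropolis` are illustrative assumptions only.

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=20000, adapt_start=1000, eps=1e-8):
    """Basic adaptive Metropolis: the proposal covariance is periodically
    re-estimated from the chain history (Haario-style scaling); a simplified
    stand-in for DRAM/DREAM, for illustration only."""
    theta = np.array(theta0, float)
    d = theta.size
    sd = 2.4**2 / d                      # standard adaptive-Metropolis scaling
    cov = np.eye(d) * 0.1                # initial proposal covariance (assumption)
    chain = np.zeros((n_iter, d))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = np.random.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
        if i >= adapt_start and i % 100 == 0:          # adapt proposal from history
            cov = sd * (np.cov(chain[:i + 1].T) + eps * np.eye(d))
    return chain

# Toy example: sample a 2-D correlated Gaussian "posterior".
target_cov = np.array([[1.0, 0.6], [0.6, 1.0]])
prec = np.linalg.inv(target_cov)
log_post = lambda th: -0.5 * th @ prec @ th
samples = adaptive_metropolis(log_post, [3.0, -3.0])
print(samples[5000:].mean(axis=0))
print(np.cov(samples[5000:].T))
```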
Career satisfaction of Jordanian dental hygienists.
Malkawi, Z A
2016-11-01
The aim of this study was to determine the factors that affect Jordanian dental hygienists' career satisfaction, including financial issues, employment settings and policies. A randomized sample of 102 dental hygienists with a bachelor's degree was selected from the entire population of Jordanian dental hygienists. Participants received a cover letter with a questionnaire. Findings were analysed using descriptive data techniques. A chi-square test was used to determine statistically significant differences across demographic variables and career satisfaction factors. About 22.5% of the participants were not working as dental hygienists. The dental hygiene profession in Jordan is predominantly (74.0%) female. A majority (51.9%) were employed at JUST, and a minority (6.3%) at the MOH. Most (56.4%) were aged 24-29 years, and 62.2% had ≤1 child. About 53.1% were employed by a general dentist. About 35.3% had ≥4 years' job experience. The largest proportion (47.6%) expressed a high level of satisfaction with the dental materials and equipment available to practice; however, only 2.0% expressed a very high level of satisfaction with employment policies. About 32.4% expressed a low level of satisfaction with salary level. A minority (2.0%) expressed dissatisfaction with the quality of the dentist's work. Statistically significant associations were found between workplace and dental materials and equipment to practice work, salary level, and employment policies (P = 0.003, P = 0.003, P = 0.026), and between number of children and flexibility in work hours (P = 0.001). Jordanian dental hygienists' workplace satisfaction was significantly associated with dental materials and equipment to practice work, salary level, and employment policies. Understanding the working patterns of dental hygienists in Jordan is important to increase their career satisfaction levels. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Hill, C. L.
1984-01-01
A computer-implemented classification has been derived from Landsat-4 Thematic Mapper data acquired over Baldwin County, Alabama on January 15, 1983. One set of spectral signatures was developed from the data by utilizing a 3x3 pixel sliding window approach. An analysis of the classification produced from this technique identified forested areas. Additional information regarding only the forested areas was extracted by employing a pixel-by-pixel signature development program which derived spectral statistics only for pixels within the forested land covers. The spectral statistics from both approaches were integrated and the data classified. This classification was evaluated by comparing the spectral classes produced from the data against corresponding ground verification polygons. This iterative data analysis technique resulted in an overall classification accuracy of 88.4 percent correct for slash pine, young pine, loblolly pine, natural pine, and mixed hardwood-pine. An accuracy assessment matrix has been produced for the classification.
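The following Python sketch illustrates the general idea of deriving spectral signatures from a 3x3 pixel sliding window; the array layout, the homogeneity screen, and the function name `sliding_window_signatures` are assumptions for demonstration and do not reproduce the original processing system.

```python
import numpy as np

def sliding_window_signatures(image, win=3, max_std=5.0):
    """Collect mean spectral vectors from each win x win window whose
    within-window variability is low (spectrally homogeneous blocks).
    `image` is assumed to be a (rows, cols, bands) array."""
    rows, cols, bands = image.shape
    sigs = []
    for r in range(rows - win + 1):
        for c in range(cols - win + 1):
            block = image[r:r + win, c:c + win, :].reshape(-1, bands)
            if block.std(axis=0).mean() < max_std:   # keep homogeneous windows only
                sigs.append(block.mean(axis=0))
    return np.array(sigs)

# Toy example with synthetic 7-band imagery.
img = np.random.normal(100, 2, size=(60, 60, 7))
signatures = sliding_window_signatures(img)
print(signatures.shape)
```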
Lucas, Rico; Groeneveld, Jürgen; Harms, Hauke; Johst, Karin; Frank, Karin; Kleinsteuber, Sabine
2017-01-01
In times of global change and intensified resource exploitation, advanced knowledge of ecophysiological processes in natural and engineered systems driven by complex microbial communities is crucial for both safeguarding environmental processes and optimising rational control of biotechnological processes. To gain such knowledge, high-throughput molecular techniques are routinely employed to investigate microbial community composition and dynamics within a wide range of natural or engineered environments. However, for molecular dataset analyses no consensus about a generally applicable alpha diversity concept and no appropriate benchmarking of corresponding statistical indices exist yet. To overcome this, we listed criteria for the appropriateness of an index for such analyses and systematically scrutinised commonly employed ecological indices describing diversity, evenness and richness based on artificial and real molecular datasets. We identified appropriate indices warranting interstudy comparability and intuitive interpretability. The unified diversity concept based on 'effective numbers of types' provides the mathematical framework for describing community composition. Additionally, the Bray-Curtis dissimilarity as a beta-diversity index was found to reflect compositional changes. The employed statistical procedure is presented comprising commented R-scripts and example datasets for user-friendly trial application. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
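As a minimal illustration of the "effective numbers of types" framework and the Bray-Curtis dissimilarity mentioned above, the following Python sketch computes Hill numbers of order q and the Bray-Curtis index for two toy abundance vectors; the example data are invented.

```python
import numpy as np

def hill_number(counts, q):
    """'Effective number of types' (Hill number) of order q from a count vector."""
    p = np.asarray(counts, float)
    p = p[p > 0] / p.sum()
    if np.isclose(q, 1.0):                 # q -> 1 limit: exponential of Shannon entropy
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

sample_a = [50, 30, 15, 5, 0]
sample_b = [10, 10, 10, 10, 60]
print(hill_number(sample_a, 0), hill_number(sample_a, 1), hill_number(sample_a, 2))
print(bray_curtis(sample_a, sample_b))
```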
Modeling Ka-band low elevation angle propagation statistics
NASA Technical Reports Server (NTRS)
Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.
1995-01-01
The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates of combined-effects statistics are presented for the Washington, D.C. area at 20 GHz and a 5 deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.
Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel
2018-02-09
New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two mass samples (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables, water adsorption and temperature increase, were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced both response variables, while particle size and biochar type only influenced the temperature.
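A hedged sketch of the kind of factorial linear model and ANOVA described above, here with statsmodels on synthetic data rather than XLSTAT and the original measurements; the factor levels mirror those listed in the abstract, but the response values and model formula are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the factorial design: relative humidity, sample mass,
# and particle size as factors, temperature rise as the response.
rng = np.random.default_rng(1)
rh = np.repeat([22, 43, 75, 84, 90], 8)
mass = np.tile(np.repeat([0.1, 1.0], 4), 5)
size = np.tile(["powder", "piece"], 20)
temp_rise = 0.05 * rh + 2.0 * mass + rng.normal(0, 0.5, rh.size)

df = pd.DataFrame({"rh": rh, "mass": mass, "size": size, "temp_rise": temp_rise})
model = smf.ols("temp_rise ~ C(rh) + C(mass) + C(size) + C(mass):C(rh)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # ANOVA table for the factorial model
```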
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
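The sketch below shows generic wavelet shrinkage with PyWavelets (universal soft thresholding); it is a simplified stand-in for the frequency-adaptive, atlas-based scheme described in the abstract, and the test signal, wavelet choice, and threshold rule are assumptions.

```python
import numpy as np
import pywt

def wavelet_soft_denoise(signal, wavelet="db4", level=4):
    """Simple wavelet shrinkage: universal soft threshold on detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
print(np.mean((wavelet_soft_denoise(noisy) - clean) ** 2))
```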
Neutral gas sympathetic cooling of an ion in a Paul trap.
Chen, Kuang; Sullivan, Scott T; Hudson, Eric R
2014-04-11
A single ion immersed in a neutral buffer gas is studied. An analytical model is developed that gives a complete description of the dynamics and steady-state properties of the ions. An extension of this model, using techniques employed in the mathematics of economics and finance, is used to explain the recent observation of non-Maxwellian statistics for these systems. Taken together, these results offer an explanation of the long-standing issues associated with sympathetic cooling of an ion by a neutral buffer gas.
Atmospheric propagation issues relevant to optical communications
NASA Technical Reports Server (NTRS)
Churnside, James H.; Shaik, Kamran
1989-01-01
Atmospheric propagation issues relevant to space-to-ground optical communications for near-earth applications are studied. Propagation effects, current optical communication activities, potential applications, and communication techniques are surveyed. It is concluded that a direct-detection space-to-ground link using redundant receiver sites and temporal encoding is likely to be employed to transmit earth-sensing satellite data to the ground some time in the future. Low-level, long-term studies of link availability, fading statistics, and turbulence climatology are recommended to support this type of application.
E-Area LLWF Vadose Zone Model: Probabilistic Model for Estimating Subsided-Area Infiltration Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyer, J.; Flach, G.
A probabilistic model employing a Monte Carlo sampling technique was developed in Python to generate statistical distributions of the upslope-intact-area to subsided-area ratio (Area UAi/Area SAi) for closure cap subsidence scenarios that differ in assumed percent subsidence and the total number of intact plus subsided compartments. The plan is to use this model as a component in the probabilistic system model for the E-Area Performance Assessment (PA), contributing uncertainty in infiltration estimates.
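A minimal Monte Carlo sketch in the same spirit, randomly sampling subsided compartments to build a distribution of intact-to-subsided area ratios; the compartment model, percentages, and function name are illustrative assumptions, not the E-Area PA implementation.

```python
import numpy as np

def area_ratio_samples(n_compartments, pct_subsided, n_samples=100_000, seed=0):
    """Monte Carlo sketch: mark compartments as subsided with probability
    pct_subsided/100 and return sampled intact-to-subsided area ratios."""
    rng = np.random.default_rng(seed)
    subsided = rng.random((n_samples, n_compartments)) < pct_subsided / 100.0
    n_sub = subsided.sum(axis=1)
    n_sub = np.clip(n_sub, 1, n_compartments - 1)   # avoid division by zero in this toy setup
    return (n_compartments - n_sub) / n_sub

ratios = area_ratio_samples(n_compartments=20, pct_subsided=10)
print(np.percentile(ratios, [5, 50, 95]))
```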
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
2007-01-01
Statistical aspects of the North Atlantic basin tropical cyclones for the interval 1945-2005 are examined, including the variation of the yearly frequency of occurrence for various subgroups of storms (all tropical cyclones, hurricanes, major hurricanes, U.S. landfalling hurricanes, and category 4/5 hurricanes); the yearly variation of the mean latitude and longitude (genesis location) of all tropical cyclones and hurricanes; and the yearly variation of the mean peak wind speeds, lowest pressures, and durations for all tropical cyclones, hurricanes, and major hurricanes. Also examined is the relationship between inferred trends found in the North Atlantic basin tropical cyclonic activity and natural variability and global warming, the latter described using surface air temperatures from the Armagh Observatory, Armagh, Northern Ireland. Lastly, a simple statistical technique is employed to ascertain the expected level of North Atlantic basin tropical cyclonic activity for the upcoming 2007 season.
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
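A short sketch of the linear signal-dependent noise model described above, fit by least squares to synthetic photon-counting-like data; the data and coefficient values are assumed for illustration.

```python
import numpy as np

def fit_linear_noise_model(signal_levels, noise_variances):
    """Fit variance = a + b * signal by least squares (the signal-dependent,
    Poisson-like sensor noise model discussed in the text)."""
    A = np.vstack([np.ones_like(signal_levels), signal_levels]).T
    (a, b), *_ = np.linalg.lstsq(A, noise_variances, rcond=None)
    return a, b

# Synthetic data: noise variance grows linearly with measured signal level.
signal = np.linspace(100, 5000, 50)
true_a, true_b = 25.0, 0.8
var_obs = true_a + true_b * signal + np.random.normal(0, 20, signal.size)
a_hat, b_hat = fit_linear_noise_model(signal, var_obs)
print(a_hat, b_hat)
```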
Statistical Learning Analysis in Neuroscience: Aiming for Transparency
Hanke, Michael; Halchenko, Yaroslav O.; Haxby, James V.; Pollmann, Stefan
2009-01-01
Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires “neuroscience-aware” technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities. PMID:20582270
Gender and Employment. Current Statistics and Their Implications.
ERIC Educational Resources Information Center
Equity Issues, 1996
1996-01-01
This publication contains three fact sheets on gender and employment statistics and their implications. The fact sheets are divided into two sections--statistics and implications. The statistics present the current situation of men and women workers as they relate to occupations, education, and earnings. The implications express suggestions for…
Selvarasu, Suresh; Kim, Do Yun; Karimi, Iftekhar A; Lee, Dong-Yup
2010-10-01
We present an integrated framework for characterizing fed-batch cultures of mouse hybridoma cells producing monoclonal antibody (mAb). This framework systematically combines data preprocessing, elemental balancing and statistical analysis techniques. Initially, specific rates of cell growth, glucose/amino acid consumption and mAb/metabolite production were calculated via curve fitting using logistic equations, with subsequent elemental balancing of the preprocessed data indicating the presence of experimental measurement errors. Multivariate statistical analysis was then employed to understand physiological characteristics of the cellular system. The results from principal component analysis (PCA) revealed three major clusters of amino acids with similar trends in their consumption profiles: (i) arginine, threonine and serine, (ii) glycine, tyrosine, phenylalanine, methionine, histidine and asparagine, and (iii) lysine, valine and isoleucine. Further analysis using partial least square (PLS) regression identified key amino acids which were positively or negatively correlated with the cell growth, mAb production and the generation of lactate and ammonia. Based on these results, the optimal concentrations of key amino acids in the feed medium can be inferred, potentially leading to an increase in cell viability and productivity, as well as a decrease in toxic waste production. The study demonstrated how the current methodological framework using multivariate statistical analysis techniques can serve as a potential tool for deriving rational medium design strategies. Copyright © 2010 Elsevier B.V. All rights reserved.
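The following hedged sketch reproduces the flavor of the PCA and PLS steps with scikit-learn on synthetic data; the variable names, dimensions, and data are assumptions, not the hybridoma measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in: rows = culture time points, columns = specific amino acid
# consumption rates; y = a growth-like response (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 12))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=30)

pca = PCA(n_components=3).fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

pls = PLSRegression(n_components=2).fit(X, y)
# Large-magnitude coefficients flag variables positively/negatively correlated
# with the response, analogous to the PLS analysis described above.
print("PLS coefficients:", pls.coef_.ravel())
```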
NASA Technical Reports Server (NTRS)
Navard, Sharon E.
1989-01-01
In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
21 CFR 820.250 - Statistical techniques.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Statistical techniques. 820.250 Section 820.250...) MEDICAL DEVICES QUALITY SYSTEM REGULATION Statistical Techniques § 820.250 Statistical techniques. (a... statistical techniques required for establishing, controlling, and verifying the acceptability of process...
21 CFR 820.250 - Statistical techniques.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Statistical techniques. 820.250 Section 820.250...) MEDICAL DEVICES QUALITY SYSTEM REGULATION Statistical Techniques § 820.250 Statistical techniques. (a... statistical techniques required for establishing, controlling, and verifying the acceptability of process...
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
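As a rough illustration of stage-wise, component-wise boosting, the sketch below implements squared-error (L2) component-wise boosting for a linear model; the cited work uses likelihood-based boosting for time-to-event endpoints, so this is a simplified analogue on invented high-dimensional data.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=200, nu=0.1):
    """Stage-wise (component-wise) boosting with squared-error loss:
    at each step only the single best-fitting covariate is updated,
    yielding an implicitly sparse coefficient vector."""
    n, p = X.shape
    beta = np.zeros(p)
    intercept = y.mean()
    resid = y - intercept
    for _ in range(n_steps):
        scores = X.T @ resid                       # correlation of each covariate with residuals
        j = np.argmax(np.abs(scores))
        coef_j = scores[j] / (X[:, j] @ X[:, j])   # univariate least-squares fit
        beta[j] += nu * coef_j                     # damped update of the selected component
        resid -= nu * coef_j * X[:, j]
    return intercept, beta

# Toy high-dimensional example: 50 samples, 200 covariates, 3 truly informative.
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 200))
y = 2 * X[:, 0] - 1.5 * X[:, 10] + X[:, 50] + rng.normal(scale=0.5, size=50)
_, beta = componentwise_l2_boost(X, y)
print(np.nonzero(np.abs(beta) > 0.1)[0])   # indices of selected covariates
```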
NASA Astrophysics Data System (ADS)
Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio
2017-04-01
Assessing impacts of potential future climate change scenarios in precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate future potential scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches comprises the historical climatic data (taken from the Spain02 project for the period 1971-2000 with a spatial resolution of 12.5 km) and the future series provided by climatic models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different regional climate models (RCMs) nested within four different global climate models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) by the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using distribution-derived transformation and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series were proposed to obtain more representative potential future climate scenarios to be employed to study potential impacts. In this work we propose an unequally weighted combination of the future series, giving more weight to those coming from models (delta change approaches) or combinations of models and techniques that provide a better approximation to the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit to reproduce the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the Theory of Runs. This analysis allows us to discriminate the best RCM and the best combination of model and correction technique in the bias-correction method. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79%, 31.79%, 31.03% and 31.74% for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles, respectively, and in precipitation they are -25.48%, -28.49%, -26.42% and -27.35%, respectively. Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds.
We would also like to thank Spain02 and CORDEX projects for the data provided for this study and the R package qmap.
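A hedged sketch of two of the transformation techniques named above, empirical quantile mapping (bias correction) and an additive delta change, applied to synthetic series; the gamma-distributed toy data and quantile grid are assumptions, not the Spain02 or CORDEX series.

```python
import numpy as np

def empirical_quantile_mapping(obs_hist, mod_hist, mod_fut):
    """Bias correction by empirical quantile mapping: map each future model value
    to the observed value at the same empirical quantile of the historical period."""
    quantiles = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    return np.interp(mod_fut, mod_q, obs_q)

def delta_change(obs_hist, mod_hist, mod_fut):
    """Delta-change approach (additive form): perturb the observations by the
    model-projected change in the mean."""
    return obs_hist + (mod_fut.mean() - mod_hist.mean())

rng = np.random.default_rng(3)
obs_hist = rng.gamma(2.0, 2.0, 3000)   # stand-in for observed daily precipitation
mod_hist = rng.gamma(2.0, 2.5, 3000)   # biased model, historical run
mod_fut = rng.gamma(1.8, 2.5, 3000)    # model, future scenario
print(empirical_quantile_mapping(obs_hist, mod_hist, mod_fut).mean())
print(delta_change(obs_hist, mod_hist, mod_fut).mean())
```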
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-29
... by dividing the Bureau of Labor Statistics Occupational Employment Statistics Survey (OES survey... DEPARTMENT OF LABOR Employment and Training Administration 20 CFR Part 655 RIN 1205-AB61 Wage Methodology for the Temporary Non-Agricultural Employment H- 2B Program; Delay of Effective Date AGENCY...
NASA Technical Reports Server (NTRS)
Bridges, James
2002-01-01
As part of the Advanced Subsonic Technology Program, a series of experiments was conducted at NASA Glenn Research Center on the effect of mixing enhancement devices on the aeroacoustic performance of separate flow nozzles. Initial acoustic evaluations of the devices showed that they reduced jet noise significantly, while creating very little thrust loss. The explanation for the improvement required that turbulence measurements, namely single point mean and RMS statistics and two-point spatial correlations, be made to determine the change in the turbulence caused by the mixing enhancement devices that lead to the noise reduction. These measurements were made in the summer of 2000 in a test program called Separate Nozzle Flow Test 2000 (SFNT2K) supported by the Aeropropulsion Research Program at NASA Glenn Research Center. Given the hot high-speed flows representative of a contemporary bypass ratio 5 turbofan engine, unsteady flow field measurements required the use of an optical measurement method. To achieve the spatial correlations, the Particle Image Velocimetry technique was employed, acquiring high-density velocity maps of the flows from which the required statistics could be derived. This was the first successful use of this technique for such flows, and shows the utility of this technique for future experimental programs. The extensive statistics obtained were likewise unique and give great insight into the turbulence which produces noise and how the turbulence can be modified to reduce jet noise.
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated using an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
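The sketch below illustrates a plain cell-averaging CFAR detector that uses an integral image for fast local background means; it omits the iterative censoring and G0 parameter estimation of the proposed method, and the guard/training sizes and threshold factor are assumptions.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero first row/column for easy box sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] computed from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def ca_cfar(img, guard=2, train=8, factor=10.0):
    """Cell-averaging CFAR: a pixel is a detection when it exceeds `factor`
    times the local background mean from the training band (guard cells excluded)."""
    ii = integral_image(img)
    det = np.zeros_like(img, dtype=bool)
    half = guard + train
    n_train = (2 * half + 1) ** 2 - (2 * guard + 1) ** 2
    rows, cols = img.shape
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            outer = box_sum(ii, r - half, c - half, r + half + 1, c + half + 1)
            inner = box_sum(ii, r - guard, c - guard, r + guard + 1, c + guard + 1)
            det[r, c] = img[r, c] > factor * (outer - inner) / n_train
    return det

sea = np.random.gamma(1.0, 1.0, (128, 128))   # synthetic clutter
sea[64, 64] = 40.0                            # synthetic bright ship pixel
print(np.argwhere(ca_cfar(sea)))
```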
NASA Technical Reports Server (NTRS)
Engberg, Robert; Ooi, Teng K.
2004-01-01
New methods for structural health monitoring are being assessed, especially in high-performance, extreme environment, safety-critical applications. One such application is for composite cryogenic fuel tanks. The work presented here attempts to characterize and investigate the feasibility of using imbedded piezoelectric sensors to detect cracks and delaminations under cryogenic and ambient conditions. A variety of damage detection methods and different sensors are employed in the different composite plate samples to aid in determining an optimal algorithm, sensor placement strategy, and type of imbedded sensor to use. Variations of frequency, impedance measurements, and pulse echoing techniques of the sensors are employed and compared. Statistical and analytic techniques are then used to determine which method is most desirable for a specific type of damage. These results are furthermore compared with previous work using externally mounted sensors. Results and optimized methods from this work can then be incorporated into a larger composite structure to validate and assess its structural health. This could prove to be important in the development and qualification of any 2nd-generation reusable launch vehicle using composites as a structural element.
NASA Astrophysics Data System (ADS)
Eckert, R.; Neyhart, J. T.; Burd, L.; Polikar, R.; Mandayam, S. A.; Tseng, M.
2003-03-01
Mammography is the best method available as a non-invasive technique for the early detection of breast cancer. The radiographic appearance of the female breast consists of radiolucent (dark) regions due to fat and radiodense (light) regions due to connective and epithelial tissue. The amount of radiodense tissue can be used as a marker for predicting breast cancer risk. Previously, we have shown that the use of statistical models is a reliable technique for segmenting radiodense tissue. This paper presents improvements in the model that allow for further development of an automated system for segmentation of radiodense tissue. The segmentation algorithm employs a two-step process. In the first step, tissue and non-tissue regions of a digitized X-ray mammogram image are identified using a radial basis function neural network. The second step uses a constrained Neyman-Pearson algorithm, developed especially for this research work, to determine the amount of radiodense tissue. Results obtained using the algorithm have been validated by comparison with estimates provided by a radiologist employing previously established methods.
NASA Astrophysics Data System (ADS)
Glitzner, M.; Crijns, S. P. M.; de Senneville, B. Denis; Lagendijk, J. J. W.; Raaymakers, B. W.
2015-03-01
For motion adaptive radiotherapy, dynamic multileaf collimator tracking can be employed to reduce treatment margins by steering the beam according to the organ motion. The Elekta Agility 160 MLC has hitherto not been evaluated for its tracking suitability. Both dosimetric performance and latency are key figures and need to be assessed generically, independent of the motion sensor used. In this paper, we propose the use of harmonic functions directly fed to the MLC to determine its latency during continuous motion. Furthermore, a control variable is extracted from a camera system and fed to the MLC. Using this setup, film dosimetry and subsequent γ statistics are performed, evaluating the response when tracking MRI-based physiologic motion in a closed loop. The delay attributed to the MLC itself was shown to be a minor contributor to the overall feedback chain as compared to the impact of imaging components such as MRI sequences. The delay showed linear-phase behaviour of the MLC in continuously dynamic applications, which enables a general MLC characterization. Using the exemplary feedback chain, dosimetry showed a substantial increase in pass rate when employing γ statistics. In this early stage, the tracking performance of the Agility using the test bench yielded promising results, making the technique eligible for translation to tracking using clinical imaging modalities.
McNabb, Matthew; Cao, Yu; Devlin, Thomas; Baxter, Blaise; Thornton, Albert
2012-01-01
Mechanical Embolus Removal in Cerebral Ischemia (MERCI) has been supported by medical trials as an improved method of treating ischemic stroke past the safe window of time for administering clot-busting drugs, and was released for medical use in 2004. Analyzing real-world data collected from MERCI clinical trials is key to providing insights into the effectiveness of MERCI. Most of the existing data analysis on MERCI results has thus far employed conventional statistical analysis techniques. To the best of our knowledge, advanced data analytics and data mining techniques have not yet been systematically applied. To address this issue, in this thesis we conduct a comprehensive study on employing state-of-the-art machine learning algorithms to generate prediction criteria for the outcome of MERCI patients. Specifically, we investigate the issue of how to choose the most significant attributes of a data set with limited instance examples. We propose a few search algorithms to identify the significant attributes, followed by a thorough performance analysis for each algorithm. Finally, we apply our proposed approach to the real-world, de-identified patient data provided by Erlanger Southeast Regional Stroke Center, Chattanooga, TN. Our experimental results have demonstrated that our proposed approach performs well.
Noninvasive fetal QRS detection using an echo state network and dynamic programming.
Lukoševičius, Mantas; Marozas, Vaidotas
2014-08-01
We address a classical fetal QRS detection problem from abdominal ECG recordings with a data-driven statistical machine learning approach. Our goal is to have a powerful, yet conceptually clean, solution. There are two novel key components at the heart of our approach: an echo state recurrent neural network that is trained to indicate fetal QRS complexes, and several increasingly sophisticated versions of statistics-based dynamic programming algorithms, which are derived from and rooted in probability theory. We also employ a standard technique for preprocessing and removing maternal ECG complexes from the signals, but do not take this as the main focus of this work. The proposed approach is quite generic and can be extended to other types of signals and annotations. Open-source code is provided.
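A minimal echo state network sketch (fixed random reservoir plus ridge-regression readout) trained to flag peaks in a toy quasi-periodic signal; it stands in for the fetal-QRS indicator network only conceptually, and the reservoir size, leak rate, and toy target are assumptions.

```python
import numpy as np

class SimpleESN:
    """Minimal echo state network: fixed random reservoir, ridge-regression readout."""
    def __init__(self, n_in, n_res=200, spectral_radius=0.9, leak=0.3, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius
        self.W, self.leak, self.ridge = W, leak, ridge

    def _states(self, U):
        x = np.zeros(self.W.shape[0])
        states = np.zeros((len(U), self.W.shape[0]))
        for t, u in enumerate(U):
            pre = np.tanh(self.W_in @ u + self.W @ x)
            x = (1 - self.leak) * x + self.leak * pre     # leaky integration
            states[t] = x
        return states

    def fit(self, U, y):
        S = self._states(U)
        A = S.T @ S + self.ridge * np.eye(S.shape[1])
        self.w_out = np.linalg.solve(A, S.T @ y)          # ridge-regression readout
        return self

    def predict(self, U):
        return self._states(U) @ self.w_out

# Toy target: indicate peaks in a noisy quasi-periodic signal (a crude stand-in for QRS marks).
t = np.arange(2000)
sig = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(t.size)
target = (sig > 0.95).astype(float)
esn = SimpleESN(n_in=1).fit(sig[:1500, None], target[:1500])
pred = esn.predict(sig[1500:, None])
print(np.corrcoef(pred, target[1500:])[0, 1])
```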
Exploring the statistics of magnetic reconnection X-points in kinetic particle-in-cell turbulence
NASA Astrophysics Data System (ADS)
Haggerty, C. C.; Parashar, T. N.; Matthaeus, W. H.; Shay, M. A.; Yang, Y.; Wan, M.; Wu, P.; Servidio, S.
2017-10-01
Magnetic reconnection is a ubiquitous phenomenon in turbulent plasmas. It is an important part of the turbulent dynamics and heating of space and astrophysical plasmas. We examine the statistics of magnetic reconnection using a quantitative local analysis of the magnetic vector potential, previously used in magnetohydrodynamics simulations and now applied to fully kinetic particle-in-cell (PIC) simulations. Different ways of reducing the particle noise for analysis purposes, including multiple smoothing techniques, are explored. We find that a Fourier filter applied at the Debye scale is an optimal choice for analyzing PIC data. Finally, we find a broader distribution of normalized reconnection rates compared to the MHD limit, with rates as large as 0.5 but an average of approximately 0.1.
Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M; Grün, Sonja
2017-01-01
Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
Castilho, Jéssica; Ferreira, Luiz Alfredo Braun; Pereira, Wagner Menna; Neto, Hugo Pasini; Morelli, José Geraldo da Silva; Brandalize, Danielle; Kerppers, Ivo Ilvan; Oliveira, Claudia Santos
2012-07-01
Hypertonia is prevalent in anti-gravity muscles, such as the biceps brachii. Neural mobilization is one of the techniques currently used to reduce spasticity. The aim of the present study was to assess electromyographic (EMG) activity in spastic biceps brachii muscles before and after neural mobilization of the upper limb contralateral to the hemiplegia. Repeated pre-test and post-test EMG measurements were performed on six stroke victims with grade 1 or 2 spasticity (Modified Ashworth Scale). The Upper Limb Neurodynamic Test (ULNT1) was the mobilization technique employed. After neural mobilization contralateral to the lesion, electromyographic activity in the biceps brachii decreased by 17% and 11% for 90° flexion and complete extension of the elbow, respectively. However, the results were not statistically significant (p > 0.05). When performed using contralateral techniques, neural mobilization alters the electrical signal of spastic muscles. Copyright © 2011 Elsevier Ltd. All rights reserved.
Daily pan evaporation modelling using a neuro-fuzzy computing technique
NASA Astrophysics Data System (ADS)
Kişi, Özgür
2006-10-01
Evaporation, as a major component of the hydrologic cycle, is important in water resources development and management. This paper investigates the ability of the neuro-fuzzy (NF) technique to improve the accuracy of daily evaporation estimation. Five different NF models comprising various combinations of daily climatic variables, that is, air temperature, solar radiation, wind speed, pressure and humidity, are developed to evaluate the degree of effect of each of these variables on evaporation. A comparison is made between the estimates provided by the NF model and the artificial neural networks (ANNs). The Stephens-Stewart (SS) method is also considered for the comparison. Various statistical measures are used to evaluate the performance of the models. Based on the comparisons, it was found that the NF computing technique could be employed successfully in modelling the evaporation process from the available climatic data. The ANN was also found to perform better than the SS method.
Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco
2012-10-01
Cooperative coevolution is a successful trend of evolutionary computation which allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefits of the employment of coevolution to apply the techniques considered simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.
Meta-analysis in applied ecology.
Stewart, Gavin
2010-02-23
This overview examines research synthesis in applied ecology and conservation. Vote counting and pooling unweighted averages are widespread despite the superiority of syntheses based on weighted combination of effects. Such analyses allow exploration of methodological uncertainty in addition to consistency of effects across species, space and time, but exploring heterogeneity remains controversial. Meta-analyses are required to generalize in ecology, and to inform evidence-based decision-making, but the more sophisticated statistical techniques and registers of research used in other disciplines must be employed in ecology to fully realize their benefits.
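As a small illustration of the weighted combination of effects advocated above, the following sketch computes a fixed-effect, inverse-variance weighted meta-analysis with Cochran's Q heterogeneity statistic; the study effects and variances are invented.

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted mean effect,
    its standard error, and Cochran's Q heterogeneity statistic."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    Q = np.sum(w * (effects - pooled) ** 2)
    return pooled, se, Q

# Hypothetical log response ratios from five studies with their variances.
effects = [0.30, 0.10, 0.45, 0.20, 0.05]
variances = [0.02, 0.05, 0.04, 0.01, 0.03]
print(fixed_effect_meta(effects, variances))
```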
NASA Astrophysics Data System (ADS)
Svirina, Anna; Shindor, Olga; Tatmyshevsky, Konstantin
2014-12-01
The paper deals with the main problems of Russian energy system development that make it necessary to provide educational programs in the field of renewable and alternative energy. The paper describes the process of curriculum development and the definition of teaching techniques on the basis of expert opinion evaluation, and suggests a competence model for master's students in renewable and alternative energy processing. On the basis of a distributed questionnaire and in-depth interviews, the data for statistical analysis were obtained. On the basis of these data, the curriculum structure was optimized, and three models for optimizing teaching techniques were developed. The suggested educational program structure, which was adopted by employers, is presented in the paper. The findings include the quantitatively estimated importance of systemic thinking and professional skills and knowledge as basic competences of a master's program graduate; the statistically estimated necessity of a practice-based learning approach; and optimization models for structuring curricula in renewable and alternative energy processing. These findings allow the establishment of a platform for the development of educational programs.
Developing an industry-oriented safety curriculum using the Delphi technique.
Chen, Der-Fa; Wu, Tsung-Chih; Chen, Chi-Hsiang; Chang, Shu-Hsuan; Yao, Kai-Chao; Liao, Chin-Wen
2016-09-01
In this study, we examined the development of industry-oriented safety degree curricula at a college level. Based on a review of literature on the practices and study of the development of safety curricula, we classified occupational safety and health curricula into the following three domains: safety engineering, health engineering, and safety and health management. We invited 44 safety professionals to complete a four-round survey that was designed using a modified Delphi technique. We used Chi-square statistics to test the panel experts' consensus on the significance of the items in the three domains and employed descriptive statistics to rank the participants' rating of each item. The results showed that the top three items for each of the three domains were Risk Assessment, Dangerous Machinery and Equipment, and Fire and Explosion Prevention for safety engineering; Ergonomics, Industrial Toxicology, and Health Risk Assessment for health engineering; and Industrial Safety and Health Regulations, Accident Investigation and Analysis, and Emergency Response for safety and health management. Only graduates from safety programmes who possess practical industry-oriented abilities can satisfy industry demands and provide value to the existence of college safety programmes.
Analysis of Acoustic Emission Parameters from Corrosion of AST Bottom Plate in Field Testing
NASA Astrophysics Data System (ADS)
Jomdecha, C.; Jirarungsatian, C.; Suwansin, W.
Field testing of aboveground storage tanks (ASTs) to monitor corrosion of the bottom plate is presented in this chapter. Acoustic emission (AE) testing data from ten ASTs with different sizes, materials, and products were employed to monitor the bottom plate condition. AE sensors of 30 and 150 kHz were used to monitor corrosion activity on up to 24 channels, including guard sensors. AE parameters were analyzed to explore the parameter patterns of occurring corrosion, compared with laboratory results. Amplitude, count, duration, and energy were the main parameters analyzed. A pattern recognition technique combined with statistical analysis was implemented to eliminate electrical and environmental noise. The results showed specific AE patterns of corrosion activity consistent with the empirical results. In addition, a planar location algorithm was utilized to locate the significant AE events from corrosion. Both the parameter patterns and the AE event locations can be used to interpret and locate corrosion activity. Finally, a basic statistical grading technique was used to evaluate the bottom plate condition of the ASTs.
NASA Astrophysics Data System (ADS)
Luo, Shuwen; Chen, Changshui; Mao, Hua; Jin, Shaoqin
2013-06-01
The feasibility of early detection of gastric cancer using near-infrared (NIR) Raman spectroscopy (RS) by distinguishing premalignant lesions (adenomatous polyp, n=27) and cancer tissues (adenocarcinoma, n=33) from normal gastric tissues (n=45) is evaluated. Significant differences in Raman spectra are observed among the normal, adenomatous polyp, and adenocarcinoma gastric tissues at 936, 1003, 1032, 1174, 1208, 1323, 1335, 1450, and 1655 cm-1. Diverse statistical methods are employed to develop effective diagnostic algorithms for classifying the Raman spectra of different types of ex vivo gastric tissues, including principal component analysis (PCA), linear discriminant analysis (LDA), and naive Bayesian classifier (NBC) techniques. Compared with PCA-LDA algorithms, PCA-NBC techniques together with leave-one-out, cross-validation method provide better discriminative results of normal, adenomatous polyp, and adenocarcinoma gastric tissues, resulting in superior sensitivities of 96.3%, 96.9%, and 96.9%, and specificities of 93%, 100%, and 95.2%, respectively. Therefore, NIR RS associated with multivariate statistical algorithms has the potential for early diagnosis of gastric premalignant lesions and cancer tissues in molecular level.
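A hedged scikit-learn sketch of the PCA-LDA and PCA-NBC pipelines with leave-one-out cross-validation, run here on synthetic spectra rather than the Raman measurements; the class structure, dimensions, and fake band shifts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic stand-in for spectra: 3 tissue classes, 60 samples, 500 spectral channels.
rng = np.random.default_rng(7)
X = rng.normal(size=(60, 500))
y = np.repeat([0, 1, 2], 20)
X[y == 1, 100] += 1.5      # fake class-specific band differences
X[y == 2, 300] += 1.5

for clf in (LinearDiscriminantAnalysis(), GaussianNB()):
    pipe = make_pipeline(PCA(n_components=10), clf)
    acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    print(type(clf).__name__, "leave-one-out accuracy:", round(acc, 3))
```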
Nguyen, Phung Anh; Yang, Hsuan-Chia; Xu, Rong; Li, Yu-Chuan Jack
2018-01-01
Traditional Chinese Medicine (TCM) utilization has rapidly increased worldwide. However, few databases provide information on TCM herbs and diseases. This study aims to identify and evaluate meaningful associations between TCM herbs and breast cancer by using association rule mining (ARM) techniques. We employed ARM techniques on 19.9 million TCM prescriptions from the Taiwan National Health Insurance claims database from 1999 to 2013. A total of 364 TCM herb-breast cancer associations were derived from those prescriptions and were then filtered by a minimum support of 20. The resulting 296 associations were evaluated by comparison to a gold standard curated from Chinese Wikipedia using the terms cancer, tumor, and malignant. All 14 TCM herb-breast cancer associations with a confidence of 1% were valid when compared to the gold standard. For other confidence levels, the results were consistently of high precision. We thus succeeded in identifying TCM herb-breast cancer associations with these techniques.
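The following pure-Python sketch shows how support and confidence for herb-disease associations can be computed from transaction-style prescription records; the toy records, item names, and thresholds are assumptions, not the National Health Insurance data.

```python
from collections import Counter

# Toy prescription records: each is a set of items (herbs plus a diagnosis tag).
prescriptions = [
    {"herb_A", "herb_B", "breast_cancer"},
    {"herb_A", "breast_cancer"},
    {"herb_A", "herb_C"},
    {"herb_B", "breast_cancer"},
    {"herb_A", "herb_B"},
]

def association_rules(transactions, target="breast_cancer", min_support=2):
    """Support = co-occurrence count of herb and target; confidence = P(target | herb)."""
    herb_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        for item in t:
            if item != target:
                herb_counts[item] += 1
                if target in t:
                    pair_counts[item] += 1
    return [(herb, target, co, co / herb_counts[herb])
            for herb, co in pair_counts.items() if co >= min_support]

for herb, disease, support, confidence in association_rules(prescriptions):
    print(f"{herb} -> {disease}: support={support}, confidence={confidence:.2f}")
```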
Detection efficiency of auditory steady state evoked by modulated noise.
Santos, T S; Silva, J J; Lins, O G; Melges, D B; Tierra-Criollo, C J
2016-09-01
This study aimed to investigate the efficiency of Magnitude Squared Coherence (MSC) and the Spectral F Test (SFT) for the detection of auditory steady-state responses (ASSR) obtained with amplitude-modulated noises. Twenty individuals (12 women) without any history of neurological or audiological diseases, aged from 18 to 59 years (mean ± standard deviation = 26.45 ± 3.9 years), who provided written informed consent, participated in the study. The Audiostim system was used for stimulation and ASSR recording. The tested stimuli were amplitude-modulated wide-band noise (WBN), low-band noise (LBN), high-band noise (HBN), and two-band noise (TBN), modulated between 77 and 110 Hz and applied at intensity levels of 55, 45, and 25 dB sound pressure level (SPL). MSC and SFT, two statistics-based detection techniques, were applied with a significance level of 5%. Detection times and rates were compared using the Friedman test with Tukey-Kramer post hoc analysis. Based on the stimulation parameters (stimulus types and intensity levels) and detection techniques (MSC or SFT), 16 different pass/fail protocols were also defined, for which the true negatives (TN) were calculated. The median detection times ranged from 68 to 157 s at 55 dB SPL, 68-99 s at 45 dB SPL, and 84-118 s at 25 dB SPL. No statistical difference was found between MSC and SFT in median detection times (p > 0.05). The detection rates ranged from 100% to 55.6% at 55 dB SPL, 97.2%-38.9% at 45 dB SPL, and 66.7%-8.3% at 25 dB SPL. For detection rates as well, no statistical difference was observed between MSC and SFT (p > 0.05). True negatives above 90% were found for protocols that employed WBN or HBN at 55 dB SPL or at 45 dB SPL. For protocols employing TBN at 55 dB SPL or 45 dB SPL, TN below 60% were found, owing to the low detection rates of stimuli that included low-band frequencies. The stimuli that included high-frequency content showed higher detection rates (>90%) and lower detection times (<3 min). The noise composed of two bands applied separately (TBN) is not feasible for clinical applications, since it prolongs the exam duration and also led to a reduced percentage of true negatives. On the other hand, WBN and HBN achieved high detection performance and high TN and should be investigated for implementing pass/fail protocols for hearing screening with a clinical population. Finally, both WBN and HBN seemed to be indifferent to the employed technique (SFT or MSC), which can be seen as another advantage of ASSR employment. Copyright © 2016 Elsevier B.V. All rights reserved.
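A short sketch of the MSC detection statistic across M epochs, together with the Beta(1, M-1) null-distribution critical value commonly used for this test; the sampling rate, modulation frequency, epoch count, and signal amplitude are assumptions for illustration.

```python
import numpy as np

def msc(epochs, fs, f_target):
    """Magnitude-squared coherence of the response at f_target across M epochs:
    MSC = |sum_m X_m|^2 / (M * sum_m |X_m|^2), where X_m is the Fourier
    component of epoch m at the target (modulation) frequency."""
    M, n = epochs.shape
    k = int(round(f_target * n / fs))          # FFT bin of the modulation frequency
    X = np.fft.rfft(epochs, axis=1)[:, k]
    return np.abs(X.sum()) ** 2 / (M * np.sum(np.abs(X) ** 2))

def msc_critical(M, alpha=0.05):
    """Critical value under the noise-only null, where MSC ~ Beta(1, M-1)."""
    return 1.0 - alpha ** (1.0 / (M - 1))

fs, f_mod, n = 1000, 82.0, 1000                # 1-s epochs, 82 Hz modulation (assumed)
t = np.arange(n) / fs
epochs = 0.2 * np.sin(2 * np.pi * f_mod * t) + np.random.randn(16, n)
value = msc(epochs, fs, f_mod)
print(value, value > msc_critical(M=16))       # detection decision at alpha = 0.05
```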
2015-01-01
Changes in glycosylation have been shown to have a profound correlation with development/malignancy in many cancer types. Currently, two major enrichment techniques have been widely applied in glycoproteomics, namely, lectin affinity chromatography (LAC)-based and hydrazide chemistry (HC)-based enrichments. Here we report the LC–MS/MS quantitative analyses of human blood serum glycoproteins and glycopeptides associated with esophageal diseases by LAC- and HC-based enrichment. The separate and complementary qualitative and quantitative data analyses of protein glycosylation were performed using both enrichment techniques. Chemometric and statistical evaluations, PCA plots, or ANOVA test, respectively, were employed to determine and confirm candidate cancer-associated glycoprotein/glycopeptide biomarkers. Out of 139, 59 common glycoproteins (42% overlap) were observed in both enrichment techniques. This overlap is very similar to previously published studies. The quantitation and evaluation of significantly changed glycoproteins/glycopeptides are complementary between LAC and HC enrichments. LC–ESI–MS/MS analyses indicated that 7 glycoproteins enriched by LAC and 11 glycoproteins enriched by HC showed significantly different abundances between disease-free and disease cohorts. Multiple reaction monitoring quantitation resulted in 13 glycopeptides by LAC enrichment and 10 glycosylation sites by HC enrichment to be statistically different among disease cohorts. PMID:25134008
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-27
... for OMB Review; Comment Request; Report on Current Employment Statistics ACTION: Notice. SUMMARY: The Department of Labor (DOL) is submitting the revised Bureau of Labor Statistics (BLS) sponsored information collection request (ICR) titled, ``Report on Current Employment Statistics,'' to the Office of Management and...
A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume
NASA Astrophysics Data System (ADS)
Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration
2017-11-01
An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
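The notion of computing statistics both on and within interrogation windows, as described above, amounts to block-filtering the high-resolution field. The NumPy sketch below illustrates that idea on a synthetic scalar field; the window size, field dimensions, and the choice of the within-window variance as the sub-grid statistic are assumptions for illustration, not the study's processing chain.

```python
import numpy as np

def window_statistics(field, w):
    """Tile a high-resolution 2D field with w x w interrogation windows and
    return the window means (resolved-scale field) together with the
    within-window variances (a simple sub-grid-scale statistic)."""
    ny, nx = field.shape
    ny, nx = ny - ny % w, nx - nx % w            # trim so windows tile evenly
    blocks = field[:ny, :nx].reshape(ny // w, w, nx // w, w)
    resolved = blocks.mean(axis=(1, 3))          # statistics "on" windows
    subgrid_var = blocks.var(axis=(1, 3))        # statistics "within" windows
    return resolved, subgrid_var

# e.g. filter a 512 x 512 scalar field with 32 x 32-pixel interrogation windows
rng = np.random.default_rng(0)
field = rng.normal(size=(512, 512))
resolved, subgrid_var = window_statistics(field, 32)
print(resolved.shape, subgrid_var.shape)         # (16, 16) twice
```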
Summary Statistics of Public TV Licensees, 1972.
ERIC Educational Resources Information Center
Lee, S. Young; Pedone, Ronald J.
Statistics in the areas of finance, employment, broadcast and production for public TV licensees in 1972 are given in this report. Tables in the area of finance are presented specifying total funds, income, direct operating costs, and capital expenditures. Employment is divided into all employment with subdivisions for full- and part-time employees…
Some Aspects of Part-Time Work.
ERIC Educational Resources Information Center
Australian Dept. of Labour and National Service, Melbourne. Women's Bureau.
Of major importance to many married women seeking employment in Australia is the availability of part-time work. To describe the economic aspects of part-time employment for women, a review was made of statistics published by the Commonwealth Bureau of Census and Statistics and of research on part-time employment in overseas countries, and a…
Adaptive correction of ensemble forecasts
NASA Astrophysics Data System (ADS)
Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane
2017-04-01
Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS) and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
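As a concrete illustration of the simplest baseline mentioned above, the running bias correction of the ensemble mean, here is a minimal Python sketch. It is not the paper's Kalman-filter scheme; the decay parameter and the update rule are assumptions chosen only to show how a correction can be updated sequentially as observations arrive.

```python
import numpy as np

def running_bias_correction(ens_fcst, obs, decay=0.1):
    """Sequentially estimate and remove the bias of the ensemble mean.

    ens_fcst : array (T, m) of ensemble forecasts (T issue times, m members)
    obs      : array (T,) of verifying observations
    decay    : weight given to the newest error (exponential forgetting)

    Returns the bias-corrected ensemble (same shape as ens_fcst)."""
    T, m = ens_fcst.shape
    bias = 0.0
    corrected = np.empty_like(ens_fcst, dtype=float)
    for t in range(T):
        # correct the current forecast with the bias learned so far
        corrected[t] = ens_fcst[t] - bias
        # update the bias estimate once the observation becomes available
        err = ens_fcst[t].mean() - obs[t]
        bias = (1.0 - decay) * bias + decay * err
    return corrected
```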
Yue, Lilly Q
2012-01-01
In the evaluation of medical products, including drugs, biological products, and medical devices, comparative observational studies could play an important role when properly conducted randomized, well-controlled clinical trials are infeasible due to ethical or practical reasons. However, various biases could be introduced at every stage and into every aspect of the observational study, and consequently the interpretation of the resulting statistical inference would be of concern. While there do exist statistical techniques for addressing some of the challenging issues, often based on propensity score methodology, these statistical tools probably have not been as widely employed in prospectively designing observational studies as they should be. There are also times when they are implemented in an unscientific manner, such as performing propensity score model selection for a dataset involving outcome data in the same dataset, so that the integrity of observational study design and the interpretability of outcome analysis results could be compromised. In this paper, regulatory considerations on prospective study design using propensity scores are shared and illustrated with hypothetical examples.
Adams, K M; Brown, G G; Grant, I
1985-08-01
Analysis of Covariance (ANCOVA) is often used in neuropsychological studies to effect ex-post-facto adjustment of performance variables amongst groups of subjects mismatched on some relevant demographic variable. This paper reviews some of the statistical assumptions underlying this usage. In an attempt to illustrate the complexities of this statistical technique, three sham studies using actual patient data are presented. These staged simulations have varying relationships between group test performance differences and levels of covariate discrepancy. The results were robust and consistent in their nature, and were held to support the wisdom of previous cautions by statisticians concerning the employment of ANCOVA to justify comparisons between incomparable groups. ANCOVA should not be used in neuropsychological research to equate groups unequal on variables such as age and education or to exert statistical control whose objective is to eliminate consideration of the covariate as an explanation for results. Finally, the report advocates by example the use of simulation to further our understanding of neuropsychological variables.
Grid indentation analysis of mechanical properties of composite electrodes in Li-ion batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasconcelos, Luize Scalco de; Xu, Rong; Li, Jianlin
We report that electrodes in commercial rechargeable batteries are microscopically heterogeneous materials. The constituent components, including active materials, polymeric binders, and porous conductive matrix, often have large variation in their mechanical properties, making the mechanical characterization of composite electrodes a challenging task. In a model system of LiNi0.5Mn0.3Co0.2O2 cathode, we employ the instrumented grid indentation to determine the elastic modulus and hardness of the constituent phases. The approach relies on a large array of nanoindentation experiments and statistical analysis of the resulting data provided that the maximum indentation depth is carefully chosen. The statistically extracted properties of the active particles and the surrounding medium are in good agreement with the tests of targeted indentation at selected sites. Lastly, the combinatory technique of grid indentation and statistical deconvolution represents a fast and reliable route to quantify the mechanical properties of composite electrodes that feed the parametric input for the mechanics models.
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the traces that were detected (blood, instruments, and clothes), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide distribution as to how the issue was addressed in the documents. We then examined a quantitative approach involving an empirical equation and used multivariate procedures to validate the proposed quantitative methodology. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Employment program for patients with severe mental illness in Malaysia: a 3-month outcome.
Wan Kasim, Syarifah Hafizah; Midin, Marhani; Abu Bakar, Abdul Kadir; Sidi, Hatta; Nik Jaafar, Nik Ruzyanei; Das, Srijit
2014-01-01
This study aimed to examine the rate and predictive factors of successful employment at 3 months upon enrolment into an employment program among patients with severe mental illness (SMI). A cross-sectional study using a universal sampling technique was conducted on patients with SMI who completed a 3-month period of being employed at Hospital Permai, Malaysia. A total of 147 patients were approached and 126 were finally included in the statistical analyses. Successful employment was defined as the ability to work 40 or more hours per month. Factors significantly associated with successful employment from bivariate analyses were entered into a multiple logistic regression analysis to identify predictors of successful employment. The rate of successful employment at 3 months was 68.3% (n=81). Significant factors associated with successful employment from bivariate analyses were having a past history of working, good family support, fewer psychiatric admissions, good compliance with medication, good interest in work, living in a hostel, being motivated to work, being satisfied with the job or salary, getting a preferred job, being in competitive or supported employment, and having higher-than-median scores on the PANSS positive, negative, and general psychopathology scales. Significant predictors of employment from the logistic regression model were having a good past history of working (p<0.021; OR 6.12; 95% CI 2.1-11.9) and getting a preferred job (p<0.032; OR 4.021; 95% CI 1.83-12.1). Results showed a high employment rate among patients with SMI. A good past history of working and getting a preferred job were significant predictors of successful employment. Copyright © 2014 Elsevier Inc. All rights reserved.
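The step from bivariate screening to a multiple logistic regression, as described above, can be sketched as follows. This Python example uses synthetic data and hypothetical predictor names (past work history and preferred job); it is only an illustration of the modeling step, not the study's actual dataset or code.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: two hypothetical binary predictors of working >= 40 h/month
rng = np.random.default_rng(1)
n = 200
past_work = rng.integers(0, 2, n)
preferred_job = rng.integers(0, 2, n)
logit_p = -1.0 + 1.8 * past_work + 1.4 * preferred_job   # assumed true effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([past_work, preferred_job]).astype(float))
model = sm.Logit(y, X).fit(disp=False)

odds_ratios = np.exp(model.params[1:])       # odds ratios for the two predictors
or_conf_int = np.exp(model.conf_int()[1:])   # 95% CIs on the odds-ratio scale
print(odds_ratios, or_conf_int)
```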
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
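A minimal sketch of two of the remedies compared above, VIF screening and principal component regression, is given below in Python. The synthetic predictors and thresholds are assumptions for illustration; the study's Monte Carlo design and regional regression data are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def vif(X):
    """Variance inflation factor of each predictor: the diagonal of the
    inverse correlation matrix of the columns of X."""
    R = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(R))

# Synthetic small-sample data with two strongly correlated predictors
rng = np.random.default_rng(0)
n = 30
log_area = rng.normal(3.0, 1.0, n)
precip = 0.95 * log_area + rng.normal(0.0, 0.2, n)   # nearly collinear with log_area
slope = rng.normal(5.0, 1.0, n)
X = np.column_stack([log_area, precip, slope])
y = 2.0 + 1.5 * log_area + 0.5 * slope + rng.normal(0.0, 1.0, n)

print("VIF:", vif(X))          # values well above ~10 flag multicollinearity

# Principal component regression: regress y on the leading components of X
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("PCR in-sample R^2:", pcr.score(X, y))
```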
From the necessary to the possible: the genesis of the spin-statistics theorem
NASA Astrophysics Data System (ADS)
Blum, Alexander
2014-12-01
The spin-statistics theorem, which relates the intrinsic angular momentum of a single particle to the type of quantum statistics obeyed by a system of many such particles, is one of the central theorems in quantum field theory and the physics of elementary particles. It was first formulated in 1939/40 by Wolfgang Pauli and his assistant Markus Fierz. This paper discusses the developments that led up to this first formulation, starting from early attempts in the late 1920s to explain why charged matter particles obey Fermi-Dirac statistics, while photons obey Bose-Einstein statistics. It is demonstrated how several important developments paved the way from such general philosophical musings to a general (and provable) theorem, most notably the use of quantum field theory, the discovery of new elementary particles, and the generalization of the notion of spin. It is also discussed how the attempts to prove a spin-statistics connection were driven by Pauli from formal to more physical arguments, culminating in Pauli's 1940 proof. This proof was a major success for the beleaguered theory of quantum field theory and the methods Pauli employed proved essential for the renaissance of quantum field theory and the development of renormalization techniques in the late 1940s.
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis are employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied to single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrates a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
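Two ingredients of the approach described above, the Shannon entropy of a methylation distribution and the Jensen-Shannon distance between a test and a reference sample, can be computed directly with SciPy. The sketch below uses made-up probability vectors purely to show the calls; the paper's Ising-model estimation of those distributions is not reproduced.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy

# Hypothetical within-region distributions of methylation level
# (probabilities of levels 0, 0.25, 0.5, 0.75, 1) for a test and a reference sample
p_test = np.array([0.05, 0.10, 0.20, 0.30, 0.35])
p_ref = np.array([0.40, 0.30, 0.15, 0.10, 0.05])

# Methylation stochasticity of each sample quantified as Shannon entropy (bits)
h_test = entropy(p_test, base=2)
h_ref = entropy(p_ref, base=2)

# SciPy returns the Jensen-Shannon distance (square root of the divergence);
# with base=2 it lies between 0 (identical) and 1 (maximally different)
jsd = jensenshannon(p_test, p_ref, base=2)
print(h_test, h_ref, jsd)
```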
A Statistical Description of Neural Ensemble Dynamics
Long, John D.; Carmena, Jose M.
2011-01-01
The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486
Ogawa, Diogo M. O.; Moriya, Shigeharu; Tsuboi, Yuuri; Date, Yasuhiro; Prieto-da-Silva, Álvaro R. B.; Rádis-Baptista, Gandhi; Yamane, Tetsuo; Kikuchi, Jun
2014-01-01
We propose the technique of biogeochemical typing (BGC typing) as a novel methodology to set forth the sub-systems of organismal communities associated with correlated chemical profiles working within a larger complex environment. Given the intricate character of both the organismal and chemical consortia inherent in nature, many environmental studies employ the holistic approach of multi-omics analyses, mining as much information as possible. Because of the massive amount of data produced by multi-omics analyses, the results are hard to visualize and process. The BGC typing analysis is a pipeline built on integrative statistical analysis that can treat such huge datasets by filtering, organizing, and framing the information based on the strength of the various mutual trends of the organismal and chemical fluctuations occurring simultaneously in the environment. To test our technique of BGC typing, we chose a rich environment abounding in chemical nutrients and organismal diversity: the surficial freshwater from Japanese paddy fields and surrounding waters. To identify the community consortia profile we employed metagenomics via high-throughput sequencing (HTS) of fragments amplified from Archaea rRNA, universal 16S rRNA, and 18S rRNA; to assess the elemental content we employed ionomics by inductively coupled plasma optical emission spectroscopy (ICP-OES); and for the organic chemical profile, metabolomics employing both Fourier transform infrared (FT-IR) spectroscopy and proton nuclear magnetic resonance (1H-NMR); all these analyses comprised our multi-omics dataset. Similar trends between the community consortia and the chemical profiles were connected through correlation. The result was then filtered, organized, and framed according to correlation strengths and peculiarities. The output gave us four BGC types displaying uniqueness in community and chemical distribution, diversity, and richness. We therefore conclude that BGC typing is a successful technique for elucidating the sub-systems of organismal communities with associated chemical profiles in complex ecosystems. PMID:25330259
Finite Element Analysis of Reverberation Chambers
NASA Technical Reports Server (NTRS)
Bunting, Charles F.; Nguyen, Duc T.
2000-01-01
The primary motivating factor behind the initiation of this work was to provide a deterministic means of establishing the validity of the statistical methods that are recommended for the determination of fields that interact in an avionics system. The application of finite element analysis to reverberation chambers is the initial step required to establish a reasonable course of inquiry in this particularly data-intensive study. The use of computational electromagnetics provides a high degree of control of the "experimental" parameters that can be utilized in a simulation of reverberating structures. As the work evolved, there were four primary focus areas: 1. The eigenvalue problem for the source-free problem. 2. The development of a complex efficient eigensolver. 3. The application of a source for the TE and TM fields for statistical characterization. 4. The examination of shielding effectiveness in a reverberating environment. One early purpose of this work was to establish the utility of finite element techniques in the development of an extended low-frequency statistical model for reverberation phenomena. By employing finite element techniques, structures of arbitrary complexity can be analyzed due to the use of triangular shape functions in the spatial discretization. The effects of both frequency stirring and mechanical stirring are presented. It is suggested that for low-frequency operation the typical tuner size is inadequate to provide a sufficiently random field and that frequency stirring should be used. The results of the finite element analysis of the reverberation chamber illustrate the potential utility of a 2D representation for enhancing the basic statistical characteristics of the chamber when operating in a low-frequency regime. The basic field statistics are verified for frequency stirring over a wide range of frequencies. Mechanical stirring is shown to provide an effective frequency deviation.
Cossi, Marcus Vinícius Coutinho; de Almeida, Michelle Vieira; Dias, Mariane Rezende; de Arruda Pinto, Paulo Sérgiode; Nero, Luís Augusto
2012-01-01
The type of sampling technique used to obtain food samples is fundamental to the success of microbiological analysis. Destructive and nondestructive techniques, such as tissue excision and rinsing, respectively, are widely employed in obtaining samples from chicken carcasses. In this study, four sampling techniques used for chicken carcasses were compared to evaluate their performances in the enumeration of hygiene indicator microorganisms. Sixty fresh chicken carcasses were sampled by rinsing, tissue excision, superficial swabbing, and skin excision. All samples were submitted for enumeration of mesophilic aerobes, Enterobacteriaceae, coliforms, and Escherichia coli. The results were compared to determine the statistical significance of differences and correlation (P < 0.05). Tissue excision provided the highest microbial counts compared with the other procedures, with significant differences obtained only for coliforms and E. coli (P < 0.05). Significant correlations (P < 0.05) were observed for all the sampling techniques evaluated for most of the hygiene indicators. Despite presenting a higher recovery ability, tissue excision did not present significant differences for microorganism enumeration compared with other nondestructive techniques, such as rinsing, indicating its adequacy for microbiological analysis of chicken carcasses.
A proposed technique for vehicle tracking, direction, and speed determination
NASA Astrophysics Data System (ADS)
Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.
2004-12-01
A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem with significant hurdles still to be addressed. These are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea and is based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets, and are derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as described in this paper.
Employing Introductory Statistics Students at "Stats Dairy"
ERIC Educational Resources Information Center
Keeling, Kellie
2011-01-01
To combat students' fear of statistics, I employ my students at a fictional company, Stats Dairy, run by cows. Almost all examples used in the class notes, exercises, humour and exams use data "collected" from this company.
NASA Technical Reports Server (NTRS)
Bowles, Roland L.; Buck, Bill K.
2009-01-01
The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward-looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides one means, but not the only means, by which an applicant can demonstrate compliance with the FAA-directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risks to aircraft, passengers, and crew; and were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this defined methodology for calculating the probability of missed and false hazard indications, taking into account the effect of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.
Shannahoff-Khalsa, D S; Ray, L E; Levine, S; Gallen, C C; Schwartz, B J; Sidorowich, J J
1999-12-01
The objective of this study was to compare the efficacy of two meditation protocols for treating patients with obsessive-compulsive disorder (OCD). Patients were randomized to two groups, matched for sex, age, and medication status, and blinded to the comparison protocol. They were told the trial would last for 12 months, unless one protocol proved to be more efficacious. If so, groups would merge, and the group that received the less efficacious treatment would also be afforded 12 months of the more effective one. The study was conducted at Children's Hospital, San Diego, Calif. Patients were selected according to Diagnostic and Statistical Manual of Mental Disorders, Third Edition-Revised (DSM-III-R) criteria and recruited by advertisements and referral. At baseline, Group 1 included 11 adults and 1 adolescent, and Group 2 included 10 adults. Group 1 employed a kundalini yoga meditation protocol and Group 2 employed the Relaxation Response plus Mindfulness Meditation technique. Baseline and 3-month interval testing was conducted using the Yale-Brown Obsessive Compulsive Scale (Y-BOCS), Symptoms Checklist-90-Revised Obsessive Compulsive (SCL-90-R OC) and Global Severity Index (SCL-90-R GSI) scales, Profile of Moods scale (POMS), Perceived Stress Scale (PSS), and Purpose in Life (PIL) test. Seven adults in each group completed 3 months of therapy. At 3 months, Group 1 demonstrated greater improvements (Student's independent groups t-test) on the Y-BOCS, SCL-90-R OC and GSI scales, and POMS, and greater but nonsignificant improvements on the PSS and PIL test. An intent-to-treat analysis (Y-BOCS) for the baseline and 3-month tests showed that only Group 1 improved. Within-group statistics (Student's paired t-tests) showed that Group 1 significantly improved on all six scales, but Group 2 had no improvements. Groups were merged for an additional year using Group 1 techniques. At 15 months, the final group (N=11) improved 71%, 62%, 66%, 74%, 39%, and 23%, respectively, on the Y-BOCS, SCL-90-R OC, SCL-90-R GSI, POMS, PSS, and PIL; P<0.003 (analysis of variance). This study demonstrates that kundalini yoga techniques are effective in the treatment of OCD.
Limb-darkening and the structure of the Jovian atmosphere
NASA Technical Reports Server (NTRS)
Newman, W. I.; Sagan, C.
1978-01-01
By observing the transit of various cloud features across the Jovian disk, limb-darkening curves were constructed for three regions in the 4.6 to 5.1 μm band. Several models currently employed in describing the radiative or dynamical properties of planetary atmospheres are here examined to understand their implications for limb-darkening. The statistical problem of fitting these models to the observed data is reviewed and methods for applying multiple regression analysis are discussed. Analysis of variance techniques are introduced to test the viability of a given physical process as a cause of the observed limb-darkening.
Predictive Analytics for City Agencies: Lessons from Children's Services.
Shroff, Ravi
2017-09-01
Many municipal agencies maintain detailed and comprehensive electronic records of their interactions with citizens. These data, in combination with machine learning and statistical techniques, offer the promise of better decision making, and more efficient and equitable service delivery. However, a data scientist employed by an agency to implement these techniques faces numerous and varied choices that cumulatively can have significant real-world consequences. The data scientist, who may be the only person at an agency equipped to understand the technical complexity of a predictive algorithm, therefore, bears a good deal of responsibility in making judgments. In this perspective, I use a concrete example from my experience of working with New York City's Administration for Children's Services to illustrate the social and technical tradeoffs that can result from choices made in each step of data analysis. Three themes underlie these tradeoffs: the importance of frequent communication between the data scientist, agency leadership, and domain experts; the agency's resources and organizational constraints; and the necessity of an ethical framework to evaluate salient costs and benefits. These themes inform specific recommendations that I provide to guide agencies that employ data scientists and rely on their work in designing, testing, and implementing predictive algorithms.
Preferred Materials and Methods Employed for Endodontic Treatment by Iranian General Practitioners
Raoof, Maryam; Zeini, Negar; Haghani, Jahangir; Sadr, Saeedeh; Mohammadalizadeh, Sakineh
2015-01-01
Introduction: The aim of this study was to gather information on the materials and methods employed in root canal treatment (RCT) by general dental practitioners (GDPs) in Iran. Methods and Materials: A questionnaire was distributed among 450 dentists who attended the 53rd Iranian Dental Association congress. Participants were asked to provide demographic information and answer questions regarding the materials and methods commonly used in RCT. Descriptive statistics were given as absolute frequencies and valid percentages. The chi-square test was used to investigate the influence of gender and years of professional activity on the materials and techniques employed. Results: The response rate was 84.88%. The results showed that 61.5% of the participants did not perform pulp sensitivity tests prior to RCT. Less than half of the general dental practitioners (47.4%) said that they would trace a sinus tract before starting the treatment. Nearly 16% of practitioners preferred the rubber dam isolation method. Over 36% of the practitioners reported using formocresol for pulpotomy. The combined approach of working length (WL) radiographs and electronic apex locators was used by 35.2% of the practitioners. Most of the respondents used K-file hand instruments for canal preparation and the technique of choice was step-back (43.5%), while 40.1% of respondents used NiTi rotary files, mostly ProTaper and RaCe. The most widely used irrigant was normal saline (61.8%). Calcium hydroxide was the most commonly used inter-appointment medicament (84.6%). The most popular obturation technique was cold lateral condensation (81.7%), with 51% using zinc oxide-eugenol-based sealers. Conclusions: The majority of Iranian GDPs who participated in the present survey do not comply with quality guidelines for endodontic treatment. PMID:25834595
Sheet, Debdoot; Karamalis, Athanasios; Eslami, Abouzar; Noël, Peter; Chatterjee, Jyotirmoy; Ray, Ajoy K; Laine, Andrew F; Carlier, Stephane G; Navab, Nassir; Katouzian, Amin
2014-01-01
Intravascular Ultrasound (IVUS) is a predominant imaging modality in interventional cardiology. It provides real-time cross-sectional images of arteries and assists clinicians in inferring the composition of atherosclerotic plaques. These plaques are heterogeneous in nature and comprise fibrous tissue, lipid deposits and calcifications. Each of these tissues backscatters ultrasonic pulses and is associated with a characteristic intensity in the B-mode IVUS image. However, clinicians are challenged when co-located heterogeneous tissues backscatter mixed signals that appear as non-unique intensity patterns in the B-mode IVUS image. Tissue characterization algorithms have been developed to assist clinicians in identifying such heterogeneous tissues and assessing plaque vulnerability. In this paper, we propose a novel technique, coined Stochastic Driven Histology (SDH), that is able to provide information about co-located heterogeneous tissues. It employs learning of tissue-specific ultrasonic backscattering statistical physics and a signal confidence primal from labeled data for predicting heterogeneous tissue composition in plaques. We employ a random forest for the purpose of learning such a primal using sparsely labeled and noisy samples. In clinical deployment, the posterior prediction of the different lesions constituting the plaque is estimated. Folded cross-validation experiments have been performed with 53 plaques, indicating high concurrence with traditional tissue histology. On the wider horizon, this framework enables learning of tissue-energy interaction statistical physics and can be leveraged for promising clinical applications requiring tissue characterization beyond the application demonstrated in this paper. Copyright © 2013 Elsevier B.V. All rights reserved.
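The supervised step described above, learning a posterior over tissue classes with a random forest, can be sketched in a few lines of Python. The features, labels, and forest settings below are assumptions chosen for illustration; they are not the SDH feature set or the study's trained model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel features (e.g., B-mode intensity, a local backscatter
# statistic, a signal-confidence value) with histology-derived labels:
# 0 = fibrous, 1 = lipidic, 2 = calcified
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 3))
y_train = rng.integers(0, 3, size=5000)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# Posterior class probabilities for new pixels, which could be rendered as a
# tissue map over the IVUS cross-section
X_new = rng.normal(size=(10, 3))
posterior = forest.predict_proba(X_new)      # shape (10, 3), rows sum to 1
print(posterior[:2])
```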
Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach.
Borghetti, Brett J; Giametta, Joseph J; Rusnock, Christina F
2017-02-01
We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models. Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation. We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool workload model estimates. Cross-participant models predict workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in machine learning model performance, suggesting that stochastic models can be used when limited training data are available. We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments. These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.
Statistical link between external climate forcings and modes of ocean variability
NASA Astrophysics Data System (ADS)
Malik, Abdul; Brönnimann, Stefan; Perona, Paolo
2017-07-01
In this study we investigate the statistical link between external climate forcings and modes of ocean variability on inter-annual (3-year) to centennial (100-year) timescales using a de-trended semi-partial cross-correlation analysis technique. To investigate this link we employ observations (AD 1854-1999), climate proxies (AD 1600-1999), and coupled Atmosphere-Ocean-Chemistry Climate Model simulations with SOCOL-MPIOM (AD 1600-1999). We find robust statistical evidence that the Atlantic multi-decadal oscillation (AMO) has an intrinsic positive correlation with solar activity in all datasets employed. The strength of the relationship between the AMO and solar activity is modulated by volcanic eruptions and complex interactions among modes of ocean variability. The observational dataset reveals that the El Niño southern oscillation (ENSO) has a statistically significant negative intrinsic correlation with solar activity on decadal to multi-decadal timescales (16-27-year), whereas there is no evidence of a link on a typical ENSO timescale (2-7-year). In the observational dataset, volcanic eruptions do not have a link with the AMO on a typical AMO timescale (55-80-year); however, the long-term datasets (proxies and SOCOL-MPIOM output) show that volcanic eruptions have an intrinsic negative correlation with the AMO on inter-annual to multi-decadal timescales. The Pacific decadal oscillation has no link with solar activity; however, it has a positive intrinsic correlation with volcanic eruptions on multi-decadal timescales (47-54-year) in the reconstruction and on decadal to multi-decadal timescales (16-32-year) in the climate model simulations. We also find evidence of a link between volcanic eruptions and ENSO; however, the sign of the relationship is not consistent between observations/proxies and climate model simulations.
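The core statistic mentioned above, a de-trended semi-partial correlation, can be sketched simply: correlate one de-trended series with the part of another that is not explained by a third. The Python sketch below is a bare-bones version using a single linear regression for the control variable; the study's timescale-resolved analysis and significance testing are not reproduced, and the variable names in the usage comment are only placeholders.

```python
import numpy as np
from scipy import signal, stats

def semi_partial_corr(x, y, control):
    """Correlation between de-trended x and the part of de-trended y that is
    not linearly explained by the de-trended control series."""
    x, y, z = (signal.detrend(np.asarray(s, dtype=float)) for s in (x, y, control))
    beta = np.dot(z, y) / np.dot(z, z)     # regress y on z (both series are zero-mean)
    y_resid = y - beta * z
    return stats.pearsonr(x, y_resid)      # (correlation, two-sided p-value)

# Usage with placeholder series names, e.g. solar activity vs the AMO with the
# volcanic series controlled out of the AMO:
# r, p = semi_partial_corr(solar_index, amo_index, volcanic_forcing)
```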
SnapShot: Visualization to Propel Ice Hockey Analytics.
Pileggi, H; Stolper, C D; Boyle, J M; Stasko, J T
2012-12-01
Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders. Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data, yet given the importance of a specific hockey statistic, shot length, we introduce a technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel.
Optimization of Premix Powders for Tableting Use.
Todo, Hiroaki; Sato, Kazuki; Takayama, Kozo; Sugibayashi, Kenji
2018-05-08
Direct compression is a popular choice as it provides the simplest way to prepare a tablet. It can be easily adopted when the active pharmaceutical ingredient (API) is unstable in water or under thermal drying. An optimal formulation of preliminary mixed powders (premix powders) is beneficial if prepared in advance for tableting use. The aim of this study was to find the optimal formulation of premix powders composed of lactose (LAC), cornstarch (CS), and microcrystalline cellulose (MCC) by using statistical techniques. Based on the "Quality by Design" concept, a (3,3)-simplex lattice design consisting of three components (LAC, CS, and MCC) was employed to prepare the model premix powders. A response surface method incorporating thin-plate spline interpolation (RSM-S) was applied to estimate the optimum premix powders for tableting use. The effect of tablet shape, identified by the surface curvature, on the optimization was investigated. The optimum premix powder was effective when applied to a small quantity of API, although its function was limited for formulations containing a large amount of API. Statistical techniques are valuable for exploiting new functions of well-known materials such as LAC, CS, and MCC.
Detecting Spatial Patterns in Biological Array Experiments
ROOT, DAVID E.; KELLEY, BRIAN P.; STOCKWELL, BRENT R.
2005-01-01
Chemical genetic screening and DNA and protein microarrays are among a number of increasingly important and widely used biological research tools that involve large numbers of parallel experiments arranged in a spatial array. It is often difficult to ensure that uniform experimental conditions are present throughout the entire array, and as a result, one often observes systematic spatially correlated errors, especially when array experiments are performed using robots. Here, the authors apply techniques based on the discrete Fourier transform to identify and quantify spatially correlated errors superimposed on a spatially random background. They demonstrate that these techniques are effective in identifying common spatially systematic errors in high-throughput 384-well microplate assay data. In addition, the authors employ a statistical test to allow for automatic detection of such errors. Software tools for using this approach are provided. PMID:14567791
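A minimal version of the Fourier-based check described above can be written directly with NumPy: take the 2D DFT of a plate of measurements and look for an unusually dominant non-zero spatial frequency. The 16 x 24 layout, the injected alternating-column bias, and the simple power-ratio statistic below are assumptions for illustration, not the authors' published software.

```python
import numpy as np

def spatial_error_power(plate):
    """Fraction of (non-DC) signal power concentrated in the strongest
    spatial frequency of a plate of measurements (rows x columns).

    A high fraction suggests a spatially periodic, systematic error such as
    a row/column effect introduced by liquid-handling robotics."""
    plate = plate - plate.mean()           # remove the mean (DC component)
    power = np.abs(np.fft.fft2(plate)) ** 2
    power[0, 0] = 0.0
    return power.max() / power.sum()

# Synthetic 16 x 24 (384-well) plate: random signal plus an every-other-column bias
rng = np.random.default_rng(0)
plate = rng.normal(size=(16, 24))
plate[:, ::2] += 1.0
print(spatial_error_power(plate))          # much larger than for pure noise
```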
Avalappampatty Sivasamy, Aneetha; Sundan, Bose
2015-01-01
The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668
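A bare-bones Hotelling's T-squared detector of the kind described above can be sketched as follows. The chi-square control limit used here is a common large-sample approximation and stands in for the paper's central-limit-theorem threshold; the five synthetic traffic features are placeholders rather than KDD Cup'99 attributes.

```python
import numpy as np
from scipy import stats

def hotelling_t2(baseline, x, alpha=0.05):
    """Hotelling's T-squared distance of observation x from a baseline sample,
    with a chi-square control limit (a common large-sample approximation)."""
    mu = baseline.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
    d = x - mu
    t2 = float(d @ S_inv @ d)
    limit = stats.chi2.ppf(1.0 - alpha, df=baseline.shape[1])
    return t2, limit, t2 > limit

# Synthetic example: 5 traffic features, 1000 baseline records of normal traffic
rng = np.random.default_rng(0)
baseline = rng.normal(size=(1000, 5))
normal_obs = rng.normal(size=5)
attack_obs = normal_obs + 4.0                # strongly shifted profile
print(hotelling_t2(baseline, normal_obs))    # typically below the limit
print(hotelling_t2(baseline, attack_obs))    # flagged as an anomaly
```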
The Taylor saddle effacement: a new technique for correction of saddle nose deformity.
Taylor, S Mark; Rigby, Matthew H
2008-02-01
To describe a novel technique, the Taylor saddle effacement (TSE), for correction of saddle nose deformity using autologous grafts from the lower lateral cartilages. A prospective evaluation of six patients, all of whom had the TSE performed. Photographs were taken in combination with completion of a rhinoplasty outcomes questionnaire preoperatively and at 6 months. The questionnaire included a visual analogue scale (VAS) of nasal breathing and a rhinoplasty outcomes evaluation (ROE) of nasal function and esthetics. All six patients had improvement in both their global nasal airflow on the VAS and on their ROE that was statistically significant. The mean preoperative VAS score was 5.8 compared with our postoperative mean of 8.5 of a possible 10. Mean ROE scores improved from 34.7 to 85.5. At 6 months, all patients felt that their nasal appearance had improved. The TSE is a simple and reliable technique for correction of saddle nose deformity. This prospective study has demonstrated improvement in both nasal function and esthetics when it is employed.
Kong, Jessica; Giridharagopal, Rajiv; Harrison, Jeffrey S; Ginger, David S
2018-05-31
Correlating nanoscale chemical specificity with operational physics is a long-standing goal of functional scanning probe microscopy (SPM). We employ a data analytic approach combining multiple microscopy modes, using compositional information in infrared vibrational excitation maps acquired via photoinduced force microscopy (PiFM) with electrical information from conductive atomic force microscopy. We study a model polymer blend comprising insulating poly(methyl methacrylate) (PMMA) and semiconducting poly(3-hexylthiophene) (P3HT). We show that PiFM spectra are different from FTIR spectra, but can still be used to identify local composition. We use principal component analysis to extract statistically significant principal components and principal component regression to predict local current and identify local polymer composition. In doing so, we observe evidence of semiconducting P3HT within PMMA aggregates. These methods are generalizable to correlated SPM data and provide a meaningful technique for extracting complex compositional information that are impossible to measure from any one technique.
Yang, Jun-Ho; Yoh, Jack J
2018-01-01
A novel technique is reported for separating overlapping latent fingerprints using chemometric approaches that combine laser-induced breakdown spectroscopy (LIBS) and multivariate analysis. The LIBS technique provides the capability of real-time analysis and high-frequency scanning, as well as data regarding the chemical composition of overlapping latent fingerprints. These spectra offer valuable information for the classification and reconstruction of overlapping latent fingerprints by implementing appropriate statistical multivariate analysis. The current study employs principal component analysis and partial least squares methods for the classification of latent fingerprints from the LIBS spectra. This technique was successfully demonstrated through a classification study of four distinct latent fingerprints using classification methods such as soft independent modeling of class analogy (SIMCA) and partial least squares discriminant analysis (PLS-DA). The novel method yielded an accuracy of more than 85% and was proven to be sufficiently robust. Furthermore, through laser scanning analysis at a spatial interval of 125 µm, the overlapping fingerprints were reconstructed as separate two-dimensional forms.
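The PLS-DA classification step mentioned above can be illustrated with scikit-learn by regressing a one-hot encoding of the class labels on the spectra and assigning each new spectrum to the class with the largest predicted score. The synthetic spectra, number of components, and donor count below are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical training set: 80 LIBS spectra (500 wavelength channels) from
# four donors, 20 spectra each
rng = np.random.default_rng(0)
spectra = rng.normal(size=(80, 500))
labels = np.repeat(np.arange(4), 20)

# PLS-DA: regress a one-hot encoding of the donor labels on the spectra
Y = (labels[:, None] == np.arange(4)).astype(float)
plsda = PLSRegression(n_components=5)
plsda.fit(spectra, Y)

# A new spectrum is assigned to the class with the largest predicted score
new_spectra = rng.normal(size=(3, 500))
predicted_class = plsda.predict(new_spectra).argmax(axis=1)
print(predicted_class)
```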
Benson, Nsikak U.; Asuquo, Francis E.; Williams, Akan B.; Essien, Joseph P.; Ekong, Cyril I.; Akpabio, Otobong; Olajire, Abaas A.
2016-01-01
Trace metals (Cd, Cr, Cu, Ni and Pb) concentrations in benthic sediments were analyzed through multi-step fractionation scheme to assess the levels and sources of contamination in estuarine, riverine and freshwater ecosystems in Niger Delta (Nigeria). The degree of contamination was assessed using the individual contamination factors (ICF) and global contamination factor (GCF). Multivariate statistical approaches including principal component analysis (PCA), cluster analysis and correlation test were employed to evaluate the interrelationships and associated sources of contamination. The spatial distribution of metal concentrations followed the pattern Pb>Cu>Cr>Cd>Ni. Ecological risk index by ICF showed significant potential mobility and bioavailability for Cu, Cu and Ni. The ICF contamination trend in the benthic sediments at all studied sites was Cu>Cr>Ni>Cd>Pb. The principal component and agglomerative clustering analyses indicate that trace metals contamination in the ecosystems was influenced by multiple pollution sources. PMID:27257934
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
Yuen, Tammy; Wells, Kayla; Benoit, Samantha; Yohanathan, Sahila; Capelletti, Lauren; Stuber, Kent
2013-01-01
Introduction: Due to different biomechanical, nutritional, and hormonal considerations, it is possible that chiropractors may employ different therapeutic interventions and recommendations for pregnant patients than non-pregnant ones. The objective of this study was to determine the therapeutic interventions that chiropractors who are members of the Ontario Chiropractic Association in the Greater Toronto Area most commonly provide to pregnant patients. Methods: An introductory e-mail was sent in October 2011 to 755 members of the Ontario Chiropractic Association within the Greater Toronto Area five days prior to a 15-question survey being distributed via e-mail. Reminder e-mails were sent 13 days and 27 days later. Using descriptive statistics, demographic information was reported along with the use of different treatments and recommendations for pregnant patients. Results: A response rate of 23% was obtained. The majority of the respondents (90%) reported using the Diversified technique on pregnant patients, followed by soft tissue therapy (62%) and Activator (42%). The most common adjunctive therapy recommended to pregnant patients was referral to massage therapy (90%). Most of the respondents (92%) indicated that they prescribe stretching exercises to pregnant patients and recommend a multivitamin (84%) or folic acid (81%) to pregnant patients. Conclusion: In agreement with previous research on chiropractic technique usage on non-pregnant patients, the majority of respondents indicated treating pregnant patients with the Diversified technique, with other chiropractic techniques being utilized at varying rates on pregnant patients. Most respondents indicated prescribing exercise, and making adjunctive and nutritional recommendations frequently for their pregnant patients. PMID:23754858
NASA Astrophysics Data System (ADS)
Garraffo, Z. D.; Nadiga, S.; Krasnopolsky, V.; Mehra, A.; Bayler, E. J.; Kim, H. C.; Behringer, D.
2016-02-01
A Neural Network (NN) technique is used to produce consistent global ocean color estimates, bridging multiple satellite ocean color missions by linking ocean color variability - primarily driven by biological processes - with the physical processes of the upper ocean. Satellite-derived surface variables - sea-surface temperature (SST) and sea-surface height (SSH) fields - are used as signatures of upper-ocean dynamics. The NN technique employs adaptive weights that are tuned by applying statistical learning (training) algorithms to past data sets, providing robustness with respect to random noise, accuracy, fast emulations, and fault-tolerance. This study employs Sea-viewing Wide Field-of-View Sensor (SeaWiFS) chlorophyll-a data for 1998-2010 in conjunction with satellite SSH and SST fields. After interpolating all data sets to the same two-degree latitude-longitude grid, the annual mean was removed and monthly anomalies were extracted. The NN was trained on the even years of that period and tested for errors and bias on the odd years. The NN outputs are assessed for: (i) bias, (ii) variability, (iii) root-mean-square error (RMSE), and (iv) cross-correlation. A Jacobian is evaluated to estimate the impact of each input (SSH, SST) on the NN chlorophyll-a estimates. The differences between an ensemble of NNs and a single NN are examined. After the NN is trained for the SeaWiFS period, the NN is then applied and validated for 2005-2015, a period covered by other satellite missions: the Moderate Resolution Imaging Spectroradiometer (MODIS AQUA) and the Visible Infrared Imaging Radiometer Suite (VIIRS).
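A minimal emulation of the described setup, assuming a simple feed-forward regressor mapping SSH/SST anomalies to chlorophyll-a anomalies with an even/odd split, is sketched below; the variable names, grid size, and synthetic relationship are assumptions, not the operational configuration.

```python
# Illustrative NN emulator: predict chlorophyll-a anomalies from SSH/SST anomalies.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_months, n_grid = 156, 1000                 # assumed: 13 years of monthly two-degree grid points
X = rng.normal(size=(n_months * n_grid, 2))  # columns: SSH anomaly, SST anomaly
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=len(X))  # synthetic chl-a anomaly

train = np.arange(len(X)) % 2 == 0           # crude stand-in for the even-year / odd-year split
nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X[train], y[train])
resid = nn.predict(X[~train]) - y[~train]
print("bias:", resid.mean(), "RMSE:", np.sqrt((resid**2).mean()))
```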
Structural Health Monitoring of Composite Plates Under Ambient and Cryogenic Conditions
NASA Technical Reports Server (NTRS)
Engberg, Robert C.
2005-01-01
Methods for structural health monitoring are now being assessed, especially in high-performance, extreme environment, safety-critical applications. One such application is for composite cryogenic fuel tanks. The work presented here attempts to characterize and investigate the feasibility of using imbedded piezoelectric sensors to detect cracks and delaminations under cryogenic and ambient conditions. Different types of excitation and response signals and different sensors are employed in composite plate samples to aid in determining an optimal algorithm, sensor placement strategy, and type of imbedded sensor to use. Variations of frequency and high-frequency chirps of the sensors are employed and compared. Statistical and analytic techniques are then used to determine which method is most desirable for a specific type of damage and operating environment. These results are furthermore compared with previous work using externally mounted sensors. More work is needed to accurately account for the temperature changes seen in these environments and to achieve statistical significance. Sensor development and placement strategy are other areas of further work to make structural health monitoring more robust. Results from this and other work might then be incorporated into a larger composite structure to validate and assess its structural health. This could prove to be important in the development and qualification of any 2nd generation reusable launch vehicle using composites as a structural element.
Climate change adaptation: a panacea for food security in Ondo State, Nigeria
NASA Astrophysics Data System (ADS)
Fatuase, A. I.
2017-08-01
This paper examines the perceived causes of climate change, the adaptation strategies employed, and the technical inefficiency of arable crop farmers in Ondo State, Nigeria. Data were obtained from primary sources using a structured questionnaire supported by an interview schedule. A multistage sampling technique was used. Data were analyzed using descriptive statistics and the stochastic frontier production function. The findings showed that the majority of the respondents (59.1 %) still believed that climate change is a natural phenomenon beyond man's power to abate, while industrial release, improper sewage disposal, fossil fuel use, deforestation and bush burning were perceived as the main human factors influencing climate change by the respondents (40.9 %) who identified human activities as the primary cause. The main adaptation strategies employed by the farmers were mixed cropping, planting early-maturing crops, planting resistant crops and the use of agrochemicals. The arable crop farmers were relatively technically efficient, with about 53 % of them having technical efficiency above the study-area average of 0.784. The study observed that education, adaptation, perception, climate information and farming experience were statistically significant in decreasing the inefficiency of arable crop production. Therefore, advocacy on climate change and its adaptation strategies should be intensified in the study area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akcakaya, Murat; Nehorai, Arye; Sen, Satyabrata
Most existing radar algorithms are developed under the assumption that the environment (clutter) is stationary. However, in practice, the characteristics of the clutter can vary enormously depending on the radar-operational scenarios. If unaccounted for, these nonstationary variabilities may drastically hinder the radar performance. Therefore, to overcome such shortcomings, we develop a data-driven method for target detection in nonstationary environments. In this method, the radar dynamically detects changes in the environment and adapts to these changes by learning the new statistical characteristics of the environment and by intelligently updating its statistical detection algorithm. Specifically, we employ drift detection algorithms to detect changes in the environment, and incremental learning (particularly learning-under-concept-drift algorithms) to learn the new statistical characteristics of the environment from the new radar data that become available in batches over a period of time. The newly learned environment characteristics are then integrated into the detection algorithm. Furthermore, we use Monte Carlo simulations to demonstrate that the developed method provides a significant improvement in the detection performance compared with detection techniques that are not aware of the environmental changes.
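The drift detection step could resemble the Page-Hinkley-style sketch below, which flags a shift in the mean of a clutter statistic; this is a generic illustration, not the authors' specific algorithm.

```python
# Minimal Page-Hinkley-style drift detector on a simulated clutter statistic stream.
import numpy as np

def page_hinkley(x, delta=0.005, threshold=5.0):
    """Return the index at which a positive mean shift is flagged, or None."""
    mean, cum, cum_min = 0.0, 0.0, 0.0
    for t, xt in enumerate(x, start=1):
        mean += (xt - mean) / t                 # running mean of the stream
        cum += xt - mean - delta                # cumulative deviation statistic
        cum_min = min(cum_min, cum)
        if cum - cum_min > threshold:           # drift declared when the statistic jumps
            return t
    return None

rng = np.random.default_rng(3)
clutter = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])  # mean shift at sample 500
print("drift detected at sample:", page_hinkley(clutter))
```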
Park, Jangwoon; Ebert, Sheila M; Reed, Matthew P; Hallman, Jason J
2016-03-01
Previously published statistical models of driving posture have been effective for vehicle design but have not taken into account the effects of age. The present study developed new statistical models for predicting driving posture. Driving postures of 90 U.S. drivers with a wide range of age and body size were measured in a laboratory mockup under nine package conditions. Posture-prediction models for female and male drivers were separately developed by employing a stepwise regression technique using age, body dimensions, vehicle package conditions, and two-way interactions, among other variables. Driving posture was significantly associated with age, and the effects of other variables depended on age. A set of posture-prediction models is presented for women and men. The results are compared with a previously developed model. The present study is the first study of driver posture to include a large cohort of older drivers and the first to report a significant effect of age. The posture-prediction models can be used to position computational human models or crash-test dummies for vehicle design and assessment. © 2015, Human Factors and Ergonomics Society.
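A forward stepwise selection of posture predictors might look like the following sketch, which adds variables while the AIC improves; the predictor names and the synthetic posture variable are placeholders, not the study's measured data.

```python
# Hedged sketch of forward stepwise regression by AIC.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(200, 4)), columns=["age", "stature", "seat_height", "bmi"])
df["hip_x"] = 2.0 * df["stature"] + 0.5 * df["age"] + rng.normal(size=200)  # synthetic posture variable

def forward_select(data, response, candidates):
    selected = []
    while candidates:
        # AIC of the current model (infinite if no predictors yet)
        current_aic = (sm.OLS(data[response], sm.add_constant(data[selected])).fit().aic
                       if selected else np.inf)
        aics = {c: sm.OLS(data[response], sm.add_constant(data[selected + [c]])).fit().aic
                for c in candidates}
        best = min(aics, key=aics.get)
        if aics[best] >= current_aic:          # stop when no candidate improves the AIC
            break
        selected.append(best)
        candidates.remove(best)
    return selected

print(forward_select(df, "hip_x", ["age", "stature", "seat_height", "bmi"]))
```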
Hidden explosives detector employing pulsed neutron and x-ray interrogation
Schultz, F.J.; Caldwell, J.T.
1993-04-06
Methods and systems for the detection of small amounts of modern, highly-explosive nitrogen-based explosives, such as plastic explosives, hidden in airline baggage. Several techniques are employed either individually or combined in a hybrid system. One technique employed in combination is X-ray imaging. Another technique is interrogation with a pulsed neutron source in a two-phase mode of operation to image both nitrogen and oxygen densities. Another technique employed in combination is neutron interrogation to form a hydrogen density image or three-dimensional map. In addition, deliberately-placed neutron-absorbing materials can be detected.
Education and Employment Patterns of Bioscientists. A Statistical Report.
ERIC Educational Resources Information Center
National Academy of Sciences - National Research Council, Washington, DC.
This report contains a compilation of manpower statistics describing the education and employment of bioscientists. The tables also include data from other major disciplines to allow for comparisons with other scientists and nonscientists. Bioscientists include those with degrees in anatomy, biochemistry, biophysics, genetics, microbiology,…
OCCUPATIONS IN COLORADO. PART I, OUTLOOK BY INDUSTRIES.
ERIC Educational Resources Information Center
1966
Current and projected employment statistics are given for the state and for the Denver Standard Metropolitan Statistical Area, which includes Adams, Arapahoe, Boulder, Denver, and Jefferson counties. Data were obtained from the Colorado Department of Employment, Denver Research Institute, U.S. Census, University of Colorado, Mountain States…
Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E
2018-05-15
Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods originally developed for the segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data, (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
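The shape-model-based augmentation can be sketched as follows: build a PCA shape model from training keypoints and perturb a shape along its principal modes. Array sizes, the number of modes, and the perturbation scale are assumptions for illustration, not the paper's implementation.

```python
# Illustrative PCA shape-model augmentation: perturb surface keypoints along shape modes.
import numpy as np

rng = np.random.default_rng(5)
keypoints = rng.normal(size=(50, 300))                   # 50 training shapes x (100 keypoints * 3 coords)

mean_shape = keypoints.mean(axis=0)
U, s, Vt = np.linalg.svd(keypoints - mean_shape, full_matrices=False)
modes = Vt[:10]                                          # first 10 shape modes
stddevs = s[:10] / np.sqrt(len(keypoints) - 1)           # per-mode standard deviations

def augment(shape, scale=0.5):
    """Deform one keypoint set by a random, bounded combination of shape modes."""
    b = rng.normal(scale=scale, size=len(modes)) * stddevs
    return shape + b @ modes

augmented = augment(keypoints[0])
print("max keypoint displacement:", np.abs(augmented - keypoints[0]).max())
```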
Pradhan, Raunak; Kulkarni, Deepak
2017-01-01
Introduction Fear of dental pain is one of the most common reasons for delaying dental treatment. Local Anaesthesia (LA) is the most commonly employed technique for achieving pain control in dentistry. The Pterygomandibular Nerve Block (PNB) has been the traditional technique for achieving mandibular anaesthesia and is associated with a set of complications that include pain, nerve injury, trismus, rarely facial nerve palsy, and sustained soft tissue anaesthesia. These complications have created a pressing need for research on alternative local anaesthetic techniques. Aim This study was undertaken with the objective of determining the pain, duration, profoundness and complications associated with administration of the Intraligamentary Injection Technique (ILT). Materials and Methods This study was conducted on 194 patients (male=122, female=72) who reported for dental extractions of mandibular posterior teeth. The ILT was administered with a Ligajet intraligamentary jet injector, using a cartridge containing lignocaine hydrochloride 2% with adrenaline 1:80000 and a 30-gauge needle, at the buccal (mesiobuccal), lingual, mesial and distal aspects of the mandibular molars. The data were analyzed using the statistical computer software SPSS 11.0 (Statistical Package for the Social Sciences, version 11.0, SPSS Inc.). The median was derived for Pain on Injection (PI) and Pain during Procedure (PP). The mean and standard deviation were derived for Duration of Anaesthesia (DA). Results Several advantages were observed, such as localized soft tissue anaesthesia, decreased PI (SD=0.83) and minimal PP (SD=0.94). The DA had a mean value of 24.06 minutes (SD=4.62). Conclusion This study is one of few in which intraligamentary injection has been used for extraction of mandibular molars. It was also used successfully in patients with an exaggerated gag reflex and in patients suffering from trismus due to oral submucous fibrosis. The intraligamentary injection technique can thus be used effectively to anaesthetize mandibular molars as a primary technique for extraction of mandibular posterior teeth. PMID:28274058
Assessment of statistical methods used in library-based approaches to microbial source tracking.
Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D
2003-12-01
Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.
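A hedged sketch of nearest-neighbour source matching with a distance threshold for excluding poorly matched isolates is given below; the fingerprint library, source labels, and threshold value are synthetic stand-ins rather than the study's libraries.

```python
# Nearest-neighbour source classification with a threshold for uncertain isolates.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
library = rng.integers(0, 2, size=(400, 30)).astype(bool)   # 400 library isolates x 30 band presence/absence
sources = rng.choice(["human", "gull", "cow", "dog"], size=400)
unknowns = rng.integers(0, 2, size=(20, 30)).astype(bool)   # environmental isolates to classify

nn = NearestNeighbors(n_neighbors=1, metric="jaccard").fit(library)
dist, idx = nn.kneighbors(unknowns)
threshold = 0.4                                              # isolates farther than this stay unclassified
labels = np.where(dist[:, 0] <= threshold, sources[idx[:, 0]], "unclassified")
print(labels)
```

Raising the threshold admits more isolates but, as the study found, may not improve the rate of correct classification.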
Employment and Earnings. Volume 35, Number 3, March 1988.
ERIC Educational Resources Information Center
Employment and Earnings, 1988
1988-01-01
This document presents the following monthly statistical data for the population of United States: (1) employment status; (2) characteristics of the unemployed; (3) characteristics of the employed and their job categories; (4) seasonally adjusted employment and unemployment; (5) national employment; (6) employment in states and areas; (7) national…
Gis-Based Spatial Statistical Analysis of College Graduates Employment
NASA Astrophysics Data System (ADS)
Tang, R.
2012-07-01
It is urgently necessary to be aware of the distribution and employment status of college graduates for proper allocation of human resources and overall arrangement of strategic industry. This study provides empirical evidence regarding the use of geocoding and spatial analysis in the distribution and employment status of college graduates, based on data from the 2004-2008 Wuhan Municipal Human Resources and Social Security Bureau, China. The spatio-temporal distribution of employment units was analyzed with geocoding using ArcGIS software, and the stepwise multiple linear regression method via SPSS software was used to predict employment and to identify spatially associated enterprise and professional demand in the future. The results show that the enterprises in the Wuhan East Lake High and New Technology Development Zone increased dramatically from 2004 to 2008 and tended to be distributed southeastward. Furthermore, the models built by statistical analysis suggest that the specialty graduates majored in has an important impact on the number employed and the number of graduates engaging in pillar industries. In conclusion, the combination of GIS and statistical analysis, which helps to simulate the spatial distribution of employment status, is a potential tool for human resource development research.
Three Empirical Strategies for Teaching Statistics
ERIC Educational Resources Information Center
Marson, Stephen M.
2007-01-01
This paper employs a three-step process to analyze three empirically supported strategies for teaching statistics to BSW students. The strategies included: repetition, immediate feedback, and use of original data. First, each strategy is addressed through the literature. Second, the application of employing each of the strategies over the period…
Multidimensional competences of supply chain managers: an empirical study
NASA Astrophysics Data System (ADS)
Shou, Yongyi; Wang, Weijiao
2017-01-01
Supply chain manager competences have attracted increasing attention from both practitioners and scholars in recent years. This paper presents an exploratory study of the dimensionality of supply chain manager competences. Online job advertisements for supply chain managers were collected as secondary data, since these advertisements reflect employers' real job requirements. We adopted the multidimensional scaling (MDS) technique to process and analyse the data. Five dimensions of supply chain manager competences are identified: generic skills, functional skills, supply chain management (SCM) qualifications and leadership, SCM expertise, and industry-specific and senior management skills. Statistical tests indicate that supply chain manager competence saliences vary across industries and regions.
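The MDS step can be illustrated with a small sketch that embeds an invented skill dissimilarity matrix in two dimensions; the skill list and co-occurrence counts are assumptions, not data from the job advertisements.

```python
# Illustrative multidimensional scaling (MDS) of competence co-occurrence in job ads.
import numpy as np
from sklearn.manifold import MDS

skills = ["negotiation", "ERP systems", "forecasting", "leadership", "logistics"]
cooccurrence = np.array([[20,  5,  8, 12,  6],
                         [ 5, 25,  9,  4, 14],
                         [ 8,  9, 22,  7, 10],
                         [12,  4,  7, 18,  5],
                         [ 6, 14, 10,  5, 24]], dtype=float)

dissimilarity = 1.0 - cooccurrence / cooccurrence.max()      # convert counts to dissimilarities
np.fill_diagonal(dissimilarity, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)
for skill, (x, y) in zip(skills, coords):
    print(f"{skill:12s} {x:6.2f} {y:6.2f}")
```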
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model provides an initial classification result, which is then optimized by the MRF technique using the spatial correlation of pixels. Additionally, morphology methods are employed to extract the airport region of interest (ROI), where suspected aircraft target samples are clarified to reduce the false alarm rate and increase detection performance. Finally, the paper presents aircraft target detection results, which have been verified by simulation tests.
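The initial Gamma-mixture classification can be sketched with a simple EM loop using a method-of-moments M-step, as below; the synthetic intensities, two-component setup, and moment-based updates are simplifications for illustration, not the paper's algorithm.

```python
# Rough EM sketch for a two-component Gamma mixture (method-of-moments M-step),
# standing in for an initial classification of SAR intensity values.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(7)
x = np.concatenate([rng.gamma(2.0, 1.0, 2000), rng.gamma(8.0, 1.5, 1000)])  # synthetic intensities

w = np.array([0.5, 0.5]); k = np.array([1.0, 5.0]); theta = np.array([1.0, 2.0])
for _ in range(100):
    # E-step: responsibilities of each component for each pixel intensity
    dens = np.vstack([w[j] * gamma.pdf(x, a=k[j], scale=theta[j]) for j in range(2)])
    resp = dens / dens.sum(axis=0)
    # M-step: weights, then moment-matched shape/scale from weighted mean and variance
    w = resp.mean(axis=1)
    mean = (resp * x).sum(axis=1) / resp.sum(axis=1)
    var = (resp * (x - mean[:, None])**2).sum(axis=1) / resp.sum(axis=1)
    k, theta = mean**2 / var, var / mean

print("weights:", w.round(2), "shapes:", k.round(2), "scales:", theta.round(2))
```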
NASA Technical Reports Server (NTRS)
Dixon, C. M.
1981-01-01
Land cover information derived from LANDSAT is being utilized by the Piedmont Planning District Commission located in the State of Virginia. Progress to date is reported on a level one land cover classification map being produced with nine categories. The nine categories of classification are defined. The computer compatible tape selection is presented. Two unsupervised classifications were done, with 50 and 70 classes respectively. Twenty-eight spectral classes were developed using the supervised technique, employing actual ground truth training sites. The accuracy of the unsupervised classifications is estimated through comparison with local county statistics and with an actual pixel count of LANDSAT information compared to ground truth.
Direct comparison of optical lattice clocks with an intercontinental baseline of 9000 km.
Hachisu, H; Fujieda, M; Nagano, S; Gotoh, T; Nogami, A; Ido, T; Falke, St; Huntemann, N; Grebing, C; Lipphardt, B; Lisdat, Ch; Piester, D
2014-07-15
We have demonstrated a direct frequency comparison between two ⁸⁷Sr lattice clocks operated in intercontinentally separated laboratories in real time. A two-way satellite time and frequency transfer technique based on the carrier phase was employed for the direct comparison, with a baseline of 9000 km between Japan and Germany. A frequency comparison was achieved for 83,640 s, resulting in a fractional difference of (1.1±1.6)×10⁻¹⁵, where the statistical part is the largest contributor to the uncertainty. This measurement directly confirms the agreement of the two optical frequency standards on an intercontinental scale.
Gas detection by correlation spectroscopy employing a multimode diode laser.
Lou, Xiutao; Somesfalean, Gabriel; Zhang, Zhiguo
2008-05-01
A gas sensor based on the gas-correlation technique has been developed using a multimode diode laser (MDL) in a dual-beam detection scheme. Measurement of CO2 mixed with CO as an interfering gas is successfully demonstrated using a 1570 nm tunable MDL. Despite overlapping absorption spectra and occasional mode hops, the interfering signals can be effectively excluded by a statistical procedure including correlation analysis and outlier identification. The gas concentration is retrieved from several pair-correlated signals by a linear-regression scheme, yielding a reliable and accurate measurement. This demonstrates the utility of unsophisticated MDLs as novel light sources for gas detection applications.
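The correlation, outlier-exclusion, and regression chain can be illustrated generically as follows; the signal model, the simulated mode-hop corruption, and the thresholds are invented and do not reflect the instrument's actual calibration.

```python
# Generic sketch of correlation analysis, outlier identification, and linear-regression retrieval.
import numpy as np

rng = np.random.default_rng(8)
reference = rng.normal(1.0, 0.2, size=(30, 64))              # 30 reference sweeps x 64 samples
sample = 0.8 * reference + 0.02 * rng.normal(size=(30, 64))   # gas absorption scales the reference
sample[5] = rng.normal(0.8, 0.3, size=64)                     # one sweep corrupted (e.g., a mode hop)

corr = np.array([np.corrcoef(r, s)[0, 1] for r, s in zip(reference, sample)])
mad = np.median(np.abs(corr - np.median(corr)))
z = np.abs(corr - np.median(corr)) / (1.4826 * mad)           # robust z-score of each sweep's correlation
keep = z < 3.5                                                # outlier identification

slope, intercept = np.polyfit(reference[keep].ravel(), sample[keep].ravel(), 1)
print("kept sweeps:", keep.sum(), "retrieved scaling (proxy for concentration):", round(slope, 3))
```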
Video methods in the quantification of children's exposures.
Ferguson, Alesia C; Canales, Robert A; Beamer, Paloma; Auyeung, Willa; Key, Maya; Munninghoff, Amy; Lee, Kevin Tse-Wing; Robertson, Alexander; Leckie, James O
2006-05-01
In 1994, Stanford University's Exposure Research Group (ERG) conducted its first pilot study to collect micro-level activity time series (MLATS) data for young children. The pilot study involved videotaping four children of farm workers in the Salinas Valley of California and converting their videotaped activities to valuable text files of contact behavior using video-translation techniques. These MLATS are especially useful for describing intermittent dermal (i.e., second-by-second account of surfaces and objects contacted) and non-dietary ingestion (second-by-second account of objects or hands placed in the mouth) contact behavior. Second-by-second records of children contact behavior are amenable to quantitative and statistical analysis and allow for more accurate model estimates of human exposure and dose to environmental contaminants. Activity patterns data for modeling inhalation exposure (i.e., accounts of microenvironments visited) can also be extracted from the MLATS data. Since the pilot study, ERG has collected an immense MLATS data set for 92 children using more developed and refined videotaping and video-translation methodologies. This paper describes all aspects required for the collection of MLATS including: subject recruitment techniques, videotaping and video-translation processes, and potential data analysis. This paper also describes the quality assurance steps employed for these new MLATS projects, including: training, data management, and the application of interobserver and intraobserver agreement during video translation. The discussion of these issues and ERG's experiences in dealing with them can assist other groups in the conduct of research that employs these more quantitative techniques.
Burns, Kara; Keating, Patrick; Free, Caroline
2016-08-12
Sexually transmitted infections (STIs) pose a serious public health problem globally. The rapid spread of mobile technology creates an opportunity to use innovative methods to reduce the burden of STIs. This systematic review identified recent randomised controlled trials that employed mobile technology to improve sexual health outcomes. The following databases were searched for randomised controlled trials of mobile technology-based sexual health interventions with any outcome measures and all patient populations: MEDLINE, EMBASE, PsycINFO, Global Health, The Cochrane Library (Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, Cochrane Methodology Register, NHS Health Technology Assessment Database), and Web of Science (science and social science citation index) (Jan 1999-July 2014). Interventions designed to increase adherence to HIV medication were not included. Two authors independently extracted data on the following elements: interventions, allocation concealment, allocation sequence, blinding, completeness of follow-up, and measures of effect. Trials were assessed for methodological quality using the Cochrane risk of bias tool. We calculated effect estimates using intention to treat analysis. A total of ten randomised trials were identified with nine separate study groups. No trials had a low risk of bias. The trials targeted: 1) promotion of uptake of sexual health services, 2) reduction of risky sexual behaviours and 3) reduction of recall bias in reporting sexual activity. Interventions employed up to five behaviour change techniques. Meta-analysis was not possible due to heterogeneity in trial assessment and reporting. Two trials reported statistically significant improvements in the uptake of sexual health services using SMS reminders compared to controls. One trial increased knowledge. One trial reported promising results in increasing condom use but no trial reported statistically significant increases in condom use. Finally, one trial showed that collection of sexual health information using mobile technology was acceptable. The findings suggest that interventions delivered by SMS can increase uptake of sexual health services and STI testing. High-quality trials of interventions using standardised objective measures and employing a wider range of behavioural change techniques are needed to assess whether interventions delivered by mobile phone can alter safer sex behaviours carried out between couples and reduce STIs.
The impact of the 2007-2009 recession on workers' health coverage.
Fronstin, Paul
2011-04-01
IMPACT OF THE RECESSION: The 2007-2009 recession has taken its toll on the percentage of the population with employment-based health coverage. While, since 2000, there has been a slow erosion in the percentage of individuals under age 65 with employment-based health coverage, 2009 was the first year in which the percentage fell below 60 percent, and marked the largest one-year decline in coverage. FEWER WORKERS WITH COVERAGE: The percentage of workers with coverage through their own job fell from 53.2 percent in 2008 to 52 percent in 2009, a 2.4 percent decline in the likelihood that a worker has coverage through his or her own job. The percentage of workers with coverage as a dependent fell from 17 percent in 2008 to 16.3 percent in 2009, a 4.5 percent drop in the likelihood that a worker has coverage as a dependent. These declines occurred as the unemployment rate increased from an average of 5.8 percent in 2008 to 9.3 percent in 2009 (and reached a high of 10.1 percent during 2009). FIRM SIZE/INDUSTRY: The decline in the percentage of workers with coverage from their own job affected workers in private-sector firms of all sizes. Among public-sector workers, the decline from 73.4 percent to 73 percent was not statistically significant. Workers in all private-sector industries experienced a statistically significant decline in coverage between 2008 and 2009. HOURS WORKED: Full-time workers experienced a decline in coverage that was statistically significant while part-time workers did not. Among full-time workers, those employed full year experienced a statistically significant decline in coverage from their own job. Those employed full time but for only part of the year did not experience a statistically significant change in coverage. Among part-time workers, those employed full year experienced a statistically significant increase in the likelihood of having coverage in their own name, as did part-time workers employed for only part of the year. ANNUAL EARNINGS: The decline in the percentage of workers with coverage through their own job was limited to workers with lower annual earnings. Statistically significant declines were not found among any group of workers with annual earnings of at least $40,000. Workers with a high school education or less experienced a statistically significant decline in the likelihood of having coverage. Neither workers with a college degree nor those with a graduate degree experienced a statistically significant decline in coverage through their own job. Workers of all races experienced statistically significant declines in coverage between 2008 and 2009. Both men and women experienced a statistically significant decline in the percentage with health coverage through their own job. IMPACT OF STRUCTURAL CHANGES TO THE WORK FORCE: The movement of workers from the manufacturing industry to the service sector continued between 2008 and 2009. The percentage of workers employed on a full-time basis decreased while the percentage working part time increased. While there was an overall decline in the percentage of full-time workers, that decline was limited to workers employed full year. The percentage of workers employed on a full-time, part-year basis increased between 2008 and 2009. The distribution of workers by annual earnings shifted from middle-income workers to lower-income workers between 2008 and 2009.
Road following for blindBike: an assistive bike navigation system for low vision persons
NASA Astrophysics Data System (ADS)
Grewe, Lynne; Overell, William
2017-05-01
Road Following is a critical component of blindBike, our assistive biking application for the visually impaired. This paper describes the overall blindBike system and its goals, prominently featuring Road Following, the task of directing the user to follow the right side of the road. Unlike the approaches commonly found in self-driving cars, this work does not depend on lane line markings. 2D computer vision techniques are explored to solve the problem of Road Following. Statistical techniques, including the use of Gaussian Mixture Models, are employed. blindBike is developed as an Android application running on a smartphone. Other sensors, including the gyroscope and GPS, are utilized. Both urban and suburban scenarios are tested and results are given. The successes and challenges faced by blindBike's Road Following module are presented along with future avenues of work.
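A minimal sketch of the Gaussian-mixture idea for separating road from non-road pixels is given below; the synthetic pixel colors and the tighter-cluster heuristic are assumptions, not blindBike's actual features or thresholds.

```python
# Two-component Gaussian mixture separating road-like from non-road pixel colors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
road = rng.normal(loc=[110, 110, 115], scale=8, size=(3000, 3))    # grey, asphalt-like RGB pixels
offroad = rng.normal(loc=[80, 140, 60], scale=20, size=(3000, 3))  # greener surroundings
pixels = np.vstack([road, offroad])

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)
road_component = np.linalg.det(gmm.covariances_).argmin()          # heuristic: tighter cluster = road
print("fraction of pixels labelled road:", (labels == road_component).mean())
```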
Harikrishnan, N; Ravisankar, R; Chandrasekaran, A; Suresh Gandhi, M; Kanagasabapathy, K V; Prasad, M V R; Satapathy, K K
2017-08-15
The aim of this study was to determine the concentration of heavy metals in the sediments of the Periyakalapet to Parangipettai coast, east coast of Tamil Nadu, by using the energy-dispersive X-ray fluorescence (EDXRF) technique. The average heavy metal concentrations in the sediment samples were found in the order Al>Fe>Ca>Ti>K>Mg>Mn>Ba>V>Cr>Zn>La>Ni>Pb>Co>Cd>Cu. The average heavy metal concentrations were below the world crustal average. The degree of contamination by heavy metals was evaluated using pollution indices. The results of pollution indices revealed that titanium (Ti) and cadmium (Cd) were significantly enriched in sediments. Pearson correlation analysis was performed among heavy metal concentrations to determine the relationships between them. A multivariate statistical technique was employed to identify the heavy metal pollution sources. Copyright © 2017 Elsevier Ltd. All rights reserved.
Detecting most influencing courses on students grades using block PCA
NASA Astrophysics Data System (ADS)
Othman, Osama H.; Gebril, Rami Salah
2014-12-01
One of the modern solutions adopted for dealing with the problem of a large number of variables in statistical analyses is Block Principal Component Analysis (Block PCA). This modified technique can be used to reduce the vertical dimension (variables) of the data matrix Xn×p by selecting a smaller number of variables (say m) containing most of the statistical information. These selected variables can then be employed in further investigations and analyses. Block PCA is a multistage adaptation of the original PCA. It involves the application of Cluster Analysis (CA) and variable selection through the scores of sub principal components (PCs). The application of Block PCA in this paper is a modified version of the original work of Liu et al (2002). The main objective was to apply PCA to each group of variables (established using cluster analysis) instead of to the whole set of variables, which has been shown to be unreliable. In this work, Block PCA is used to reduce the size of a large data matrix ((n = 41) × (p = 251)) consisting of the Grade Point Averages (GPA) of students in 251 courses (variables) in the Faculty of Science at Benghazi University. In other words, we construct a smaller analytical data matrix of student GPAs with fewer variables that contains most of the variation (statistical information) in the original database. By applying Block PCA, 12 courses were found to 'absorb' most of the variation or influence in the original data matrix, and are hence worth keeping for future statistical exploration and analysis. In addition, the course Independent Study (Math.) was found to be the most influential course on students' GPA among the 12 selected courses.
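A simplified Block PCA sketch follows, assuming hierarchical clustering of the course variables, a within-block PCA, and selection of the highest-loading course per block; the synthetic grade matrix and the number of blocks are illustrative choices, not the thesis data.

```python
# Simplified Block PCA: cluster variables, run PCA within each block, keep the top-loading variable.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
X = rng.normal(size=(41, 60))                         # 41 students x 60 course grades (stand-in for 251)

corr_dist = 1 - np.abs(np.corrcoef(X, rowvar=False))  # variable dissimilarity from correlations
Z = linkage(corr_dist[np.triu_indices(60, 1)], method="average")
blocks = fcluster(Z, t=6, criterion="maxclust")       # six variable blocks (illustrative choice)

selected = []
for b in np.unique(blocks):
    cols = np.where(blocks == b)[0]
    pca = PCA(n_components=1).fit(X[:, cols])
    selected.append(cols[np.abs(pca.components_[0]).argmax()])  # course loading most on the block's PC1
print("selected course indices:", sorted(selected))
```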
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
Statistical methods in personality assessment research.
Schinka, J A; LaLone, L; Broeckel, J A
1997-06-01
Emerging models of personality structure and advances in the measurement of personality and psychopathology suggest that research in personality and personality assessment has entered a stage of advanced development. In this article we examine whether researchers in these areas have taken advantage of new and evolving statistical procedures. We conducted a review of articles published in the Journal of Personality Assessment during the past 5 years. Of the 449 articles that included some form of data analysis, 12.7% used only descriptive statistics, most employed only univariate statistics, and fewer than 10% used multivariate methods of data analysis. We discuss the cost of using limited statistical methods, the possible reasons for the apparent reluctance to employ advanced statistical procedures, and potential solutions to this technical shortcoming.
Strong Sporadic E Occurrence Detected by Ground-Based GNSS
NASA Astrophysics Data System (ADS)
Sun, Wenjie; Ning, Baiqi; Yue, Xinan; Li, Guozhu; Hu, Lianhuan; Chang, Shoumin; Lan, Jiaping; Zhu, Zhengping; Zhao, Biqiang; Lin, Jian
2018-04-01
The ionospheric sporadic E (Es) layer has a significant impact on radio wave propagation. The traditional techniques employed for Es layer observation, for example ionosondes, are not dense enough to resolve the morphology and dynamics of the Es layer in spatial distribution. The ground-based Global Navigation Satellite Systems (GNSS) technique is expected to shed light on the understanding of regional strong Es occurrence, owing to the facts that the critical frequency (foEs) of a strong Es structure is usually high enough to cause pulse-like disturbances in GNSS total electron content (TEC), and that a large number of GNSS receivers have been deployed all over the world. Based on the Chinese ground-based GNSS networks, including the Crustal Movement Observation Network of China and the Beidou Ionospheric Observation Network, a large-scale strong Es event was observed at the middle latitudes of China. The strong Es, shown as a band-like structure in the southwest-northeast direction, extended more than 1,000 km. A statistical comparison of Es occurrences identified from simultaneous ionosonde and GNSS TEC observations over middle-latitude China showed that GNSS TEC can be employed to observe strong Es occurrence with a threshold foEs value of 14 MHz.
ERIC Educational Resources Information Center
Altonji, Joseph G.; Pierret, Charles R.
A statistical analysis was performed to test the hypothesis that, if profit-maximizing firms have limited information about the general productivity of new workers, they may choose to use easily observable characteristics such as years of education to discriminate statistically among workers. Information about employer learning was obtained by…
Summary Statistics of CPB-Qualified Public Radio Stations, Fiscal Year 1972.
ERIC Educational Resources Information Center
Lee, S. Young; Pedone, Ronald J.
Statistics in the areas of finance, employment, and broadcast and production for CPB-qualified (Corporation for Public Broadcasting) public radio stations are given in this report. Tables in the area of finance are presented specifying total funds, income, direct operating costs, and capital expenditure. Employment is divided into all employment…
A Study of Arizona Labor Market Demand Data for Vocational Education Planning.
ERIC Educational Resources Information Center
Gould, Albert W.; Manning, Doris E.
A study examined the projection methodology used by the Bureau of Labor Statistics and the related projections made by the state employment security agencies. Findings from a literature review indicated that the system has steadily improved since 1979. Projections made from the Occupational Employment Statistics Surveys were remarkably accurate.…
Conference Report on Youth Unemployment: Its Measurements and Meaning.
ERIC Educational Resources Information Center
Employment and Training Administration (DOL), Washington, DC.
Thirteen papers presented at a conference on employment statistics and youth are contained in this report. Reviewed are the problems of gathering, interpreting, and applying employment and unemployment data relating to youth. The titles of the papers are as follows: "Counting Youth: A Comparison of Youth Labor Force Statistics in the Current…
Livieratos, L; Stegger, L; Bloomfield, P M; Schafers, K; Bailey, D L; Camici, P G
2005-07-21
High-resolution cardiac PET imaging with emphasis on quantification would benefit from eliminating the problem of respiratory movement during data acquisition. Respiratory gating on the basis of list-mode data has been employed previously as one approach to reduce motion effects. However, it results in poor count statistics with degradation of image quality. This work reports on the implementation of a technique to correct for respiratory motion in the area of the heart at no extra cost for count statistics and with the potential to maintain ECG gating, based on rigid-body transformations on list-mode data event-by-event. A motion-corrected data set is obtained by assigning, after pre-correction for detector efficiency and photon attenuation, individual lines-of-response to new detector pairs with consideration of respiratory motion. Parameters of respiratory motion are obtained from a series of gated image sets by means of image registration. Respiration is recorded simultaneously with the list-mode data using an inductive respiration monitor with an elasticized belt at chest level. The accuracy of the technique was assessed with point-source data showing a good correlation between measured and true transformations. The technique was applied on phantom data with simulated respiratory motion, showing successful recovery of tracer distribution and contrast on the motion-corrected images, and on patient data with C15O and 18FDG. Quantitative assessment of preliminary C15O patient data showed improvement in the recovery coefficient at the centre of the left ventricle.
Path statistics, memory, and coarse-graining of continuous-time random walks on networks
Kion-Crosby, Willow; Morozov, Alexandre V.
2015-01-01
Continuous-time random walks (CTRWs) on discrete state spaces, ranging from regular lattices to complex networks, are ubiquitous across physics, chemistry, and biology. Models with coarse-grained states (for example, those employed in studies of molecular kinetics) or spatial disorder can give rise to memory and non-exponential distributions of waiting times and first-passage statistics. However, existing methods for analyzing CTRWs on complex energy landscapes do not address these effects. Here we use statistical mechanics of the nonequilibrium path ensemble to characterize first-passage CTRWs on networks with arbitrary connectivity, energy landscape, and waiting time distributions. Our approach can be applied to calculating higher moments (beyond the mean) of path length, time, and action, as well as statistics of any conservative or non-conservative force along a path. For homogeneous networks, we derive exact relations between length and time moments, quantifying the validity of approximating a continuous-time process with its discrete-time projection. For more general models, we obtain recursion relations, reminiscent of transfer matrix and exact enumeration techniques, to efficiently calculate path statistics numerically. We have implemented our algorithm in PathMAN (Path Matrix Algorithm for Networks), a Python script that users can apply to their model of choice. We demonstrate the algorithm on a few representative examples which underscore the importance of non-exponential distributions, memory, and coarse-graining in CTRWs. PMID:26646868
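A toy simulation of first-passage statistics for a CTRW on a small network is sketched below; it uses exponential waiting times and direct sampling rather than the paper's transfer-matrix-style recursions, and the four-node ring network is invented for the example.

```python
# Toy CTRW on a small network: estimate first-passage time statistics by direct simulation.
import numpy as np

rng = np.random.default_rng(11)
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # adjacency of a 4-node ring

def first_passage_time(start=0, target=2, mean_wait=1.0):
    node, t = start, 0.0
    while node != target:
        t += rng.exponential(mean_wait)          # exponential waiting time at the current node
        node = rng.choice(neighbours[node])      # jump to a uniformly chosen neighbour
    return t

samples = np.array([first_passage_time() for _ in range(20000)])
print("mean FPT:", samples.mean().round(2), "variance:", samples.var().round(2))
```

Swapping the exponential draw for a heavy-tailed waiting-time distribution reproduces the non-exponential, memory-like effects discussed above.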
Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti
2016-07-01
A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J.; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T.; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti
2016-01-01
Motivation: A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. Results: We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Availability and implementation: Code is available at https://github.com/aalto-ics-kepaco Contacts: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153689
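The core computation, canonical correlations obtained from a shrinkage estimate of the joint covariance, can be sketched as follows; this illustrates the underlying idea only, not the metaCCA package, and the genotype and phenotype arrays are synthetic.

```python
# Canonical correlation analysis computed from a (shrunk) covariance matrix.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(12)
G = rng.normal(size=(500, 3))                      # synthetic genotypes (3 variants)
P = 0.4 * G[:, [0]] + rng.normal(size=(500, 2))    # two phenotypes correlated with the first variant

S = LedoitWolf().fit(np.hstack([G, P])).covariance_   # shrinkage estimate of the joint covariance
Sgg, Sgp, Spp = S[:3, :3], S[:3, 3:], S[3:, 3:]

def inv_sqrt(M):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals**-0.5) @ vecs.T

# canonical correlations are the singular values of Sgg^{-1/2} Sgp Spp^{-1/2}
rho = np.linalg.svd(inv_sqrt(Sgg) @ Sgp @ inv_sqrt(Spp), compute_uv=False)
print("canonical correlations:", rho.round(3))
```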
Vexler, Albert; Yu, Jihnhee
2018-04-13
A common statistical doctrine supported by many introductory courses and textbooks is that t-test type procedures based on normally distributed data points are anticipated to provide a standard in decision-making. In order to motivate scholars to examine this convention, we introduce a simple approach based on graphical tools of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. In this context, we propose employing a p-values-based method, taking into account the stochastic nature of p-values. We focus on the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we extend the EPV concept to be considered in terms of the ROC curve technique. This provides expressive evaluations and visualizations of a wide spectrum of testing mechanisms' properties. We show that the conventional power characterization of tests is a partial aspect of the presented EPV/ROC technique. We desire that this explanation of the EPV/ROC approach convinces researchers of the usefulness of the EPV/ROC approach for depicting different characteristics of decision-making procedures, in light of the growing interest regarding correct p-values-based applications.
Face shape differs in phylogenetically related populations.
Hopman, Saskia M J; Merks, Johannes H M; Suttie, Michael; Hennekam, Raoul C M; Hammond, Peter
2014-11-01
3D analysis of facial morphology has delineated facial phenotypes in many medical conditions and detected fine grained differences between typical and atypical patients to inform genotype-phenotype studies. Next-generation sequencing techniques have enabled extremely detailed genotype-phenotype correlative analysis. Such comparisons typically employ control groups matched for age, sex and ethnicity and the distinction between ethnic categories in genotype-phenotype studies has been widely debated. The phylogenetic tree based on genetic polymorphism studies divides the world population into nine subpopulations. Here we show statistically significant face shape differences between two European Caucasian populations of close phylogenetic and geographic proximity from the UK and The Netherlands. The average face shape differences between the Dutch and UK cohorts were visualised in dynamic morphs and signature heat maps, and quantified for their statistical significance using both conventional anthropometry and state of the art dense surface modelling techniques. Our results demonstrate significant differences between Dutch and UK face shape. Other studies have shown that genetic variants influence normal facial variation. Thus, face shape difference between populations could reflect underlying genetic difference. This should be taken into account in genotype-phenotype studies and we recommend that in those studies reference groups be established in the same population as the individuals who form the subject of the study.
Barrows, Wesley; Dingreville, Rémi; Spearot, Douglas
2015-10-19
A statistical approach combined with molecular dynamics simulations is used to study the influence of hydrogen on intergranular decohesion. This methodology is applied to a Ni Σ3(112)[11¯0] symmetric tilt grain boundary. Hydrogenated grain boundaries with different H concentrations are constructed using an energy minimization technique with initial H atom positions guided by Monte Carlo simulation results. Decohesion behavior is assessed through extraction of a traction–separation relationship during steady-state crack propagation in a statistically meaningful approach, building upon prior work employing atomistic cohesive zone volume elements (CZVEs). A sensitivity analysis is performed on the numerical approach used to extract the traction–separation relationships, clarifying the role of CZVE size, threshold parameters necessary to differentiate elastic and decohesion responses, and the numerical averaging technique. Results show that increasing H coverage at the Ni Σ3(112)[11¯0] grain boundary asymmetrically influences the crack tip velocity during propagation, leads to a general decrease in the work of separation required for crack propagation, and provides a reduction in the peak stress in the extracted traction–separation relationship. Furthermore, the present framework offers a meaningful vehicle to pass atomistically derived interfacial behavior to higher length scale formulations for intergranular fracture.
Random field assessment of nanoscopic inhomogeneity of bone
Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu
2010-01-01
Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. PMID:20817128
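For orientation, the following minimal sketch (not the authors' code) generates one realization of a Gaussian random field with an exponential covariance and a specified correlation length on a small grid; the grid size, correlation length, variance and the 20 GPa mean value are hypothetical placeholders.

    # Illustrative sketch: simulate a 2-D Gaussian random field with an exponential
    # covariance C(d) = sigma^2 * exp(-d / L), where L is the correlation length.
    # Grid size, L, sigma and the mean modulus are assumed values, not study data.
    import numpy as np

    rng = np.random.default_rng(1)
    nx, ny = 30, 30          # grid dimensions
    L, sigma = 5.0, 1.0      # correlation length and standard deviation

    # Coordinates of every grid node and pairwise distances between nodes.
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    # Exponential covariance matrix and its Cholesky factor (small diagonal
    # jitter added for numerical stability).
    cov = sigma**2 * np.exp(-d / L) + 1e-10 * np.eye(len(pts))
    chol = np.linalg.cholesky(cov)

    # One realization of the field, e.g. elastic modulus fluctuations around an
    # assumed mean of 20 GPa.
    field = 20.0 + (chol @ rng.standard_normal(len(pts))).reshape(nx, ny)
    print(field.shape, round(field.mean(), 2), round(field.std(), 2))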
A Statistical Comparison of PSC Model Simulations and POAM Observations
NASA Technical Reports Server (NTRS)
Strawa, A. W.; Drdla, K.; Fromm, M.; Bokarius, K.; Gore, Warren J. (Technical Monitor)
2002-01-01
A better knowledge of PSC composition and formation mechanisms is important to better understand and predict stratospheric ozone depletion. Several past studies have attempted to compare modeling results with satellite observations. These comparisons have concentrated on case studies. In this paper we adopt a statistical approach. POAM PSC observations from several Arctic winters are categorized into Type Ia and Ib PSCs using a technique based on Strawa et al. The discrimination technique has been modified to employ the wavelength dependence of the extinction signal at all wavelengths rather than only at 603 and 1018 nm. Winter-long simulations for the 1999-2000 Arctic winter have been made using the IMPACT model. These simulations have been constrained by aircraft observations made during the SOLVE/THESEO 2000 campaign. A complete set of winter-long simulations was run for several different microphysical and PSC formation scenarios. The simulations give us perfect knowledge of PSC type (Ia, Ib, or II), composition, especially condensed-phase HNO3, which is important for denitrification, and condensed-phase H2O. Comparisons are made between the simulation and observation of PSC extinction at 1018 nm versus wavelength dependence, winter-long percentages of Ia and Ib occurrence, and temporal and altitude trends of the PSCs. These comparisons allow us to comment on how realistic some modeling scenarios are.
Computer-Assisted Instruction in Statistics. Technical Report.
ERIC Educational Resources Information Center
Cooley, William W.
A paper given at a conference on statistical computation discussed teaching statistics with computers. It concluded that computer-assisted instruction is most appropriately employed in the numerical demonstration of statistical concepts, and for statistical laboratory instruction. The student thus learns simultaneously about the use of computers…
Public health workforce employment in US public and private sectors.
Kennedy, Virginia C
2009-01-01
The purpose of this study was to describe the number and distribution of 26 administrative, professional, and technical public health occupations across the array of US governmental and nongovernmental industries. This study used data from the Occupational Employment Statistics program of the US Bureau of Labor Statistics. For each occupation of interest, the investigator determined the number of persons employed in 2006 in five industries and industry groups: government, nonprofit agencies, education, healthcare, and all other industries. Industry-specific employment profiles varied from one occupation to another. However, about three-fourths of all those engaged in these occupations worked in the private healthcare industry. Relatively few worked in nonprofit or educational settings, and less than 10 percent were employed in government agencies. The industry-specific distribution of public health personnel, particularly the proportion employed in the public sector, merits close monitoring. This study also highlights the need for a better understanding of the work performed by public health occupations in nongovernmental work settings. Finally, the Occupational Employment Statistics program has the potential to serve as an ongoing, national data collection system for public health workforce information. If this potential was realized, future workforce enumerations would not require primary data collection but rather could be accomplished using secondary data.
Jittangprasert, Piyada; Wilairat, Prapin; Pootrakul, Pensri
2004-12-01
This paper describes a comparison of two analytical techniques, one employing bathophenanthrolinedisulfonate (BPT), one of the most commonly used reagents for Fe(II) determination, as the chromogen, and the other electrothermal atomic absorption spectroscopy (ETAAS), for the quantification of non-transferrin bound iron (NTBI) in sera from thalassemic patients. Nitrilotriacetic acid (NTA) was employed as the ligand for binding iron from low molecular weight iron complexes present in the serum but without removing iron from the transferrin protein. After ultrafiltration the Fe(III)-NTA complex was then quantified by both methods. A kinetic study of the rate of Fe(II)-BPT complex formation for various excess amounts of NTA ligand was also carried out. The kinetic data show that a minimum time duration (> 60 minutes) is necessary for complete complex formation when a large excess of NTA is used. Calibration curves given by the colorimetric and ETAAS methods were linear over the range of 0.15-20 microM iron(III). The colorimetric and ETAAS methods exhibited detection limits (3σ) of 0.13 and 0.14 microM, respectively. The NTBI concentrations from 55 thalassemic serum samples measured employing BPT as chromogen were statistically compared with the results determined by ETAAS. No significant disagreement at the 95% confidence level was observed. It is, therefore, possible to select either of these two techniques for determination of NTBI in serum samples of thalassemic patients. However, the colorimetric procedure requires a longer analysis time because of the slow rate of exchange of the NTA ligand with BPT, leading to a slow rate of formation of the colored complex.
Cytocompatibility, cytotoxicity and genotoxicity analysis of dental implants
NASA Astrophysics Data System (ADS)
Reigosa, M.; Labarta, V.; Molinari, G.; Bernales, D.
2007-11-01
Several types of materials are frequently used for dental prostheses in dental medicine. Different treatments with titanium are the most used. The aim of the present study was to analyze, by means of cytotoxicity and cytocompatibility techniques, the capacity of dental implants to integrate into bone tissue. Cultures of the UMR 106 cell line derived from an osteosarcoma were used for bioassays, mainly because they show many of the properties of osteoblasts. Dental implant samples provided by the B&W company were compared with others of recognized trademarks. The first ones contain ASTM titanium (8348 GR2) with acid etching. Cytotoxicity was analyzed by means of lysosome activity, using the neutral red technique, and alkaline phosphatase enzyme activity. Cell viability was determined by means of the acridine orange-ethidium bromide technique. One-way ANOVA and Bonferroni and Duncan post-ANOVA tests were used for the statistical analysis. The assays did not show significant differences among the dental implants analyzed. Our findings show that the dental prostheses studied present high biocompatibility, quantified by the bioassays performed. The techniques employed revealed that they can be a useful tool for the analysis of other materials for dental medicine use.
Effect of the Smoke-Free Illinois Act on casino admissions and revenue.
Tauras, John A; Chaloupka, Frank J; Moor, Gregg; Henderson, Patricia Nez; Leischow, Scott J
2018-01-19
As part of the Smoke-Free Illinois Act, smoking on the gambling floors of all commercial casinos in Illinois became prohibited. This study examined the effects of the Smoke-Free Illinois Act on per-capita casino admissions and real per-capita adjusted gross receipts using 18 years of data (10 years before and 8 years after the Illinois law went into effect). We employed a difference-in-difference regression technique using monthly data for the states of Illinois, Indiana, Iowa and Missouri, controlling for numerous determinants expected to affect casino admissions and revenue. The Smoke-Free Illinois Act was found not to be a statistically significant determinant of per-capita casino admissions and of real per-capita adjusted gross receipts in all the models we estimated. The estimates from this study clearly indicated that the Illinois law that banned smoking in casinos has had no significant negative economic consequences for casinos in terms of per-capita admissions or revenues. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
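A minimal difference-in-difference sketch, with synthetic monthly data and hypothetical variable names (admissions, treated, post), is shown below; the study's actual controls, fixed effects and clustered standard errors are not reproduced.

    # Illustrative difference-in-difference sketch on synthetic monthly data.
    # The simulated effect size and variable names are assumptions for the example.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    months = pd.period_range("1998-01", periods=216, freq="M")  # 18 years of months
    rows = []
    for state in ["IL", "IN", "IA", "MO"]:
        treated = 1 if state == "IL" else 0                      # Illinois has the ban
        for m in months:
            post = 1 if m >= pd.Period("2008-01", freq="M") else 0
            y = 2.0 - 0.1 * treated * post + rng.normal(0, 0.2)  # per-capita admissions
            rows.append({"state": state, "month": str(m), "treated": treated,
                         "post": post, "admissions": y})
    df = pd.DataFrame(rows)

    # The coefficient on treated:post is the difference-in-difference estimate of
    # the ban's effect; a real analysis would add covariates, time fixed effects
    # and clustered standard errors.
    fit = smf.ols("admissions ~ treated + post + treated:post", data=df).fit()
    print(fit.params["treated:post"], fit.pvalues["treated:post"])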
Comparison of Conceptual and Neural Network Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Vidyarthi, V. K.; Jain, A.
2014-12-01
A rainfall-runoff (RR) model is a key component of any water resource application. There are two types of techniques usually employed for RR modeling: physics-based and data-driven techniques. Although the physics-based models have been used for operational purposes for a very long time, they provide only reasonable accuracy in modeling and forecasting. On the other hand, Artificial Neural Networks (ANNs) have been reported to provide superior modeling performance; however, they have not been accepted by practitioners, decision makers and water resources engineers as operational tools. ANNs, one of the data-driven techniques, have become popular for efficient modeling of complex natural systems over the last couple of decades. In this paper, comparative results for conceptual and ANN models in RR modeling are presented. The conceptual models were developed by the use of the rainfall-runoff library (RRL), and a genetic algorithm (GA) was used for calibration of these models. A feed-forward neural network structure trained with the Levenberg-Marquardt (LM) algorithm was adopted to develop all the ANN models included here. Daily rainfall, runoff and various climatic data derived from the Bird Creek basin, Oklahoma, USA, were employed to develop all the models included here. Daily potential evapotranspiration (PET), which was used in conceptual model development, was calculated by the use of the Penman equation. The input variables were selected on the basis of correlation analysis. Performance evaluation statistics such as the average absolute relative error (AARE), Pearson's correlation coefficient (R) and threshold statistics (TS) were used for assessing the performance of all the models developed here. The results obtained in this study show that the ANN models outperform the conventional conceptual models due to their ability to learn the non-linearity and complexity inherent in rainfall-runoff data in a more efficient manner. There is a strong need to carry out such studies to prove the superiority of ANN models over conventional methods in an attempt to make them acceptable to the water resources community responsible for the operation of water resources systems.
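The goodness-of-fit statistics named above are simple to compute; the sketch below shows one plausible set of definitions (the exact threshold-statistic definition used in the study is assumed here), applied to hypothetical observed and simulated flows.

    # Illustrative sketch of the model-evaluation statistics mentioned above:
    # average absolute relative error (AARE), Pearson's R, and a threshold
    # statistic TS_x (percentage of predictions with absolute relative error
    # below x%). The TS definition is an assumption, not quoted from the paper.
    import numpy as np

    def aare(obs, sim):
        return 100.0 * np.mean(np.abs((sim - obs) / obs))

    def pearson_r(obs, sim):
        return np.corrcoef(obs, sim)[0, 1]

    def threshold_stat(obs, sim, x=25.0):
        are = 100.0 * np.abs((sim - obs) / obs)
        return 100.0 * np.mean(are <= x)

    # Hypothetical observed and simulated daily flows (m^3/s).
    obs = np.array([12.0, 30.0, 55.0, 80.0, 41.0, 22.0])
    sim = np.array([10.0, 33.0, 50.0, 90.0, 44.0, 20.0])
    print(aare(obs, sim), pearson_r(obs, sim), threshold_stat(obs, sim, 25.0))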
ERIC Educational Resources Information Center
Loycano, Robert J.
The data presented in these tabulations are based on the 1976 National Science Foundation survey of scientific and engineering personnel employed at universities and colleges. The data are contained in 60 statistical tables organized under the following broad headings: trends; type of institution; field, employment status, control, educational…
ERIC Educational Resources Information Center
National Center for Health Statistics (DHEW/PHS), Hyattsville, MD.
First in a series of statistical reports on personnel providing vision and eye care assistance, the report presents data collected by the Bureau of Census (geographic location, age, sex, education, type and place of employment, training, specialties, activities, and time spent at work) concerning opticians actively engaged in their profession…
NASA Astrophysics Data System (ADS)
Bilionis, I.; Koutsourelakis, P. S.
2012-05-01
The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
NASA Astrophysics Data System (ADS)
Bui-Thanh, T.; Girolami, M.
2014-11-01
We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint technique. Various numerical results up to 1025 parameters are presented to demonstrate the ability of the RMHMC method in exploring the geometric structure of the problem to propose (almost) uncorrelated/independent samples that are far away from each other, and yet the acceptance rate is almost unity. The results also suggest that for the PDE models considered the proposed fixed metric RMHMC can attain almost as high a quality performance as the original RMHMC, i.e. generating (almost) uncorrelated/independent samples, while being two orders of magnitude less computationally expensive.
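As a generic illustration of the low-rank step described above (not the authors' implementation), the sketch below builds a randomized low-rank approximation of a symmetric operator from a small number of matrix-vector products, the role played by Hessian-vector products computed with adjoint PDE solves in the paper; the stand-in operator and the chosen rank are assumptions.

    # Generic randomized low-rank approximation sketch for a symmetric operator,
    # using only matrix-vector products (the analogue of Hessian-vector products).
    import numpy as np

    rng = np.random.default_rng(3)
    n, rank = 200, 10

    # Stand-in symmetric positive semidefinite operator with a decaying spectrum.
    A = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))
    H = A @ A.T

    def hessvec(v):
        """Matrix-vector product; in practice this would need two adjoint PDE solves."""
        return H @ v

    # Range finding: apply the operator to a block of random probe vectors.
    k = 20                                    # number of matrix-vector products
    omega = rng.standard_normal((n, k))
    Y = np.column_stack([hessvec(omega[:, j]) for j in range(k)])
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for the range

    # Project, take a small dense eigendecomposition, and lift back.
    B = Q.T @ np.column_stack([hessvec(Q[:, j]) for j in range(Q.shape[1])])
    evals, evecs = np.linalg.eigh(B)
    U = Q @ evecs
    H_lowrank = (U * evals) @ U.T
    print("relative error:", np.linalg.norm(H - H_lowrank) / np.linalg.norm(H))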
Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.
2016-01-01
Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities wearing an accelerometer on the hip. Besides features based on statistics, distance based features and simple discrete features straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improve results. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
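The sketch below illustrates one plausible version of the statistics-based feature representation compared in such studies: a raw accelerometer stream is split into fixed-length windows and simple summary statistics are computed per window. Window length, sampling rate and the particular feature list are assumptions, not the paper's exact feature sets.

    # Illustrative window-based statistical feature extraction for accelerometer data.
    import numpy as np

    def window_features(signal, win=100):
        """Split a 1-D signal into non-overlapping windows and compute statistics."""
        n_win = len(signal) // win
        feats = []
        for i in range(n_win):
            w = signal[i * win:(i + 1) * win]
            feats.append([w.mean(), w.std(), w.min(), w.max(),
                          np.percentile(w, 25), np.percentile(w, 75),
                          np.mean(np.abs(np.diff(w)))])   # mean absolute delta
        return np.array(feats)

    rng = np.random.default_rng(4)
    acc = rng.normal(0, 1, 1000)            # hypothetical single-axis accelerometer counts
    X = window_features(acc)
    print(X.shape)   # (windows, features) -> input to any classifier or regressor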
NASA Astrophysics Data System (ADS)
Suhir, E.
2014-05-01
The well-known and widely used experimental reliability "passport" of a mass-manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution for the statistical failure rate was used, for the sake of a relatively simple illustration. The developed methodology can be used in reliability physics evaluations, when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability physics techniques, to model the operational reliability of electronic and photonic products.
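The bookkeeping described above can be sketched in a few lines: subtract an assumed Rayleigh hazard rate, h(t) = t / sigma^2, from an empirical bathtub failure rate to expose the aging-related part. The bathtub values and the Rayleigh scale parameter below are made up for illustration.

    # Illustrative sketch: deduct an assumed statistics-related (Rayleigh) failure
    # rate from a hypothetical bathtub-curve failure rate to estimate the
    # reliability-physics (aging) contribution.
    import numpy as np

    t = np.linspace(0.5, 10.0, 20)                    # time after burn-in, arbitrary units
    bathtub = 0.02 + 0.004 * t**2                     # hypothetical observed failure rate

    sigma = 12.0                                      # assumed Rayleigh scale parameter
    statistical_rate = t / sigma**2                   # Rayleigh hazard rate h(t) = t / sigma^2

    physics_rate = bathtub - statistical_rate         # inferred aging-related rate
    physics_rate = np.clip(physics_rate, 0.0, None)   # guard against negative values
    print(np.round(physics_rate, 4))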
Using data mining techniques to predict the severity of bicycle crashes.
Prati, Gabriele; Pietrantoni, Luca; Fraboni, Federico
2017-04-01
To investigate the factors predicting the severity of bicycle crashes in Italy, we used an observational study of official statistics. We applied two of the most widely used data mining techniques, the CHAID decision tree technique and Bayesian network analysis. We used data provided by the Italian National Institute of Statistics on road crashes that occurred on the Italian road network during the period from 2011 to 2013. We extracted 49,621 road accidents in which at least one cyclist was injured or killed from the original database that comprised a total of 575,093 road accidents. The CHAID decision tree technique was employed to establish the relationship between severity of bicycle crashes and factors related to crash characteristics (type of collision and opponent vehicle), infrastructure characteristics (type of carriageway, road type, road signage, pavement type, and type of road segment), cyclists (gender and age), and environmental factors (time of the day, day of the week, month, pavement condition, and weather). CHAID analysis revealed that the most important predictors were, in decreasing order of importance, road type (0.30), crash type (0.24), age of cyclist (0.19), road signage (0.08), gender of cyclist (0.07), type of opponent vehicle (0.05), month (0.04), and type of road segment (0.02). These eight most important predictors of the severity of bicycle crashes were included as predictors of the target (i.e., severity of bicycle crashes) in Bayesian network analysis. Bayesian network analysis identified crash type (0.31), road type (0.19), and type of opponent vehicle (0.18) as the most important predictors of severity of bicycle crashes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Yu; Dong, Fengqing; Wang, Yonghong
2016-09-01
With determined components and experimental reducibility, the chemically defined medium (CDM) and the minimal chemically defined medium (MCDM) are used in many metabolism and regulation studies. This research aimed to develop the chemically defined medium supporting high cell density growth of Bacillus coagulans, which is a promising producer of lactic acid and other bio-chemicals. In this study, a systematic methodology combining the experimental technique with flux balance analysis (FBA) was proposed to design and simplify a CDM. The single omission technique and single addition technique were employed to determine the essential and stimulatory compounds, before the optimization of their concentrations by the statistical method. In addition, to improve the growth rationally, in silico omission and addition were performed by FBA based on the construction of a medium-size metabolic model of B. coagulans 36D1. Thus, CDMs were developed to obtain considerable biomass production of at least five B. coagulans strains, in which two model strains B. coagulans 36D1 and ATCC 7050 were involved.
Meteorological Development Laboratory Student Career Experience Program
NASA Astrophysics Data System (ADS)
McCalla, C., Sr.
2007-12-01
The National Oceanic and Atmospheric Administration's (NOAA) National Weather Service (NWS) provides weather, hydrologic, and climate forecasts and warnings for the protection of life and property and the enhancement of the national economy. The NWS's Meteorological Development Laboratory (MDL) supports this mission by developing meteorological prediction methods. Given this mission, NOAA, NWS, and MDL all have a need to continually recruit talented scientists. One avenue for recruiting such talented scientists is the Student Career Experience Program (SCEP). Through SCEP, MDL offers undergraduate and graduate students majoring in meteorology, computer science, mathematics, oceanography, physics, and statistics the opportunity to alternate full-time paid employment with periods of full-time study. Using SCEP as a recruiting vehicle, MDL has employed students who possess some of the very latest technical skills and knowledge needed to make meaningful contributions to projects within the lab. MDL has recently expanded its use of SCEP and has increased the number of students (sometimes called co-ops) in its program. As a co-op, a student can expect to develop and implement computer-based scientific techniques, participate in the development of statistical algorithms, assist in the analysis of meteorological data, and verify forecasts. This presentation will focus on describing recruitment, projects, and the application process related to MDL's SCEP. In addition, this presentation will also briefly explore the career paths of students who successfully completed the program.
Survey of statistical techniques used in validation studies of air pollution prediction models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornstein, R D; Anderson, S F
1979-03-01
Statistical techniques used by meteorologists to validate predictions made by air pollution models are surveyed. Techniques are divided into the following three groups: graphical, tabular, and summary statistics. Some of the practical problems associated with verification are also discussed. Characteristics desired in any validation program are listed and a suggested combination of techniques that possesses many of these characteristics is presented.
ERIC Educational Resources Information Center
Florida State Dept. of Labor and Employment Security, Tallahassee.
This report analyzes projected changes in population, labor force, and employment by industry and occupation for Florida between 1995 and 2005. More than 50 charts and graphs provide statistics on the following: Florida's population, labor force 1975-2005; employment 1975-2005; industry employment 1995-2005; occupational employment (general);…
Validation of motion correction techniques for liver CT perfusion studies
Chandler, A; Wei, W; Anderson, E F; Herron, D H; Ye, Z; Ng, C S
2012-01-01
Objectives Motion in images potentially compromises the evaluation of temporally acquired CT perfusion (CTp) data; image registration should mitigate this, but first requires validation. Our objective was to compare the relative performance of manual, rigid and non-rigid registration techniques to correct anatomical misalignment in acquired liver CTp data sets. Methods 17 data sets in patients with liver tumours who had undergone a CTp protocol were evaluated. Each data set consisted of a cine acquisition during a breath-hold (Phase 1), followed by six further sets of cine scans (each containing 11 images) acquired during free breathing (Phase 2). Phase 2 images were registered to a reference image from Phase 1 cine using two semi-automated intensity-based registration techniques (rigid and non-rigid) and a manual technique (the only option available in the relevant vendor CTp software). The performance of each technique to align liver anatomy was assessed by four observers, independently and blindly, on two separate occasions, using a semi-quantitative visual validation study (employing a six-point score). The registration techniques were statistically compared using an ordinal probit regression model. Results 306 registrations (2448 observer scores) were evaluated. The three registration techniques were significantly different from each other (p=0.03). On pairwise comparison, the semi-automated techniques were significantly superior to the manual technique, with non-rigid significantly superior to rigid (p<0.0001), which in turn was significantly superior to manual registration (p=0.04). Conclusion Semi-automated registration techniques achieved superior alignment of liver anatomy compared with the manual technique. We hope this will translate into more reliable CTp analyses. PMID:22374283
NASA Astrophysics Data System (ADS)
Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus
2016-04-01
We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
Cumulative rain fade statistics are used by space communications engineers to establish transmitter power and receiver sensitivities for systems operating under various geometries, climates, and radio frequencies. Space-diversity performance criteria are also of interest. This work represents a review, in which are examined the many elements involved in the employment of single nonattenuating frequency radars for arriving at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain-attenuation modeling. Suggestions are made for improving present methods.
Functional-analytical capabilities of GIS technology in the study of water use risks
NASA Astrophysics Data System (ADS)
Nevidimova, O. G.; Yankovich, E. P.; Yankovich, K. S.
2015-02-01
Regional security aspects of economic activities are of great importance for legal regulation in environmental management. This has become a critical issue due to climate change, especially in regions where severe climate conditions have a great impact on almost all types of natural resource use. A detailed analysis of the climate and hydrological situation in Tomsk Oblast with respect to water use risks was carried out. Based on techniques developed by the authors, an informational and analytical database was created using the ArcGIS software platform, which combines statistical (quantitative) and spatial characteristics of natural hazards and socio-economic factors. This system was employed to perform areal zoning according to the degree of water use risk involved.
The effect of sexual abstinence on females' educational attainment.
Sabia, Joseph J; Rees, Daniel I
2009-11-01
A number of studies have shown that teenagers who abstain from sex are more likely to graduate from high school and attend college than their sexually active peers. However, it is unclear whether this association represents a causal relationship or can be explained by unmeasured heterogeneity. We employ a variety of statistical techniques to distinguish between these hypotheses, using data on females from the National Longitudinal Study of Adolescent Health. Our results provide evidence that delaying first intercourse leads to an increased likelihood of graduating from high school. This relationship appears to be strongest among respondents in the bottom third of the ability distribution. Controlling for fertility reduces, but does not eliminate, the estimated effect of delaying intercourse.
NASA Astrophysics Data System (ADS)
Riggi, S.; Antonuccio-Delogu, V.; Bandieramonte, M.; Becciani, U.; Costa, A.; La Rocca, P.; Massimino, P.; Petta, C.; Pistagna, C.; Riggi, F.; Sciacca, E.; Vitello, F.
2013-11-01
Muon tomographic visualization techniques try to reconstruct a 3D image as close as possible to the real localization of the objects being probed. Statistical algorithms under test for the reconstruction of muon tomographic images in the Muon Portal Project are discussed here. Autocorrelation analysis and clustering algorithms have been employed within the context of methods based on the Point Of Closest Approach (POCA) reconstruction tool. An iterative method based on the log-likelihood approach was also implemented. Relative merits of all such methods are discussed, with reference to full GEANT4 simulations of different scenarios, incorporating medium and high-Z objects inside a container.
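A compact sketch of the POCA idea referred to above is given below: each incoming and outgoing muon track is modeled as a straight line, and the midpoint of the segment of closest approach between the two lines is taken as the estimated scattering location. The track values are hypothetical and this is a generic illustration, not the project's reconstruction code.

    # Illustrative Point Of Closest Approach (POCA) computation for two straight
    # tracks, each given as a point and a direction vector.
    import numpy as np

    def poca(p1, d1, p2, d2):
        """Closest-approach midpoint of lines p1 + t*d1 and p2 + s*d2."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b
        if denom < 1e-12:                      # nearly parallel tracks
            t, s = 0.0, (d / b if abs(b) > 1e-12 else 0.0)
        else:
            t = (b * e - c * d) / denom
            s = (a * e - b * d) / denom
        closest1 = p1 + t * d1
        closest2 = p2 + s * d2
        return 0.5 * (closest1 + closest2)     # estimated scattering point

    p_in = np.array([0.0, 0.0, 2.0]);  d_in = np.array([0.10, 0.00, -1.0])   # entering track
    p_out = np.array([0.3, 0.1, -2.0]); d_out = np.array([0.25, 0.05, -1.0]) # exiting track
    print(poca(p_in, d_in, p_out, d_out))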
Carlier, Bouwine E; Schuring, Merel; Burdorf, Alex
2018-03-01
Purpose To evaluate the influence of an interdisciplinary re-employment programme on labour force participation and perceived health among unemployed persons with common mental health problems. In addition, the influence of entering paid employment on self-rated physical health and mental health was investigated. Methods In this quasi-experimental study with 2 years follow up, 869 persons were enrolled after referral to an interdisciplinary re-employment programme (n = 380) or regular re-employment programme (n = 489). The propensity score technique was used to account for observed differences between the intervention and control group. The intervention programme was provided by an interdisciplinary team, consisting of mental health care professionals as well as employment specialists. Mental health problems were addressed through cognitive counselling and individual tailored job-search support was provided by an employment professional. Primary outcome measures were paid employment and voluntary work. Secondary outcome measures were self-rated mental and physical health, measured by the Short Form 12 Health Survey, and anxiety and depressive symptoms, measured by the Kessler Psychological Distress Scale. Changes in labour force participation and health were examined with repeated-measures logistic regression analyses by the generalized estimating equations method. Results The interdisciplinary re-employment programme did not have a positive influence on entering employment or physical or mental health among unemployed persons with mental health problems. After 2 years, 10% of the participants of the intervention programme worked fulltime, compared to 4% of the participants of the usual programmes (adjusted OR 1.65). The observed differences in labour force participation were not statistically significant. However, among persons who entered paid employment, physical health improved (+16%) and anxiety and depressive symptoms decreased (-15%), whereas health remained unchanged among persons who continued to be unemployed. Conclusions Policies to improve population health should take into account that promoting paid employment may be an effective intervention to improve health. It is recommended to invest in interdisciplinary re-employment programmes with a first place and train approach.
A statistical technique for determining rainfall over land employing Nimbus-6 ESMR measurements
NASA Technical Reports Server (NTRS)
Rodgers, E.; Siddalingaiah, H.; Chang, A. T. C.; Wilheit, T. T.
1978-01-01
At 37 GHz, the frequency at which the Nimbus 6 Electrically Scanning Microwave Radiometer (ESMR 6) measures upwelling radiance, it was shown theoretically that the atmospheric scattering and the relative independence on electromagnetic polarization of the radiances emerging from hydrometers make it possible to monitor remotely active rainfall over land. In order to verify experimentally these theoretical findings and to develop an algorithm to monitor rainfall over land, the digitized ESMR 6 measurements were examined statistically. Horizontally and vertically polarized brightness temperature pairs (TH, TV) from ESMR 6 were sampled for areas of rainfall over land as determined from the rain recording stations and the WSR 57 radar, and areas of wet and dry ground (whose thermodynamic temperatures were greater than 5 C) over the Southeastern United States. These three categories of brightness temperatures were found to be significantly different in the sense that the chances that the mean vectors of any two populations coincided were less than 1 in 100.
de Jong, Maarten; Chen, Wei; Notestine, Randy; Persson, Kristin; Ceder, Gerbrand; Jain, Anubhav; Asta, Mark; Gamst, Anthony
2016-10-03
Materials scientists increasingly employ machine or statistical learning (SL) techniques to accelerate materials discovery and design. Such pursuits benefit from pooling training data across, and thus being able to generalize predictions over, k-nary compounds of diverse chemistries and structures. This work presents a SL framework that addresses challenges in materials science applications, where datasets are diverse but of modest size, and extreme values are often of interest. Our advances include the application of power or Hölder means to construct descriptors that generalize over chemistry and crystal structure, and the incorporation of multivariate local regression within a gradient boosting framework. The approach is demonstrated by developing SL models to predict bulk and shear moduli (K and G, respectively) for polycrystalline inorganic compounds, using 1,940 compounds from a growing database of calculated elastic moduli for metals, semiconductors and insulators. The usefulness of the models is illustrated by screening for superhard materials.
Attitude determination using an adaptive multiple model filtering Scheme
NASA Technical Reports Server (NTRS)
Lam, Quang; Ray, Surendra N.
1995-01-01
Attitude determination has long been a topic of active research and remains of lasting interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements, and no attempt was made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework to handle unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics for each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance of the proposed advanced system identifier, it is further reinforced by a learning system implemented (in the outer loop) using neural networks to identify other unknown quantities such as spacecraft dynamics parameters, gyro biases, dynamic disturbances, or environmental variations.
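To make the multiple-model idea concrete, the following highly simplified sketch runs two scalar Kalman filters that differ only in their assumed process-noise level and combines them with weights updated from each filter's measurement likelihood. It is a generic stand-in for the concept, not the authors' attitude-determination design; all numerical values are assumptions.

    # Simplified bank of two scalar Kalman filters with likelihood-based weights.
    import numpy as np

    rng = np.random.default_rng(5)
    T, r_true, q_true = 200, 0.1, 0.05
    x_true, z = 0.0, []
    for _ in range(T):                          # simulate a random-walk state + noisy measurements
        x_true += rng.normal(0, np.sqrt(q_true))
        z.append(x_true + rng.normal(0, np.sqrt(r_true)))

    q_models = [1e-4, 0.05]                     # candidate process-noise levels (assumed)
    r = 0.1                                     # measurement-noise variance (assumed known)
    x = np.zeros(2); P = np.ones(2); w = np.array([0.5, 0.5])

    for zk in z:
        like = np.zeros(2)
        for i, q in enumerate(q_models):
            P[i] += q                           # predict (state transition is identity)
            s = P[i] + r                        # innovation variance
            k = P[i] / s                        # Kalman gain
            innov = zk - x[i]
            like[i] = np.exp(-0.5 * innov**2 / s) / np.sqrt(2 * np.pi * s)
            x[i] += k * innov                   # measurement update
            P[i] *= (1 - k)
        w = w * like
        w /= w.sum()                            # model probabilities from likelihoods

    print("model weights:", np.round(w, 3), "fused estimate:", round(float(w @ x), 3))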
ERIC Educational Resources Information Center
Lewis, Gary
The extent to which occupational staffing patterns change over time was examined in a study focusing on the Food and Kindred Products industry--Standard Industrial Classification (SIC) 20. Data were taken from the 1977 and 1980 Occupational Employment Statistics program coordinated by the United States Department of Labor Statistics. Actual 1980…
ERIC Educational Resources Information Center
Reshad, Rosalind S.
One of six volumes summarizing through narrative and statistical tables data collected by the Equal Employment Opportunity Commission in its 1974 survey, this fifth volume details nationwide statistics on the employment status of minorities and women working in township governments. Data from 299 actual units of government in fourteen states were…
ERIC Educational Resources Information Center
Skinner, Alice W.
One of six volumes summarizing through narrative and statistical tables data collected by the Equal Employment Opportunity Commission in its 1974 survey, this fourth volume details the employment status of minorities and women in municipal governments. Based on reports filed by 2,230 municipalities, statistics in this study are designed to…
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation by the bootstrap procedure to compare two parallel-design arms for continuous data is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (normal distribution assumption) for the identical data is also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types. It shows that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric statistical method of the Wilcoxon test was applied to compare the two groups during the process of bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the beginning, and that the same statistical method as will be used in the subsequent statistical analysis be employed for each bootstrap sample during the course of bootstrap sample size estimation, provided there is historical true data available that can be well representative of the population to which the proposed trial is planning to extrapolate.
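A minimal sketch of the bootstrap power/sample-size idea described above follows: resample with replacement from pilot (historical) data at each candidate per-arm sample size and record how often the planned test rejects. The pilot data are synthetic placeholders, and the choice of the Wilcoxon rank-sum test, the candidate sample sizes and alpha are assumptions for the example.

    # Illustrative bootstrap power estimation for a two-arm comparison.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)

    # Hypothetical (skewed, non-normal) pilot data for the two arms.
    pilot_a = rng.lognormal(mean=0.0, sigma=0.6, size=40)
    pilot_b = rng.lognormal(mean=0.35, sigma=0.6, size=40)

    def bootstrap_power(n_per_arm, n_boot=2000, alpha=0.05):
        rejections = 0
        for _ in range(n_boot):
            a = rng.choice(pilot_a, size=n_per_arm, replace=True)
            b = rng.choice(pilot_b, size=n_per_arm, replace=True)
            # Use the same test planned for the final analysis (Wilcoxon rank-sum).
            if stats.ranksums(a, b).pvalue < alpha:
                rejections += 1
        return rejections / n_boot

    # Increase n until the estimated power reaches the target (e.g. 0.80).
    for n in (20, 40, 60, 80):
        print(n, bootstrap_power(n))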
A novel data-driven learning method for radar target detection in nonstationary environments
Akcakaya, Murat; Nehorai, Arye; Sen, Satyabrata
2016-04-12
Most existing radar algorithms are developed under the assumption that the environment (clutter) is stationary. However, in practice, the characteristics of the clutter can vary enormously depending on the radar-operational scenarios. If unaccounted for, these nonstationary variabilities may drastically hinder the radar performance. Therefore, to overcome such shortcomings, we develop a data-driven method for target detection in nonstationary environments. In this method, the radar dynamically detects changes in the environment and adapts to these changes by learning the new statistical characteristics of the environment and by intelligibly updating its statistical detection algorithm. Specifically, we employ drift detection algorithms to detect changes in the environment; incremental learning, particularly learning under concept drift algorithms, to learn the new statistical characteristics of the environment from the new radar data that become available in batches over a period of time. The newly learned environment characteristics are then integrated in the detection algorithm. Furthermore, we use Monte Carlo simulations to demonstrate that the developed method provides a significant improvement in the detection performance compared with detection techniques that are not aware of the environmental changes.
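One common, generic choice of drift detector, shown below purely for illustration, is the Page-Hinkley test, which flags a change when the cumulative deviation of a monitored statistic from its running mean exceeds a threshold. This is not necessarily the detector used in the paper; the thresholds and the simulated clutter statistic are assumptions.

    # Simple Page-Hinkley drift detector on a stream of monitored statistics.
    import numpy as np

    def page_hinkley(stream, delta=0.05, lam=5.0):
        """Return the index at which drift is flagged, or None."""
        mean, cum, min_cum = 0.0, 0.0, 0.0
        for i, x in enumerate(stream, start=1):
            mean += (x - mean) / i              # running mean of the stream
            cum += x - mean - delta             # cumulative positive deviation
            min_cum = min(min_cum, cum)
            if cum - min_cum > lam:             # drift detected
                return i
        return None

    rng = np.random.default_rng(7)
    # Hypothetical clutter-power estimates: stationary at first, then the
    # environment changes and the level rises.
    stream = np.concatenate([rng.normal(1.0, 0.3, 300), rng.normal(2.0, 0.3, 200)])
    print("drift flagged at sample:", page_hinkley(stream))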
NASA Astrophysics Data System (ADS)
Wan, Xiaoqing; Zhao, Chunhui; Wang, Yanchun; Liu, Wu
2017-11-01
This paper proposes a novel classification paradigm for hyperspectral images (HSI) using feature-level fusion and deep learning-based methodologies. The operation is carried out in three main steps. First, during a pre-processing stage, wave atoms are introduced into a bilateral filter to smooth the HSI; this strategy can effectively attenuate noise and restore texture information. Meanwhile, high-quality spectral-spatial features can be extracted from the HSI by taking geometric closeness and photometric similarity among pixels into consideration simultaneously. Second, higher-order statistics techniques are introduced, for the first time, into hyperspectral data classification to characterize the phase correlations of spectral curves. Third, multifractal spectrum features are extracted to characterize the singularities and self-similarities of spectral shapes. To this end, a feature-level fusion is applied to the extracted spectral-spatial features along with the higher-order statistics and multifractal spectrum features. Finally, a stacked sparse autoencoder is utilized to learn more abstract and invariant high-level features from the multiple feature sets, and then a random forest classifier is employed to perform supervised fine-tuning and classification. Experimental results on two real hyperspectral data sets demonstrate that the proposed method outperforms some traditional alternatives.
Moloi, Mothusi Walter; Kajawo, Shepherd; Noubiap, Jean Jacques; Mbah, Ikechukwu O; Ekrikpo, Udeme; Kengne, Andre Pascal; Bello, Aminu K; Okpechi, Ikechi G
2018-05-24
Continuous ambulatory peritoneal dialysis (CAPD) is the ideal modality for renal replacement therapy in most African settings given that it is relatively cheaper than haemodialysis (HD) and does not require in-centre care. CAPD is, however, not readily utilised as it is often complicated by peritonitis leading to high rates of technique failure. The objective of this study is to assess the prevalence of CAPD-related peritonitis and all-cause mortality in patients treated with CAPD in Africa. We will search PubMed, EMBASE, SCOPUS, Africa Journal Online and Google Scholar for studies conducted in Africa from 1 January 1980 to 30 June 2017 with no language restrictions. Eligible studies will include cross-sectional, prospective observational and cohort studies of patients treated with CAPD. Two authors will independently screen, select studies, extract data and conduct risk of bias assessment. Data consistently reported across studies will be pooled using random-effects meta-analysis. Heterogeneity will be evaluated using Cochran's Q statistic and quantified using the I² statistic. Graphical and formal statistical tests will be used to assess for publication bias. Ethical approval will not be needed for this study as data used will be extracted from already published studies. Results of this review will be published in a peer-reviewed journal and presented at conferences. The Preferred Reporting Items for Systematic reviews and Meta-Analyses for Protocols 2015 (PRISMA-P 2015) framework guided the development of this protocol. CRD42017072966. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Coronal Holes and Solar f-Mode Wave Scattering Off Linear Boundaries
NASA Astrophysics Data System (ADS)
Hess Webber, Shea A.
2016-11-01
Coronal holes (CHs) are solar atmospheric features that have reduced emission in the extreme ultraviolet (EUV) spectrum due to decreased plasma density along open magnetic field lines. CHs are the source of the fast solar wind, can influence other solar activity, and track the solar cycle. Our interest in them deals with boundary detection near the solar surface. Detecting CH boundaries is important for estimating their size and tracking their evolution through time, as well as for comparing the physical properties within and outside of the feature. In this thesis, we (1) investigate CHs using statistical properties and image processing techniques on EUV images to detect CH boundaries in the low corona and chromosphere. SOHO/EIT data is used to locate polar CH boundaries on the solar limb, which are then tracked through two solar cycles. Additionally, we develop an edge-detection algorithm that we use on SDO/AIA data of a polar hole extension with an approximately linear boundary. These locations are used later to inform part of the helioseismic investigation; (2) develop a local time-distance (TD) helioseismology technique that can be used to detect CH boundary signatures at the photospheric level. We employ a new averaging scheme that makes use of the quasi-linear topology of elongated scattering regions, and create simulated data to test the new technique and compare results of some associated assumptions. This method enhances the wave propagation signal in the direction perpendicular to the linear feature and reduces the computational time of the TD analysis. We also apply a new statistical analysis of the significance of differences between the TD results; and (3) apply the TD techniques to solar CH data from SDO/HMI. The data correspond to the AIA data used in the edge-detection algorithm on EUV images. We look for statistically significant differences between the TD results inside and outside the CH region. In investigation (1), we found that the polar CH areas did not change significantly between minima, even though the magnetic field strength weakened. The results of (2) indicate that TD helioseismology techniques can be extended to make use of feature symmetry in the domain. The linear technique used here produces results that differ between a linear scattering region and a circular scattering region, shown using the simulated data algorithm. This suggests that using usual TD methods on scattering regions that are radially asymmetric may produce results with signatures of the anisotropy. The results of (1) and (3) indicate that the TD signal within our CH is statistically significantly different compared to unrelated quiet sun results. Surprisingly, the TD results in the quiet sun near the CH boundary also show significant differences compared to the separate quiet sun.
Characterizing sources of uncertainty from global climate models and downscaling techniques
Wootten, Adrienne; Terando, Adam; Reich, Brian J.; Boyles, Ryan; Semazzi, Fred
2017-01-01
In recent years climate model experiments have been increasingly oriented towards providing information that can support local and regional adaptation to the expected impacts of anthropogenic climate change. This shift has magnified the importance of downscaling as a means to translate coarse-scale global climate model (GCM) output to a finer scale that more closely matches the scale of interest. Applying this technique, however, introduces a new source of uncertainty into any resulting climate model ensemble. Here we present a method, based on a previously established variance decomposition method, to partition and quantify the uncertainty in climate model ensembles that is attributable to downscaling. We apply the method to the Southeast U.S. using five downscaled datasets that represent both statistical and dynamical downscaling techniques. The combined ensemble is highly fragmented, in that only a small portion of the complete set of downscaled GCMs and emission scenarios are typically available. The results indicate that the uncertainty attributable to downscaling approaches ~20% for large areas of the Southeast U.S. for precipitation and ~30% for extreme heat days (> 35°C) in the Appalachian Mountains. However, attributable quantities are significantly lower for time periods when the full ensemble is considered but only a sub-sample of all models are available, suggesting that overconfidence could be a serious problem in studies that employ a single set of downscaled GCMs. We conclude with recommendations to advance the design of climate model experiments so that the uncertainty that accrues when downscaling is employed is more fully and systematically considered.
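A toy version of the variance partition can make the idea concrete. The sketch below is only an illustration under strong assumptions: it invents a small, complete matrix of projected changes for one grid cell (the real ensemble is fragmented) and applies a simple two-factor, ANOVA-style split rather than the paper's full decomposition method.

```python
# Minimal sketch (hypothetical complete ensemble, not the authors' code): partition
# the spread of projected changes into GCM, downscaling and residual components.
import numpy as np

rng = np.random.default_rng(0)
changes = rng.normal(loc=1.5, scale=0.4, size=(6, 5))   # 6 GCMs x 5 downscaling methods

grand_mean = changes.mean()
gcm_effect = changes.mean(axis=1) - grand_mean           # GCM main effects
ds_effect = changes.mean(axis=0) - grand_mean            # downscaling main effects
residual = changes - grand_mean - gcm_effect[:, None] - ds_effect[None, :]

var_gcm, var_ds, var_res = np.var(gcm_effect), np.var(ds_effect), np.var(residual)
total = var_gcm + var_ds + var_res
print(f"fraction of variance attributable to downscaling: {var_ds / total:.2%}")
```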
Song, Xiao-Dong; Zhang, Gan-Lin; Liu, Feng; Li, De-Cheng; Zhao, Yu-Guo
2016-11-01
Anthropogenic activities and natural processes introduce high uncertainty into the spatial variation modeling of soil available zinc (AZn) in plain river network regions. Four datasets with different sampling densities were drawn from the Qiaocheng district of Bozhou City, China. Differences in AZn concentrations among soil types were analyzed by principal component analysis (PCA). Since stationarity was not indicated and the effective ranges of the four datasets were larger than the sampling extent (about 400 m), two investigation tools, namely the F3 test and the stationarity index (SI), were employed to test for local non-stationarity. The geographically weighted regression (GWR) technique was performed to describe the spatial heterogeneity of AZn concentrations under the non-stationarity assumption. GWR based on grouped soil type information (GWRG for short) was proposed to benefit the local modeling of soil AZn within each soil-landscape unit. For reference, a multiple linear regression (MLR) model, a global regression technique, was also employed, incorporating the same predictors as the GWR models. Validation results based on 100 realizations demonstrated that GWRG outperformed MLR and produced similar or better accuracy than the GWR approach. Moreover, GWRG generated better soil maps than GWR when soil data were limited. A two-sample t test of the produced soil maps also confirmed significantly different means. Variogram analysis of the model residuals exhibited weak spatial correlation, arguing against the use of hybrid kriging techniques. As a heuristic statistical method, GWRG was beneficial in this study and is potentially applicable to other soil properties.
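To make the GWR step concrete, here is a minimal, hypothetical sketch: a fixed Gaussian kernel weights nearby observations and a separate weighted least-squares fit is solved at each site. The coordinates, covariates, bandwidth and AZn values are all invented for illustration and do not reproduce the study's GWRG model.

```python
# Minimal GWR sketch on invented data: local weighted least squares with a Gaussian kernel.
import numpy as np

def gwr_fitted(coords, X, y, bandwidth):
    """Return locally fitted values; X must include an intercept column."""
    fitted = np.empty(len(y))
    for i, c in enumerate(coords):
        d = np.linalg.norm(coords - c, axis=1)            # distances to site i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local weighted least squares
        fitted[i] = X[i] @ beta
    return fitted

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1000, size=(200, 2))              # hypothetical site locations (m)
covars = rng.normal(size=(200, 2))                        # e.g., standardized pH, organic matter
X = np.column_stack([np.ones(200), covars])
y = 40 + 3 * covars[:, 0] + rng.normal(scale=2, size=200) # hypothetical AZn values (mg/kg)
print(gwr_fitted(coords, X, y, bandwidth=200.0)[:5])
```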
Effect of the image resolution on the statistical descriptors of heterogeneous media.
Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime
2018-02-01
The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of the image-size reduction, due to the progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image (reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function deviates from the reference function. Moreover, the equally weighted sum of the average squared difference between the discrete correlation functions of the decimated images and the reference functions leads to the definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.
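As a toy illustration of the descriptors and error measure discussed above (not the paper's code), the sketch below computes the two-point probability function S2(r) of a synthetic binary image along one axis, decimates the image by a factor of two, and evaluates the average squared difference between the normalized correlation functions at matched lags.

```python
# Minimal sketch on a synthetic binary image: two-point correlation along x and a
# squared-difference error between original and decimated descriptors.
import numpy as np

def s2_x(img, r_max):
    """Two-point probability S2(r) along x for a binary (0/1) image, r = 0..r_max."""
    n = img.shape[1]
    return np.array([np.mean(img[:, :n - r] * img[:, r:n]) for r in range(r_max + 1)])

def normalize(s2):
    phi = s2[0]                                  # volume fraction = S2(0)
    return (s2 - phi ** 2) / (phi - phi ** 2)    # 1 at r = 0, -> 0 at large r

rng = np.random.default_rng(2)
original = (rng.random((256, 256)) < 0.4).astype(float)
decimated = original[::2, ::2]                   # naive decimation by a factor of 2

s2_ref = s2_x(original, 32)
s2_dec = s2_x(decimated, 16)
# matched physical lags: decimated lag r corresponds to 2r original pixels
err = np.mean((normalize(s2_dec) - normalize(s2_ref)[::2]) ** 2)
print(f"average squared difference of normalized S2: {err:.4f}")
```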
Self-Regulated Learning Strategies in Relation with Statistics Anxiety
ERIC Educational Resources Information Center
Kesici, Sahin; Baloglu, Mustafa; Deniz, M. Engin
2011-01-01
Dealing with students' attitudinal problems related to statistics is an important aspect of statistics instruction. Employing the appropriate learning strategies may have a relationship with anxiety during the process of statistics learning. Thus, the present study investigated multivariate relationships between self-regulated learning strategies…
A Multi-Class, Interdisciplinary Project Using Elementary Statistics
ERIC Educational Resources Information Center
Reese, Margaret
2012-01-01
This article describes a multi-class project that employs statistical computing and writing in a statistics class. Three courses, General Ecology, Meteorology, and Introductory Statistics, cooperated on a project for the EPA's Student Design Competition. The continuing investigation has also spawned several undergraduate research projects in…
CAPSAS: Computer Assisted Program for the Selection of Appropriate Statistics.
ERIC Educational Resources Information Center
Shermis, Mark D.; Albert, Susan L.
A computer-assisted program has been developed for the selection of statistics or statistical techniques by both students and researchers. Based on Andrews, Klem, Davidson, O'Malley and Rodgers "A Guide for Selecting Statistical Techniques for Analyzing Social Science Data," this FORTRAN-compiled interactive computer program was…
Silva, Arlene S; Brandao, Geovani C; Matos, Geraldo D; Ferreira, Sergio L C
2015-11-01
The present work proposed an analytical method for the direct determination of chromium in infant formulas employing high-resolution continuum source electrothermal atomic absorption spectrometry combined with solid sample analysis (SS-HR-CS ET AAS). Sample masses up to 2.0 mg were directly weighed on a solid sampling platform and introduced into the graphite tube. In order to minimize the formation of carbonaceous residues and to improve the contact of the modifier solution with the solid sample, a volume of 10 µL of a solution containing 6% (v/v) H2O2, 20% (v/v) ethanol and 1% (v/v) HNO3 was added. The pyrolysis and atomization temperatures established were 1600 and 2400 °C, respectively, using magnesium as the chemical modifier. The calibration technique was evaluated by comparing the slopes of calibration curves established using aqueous and solid standards. This test revealed that chromium can be determined employing the external calibration technique with aqueous standards. Under these conditions, the method developed allows the direct determination of chromium with a limit of quantification of 11.5 ng g(-1), precision expressed as relative standard deviation (RSD) in the range of 4.0-17.9% (n=3) and a characteristic mass of 1.2 pg of chromium. The accuracy was confirmed by analysis of a certified reference material of tomato leaves furnished by the National Institute of Standards and Technology. The method proposed was applied for the determination of chromium in five different infant formula samples. The chromium content found varied in the range of 33.9-58.1 ng g(-1) (n=3). These samples were also analyzed employing ICP-MS. A statistical test demonstrated that there is no significant difference between the results found by the two methods. The chromium concentrations found are below the maximum limit for chromium in foods permitted by Brazilian legislation. Copyright © 2015. Published by Elsevier B.V.
Model Independence in Downscaled Climate Projections: a Case Study in the Southeast United States
NASA Astrophysics Data System (ADS)
Gray, G. M. E.; Boyles, R.
2016-12-01
Downscaled climate projections are used to deduce how the climate will change in future decades at local and regional scales. It is important to use multiple models to characterize part of the future uncertainty, given the impact on adaptation decision making. This is traditionally accomplished through an equally weighted ensemble of multiple GCMs downscaled using one technique. Newer practices include several downscaling techniques in an effort to increase the ensemble's representation of future uncertainty. However, this practice may be adding statistically dependent models to the ensemble. Previous research has shown a dependence problem in GCM ensembles across multiple generations, but dependence has not been demonstrated in downscaled ensembles. In this case study, seven downscaled climate projections on the daily time scale are considered: CLAREnCE10, SERAP, BCCA (CMIP5 and CMIP3 versions), Hostetler, CCR, and MACA-LIVNEH. These data represent 83 ensemble members, 44 GCMs, and two generations of GCMs. Baseline periods are compared against the University of Idaho's METDATA gridded observation dataset. Hierarchical agglomerative clustering is applied to the correlated errors to determine dependent clusters. Redundant GCMs across different downscaling techniques show the most dependence, while smaller dependence signals are detected within downscaling datasets and across generations of GCMs. These results indicate that using additional downscaled projections to increase the ensemble size must be done with care to avoid redundant GCMs, and that the process of downscaling may itself increase the dependence of the downscaled GCMs. The two climate model generations do not appear dissimilar enough to be treated as separate statistical populations for ensemble building at the local and regional scales.
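The dependence analysis described above can be sketched as follows. The data are synthetic stand-ins (a shared error signal is injected into half the members), not the study's ensemble; the idea is simply to cluster members on one minus the correlation of their errors using average-linkage agglomerative clustering.

```python
# Minimal sketch: hierarchical agglomerative clustering of ensemble members based on
# the correlation of their errors against a reference (synthetic stand-in data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(3)
n_members, n_gridpoints = 12, 500
errors = rng.normal(size=(n_members, n_gridpoints))
errors[6:] += rng.normal(size=n_gridpoints) * 0.8   # members 6-11 share an error signal

corr = np.corrcoef(errors)                           # member-by-member error correlation
dist = 1.0 - corr                                    # strong dependence -> small distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=0.7, criterion="distance")
print("cluster labels:", clusters)
```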
NASA Astrophysics Data System (ADS)
Hadley, Brian Christopher
This dissertation assessed remotely sensed data and geospatial modeling technique(s) to map the spatial distribution of total above-ground biomass present on the surface of the Savannah River National Laboratory's (SRNL) Mixed Waste Management Facility (MWMF) hazardous waste landfill. Ordinary least squares (OLS) regression, regression kriging, and tree-structured regression were employed to model the empirical relationship between in-situ measured Bahia (Paspalum notatum Flugge) and Centipede [Eremochloa ophiuroides (Munro) Hack.] grass biomass and an assortment of explanatory variables extracted from fine spatial resolution passive optical and LIDAR remotely sensed data. Explanatory variables included: (1) discrete channels of visible, near-infrared (NIR), and short-wave infrared (SWIR) reflectance, (2) spectral vegetation indices (SVI), (3) spectral mixture analysis (SMA) modeled fractions, (4) narrow-band derivative-based vegetation indices, and (5) LIDAR-derived topographic variables (i.e., elevation, slope, and aspect). Results showed that a linear combination of the first- (1DZ_DGVI), second- (2DZ_DGVI), and third-derivative (3DZ_DGVI) green vegetation indices calculated from hyperspectral data recorded over the 400-960 nm wavelengths of the electromagnetic spectrum explained the largest percentage of statistical variation (R2 = 0.5184) in the total above-ground biomass measurements. In general, the topographic variables did not correlate well with the MWMF biomass data, accounting for less than five percent of the statistical variation. It was concluded that tree-structured regression represented the optimum geospatial modeling technique due to a combination of model performance and efficiency/flexibility factors.
Cournane, S; Sheehy, N; Cooke, J
2014-06-01
Benford's law is an empirical observation which predicts the expected frequency of digits in naturally occurring datasets spanning multiple orders of magnitude, with the law having been most successfully applied as an audit tool in accountancy. This study investigated the sensitivity of the technique in identifying system output changes using simulated changes in interventional radiology Dose-Area-Product (DAP) data, with any deviations from Benford's distribution identified using z-statistics. The radiation output of interventional radiology X-ray equipment is monitored annually during quality control testing; however, for a considerable portion of the year an increased output of the system, potentially caused by engineering adjustments or spontaneous system faults, may go unnoticed, leading to a potential increase in the radiation dose to patients. In normal operation, recorded examination radiation outputs vary over multiple orders of magnitude, rendering conventional statistics ineffective for detecting systematic changes in the output. In this work, the annual DAP datasets complied with Benford's first-order law for the first digit, the second digit, and combinations of the first and second digits. Further, a continuous 'rolling' second-order technique was devised for trending simulated changes over shorter timescales. This distribution analysis, the first employment of the method for radiation output trending, detected the significant changes simulated in the original data, proving the technique useful in this case. The potential is demonstrated for implementation of this novel analysis for monitoring and identifying change in suitable datasets for the purpose of system process control. Copyright © 2013 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
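For readers unfamiliar with the test, the sketch below (using simulated, lognormally distributed DAP-like values rather than the paper's data) computes the expected Benford first-digit proportions and a z-statistic for each digit, with the small-sample continuity correction commonly used in Benford audits.

```python
# Minimal sketch (not the paper's implementation): Benford first-digit test with z-statistics.
import numpy as np

def first_digits(values):
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10 ** np.floor(np.log10(v))).astype(int)   # leading significant digit

def benford_z(values):
    d = first_digits(values)
    n = len(d)
    z = {}
    for digit in range(1, 10):
        p_exp = np.log10(1 + 1 / digit)                     # expected Benford proportion
        p_obs = np.mean(d == digit)
        se = np.sqrt(p_exp * (1 - p_exp) / n)
        z[digit] = (abs(p_obs - p_exp) - 1 / (2 * n)) / se  # with continuity correction
    return z

rng = np.random.default_rng(4)
dap = rng.lognormal(mean=3.0, sigma=1.5, size=2000)          # simulated DAP-like values
print({k: round(v, 2) for k, v in benford_z(dap).items()})
```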
An Analysis of the Navy’s Voluntary Education Program
2007-03-01
[Table-of-contents fragment only: Naval Analysis VOLED Study (Data; Statistical Models); Employer-Financed General Training (Data; Statistical Model; Findings).]
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Comparative study of scarf and extended chevron osteotomies for correction of hallux valgus.
Vopat, Bryan G; Lareau, Craig R; Johnson, Julie; Reinert, Steven E; DiGiovanni, Christopher W
2013-12-01
Scarf and chevron osteotomies are two described treatments for the correction of hallux valgus deformity, but they have traditionally been employed for different levels of severity. We hypothesized that there would be no statistically significant difference between the results of these two treatments. This study is a retrospective review of 70 consecutive patients treated operatively for moderate and severe hallux valgus malalignment. Patients were divided into two groups based on their operative treatment: scarf osteotomy (Group A) and extended chevron osteotomy (Group B). Preoperative and postoperative hallux valgus angle (HVA), intermetatarsal angle (IMA) and distal metatarsal articular angle (DMAA) were measured at final follow-up. Charts were also assessed to determine the postoperative rates of satisfaction, stiffness, and pain. There were no statistically significant differences between Groups A and B with regard to the HVA preoperatively and postoperatively. The DMAA was statistically significantly higher for Group B both preoperatively (p=0.0403) and postoperatively (p<0.0001). The differences in HVA correction and IMA correction were not statistically significant. There were no statistically significant differences with regard to postoperative stiffness, pain, and satisfaction. The scarf and extended chevron osteotomies are capable of adequately reducing the HVA and IMA in patients with moderate to severe hallux valgus. These two techniques yielded similar patient outcomes in terms of stiffness, pain and satisfaction. Based on these results, we recommend both the scarf and extended chevron osteotomy as acceptable forms of correction for moderate to severe hallux valgus.
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.
2017-03-01
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
Statistical Approaches Used to Assess the Equity of Access to Food Outlets: A Systematic Review
Lamb, Karen E.; Thornton, Lukar E.; Cerin, Ester; Ball, Kylie
2015-01-01
Background Inequalities in eating behaviours are often linked to the types of food retailers accessible in neighbourhood environments. Numerous studies have aimed to identify if access to healthy and unhealthy food retailers is socioeconomically patterned across neighbourhoods, and thus a potential risk factor for dietary inequalities. Existing reviews have examined differences between methodologies, particularly focussing on neighbourhood and food outlet access measure definitions. However, no review has informatively discussed the suitability of the statistical methodologies employed; a key issue determining the validity of study findings. Our aim was to examine the suitability of statistical approaches adopted in these analyses. Methods Searches were conducted for articles published from 2000–2014. Eligible studies included objective measures of the neighbourhood food environment and neighbourhood-level socio-economic status, with a statistical analysis of the association between food outlet access and socio-economic status. Results Fifty-four papers were included. Outlet accessibility was typically defined as the distance to the nearest outlet from the neighbourhood centroid, or as the number of food outlets within a neighbourhood (or buffer). To assess if these measures were linked to neighbourhood disadvantage, common statistical methods included ANOVA, correlation, and Poisson or negative binomial regression. Although all studies involved spatial data, few considered spatial analysis techniques or spatial autocorrelation. Conclusions With advances in GIS software, sophisticated measures of neighbourhood outlet accessibility can be considered. However, approaches to statistical analysis often appear less sophisticated. Care should be taken to consider assumptions underlying the analysis and the possibility of spatially correlated residuals which could affect the results. PMID:29546115
Analysis of Employment Flow of Landscape Architecture Graduates in Agricultural Universities
ERIC Educational Resources Information Center
Yao, Xia; He, Linchun
2012-01-01
A statistical analysis of the employment flow of landscape architecture graduates was conducted on the employment data of graduates majoring in landscape architecture from 2008 to 2011. The employment flow of graduates covered admission to graduate study, industry destination and regional distribution, etc. Then, the features of talent flow and factors…
29 CFR 1614.601 - EEO group statistics.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false EEO group statistics. 1614.601 Section 1614.601 Labor... EMPLOYMENT OPPORTUNITY Matters of General Applicability § 1614.601 EEO group statistics. (a) Each agency... provided by an employee is inaccurate, the agency shall advise the employee about the solely statistical...
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
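A minimal example of the metamodeling idea, under the assumption of a cheap stand-in for the expensive analysis code: a second-order response surface is fitted by least squares to a small design of experiments and then queried in place of the original function. This is illustrative only and is not drawn from the paper.

```python
# Minimal response-surface metamodel sketch with a stand-in "expensive" analysis code.
import numpy as np
from itertools import combinations_with_replacement

def expensive_analysis(x):                      # stand-in for the costly simulation
    return np.sin(3 * x[0]) + x[1] ** 2 + 0.5 * x[0] * x[1]

def quadratic_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(5)
X_train = rng.uniform(-1, 1, size=(15, 2))      # a small design of experiments
y_train = np.array([expensive_analysis(x) for x in X_train])
beta, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)

x_new = np.array([[0.2, -0.4]])
print("metamodel:", quadratic_features(x_new) @ beta, " true:", expensive_analysis(x_new[0]))
```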
ERIC Educational Resources Information Center
Kidney, John
This self-instructional module, the eleventh in a series of 16 on techniques for coordinating work experience programs, deals with federal and state employment laws. Addressed in the module are federal and state employment laws pertaining to minimum wage for student learners, minimum wage for full-time students, unemployment insurance, child labor…
A Hybrid Semi-supervised Classification Scheme for Mining Multisource Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju; Bhaduri, Budhendra L
2011-01-01
Supervised learning methods such as Maximum Likelihood (ML) are often used in land cover (thematic) classification of remote sensing imagery. The ML classifier relies exclusively on spectral characteristics of thematic classes whose statistical distributions (class conditional probability densities) are often overlapping. The spectral response distributions of thematic classes are dependent on many factors including elevation, soil types, and ecological zones. A second problem with statistical classifiers is the requirement of a large number of accurate training samples (on the order of 10 to 30 times the number of feature dimensions), which are often costly and time consuming to acquire over large geographic regions. With the increasing availability of geospatial databases, it is possible to exploit the knowledge derived from these ancillary datasets to improve classification accuracies even when the class distributions are highly overlapping. Likewise, newer semi-supervised techniques can be adopted to improve the parameter estimates of the statistical model by utilizing a large number of easily available unlabeled training samples. Unfortunately, there is no convenient multivariate statistical model that can be employed for multisource geospatial databases. In this paper we present a hybrid semi-supervised learning algorithm that effectively exploits freely available unlabeled training samples from multispectral remote sensing images and also incorporates ancillary geospatial databases. We have conducted several experiments on real datasets, and our new hybrid approach shows over 25 to 35% improvement in overall classification accuracy over conventional classification schemes.
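The abstract's core idea, using unlabeled samples to refine the parameter estimates of a statistical classifier, can be sketched with a label-anchored EM for a two-class Gaussian mixture. The data below are synthetic and the algorithm is a generic textbook version, not the authors' hybrid method, and it omits the ancillary geospatial knowledge their approach incorporates.

```python
# Minimal semi-supervised EM sketch: a few labeled samples anchor the classes,
# many unlabeled samples refine means, covariances and priors (synthetic data).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
X_lab = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unl = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
X = np.vstack([X_lab, X_unl])

# initialize parameters from the labeled samples alone
means = np.array([X_lab[y_lab == k].mean(axis=0) for k in range(2)])
covs = [np.cov(X_lab[y_lab == k].T) + 1e-3 * np.eye(2) for k in range(2)]
priors = np.array([0.5, 0.5])
R = np.zeros((len(X), 2))
R[:len(y_lab)] = np.eye(2)[y_lab]                 # labeled responsibilities stay fixed

for _ in range(30):
    # E-step: responsibilities for unlabeled samples only
    like = np.column_stack([priors[k] * multivariate_normal.pdf(X, means[k], covs[k])
                            for k in range(2)])
    R[len(y_lab):] = like[len(y_lab):] / like[len(y_lab):].sum(axis=1, keepdims=True)
    # M-step: update priors, means and covariances from all samples
    Nk = R.sum(axis=0)
    priors = Nk / len(X)
    means = (R.T @ X) / Nk[:, None]
    covs = [(R[:, k, None] * (X - means[k])).T @ (X - means[k]) / Nk[k] + 1e-6 * np.eye(2)
            for k in range(2)]

print("estimated class means:\n", means)
```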
Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums
NASA Astrophysics Data System (ADS)
Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei
2016-12-01
Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and to shorten the long detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by experiments and simulations simultaneously for 208-liter standard waste drums containing three types of nuclides. Cases with a point source or multiple point sources and with uniform or nonuniform materials are employed for comparison. The results show that STGS exhibits a large improvement in detection performance, with the reconstruction error and statistical bias reduced by one quarter to one third or less in most cases when compared with SGS.
NASA Technical Reports Server (NTRS)
Wiedenbeck, M. E.
1977-01-01
An instrument, the Caltech High Energy Isotope Spectrometer Telescope, was developed to measure isotopic abundances of cosmic ray nuclei by employing an energy loss/residual energy technique. A detailed analysis was made of the mass resolution capabilities of this instrument. A formalism, based on the leaky box model of cosmic ray propagation, was developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. It was shown that the dominant sources of uncertainty in the derived source ratios are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances. These results were applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.
Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo
2018-03-30
Pollution and the eutrophication process are increasing in lake Yahuarcocha and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
Insufficient Knowledge of Breast Cancer Risk Factors Among Malaysian Female University Students
Samah, Asnarulkhadi Abu; Ahmadian, Maryam; Latiff, Latiffah A.
2016-01-01
Background: Despite continuing argument about the efficacy of breast self-examination, it could still be a life-saving technique by inspiring and empowering women to take better control over their bodies/breasts and health. This study investigated Malaysian female university students' knowledge about breast cancer risk factors, signs, and symptoms and assessed breast self-examination frequency among students. Method: A cross-sectional survey was conducted in 2013 in nine public and private universities in the Klang Valley and Selangor. A total of 842 female students responded to the self-administered survey. Simple descriptive and inferential statistics were employed for data analysis. Results: The uptake of breast self-examination (BSE) was less than 50% among the students. Most of the students had insufficient knowledge of several breast cancer risk factors. Conclusion: Efforts should be made to increase knowledge of breast cancer through the development of ethnically and culturally sensitive educational training on BSE and breast cancer literacy. PMID:26234996
Real-time scalable visual analysis on mobile devices
NASA Astrophysics Data System (ADS)
Pattath, Avin; Ebert, David S.; May, Richard A.; Collins, Timothy F.; Pike, William
2008-02-01
Interactive visual presentation of information can help an analyst gain faster and better insight from data. When combined with situational or context information, visualization on mobile devices is invaluable to in-field responders and investigators. However, several challenges are posed by the form-factor of mobile devices in developing such systems. In this paper, we classify these challenges into two broad categories - issues in general mobile computing and issues specific to visual analysis on mobile devices. Using NetworkVis and Infostar as example systems, we illustrate some of the techniques that we employed to overcome many of the identified challenges. NetworkVis is an OpenVG-based real-time network monitoring and visualization system developed for Windows Mobile devices. Infostar is a flash-based interactive, real-time visualization application intended to provide attendees access to conference information. Linked time-synchronous visualization, stylus/button-based interactivity, vector graphics, overview-context techniques, details-on-demand and statistical information display are some of the highlights of these applications.
Re-calibration of coronary risk prediction: an example of the Seven Countries Study.
Puddu, Paolo Emilio; Piras, Paolo; Kromhout, Daan; Tolonen, Hanna; Kafatos, Anthony; Menotti, Alessandro
2017-12-14
We aimed to perform a calibration and re-calibration exercise using six standard risk factors from Northern European (NE, N = 2360) and Southern European (SE, N = 2789) middle-aged men of the Seven Countries Study, whose parameters and data were fully known, to establish whether re-calibration gave the right answer. The Greenwood-Nam-D'Agostino technique as modified by Demler in 2015 (GNDD) produces chi-squared statistics using 10 deciles of observed/expected CHD mortality risk, corresponding to the Hosmer-Lemeshow chi-squared employed for multiple logistic equations where binary data are used. Instead of the number of events, the GNDD test uses survival probabilities of observed and predicted events. The exercise applied, in five different ways, the parameters of the NE predictive model to SE (and vice versa) and compared the outcome of the simulated re-calibration with the real data. Good re-calibration could be obtained only when risk factor coefficients were substituted, these being similar in magnitude and not significantly different between NE and SE. In all other ways, a good re-calibration could not be obtained. This is enough to argue for re-evaluation of most investigations that, without GNDD or another proper technique for statistically assessing the potential differences, concluded that re-calibration is a fair method and might therefore be used without specific caution.
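The decile-based calibration statistic can be illustrated with a simplified sketch on hypothetical data. Note the simplification: this is a Hosmer-Lemeshow-style comparison of observed event proportions with mean predicted risk per decile, whereas the GND(D) statistic used in the paper replaces each decile's observed proportion with a Kaplan-Meier estimate to handle censored survival data.

```python
# Minimal sketch (hypothetical data): decile-based calibration chi-square.
import numpy as np
from scipy.stats import chi2

def decile_calibration_chi2(pred_risk, events, n_groups=10):
    order = np.argsort(pred_risk)
    groups = np.array_split(order, n_groups)        # deciles of predicted risk
    stat = 0.0
    for g in groups:
        expected = pred_risk[g].mean()
        observed = events[g].mean()
        stat += len(g) * (observed - expected) ** 2 / (expected * (1 - expected))
    return stat, chi2.sf(stat, df=n_groups - 1)

rng = np.random.default_rng(7)
risk = rng.uniform(0.01, 0.4, size=2000)            # predicted CHD mortality risks
died = (rng.random(2000) < risk).astype(float)      # outcomes consistent with the model
print(decile_calibration_chi2(risk, died))
```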
Random field assessment of nanoscopic inhomogeneity of bone.
Dong, X Neil; Luo, Qing; Sparkman, Daniel M; Millwater, Harry R; Wang, Xiaodu
2010-12-01
Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic details. Copyright © 2010 Elsevier Inc. All rights reserved.
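To make the random-field idea concrete, here is a small illustrative sketch (not the authors' code, with invented mean, standard deviation and correlation length): one realization of a 1-D Gaussian field of elastic modulus whose covariance decays exponentially, drawn via a Cholesky factorization of the covariance matrix.

```python
# Minimal sketch: sample a 1-D Gaussian random field with exponential covariance.
import numpy as np

def sample_exponential_field(x, mean, std, corr_length, rng):
    """One realization of a field with C(h) = std^2 * exp(-|h| / corr_length)."""
    h = np.abs(x[:, None] - x[None, :])
    cov = std ** 2 * np.exp(-h / corr_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))  # jitter for numerical stability
    return mean + L @ rng.standard_normal(len(x))

rng = np.random.default_rng(8)
x = np.linspace(0, 5.0, 200)                              # positions across a lamella (um)
modulus = sample_exponential_field(x, mean=20.0, std=3.0, corr_length=0.5, rng=rng)  # GPa
print(modulus[:5])
```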
Kordi, Masoumeh; Erfanian, Fatemeh; Fakari, Farzaneh Rashidi; Dastfan, Fatemeh; Nejad, Keivan Shariati
2017-01-01
INTRODUCTION: Shoulder dystocia is an obstetric emergency that carries serious risks for mother and fetus. This makes it necessary to train shoulder dystocia management more effectively, i.e., to achieve better management and higher-quality care. Thus, this study was carried out to compare the impact of simulation-based versus oral training on the shoulder dystocia management skills of midwives employed in obstetric clinics in Mashhad city (Iran) during 2012. METHODS: The current research is a two-group clinical trial conducted on 51 midwives employed in obstetric clinics in Mashhad city in 2012. A questionnaire on personal characteristics and awareness of shoulder dystocia, together with a practical examination (objective structured clinical examination), was used for data collection. The learners were divided into two groups by randomized allocation. In the oral group, training was delivered as a lecture with a short film shown at the end. In the simulation group, the shoulder dystocia management technique was simulated, and training was conducted through instructor role-playing and the use of a moulage (station). The duration of the training course (4 h) and the content of the educational workshop were identical for both groups. The practical examination was held for the learners before and immediately after the training course. The data were analyzed with descriptive statistics and the Mann–Whitney U and Wilcoxon tests using SPSS software (version 16). The significance level was set at P < 0.05 in all cases. RESULTS: The total mean score for shoulder dystocia management skill increased significantly after the intervention in both groups (P < 0.0001). Similarly, the Mann–Whitney U-test indicated that the total mean score for shoulder dystocia management skill after the intervention was significantly greater in the simulation group than in the oral group (P = 0.040). CONCLUSION: Training in a simulated delivery room by means of role-playing is an effective method for teaching shoulder dystocia management skills, so this program is recommended for training in this skill. PMID:28616417
Sood, Akshay; Ghani, Khurshid R; Ahlawat, Rajesh; Modi, Pranjal; Abaza, Ronney; Jeong, Wooju; Sammon, Jesse D; Diaz, Mireya; Kher, Vijay; Menon, Mani; Bhandari, Mahendra
2014-08-01
Traditional evaluation of the learning curve (LC) of an operation has been retrospective. Furthermore, LC analysis does not permit patient safety monitoring. To prospectively monitor patient safety during the learning phase of robotic kidney transplantation (RKT) and determine when it could be considered learned using the techniques of statistical process control (SPC). From January through May 2013, 41 patients with end-stage renal disease underwent RKT with regional hypothermia at one of two tertiary referral centers adopting RKT. Transplant recipients were classified into three groups based on the robotic training and kidney transplant experience of the surgeons: group 1, robot trained with limited kidney transplant experience (n=7); group 2, robot trained and kidney transplant experienced (n=20); and group 3, kidney transplant experienced with limited robot training (n=14). We employed prospective monitoring using SPC techniques, including cumulative summation (CUSUM) and Shewhart control charts, to perform LC analysis and patient safety monitoring, respectively. Outcomes assessed included post-transplant graft function and measures of surgical process (anastomotic and ischemic times). CUSUM and Shewhart control charts are time trend analytic techniques that allow comparative assessment of outcomes following a new intervention (RKT) relative to those achieved with established techniques (open kidney transplant; target value) in a prospective fashion. CUSUM analysis revealed an initial learning phase for group 3, whereas groups 1 and 2 had no to minimal learning time. The learning phase for group 3 varied depending on the parameter assessed. Shewhart control charts demonstrated no compromise in functional outcomes for groups 1 and 2. Graft function was compromised in one patient in group 3 (p<0.05) secondary to reasons unrelated to RKT. In multivariable analysis, robot training was significantly associated with improved task-completion times (p<0.01). Graft function was not adversely affected by either the lack of robotic training (p=0.22) or kidney transplant experience (p=0.72). The LC and patient safety of a new surgical technique can be assessed prospectively using CUSUM and Shewhart control chart analytic techniques. These methods allow determination of the duration of mentorship and identification of adverse events in a timely manner. A new operation can be considered learned when outcomes achieved with the new intervention are at par with outcomes following established techniques. Statistical process control techniques allowed for robust, objective, and prospective monitoring of robotic kidney transplantation and can similarly be applied to other new interventions during the introduction and adoption phase. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
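As a concrete illustration of the CUSUM idea described above (with invented numbers, not the study's data), the sketch below accumulates how far each consecutive anastomotic time exceeds a target value representing the established open technique plus an allowable slack; a cumulative sum above a decision limit h flags an ongoing learning phase, and its return toward zero suggests the new technique has been learned.

```python
# Minimal one-sided CUSUM sketch on hypothetical consecutive-case times.
import numpy as np

def cusum_upper(values, target, slack, h):
    s, chart, signals = 0.0, [], []
    for v in values:
        s = max(0.0, s + (v - target - slack))   # accumulate excess beyond target + slack
        chart.append(s)
        signals.append(s > h)
    return np.array(chart), np.array(signals)

anastomosis_min = np.array([55, 52, 50, 47, 44, 43, 41, 39, 38, 37, 36, 35])  # hypothetical
chart, signal = cusum_upper(anastomosis_min, target=40.0, slack=2.0, h=20.0)
print(np.round(chart, 1), signal)
```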
Spin Glass a Bridge Between Quantum Computation and Statistical Mechanics
NASA Astrophysics Data System (ADS)
Ohzeki, Masayuki
2013-09-01
In this chapter, we show two fascinating topics lying between quantum information processing and statistical mechanics. First, we introduce an elaborated technique, the surface code, to prepare a particular quantum state with robustness against decoherence. Interestingly, the theoretical limitation of the surface code, the accuracy threshold, to restore the quantum state has a close connection with the problem of the phase transition in a special model known as spin glasses, which is one of the most active research areas in statistical mechanics. The phase transition in spin glasses is an intractable problem, since we must treat a many-body system with complicated interactions whose signs change depending on the distance between spins. Fortunately, recent progress in spin-glass theory enables us to predict the precise location of the critical point at which the phase transition occurs. This means that statistical mechanics can be used to reveal one of the most interesting parts of quantum information processing. We show how to import this special tool from statistical mechanics into the problem of the accuracy threshold in quantum computation. Second, we show another interesting technique that employs quantum nature, quantum annealing. The purpose of quantum annealing is to search for the most favored solution of a multivariable function, namely to solve an optimization problem. The most typical instance is the traveling salesman problem: to find the minimum-length tour that visits all the cities. In quantum annealing, we introduce quantum fluctuations to drive a particular system with an artificial Hamiltonian, in which the ground state represents the optimal solution of the specific problem we desire to solve. Induction of the quantum fluctuation gives rise to the quantum tunneling effect, which allows nontrivial hopping from state to state. We then sketch a strategy to control the quantum fluctuation efficiently so as to reach the ground state. Such a generic framework is called quantum annealing. The most typical instance is quantum adiabatic computation based on the adiabatic theorem. Quantum adiabatic computation, as discussed in the other chapter, unfortunately has a crucial bottleneck for a part of the optimization problems. We here introduce several recent attempts to overcome this weak point by use of developments in statistical mechanics. Through both of these topics, we shed light on the birth of the interdisciplinary field between quantum mechanics and statistical mechanics.
Hobaiter, Catherine; Byrne, Richard W.
2010-01-01
Chimpanzee culture has generated intense recent interest, fueled by the technical complexity of chimpanzee tool-using traditions; yet it is seriously doubted whether chimpanzees are able to learn motor procedures by imitation under natural conditions. Here we take advantage of an unusual chimpanzee population as a ‘natural experiment’ to identify evidence for imitative learning of this kind in wild chimpanzees. The Sonso chimpanzee community has suffered from high levels of snare injury and now has several manually disabled members. Adult male Tinka, with near-total paralysis of both hands, compensates for his inability to scratch his back manually by employing a distinctive technique of holding a growing liana taut while making side-to-side body movements against it. We found that seven able-bodied young chimpanzees also used this ‘liana-scratch’ technique, although they had no need to. The distribution of the liana-scratch technique was statistically associated with individuals' range overlap with Tinka and the extent of time they spent in parties with him, confirming that the technique is acquired by social learning. The motivation for able-bodied chimpanzees copying his variant is unknown, but the fact that they do is evidence that the imitative learning of motor procedures from others is a natural trait of wild chimpanzees. PMID:20700527
Determining the Optimal Number of Clusters with the Clustergram
NASA Technical Reports Server (NTRS)
Fluegemann, Joseph K.; Davies, Misty D.; Aguirre, Nathan D.
2011-01-01
Cluster analysis aids research in many different fields, from business to biology to aerospace. It consists of using statistical techniques to group objects in large sets of data into meaningful classes. However, this process of ordering data points presents much uncertainty because it involves several steps, many of which are subject to researcher judgment as well as inconsistencies depending on the specific data type and research goals. These steps include the method used to cluster the data, the variables on which the cluster analysis will be operating, the number of resulting clusters, and parts of the interpretation process. In most cases, the number of clusters must be guessed or estimated before employing the clustering method. Many remedies have been proposed, but none is unassailable and certainly not for all data types. Thus, the aim of current research for better techniques of determining the number of clusters is generally confined to demonstrating that the new technique excels other methods in performance for several disparate data types. Our research makes use of a new cluster-number-determination technique based on the clustergram: a graph that shows how the number of objects in the cluster and the cluster mean (the ordinate) change with the number of clusters (the abscissa). We use the features of the clustergram to make the best determination of the cluster-number.
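The quantities plotted on a clustergram can be computed with a short sketch. The data, feature weights and range of k below are invented for illustration; the idea is just to record, for each candidate number of clusters, the size and weighted mean of every cluster so that one can see where the cluster means split as k grows.

```python
# Minimal clustergram-style sketch on toy data: cluster sizes and weighted means vs k.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 1, (50, 3)),
               rng.normal(4, 1, (50, 3)),
               rng.normal(8, 1, (50, 3))])
weights = np.ones(3) / 3                         # combine features into one ordinate value

for k in range(1, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    summary = [(int(np.sum(labels == c)),
                round(float(np.mean(X[labels == c] @ weights)), 2))
               for c in range(k)]
    print(f"k={k}:", summary)                    # (cluster size, cluster mean) at each k
```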
New Kinematical Constraints on Cosmic Acceleration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapetti, David; Allen, Steve W.; Amin, Mustafa A.
2007-05-25
We present and employ a new kinematical approach to "dark energy" studies. We construct models in terms of the dimensionless second and third derivatives of the scale factor a(t) with respect to cosmic time t, namely the present-day value of the deceleration parameter q0 and the cosmic jerk parameter, j(t). An elegant feature of this parameterization is that all ΛCDM models have j(t) = 1 (constant), which facilitates simple tests for departures from the ΛCDM paradigm. Applying our model to redshift-independent distance measurements, from type Ia supernovae and X-ray cluster gas mass fraction measurements, we obtain clear statistical evidence for a late-time transition from a decelerating to an accelerating phase. For a flat model with constant jerk, j(t) = j, we measure q0 = -0.81 ± 0.14 and j = 2.16 (+0.81/-0.75), results that are consistent with ΛCDM at about the 1σ confidence level. In comparison to dynamical analyses, the kinematical approach uses a different model set and employs a minimum of prior information, being independent of any particular gravity theory. The results obtained with this new approach therefore provide important additional information and we argue that both kinematical and dynamical techniques should be employed in future dark energy studies, where possible.
Physics Manpower, 1973, Education and Employment Studies.
ERIC Educational Resources Information Center
American Inst. of Physics, New York, NY.
Discussed in this document are the changes within the physics profession, their causes and effect. Detailed statistical data are supplied concerning physics enrollments, the institutions where physics is taught, the faculty in physics departments, and the nonacademic employment of physicists. Other topics include employment, education, minority…
Bacci, Silvia; Seracini, Marco; Chiavarini, Manuela; Bartolucci, Francesco; Minelli, Liliana
2017-01-01
The aim of this study was to investigate the relationship between employment status (permanent employment, fixed-term employment, unemployment, other) and perceived health status in a sample of the Italian population. Data were obtained from the European Union Statistics on Income and Living Conditions (EU-SILC) study for the period 2009-2012. The sample consists of 4,848 individuals, each with a complete record of observations over four years, for a total of 19,392 observations. The causal relationship between perceived/self-reported health status and employment status was tested using a global logit model (STATA). Our results confirm a significant association between employment status and perceived health, as well as between perceived health status and economic status. Unemployment that depended on an actual lack of work opportunities, and not on individual disability, was found to be the most significant determinant of perceived health status; a higher educational level produces a better perceived health status.
Pretorius, P. Hendrik; Johnson, Karen L.; King, Michael A.
2016-01-01
We have recently been successful in the development and testing of rigid-body motion tracking, estimation and compensation for cardiac perfusion SPECT based on a visual tracking system (VTS). The goal of this study was to evaluate in patients the effectiveness of our rigid-body motion compensation strategy. Sixty-four patient volunteers were asked to remain motionless or execute some predefined body motion during an additional second stress perfusion acquisition. Acquisitions were performed using the standard clinical protocol with 64 projections acquired through 180 degrees. All data were reconstructed with an ordered-subsets expectation-maximization (OSEM) algorithm using 4 projections per subset and 5 iterations. All physical degradation factors were addressed (attenuation, scatter, and distance dependent resolution), while a 3-dimensional Gaussian rotator was used during reconstruction to correct for six-degree-of-freedom (6-DOF) rigid-body motion estimated by the VTS. Polar map quantification was employed to evaluate compensation techniques. In 54.7% of the uncorrected second stress studies there was a statistically significant difference in the polar maps, and in 45.3% this made a difference in the interpretation of segmental perfusion. Motion correction reduced the impact of motion such that with it 32.8 % of the polar maps were statistically significantly different, and in 14.1% this difference changed the interpretation of segmental perfusion. The improvement shown in polar map quantitation translated to visually improved uniformity of the SPECT slices. PMID:28042170
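For readers unfamiliar with the reconstruction step mentioned above, here is a toy 1-D OSEM sketch. It is not the clinical code and omits attenuation, scatter, resolution and motion modeling; the image is simply updated subset by subset, with a few projections per subset, mirroring the 4-projection subsets and 5 iterations quoted in the abstract.

```python
# Minimal toy OSEM sketch: multiplicative subset updates on a random 1-D problem.
import numpy as np

rng = np.random.default_rng(10)
n_pix, n_proj = 32, 64
A = rng.random((n_proj, n_pix))                     # toy system (projection) matrix
x_true = rng.random(n_pix) * 10
y = rng.poisson(A @ x_true)                         # noisy projection data

x = np.ones(n_pix)                                  # uniform initial image
subsets = np.array_split(np.arange(n_proj), 16)     # 16 subsets of 4 projections each
for _ in range(5):                                  # 5 iterations, as in the study protocol
    for s in subsets:
        As = A[s]
        ratio = y[s] / np.maximum(As @ x, 1e-12)    # measured / modeled projections
        x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```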
Instrumental and statistical methods for the comparison of class evidence
NASA Astrophysics Data System (ADS)
Liszewski, Elisa Anne
Trace evidence is a major field within forensic science. Association of trace evidence samples can be problematic due to sample heterogeneity and a lack of quantitative criteria for comparing spectra or chromatograms. The aim of this study is to evaluate different types of instrumentation for their ability to discriminate among samples of various types of trace evidence. Chemometric analysis, including techniques such as Agglomerative Hierarchical Clustering, Principal Components Analysis, and Discriminant Analysis, was employed to evaluate instrumental data. First, automotive clear coats were analyzed by using microspectrophotometry to collect UV absorption data. In total, 71 samples were analyzed with classification accuracy of 91.61%. An external validation was performed, resulting in a prediction accuracy of 81.11%. Next, fiber dyes were analyzed using UV-Visible microspectrophotometry. While several physical characteristics of cotton fiber can be identified and compared, fiber color is considered to be an excellent source of variation, and thus was examined in this study. Twelve dyes were employed, some being visually indistinguishable. Several different analyses and comparisons were done, including an inter-laboratory comparison and external validations. Lastly, common plastic samples and other polymers were analyzed using pyrolysis-gas chromatography/mass spectrometry, and their pyrolysis products were then analyzed using multivariate statistics. The classification accuracy varied dependent upon the number of classes chosen, but the plastics were grouped based on composition. The polymers were used as an external validation and misclassifications occurred with chlorinated samples all being placed into the category containing PVC.
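A minimal sketch of the chemometric pipeline described above, run on synthetic spectra rather than the dissertation's measurements: principal components analysis reduces the spectral dimension and linear discriminant analysis classifies the scores, with a simple hold-out estimate of accuracy.

```python
# Minimal chemometric classification sketch on synthetic "spectra".
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(11)
n_per_class, n_wavelengths, n_classes = 30, 200, 4
shapes = rng.uniform(0, 1, size=(n_classes, n_wavelengths))      # class-specific spectral shapes
X = np.vstack([s + rng.normal(scale=0.2, size=(n_per_class, n_wavelengths)) for s in shapes])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(X_tr, y_tr)
print("hold-out classification accuracy:", model.score(X_te, y_te))
```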
DATA ON YOUTH, 1967, A STATISTICAL DOCUMENT.
ERIC Educational Resources Information Center
SCHEIDER, GEORGE
THE DATA IN THIS REPORT ARE STATISTICS ON YOUTH THROUGHOUT THE UNITED STATES AND IN NEW YORK STATE. INCLUDED ARE DATA ON POPULATION, SCHOOL STATISTICS, EMPLOYMENT, FAMILY INCOME, JUVENILE DELINQUENCY AND YOUTH CRIME (INCLUDING NEW YORK CITY FIGURES), AND TRAFFIC ACCIDENTS. THE STATISTICS ARE PRESENTED IN THE TEXT AND IN TABLES AND CHARTS. (NH)
ERIC Educational Resources Information Center
Meletiou-Mavrotheris, Maria
2004-01-01
While technology has become an integral part of introductory statistics courses, the programs typically employed are professional packages designed primarily for data analysis rather than for learning. Findings from several studies suggest that use of such software in the introductory statistics classroom may not be very effective in helping…
ERIC Educational Resources Information Center
Lovett, Jennifer Nickell
2016-01-01
The purpose of this study is to provide researchers, mathematics educators, and statistics educators information about the current state of preservice secondary mathematics teachers' preparedness to teach statistics. To do so, this study employed an explanatory mixed methods design to quantitatively examine the statistical knowledge and statistics…
The Effect of Sexual Abstinence on Females’ Educational Attainment
SABIA, JOSEPH J.; REES, DANIEL I.
2009-01-01
A number of studies have shown that teenagers who abstain from sex are more likely to graduate from high school and attend college than their sexually active peers. However, it is unclear whether this association represents a causal relationship or can be explained by unmeasured heterogeneity. We employ a variety of statistical techniques to distinguish between these hypotheses, using data on females from the National Longitudinal Study of Adolescent Health. Our results provide evidence that delaying first intercourse leads to an increased likelihood of graduating from high school. This relationship appears to be strongest among respondents in the bottom third of the ability distribution. Controlling for fertility reduces, but does not eliminate, the estimated effect of delaying intercourse. PMID:20084825
Multispectral processing based on groups of resolution elements
NASA Technical Reports Server (NTRS)
Richardson, W.; Gleason, J. M.
1975-01-01
Several nine-point rules are defined and compared with previously studied rules. One of the rules performed well in boundary areas, but with reduced efficiency in field interiors; another combined best performance on field interiors with good sensitivity to boundary detail. The basic threshold gradient and some modifications were investigated as a means of boundary point detection. The hypothesis testing methods of closed-boundary formation were also tested and evaluated. An analysis of the boundary detection problem was initiated, employing statistical signal detection and parameter estimation techniques to analyze various formulations of the problem. These formulations permit the atmospheric and sensor system effects on the data to be thoroughly analyzed. Various boundary features and necessary assumptions can also be investigated in this manner.
Data survey on the effect of product features on competitive advantage of selected firms in Nigeria.
Olokundun, Maxwell; Iyiola, Oladele; Ibidunni, Stephen; Falola, Hezekiah; Salau, Odunayo; Amaihian, Augusta; Peter, Fred; Borishade, Taiye
2018-06-01
The main objective of this study was to present a data article that investigates the effect of product features on a firm's competitive advantage. Few studies have examined how the features of a product could help drive the competitive advantage of a firm. A descriptive research method was used. The Statistical Package for the Social Sciences (SPSS 22) was employed for the analysis of one hundred and fifty (150) valid questionnaires completed by small business owners registered under the Small and Medium Enterprises Development Agency of Nigeria (SMEDAN). Stratified and simple random sampling techniques were employed; reliability and validity procedures were also confirmed. The field data set is made publicly available to enable critical or extended analysis.
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1979-01-01
In order to establish transmitter power and receiver sensitivity levels at frequencies above 10 GHz, the designers of earth-satellite telecommunication systems are interested in cumulative rain fade statistics at variable path orientations, elevation angles, climatological regions, and frequencies. They are also interested in establishing optimum space diversity performance parameters. This work examines the many elements involved in employing single, non-attenuating-frequency radars to arrive at the desired information. The elements examined include radar techniques and requirements, phenomenological assumptions, path attenuation formulations and procedures, as well as error budgeting and calibration analysis. Included are the pertinent results of previous investigators who have used radar for rain attenuation modeling. Suggestions are made for improving present methods.
Rosso, Osvaldo A; Ospina, Raydonal; Frery, Alejandro C
2016-01-01
We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to an One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups.
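A minimal sketch of one of the descriptors named above, permutation (Bandt-Pompe) entropy of an ordinal-pattern distribution, fed to a One-Class Support Vector Machine; the signature coordinates here are synthetic noise, and the full six-feature set (entropy, complexity, Fisher information for both coordinates) is not reproduced.

```python
import numpy as np
from itertools import permutations
from sklearn.svm import OneClassSVM

def permutation_entropy(x, order=4, delay=1):
    """Normalized Shannon entropy of Bandt-Pompe ordinal patterns."""
    patterns = list(permutations(range(order)))
    counts = dict.fromkeys(patterns, 0)
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[tuple(int(v) for v in np.argsort(window))] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    return -(p * np.log(p)).sum() / np.log(len(patterns))

# Placeholder: entropy of horizontal and vertical pen coordinates as a 2-feature vector
rng = np.random.default_rng(2)
genuine = np.array([[permutation_entropy(rng.normal(size=500)),
                     permutation_entropy(rng.normal(size=500))] for _ in range(30)])
clf = OneClassSVM(gamma="scale", nu=0.1).fit(genuine)
print(clf.predict(genuine[:5]))   # +1 = accepted as genuine, -1 = flagged
```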
M.S.L.A.P. Modular Spectral Line Analysis Program documentation
NASA Technical Reports Server (NTRS)
Joseph, Charles L.; Jenkins, Edward B.
1991-01-01
MSLAP is a software for analyzing spectra, providing the basic structure to identify spectral features, to make quantitative measurements of this features, and to store the measurements for convenient access. MSLAP can be used to measure not only the zeroth moment (equivalent width) of a profile, but also the first and second moments. Optical depths and the corresponding column densities across the profile can be measured as well for sufficiently high resolution data. The software was developed for an interactive, graphical analysis where the computer carries most of the computational and data organizational burden and the investigator is responsible only for all judgement decisions. It employs sophisticated statistical techniques for determining the best polynomial fit to the continuum and for calculating the uncertainties.
Signal Processing Methods for Liquid Rocket Engine Combustion Stability Assessments
NASA Technical Reports Server (NTRS)
Kenny, R. Jeremy; Lee, Erik; Hulka, James R.; Casiano, Matthew
2011-01-01
The J2X Gas Generator engine design specifications include dynamic, spontaneous, and broadband combustion stability requirements. These requirements are verified empirically based on high-frequency chamber pressure measurements and analyses. Dynamic stability is determined with the dynamic pressure response due to an artificial perturbation of the combustion chamber pressure (bomb testing), and spontaneous and broadband stability are determined from the dynamic pressure responses during steady operation starting at specified power levels. J2X Workhorse Gas Generator testing included bomb tests with multiple hardware configurations and operating conditions, including a configuration used explicitly for engine verification test series. This work covers signal processing techniques developed at Marshall Space Flight Center (MSFC) to help assess engine design stability requirements. Dynamic stability assessments were performed following both the CPIA 655 guidelines and a MSFC in-house developed statistical-based approach. The statistical approach was developed to better verify when the dynamic pressure amplitudes corresponding to a particular frequency returned back to pre-bomb characteristics. This was accomplished by first determining the statistical characteristics of the pre-bomb dynamic levels. The pre-bomb statistical characterization provided 95% coverage bounds; these bounds were used as a quantitative measure to determine when the post-bomb signal returned to pre-bomb conditions. The time for post-bomb levels to acceptably return to pre-bomb levels was compared to the dominant frequency-dependent time recommended by CPIA 655. Results for multiple test configurations, including stable and unstable configurations, were reviewed. Spontaneous stability was assessed using two processes: 1) characterization of the ratio of the peak response amplitudes to the excited chamber acoustic mode amplitudes and 2) characterization of the variability of the peak response's frequency over the test duration. This characterization process assists in evaluating the discreteness of a signal as well as the stability of the chamber response. Broadband stability was assessed using a running root-mean-square evaluation. These techniques were also employed, in a comparative analysis, on available Fastrac data, and these results are presented here.
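A minimal sketch of the general idea of the statistical recovery check described above: characterize the pre-bomb amplitude distribution, form roughly 95% coverage bounds, and find when the post-bomb amplitude re-enters them. The signal, window length, and percentile choice are placeholder assumptions, not the MSFC implementation.

```python
import numpy as np

def recovery_time(amplitude, t, t_bomb, pre_window=2.0, coverage=(2.5, 97.5)):
    """Time after the bomb at which band amplitude re-enters pre-bomb percentile bounds."""
    pre = amplitude[(t > t_bomb - pre_window) & (t < t_bomb)]
    lo, hi = np.percentile(pre, coverage)          # ~95% coverage of pre-bomb levels
    post = (t >= t_bomb)
    inside = (amplitude >= lo) & (amplitude <= hi) & post
    return t[inside][0] - t_bomb if inside.any() else np.nan

# Placeholder signal: a decaying bomb response added to a stationary background amplitude
t = np.linspace(0, 10, 5000)
rng = np.random.default_rng(3)
amp = 1 + 0.05 * rng.normal(size=t.size) + 5 * np.exp(-(t - 5).clip(0) * 3) * (t >= 5)
print(f"recovery time ~ {recovery_time(amp, t, t_bomb=5.0):.3f} s")
```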
Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.
Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan
2016-09-01
Craniofacial reconstruction (CFR) is used to recreate a likeness of original facial appearance for an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) of anatomical landmarks, related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, where different landmark definitions may affect the contrasting results. In the present study, a total of 90,198 one-to-one correspondence skull vertices are established on 171 head CT-scans and the FSTT of each corresponding vertex is calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that 76.12% of overall vertices indicate that the FSTT is greater in males than females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. These sex-related significant differences are found at 55.12% of all vertices and the statistically age-related significant differences are depicted between the three age groups at a majority of all vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances are produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective in understanding the distribution of FSTT and the construction of a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.
Rodriguez-Florez, Naiara; Bruse, Jan L; Borghi, Alessandro; Vercruysse, Herman; Ong, Juling; James, Greg; Pennec, Xavier; Dunaway, David J; Jeelani, N U Owase; Schievano, Silvia
2017-10-01
Spring-assisted cranioplasty is performed to correct the long and narrow head shape of children with sagittal synostosis. Such corrective surgery involves osteotomies and the placement of spring-like distractors, which gradually expand to widen the skull until removal about 4 months later. Due to its dynamic nature, associations between surgical parameters and post-operative 3D head shape features are difficult to comprehend. The current study aimed at applying population-based statistical shape modelling to gain insight into how the choice of surgical parameters such as craniotomy size and spring positioning affects post-surgical head shape. Twenty consecutive patients with sagittal synostosis who underwent spring-assisted cranioplasty at Great Ormond Street Hospital for Children (London, UK) were prospectively recruited. Using a nonparametric statistical modelling technique based on mathematical currents, a 3D head shape template was computed from surface head scans of sagittal patients after spring removal. Partial least squares (PLS) regression was employed to quantify and visualise trends of localised head shape changes associated with the surgical parameters recorded during spring insertion: anterior-posterior and lateral craniotomy dimensions, anterior spring position and distance between anterior and posterior springs. Bivariate correlations between surgical parameters and corresponding PLS shape vectors demonstrated that anterior-posterior (Pearson's [Formula: see text]) and lateral craniotomy dimensions (Spearman's [Formula: see text]), as well as the position of the anterior spring ([Formula: see text]) and the distance between both springs ([Formula: see text]) on average had significant effects on head shapes at the time of spring removal. Such effects were visualised on 3D models. Population-based analysis of 3D post-operative medical images via computational statistical modelling tools allowed for detection of novel associations between surgical parameters and head shape features achieved following spring-assisted cranioplasty. The techniques described here could be extended to other cranio-maxillofacial procedures in order to assess post-operative outcomes and ultimately facilitate surgical decision making.
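A minimal sketch of the partial least squares step and the bivariate correlation tests mentioned above, assuming synthetic surgical parameters and shape descriptors; the currents-based template computation itself is not reproduced, and the parameter names are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(4)
# Placeholder data: 20 patients, 4 surgical parameters, 50-dimensional shape descriptor
params = rng.normal(size=(20, 4))      # AP/lateral craniotomy size, spring position, spring distance
shape = params @ rng.normal(size=(4, 50)) + 0.5 * rng.normal(size=(20, 50))

pls = PLSRegression(n_components=2).fit(params, shape)
shape_scores = pls.y_scores_[:, 0]     # dominant PLS shape vector

for j, name in enumerate(["AP craniotomy", "lateral craniotomy", "anterior spring", "spring distance"]):
    r, p = pearsonr(params[:, j], shape_scores)
    rho, p_s = spearmanr(params[:, j], shape_scores)
    print(f"{name}: Pearson r={r:+.2f} (p={p:.3f}), Spearman rho={rho:+.2f} (p={p_s:.3f})")
```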
Designs for surge immunity in critical electronic facilities
NASA Technical Reports Server (NTRS)
Roberts, Edward F., Jr.
1991-01-01
In recent years, Federal Aviation Administration (FAA) embarked on a program replacing older tube type electronic equipment with newer solid state equipment. This replacement program dramatically increased the susceptibility of the FAA's facilities to lightning related damages. The proposal is presented of techniques which may be employed to lessen the susceptibility of new FAA electronic facility designs to failures resulting from lightning related surges and transients as well as direct strikes. The general concept espoused is one of a consistent system approach employing both perimeter and internal protection. It compares the technique presently employed to reduce electronic noise with other techniques which reduce noise while lowering susceptibility to lightning related damage. It is anticipated that these techniques will be employed in the design of an Air Traffic Control Tower in a high isokeraunic area. This facility would be subjected to rigorous monitoring over a multi-year period to provide quantitative data hopefully supporting the advantage of this design.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC.
To compile its projections of future employment levels, the Bureau of Labor Statistics (BLS) combines the following five interlinked models in a six-step process: a labor force model, an econometric model of the U.S. economy, an industry activity model, an industry labor demand model, and an occupational labor demand model. The BLS was asked to…
Measuring Efficiency and Tradeoffs in Attainment of EEO Goals.
1982-02-01
in FY78 and FY79, i.e., these goals are based on undifferentiated Civilian Labor Force (CLF) ratios required for reporting by the Equal Employment...Lewis and R.J. Niehaus, "Design and Development of Equal Employment Opportunity Human Resources Planning Models," NPRDC TR 79-141 (San Diego: Navy...Approach to Analysis of Tradeoffs Among Household Production Outputs," American Statistical Association 1979 Proceedings of the Social Statistics Section
Evaluation of methods to estimate lake herring spawner abundance in Lake Superior
Yule, D.L.; Stockwell, J.D.; Cholwek, G.A.; Evrard, L.M.; Schram, S.; Seider, M.; Symbal, M.
2006-01-01
Historically, commercial fishers harvested Lake Superior lake herring Coregonus artedi for their flesh, but recently operators have targeted lake herring for roe. Because no surveys have estimated spawning female abundance, direct estimates of fishing mortality are lacking. The primary objective of this study was to determine the feasibility of using acoustic techniques in combination with midwater trawling to estimate spawning female lake herring densities in a Lake Superior statistical grid (i.e., a 10′ latitude × 10′ longitude area over which annual commercial harvest statistics are compiled). Midwater trawling showed that mature female lake herring were largely pelagic during the night in late November, accounting for 94.5% of all fish caught exceeding 250 mm total length. When calculating acoustic estimates of mature female lake herring, we excluded backscattering from smaller pelagic fishes like immature lake herring and rainbow smelt Osmerus mordax by applying an empirically derived threshold of −35.6 dB. We estimated the average density of mature females in statistical grid 1409 at 13.3 fish/ha and the total number of spawning females at 227,600 (95% confidence interval = 172,500–282,700). Using information on mature female densities, size structure, and fecundity, we estimate that females deposited 3.027 billion (109) eggs in grid 1409 (95% confidence interval = 2.356–3.778 billion). The relative estimation error of the mature female density estimate derived using a geostatistical model—based approach was low (12.3%), suggesting that the employed method was robust. Fishing mortality rates of all mature females and their eggs were estimated at 2.3% and 3.8%, respectively. The techniques described for enumerating spawning female lake herring could be used to develop a more accurate stock–recruitment model for Lake Superior lake herring.
Hardiman, S; Miller, K; Murphy, M
1993-01-01
Safety observations during the clinical development of Mentane (velnacrine maleate) have included the occurrence of generally asymptomatic liver enzyme elevations confined to patients with Alzheimer's disease (AD). The clinical presentation of this reversible hepatocellular injury is analogous to that reported for tetrahydroaminoacridine (THA). Direct liver injury, possibly associated with the production of a toxic metabolite, would be consistent with reports of aberrant xenobiotic metabolism in Alzheimer's disease patients. Since a patient related aberration in drug metabolism was suspected, a biostatistical strategy was developed with the objective of predicting hepatotoxicity in individual patients prior to exposure to velnacrine maleate. The method used logistic regression techniques with variable selection restricted to those items which could be routinely and inexpensively accessed at screen evaluation for potential candidates for treatment. The model was to be predictive (a marker for eventual hepatotoxicity) rather than a causative model, and techniques employed "goodness of fit", percentage correct, and positive and negative predictive values. On the basis of demographic and baseline laboratory data from 942 patients, the PROPP statistic was developed (the Physician Reference Of Predicted Probabilities). Main effect variables included age, gender, and nine hematological and serum chemistry variables. The sensitivity of the current model is approximately 49%, specificity approximately 88%. Using prior probability estimates, however, in which the patient's likelihood of liver toxicity is presumed to be at least 30%, the positive predictive value ranged between 64-77%. Although the clinical utility of this statistic will require refinements and additional prospective confirmation, its potential existence speaks to the possibility of markers for idiosyncratic drug metabolism in patients with Alzheimer's disease.
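A minimal sketch, not the PROPP statistic itself, of fitting a logistic screening model on baseline variables and reporting sensitivity, specificity, and a positive predictive value re-expressed for an assumed prior probability of toxicity; all data and the 30% prior are placeholder assumptions taken from the figures quoted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# Placeholder screening data: age, gender, and 9 baseline laboratory variables
X = rng.normal(size=(942, 11))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=942) > 1.5).astype(int)  # synthetic hepatotoxicity flag

model = LogisticRegression(max_iter=1000).fit(X, y)
pred = model.predict(X)

tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
sens, spec = tp / (tp + fn), tn / (tn + fp)

# Positive predictive value re-expressed for an assumed 30% prior probability of toxicity
prior = 0.30
ppv = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, PPV at 30% prior={ppv:.2f}")
```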
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Pain variable means were higher for the conventional technique as compared with the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant differences were found (P > 0.05). Patient satisfaction showed no statistically significant differences. The average times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, showing statistically significant differences (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted with the conventional technique.
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique is highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The methodology proposed can potentially lead to significantly reduction in the time required to optimize a tomo-PIV reconstruction, also leading to better quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data and velocity measurement uncertainties were computed. Results indicated that BIMART and SMART algorithms produced reconstructed volumes with equivalent quality as the standard MART with the benefit of reduced computational time.
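A minimal sketch of a multiplicative algebraic reconstruction technique (MART) update of the kind referenced above, on a toy pixel-voxel weighting matrix; real tomo-PIV implementations derive the weights from camera calibrations and use far larger volumes, so the matrix, relaxation factor, and iteration count here are placeholder assumptions.

```python
import numpy as np

def mart(W, b, n_iter=10, mu=1.0, eps=1e-12):
    """Minimal MART: multiplicative row-by-row corrections of the voxel intensities.
    W : (n_pixels, n_voxels) weighting matrix; b : (n_pixels,) recorded pixel intensities."""
    x = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ x + eps
            x *= (b[i] / proj) ** (mu * W[i])   # exponent weighted by the pixel-voxel weight
    return x

rng = np.random.default_rng(6)
W = rng.random((40, 100)) * (rng.random((40, 100)) < 0.2)   # sparse toy weights
x_true = rng.random(100)
b = W @ x_true
x_rec = mart(W, b)
```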
Sanmiquel, Lluís; Bascompta, Marc; Rossell, Josep M.; Anticoi, Hernán Francisco; Guash, Eduard
2018-01-01
An analysis of occupational accidents in the mining sector was conducted using the data from the Spanish Ministry of Employment and Social Safety between 2005 and 2015, and data-mining techniques were applied. Data was processed with the software Weka. Two scenarios were chosen from the accidents database: surface and underground mining. The most important variables involved in occupational accidents and their association rules were determined. These rules are composed of several predictor variables that cause accidents, defining its characteristics and context. This study exposes the 20 most important association rules in the sector—either surface or underground mining—based on the statistical confidence levels of each rule as obtained by Weka. The outcomes display the most typical immediate causes, along with the percentage of accidents with a basis in each association rule. The most important immediate cause is body movement with physical effort or overexertion, and the type of accident is physical effort or overexertion. On the other hand, the second most important immediate cause and type of accident are different between the two scenarios. Data-mining techniques were chosen as a useful tool to find out the root cause of the accidents. PMID:29518921
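The study above mined confidence-ranked association rules with Weka; as a rough stand-in, the sketch below derives association rules from a toy one-hot accident table using the mlxtend library. The column names, support and confidence thresholds are placeholder assumptions, not the study's attributes.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot table: each row is an accident, each column a categorical attribute value
df = pd.DataFrame({
    "cause=overexertion": [1, 1, 1, 0, 1, 0, 1, 1],
    "type=physical_effort": [1, 1, 0, 0, 1, 0, 1, 1],
    "scenario=underground": [0, 1, 1, 0, 0, 1, 1, 0],
    "injury=back": [1, 1, 0, 0, 1, 0, 1, 1],
}).astype(bool)

frequent = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```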
Hybrid Disease Diagnosis Using Multiobjective Optimization with Evolutionary Parameter Optimization
Nalluri, MadhuSudana Rao; K., Kannan; M., Manisha
2017-01-01
With the widespread adoption of e-Healthcare and telemedicine applications, accurate, intelligent disease diagnosis systems have been keenly sought. In recent years, numerous individual machine learning-based classifiers have been proposed and tested, and it is now broadly accepted that no single classifier can effectively classify and diagnose all diseases. This has seen a number of recent research attempts to arrive at a consensus using ensemble classification techniques. In this paper, a hybrid system is proposed that diagnoses ailments by optimizing the individual classifier parameters of two classifier techniques, namely the support vector machine (SVM) and the multilayer perceptron (MLP). We employ three recent evolutionary algorithms to optimize the parameters of the classifiers above, leading to six alternative hybrid disease diagnosis systems, also referred to as hybrid intelligent systems (HISs). Multiple objectives, namely, prediction accuracy, sensitivity, and specificity, have been considered to compare the efficacy of the proposed hybrid systems with existing ones. The proposed model is evaluated on 11 benchmark datasets, and the obtained results demonstrate that our proposed hybrid diagnosis systems perform better in terms of disease prediction accuracy, sensitivity, and specificity. Pertinent statistical tests were carried out to substantiate the efficacy of the obtained results. PMID:29065626
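A minimal sketch of evolutionary hyperparameter tuning of an SVM against cross-validated accuracy, using scipy's differential evolution as a stand-in for the three evolutionary algorithms used in the paper; the benchmark dataset, search bounds, and single-objective fitness are placeholder assumptions (the study optimized multiple objectives).

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)   # one medical benchmark dataset as a stand-in

def neg_accuracy(theta):
    log_C, log_gamma = theta
    clf = SVC(C=10 ** log_C, gamma=10 ** log_gamma)
    return -cross_val_score(clf, X, y, cv=3).mean()

result = differential_evolution(neg_accuracy, bounds=[(-2, 3), (-5, 1)],
                                maxiter=10, popsize=8, seed=0)
print("best log10(C), log10(gamma):", result.x, "CV accuracy:", -result.fun)
```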
NASA Technical Reports Server (NTRS)
Rampe, E. B.; Lanza, N. L.
2012-01-01
Orbital near-infrared (NIR) reflectance spectra of the martian surface from the OMEGA and CRISM instruments have identified a variety of phyllosilicates in Noachian terrains. The types of phyllosilicates present on Mars have important implications for the aqueous environments in which they formed, and, thus, for recognizing locales that may have been habitable. Current identifications of phyllosilicates from martian NIR data are based on the positions of spectral absorptions relative to laboratory data of well-characterized samples and from spectral ratios; however, some phyllosilicates can be difficult to distinguish from one another with these methods (i.e. illite vs. muscovite). Here we employ a multivariate statistical technique, principal component analysis (PCA), to differentiate between spectrally similar phyllosilicate minerals. PCA is commonly used in a variety of industries (pharmaceutical, agricultural, viticultural) to discriminate between samples. Previous work using PCA to analyze raw NIR reflectance data from mineral mixtures has shown that this is a viable technique for identifying mineral types, abundances, and particle sizes. Here, we evaluate PCA of second-derivative NIR reflectance data as a method for classifying phyllosilicates and test whether this method can be used to identify phyllosilicates on Mars.
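A minimal sketch of the preprocessing-plus-PCA step described above: a Savitzky-Golay second derivative of each spectrum followed by principal component analysis. The spectra are synthetic placeholders, and the smoothing window and component count are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# Placeholder NIR reflectance spectra: 30 samples x 300 wavelength channels
spectra = np.cumsum(rng.normal(size=(30, 300)), axis=1)

# Second-derivative preprocessing (Savitzky-Golay), then PCA on the derivative spectra
d2 = savgol_filter(spectra, window_length=11, polyorder=3, deriv=2, axis=1)
scores = PCA(n_components=3).fit_transform(d2)
print(scores.shape)   # each spectrum reduced to 3 principal-component scores
```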
NASA Astrophysics Data System (ADS)
Mahmud, M. H.; Nordin, A. J.; Saad, F. F. Ahmad; Fattah Azman, A. Z.
2014-11-01
This study aims to estimate the radiation effective dose resulting from whole body fluorine-18 fluorodeoxyglucose Positron Emission Tomography (18F-FDG PET) scanning as compared to conventional Computed Tomography (CT) techniques in evaluating oncology patients. We reviewed 19 oncology patients who underwent 18F-FDG PET/CT at our centre for cancer staging. Internal and external doses were estimated using radioactivity of injected FDG and volume CT Dose Index (CTDIvol), respectively, employing the published and modified dose coefficients. The median differences in dose among the conventional CT and PET protocols were determined using the Kruskal-Wallis test, with p < 0.05 considered significant. The median (interquartile range, IQR) effective doses of non-contrasted CT, contrasted CT and PET scanning protocols were 7.50 (9.35) mSv, 9.76 (3.67) mSv and 6.30 (1.20) mSv, respectively, resulting in the total dose of 21.46 (8.58) mSv. A statistically significant difference was observed in the median effective dose between the three protocols (p < 0.01). The effective dose of the whole body 18F-FDG PET technique may be the lowest among the conventional CT imaging techniques studied.
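A minimal sketch of the Kruskal-Wallis comparison named above; the per-patient dose values are placeholders, not the study data.

```python
from scipy.stats import kruskal

# Placeholder per-patient effective doses (mSv) for the three protocols
non_contrast_ct = [7.1, 8.4, 6.9, 12.0, 7.8]
contrast_ct     = [9.5, 10.2, 8.8, 11.3, 9.9]
pet             = [6.1, 6.4, 5.9, 6.8, 6.3]

stat, p = kruskal(non_contrast_ct, contrast_ct, pet)
print(f"Kruskal-Wallis H={stat:.2f}, p={p:.4f}")   # p < 0.05 -> protocols differ in median dose
```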
Castro-González, María Isabel; Carrillo-Domínguez, Silvia
2015-09-01
Regular consumption of fish benefits health due to its content of n-3 fatty acids, but cooking can enhance or degrade the concentration of fatty acids (FA), since they are susceptible to oxidation by cooking temperatures and times. The objective was to analyze the effect of six cooking techniques on total lipid (TL) and FA content in marlin and hake and to select the technique that best preserves health-beneficial FA. Fillets were subjected to different cooking techniques: steaming (ST), foiled in aluminum paper (FAP), foiled in banana leaf (FBL), gas oven (GO), microwave oven (MO) and light frying (FL). FA were identified by gas chromatography/FID. Marlin: FL increased the concentration of TL and MO decreased it. Statistically, PUFA, SFA and EPA + DHA increased with FAP; MUFA decreased with FBL and increased with FL. Hake: FL increased the concentration of all FA groups while ST decreased it. The SFA values and the indices of atherogenicity (IA), thrombogenicity (IT), peroxidisability and the hypocholesterolemic/hypercholesterolemic fatty acid ratio (HH) suggested that hake is a fish with greater health benefits, regardless of the technique employed. By its EPA + DHA content, marlin seems to be an excellent choice if cooked FAP. The FL technique in both species decreased the IA and IT and significantly increased the HH; however, the n-3/n-6 ratio decreased. ST seems to be the least desirable cooking technique for both species. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Pedagogical Techniques Employed by the Television Show "MythBusters"
NASA Astrophysics Data System (ADS)
Zavrel, Erik
2016-11-01
"MythBusters," the long-running though recently discontinued Discovery Channel science entertainment television program, has proven itself to be far more than just a highly rated show. While its focus is on entertainment, the show employs an array of pedagogical techniques to communicate scientific concepts to its audience. These techniques include: achieving active learning, avoiding jargon, employing repetition to ensure comprehension, using captivating demonstrations, cultivating an enthusiastic disposition, and increasing intrinsic motivation to learn. In this content analysis, episodes from the show's 10-year history were examined for these techniques. "MythBusters" represents an untapped source of pedagogical techniques, which science educators may consider availing themselves of in their tireless effort to better reach their students. Physics educators in particular may look to "MythBusters" for inspiration and guidance in how to incorporate these techniques into their own teaching and help their students in the learning process.
Equal Employment + Equal Pay = Multiple Problems for Colleges and Universities
ERIC Educational Resources Information Center
Steinbach, Sheldon Elliot; Reback, Joyce E.
1974-01-01
Issues involved in government regulation of university employment practices are discussed: confidentiality of records, pregnancy as a disability, alleged discrimination in benefits, tests and other employment criteria, seniority and layoff, reverse discrimination, use of statistics for determination of discrimination, and the Equal Pay Act. (JT)
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distribution of the device parameters and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reduction of 39.1% and 30.7% at most in worst-case delay and input-dependent leakage respectively while the normalized deviation shrinking in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit more reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
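A minimal sketch of the 10,000-sweep Gaussian Monte Carlo evaluation described above, using a deliberately simplified first-order delay model; the sensitivity coefficients, variation sigmas, and the baseline-versus-optimized contrast are placeholder assumptions, not the actual FinFET cell characterization.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 10_000     # Monte Carlo sweeps, matching the evaluation described above

# Gaussian device-parameter variations (placeholder sigmas, as fractions of nominal)
dvth  = rng.normal(0, 0.03, N)    # threshold-voltage variation
dlfin = rng.normal(0, 0.02, N)    # fin/gate-length variation

def delay(dvth, dlfin, s_vth, s_l):
    """Placeholder first-order delay model: nominal delay 1 plus parameter sensitivities."""
    return 1.0 + s_vth * dvth + s_l * dlfin

d_baseline  = delay(dvth, dlfin, s_vth=2.0, s_l=1.5)   # assumed non-optimized cell
d_optimized = delay(dvth, dlfin, s_vth=1.2, s_l=1.0)   # assumed mixed-FBB/RBB cell (lower sensitivity)

for name, d in [("baseline", d_baseline), ("optimized", d_optimized)]:
    print(f"{name}: mean={d.mean():.3f}  std={d.std():.4f}  normalized std={d.std()/d.mean():.4f}")
```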
Teixeira, Kelly Sivocy Sampaio; da Cruz Fonseca, Said Gonçalves; de Moura, Luís Carlos Brigido; de Moura, Mario Luís Ribeiro; Borges, Márcia Herminia Pinheiro; Barbosa, Euzébio Guimaraes; De Lima E Moura, Túlio Flávio Accioly
2018-02-05
The World Health Organization recommends that TB treatment be administered using combination therapy. The methodologies for simultaneously quantifying associated drugs are highly complex, being costly, extremely time consuming and producing chemical residues harmful to the environment. The need to seek alternative techniques that minimize these drawbacks is widely discussed in the pharmaceutical industry. Therefore, the objective of this study was to develop and validate a multivariate calibration model in association with the near infrared spectroscopy technique (NIR) for the simultaneous determination of rifampicin, isoniazid, pyrazinamide and ethambutol. These models allow the quality control of these medicines to be optimized using simple, fast, low-cost techniques that produce no chemical waste. In the NIR-PLS method, spectral readings were acquired in the 10,000-4000 cm⁻¹ range using an infrared spectrophotometer (IRPrestige-21, Shimadzu) with a resolution of 4 cm⁻¹ and 20 sweeps, under controlled temperature and humidity. For construction of the model, the central composite experimental design was employed on the program Statistica 13 (StatSoft Inc.). All spectra were treated by computational tools for multivariate analysis using partial least squares regression (PLS) on the software program Pirouette 3.11 (Infometrix, Inc.). Variable selections were performed by the QSAR modeling program. The models developed by NIR in association with multivariate analysis provided good prediction of the APIs for the external samples and were therefore validated. For the tablets, however, the slightly different quantitative compositions of excipients compared to the mixtures prepared for building the models led to results that were not statistically similar, despite having prediction errors considered acceptable in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Hybrid optical CDMA-FSO communications network under spatially correlated gamma-gamma scintillation.
Jurado-Navas, Antonio; Raddo, Thiago R; Garrido-Balsells, José María; Borges, Ben-Hur V; Olmos, Juan José Vegas; Monroy, Idelfonso Tafur
2016-07-25
In this paper, we propose a new hybrid network solution based on asynchronous optical code-division multiple-access (OCDMA) and free-space optical (FSO) technologies for last-mile access networks, where fiber deployment is impractical. The architecture of the proposed hybrid OCDMA-FSO network is thoroughly described. The users access the network in a fully asynchronous manner by means of assigned fast frequency hopping (FFH)-based codes. In the FSO receiver, an equal gain-combining technique is employed along with intensity modulation and direct detection. New analytical formalisms for evaluating the average bit error rate (ABER) performance are also proposed. These formalisms, based on the spatially correlated gamma-gamma statistical model, are derived considering three distinct scenarios, namely, uncorrelated, totally correlated, and partially correlated channels. Numerical results show that users can successfully achieve error-free ABER levels for the three scenarios considered as long as forward error correction (FEC) algorithms are employed. Therefore, OCDMA-FSO networks can be a prospective alternative to deliver high-speed communication services to access networks with deficient fiber infrastructure.
Using complex networks for text classification: Discriminating informative and imaginative documents
NASA Astrophysics Data System (ADS)
de Arruda, Henrique F.; Costa, Luciano da F.; Amancio, Diego R.
2016-01-01
Statistical methods have been widely employed in recent years to grasp many language properties. The application of such techniques have allowed an improvement of several linguistic applications, such as machine translation and document classification. In the latter, many approaches have emphasised the semantical content of texts, as is the case of bag-of-word language models. These approaches have certainly yielded reasonable performance. However, some potential features such as the structural organization of texts have been used only in a few studies. In this context, we probe how features derived from textual structure analysis can be effectively employed in a classification task. More specifically, we performed a supervised classification aiming at discriminating informative from imaginative documents. Using a networked model that describes the local topological/dynamical properties of function words, we achieved an accuracy rate of up to 95%, which is much higher than similar networked approaches. A systematic analysis of feature relevance revealed that symmetry and accessibility measurements are among the most prominent network measurements. Our results suggest that these measurements could be used in related language applications, as they play a complementary role in characterising texts.
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified, linearized equivalent circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares algorithm and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. In addition, a discrete wavelet transform technique is employed to capture the statistical information of past dynamics of input currents, which are utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, SOC estimation results and predicted future battery currents. The performance of the proposed methodology has been verified by a lithium-ion battery cell. Experimental results indicate that the proposed method can provide an accurate SOC and parameter estimation and that the predicted RDT can help address range anxiety.
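A minimal sketch of the recursive least-squares parameter estimation step mentioned above, on a toy two-parameter regressor; the regressor construction, forgetting factor, and signal model are placeholder assumptions, and the UKF SOC estimator and wavelet-based current prediction are not reproduced.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update: y ~ phi @ theta, with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

# Placeholder model: terminal-voltage deviation ~ theta1*i[k] + theta2*i[k-1] (toy regressor)
rng = np.random.default_rng(9)
true_theta = np.array([0.05, 0.02])
theta, P = np.zeros(2), np.eye(2) * 100.0
i_prev = 0.0
for _ in range(500):
    i_now = rng.normal()
    phi = np.array([i_now, i_prev])
    y = phi @ true_theta + 0.001 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
    i_prev = i_now
print("estimated parameters:", theta)
```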
Photoacoustic Analysis of the Penetration Kinetics of Cordia verbenacea DC in Human Skin
NASA Astrophysics Data System (ADS)
Carvalho, S. S.; Barja, P. R.
2012-11-01
Phonophoresis consists of the utilization of ultrasound radiation associated to pharmacological agents in order to enhance transdermal penetration of applied drugs. It is a widely employed resource in physiotherapy practice, normally associated with anti-inflammatory drugs, such as Acheflan. This drug was developed in Brazil from the essential oil of Cordia verbenacea DC, a native plant of the Brazilian southern coast. In previous studies, the photoacoustic (PA) technique proved effective in the study of the penetration kinetics of topically applied products and in the evaluation of drug delivery after phonophoresis application. The present work aimed to evaluate the penetration kinetics of Acheflan in human skin, employing in vivo PA measurements after massage application or phonophoresis application. Ten volunteers (aged between 18 and 30 years) took part in the study. Time evolution of the PA signal was fitted to a Boltzmann curve, S-shaped. After statistical analysis, PA measurements have shown drug penetration for both application forms, but drug delivery was more evident after phonophoresis application, with a characteristic penetration time of less than 15 min for the stratum corneum.
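A minimal sketch of fitting an S-shaped Boltzmann curve to a time series of photoacoustic signal values, as described above; the signal values, parameter names, and initial guesses are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, s_min, s_max, t_half, tau):
    """S-shaped Boltzmann curve: signal rises from s_min to s_max around time t_half."""
    return s_min + (s_max - s_min) / (1 + np.exp(-(t - t_half) / tau))

# Placeholder PA signal evolution (arbitrary units) over 30 minutes
t = np.linspace(0, 30, 60)
rng = np.random.default_rng(10)
signal = boltzmann(t, 1.0, 2.5, 12.0, 3.0) + 0.05 * rng.normal(size=t.size)

popt, _ = curve_fit(boltzmann, t, signal, p0=[1.0, 2.5, 10.0, 2.0])
print(f"characteristic penetration time t_half ~ {popt[2]:.1f} min")
```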
A novel health indicator for on-line lithium-ion batteries remaining useful life prediction
NASA Astrophysics Data System (ADS)
Zhou, Yapeng; Huang, Miaohua; Chen, Yupu; Tao, Ye
2016-07-01
Prediction of the remaining useful life (RUL) of lithium-ion batteries plays an important role in an intelligent battery management system. The capacity and internal resistance are often used as the battery health indicator (HI) for quantifying degradation and predicting RUL. However, on-line measurement of capacity is hardly realizable because batteries are rarely fully charged and discharged in service, and on-line measurement of internal resistance is extremely expensive. Therefore, an alternative approach is needed. In this work, a novel HI is extracted from the operating parameters of lithium-ion batteries for degradation modeling and RUL prediction. Moreover, the Box-Cox transformation is employed to improve HI performance. Then Pearson and Spearman correlation analyses are utilized to evaluate the similarity between real capacity and the estimated capacity derived from the HI. Next, both a simple statistical regression technique and an optimized relevance vector machine are employed to predict the RUL based on the presented HI. The correlation analyses and prediction results show the efficiency and effectiveness of the proposed HI for battery degradation modeling and RUL prediction.
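A minimal sketch of the Box-Cox transformation and the Pearson/Spearman similarity check described above; the raw indicator and capacity series are synthetic placeholders, not the extracted HI.

```python
import numpy as np
from scipy.stats import boxcox, pearsonr, spearmanr

rng = np.random.default_rng(11)
# Placeholder cycle-by-cycle data: a raw health indicator and the capacity it should track
capacity = np.linspace(1.0, 0.7, 200) + 0.01 * rng.normal(size=200)
raw_hi = np.exp(2.0 * capacity) + 0.05 * rng.normal(size=200)   # nonlinearly related HI

hi_bc, lam = boxcox(raw_hi)          # Box-Cox transform to linearize/normalize the HI
r, _ = pearsonr(hi_bc, capacity)
rho, _ = spearmanr(hi_bc, capacity)
print(f"lambda={lam:.2f}, Pearson r={r:.3f}, Spearman rho={rho:.3f}")
```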
Monthly monsoon rainfall forecasting using artificial neural networks
NASA Astrophysics Data System (ADS)
Ganti, Ravikumar
2014-10-01
The Indian agriculture sector depends heavily on monsoon rainfall for successful harvesting. In the past, prediction of rainfall was mainly performed using regression models, which provide reasonable accuracy in the modelling and forecasting of complex physical systems. Recently, Artificial Neural Networks (ANNs) have been proposed as efficient tools for modelling and forecasting. A feed-forward multi-layer perceptron type of ANN architecture trained using the popular back-propagation algorithm was employed in this study. Other techniques investigated for modeling monthly monsoon rainfall include linear and non-linear regression models for comparison purposes. The data employed in this study include monthly rainfall and the monthly average of the daily maximum temperature in the North Central region of India. Specifically, four regression models and two ANN models were developed. The performance of the various models was evaluated using a wide variety of standard statistical parameters and scatter plots. The results obtained in this study for forecasting monsoon rainfall using ANNs have been encouraging. India's economy and agricultural activities can be managed more effectively with the availability of accurate monsoon rainfall forecasts.
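A minimal sketch of a feed-forward multi-layer perceptron regressor evaluated with standard statistical measures, in the spirit of the study above; the predictors, network size, and synthetic data are placeholder assumptions rather than the North Central region records.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(12)
# Placeholder monthly records: [previous-month rainfall, monthly mean of daily max temperature]
X = np.column_stack([rng.gamma(2.0, 50.0, 600), rng.normal(35.0, 3.0, 600)])
y = 0.6 * X[:, 0] + 5.0 * (40.0 - X[:, 1]) + rng.normal(0, 20.0, 600)   # synthetic rainfall

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)).fit(X_tr, y_tr)
pred = ann.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE={rmse:.1f}, R^2={r2_score(y_te, pred):.2f}")
```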
ERIC Educational Resources Information Center
Hollenbeck, Kevin
A study examined the effect of education and training on the economy and on employment outcomes. Data collected during a 1982 nationwide telephone survey of 3,500 employers were used as the basis for statistical models of voluntary and involuntary job separations and job promotions. Four major conclusions resulted from the modeling process…
ERIC Educational Resources Information Center
National Centre for Vocational Education Research (NCVER), 2017
2017-01-01
This publication presents information on employers' use and views of the vocational education and training (VET) system. The findings relate to the various ways in which Australian employers use the VET system and unaccredited training to meet their skill needs, and their satisfaction with these methods of training. Australian employers can engage…
Employment status and heart disease risk factors in middle-aged women: the Rancho Bernardo Study.
Kritz-Silverstein, D; Wingard, D L; Barrett-Connor, E
1992-01-01
BACKGROUND. In recent years, an increasing number of women have been entering the labor force. It is known that in men, employment is related to heart disease risk, but there are few studies examining this association among women. METHODS. The relation between employment status and heart disease risk factors including lipid and lipoprotein levels, systolic and diastolic blood pressure, fasting and postchallenge plasma glucose and insulin levels, was examined in 242 women aged 40 to 59 years, who were participants in the Rancho Bernardo Heart and Chronic Disease Survey. At the time of a follow-up clinic visit between 1984 and 1987, 46.7% were employed, primarily in managerial positions. RESULTS. Employed women smoked fewer cigarettes, drank less alcohol, and exercised more than unemployed women, but these differences were not statistically significant. After adjustment for covariates, employed women had significantly lower total cholesterol and fasting plasma glucose levels than unemployed women. Differences on other biological variables, although not statistically significant, also favored the employed women. CONCLUSIONS. Results of this study suggest that middle-aged women employed in managerial positions are healthier than unemployed women. PMID:1739150
Trends in study design and the statistical methods employed in a leading general medicine journal.
Gosho, M; Sato, Y; Nagashima, K; Takahashi, S
2018-02-01
Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate study design and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Due to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (eg Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazard model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel design, such as adaptive dose selection and sample size re-estimation, was sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in the light of the information found in some publications. Use of adaptive design with interim analyses is increasing after the presentation of the FDA guidance for adaptive design. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Asal, F. F.
2012-07-01
Digital elevation data obtained from different Engineering Surveying techniques is utilized in generating a Digital Elevation Model (DEM), which is employed in many Engineering and Environmental applications. This data is usually in discrete point format, making it necessary to utilize an interpolation approach for the creation of the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods and tends to neglect visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers in order to examine its potential for assessing the effects of the IDW power on DEM quality. Real elevation data were collected in the field using a total station instrument over corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques. Visual analysis showed that smoothing of the DEM decreases with the increase in the power value up to a power of four; however, increasing the power beyond four does not produce noticeable changes in the 2D and 3D views of the DEM. The statistical analysis supported these results: the Standard Deviation (SD) of the DEM increased with increasing power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four gave 60% and 75%, respectively. This reflects the decrease in DEM smoothing with increasing IDW power. The study has also shown that visual methods supported by statistical analysis have good potential for DEM quality assessment.
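A minimal sketch of inverse-distance-weighted interpolation and the power-versus-spread comparison described above; the surveyed points, terrain surface, and grid are synthetic placeholders, not the field data.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered elevations onto query points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2) + eps
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(13)
pts = rng.uniform(0, 100, size=(200, 2))                  # surveyed points (total-station shots)
z = np.sin(pts[:, 0] / 15) * 5 + pts[:, 1] * 0.1          # synthetic corrugated terrain

# Grid the same data with several IDW powers; higher power -> less smoothing -> larger spread
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
for p in (1, 2, 4):
    dem = idw(pts, z, grid, power=p)
    print(f"power={p}: DEM standard deviation = {dem.std():.3f}")
```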
Certification Can Count: The Case of Aircraft Mechanics. Issues in Labor Statistics. Summary 02-03.
ERIC Educational Resources Information Center
Bureau of Labor Statistics, Washington, DC.
This document is a summary of aerospace industry technician statistics gathered by the Occupational Employment Statistics Survey for the year 2000 by the Department of Labor, Bureau of Labor Statistics. The data includes the following: (1) a comparison of wages earned by Federal Aviation Administration (FAA) certified and non-FAA certified…
Code of Federal Regulations, 2010 CFR
2010-10-01
Section 40.111, Transportation, Office of the..., Testing Laboratories: § 40.111 When and how must a laboratory disclose statistical summaries and other information it maintains? (a) As a laboratory, you must transmit an aggregate statistical summary, by employer...
The Functional Relationship between Maternal Employment, Self-Concept; and Family Orientation.
ERIC Educational Resources Information Center
Goodwin, Paul; Newman, Isadore
This study investigated the relationships between maternal employment during three periods in the child's life, the child's self-concept, and family orientation. Variables statistically controlled were intactness of the family, father's employment status, the child's sex, the child's race, and the family's socioeconomic status. It was hypothesized…
Women and Nontraditional Work.
ERIC Educational Resources Information Center
Mort, Heidi; Reisman, Janet
This fact sheet summarizes labor market statistics on nontraditional jobs for women and public policy, barriers, and strategies regarding such employment. Among the data presented are the following: nontraditional jobs for women are jobs in which 75 percent or more of those employed are men; 9 percent of all working women are employed in…
NASA Astrophysics Data System (ADS)
Lutz, Norbert W.; Bernard, Monique
2018-02-01
We recently suggested a new paradigm for statistical analysis of thermal heterogeneity in (semi-)aqueous materials by 1H NMR spectroscopy, using water as a temperature probe. Here, we present a comprehensive in silico and in vitro validation that demonstrates the ability of this new technique to provide accurate quantitative parameters characterizing the statistical distribution of temperature values in a volume of (semi-)aqueous matter. First, line shape parameters of numerically simulated water 1H NMR spectra are systematically varied to study a range of mathematically well-defined temperature distributions. Then, corresponding models based on measured 1H NMR spectra of agarose gel are analyzed. In addition, dedicated samples based on hydrogels or biological tissue are designed to produce temperature gradients changing over time, and dynamic NMR spectroscopy is employed to analyze the resulting temperature profiles at sub-second temporal resolution. Accuracy and consistency of the previously introduced statistical descriptors of temperature heterogeneity are determined: weighted median and mean temperature, standard deviation, temperature range, temperature mode(s), kurtosis, skewness, entropy, and relative areas under temperature curves. Potential and limitations of this method for quantitative analysis of thermal heterogeneity in (semi-)aqueous materials are discussed in view of prospective applications in materials science as well as biology and medicine.
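A minimal sketch of the statistical descriptors listed above (weighted mean, standard deviation, skewness, kurtosis, and Shannon entropy) computed from a temperature axis and a weight profile; the bimodal profile stands in for a temperature distribution derived from a water 1H line shape and is a placeholder, not the validation data.

```python
import numpy as np

def weighted_stats(temp, weight):
    """Descriptive statistics of a temperature distribution given spectral weights."""
    w = weight / weight.sum()
    mean = np.sum(w * temp)
    var = np.sum(w * (temp - mean) ** 2)
    std = np.sqrt(var)
    skew = np.sum(w * (temp - mean) ** 3) / std ** 3
    kurt = np.sum(w * (temp - mean) ** 4) / std ** 4
    entropy = -np.sum(w[w > 0] * np.log(w[w > 0]))
    return mean, std, skew, kurt, entropy

# Placeholder: a temperature axis (converted from chemical shift) with a bimodal weight profile
temp = np.linspace(30, 45, 500)
weight = np.exp(-(temp - 35) ** 2 / 2) + 0.5 * np.exp(-(temp - 41) ** 2 / 1.5)
print(weighted_stats(temp, weight))
```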
Resolving the Antarctic contribution to sea-level rise: a hierarchical modelling framework.
Zammit-Mangion, Andrew; Rougier, Jonathan; Bamber, Jonathan; Schön, Nana
2014-06-01
Determining the Antarctic contribution to sea-level rise from observational data is a complex problem. The number of physical processes involved (such as ice dynamics and surface climate) exceeds the number of observables, some of which have very poor spatial definition. This has led, in general, to solutions that utilise strong prior assumptions or physically based deterministic models to simplify the problem. Here, we present a new approach for estimating the Antarctic contribution, which only incorporates descriptive aspects of the physically based models in the analysis and in a statistical manner. By combining physical insights with modern spatial statistical modelling techniques, we are able to provide probability distributions on all processes deemed to play a role in both the observed data and the contribution to sea-level rise. Specifically, we use stochastic partial differential equations and their relation to geostatistical fields to capture our physical understanding and employ a Gaussian Markov random field approach for efficient computation. The method, an instantiation of Bayesian hierarchical modelling, naturally incorporates uncertainty in order to reveal credible intervals on all estimated quantities. The estimated sea-level rise contribution using this approach corroborates those found using a statistically independent method. © 2013 The Authors. Environmetrics Published by John Wiley & Sons, Ltd.
Bridging stylized facts in finance and data non-stationarities
NASA Astrophysics Data System (ADS)
Camargo, Sabrina; Duarte Queirós, Sílvio M.; Anteneodo, Celia
2013-04-01
Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary patches of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of statistical features of these two quantities, which are often named stylized facts, namely the tails of the distributions of trading volume and price fluctuations, a dynamics compatible with the U-shaped profile of the volume in a trading session, and the slow decay of the autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to a mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions for the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation results for the magnitude of the price fluctuations are quite different from those for the trading volume, indicating that changes in the statistics of price fluctuations occur on a faster scale than in the case of trading volume.
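A minimal sketch of the final model-comparison step, using synthetic volume data in place of the market data analysed in the paper; the candidate distributions and the simple log-likelihood comparison are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical trading-volume sample (the study used real market data)
volume = rng.lognormal(mean=10.0, sigma=0.8, size=5000)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(volume, floc=0)               # fix the location at zero
    loglik = np.sum(dist.logpdf(volume, *params))   # higher is better
    print(f"{name:10s} log-likelihood = {loglik:.1f}")
```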
Measuring the Number of M Dwarfs per M Dwarf Using Kepler Eclipsing Binaries
NASA Astrophysics Data System (ADS)
Shan, Yutong; Johnson, John A.; Morton, Timothy D.
2015-11-01
We measure the binarity of detached M dwarfs in the Kepler field with orbital periods in the range of 1-90 days. Kepler's photometric precision and nearly continuous monitoring of stellar targets over time baselines ranging from 3 months to 4 years make its detection efficiency for eclipsing binaries nearly complete over this period range and for all radius ratios. Our investigation employs a statistical framework akin to that used for inferring planetary occurrence rates from planetary transits. The obvious simplification is that eclipsing binaries have a vastly improved detection efficiency that is limited chiefly by their geometric probabilities to eclipse. For the M-dwarf sample observed by the Kepler Mission, the fractional incidence of eclipsing binaries implies that there are 0.11 (+0.02/-0.04) close stellar companions per apparently single M dwarf. Our measured binarity is higher than previous inferences of the occurrence rate of close binaries via radial velocity techniques, at roughly the 2σ level. This study represents the first use of eclipsing binary detections from a high quality transiting planet mission to infer binary statistics. Application of this statistical framework to the eclipsing binaries discovered by future transit surveys will establish better constraints on the short-period M+M binary rate, as well as binarity measurements for stars of other spectral types.
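A highly simplified sketch of the inverse-detection-efficiency idea described above: each detected eclipsing binary is weighted by the reciprocal of its geometric eclipse probability, here approximated as (R1+R2)/a. The detections and the number of target stars are hypothetical, and the paper's actual framework includes additional corrections not shown here.

```python
import numpy as np

R_SUN_AU = 0.00465  # solar radius in astronomical units

def eclipse_probability(r1_rsun, r2_rsun, a_au):
    """Geometric probability that a binary with semi-major axis a eclipses."""
    return (r1_rsun + r2_rsun) * R_SUN_AU / a_au

# Illustrative detections: (R1, R2 in solar radii, semi-major axis in AU)
detections = [(0.5, 0.3, 0.05), (0.4, 0.4, 0.12), (0.6, 0.2, 0.30)]
n_target_stars = 4000  # hypothetical number of M dwarfs searched

# Each detection represents 1/p_eclipse companions in the underlying population
occurrence = sum(1.0 / eclipse_probability(*d) for d in detections) / n_target_stars
print(f"close companions per M dwarf ~ {occurrence:.3f}")
```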
A statistical framework for applying RNA profiling to chemical hazard detection.
Kostich, Mitchell S
2017-12-01
Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.
An Acoustic-Based Method to Detect and Quantify the Effect of Exhalation into a Dry Powder Inhaler.
Holmes, Martin S; Seheult, Jansen N; O'Connell, Peter; D'Arcy, Shona; Ehrhardt, Carsten; Healy, Anne Marie; Costello, Richard W; Reilly, Richard B
2015-08-01
Dry powder inhaler (DPI) users frequently exhale into their inhaler mouthpiece before the inhalation step. This error in technique compromises the integrity of the drug and results in poor bronchodilation. This study investigated the effect of four exhalation factors (exhalation flow rate, distance from mouth to inhaler, exhalation duration, and relative air humidity) on dry powder dose delivery. Given that acoustic energy can be related to the factors associated with exhalation sounds, we then aimed to develop a method of identifying and quantifying this critical inhaler technique error using acoustic-based methods. An in vitro test rig was developed to simulate this critical error. The effect of the four factors on subsequent drug delivery was investigated using multivariate regression models. In a further study we then used an acoustic monitoring device to unobtrusively record the sounds 22 asthmatic patients made whilst using a Diskus™ DPI. Acoustic energy was employed to automatically detect and analyze exhalation events in the audio files. All exhalation factors had a statistically significant effect on drug delivery (p<0.05); distance from the inhaler mouthpiece had the largest effect size. Humid air exhalations were found to reduce the fine particle fraction (FPF) compared to dry air. In a dataset of 110 audio files from 22 asthmatic patients, the acoustic method detected exhalations with an accuracy of 89.1%. We were able to classify exhalations occurring 5 cm or less in the direction of the inhaler mouthpiece or recording device with a sensitivity of 72.2% and specificity of 85.7%. Exhaling into a DPI has a significant detrimental effect. Acoustic-based methods can be employed to objectively detect and analyze exhalations during inhaler use, thus providing a method of remotely monitoring inhaler technique and providing personalized inhaler technique feedback.
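A rough sketch of acoustic-energy event detection on a synthetic recording, assuming a simple short-time-energy threshold; the actual algorithm, frame length, and threshold used in the study are not reproduced here.

```python
import numpy as np

def detect_exhalations(audio, fs, frame_ms=50, threshold_db=-20):
    """Return onset times (s) of frames whose short-time energy exceeds a threshold
    relative to the loudest frame -- a simple stand-in for the acoustic-energy criterion."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames.astype(float) ** 2, axis=1)
    energy_db = 10 * np.log10(energy / energy.max() + 1e-12)
    return np.where(energy_db > threshold_db)[0] * frame_ms / 1000.0

# Synthetic 3-s recording: low-level noise with a louder 0.5-s burst starting at t = 1 s
fs = 8000
rng = np.random.default_rng(1)
audio = 0.01 * rng.standard_normal(3 * fs)
audio[fs : fs + fs // 2] += 0.5 * rng.standard_normal(fs // 2)
print(detect_exhalations(audio, fs))
```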
Descriptive Statistical Techniques for Librarians. 2nd Edition.
ERIC Educational Resources Information Center
Hafner, Arthur W.
A thorough understanding of the uses and applications of statistical techniques is integral in gaining support for library funding or new initiatives. This resource is designed to help practitioners develop and manipulate descriptive statistical information in evaluating library services, tracking and controlling limited resources, and analyzing…
ERIC Educational Resources Information Center
Williams, Immanuel James; Williams, Kelley Kim
2016-01-01
Understanding summary statistics and graphical techniques is a building block for comprehending concepts beyond basic statistics. It is known that motivated students perform better in school, and using examples that students find engaging allows them to understand the concepts at a deeper level.
1987-04-01
of jobs, four types of exclusionary barriers are investigated: "segregated networks" at the candidate stage, "information bias" and " statistical ...constitutional law, and socio-economic theory (for example, Glazer, 1975; Maguire, 1980). Disagreements have been particularly strong about the preferen...will present statistics on current labor market processes that can be used to assess the continuing need for strong policies of equal employment
Shinozaki, Kazuma; Zack, Jason W.; Pylypenko, Svitlana; ...
2015-09-17
Platinum electrocatalysts supported on high surface area and Vulcan carbon blacks (Pt/HSC, Pt/V) were characterized in rotating disk electrode (RDE) setups for electrochemical area (ECA) and oxygen reduction reaction (ORR) area specific activity (SA) and mass specific activity (MA) at 0.9 V. Films fabricated using several ink formulations and film-drying techniques were characterized for a statistically significant number of independent samples. The highest quality Pt/HSC films exhibited MA of 870 ± 91 mA/mgPt and SA of 864 ± 56 μA/cm2Pt, while Pt/V had MA of 706 ± 42 mA/mgPt and SA of 1120 ± 70 μA/cm2Pt when measured in 0.1 M HClO4 at 20 mV/s, 100 kPa O2, and 23 ± 2 °C. An enhancement factor of 2.8 in the measured SA was observable on eliminating Nafion ionomer and employing extremely thin, uniform films (~4.5 μg/cm2 Pt) of Pt/HSC. The ECA for Pt/HSC (99 ± 7 m2/gPt) and Pt/V (65 ± 5 m2/gPt) were statistically invariant and insensitive to film uniformity/thickness/fabrication technique; accordingly, enhancements in MA are wholly attributable to increases in SA. Impedance measurements coupled with scanning electron microscopy were used to de-convolute the losses within the catalyst layer, ascribed to the catalyst layer resistance, oxygen diffusion, and sulfonate anion adsorption/blocking. The ramifications of these results for proton exchange membrane fuel cells have also been examined.
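The reported activities are internally consistent with the usual relation SA = MA / ECA once units are converted (mA/mgPt equals A/gPt, and m2/gPt equals 10^4 cm2/gPt), as this small arithmetic check shows.

```python
# Unit bookkeeping: MA [mA/mg_Pt] = [A/g_Pt]; ECA [m^2/g_Pt] = 1e4 [cm^2/g_Pt],
# so SA [uA/cm^2_Pt] = MA / ECA * 100.
def specific_activity(ma_mA_per_mg, eca_m2_per_g):
    return ma_mA_per_mg / eca_m2_per_g * 100.0

print(specific_activity(870, 99))   # Pt/HSC: ~879 uA/cm^2 (reported 864 +/- 56)
print(specific_activity(706, 65))   # Pt/V:   ~1086 uA/cm^2 (reported 1120 +/- 70)
```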
Tandon, Disha; Haque, Mohammed Monzoorul; Mande, Sharmila S
2016-01-01
The nature of inter-microbial metabolic interactions defines the stability of microbial communities residing in any ecological niche. Deciphering these interaction patterns is crucial for understanding the mode/mechanism(s) through which an individual microbial community transitions from one state to another (e.g. from a healthy to a diseased state). Statistical correlation techniques have been traditionally employed for mining microbial interaction patterns from taxonomic abundance data corresponding to a given microbial community. In spite of their efficiency, these correlation techniques can capture only 'pair-wise interactions'. Moreover, their emphasis on statistical significance can potentially result in missing out on several interactions that are relevant from a biological standpoint. This study explores the applicability of one of the earliest association rule mining algorithms, i.e. the 'Apriori algorithm', for deriving 'microbial association rules' from the taxonomic profile of a given microbial community. The classical Apriori approach derives association rules by analysing patterns of co-occurrence/co-exclusion between various '(subsets of) features/items' across various samples. Using real-world microbiome data, the efficiency/utility of this rule mining approach in deciphering multiple (biologically meaningful) association patterns between 'subsets/subgroups' of microbes (constituting microbiome samples) is demonstrated. As an example, association rules derived from publicly available gut microbiome datasets indicate an association between a group of microbes (Faecalibacterium, Dorea, and Blautia) that are known to have mutualistic metabolic associations among themselves. Application of the rule mining approach on gut microbiomes (sourced from the Human Microbiome Project) further indicated similar microbial association patterns in gut microbiomes irrespective of the gender of the subjects. A Linux implementation of the Association Rule Mining (ARM) software (customised for deriving 'microbial association rules' from microbiome data) is freely available for download from the following link: http://metagenomics.atc.tcs.com/arm.
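A toy sketch of the Apriori step on a presence/absence taxa table, assuming the third-party mlxtend package; the genera, thresholds, and data below are invented for illustration and do not come from the study or its ARM software.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules  # third-party package

# Toy presence/absence table: rows = microbiome samples, columns = genera
data = pd.DataFrame(
    [[1, 1, 1, 0],
     [1, 1, 1, 1],
     [1, 1, 0, 0],
     [0, 1, 1, 0],
     [1, 1, 1, 0]],
    columns=["Faecalibacterium", "Dorea", "Blautia", "Prevotella"],
).astype(bool)

# Frequent co-occurring subsets, then rules of the form {A, B} -> {C}
itemsets = apriori(data, min_support=0.6, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```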
Flocculation kinetics and aggregate structure of kaolinite mixtures in laminar tube flow.
Vaezi G, Farid; Sanders, R Sean; Masliyah, Jacob H
2011-03-01
Flocculation is commonly used in various solid-liquid separation processes in chemical and mineral industries to separate desired products or to treat waste streams. This paper presents an experimental technique to study flocculation processes in laminar tube flow. This approach allows for more realistic estimation of the shear rate to which an aggregate is exposed, as compared to more complicated shear fields (e.g. stirred tanks). A direct sampling method is used to minimize the effect of sampling on the aggregate structure. A combination of aggregate settling velocity and image analysis was used to quantify the structure of the aggregate. Aggregate size, density, and fractal dimension were found to be the most important aggregate structural parameters. The two methods used to determine aggregate fractal dimension were in good agreement. The effects of advective flow through an aggregate's porous structure and transition-regime drag coefficient on the evaluation of aggregate density were considered. The technique was applied to investigate the flocculation kinetics and the evolution of the aggregate structure of kaolin particles with an anionic flocculant under conditions similar to those of oil sands fine tailings. Aggregates were formed using a well controlled two-stage aggregation process. Detailed statistical analysis was performed to investigate the establishment of a dynamic equilibrium condition in terms of aggregate size and density evolution. An equilibrium steady state condition was obtained within 90 s of the start of flocculation, after which no further change in aggregate structure was observed. Although longer flocculation times inside the shear field could conceivably cause aggregate structure conformation, statistical analysis indicated that this did not occur for the studied conditions. The results show that the technique and experimental conditions employed here produce aggregates having a well-defined, reproducible structure. Copyright © 2011. Published by Elsevier Inc.
75 FR 41579 - Submitting Airline Data via the Internet
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-16
... Airline Information, RTS-42, Bureau of Transportation Statistics, Research and Innovative Technology... Statistics (BTS), must be submitted electronically (e- filing). The new e-filing system is designed to be... November 30, 2010. P-10 Employment Statistics by Labor Category--due February 20, 2011. A Certification...
75 FR 3926 - Submission for OMB Emergency Review: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-25
... DEPARTMENT OF LABOR Bureau of Labor Statistics Submission for OMB Emergency Review: Comment.... Agency: Bureau of Labor Statistics. Type of Review: New collection. Title of Collection: Quarterly Census... appropriation tasks the Bureau of Labor Statistics (BLS) Quarterly Census of Employment and Wages (QCEW) program...
Electric Field Fluctuations in Water
NASA Astrophysics Data System (ADS)
Thorpe, Dayton; Limmer, David; Chandler, David
2013-03-01
Charge transfer in solution, such as autoionization and ion pair dissociation in water, is governed by rare electric field fluctuations of the solvent. Knowing the statistics of such fluctuations can help explain the dynamics of these rare events. Trajectories short enough to be tractable by computer simulation are virtually certain not to sample the large fluctuations that promote rare events. Here, we employ importance sampling techniques with classical molecular dynamics simulations of liquid water to study statistics of electric field fluctuations far from their means. We find that the distributions of electric fields located on individual water molecules are not in general gaussian. Near the mean this non-gaussianity is due to the internal charge distribution of the water molecule. Further from the mean, however, there is a previously unreported Bjerrum-like defect that stabilizes certain large fluctuations out of equilibrium. As expected, differences in electric fields acting between molecules are gaussian to a remarkable degree. By studying these differences, though, we are able to determine what configurations result not only in large electric fields, but also in electric fields with long spatial correlations that may be needed to promote charge separation.
On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit
2012-12-01
This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and the optimum transmit directions for each BS align with the eigenvectors of the BS's own transmit correlation matrix of the channel. Then, we employ matrix permanents to derive a closed-form tight upper bound for the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA) for power allocation. Finally, we derive a necessary and sufficient condition for which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound of ergodic sum capacity is tight and the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
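A simplified, textbook water-filling routine over hypothetical channel eigenvalues, shown only to illustrate the flavour of the iterative water-filling step; the article's per-BS constrained algorithm operating on the capacity upper bound is more involved.

```python
import numpy as np

def water_filling(gains, total_power):
    """Textbook water-filling: maximise sum(log(1 + g_i * p_i)) s.t. sum(p_i) <= P.
    A simplified stand-in for the per-BS iterative water-filling in the article."""
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / gains.min()   # bisection bounds on the water level mu
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

eigenvalues = [2.0, 1.0, 0.3, 0.05]   # hypothetical channel eigenvalues
print(water_filling(eigenvalues, total_power=1.0))   # strong directions get most power
```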
High-throughput Screening and Statistical Learning for the Design of Transparent Conducting Oxides
NASA Astrophysics Data System (ADS)
Sutton, Christopher; Ghiringhelli, Luca; Scheffler, Matthias
Transparent conducting oxides (TCOs) represent a class of well-developed and commercialized wide-bandgap semiconductors that are crucial for many electronic devices. Al, Ga, and In-based sesquioxides are investigated as new TCOs motivated by very intriguing recent experimental work that has demonstrated bandgap engineering in ternary (AlxGayIn1-x-y)2O3 ranging from 3.8 eV to 7.5 eV by adjusting the ratio of In/Ga and Ga/Al. We employed DFT-based cluster expansion (CE) models combined with fast stochastic optimization techniques (e.g., Wang-Landau and diffusive nested sampling) in order to efficiently search for stable and metastable configurations of (AlxGayIn1-x-y)2O3 at various lattice structures. The approach also allows for a consideration of the effect of entropy on the relative stability of ternary TCOs. Statistical learning/compressed sensing is being used to efficiently identify a structure-property relationship between the targeted properties (e.g., mobilities and optical transparency) and the fundamental chemical and physical parameters that control these properties.
Sinko, William; de Oliveira, César Augusto F; Pierce, Levi C T; McCammon, J Andrew
2012-01-10
Molecular dynamics (MD) is one of the most common tools in computational chemistry. Recently, our group has employed accelerated molecular dynamics (aMD) to improve the conformational sampling over conventional molecular dynamics techniques. In the original aMD implementation, sampling is greatly improved by raising energy wells below a predefined energy level. Recently, our group presented an alternative aMD implementation where simulations are accelerated by lowering energy barriers of the potential energy surface. When coupled with thermodynamic integration simulations, this implementation showed very promising results. However, when applied to large systems, such as proteins, the simulation tends to be biased to high energy regions of the potential landscape. The reason for this behavior lies in the boost equation used since the highest energy barriers are dramatically more affected than the lower ones. To address this issue, in this work, we present a new boost equation that prevents oversampling of unfavorable high energy conformational states. The new boost potential provides not only better recovery of statistics throughout the simulation but also enhanced sampling of statistically relevant regions in explicit solvent MD simulations.
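For orientation only, a sketch of the original aMD boost (the Hamelberg-McCammon form that raises wells below a threshold E); the newer boost equation introduced in this work is not given in the abstract and is not shown here. The double-well potential and parameters are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

def amd_boost(v, e_cut, alpha):
    """Original aMD boost: dV = (E - V)^2 / (alpha + E - V) wherever V < E, else 0."""
    dv = np.where(v < e_cut, (e_cut - v) ** 2 / (alpha + e_cut - v), 0.0)
    return v + dv

x = np.linspace(-3, 3, 500)
v = 5 * (x ** 2 - 1) ** 2            # toy double-well potential
plt.plot(x, v, label="V(x)")
plt.plot(x, amd_boost(v, e_cut=4.0, alpha=2.0), label="V(x) + dV(x)")
plt.xlabel("x"); plt.ylabel("energy"); plt.legend(); plt.show()
```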
Multivariate analysis in thoracic research.
Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego
2015-03-01
Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods were developed to analyze large databases and increasingly complex data. Since modeling is the best way to represent our knowledge of reality, we should use multivariate statistical methods. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each should be employed according to the type of variables to be analyzed: dependence, interdependence, and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that we can use.
Accuracy of Physical Self-Description Among Chronic Exercisers and Non-Exercisers.
Berning, Joseph M; DeBeliso, Mark; Sevene, Trish G; Adams, Kent J; Salmon, Paul; Stamford, Bryant A
2014-11-06
This study addressed the role of chronic exercise to enhance physical self-description as measured by self-estimated percent body fat. Accuracy of physical self-description was determined in normal-weight, regularly exercising and non-exercising males with similar body mass indexes (BMIs) and females with similar BMIs (n=42 males and 45 females, of which 23 males and 23 females met criteria to be considered chronic exercisers). Statistical analyses were conducted to determine the degree of agreement between self-estimated percent body fat and actual laboratory measurements (hydrostatic weighing). Three statistical techniques were employed: Pearson correlation coefficients, Bland and Altman plots, and regression analysis. Agreement between measured and self-estimated percent body fat was superior for males and females who exercised chronically, compared to non-exercisers. The clinical implications are as follows. Satisfaction with one's body can be influenced by several factors, including self-perceived body composition. Dissatisfaction can contribute to maladaptive and destructive weight management behaviors. The present study suggests that regular exercise provides a basis for more positive weight management behaviors by enhancing the accuracy of self-assessed body composition.
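A small sketch of two of the three agreement analyses named above (Pearson correlation and Bland-Altman limits of agreement), run on hypothetical body-fat data rather than the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
measured = rng.normal(22, 6, 46)                   # hypothetical hydrostatic %body fat
estimated = measured + rng.normal(1.5, 3.0, 46)    # hypothetical self-estimates

r, p = stats.pearsonr(measured, estimated)

diff = estimated - measured
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # Bland-Altman 95% limits of agreement
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```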
Nanocluster building blocks of artificial square spin ice: Stray-field studies of thermal dynamics
NASA Astrophysics Data System (ADS)
Pohlit, Merlin; Porrati, Fabrizio; Huth, Michael; Ohno, Yuzo; Ohno, Hideo; Müller, Jens
2015-05-01
We present measurements of the thermal dynamics of a Co-based single building block of an artificial square spin ice fabricated by focused electron-beam-induced deposition. We employ micro-Hall magnetometry, an ultra-sensitive tool to study the stray field emanating from magnetic nanostructures, as a new technique to access the dynamical properties during the magnetization reversal of the spin-ice nanocluster. The obtained hysteresis loop exhibits distinct steps, displaying a reduction of their "coercive field" with increasing temperature. Therefore, thermally unstable states could be repetitively prepared by relatively simple temperature and field protocols allowing one to investigate the statistics of their switching behavior within experimentally accessible timescales. For a selected switching event, we find a strong reduction of the so-prepared states' "survival time" with increasing temperature and magnetic field. Besides the possibility to control the lifetime of selected switching events at will, we find evidence for a more complex behavior caused by the special spin ice arrangement of the macrospins, i.e., that the magnetic reversal statistically follows distinct "paths" most likely driven by thermal perturbation.
Looking for a Job While Employed. Issues in Labor Statistics. Summary 97-14.
ERIC Educational Resources Information Center
Bureau of Labor Statistics, Washington, DC.
In February 1995, a supplement to the Current Population Survey examined the job search rate among employed persons. A sample of 108,876 employed persons (excluding unpaid family workers) who had worked for their employer for at least 3 months were asked whether they had looked for other employment since December 1994. Of those surveyed, 6,044 (5.6%) had actively searched…
Reinhardt, Jan D; Post, Marcel W M; Fekete, Christine; Trezzini, Bruno; Brinkhof, Martin W G
2016-01-01
We aimed to describe labor market participation (LMP) of persons with spinal cord injury (SCI) in Switzerland, to examine potential determinants of LMP, and to compare LMP between SCI and the general population. We analyzed data from 1458 participants of employable age from the cross-sectional community survey of the Swiss Spinal Cord Injury Cohort Study. Data on LMP of the Swiss general population were obtained from the Swiss Federal Statistical Office. Factors associated with employment status as well as the amount of work performed in terms of full-time equivalent (FTE) were examined with regression techniques. 53.4% of the participants were employed at the time of the study. Adjusted odds of being employed were increased for males (OR = 1.73, 95% CI 1.33-2.25) and participants with paraplegia (OR = 1.78, 95% CI 1.40-2.27). The likelihood of being employed showed a significant concave relationship with age, peaking at age 40. The relation of LMP with education was s-shaped, while LMP was linearly related to time since injury. On average, employment rates were 30% lower than in the general population. Males with tetraplegia aged between 40 and 54 showed the greatest difference. From the 771 employed persons, the majority (81.7%) worked part-time with a median of 50% FTE (IQR: 40%-80%). Men, those with younger age, higher education, incomplete lesions, and non-traumatic etiology showed significantly increased odds of working more hours per week. Significantly more people worked part-time than in the general population with the greatest difference found for males with tetraplegia aged between 40 and 54. LMP of persons with SCI is comparatively high in Switzerland. LMP after SCI is, however, considerably lower than in the general population. Future research needs to show whether the reduced LMP in SCI reflects individual capacity adjustment, contextual constraints on higher LMP or both.
Long term electromagnetic monitoring at Parkfield, CA
NASA Astrophysics Data System (ADS)
Kappler, Karl Neil
Electric and magnetic fields in the 10^-4 to 1.0 Hz band were monitored at two sites adjacent to the San Andreas Fault near Parkfield and Hollister, California. Observed fields typically comprise natural magnetotelluric fields, with cultural and instrument noise. A data window [2002-2005], enclosing the September 28, 2004 M6 Parkfield earthquake, was analyzed to determine if anomalous electric or magnetic fields, or changes in ground conductivity, occurred before the earthquake. The data were edited, removing intervals of instrument malfunction, leaving 875 days in the four-year period. Frequent, local spike-like disturbances were removed. The distribution of these spikes was not biased around the time of the earthquake. Signal to noise ratios, estimated via magnetotelluric processing techniques, provided an index of data quality. Plots of signal and noise amplitude spectra showed the behavior of the ULF fields to be remarkably constant over the period of analysis. From these first-order plots, it is clear that most of the recorded energy is coherent over the spatial extent of the array. Three main statistical techniques were employed to separate local anomalous electric or magnetic fields from the dominant coherent natural fields: transfer function estimates between components at each site were employed to subtract the dominant field and look deeper at the 'residual' fields; the data were decomposed into principal components to identify linear combinations of array channels which are maximally uncorrelated; and the technique of canonical coherences was employed to distinguish anomalous fields which are spatially broad from anomalies which occur at a single site only, and furthermore to distinguish anomalies which are present in both the electric and magnetic fields from those which are present in only one field type. Standard remote reference apparent resistivity estimates were generated daily at Parkfield. Most of the variation was observed to be seasonal, and frequency independent, suggesting a local seasonal distortion effect. Once corrected for distortion, nearly all of the variability in the apparent resistivity was removed. In all cases, high levels of sensitivity to subtle electromagnetic effects were demonstrated, but no effects which can be described as precursors to the Parkfield earthquake were found.
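A minimal sketch of the principal-component step on a synthetic multichannel record: one coherent source mixed into four channels plus independent noise, so most variance falls in the first component. The channel mixing and noise levels are assumptions made for illustration, not values from the Parkfield array.

```python
import numpy as np

def principal_components(channels):
    """PCA of a (n_samples, n_channels) array: variance fractions and the
    channel-weight vectors of the maximally uncorrelated linear combinations."""
    x = channels - channels.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    return eigval[order] / eigval.sum(), eigvec[:, order]

# Synthetic 4-channel array dominated by one spatially coherent natural source
rng = np.random.default_rng(3)
source = rng.standard_normal(10000)
mix = np.array([1.0, 0.8, 0.6, 0.9])
data = np.outer(source, mix) + 0.2 * rng.standard_normal((10000, 4))

frac, weights = principal_components(data)
print("variance fraction per component:", np.round(frac, 3))
```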
NASA Astrophysics Data System (ADS)
Hilt, Attila; Pozsonyi, László
2012-09-01
Fixed access networks widely employ fiber-optical techniques due to the extremely wide bandwidth offered to subscribers. In the last decade, there has also been an enormous increase in the user data carried by mobile systems. The importance of fiber-optical techniques within the fixed transmission/transport networks of mobile systems is therefore inevitably increasing. This article summarizes a few reasons and gives examples of why and how fiber-optic techniques are employed efficiently in second-generation networks.
Chi-squared and C statistic minimization for low count per bin data
NASA Astrophysics Data System (ADS)
Nousek, John A.; Shue, David R.
1989-07-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
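A compact illustration, under assumed Poisson data, of why the C statistic behaves better at low counts: minimizing Cash's C recovers the sample mean of the counts, while a chi-squared fit with variances taken from the observed counts is biased low.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)
counts = rng.poisson(2.0, size=50)     # low-count "spectrum" with true mean 2

def chi2(mu):                          # Gaussian approximation with sigma^2 = max(n, 1)
    return np.sum((counts - mu) ** 2 / np.maximum(counts, 1))

def cash(mu):                          # Cash C statistic for Poisson data (up to a constant)
    return 2.0 * np.sum(mu - counts * np.log(mu))

for name, stat in [("chi-squared", chi2), ("Cash C", cash)]:
    res = minimize_scalar(stat, bounds=(0.01, 10), method="bounded")
    print(f"{name:12s} best-fit mean = {res.x:.3f}")
```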
Recent statistical methods for orientation data
NASA Technical Reports Server (NTRS)
Batschelet, E.
1972-01-01
The application of statistical methods in the areas of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of information obtained by statistical analysis.
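A brief sketch of common two-dimensional (circular) statistics of the sort surveyed in such work: mean direction, mean resultant length, and an approximate Rayleigh test for non-uniformity. The bearings are hypothetical and the p-value uses a standard series approximation.

```python
import numpy as np

def circular_summary(angles_deg):
    """Mean direction (deg), mean resultant length, and approximate Rayleigh p-value."""
    a = np.deg2rad(angles_deg)
    c, s = np.cos(a).mean(), np.sin(a).mean()
    r = np.hypot(c, s)                                   # 0 (uniform) .. 1 (concentrated)
    mean_dir = np.rad2deg(np.arctan2(s, c)) % 360
    n = len(a)
    z = n * r ** 2
    p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))    # truncated series approximation
    return mean_dir, r, p

# Hypothetical vanishing bearings of homing animals (degrees)
bearings = [350, 5, 15, 340, 10, 355, 20, 0, 345, 30]
print(circular_summary(bearings))
```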
A Critique of Divorce Statistics and Their Interpretation.
ERIC Educational Resources Information Center
Crosby, John F.
1980-01-01
Increasingly, appeals to the divorce statistic are employed to substantiate claims that the family is in a state of breakdown and marriage is passe. This article contains a consideration of reasons why the divorce statistics are invalid and/or unreliable as indicators of the present state of marriage and family. (Author)
Handbook of Labor Statistics. Bulletin 2175.
ERIC Educational Resources Information Center
Springsteen, Rosalind, Comp.; Epstein, Rosalie, Comp.
This publication makes available in one volume the major series produced by the Bureau of Labor Statistics. Technical notes preceding each major section contain information on data changes and explain the series. Forty-four tables derived from the Current Population Survey (CPS) provide statistics on labor force and employment status,…
76 FR 34385 - Program Integrity: Gainful Employment-Debt Measures
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... postsecondary education at a public institution. National Center for Education Statistics, 2004/2009 Beginning... reliable earnings information, including use of State data, survey data, or Bureau of Labor Statistics (BLS...
Overworked? An Observation of the Relationship between Student Employment and Academic Performance
ERIC Educational Resources Information Center
Logan, Jennifer; Hughes, Traci; Logan, Brian
2016-01-01
Current observations from the National Center for Education Statistics demonstrate the dramatic increase in college student employment over the past few decades. Not only are more students employed than in previous decades, students are working more hours. This could lead to declines in academic performance as hours worked increase, resulting in…
The Impact of Social Capital on the Employment of College Graduates
ERIC Educational Resources Information Center
Fengqiao, Yan; Dan, Mao
2015-01-01
This article addresses the impact of social capital on college graduate employment. After reviewing the literature, the authors analyze data collected by Peking University from 34 universities in 2005 and use statistical analysis to clarify the impact of social capital on students' choice of employment or further study, job placement rate,…
Employment and Unemployment in 1976. Special Labor Force Report 199.
ERIC Educational Resources Information Center
Bednarzik, Robert W.; St. Marie, Stephen M.
Changes in employment and unemployment in 1976, presented through the use of statistical data in tabular and chart form, are the focus of this report. Protection for the unemployed, labor force trends, and persons of Spanish origin are also discussed under separate minor headings. Under the section on employment, the following subsections are…
Sex Discrimination in Employment. Research Report No. 171.
ERIC Educational Resources Information Center
Morris, J. David; Wood, Linda B.
This report examines the status of women and the laws that have been enacted to protect women from discrimination in employment. Written in lay language, it examines employment and occupational statistics for women in the United States and in Kentucky. Following an introduction in Chapter 1, the report presents four chapters surveying the problem,…
The 1988-89 Job Outlook in Brief.
ERIC Educational Resources Information Center
White, Martha C.
1988-01-01
This article summarizes the employment outlook in 225 occupations as projected by the Bureau of Labor Statistics. It provides thumbnail sketches of employment data for each of the occupations in the 1988-89 "Occupational Outlook Handbook," on which it is based. Each entry presents the occupation's title, 1986 employment numbers, the percent change…
Employers and Child Care: What Roles Do They Play?
ERIC Educational Resources Information Center
Hayghe, Howard V.
1988-01-01
The Bureau of Labor Statistics conducted a nationwide survey of approximately 10,000 businesses and government agencies in 1987. Results show that about 2 percent of employers sponsored day-care centers and 3 percent provided financial assistance toward expenses. However, employers are doing other things to aid employees with growing children. (JOW)
20 CFR 656.40 - Determination of prevailing wage for labor certification purposes.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Occupational Employment Statistics Survey shall be used to determine the arithmetic mean, unless the employer provides an acceptable survey under paragraph (g) of this section. (3) If the employer provides a survey... education and research entities. In computing the prevailing wage for a job opportunity in an occupational...
20 CFR 656.40 - Determination of prevailing wage for labor certification purposes.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Occupational Employment Statistics Survey shall be used to determine the arithmetic mean, unless the employer provides an acceptable survey under paragraph (g) of this section. (3) If the employer provides a survey... education and research entities. In computing the prevailing wage for a job opportunity in an occupational...
20 CFR 656.40 - Determination of prevailing wage for labor certification purposes.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Occupational Employment Statistics Survey shall be used to determine the arithmetic mean, unless the employer provides an acceptable survey under paragraph (g) of this section. (3) If the employer provides a survey... education and research entities. In computing the prevailing wage for a job opportunity in an occupational...
20 CFR 656.40 - Determination of prevailing wage for labor certification purposes.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Occupational Employment Statistics Survey shall be used to determine the arithmetic mean, unless the employer provides an acceptable survey under paragraph (g) of this section. (3) If the employer provides a survey... education and research entities. In computing the prevailing wage for a job opportunity in an occupational...
20 CFR 656.40 - Determination of prevailing wage for labor certification purposes.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Occupational Employment Statistics Survey shall be used to determine the arithmetic mean, unless the employer provides an acceptable survey under paragraph (g) of this section. (3) If the employer provides a survey... education and research entities. In computing the prevailing wage for a job opportunity in an occupational...
In-school service predictors of employment for individuals with intellectual disability.
Park, Jiyoon; Bouck, Emily
2018-06-01
Although there are many secondary data analyses of the National Longitudinal Transition Study-2 (NLTS-2) investigating post-school outcomes for students with disabilities, there has been a lack of research on in-school service predictors of post-school outcomes for students in specific disability categories. This study was a secondary data analysis of NLTS-2 to investigate the relationship between current employment status and in-school services for individuals with intellectual disability. Statistical methods such as descriptive statistics and logistic regression were used to analyze the NLTS-2 data set. The main findings included that in-school services were correlated with current employment status, and that primary disability (i.e., mild intellectual disability and moderate/severe intellectual disability) was associated with current employment status. In-school services are critical in predicting current employment for individuals with intellectual disability. Also, data suggest additional research is needed to investigate various in-school services and variables that could predict employment differences between individuals with mild and moderate/severe intellectual disability. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Importance of Introductory Statistics Students Understanding Appropriate Sampling Techniques
ERIC Educational Resources Information Center
Menil, Violeta C.
2005-01-01
In this paper the author discusses the meaning of sampling, the reasons for sampling, the Central Limit Theorem, and the different techniques of sampling. Practical and relevant examples are given to make the appropriate sampling techniques understandable to students of Introductory Statistics courses. With a thorough knowledge of sampling…
Facial anthropometric differences among gender, ethnicity, and age groups.
Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald
2010-06-01
The impact of race/ethnicity on facial anthropometric data in the US workforce, and its implications for the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine the face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis to determine the effect that different demographic factors had on anthropometric features was conducted via a linear model. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender significantly contributes to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than those of Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational groups have facial features that differ significantly from those of workers in construction. Statistically significant differences in facial anthropometric dimensions (P < 0.05) were noted between males and females, all racial/ethnic groups, and the subjects who were at least 45 years old when compared to workers between 18 and 29 years of age. These findings could be important to the design and manufacture of respirators, as well as to employers responsible for supplying respiratory protective equipment to their employees.
Performance Data Gathering and Representation from Fixed-Size Statistical Data
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Jin, Haoqiang H.; Schmidt, Melisa A.; Kutler, Paul (Technical Monitor)
1997-01-01
The two commonly-used performance data types in the super-computing community, statistics and event traces, are discussed and compared. Statistical data are much more compact but lack the probative power event traces offer. Event traces, on the other hand, are unbounded and can easily fill up the entire file system during program execution. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. Two basic ideas are employed: the use of averages to replace recording data for each instance and 'formulae' to represent sequences associated with communication and control flow. The user can trade off tracing overhead, trace data size with data quality incrementally. In other words, the user will be able to limit the amount of trace data collected and, at the same time, carry out some of the analysis event traces offer using space-time views. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected with event traces. We found that the trace files thus obtained are, indeed, small, bounded and predictable before program execution, and that the quality of the space-time views generated from these statistical data are excellent. Furthermore, experimental results showed that the formulae proposed were able to capture all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at runtime to learn longer sequences.
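A toy sketch of the first idea (averages in place of per-instance records): the collector below stores one running count and mean per call site, so its size is bounded by the number of distinct sites rather than by the number of events. The call-site label is hypothetical and the mechanism is far simpler than the instrumented tracing described in the paper.

```python
from collections import defaultdict

class StatCollector:
    """Fixed-size alternative to an event trace: keep running totals per call site
    instead of one record per instance."""
    def __init__(self):
        self.count = defaultdict(int)
        self.total_time = defaultdict(float)

    def record(self, site, elapsed):
        self.count[site] += 1
        self.total_time[site] += elapsed

    def report(self):
        # (number of calls, mean time) per site -- independent of how many events occurred
        return {site: (self.count[site], self.total_time[site] / self.count[site])
                for site in self.count}

stats = StatCollector()
for t in (0.9, 1.1, 1.0):
    stats.record("MPI_Send@solver.c:42", t)   # hypothetical call site
print(stats.report())
```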
Liu, Su; Stapleton, David C
2011-01-01
We present longitudinal employment and work-incentive statistics for individuals who began receiving Social Security Disability Insurance (DI) benefits from 1996 through 2006. For the longest-observed cohort, 28 percent returned to work, 6.5 percent had their benefits suspended for work in at least 1 month, and 3.7 percent had their benefits terminated for work. The corresponding percentages are much higher for those who were younger than age 40 when they entered the DI program. Most first suspensions occurred within 5 years after entry. Cross-state variation in outcomes is high, and, to the extent observed, statistics for more recent cohorts are lower.
41 CFR 60-2.35 - Compliance status.
Code of Federal Regulations, 2010 CFR
2010-07-01
... workforce (i.e., the employment of minorities or women at a percentage rate below, or above, the goal level... obligations will be determined by analysis of statistical data and other non-statistical information which...
Should I Pack My Umbrella? Clinical versus Statistical Prediction of Mental Health Decisions
ERIC Educational Resources Information Center
Aegisdottir, Stefania; Spengler, Paul M.; White, Michael J.
2006-01-01
In this rejoinder, the authors respond to the insightful commentary of Strohmer and Arm, Chwalisz, and Hilton, Harris, and Rice about the meta-analysis on statistical versus clinical prediction techniques for mental health judgments. The authors address issues including the availability of statistical prediction techniques for real-life psychology…
Change Detection in Rough Time Series
2014-09-01
Business Statistics : An Inferential Approach, Dellen: San Francisco. [18] Winston, W. (1997) Operations Research Applications and Algorithms, Duxbury...distribution that can present significant challenges to conventional statistical tracking techniques. To address this problem the proposed method...applies hybrid fuzzy statistical techniques to series granules instead of to individual measures. Three examples demonstrated the robust nature of the
Enhancing Students' Ability to Use Statistical Reasoning with Everyday Problems
ERIC Educational Resources Information Center
Lawson, Timothy J.; Schwiers, Michael; Doellman, Maureen; Grady, Greg; Kelnhofer, Robert
2003-01-01
We discuss a technique for teaching students everyday applications of statistical concepts. We used this technique with students (n = 50) enrolled in several sections of an introductory statistics course; students (n = 45) in other sections served as a comparison group. A class of introductory psychology students (n = 24) served as a second…
Technical Note: The Initial Stages of Statistical Data Analysis
Tandy, Richard D.
1998-01-01
Objective: To provide an overview of several important data-related considerations in the design stage of a research project and to review the levels of measurement and their relationship to the statistical technique chosen for the data analysis. Background: When planning a study, the researcher must clearly define the research problem and narrow it down to specific, testable questions. The next steps are to identify the variables in the study, decide how to group and treat subjects, and determine how to measure, and the underlying level of measurement of, the dependent variables. Then the appropriate statistical technique can be selected for data analysis. Description: The four levels of measurement in increasing complexity are nominal, ordinal, interval, and ratio. Nominal data are categorical or “count” data, and the numbers are treated as labels. Ordinal data can be ranked in a meaningful order by magnitude. Interval data possess the characteristics of ordinal data and also have equal distances between levels. Ratio data have a natural zero point. Nominal and ordinal data are analyzed with nonparametric statistical techniques and interval and ratio data with parametric statistical techniques. Advantages: Understanding the four levels of measurement and when it is appropriate to use each is important in determining which statistical technique to use when analyzing data. PMID:16558489
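A small helper reflecting the mapping described above, with representative (not exhaustive) test names; the choice within each family still depends on the design of the study.

```python
# Nominal/ordinal data call for nonparametric techniques; interval/ratio data for parametric ones.
PARAMETRIC = {"interval", "ratio"}

def suggest_family(level_of_measurement):
    level = level_of_measurement.lower()
    if level in PARAMETRIC:
        return "parametric (e.g., t-test, ANOVA, Pearson correlation)"
    if level in {"nominal", "ordinal"}:
        return "nonparametric (e.g., chi-square, Mann-Whitney U, Spearman correlation)"
    raise ValueError("level must be nominal, ordinal, interval, or ratio")

for lvl in ("nominal", "ordinal", "interval", "ratio"):
    print(lvl, "->", suggest_family(lvl))
```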
Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations, imposed by real-time constraints, have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on the use of model reduction techniques together with an efficient non-linear solver. Examples of the performance of the technique will be given. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Equal Employment Opportunity Commission, Washington, DC.
The Equal Employment Opportunity Report for 1969 documents the results of job discrimination, based on more than 150,000 reports submitted by 44,000 employers covering more than 28 million workers. These reports provide statistics of employment by sex, race, and national origin in nine standard occupational categories: officials and managers,…
Size and shape measurement in contemporary cephalometrics.
McIntyre, Grant T; Mossey, Peter A
2003-06-01
The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in judging the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.
Identifying synonymy between relational phrases using word embeddings.
Nguyen, Nhung T H; Miwa, Makoto; Tsuruoka, Yoshimasa; Tojo, Satoshi
2015-08-01
Many text mining applications in the biomedical domain benefit from automatic clustering of relational phrases into synonymous groups, since it alleviates the problem of spurious mismatches caused by the diversity of natural language expressions. Most of the previous work that has addressed this task of synonymy resolution uses similarity metrics between relational phrases based on textual strings or dependency paths, which, for the most part, ignore the context around the relations. To overcome this shortcoming, we employ a word embedding technique to encode relational phrases. We then apply the k-means algorithm on top of the distributional representations to cluster the phrases. Our experimental results show that this approach outperforms state-of-the-art statistical models including latent Dirichlet allocation and Markov logic networks. Copyright © 2015 Elsevier Inc. All rights reserved.
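As a rough illustration of the clustering step described above, the sketch below averages toy word vectors into phrase vectors and applies k-means with scikit-learn. The phrases, the embedding dimensionality, and the random vectors are all hypothetical stand-ins for a trained embedding model.

```python
# Minimal sketch: encode relational phrases as averaged word vectors, then cluster.
import numpy as np
from sklearn.cluster import KMeans

phrases = ["binds to", "interacts with", "inhibits", "suppresses", "activates"]
rng = np.random.default_rng(1)
word_vec = {w: rng.normal(size=50) for p in phrases for w in p.split()}  # toy embeddings

def phrase_vector(phrase):
    # One common choice: the mean of the word vectors in the phrase.
    return np.mean([word_vec[w] for w in phrase.split()], axis=0)

X = np.vstack([phrase_vector(p) for p in phrases])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for phrase, cluster in zip(phrases, labels):
    print(cluster, phrase)
```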
Continuous-time system identification of a smoking cessation intervention
NASA Astrophysics Data System (ADS)
Timms, Kevin P.; Rivera, Daniel E.; Collins, Linda M.; Piper, Megan E.
2014-07-01
Cigarette smoking is a major global public health issue and the leading cause of preventable death in the United States. Toward a goal of designing better smoking cessation treatments, system identification techniques are applied to intervention data to describe smoking cessation as a process of behaviour change. System identification problems that draw from two modelling paradigms in quantitative psychology (statistical mediation and self-regulation) are considered, consisting of a series of continuous-time estimation problems. A continuous-time dynamic modelling approach is employed to describe the response of craving and smoking rates during a quit attempt, as captured in data from a smoking cessation clinical trial. The use of continuous-time models provides the benefits of parsimony, ease of interpretation, and the opportunity to work with uneven or missing data.
Ospina, Raydonal; Frery, Alejandro C.
2016-01-01
We present a new approach for handwritten signature classification and verification based on descriptors stemming from time causal information theory. The proposal uses the Shannon entropy, the statistical complexity, and the Fisher information evaluated over the Bandt and Pompe symbolization of the horizontal and vertical coordinates of signatures. These six features are easy and fast to compute, and they are the input to a One-Class Support Vector Machine classifier. The results are better than state-of-the-art online techniques that employ higher-dimensional feature spaces, which often require specialized software and hardware. We assess the consistency of our proposal with respect to the size of the training sample, and we also use it to classify the signatures into meaningful groups. PMID:27907014
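The verification stage described above can be illustrated with a One-Class SVM trained on a six-dimensional feature vector per signature. The sketch below uses scikit-learn with synthetic feature values; the entropy, complexity, and Fisher-information computations of the paper are not reproduced, and the nu/gamma settings are assumptions.

```python
# Illustrative One-Class SVM on six features per signature (synthetic data).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(40, 6))               # six descriptors per genuine signature
queries = np.vstack([rng.normal(0.0, 1.0, size=(5, 6)),     # genuine-like queries
                     rng.normal(3.0, 1.0, size=(5, 6))])    # forgery-like queries

scaler = StandardScaler().fit(genuine)
clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(scaler.transform(genuine))
print(clf.predict(scaler.transform(queries)))   # +1 = accepted as genuine, -1 = rejected
```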
Systems implications of L-band fade data statistics for LEO mobile systems
NASA Technical Reports Server (NTRS)
Devieux, Carrie L.
1993-01-01
This paper examines and analyzes research data on the role of foliage attenuation in signal fading between a satellite transmitter and a terrestrial vehicle-mounted receiver. The frequency band of measurement, called L-Band, includes the region 1610.0 to 1626.5 MHz. Data from tests involving various combinations of foliage and vehicle movement conditions clearly show evidence of fast fading (in excess of 0.5 dB per millisecond) and fade depths as great or greater than 16 dB. As a result, the design of a communications link power control that provides the level of accuracy necessary for power sensitive systems could be significantly impacted. Specific examples of this include the communications links that employ Code Division Multiple Access (CDMA) as a modulation technique.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
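The sketch below is a heavily simplified illustration of the weighted-pairing idea described above, not the authors' algorithm: equal-mass particles are scattered isotropically in the centre-of-mass frame, and each partner accepts the post-collision velocity with a probability set by the other particle's weight. The weights, velocities, and acceptance rule shown are assumptions for illustration only.

```python
# Simplified weighted binary scattering within a grid cell (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vector():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def scatter_pair(v1, v2, w1, w2):
    v_cm = 0.5 * (v1 + v2)                 # centre-of-mass velocity (equal masses)
    g = np.linalg.norm(v1 - v2)            # relative speed, conserved in elastic scattering
    g_new = g * random_unit_vector()       # isotropic post-collision direction
    v1_new, v2_new = v_cm + 0.5 * g_new, v_cm - 0.5 * g_new
    w_max = max(w1, w2)
    if rng.random() < w2 / w_max:          # particle 1 scattered with probability w2/w_max
        v1 = v1_new
    if rng.random() < w1 / w_max:          # particle 2 scattered with probability w1/w_max
        v2 = v2_new
    return v1, v2

v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
print(scatter_pair(v1, v2, w1=1.0, w2=0.25))
```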
Vectorized program architectures for supercomputer-aided circuit design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzoli, V.; Ferlito, M.; Neri, A.
1986-01-01
Vector processors (supercomputers) can be effectively employed in MIC or MMIC applications to solve problems of large numerical size such as broad-band nonlinear design or statistical design (yield optimization). In order to fully exploit the capabilities of vector hardware, any program architecture must be structured accordingly. This paper presents a possible approach to the "semantic" vectorization of microwave circuit design software. Speed-up factors of the order of 50 can be obtained on a typical vector processor (Cray X-MP), with respect to the most powerful scalar computers (CDC 7600), with cost reductions of more than one order of magnitude. This could broaden the horizon of microwave CAD techniques to include problems that are practically out of the reach of conventional systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tennenberg, S.D.; Jacobs, M.P.; Solomkin, J.S.
1987-04-01
Two methods for predicting adult respiratory distress syndrome (ARDS) were evaluated prospectively in a group of 81 multitrauma and sepsis patients considered at clinical high risk. A popular ARDS risk-scoring method, employing discriminant analysis equations (weighted risk criteria and oxygenation characteristics), yielded a predictive accuracy of 59% and a false-negative rate of 22%. Pulmonary alveolar-capillary permeability (PACP) was determined with a radioaerosol lung-scan technique in 23 of these 81 patients, representing a statistically similar subgroup. Lung scanning achieved a predictive accuracy of 71% (after excluding patients with unilateral pulmonary contusion) and gave no false-negatives. We propose a combination of clinical risk identification and functional determination of PACP to assess a patient's risk of developing ARDS.
de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino
2018-05-01
This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
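A rough sketch of the kernel-PLS idea (not the authors' iSPA-Kernel-PLS code) is to map the spectra through an RBF kernel and run PLS regression on the kernel matrix; the interval-selection step is omitted here. The dataset, the kernel gamma, and the number of latent variables below are hypothetical.

```python
# Rough kernel-PLS sketch: PLS regression on an RBF kernel matrix (toy data).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                                     # toy "spectra"
y = np.sin(X[:, :10].sum(axis=1)) + 0.05 * rng.normal(size=60)     # nonlinear response

X_train, X_test, y_train, y_test = X[:45], X[45:], y[:45], y[45:]
K_train = rbf_kernel(X_train, X_train, gamma=1e-3)
K_test = rbf_kernel(X_test, X_train, gamma=1e-3)

pls = PLSRegression(n_components=5).fit(K_train, y_train)
print("RMSEP:", np.sqrt(np.mean((pls.predict(K_test).ravel() - y_test) ** 2)))
```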
Moment-based metrics for global sensitivity analysis of hydrological systems
NASA Astrophysics Data System (ADS)
Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto
2017-12-01
We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
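The following is a crude, illustrative sketch of moment-based sensitivity with plain Monte Carlo (it reproduces neither the authors' metrics nor their gPCE surrogate): samples are binned by each parameter, and the spread of the conditional mean and conditional variance of the output indicates how strongly that parameter influences those moments. The toy model and bin count are assumptions.

```python
# Crude moment-based sensitivity via conditional moments (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 20000
params = rng.uniform(0, 1, size=(n, 3))                             # x1, x2, x3
y = params[:, 0] + 0.5 * params[:, 1] ** 2 + 0.01 * params[:, 2]    # toy model output

def moment_sensitivity(x, y, bins=20):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(bins)])
    cond_var = np.array([y[idx == b].var() for b in range(bins)])
    # Spread of the conditional moments, normalised by the unconditional moment.
    return (np.mean(np.abs(cond_mean - y.mean())) / abs(y.mean()),
            np.mean(np.abs(cond_var - y.var())) / y.var())

for j in range(3):
    print(f"x{j + 1} sensitivity (mean, variance):", moment_sensitivity(params[:, j], y))
```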
Pai, Chih-Wei; Saleh, Wafaa
2007-03-01
Motorcycle users tend to be more vulnerable to injury than occupants of other motorized vehicles, and this vulnerability may act synergistically with the complexity of conflicting movements between vehicles and motorcycles to increase injury severity in junction accidents. A junction collision tends to be more severe than a non-junction one because particularly injurious crash types, such as angle collisions, commonly occur there. Existing studies have applied several statistical modeling techniques to examine factors that influence the occurrence of different crashes among motorized vehicles, but surprisingly little work has empirically explored whether a particular crash type arising from a junction accident is more injurious to motorcyclists. This article investigates whether a particular collision type is more deadly to motorcyclists, conditioned on crash occurrence at T-junctions in the U.K., while controlling for environmental, vehicle, and demographic factors. The statistical modeling technique employed is the ordered probit model, using data extracted from the STATS19 accident injury database (1999-2004). The modeling identified determinants of injury severity among motorcyclists at T-junctions in the U.K. For example, an approach-turn/head-on collision is much more injurious to motorcyclists, and those riding in the early morning (i.e., 0000-0659) are more likely to be severely injured. This study offers a guideline for future research, as well as insight into potential prevention strategies that might help moderate motorcyclist injuries.
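An ordered probit of the kind described above can be sketched with recent statsmodels releases (assumed available here); the crash-type and time-of-day indicators and the simulated severity labels below are hypothetical stand-ins for the STATS19 variables.

```python
# Minimal ordered-probit sketch on synthetic injury-severity data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
head_on = rng.integers(0, 2, n)            # hypothetical crash-type indicator
early_morning = rng.integers(0, 2, n)      # hypothetical time-of-day indicator
latent = 0.8 * head_on + 0.5 * early_morning + rng.normal(size=n)
severity = pd.Series(pd.cut(latent, bins=[-np.inf, 0.5, 1.5, np.inf],
                            labels=["slight", "serious", "fatal"]))  # ordered outcome

X = pd.DataFrame({"head_on": head_on, "early_morning": early_morning})
res = OrderedModel(severity, X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())
```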
A Highly Efficient Design Strategy for Regression with Outcome Pooling
Mitchell, Emily M.; Lyles, Robert H.; Manatunga, Amita K.; Perkins, Neil J.; Schisterman, Enrique F.
2014-01-01
The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. PMID:25220822
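A minimal sketch of the pooling idea described above, under simulated data (not the authors' code or the BioCycle dataset): subjects are grouped by k-means on the predictors, the outcome is "assayed" only as the pool average, and the regression is fit on the pooled means.

```python
# K-means pooling sketch for regression with pooled outcomes (simulated data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, n_pools = 300, 30
X = rng.normal(size=(n, 2))                                   # predictors measured on everyone
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

pool_id = KMeans(n_clusters=n_pools, n_init=10, random_state=0).fit_predict(X)
X_pool = np.vstack([X[pool_id == k].mean(axis=0) for k in range(n_pools)])
y_pool = np.array([y[pool_id == k].mean() for k in range(n_pools)])   # outcome assayed per pool

fit = LinearRegression().fit(X_pool, y_pool)
print("pooled-data coefficients:", fit.intercept_, fit.coef_)
```

In practice one would also weight pools by their size; the unweighted fit above is kept deliberately simple.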
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
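As a hedged illustration of the batch, supervised setting described above, the sketch below classifies measurement vectors as secure or attacked with a standard learner; the measurements are simulated, and neither the IEEE test systems nor the authors' fusion framework is reproduced.

```python
# Supervised attack-detection sketch on simulated measurement vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
secure = rng.normal(0, 1, size=(500, 20))                          # nominal measurements
attacked = secure[:250] + rng.normal(0.8, 0.3, size=(250, 20))     # measurements with injected bias
X = np.vstack([secure, attacked])
y = np.r_[np.zeros(500), np.ones(250)]                             # 1 = attacked

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```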
NASA Astrophysics Data System (ADS)
Daniel, Amuthachelvi; Prakasarao, Aruna; Ganesan, Singaravelu
2018-02-01
The molecular level changes associated with oncogenesis precede the morphological changes in cells and tissues. Hence molecular level diagnosis would promote early diagnosis of the disease. Raman spectroscopy is capable of providing specific spectral signatures of various biomolecules present in cells and tissues under various pathological conditions. The aim of this work is to develop a non-linear multi-class statistical methodology for discrimination of normal, neoplastic and malignant cells/tissues. The tissues were classified as normal, pre-malignant and malignant by employing Principal Component Analysis followed by Artificial Neural Network (PC-ANN). The overall accuracy achieved was 99%. Further, to get an insight into the quantitative biochemical composition of the normal, neoplastic and malignant tissues, a linear combination of the major biochemicals, obtained by a non-negative least squares technique, was fit to the measured Raman spectra of the tissues. This technique confirms the changes in the major biomolecules such as lipids, nucleic acids, actin, glycogen and collagen associated with the different pathological conditions. To study the efficacy of this technique in comparison with histopathology, we have utilized Principal Component Analysis followed by Linear Discriminant Analysis (PC-LDA) to discriminate well differentiated, moderately differentiated and poorly differentiated squamous cell carcinoma with an accuracy of 94.0%. The results demonstrate that Raman spectroscopy has the potential to complement the established technique of histopathology.
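The PC-LDA step mentioned above can be illustrated as a simple scikit-learn pipeline. The synthetic "spectra" below stand in for the Raman data, and the number of principal components is an assumption; the PC-ANN model is not reproduced.

```python
# Minimal PC-LDA sketch: PCA projection followed by linear discriminant analysis.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
classes = [0, 1, 2]     # e.g. well / moderately / poorly differentiated
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(40, 300)) for c in classes])  # toy spectra
y = np.repeat(classes, 40)

pc_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(pc_lda, X, y, cv=5).mean())
```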
Non-Markovian near-infrared Q branch of HCl diluted in liquid Ar.
Padilla, Antonio; Pérez, Justo
2013-08-28
By using a non-Markovian spectral theory based on the Kubo cumulant expansion technique, we have qualitatively studied the infrared Q branch observed in the fundamental absorption band of HCl diluted in liquid Ar. The statistical parameters of the anisotropic interaction present in this spectral theory were calculated by means of molecular dynamics techniques, and we found that the values of the anisotropic correlation times are significantly greater (by a factor of two) than those previously obtained by fitting procedures or microscopic cell models. This fact is decisive for the observation in the theoretical spectral band of a central Q resonance which is absent in the many previous studies carried out with the usual theories based on Kubo cumulant expansion techniques. Although the theory used in this work only allows a qualitative study of the Q branch, we can employ it to study the unknown characteristics of the Q resonance which are difficult to obtain with the quantum simulation techniques recently developed. For example, in this study we have found that the Q branch is basically a non-Markovian (or memory) effect produced by the spectral line interferences, where the PR interferential profile basically determines the Q branch spectral shape. Furthermore, we have found that the Q resonance is principally generated by the first rotational states of the first two vibrational levels, those more affected by the action of the solvent.
NASA Astrophysics Data System (ADS)
Fisichella, M.; Shotter, A. C.; Di Pietro, A.; Figuera, P.; Lattuada, M.; Marchetta, C.; Privitera, V.; Romano, L.; Ruiz, C.; Zadro, M.
2015-12-01
For low energy reaction studies involving radioactive ion beams, the experimental reaction yields are generally small due to the low intensity of the beams. For this reason, the stacked target technique has been often used to measure excitation functions. This technique offers considerable advantages since the reaction cross-section at several energies can be simultaneously measured. In a further effort to increase yields, thick targets are also employed. The main disadvantage of the method is the degradation of the beam quality as it passes through the stack due to the statistical nature of energy loss processes and any nonuniformity of the stacked targets. This degradation can lead to ambiguities of associating effective beam energies to reaction product yields for the targets within the stack and, as a consequence, to an error in the determination of the excitation function for the reaction under study. A thorough investigation of these ambiguities is reported, and a best practice procedure of analyzing data obtained using the stacked target technique with radioactive ion beams is recommended. Using this procedure a re-evaluation is reported of some previously published sub-barrier fusion data in order to demonstrate the possibility of misinterpretations of derived excitation functions. In addition, this best practice procedure has been used to evaluate, from a new data set, the sub-barrier fusion excitation function for the reaction 6Li+120Sn .
Comparing Pattern Recognition Feature Sets for Sorting Triples in the FIRST Database
NASA Astrophysics Data System (ADS)
Proctor, D. D.
2006-07-01
Pattern recognition techniques have been used with increasing success for coping with the tremendous amounts of data being generated by automated surveys. Usually this process involves construction of training sets, typical examples of data with known classifications. Given a feature set, along with the training set, statistical methods can be employed to generate a classifier. The classifier is then applied to process the remaining data. Feature set selection, however, is still an issue. This paper presents techniques developed for accommodating data for which a substantive portion of the training set cannot be classified unambiguously, a typical case for low-resolution data. Significance tests on the sort-ordered, sample-size-normalized vote distribution of an ensemble of decision trees are introduced as a method of evaluating the relative quality of feature sets. The technique is applied to comparing feature sets for sorting a particular radio galaxy morphology, bent-doubles, from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) database. Also examined are alternative functional forms for feature sets. Associated standard deviations provide the means to evaluate the effect of the number of folds, the number of classifiers per fold, and the sample size on the resulting classifications. The technique also may be applied to situations for which, although accurate classifications are available, the feature set is clearly inadequate, but it is nonetheless desirable to make the best of the available information.
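The sketch below is a heavily simplified stand-in for the idea of comparing feature sets through ensemble vote distributions; it is not the paper's procedure. A random forest is trained per candidate feature set, the sorted class-vote fractions on held-out objects are collected, and the two distributions are compared, here with a Kolmogorov-Smirnov test as an illustrative substitute for the paper's significance test.

```python
# Compare two feature sets via sorted ensemble vote distributions (illustrative).
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
informative = rng.normal(size=(n, 5))
noise = rng.normal(size=(n, 5))
y = (informative.sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

def sorted_votes(X):
    X_tr, X_te, y_tr, _ = train_test_split(X, y, test_size=0.5, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return np.sort(rf.predict_proba(X_te)[:, 1])   # fraction of trees voting class 1

votes_a = sorted_votes(np.hstack([informative, noise]))   # candidate feature set A
votes_b = sorted_votes(noise)                             # candidate feature set B (noise only)
print(ks_2samp(votes_a, votes_b))
```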
Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?
Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie
2012-01-01
A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regards to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
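For reference, the kind of assumption check discussed above can be done in a few lines with SciPy; the data are synthetic and the specific tests shown (Shapiro-Wilk, Levene) are only one common choice.

```python
# Illustrative assumption checks before a two-sample t test (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10, 2, 40)
b = rng.normal(11, 2, 40)

print("Shapiro-Wilk group A:", stats.shapiro(a))   # normality within each group
print("Shapiro-Wilk group B:", stats.shapiro(b))
print("Levene:", stats.levene(a, b))               # equality of variances
# Graphical checks (histograms, Q-Q plots) are usually recommended alongside
# formal tests, which can be under- or over-powered depending on sample size.
print("t test:", stats.ttest_ind(a, b, equal_var=True))
```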
Managing age discrimination: an examination of the techniques used when seeking employment.
Berger, Ellie D
2009-06-01
This article examines the age-related management techniques used by older workers in their search for employment. Data are drawn from interviews with individuals aged 45-65 years (N = 30). Findings indicate that participants develop "counteractions" and "concealments" to manage perceived age discrimination. Individuals counteract employers' ageist stereotypes by maintaining their skills and changing their work-related expectations and conceal age by altering their résumés, physical appearance, and language used. This research suggests that there is a need to reexamine the hiring practices of employers and to improve legislation in relation to their accountability.
Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures
Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha
2017-01-01
Aims and Objectives: The objective of the present study is to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication; they were divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in SPSS software version 19.0 (IBM). Results: Data obtained from the three groups were subjected to a one-way ANOVA test. After the ANOVA test, results with significant variations were subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was higher with the compression molding technique compared to the injection molding techniques, which is statistically significant (P < 0.001). Conclusions: Within the limitations of this study, injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors between the two injection molding systems. PMID:28713763
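The ANOVA-plus-post-hoc pattern used in the denture study can be sketched as follows with SciPy and statsmodels. The pin-rise values are hypothetical, and Tukey's HSD is used as a stand-in post hoc test because Student-Newman-Keuls is not available in these libraries.

```python
# One-way ANOVA across three processing techniques, then pairwise post hoc comparisons.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
compression = rng.normal(0.52, 0.10, 6)        # hypothetical vertical pin rise (mm)
injection_pre = rng.normal(0.30, 0.08, 6)
injection_unpoly = rng.normal(0.28, 0.08, 6)

f_stat, p_val = stats.f_oneway(compression, injection_pre, injection_unpoly)
print("ANOVA:", f_stat, p_val)
if p_val < 0.05:
    values = np.concatenate([compression, injection_pre, injection_unpoly])
    groups = ["compression"] * 6 + ["injection_pre"] * 6 + ["injection_unpoly"] * 6
    print(pairwise_tukeyhsd(values, groups))
```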
Pursuant to the No Fear Act, a federal agency must post on its public Web site summary statistical data pertaining to complaints of employment discrimination filed by employees, former employees and applicants for employment under 29 CFR part 1614
Industry is Largest Employer of Scientists
ERIC Educational Resources Information Center
Chemical and Engineering News, 1977
1977-01-01
Cites statistics of a National Science Foundation report on scientists and engineers in 1974. Reports that chemists are better educated, older, have a better chance of being employed, and do more work for industry, than other scientific personnel. (MLH)
Exposure of the surgeon's hands to radiation during hand surgery procedures.
Żyluk, Andrzej; Puchalski, Piotr; Szlosser, Zbigniew; Dec, Paweł; Chrąchol, Joanna
2014-01-01
The objective of the study was to assess the time of exposure of the surgeon's hands to radiation and to calculate the equivalent dose absorbed during surgery of hand and wrist fractures with C-arm fluoroscope guidance. The necessary data specified by the objective of the study were acquired from operations of 287 patients with fractures of fingers, metacarpals, wrist bones and distal radius. 218 operations (78%) were percutaneous procedures and 60 (22%) were performed by an open method. Data on the time of exposure and dose of radiation were acquired from the display of the fluoroscope, where they were automatically generated. These data were assigned to the individual patient, type of fracture, method of surgery and the operating surgeon. Fixations of distal radial fractures required longer times of radiation exposure (mean 61 sec.) than fractures of the wrist/metacarpals and fingers (38 and 32 sec., respectively), which was associated with absorption of significantly higher equivalent doses. Fixations of distal radial fractures by open method were associated with statistically significantly higher equivalent doses (0.41 mSv) than percutaneous procedures (0.3 mSv). Fixations of wrist and metacarpal bone fractures by open method were associated with lower equivalent doses (0.34 mSv) than percutaneous procedures (0.37 mSv), but the difference was not significant. Fixations of finger fractures by open method were associated with lower equivalent doses (0.13 mSv) than percutaneous procedures (0.24 mSv), the difference being statistically non-significant. Statistically significant differences in exposure time and equivalent doses were noted between the 4 surgeons participating in the study, but no definitive relationship was found between these parameters and the surgeons' employment time. 1. Hand surgery procedures under fluoroscopic guidance are associated with mild exposure of the surgeons' hands to radiation. 2. The equivalent dose was related to the type of fracture, operative technique and - to some degree - to the time of employment of the surgeon.
Public opinion about smoking and smoke free legislation in a district of North India.
Goel, S; Singh, R J; D, Sharma; A, Singh
2014-01-01
Context: A growing number of cities, districts, counties and states across the globe are going smoke-free. While an Indian national law, namely the Cigarettes and Other Tobacco Products Act (COTPA), has existed since 2003 and aims at protecting all the people in the country, people still smoke in public places. Aim: This study assessed knowledge and perceptions about smoking and second-hand smoke (SHS) and support for smoke-free laws among people residing in Mohali district, Punjab. Materials and Methods: This cross-sectional study was conducted in Mohali district of Punjab, India. A sample size of 1600 people was obtained. The Probability Proportional to Size technique was used for selecting the number of individuals to be interviewed from each block and also from the urban and rural populations. Statistical Analysis Used: We estimated proportions and tested for significant differences by residence, smoking status, literacy level and employment level by means of chi-square statistics. Statistical software SPSS for Windows version 20 was used for analysing data. Results: The overall prevalence of current smoking among study participants was 25%. Around 96% were aware of the fact that smoking is harmful to health, 45% viewed second-hand smoke as equally harmful as active smoking, 84.2% knew that smoking is prohibited in public places and 88.3% wanted the government to take strict actions to control the menace of public smoking. Multivariate logistic regression analysis showed that people aged 20 years and above, unemployed, urban, literate and non-smokers had significantly better perceptions of the harms of smoking. Knowledge about the smoke-free provisions of COTPA was significantly better among males, employed individuals, urban residents, and literate people. Conclusions: There was high knowledge about the deleterious multi-dimensional effects of smoking among residents and high support for implementation of COTPA. Efforts should be taken to make Mohali a "smoke-free district".
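The chi-square comparisons mentioned above follow a standard pattern; the sketch below uses SciPy with hypothetical counts (not the survey data) to test whether awareness of the smoke-free provisions differs by residence.

```python
# Minimal chi-square test on a 2x2 contingency table (hypothetical counts).
from scipy.stats import chi2_contingency

#                 aware  not aware
table = [[420, 380],     # urban
         [330, 470]]     # rural
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p, dof)
```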
In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...
Statistics and Title VII Proof: Prima Facie Case and Rebuttal.
ERIC Educational Resources Information Center
Whitten, David
1978-01-01
The method and means by which statistics can raise a prima facie case of Title VII violation are analyzed. A standard is identified that can be applied to determine whether a statistical disparity is sufficient to shift the burden to the employer to rebut a prima facie case of discrimination. (LBH)
Klein, Celso A; da Silva, Douglas; Reston, Eduardo G; Borghetti, Diana Lb; Zimmer, Roberto
2018-03-01
The aim of this study is to assess marginal microleakage of cervical cavities restored with composite resins and two different adhesive techniques subjected to at-home and in-office bleaching. In this randomized, blind laboratory experiment, 60 bovine teeth recently extracted were collected and divided into six groups (n = 10 each group). The teeth received cervical cavity preparations (2 mm × 3 mm × 1 mm) with enamel margins. Two different adhesive systems were used (Single Bond 2 and Clearfil SE Bond), in addition to composite resin (Z250). Restored teeth received two different bleaching gels (Opalescence PF and Opalescence Boost). Teeth were thermo-cycled and analyzed under confocal laser scanning microscopy. No significant differences were observed (p > 0.05) in microleakage scores between the two groups not subjected to bleaching nor between the four groups that received bleaching treatment (p > 0.05), regardless of the gel and adhesive system employed. However, when comparing nonbleached with bleached teeth, those not subjected to bleaching showed statistically lower marginal microleakage scores (p < 0.05). Data were statistically analyzed using the Kruskal-Wallis test followed by Student-Newman-Keuls post hoc test, with significance set at 5%. Marginal microleakage in composite resin restorations is influenced by the action of bleaching agents used both at-home and in-office, regardless of the adhesive system employed (total-etch or self-etch). Both at-home and in-office bleaching agents have an influence on the adhesive interface of resin restorations, producing changes and inducing marginal leakage.
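The Kruskal-Wallis comparison used above can be run in one call with SciPy; the microleakage scores below are made up, and the Student-Newman-Keuls post hoc step is omitted because it is not available in SciPy.

```python
# Illustrative Kruskal-Wallis comparison of ordinal microleakage scores across groups.
from scipy.stats import kruskal

no_bleach = [0, 1, 0, 1, 0, 1, 0, 0, 1, 0]
home_bleach = [2, 1, 2, 3, 2, 1, 2, 2, 3, 1]
office_bleach = [2, 2, 3, 2, 3, 1, 2, 3, 2, 2]
print(kruskal(no_bleach, home_bleach, office_bleach))
```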
The economic impact of Mexico City's smoke-free law.
López, Carlos Manuel Guerrero; Ruiz, Jorge Alberto Jiménez; Shigematsu, Luz Myriam Reynales; Waters, Hugh R
2011-07-01
To evaluate the economic impact of Mexico City's 2008 smoke-free law--the Non-Smokers' Health Protection Law--on restaurants, bars and nightclubs. We used the Monthly Services Survey of businesses from January 2005 to April 2009, with revenues, employment and payments to employees as the principal outcomes. The results are estimated using a differences-in-differences regression model with fixed effects. The states of Jalisco, Nuevo León and México, where the law was not in effect, serve as a counterfactual comparison group. In restaurants, after accounting for observable factors and the fixed effects, there was a 24.8% increase in restaurants' revenue associated with the smoke-free law. This difference is not statistically significant but shows that, on average, restaurants did not suffer economically as a result of the law. Total wages increased by 28.2% and employment increased by 16.2%. In nightclubs, bars and taverns there was a decrease of 1.5% in revenues and an increase of 0.1% and 3.0%, respectively, in wages and employment. None of these effects are statistically significant in multivariate analysis. There is no statistically significant evidence that the Mexico City smoke-free law had a negative impact on restaurants' income, employees' wages and levels of employment. On the contrary, the results show a positive, though statistically non-significant, impact of the law on most of these outcomes. Mexico City's experience suggests that smoke-free laws in Mexico and elsewhere will not hurt economic productivity in the restaurant and bar industries.
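A differences-in-differences specification of the kind described above reduces, in its simplest form, to a regression with an interaction between a treatment indicator and a post-law indicator. The sketch below uses statsmodels on simulated data; the variable names are hypothetical and the business fixed effects are omitted for brevity.

```python
# Simplified differences-in-differences sketch on simulated revenue data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = business in the jurisdiction with the law
    "post": rng.integers(0, 2, n),      # 1 = observation after the law took effect
})
df["log_revenue"] = (10 + 0.1 * df.treated + 0.05 * df.post
                     + 0.2 * df.treated * df.post + rng.normal(0, 0.3, n))

did = smf.ols("log_revenue ~ treated * post", data=df).fit()
print(did.params)   # 'treated:post' is the difference-in-differences estimate
```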
Study of Employment of Women in the Federal Government 1971.
ERIC Educational Resources Information Center
Civil Service Commission, Washington, DC. Manpower Statistics Div.
This study presents statistical information gained from a survey of women employed full-time in Federal civilian collar employment as of October 31, 1971 in the Washington, D.C., metropolitan area, the 50 states and the territories of the U.S., and foreign countries. Excluded from the survey are members and employees of the Congress, employees of…
Youth Employment in the Hospitality Sector.
ERIC Educational Resources Information Center
Schiller, Bradley R.
A study used data from the National Longitudinal Surveys of Youth to analyze the long-term effects of hospitality industry employment on youth. The subsample extracted for the study included all youth who were aged 16-24 in 1980 and employed in the civilian sector for pay at any time in the year. Statistics indicated the hospitality sector was…
Labor Trends: Overview of the United States, New York City, and Long Island. Revised Edition.
ERIC Educational Resources Information Center
Goldstein, Cheryl
This document summarizes employment statistics and trends, with a geographic emphasis on areas where Queensborough Community College (New York) students and graduates seek employment. Data are presented on the following: (1) current and projected United States labor force; (2) occupational outlook; (3) employment status of civilian labor force 25…
Does Marital Status Influence the Parenting Styles Employed by Parents?
ERIC Educational Resources Information Center
Ashiono, Benard Litali; Mwoma, Teresa B.
2015-01-01
The current study sought to establish whether parents' marital status influences their use of specific parenting styles in Kisauni District, Kenya. A correlational research design was employed to carry out this study. A stratified sampling technique was used to select preschools, while a purposive sampling technique was used to select preschool…
Pedagogical Techniques Employed by the Television Show "MythBusters"
ERIC Educational Resources Information Center
Zavrel, Erik
2016-01-01
"MythBusters," the long-running though recently discontinued Discovery Channel science entertainment television program, has proven itself to be far more than just a highly rated show. While its focus is on entertainment, the show employs an array of pedagogical techniques to communicate scientific concepts to its audience. These…
Statistics in the Workplace: A Survey of Use by Recent Graduates with Higher Degrees
ERIC Educational Resources Information Center
Harraway, John A.; Barker, Richard J.
2005-01-01
A postal survey was conducted regarding statistical techniques, research methods and software used in the workplace by 913 graduates with PhD and Masters degrees in the biological sciences, psychology, business, economics, and statistics. The study identified gaps between topics and techniques learned at university and those used in the workplace,…
Statistical approach for selection of biologically informative genes.
Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N
2018-05-20
Selection of informative genes from high dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed with a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biological sufficiency criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes that are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that, under the multiple criteria decision-making setup, the proposed technique is the best choice for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes from high dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
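The relevance-versus-redundancy trade-off mentioned above can be illustrated with a simplified greedy selection (this is not the Boot-MRMR procedure): relevance is taken as the F statistic against the class labels and redundancy as the mean absolute correlation with already selected genes; the weighting of the two terms is an arbitrary assumption.

```python
# Simplified relevance-minus-redundancy gene selection on simulated expression data.
import numpy as np
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 500))          # 80 samples x 500 genes
y = rng.integers(0, 2, 80)
X[y == 1, :5] += 1.5                    # make the first five genes informative

relevance, _ = f_classif(X, y)
corr = np.abs(np.corrcoef(X, rowvar=False))          # gene-gene absolute correlations

selected = [int(np.argmax(relevance))]
while len(selected) < 10:
    redundancy = corr[:, selected].mean(axis=1)
    score = relevance - redundancy * relevance.mean()  # crude relevance/redundancy trade-off
    score[selected] = -np.inf
    selected.append(int(np.argmax(score)))
print("selected gene indices:", selected)
```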
Career retention in the dental hygiene workforce in Texas.
Johns, G H; Gutmann, M E; DeWald, J P; Nunn, M E
2001-01-01
The purpose of this study was to determine the factors that influence the retention and attrition of dental hygienists within the workforce in Texas. Respondents' perceptions of the role of employee benefits and the practice of dental hygiene in career retention were explored. Demographic descriptors, including educational level, marital status, age, employment setting, and practice status, were also examined. A questionnaire modified from the American Dental Hygienists' Association Extension Study: Retention of Dental Hygienists in the Workforce Final Report, April 1992, was mailed to a systematic sample of licensed Texas dental hygienists in March 1999. Descriptive statistics were computed for dental hygienists currently in practice in Texas and those not in practice at the time of the survey. Differences in demographics, benefits, and attitudes between dental hygienists currently in practice in Texas and dental hygienists not in practice at the time of the survey were tested using independent t-tests for interval data and chi-squared tests for categorical data. All statistical analyses were conducted using the Statistical Package for the Social Sciences (SPSS v. 9, Chicago, Illinois). A response rate of 68.1% was obtained. Results revealed the primary reasons for remaining in the practice of dental hygiene were salary, family responsibility, professional collaboration, and variety of work. The primary reasons for leaving dental hygiene practice were family responsibility, boredom, salary, and lack of benefits. Secondary and tertiary reasons stated for staying in clinical practice revealed additional factors including benefits, participation in decision-making, and a safe environment. Dental hygienists in clinical practice were more likely to be employed by a dentist in a single practice and see more patients per day, have a certificate or associate's degree, be unmarried, have fewer children, and be younger than dental hygienists not in practice. The findings suggest that dental hygienists in Texas who remain in the workforce are positively influenced primarily by salary. Dental hygienists in Texas who had left the workforce were primarily influenced to leave practice because of family responsibility. Boredom and lack of benefits were also important factors in deciding to leave clinical practice. Employers of dental hygienists need to be aware of these factors in the hiring process. In addition, dental hygiene educators should prepare students in interviewing techniques for better communication regarding retention factors.
MIDAS: Regionally linear multivariate discriminative statistical mapping.
Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos
2018-07-01
Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
Parkinson's disease detection based on dysphonia measurements
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Assessing dysphonic symptoms is a noninvasive and effective approach to detect Parkinson's disease (PD) in patients. The main purpose of this study is to investigate the effect of different dysphonia measurements on PD detection by support vector machine (SVM). Seven categories of dysphonia measurements are considered. Experimental results from ten-fold cross-validation technique demonstrate that vocal fundamental frequency statistics yield the highest accuracy of 88 % ± 0.04. When all dysphonia measurements are employed, the SVM classifier achieves 94 % ± 0.03 accuracy. A refinement of the original patterns space by removing dysphonia measurements with similar variation across healthy and PD subjects allows achieving 97.03 % ± 0.03 accuracy. The latter performance is larger than what is reported in the literature on the same dataset with ten-fold cross-validation technique. Finally, it was found that measures of ratio of noise to tonal components in the voice are the most suitable dysphonic symptoms to detect PD subjects as they achieve 99.64 % ± 0.01 specificity. This finding is highly promising for understanding PD symptoms.
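The ten-fold cross-validated SVM evaluation described above follows a standard pattern; the sketch below uses scikit-learn with synthetic dysphonia-style features, since the published dataset is not bundled here, and the class sizes are only loosely modelled on the study.

```python
# Illustrative ten-fold cross-validation of an RBF SVM on synthetic dysphonia features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
X_healthy = rng.normal(0.0, 1.0, size=(48, 22))   # 22 dysphonia measurements per subject
X_pd = rng.normal(0.7, 1.2, size=(147, 22))
X = np.vstack([X_healthy, X_pd])
y = np.r_[np.zeros(48), np.ones(147)]             # 1 = Parkinson's disease

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```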
IsobariQ: software for isobaric quantitative proteomics using IPTL, iTRAQ, and TMT.
Arntzen, Magnus Ø; Koehler, Christian J; Barsnes, Harald; Berven, Frode S; Treumann, Achim; Thiede, Bernd
2011-02-04
Isobaric peptide labeling plays an important role in relative quantitative comparisons of proteomes. Isobaric labeling techniques utilize MS/MS spectra for relative quantification, which can be either based on the relative intensities of reporter ions in the low mass region (iTRAQ and TMT) or on the relative intensities of quantification signatures throughout the spectrum due to isobaric peptide termini labeling (IPTL). Due to the increased quantitative information found in MS/MS fragment spectra generated by the recently developed IPTL approach, new software was required to extract the quantitative information. IsobariQ was specifically developed for this purpose; however, support for the reporter ion techniques iTRAQ and TMT is also included. In addition, to address recently emphasized issues about heterogeneity of variance in proteomics data sets, IsobariQ employs the statistical software package R and variance stabilizing normalization (VSN) algorithms available therein. Finally, the functionality of IsobariQ is validated with data sets of experiments using 6-plex TMT and IPTL. Notably, protein substrates resulting from cleavage by proteases can be identified as shown for caspase targets in apoptosis.
NASA Astrophysics Data System (ADS)
Li, Shaoxin; Zhang, Yanjiao; Xu, Junfa; Li, Linfang; Zeng, Qiuyao; Lin, Lin; Guo, Zhouyi; Liu, Zhiming; Xiong, Honglian; Liu, Songhao
2014-09-01
This study aims to present a noninvasive prostate cancer screening method using serum surface-enhanced Raman scattering (SERS) and support vector machine (SVM) techniques on peripheral blood samples. SERS measurements are performed with silver nanoparticles using serum samples from 93 prostate cancer patients and 68 healthy volunteers. Three types of kernel functions, including linear, polynomial, and Gaussian radial basis function (RBF), are employed to build SVM diagnostic models for classifying measured SERS spectra. To comparably evaluate the performance of the SVM classification models, the standard multivariate statistical analysis method of principal component analysis (PCA) is also applied to classify the same datasets. The results show that the RBF-kernel SVM diagnostic model achieves a diagnostic accuracy of 98.1%, which is superior to the 91.3% obtained with the PCA method. The receiver operating characteristic curves of the diagnostic models further confirm these results. This study demonstrates that a label-free serum SERS analysis technique combined with an SVM diagnostic algorithm has great potential for noninvasive prostate cancer screening.
Application of machine vision to pup loaf bread evaluation
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Chung, O. K.
1996-12-01
Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. Experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to developing an objective methodology for bread quality evaluation for the 1994 and 1995 crop wheat breeding line samples. Computer extracted features for bread crumb grain were studied, using subimages (32 by 32 pixel) and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database with digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores by human experts were correlated more highly with some image features than with breadmaking parameters.
Predicting the binding preference of transcription factors to individual DNA k-mers.
Alleyne, Trevis M; Peña-Castillo, Lourdes; Badis, Gwenael; Talukder, Shaheynoor; Berger, Michael F; Gehrke, Andrew R; Philippakis, Anthony A; Bulyk, Martha L; Morris, Quaid D; Hughes, Timothy R
2009-04-15
Recognition of specific DNA sequences is a central mechanism by which transcription factors (TFs) control gene expression. Many TF-binding preferences, however, are unknown or poorly characterized, in part due to the difficulty associated with determining their specificity experimentally, and an incomplete understanding of the mechanisms governing sequence specificity. New techniques that estimate the affinity of TFs to all possible k-mers provide a new opportunity to study DNA-protein interaction mechanisms, and may facilitate inference of binding preferences for members of a given TF family when such information is available for other family members. We employed a new dataset consisting of the relative preferences of mouse homeodomains for all eight-base DNA sequences in order to ask how well we can predict the binding profiles of homeodomains when only their protein sequences are given. We evaluated a panel of standard statistical inference techniques, as well as variations of the protein features considered. Nearest-neighbour matching over functionally important residues emerged as one of the most effective methods. Our results underscore the complexity of TF-DNA recognition, and suggest a rational approach for future analyses of TF families.
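The toy sketch below illustrates the nearest-neighbour idea in its simplest form: copy the 8-mer preference profile of the training protein most similar at a small set of residue positions. The sequences, contact positions, and profiles are invented placeholders, not real homeodomain data.

```python
# Predict a query protein's 8-mer binding-preference vector by copying the profile of
# the most similar training protein at assumed DNA-contacting residue positions.
import numpy as np

contact_positions = [2, 5, 6, 9]          # hypothetical functionally important residues
train_seqs = {"TF_A": "MKRLSAQWERT", "TF_B": "MKHLNAQWDRT", "TF_C": "MARLSGQWERT"}
rng = np.random.default_rng(2)
train_profiles = {name: rng.random(8) for name in train_seqs}  # toy 8-mer preference scores

def residue_identity(a, b, positions):
    """Fraction of identical residues at the selected contact positions."""
    return sum(a[i] == b[i] for i in positions) / len(positions)

def predict_profile(query_seq):
    # Nearest neighbour = training protein with highest identity at contact positions.
    best = max(train_seqs,
               key=lambda n: residue_identity(train_seqs[n], query_seq, contact_positions))
    return best, train_profiles[best]

neighbour, profile = predict_profile("MKRLNAQWERT")
print("nearest neighbour:", neighbour)
print("predicted 8-mer preference vector:", np.round(profile, 2))
```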
NASA Astrophysics Data System (ADS)
Abdi, Abdi M.; Szu, Harold H.
2003-04-01
With the growing rate of interconnection among computer systems, network security is becoming a real challenge. Intrusion Detection Systems (IDS) are designed to protect the availability, confidentiality and integrity of critical network information systems. Today's approach to network intrusion detection involves the use of rule-based expert systems to identify indications of known attacks or anomalies. However, these techniques are less successful in identifying today's attacks. Hackers are perpetually inventing new and previously unanticipated techniques to compromise information infrastructure. This paper proposes a dynamic way of detecting network intruders on time series data. The proposed approach consists of a two-step process. First, an efficient multi-user detection method is obtained, employing the recently introduced complexity minimization approach as a generalization of standard ICA. Second, an unsupervised learning neural network architecture based on Kohonen's Self-Organizing Map is identified for potential functional clustering. These two steps, working together adaptively, will provide a pseudo-real-time novelty detection attribute to supplement the current intrusion detection statistical methodology.
A global optimization perspective on molecular clusters.
Marques, J M C; Pereira, F B; Llanio-Trujillo, J L; Abreu, P E; Albertí, M; Aguilar, A; Pirani, F; Bartolomei, M
2017-04-28
Although there is a long history behind the idea of chemical structure, this is a key concept that continues to challenge chemists. Chemical structure is fundamental to understanding most of the properties of matter, and its knowledge for complex systems requires the use of state-of-the-art techniques, either experimental or theoretical. From the theoretical viewpoint, one needs to establish the interaction potential among the atoms or molecules of the system, which contains all the information regarding the energy landscape, and employ optimization algorithms to discover the relevant stationary points. In particular, global optimization methods are of major importance in the search for the low-energy structures of molecular aggregates. We review the application of global optimization techniques to several molecular clusters; some new results are also reported. Emphasis is given to evolutionary algorithms and their application in the study of the microsolvation of alkali-metal and Ca2+ ions with various types of solvents. This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).
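As a minimal illustration of global optimization applied to a molecular cluster, the sketch below uses basin hopping (a stochastic global optimizer, not the evolutionary algorithms emphasized in the review) to search for the lowest-energy geometry of a 7-atom Lennard-Jones cluster in reduced units.

```python
# Basin-hopping search for low-energy structures of a small Lennard-Jones cluster.
import numpy as np
from scipy.optimize import basinhopping

N = 7  # number of atoms

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of the cluster (epsilon = sigma = 1)."""
    x = flat_coords.reshape(N, 3)
    e = 0.0
    for i in range(N):
        for j in range(i + 1, N):
            r2 = np.sum((x[i] - x[j]) ** 2)
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 ** 2 - inv6)
    return e

rng = np.random.default_rng(3)
x0 = rng.uniform(-1.5, 1.5, size=3 * N)          # random starting geometry
result = basinhopping(lj_energy, x0, niter=200,
                      minimizer_kwargs={"method": "L-BFGS-B"})

# The known LJ7 global minimum is about -16.505 in these reduced units.
print("lowest energy found:", result.fun)
```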
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
The effect of different types of employment on quality of life.
Kober, R; Eggleton, I R C
2005-10-01
Despite research that has investigated whether the financial benefits of open employment exceed the costs, there has been scant research on the effect that sheltered and open employment have upon the quality of life of participants. The importance of this research is threefold: it investigates outcomes explicitly in terms of quality of life; the sample size is comparatively large; and it uses an established and validated questionnaire. One hundred and seventeen people with intellectual disability (ID) who were employed in either open or sheltered employment by disability employment agencies were interviewed. Quality of life was assessed using the Quality of Life Questionnaire. After making an initial assessment to see whether the outcomes achieved depended on type of employment, quality of life scores were analyzed controlling for participants' level of functional work ability (assessed via the Functional Assessment Inventory). The results showed that participants placed in open employment reported statistically significantly higher quality of life scores. When the sample was split based upon participants' functional work ability, the type of employment had no effect on the reported quality of life for participants with low functional work ability. However, participants with high functional work ability who were in open employment reported statistically significantly higher quality of life. The results of this study support the placement of people with ID with high functional work ability into open employment. However, a degree of caution needs to be taken in interpreting the results presented given the disparity in income levels between the two types of employment.
Interpreting Conditions in the Job Market for College Graduates.
ERIC Educational Resources Information Center
Alsalam, Nabeel
1993-01-01
Indicates that occupational and employment statistics would be more beneficial if users had a better understanding of how occupations are changing and how employers are redefining jobs to use the education and skills of their employees. (JOW)
North American transportation : statistics on Canadian, Mexican, and United States transportation
DOT National Transportation Integrated Search
1994-05-01
North American Transportation: Statistics on Canadian, Mexican, and United States transportation contains extensive data on the size and scope, use, employment, fuel consumption, and economic role of each country's transportation system. It was publi...
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
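The toy sketch below illustrates only the statistical estimation step: Monte Carlo sampling of program inputs with a Beta-Binomial Bayesian estimate of the probability of reaching a target branch. The toy program and uniform input profile are assumptions; the actual technique samples symbolic paths and adds informed pruning inside Symbolic PathFinder.

```python
# Monte Carlo estimation of the probability of reaching a target event, with a
# Bayesian (Beta-Binomial) posterior over that probability.
import numpy as np
from scipy import stats

def toy_program(x, y):
    """Returns True when the 'assert violation' branch is reached."""
    return x > 90 and y % 7 == 0

rng = np.random.default_rng(5)
alpha, beta = 1.0, 1.0            # uniform Beta prior on the event probability
hits = trials = 0
for _ in range(20000):
    if toy_program(rng.integers(0, 100), rng.integers(0, 100)):
        hits += 1
    trials += 1

posterior = stats.beta(alpha + hits, beta + trials - hits)
print("posterior mean   :", posterior.mean())
print("95% credible int :", posterior.interval(0.95))
# Exact value for this toy program: 9 values of x in (90, 99] times 15 multiples of 7 in [0, 99].
print("exact probability:", (9 / 100) * (15 / 100))
```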
Chern-Simons Term: Theory and Applications.
NASA Astrophysics Data System (ADS)
Gupta, Kumar Sankar
1992-01-01
We investigate the quantization and applications of Chern-Simons theories to several systems of interest. Elementary canonical methods are employed for the quantization of abelian and nonabelian Chern-Simons actions using ideas from gauge theories and quantum gravity. When the spatial slice is a disc, it yields quantum states at the edge of the disc carrying a representation of the Kac-Moody algebra. We next include sources in this model and their quantum states are shown to be those of a conformal family. Vertex operators for both abelian and nonabelian sources are constructed. The regularized abelian Wilson line is proved to be a vertex operator. The spin-statistics theorem is established for Chern-Simons dynamics using purely geometrical techniques. The Chern-Simons action is associated with exotic spin and statistics in 2 + 1 dimensions. We study several systems in which the Chern-Simons action affects the spin and statistics. The first class of systems we study consists of G/H models. The solitons of these models are shown to obey anyonic statistics in the presence of a Chern-Simons term. The second system deals with the effect of the Chern-Simons term in a model for high temperature superconductivity. The coefficient of the Chern-Simons term is shown to be quantized, one of its possible values giving fermionic statistics to the solitons of this model. Finally, we study a system of spinning particles interacting with 2 + 1 gravity, the latter being described by an ISO(2,1) Chern-Simons term. An effective action for the particles is obtained by integrating out the gauge fields. Next we construct operators which exchange the particles. They are shown to satisfy the braid relations. There are ambiguities in the quantization of this system which can be exploited to give anyonic statistics to the particles. We also point out that at the level of the first quantized theory, the usual spin-statistics relation need not apply to these particles.
Pfeiffer, T.J.; Summerfelt, S.T.; Watten, B.J.
2011-01-01
Many methods are available for the measurement of dissolved carbon dioxide in an aqueous environment. Standard titration is the typical field method for measuring dissolved CO2 in aquaculture systems. However, titrimetric determination of dissolved CO2 in marine water aquaculture systems is unsuitable because of the high dissolved solids, silicates, and other dissolved minerals that interfere with the determination. Other methods used to measure dissolved carbon dioxide in aquaculture water include the use of a wetted CO2 probe analyzer, standard nomographic methods, and calculation from direct measurements of the water's pH, temperature, and alkalinity. The determination of dissolved CO2 in saltwater based on partial pressure measurements and non-dispersive infra-red (NDIR) techniques with a CO2 gas analyzer is widely employed for oceanic surveys of surface ocean CO2 flux and is similar to the techniques employed with the head space unit (HSU) in this study. Dissolved carbon dioxide (DC) determination with the HSU using an infra-red gas analyzer (IRGA) was compared with titrimetric, nomographic, calculated, and probe measurements of CO2 in freshwater and in saltwater with salinity ranging from 5.0 to 30 ppt and a CO2 range from 8 to 50 mg/L. Differences in CO2 measurements between duplicate HSUs (0.1-0.2 mg/L) were not statistically significant. The coefficient of variation for the HSU readings averaged 1.85%, which was better than that of the CO2 probe (4.09%) and that of the titrimetric method (5.84%). In all low-, medium-, and high-salinity trials, HSU precision was good, averaging 3.39%. In comparison testing, the CO2 probe readings, on average, provided DC estimates that were higher than the HSU estimates. Differences between HSU- and titration-based estimates of DC increased with salinity and reached a maximum at 32.2 ppt. These differences were statistically significant (P < 0.05) at all salinity levels greater than 0.3 ppt. Results indicated reliable, replicated results from the head space unit across varying salinities and dissolved carbon dioxide concentrations.
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
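A hedged sketch of one common covariance realism test statistic is given below: if the propagated covariance is realistic, squared Mahalanobis distances of the orbit-determination errors should follow a chi-square distribution with the state dimension as degrees of freedom. The simulated errors and covariance are placeholders, not EOS flight data.

```python
# Covariance realism check via Mahalanobis distances compared to a chi-square law.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
dim = 3                                         # position error components
P = np.diag([100.0, 400.0, 900.0])              # claimed (propagated) covariance, m^2

# Simulated definitive-minus-predicted position errors; here the true covariance is
# inflated by 30%, so the claimed P is optimistic.
errors = rng.multivariate_normal(np.zeros(dim), 1.3 * P, size=200)

P_inv = np.linalg.inv(P)
m2 = np.einsum("ij,jk,ik->i", errors, P_inv, errors)  # squared Mahalanobis distances

# If P were realistic, m2 ~ chi2(dim); a goodness-of-fit test flags the mismatch.
ks_stat, p_value = stats.kstest(m2, "chi2", args=(dim,))
print("mean scaled distance (should be ~%d):" % dim, m2.mean())
print("KS test p-value against chi2(%d):" % dim, p_value)
```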
The Shock and Vibration Digest. Volume 16, Number 1
1984-01-01
…investigation of the measurement of frequency band average loss factors of structural components for use in the statistical energy analysis method of…stiffness. Matrix methods. Key words: finite element techniques, statistical energy analysis, experimental techniques, framed structures, computer programs. In order to further understand the practical application of statistical energy analysis, a two-section plate-like frame structure is…
Yang, Yaowen; Divsholi, Bahador Sabet
2010-01-01
The electromechanical (EM) impedance technique using piezoelectric lead zirconate titanate (PZT) transducers for structural health monitoring (SHM) has attracted considerable attention in various engineering fields. In the conventional EM impedance technique, the EM admittance of a PZT transducer is used as a damage indicator. Statistical analysis methods such as root mean square deviation (RMSD) have been employed to associate the damage level with the changes in the EM admittance signatures, but it is difficult to determine the location of damage using such methods. This paper proposes a new approach by dividing the large frequency (30–400 kHz) range into sub-frequency intervals and calculating their respective RMSD values. The RMSD of the sub-frequency intervals (RMSD-S) will be used to study the severity and location of damage. An experiment is carried out on a real size concrete structure subjected to artificial damage. It is observed that damage close to the PZT changes the high frequency range RMSD-S significantly, while the damage far away from the PZT changes the RMSD-S in the low frequency range significantly. The relationship between the frequency range and the PZT sensing region is also presented. Finally, a damage identification scheme is proposed to estimate the location and severity of damage in concrete structures. PMID:22163548
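The sketch below illustrates the RMSD-S idea on simulated admittance signatures: the 30-400 kHz band is split into sub-intervals and an RMSD damage index is computed per interval, so the band showing the largest change hints at damage location. The signal shapes and the perturbed band are assumptions, not measured PZT signatures.

```python
# RMSD of sub-frequency intervals (RMSD-S) on simulated EM admittance signatures.
import numpy as np

freq = np.linspace(30e3, 400e3, 2000)                       # 30-400 kHz sweep
rng = np.random.default_rng(7)
baseline = np.sin(freq / 8e3) + 0.02 * rng.normal(size=freq.size)

damaged = baseline.copy()
high_band = freq > 300e3                                    # assume near-field damage
damaged[high_band] += 0.15 * np.sin(freq[high_band] / 2e3)  # perturbs high frequencies only

def rmsd(sig, ref):
    """RMSD damage index in percent, the usual EM-impedance metric."""
    return 100.0 * np.sqrt(np.sum((sig - ref) ** 2) / np.sum(ref ** 2))

# RMSD-S: one index per sub-frequency interval; only the perturbed bands respond.
edges = np.linspace(30e3, 400e3, 11)                        # ten sub-intervals
for lo, hi in zip(edges[:-1], edges[1:]):
    band = (freq >= lo) & (freq < hi)
    print(f"{lo/1e3:5.0f}-{hi/1e3:5.0f} kHz  RMSD-S = {rmsd(damaged[band], baseline[band]):5.2f} %")
```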
Comparing the landscapes of common retroviral insertion sites across tumor models
NASA Astrophysics Data System (ADS)
Weishaupt, Holger; Čančer, Matko; Engström, Cristopher; Silvestrov, Sergei; Swartling, Fredrik J.
2017-01-01
Retroviral tagging represents an important technique, which allows researchers to screen for candidate cancer genes. The technique is based on the integration of retroviral sequences into the genome of a host organism, which might then lead to the artificial inhibition or expression of proximal genetic elements. The identification of potential cancer genes in this framework involves the detection of genomic regions (common insertion sites; CIS) which contain a number of such viral integration sites that is greater than expected by chance. During the last two decades, a number of different methods have been discussed for the identification of such loci and the respective techniques have been applied to a variety of different retroviruses and/or tumor models. We have previously established a retrovirus driven brain tumor model and reported the CISs which were found based on a Monte Carlo statistics derived detection paradigm. In this study, we consider a recently proposed alternative graph theory based method for identifying CISs and compare the resulting CIS landscape in our brain tumor dataset to those obtained when using the Monte Carlo approach. Finally, we also employ the graph-based method to compare the CIS landscape in our brain tumor model with those of other published retroviral tumor models.
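As a rough illustration of a Monte Carlo CIS detection paradigm, the sketch below counts insertions in fixed genomic windows and compares the observed counts against a permutation null of randomly re-placed insertions. The genome size, window width and insertion list are toy assumptions, not the graph-based method or the authors' data.

```python
# Monte Carlo detection of common insertion sites (CIS) in fixed genomic windows.
import numpy as np

rng = np.random.default_rng(8)
genome_len, window = 1_000_000, 10_000
n_insertions = 300

# Toy data: mostly random insertions plus one artificially enriched locus.
insertions = np.concatenate([rng.integers(0, genome_len, n_insertions - 20),
                             rng.integers(500_000, 505_000, 20)])

def window_counts(positions):
    return np.bincount(positions // window, minlength=genome_len // window)

observed = window_counts(insertions)

# Null distribution of the maximum window count under random placement
# (using the maximum controls the genome-wide false-positive rate).
null_max = np.array([window_counts(rng.integers(0, genome_len, insertions.size)).max()
                     for _ in range(1000)])
threshold = np.quantile(null_max, 0.95)

for idx in np.where(observed > threshold)[0]:
    print(f"candidate CIS in window {idx * window}-{(idx + 1) * window}: {observed[idx]} insertions")
```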
Mitigating randomness of consumer preferences under certain conditional choices
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thanos, Konstantinos-Georgios; Papadopoulou, Eirini; Daveas, Stelios; Thomopoulos, Stelios C. A.
2017-05-01
Agent-based crowd behaviour constitutes a significant field of research that has drawn a lot of attention in recent years. Agent-based crowd simulation techniques have been used extensively to forecast the behaviour of larger or smaller crowds under certain given conditions, influenced by specific cognition models and behavioural rules and norms imposed from the beginning. Our research employs conditional event algebra, statistical methodology and agent-based crowd simulation techniques to develop a behavioural econometric model of the selection of economic behaviour by a consumer who faces a spectrum of potential choices when moving and acting in a multiplex mall. More specifically, we try to analyse the influence of demographic, economic, social and cultural factors on the economic behaviour of a certain individual, and then we try to link its behaviour with the general behaviour of crowds of consumers in multiplex malls using agent-based crowd simulation techniques. We then run our model using Generalized Least Squares and Maximum Likelihood methods to obtain the most probable forecast estimates of the agent's behaviour. Our model is indicative of the formation of consumers' spectrum of choices in multiplex malls under the condition of predefined preferences and can be used as a guide for further research in this area.
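A hedged sketch of the estimation step mentioned above follows: generalized least squares with an assumed AR(1) error covariance, compared with ordinary least squares, on simulated consumer-choice scores. The covariates, coefficients and covariance structure are illustrative only, not the authors' model.

```python
# GLS versus OLS on simulated data with correlated disturbances.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(14)
n = 200
income = rng.normal(50, 10, n)                 # toy demographic/economic covariates
age = rng.normal(35, 8, n)
X = sm.add_constant(np.column_stack([income, age]))

# Correlated disturbances: AR(1)-style covariance with rho = 0.6.
rho = 0.6
sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
errors = rng.multivariate_normal(np.zeros(n), sigma)
y = X @ np.array([1.0, 0.05, -0.02]) + errors   # latent propensity to choose/purchase

ols_fit = sm.OLS(y, X).fit()
gls_fit = sm.GLS(y, X, sigma=sigma).fit()       # exploits the known error covariance
print("OLS coefficients:", np.round(ols_fit.params, 3))
print("GLS coefficients:", np.round(gls_fit.params, 3))
print("GLS std errors  :", np.round(gls_fit.bse, 3))
```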
Development and application of computational aerothermodynamics flowfield computer codes
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
1993-01-01
Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation provides an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered, governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires the statistical independence of components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
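The short demonstration below makes the same point on synthetic data: PCA, which only decorrelates, mixes two independent sources, while FastICA recovers them. The sources and mixing matrix are toy choices, not geophysical time series.

```python
# PCA versus ICA on a linear mixture of two independent sources.
import numpy as np
from sklearn.decomposition import PCA, FastICA

t = np.linspace(0, 10, 2000)
s1 = np.sign(np.sin(3 * t))                 # square-wave source
s2 = np.sin(7 * t)                          # sinusoidal source
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6], [0.4, 1.0]])      # mixing matrix; observations are weighted sums
X = S @ A.T

pca_sources = PCA(n_components=2).fit_transform(X)
ica_sources = FastICA(n_components=2, random_state=0).fit_transform(X)

def best_abs_corr(est, true):
    """Largest |correlation| of an estimated component with any true source."""
    return max(abs(np.corrcoef(est, s)[0, 1]) for s in true.T)

print("PCA component 1 vs sources:", round(best_abs_corr(pca_sources[:, 0], S), 3))
print("ICA component 1 vs sources:", round(best_abs_corr(ica_sources[:, 0], S), 3))
```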
40 CFR 426.135 - Standards of performance for new sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... greater than 50 gallons per day of process waste water, and employs hydrofluoric acid finishing techniques... any 1 day Average of daily values for 30 consecutive days shall not exceed— Lead 0.2 0.1 Fluoride 26.0... waste water, and employs hydrofluoric acid finishing techniques shall meet the following limitations...
40 CFR 426.135 - Standards of performance for new sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... greater than 50 gallons per day of process waste water, and employs hydrofluoric acid finishing techniques... any 1 day Average of daily values for 30 consecutive days shall not exceed— Lead 0.2 0.1 Fluoride 26.0... waste water, and employs hydrofluoric acid finishing techniques shall meet the following limitations...
40 CFR 426.135 - Standards of performance for new sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... greater than 50 gallons per day of process waste water, and employs hydrofluoric acid finishing techniques... any 1 day Average of daily values for 30 consecutive days shall not exceed— Lead 0.2 0.1 Fluoride 26.0... waste water, and employs hydrofluoric acid finishing techniques shall meet the following limitations...
Managing Age Discrimination: An Examination of the Techniques Used when Seeking Employment
ERIC Educational Resources Information Center
Berger, Ellie D.
2009-01-01
Purpose: This article examines the age-related management techniques used by older workers in their search for employment. Design and Methods: Data are drawn from interviews with individuals aged 45-65 years (N = 30). Results: Findings indicate that participants develop "counteractions" and "concealments" to manage perceived age discrimination.…
Maternal characteristics and immunization status of children in North Central of Nigeria
Adenike, Olugbenga-Bello; Adejumoke, Jimoh; Olufunmi, Oke; Ridwan, Oladejo
2017-01-01
Introduction: Routine immunization coverage in Nigeria is one of the lowest national coverage rates in the world. The objective of this study was to compare mothers' characteristics and the child's immunization status in selected rural and urban communities in the North Central part of Nigeria. Methods: A descriptive cross-sectional study used a multistage sampling technique to select 600 respondent women, each with an index child aged 0-12 months. Results: The mean age of rural respondents was 31.40±7.21 years and that of urban respondents was 32.72±6.77 years, with no statistically significant difference in age between the two locations (p = 0.762). One hundred and ninety-seven (65.7%) rural and 241 (80.3%) urban respondents were aware of immunization; the difference was statistically significant (p = 0.016), and knowledge in urban areas was better than among rural respondents. There was a statistically significant association between respondents' age, employment status, and mothers' educational status and the child's immunization status (P < 0.05), while variables such as parity, age at marriage, marital status, number of children, household income and place of index were not statistically associated with immunization status (P > 0.05). More than half of the rural (179, 59.7%) and urban (207, 69.0%) respondents had good immunization practice, though the difference was not statistically significant (p = 0.165). Conclusion: Immunization coverage in the urban community was better than in the rural community. The results of this study clearly indicate that mothers in Nigeria have improved in taking their children for immunization in both rural and urban areas compared with previous reports. PMID:28588745
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahfuz, H.; Maniruzzaman, M.; Vaidya, U.
1997-04-01
Monotonic tensile and fatigue response of continuous silicon carbide fiber reinforced silicon nitride (SiCf/Si3N4) composites has been investigated. The monotonic tensile tests have been performed at room and elevated temperatures. Fatigue tests have been conducted at room temperature (RT), at a stress ratio R = 0.1 and a frequency of 5 Hz. It is observed during the monotonic tests that the composite retains only 30% of its room temperature strength at 1,600 C, suggesting substantial chemical degradation of the matrix at that temperature. The softening of the matrix at elevated temperature also causes a reduction in tensile modulus, and the total reduction in modulus is around 45%. Fatigue data have been generated at three load levels, and the fatigue strength of the composite has been found to be considerably high, about 75% of its ultimate room temperature strength. Extensive statistical analysis has been performed to understand the degree of scatter in the fatigue as well as in the static test data. Weibull shape factors and characteristic values have been determined for each set of tests, and their relationship with the response of the composites has been discussed. A statistical fatigue life prediction method developed from the Weibull distribution is also presented. A Maximum Likelihood Estimator with censoring techniques and data pooling schemes has been employed to determine the distribution parameters for the statistical analysis. These parameters have been used to generate the S-N diagram with the desired level of reliability. Details of the statistical analysis and a discussion of the static and fatigue behavior of the composites are presented in this paper.
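As a hedged sketch of the Weibull analysis of fatigue-life scatter, the snippet below fits a two-parameter Weibull distribution by maximum likelihood to toy cycle counts and reads off the shape factor, characteristic life, and a high-reliability life estimate. Censoring and data pooling, which the study employs, are omitted for brevity.

```python
# Maximum-likelihood fit of a two-parameter Weibull distribution to toy fatigue lives.
import numpy as np
from scipy import stats

# Simulated fatigue lives (cycles to failure) at one stress level.
lives = stats.weibull_min.rvs(c=2.5, scale=1.2e5, size=20, random_state=9)

shape, loc, scale = stats.weibull_min.fit(lives, floc=0)   # loc fixed at zero
print("Weibull shape factor   :", round(shape, 2))
print("characteristic life    :", round(scale))

# Life with 95% reliability (5th percentile) under the fitted distribution.
print("life at 95% reliability:", round(stats.weibull_min.ppf(0.05, shape, 0, scale)))
```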
ERIC Educational Resources Information Center
Holland, Bart K.
2006-01-01
A generally-educated individual should have some insight into how decisions are made in the very wide range of fields that employ statistical and probabilistic reasoning. Also, students of introductory probability and statistics are often best motivated by specific applications rather than by theory and mathematical development, because most…
Descôteaux, Nancy; Chagnon, Valérie; Di Dong, Xin; Ellemo, Eric; Hamelin, Alessandra; Juste, Evans; Laplante, Xavier; Miron, Allison; Morency, Philippe; Samuel, Katherine; Charles, David; Hunt, Matthew
2018-05-01
This article examines the employment situation and perceptions of graduates from three rehabilitation technician (RT) programs in Haiti. In this mixed method study, 74 of 93 recent graduates completed a questionnaire, and 20 graduates participated in an in-depth qualitative interview. We analyzed survey results using descriptive statistics. We used a qualitative description approach and analyzed the interviews using constant comparative techniques. Of the 48 survey respondents who had completed their training more than six months prior to completing the questionnaire, 30 had found work in the rehabilitation sector. Most of these technicians were working in hospitals in urban settings and the patient population they treated most frequently were patients with neurological conditions. Through the interviews, we explored the participants' motivations for becoming a RT, reflections on the training program, process of finding work, current employment, and plans for the future. An analysis of qualitative and quantitative findings provides insights regarding challenges, including availability of supervision for graduated RTs and the process of seeking remunerated work. This study highlights the need for stakeholders to further engage with issues related to formal recognition of RT training, expectations for supervision of RTs, concerns for the precariousness of their employment, and uncertainty about their professional futures. Implications for Rehabilitation The availability of human resources in the rehabilitation field in Haiti has increased with the implementation of three RT training programs over the past 10 years. RTs who found work in the rehabilitation sector were more likely to work in a hospital setting, in the province where their training had taken place, to treat a diverse patient clientele, and to be employed by a non-governmental organization. The study underlines challenges related to the long-term sustainability of RT training programs, as well as the employment of their graduates. Further discussion and research are needed to identify feasible and effective mechanisms to provide supervision for RTs within the Haitian healthcare system.
Huge Increase in Day-Care Workers: A Result of Multiple Societal Changes.
ERIC Educational Resources Information Center
Bureau of Labor Statistics (DOL), Washington, DC.
Using Bureau of Labor Statistics estimates of employment in day-care establishments, this study analyzes changes in day care over the past 20 years. Growth in day-care employment has been much stronger than that of other industries. Since 1972, employment has increased by nearly 250 percent. Causes of growth include changing trends in enrollment…
The Multiplier Effect of the Development of Forest Park Tourism on Employment Creation in China
ERIC Educational Resources Information Center
Shuifa, Ke; Chenguang, Pan; Jiahua, Pan; Yan, Zheng; Ying, Zhang
2011-01-01
The focus of this article was employment creation by developing forest park tourism industries in China. Analysis of the statistical data and an input-output approach showed that 1 direct job opportunity in tourism industries created 1.15 other job opportunities. In the high, middle, and low scenarios, the total predicted employment in forest park…
ERIC Educational Resources Information Center
Willhide, Robert Jesse
2014-01-01
This report is part of a series of reports that provides information on the structure, function, finances, taxation, employment, and pension systems of the United States' approximately 90,000 state and local governments. This report presents data on state and local government employment and payroll based on information collected by the 2013 Annual…
ERIC Educational Resources Information Center
Wood, Gaynor
2016-01-01
Graduate employment statistics are receiving considerable attention in UK universities. This paper looks at how a wide range of employability attributes can be developed with students, through the innovative use of the Project Based Learning (PjBL) approach. The case study discussed here involves a group of archaeology students from the University…
Kim, Il Ho; Khang, Young Ho; Cho, Sung Il; Chun, Heeran; Muntaner, Carles
2011-01-01
We examined gender differential changes in employment-related health inequalities according to occupational position (professional/nonprofessional) in South Korea during the last decade. Data were taken from four rounds of Social Statistical Surveys of South Korea (1995, 1999, 2003, and 2006) from the Korean National Statistics Office. The total study population was 55,435 male and 33,913 female employees aged 25-64. Employment arrangements were divided into permanent, fixed-term, and daily employment. After stratification according to occupational position (professional/nonprofessional) and gender, different patterns in employment-related health inequalities were observed. In the professional group, the gaps in absolute and relative employment inequalities for poor self-rated health were more likely to widen following Korea's 1997 economic downturn. In the nonprofessional group, during the study period, graded patterns of employment-related health inequalities were continuously observed in both genders. Absolute health inequalities by employment status, however, decreased among men but increased among women. In addition, a remarkable increase in relative health inequalities was found among female temporary and daily employees (p = 0.009 and p < 0.001, respectively), but only among male daily employees (p = 0.001). Relative employment-related health inequalities had clearly widened for female daily workers between 2003 and 2006 (p = 0.047). The 1997 Korean economic downturn, in particular, seemingly stimulated a widening gap in employment health inequalities. Our study revealed that whereas absolute health inequalities in relation to employment status increased in the professional group, relative employment-related health inequalities increased in the nonprofessional group, especially among women. In view of the high concentration of female nonstandard employees, further monitoring of inequality should consider gender-specific patterns according to employees' occupational and employment status.
Testing for Mutagens Using Fruit Flies.
ERIC Educational Resources Information Center
Liebl, Eric C.
1998-01-01
Describes a laboratory employed in undergraduate teaching that uses fruit flies to test student-selected compounds for their ability to cause mutations. Requires no prior experience with fruit flies, incorporates a student design component, and employs both rigorous controls and statistical analyses. (DDR)
iTTVis: Interactive Visualization of Table Tennis Data.
Wu, Yingcai; Lan, Ji; Shu, Xinhuan; Ji, Chenyang; Zhao, Kejian; Wang, Jiachen; Zhang, Hui
2018-01-01
The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data poses a significant challenge to make sense of and gain insights into. Analysts usually employ tedious and cumbersome methods which are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.
NASA Astrophysics Data System (ADS)
Darvishzadeh, R.; Skidmore, A. K.; Mirzaie, M.; Atzberger, C.; Schlerf, M.
2014-12-01
Accurate estimation of grassland biomass at peak productivity can provide crucial information regarding the functioning and productivity of rangelands. Hyperspectral remote sensing has proved to be valuable for estimation of vegetation biophysical parameters such as biomass using different statistical techniques. However, in statistical analysis of hyperspectral data, multicollinearity is a common problem due to the large number of correlated hyperspectral reflectance measurements. The aim of this study was to examine the prospect of above-ground biomass estimation in a heterogeneous Mediterranean rangeland employing multivariate calibration methods. Canopy spectral measurements were made in the field using a GER 3700 spectroradiometer, along with concomitant in situ measurements of above-ground biomass for 170 sample plots. Multivariate calibrations including partial least squares regression (PLSR), principal component regression (PCR), and least-squares support vector machine (LS-SVM) were used to estimate the above-ground biomass. The prediction accuracy of the multivariate calibration methods was assessed using cross-validated R2 and RMSE. The best model performance was obtained using LS-SVM and then PLSR, both calibrated with the first-derivative reflectance dataset, with R2cv = 0.88 and 0.86 and RMSEcv = 1.15 and 1.07, respectively. The weakest prediction accuracy appeared when PCR was used (R2cv = 0.31 and RMSEcv = 2.48). The obtained results highlight the importance of multivariate calibration methods for biomass estimation when hyperspectral data are used.
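A minimal sketch of one of the calibrations described above follows: partial least squares regression relating simulated canopy spectra to biomass, evaluated with cross-validated R2 and RMSE. The spectra, the biomass-sensitive band, and the number of latent components are synthetic stand-ins, not the field dataset.

```python
# Cross-validated PLSR calibration of biomass against simulated canopy reflectance.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(10)
n_plots, n_bands = 170, 300
biomass = rng.uniform(1, 12, n_plots)                         # toy above-ground biomass

bands = np.linspace(400, 2400, n_bands)
response = np.exp(-(bands - 800) ** 2 / 2e4)                  # assumed biomass-sensitive band
X = np.outer(biomass, response) + rng.normal(0, 0.3, (n_plots, n_bands))  # reflectance + noise

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, biomass, cv=10).ravel()
print("cross-validated R2  :", round(r2_score(biomass, pred), 2))
print("cross-validated RMSE:", round(float(np.sqrt(mean_squared_error(biomass, pred))), 2))
```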
Dinkel, Philipp Johannes; Willmes, Klaus; Krinzinger, Helga; Konrad, Kerstin; Koten Jr, Jan Willem
2013-01-01
FMRI-studies are mostly based on a group study approach, either analyzing one group or comparing multiple groups, or on approaches that correlate brain activation with clinically relevant criteria or behavioral measures. In this study we investigate the potential of fMRI-techniques focusing on individual differences in brain activation within a test-retest reliability context. We employ a single-case analysis approach, which contrasts dyscalculic children with a control group of typically developing children. In a second step, support-vector machine and cluster analysis techniques served to investigate similarities in multivariate brain activation patterns. Children were confronted with a non-symbolic number comparison and a non-symbolic exact calculation task during fMRI acquisition. Conventional second level group comparison analysis only showed small differences around the angular gyrus bilaterally and the left parieto-occipital sulcus. Analyses based on single-case statistical procedures revealed that developmental dyscalculia is characterized by individual differences predominantly in visual processing areas. Dyscalculic children seemed to compensate for relative under-activation in the primary visual cortex through an upregulation in higher visual areas. However, overlap in deviant activation was low for the dyscalculic children, indicating that developmental dyscalculia is a disorder characterized by heterogeneous brain activation differences. Using support vector machine analysis and cluster analysis, we tried to group dyscalculic and typically developing children according to brain activation. Fronto-parietal systems seem to qualify for a distinction between the two groups. However, this was only effective when reliable brain activations of both tasks were employed simultaneously. Results suggest that deficits in number representation in the visual-parietal cortex get compensated for through finger related aspects of number representation in fronto-parietal cortex. We conclude that dyscalculic children show large individual differences in brain activation patterns. Nonetheless, the majority of dyscalculic children can be differentiated from controls employing brain activation patterns when appropriate methods are used. PMID:24349547
Long-term monitoring of ULF electromagnetic fields at Parkfield, CA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kappler, K.N.; Morrison, H.F.; Egbert, G.D.
2009-08-01
Electric and magnetic fields in the 10^-4 to 1.0 Hz band were monitored at two sites adjacent to the San Andreas Fault near Parkfield and Hollister, California, from 1995 to the present. A data window [2002-2005], enclosing the September 28, 2004 M6 Parkfield earthquake, was analyzed to determine if anomalous electric or magnetic fields, or changes in ground conductivity, occurred before the earthquake. The data were edited, removing intervals of instrument malfunction and leaving 875 days in the four-year period. Frequent, spike-like disturbances were common, but were not more frequent around the time of the earthquake; these were removed before subsequent processing. Signal-to-noise amplitude spectra, estimated via magnetotelluric processing, showed the behavior of the ULF fields to be remarkably constant over the period of analysis. These first-order plots make clear that most of the recorded energy is coherent over the spatial extent of the array. Three main statistical techniques were employed to separate local anomalous electric or magnetic fields from the dominant coherent natural fields: transfer function estimates between components at each site were employed to subtract the dominant field and look deeper at the 'residual' fields; the data were decomposed into principal components to identify the dominant coherent array modes; and the technique of canonical coherences was employed to distinguish anomalous fields which are spatially broad from anomalies which occur at a single site only, and furthermore to distinguish anomalies which are present in both the electric and magnetic fields from those which are present in only one field type. Standard remote reference apparent resistivity estimates were generated daily at Parkfield. A significant seasonal component of variability was observed, suggesting local distortion due to variations in near-surface resistance. In all cases, high levels of sensitivity to subtle electromagnetic effects were demonstrated, but no effects which can be reasonably characterized as precursors to the Parkfield earthquake were found.
Manjunatha, M; Annapurna, Kini; Sudhakar, V; Sunil Kumar, VC; Hiremath, Vinay Kumar; Shah, Ankur
2013-01-01
Introduction: The aim of any root canal treatment is to achieve a canal free of microorganisms, residual pulp remnants, debris and smear layer for the long-term success of the procedure. Manual and automated instrumentation techniques, along with a proper irrigation regime, are used to arrive at the aforementioned goal. Many authors have focused on the preparation capabilities of various manual and rotary instruments, but very few investigators have stressed the actual cleaning abilities of these instruments. Aims and objectives: This study was undertaken to evaluate the cleaning efficiency of manual K-Flex files and rotary ProFile systems in root canals using a scanning electron microscope. Material and Methods: Thirty single-rooted mandibular first premolars were divided into two groups and randomized (the manual group-M and the ProFile group-P) with respect to the preparation technique. The manual group was hand instrumented with stainless steel K-Flexofiles by means of a conventional filing technique. The ProFile group was instrumented according to the manufacturer's instructions using a rotary handpiece. All canals were shaped and cleaned under frequent irrigation with EDTA. Final irrigation was carried out with 3 mL of normal saline solution to neutralize the action of the irrigant. The roots were split, and one half of each tooth was selected for further SEM analysis and examined under the scanning electron microscope. The canal walls were quantitatively evaluated for the amount of debris and smear layer. The apical, middle and coronal regions of the canal surface were graded (1-5) for debris and smear layer. A statistical analysis was performed using the Mann-Whitney Rank Sum test. The ProFile system performed the least effective cleaning, while manual K-Flexofiles led to a grooved pattern. Results and Conclusion: A statistically significant difference was observed (p<0.05) between the two instrumentation techniques concerning the amount of debris and smear layer at the apical level. The manually filed canals had less debris and smear layer than those prepared with the rotary technique. It was concluded from this study that none of the instrumentation techniques employed produced canal walls that were free of surface debris and smear layer. The manual instrumentation technique was better at cleaning the canals compared with the ProFile rotary Ni-Ti instruments, despite the step-back technique used for manual instrumentation. How to cite this article: Manjunatha M, Kini A, Sudhakar V, Sunil K V C, Hiremath V K, Shah A. Smear Layer Evaluation on Root Canal Preparation with Manual and Rotary Techniques using EDTA as an Irrigant: A Scanning Electron Microscopy Study. J Int Oral Health 2013; 5(1):66-78. PMID:24155580
Resisting the "Employability" Doctrine through Anarchist Pedagogies & Prefiguration
ERIC Educational Resources Information Center
Osborne, Natalie; Grant-Smith, Deanna
2017-01-01
Increasingly those working in higher education are tasked with targeting their teaching approaches and techniques to improve the "employability" of graduates. However, this approach is promoted with little recognition that enhanced employability does not guarantee employment outcomes or the tensions inherent in pursuing this agenda. The…
Family support and exclusive breastfeeding among Yogyakarta mothers in employment.
Ratnasari, Dewi; Paramashanti, Bunga Astria; Hadi, Hamam; Yugistyowati, Anafrin; Astiti, Dewi; Nurhayati, Eka
2017-06-01
Exclusive breastfeeding provides many benefits to both infants and mothers. Despite the introduction of laws aimed at protecting the practice of exclusive breastfeeding, the coverage of exclusive breastfeeding remains low, particularly for working mothers. This cross-sectional study recruited working mothers employed in medium and large companies in Bantul District, Daerah Istimewa Yogyakarta, Indonesia. The study participants were 158 working mothers whose children were aged 6-12 months, and they were selected using the probability proportional to size technique. The data were analyzed using descriptive statistics, chi-square tests, and multiple logistic regression. Adequate family support for breastfeeding (OR: 2.86; 95% CI: 1.25-6.53) and a high paternal education level (OR: 2.68; 95% CI: 1.11-6.48) were significantly associated with the practice of exclusive breastfeeding among working mothers. However, the infant's sex and age, parity, and the mother's age and education level were not associated with exclusive breastfeeding. Family support and a high paternal education level are crucial in enabling working mothers to practice exclusive breastfeeding. Interventions that promote exclusive breastfeeding should focus on involving the husband and other family members in health care programs related to breastfeeding.
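The sketch below illustrates the final analysis step on simulated data: a multiple logistic regression of exclusive breastfeeding on family support and paternal education, summarized as odds ratios with confidence intervals. The simulated sample and coefficients are placeholders, not the Bantul survey data.

```python
# Multiple logistic regression reported as odds ratios, on simulated survey data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 158
family_support = rng.integers(0, 2, n)        # 1 = adequate family support (toy)
paternal_edu = rng.integers(0, 2, n)          # 1 = high paternal education (toy)

# Assumed data-generating model with positive effects of both covariates.
logit = -1.0 + 1.05 * family_support + 0.99 * paternal_edu
ebf = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = exclusive breastfeeding

X = sm.add_constant(np.column_stack([family_support, paternal_edu]))
fit = sm.Logit(ebf, X).fit(disp=False)

odds_ratios = np.exp(fit.params[1:])
conf_int = np.exp(fit.conf_int()[1:])
print("OR (family support):", round(odds_ratios[0], 2), "95% CI:", np.round(conf_int[0], 2))
print("OR (paternal educ.):", round(odds_ratios[1], 2), "95% CI:", np.round(conf_int[1], 2))
```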
NASA Astrophysics Data System (ADS)
Kopsaftopoulos, Fotios; Nardari, Raphael; Li, Yu-Hung; Wang, Pengchuan; Chang, Fu-Kuo
2016-04-01
In this work, the system design, integration, and wind tunnel experimental evaluation are presented for a bioinspired self-sensing intelligent composite unmanned aerial vehicle (UAV) wing. A total of 148 micro-sensors, including piezoelectric, strain, and temperature sensors, in the form of stretchable sensor networks are embedded in the layup of a composite wing in order to enable its self-sensing capabilities. Novel stochastic system identification techniques based on time series models and statistical parameter estimation are employed in order to accurately interpret the sensing data and extract real-time information on the coupled airflow-structural dynamics. Special emphasis is given to the wind tunnel experimental assessment under various flight conditions defined by multiple airspeeds and angles of attack. A novel modeling approach based on the recently introduced Vector-dependent Functionally Pooled (VFP) model structure is employed for the stochastic identification of the "global" coupled airflow-structural dynamics of the wing and their correlation with dynamic flutter and stall. The obtained results demonstrate the successful system-level integration and effectiveness of the stochastic identification approach, thus opening new perspectives for the state sensing and awareness capabilities of the next generation of "fly-by-feel" UAVs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Jade; Nobrega, R. Paul; Schwantes, Christian
The dynamics of globular proteins can be described in terms of transitions between a folded native state and less-populated intermediates, or excited states, which can play critical roles in both protein folding and function. Excited states are by definition transient species, and therefore are difficult to characterize using current experimental techniques. We report an atomistic model of the excited state ensemble of a stabilized mutant of the extensively studied flavodoxin-fold protein CheY. We employed a hybrid simulation and experimental approach in which an aggregate of 42 milliseconds of all-atom molecular dynamics was used as an informative prior for the structure of the excited state ensemble. The resulting prior was then refined against small-angle X-ray scattering (SAXS) data employing an established method (EROS). The most striking feature of the resulting excited state ensemble was an unstructured N-terminus stabilized by non-native contacts in a conformation that is topologically simpler than the native state. Using these results, we then predict incisive single-molecule FRET experiments as a means of model validation. Our study demonstrates the paradigm of uniting simulation and experiment in a statistical model to study the structure of protein excited states and rationally design validating experiments.
Error-in-variables models in calibration
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
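As a numerical sketch of the frequentist side of the EIV problem, the snippet below compares ordinary least squares, which ignores the error in the stimulus, with orthogonal distance regression (scipy.odr), which accounts for errors in both variables. The flow-rate style numbers are invented for illustration and are not the example worked in the paper.

```python
# Error-in-variables calibration: OLS versus orthogonal distance regression.
import numpy as np
from scipy import odr

rng = np.random.default_rng(11)
true_stimulus = np.linspace(1.0, 10.0, 15)          # e.g. reference flow rates
true_slope, true_intercept = 1.05, 0.3

x_obs = true_stimulus + rng.normal(0, 1.0, 15)      # stimulus known only with error
y_obs = true_slope * true_stimulus + true_intercept + rng.normal(0, 0.2, 15)

# Ordinary least squares ignores the error in x; on average this attenuates the slope.
ols_slope, ols_intercept = np.polyfit(x_obs, y_obs, 1)

# Orthogonal distance regression treats both variables as error-prone.
model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.RealData(x_obs, y_obs, sx=1.0, sy=0.2), model, beta0=[1.0, 0.0]).run()

print("true slope:", true_slope)
print("OLS slope :", round(ols_slope, 3))
print("ODR slope :", round(fit.beta[0], 3))
```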
Kleinman, R G; Csongradi, J J; Rinksy, L A; Bleck, E E
1982-01-01
The use of a "prone push" posteroanterior radiograph of the spine was reviewed in 58 patients with scoliosis (82 curves) who underwent Harrington instrumentation and spinal fusion. The technique is previously undescribed and is accomplished by applying manual pressure to the apices of each curve with the patient prone on the X-ray table. The average correction obtained for all 82 curves was 21.1 degrees, as measured on the push films and 21.8 degrees postoperatively. The difference between these values was not statistically significant. The close relationship between push film and immediate postoperative correction was not altered by the location of the curve, the sex or age of the patient, the presence of a single- or double-major curve pattern, the type of instrumentation employed, nor the etiology of the scoliosis. This method is an alternative to the commonly employed supine lateral bending radiographs. An estimate of spinal flexibility is important for determination of structural change in the spine, the rigidity of curves considered for instrumentation, the curves requiring fusion, the length of fusion necessary, and the amount of correction that is safely possible.
NASA Technical Reports Server (NTRS)
Wolf, S. F.; Lipschutz, M. E.
1993-01-01
Multivariate statistical analysis techniques (linear discriminant analysis and logistic regression) can provide powerful discrimination tools which are generally unfamiliar to the planetary science community. Fall parameters were used to identify a group of 17 H chondrites (Cluster 1) that were part of a coorbital stream which intersected Earth's orbit in May from 1855 to 1895 and can be distinguished from all other H chondrite falls. Using multivariate statistical techniques, it was demonstrated that, by a totally different criterion, the labile trace element contents (hence thermal histories) of 13 Cluster 1 meteorites are distinguishable from those of 45 non-Cluster 1 H chondrites. Here, we focus upon the principles of multivariate statistical techniques and illustrate their application using non-meteoritic and meteoritic examples.
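For readers unfamiliar with these discrimination tools, the sketch below applies linear discriminant analysis and logistic regression to a synthetic two-class data set using scikit-learn; the feature matrix and labels are placeholders, not meteorite trace-element measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic two-class data standing in for, e.g., trace-element measurements.
n_per_class, n_features = 60, 5
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features)),
    rng.normal(loc=1.0, scale=1.0, size=(n_per_class, n_features)),
])
y = np.repeat([0, 1], n_per_class)

# Cross-validated accuracy of the two discrimination techniques.
for name, model in [("LDA", LinearDiscriminantAnalysis()),
                    ("Logistic regression", LogisticRegression(max_iter=1000))]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```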
Kimko, Holly; Berry, Seth; O'Kelly, Michael; Mehrotra, Nitin; Hutmacher, Matthew; Sethuraman, Venkat
2017-01-01
The application of modeling and simulation (M&S) methods to improve decision-making was discussed during the Trends & Innovations in Clinical Trial Statistics Conference held in Durham, North Carolina, USA on May 1-4, 2016. Uses of both pharmacometric and statistical M&S were presented during the conference, highlighting the diversity of the methods employed by pharmacometricians and statisticians to address a broad range of quantitative issues in drug development. Five presentations are summarized herein, which cover the development strategy of employing M&S to drive decision-making; European initiatives on best practice in M&S; case studies of pharmacokinetic/pharmacodynamic modeling in regulatory decisions; estimation of exposure-response relationships in the presence of confounding; and the utility of estimating the probability of a correct decision for dose selection when prior information is limited. While M&S has been widely used during the last few decades, it is expected to play an essential role as more quantitative assessments are employed in the decision-making process. Integrating M&S as a tool to compile the totality of evidence collected throughout a drug development program will enable more informed decisions.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., other techniques, such as the use of statistical models or the use of historical data could be..., mathematical techniques should be applied to account for the trends to ensure that the expected annual values... emission patterns, either the most recent representative year(s) could be used or statistical techniques or...
The Use of a Context-Based Information Retrieval Technique
2009-07-01
provided in context. Latent Semantic Analysis (LSA) is a statistical technique for inferring contextual and structural information, and previous studies... (WAIS). LSA, which is also known as latent semantic indexing (LSI), uses a statistical and... In contrast, natural language models apply algorithms that combine statistical information with semantic information.
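As a brief illustration of LSA/LSI as described in the snippet above, the sketch below builds a TF-IDF term-document matrix and reduces it with a truncated SVD; the tiny document collection is invented for demonstration and is not taken from the report.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# A toy document collection (placeholder text).
docs = [
    "statistical techniques for information retrieval",
    "latent semantic indexing of document collections",
    "language models combine statistics with semantics",
    "retrieval of documents using semantic context",
]

# Term-document matrix with TF-IDF weighting.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)

# LSA: project documents into a low-dimensional latent semantic space.
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

# Documents that share latent topics end up close together in this space.
print(cosine_similarity(X_lsa).round(2))
```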
Two Paradoxes in Linear Regression Analysis.
Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong
2016-12-25
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
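One frequently criticized informal practice of this kind is screening covariates by their univariate p-values before fitting a multivariable model; whether this is the specific procedure the authors analyze is an assumption here. The simulation sketch below shows how such screening can discard a covariate that clearly matters in the joint model; the data-generating mechanism is invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000

# A suppressor configuration: x2 has a real effect in the joint model,
# but its marginal (univariate) association with y is engineered to be ~0.
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + np.sqrt(1 - 0.7**2) * rng.normal(size=n)
y = 1.0 * x1 - 0.7 * x2 + rng.normal(size=n)

# Step 1 of the questionable procedure: univariate screening of x2.
uni = sm.OLS(y, sm.add_constant(x2)).fit()
print("univariate p-value for x2:", round(uni.pvalues[1], 3))   # typically large

# The joint model: x2 is clearly associated with y once x1 is included.
joint = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print("joint-model p-value for x2:", joint.pvalues[2])          # typically tiny
```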
[Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].
Golder, W
1999-09-01
To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basic and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basic, and 0.5% advanced statistics. The corresponding figures in 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and in articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors increasingly make use of statistical experts' advice and of statistical software. During the last decade, the use of statistical methods in German radiological journals has improved fundamentally, both quantitatively and qualitatively. Presently, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.
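Since ROC analysis is singled out above as the most common advanced technique, the sketch below computes an ROC curve and its area for a synthetic diagnostic score using scikit-learn; the scores and disease labels are invented for illustration.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)

# Hypothetical reader scores: diseased cases tend to receive higher scores.
labels = np.concatenate([np.zeros(200, dtype=int), np.ones(100, dtype=int)])
scores = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.2, 1.0, 100)])

# ROC curve: sensitivity vs. 1 - specificity across all score thresholds.
fpr, tpr, thresholds = roc_curve(labels, scores)
auc = roc_auc_score(labels, scores)
print(f"area under the ROC curve: {auc:.2f}")
```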