Sample records for statistical object modeling

  1. Statistical Models of At-Grade Intersection Accidents. Addendum.

    DOT National Transportation Integrated Search

    2000-03-01

    This report is an addendum to the work published in FHWA-RD-96-125 titled Statistical Models of At-Grade Intersection Accidents. The objective of both research studies was to develop statistical models of the relationship between traffic accide...

  2. A critique of Rasch residual fit statistics.

    PubMed

    Karabatsos, G

    2000-01-01

    In test analysis involving the Rasch model, a large degree of importance is placed on the "objective" measurement of individual abilities and item difficulties. The degree to which the objectivity properties are attained, of course, depends on the degree to which the data fit the Rasch model. It is therefore important to utilize fit statistics that accurately and reliably detect the person-item response inconsistencies that threaten the measurement objectivity of persons and items. Given this argument, it is somewhat surprising that there is far more emphasis placed on the objective measurement of persons and items than there is on the measurement quality of Rasch fit statistics. This paper provides a critical analysis of the residual fit statistics of the Rasch model, arguably the most often used fit statistics, in an effort to illustrate that the task of Rasch fit analysis is not as simple and straightforward as it appears to be. The faulty statistical properties of the residual fit statistics do not allow either a convenient or a straightforward approach to Rasch fit analysis. For instance, given a residual fit statistic, the use of a single minimum critical value for misfit diagnosis across different testing situations, where the situations vary in sample and test properties, leads to both the overdetection and underdetection of misfit. To improve this situation, it is argued that psychometricians need to implement residual-free Rasch fit statistics that are based on the number of Guttman response errors, or use indices that are statistically optimal in detecting measurement disturbances.
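
    To make the statistics under critique concrete, here is a minimal sketch (a toy example with simulated dichotomous data and known person and item parameters, not code from the paper) of the standard unweighted (outfit) and information-weighted (infit) mean-square residual fit statistics.

```python
import numpy as np

def rasch_residual_fit(responses, abilities, difficulties):
    """Outfit and infit mean-square statistics per item for 0/1 response data."""
    theta = abilities[:, None]
    beta = difficulties[None, :]
    p = 1.0 / (1.0 + np.exp(-(theta - beta)))   # Rasch expected scores
    w = p * (1.0 - p)                           # model variance of each response
    z2 = (responses - p) ** 2 / w               # squared standardized residuals
    outfit = z2.mean(axis=0)                    # unweighted mean square
    infit = ((responses - p) ** 2).sum(axis=0) / w.sum(axis=0)  # information-weighted mean square
    return outfit, infit

# Simulated model-consistent data: both statistics should cluster near 1.0.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 500)
beta = np.linspace(-2.0, 2.0, 20)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
x = (rng.random(p.shape) < p).astype(float)
outfit, infit = rasch_residual_fit(x, theta, beta)
print(outfit.round(2))
print(infit.round(2))
```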

  3. Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.

    PubMed

    Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira

    2016-01-01

    Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated the sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, a global sampling model with sampling noise, and a limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
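
    For intuition, the following toy simulation (an assumed setup, not the authors' experiment) contrasts the three observer models compared above by the error each makes in estimating the average size of an item set.

```python
import numpy as np

rng = np.random.default_rng(11)
n_trials, set_size = 10000, 8
items = rng.normal(1.0, 0.2, (n_trials, set_size))   # item sizes on each trial
true_mean = items.mean(axis=1)

# Global sampling without noise: average every item exactly.
global_est = items.mean(axis=1)
# Global sampling with noise: average every item, each encoded with noise.
noisy_est = (items + rng.normal(0.0, 0.1, items.shape)).mean(axis=1)
# Limited sampling: average only two randomly chosen items (with replacement, for simplicity).
subset = rng.integers(0, set_size, (n_trials, 2))
limited_est = np.take_along_axis(items, subset, axis=1).mean(axis=1)

for name, est in [("global", global_est), ("global + noise", noisy_est), ("limited", limited_est)]:
    print(name, round(float(np.mean((est - true_mean) ** 2)), 5))  # mean squared error vs. the set average
```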

  4. Statistical Learning Is Constrained to Less Abstract Patterns in Complex Sensory Input (but not the Least)

    PubMed Central

    Emberson, Lauren L.; Rubinstein, Dani

    2016-01-01

    The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1—dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study are already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779

  5. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
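
    For reference, a minimal sketch (hypothetical numbers, not the study data) of two of the goodness-of-fit measures discussed above, the Nash-Sutcliffe efficiency and the regression R², both of which have an ideal value of one.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of residual to observed variance."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Square of the linear correlation between observations and predictions."""
    return float(np.corrcoef(obs, sim)[0, 1] ** 2)

obs = np.array([12.0, 30.5, 22.1, 8.7, 15.3, 40.2])   # hypothetical monthly runoff depths (mm)
sim = np.array([10.4, 28.0, 25.6, 9.9, 13.1, 35.8])   # hypothetical model predictions (mm)
print(round(nash_sutcliffe(obs, sim), 2), round(r_squared(obs, sim), 2))
```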

  6. Categorical data processing for real estate objects valuation using statistical analysis

    NASA Astrophysics Data System (ADS)

    Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.

    2018-05-01

    Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.

  7. Visualization of the variability of 3D statistical shape models by animation.

    PubMed

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects and the knowledge about their statistical variability are of great benefit in many computer assisted medical applications like image analysis, therapy or surgery planning. Statistical models of shape have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.

  8. Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

    NASA Astrophysics Data System (ADS)

    Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato

    2017-02-01

    Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of an objective function by using the change in the objective function in response to the perturbation. The value of the objective function for an unperturbed output is called a baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict learning dynamics. We also show how to derive the generalization error by solving the differential equations of order parameters. On the basis of the results, we show that Cho's results also apply in general cases and show some general performances of Cho's model.

  9. Real-world visual statistics and infants' first-learned object names

    PubMed Central

    Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen

    2017-01-01

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373

  10. Detecting temporal change in freshwater fisheries surveys: statistical power and the important linkages between management questions and monitoring objectives

    USGS Publications Warehouse

    Wagner, Tyler; Irwin, Brian J.; Bence, James R.; Hayes, Daniel B.

    2016-01-01

    Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
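
    The power question can be made concrete with a small simulation; the sketch below (an assumed setup, not the authors' monitoring data or models) estimates the probability that a simple linear-regression test detects a trend of a given size over a monitoring horizon.

```python
import numpy as np
from scipy import stats

def trend_power(n_years, slope, sigma, n_sims=2000, alpha=0.05, seed=1):
    """Monte Carlo power of a linear-regression trend test on an annual index."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    rejections = 0
    for _ in range(n_sims):
        y = slope * years + rng.normal(0.0, sigma, n_years)  # index with year-to-year noise
        rejections += stats.linregress(years, y).pvalue < alpha
    return rejections / n_sims

# Power to detect a 2%-per-year change against 10% year-to-year noise:
print(trend_power(n_years=10, slope=0.02, sigma=0.10))
print(trend_power(n_years=20, slope=0.02, sigma=0.10))  # longer series, higher power
```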

  11. Improved analyses using function datasets and statistical modeling

    Treesearch

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  12. Visual shape perception as Bayesian inference of 3D object-centered shape representations.

    PubMed

    Erdogan, Goker; Jacobs, Robert A

    2017-11-01

    Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
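
    For context, the sketch below (a Gaussian toy problem, not the MRI case study and not the SDIO itself) implements the Hotelling observer used as the benchmark in the comparison: a linear template formed from the background covariance and the known signal, scored by its detection SNR.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_images = 64, 5000
signal = np.zeros(n_pix)
signal[28:36] = 0.5                                   # known signal profile
K = 0.2 * np.eye(n_pix) + 0.05                        # correlated background covariance

bg = rng.multivariate_normal(np.zeros(n_pix), K, n_images)
g_absent = bg                                         # signal-absent images
g_present = bg + signal                               # signal-present images

w = np.linalg.solve(K, signal)                        # Hotelling template K^{-1} s
t_absent = g_absent @ w
t_present = g_present @ w
snr = (t_present.mean() - t_absent.mean()) / np.sqrt(0.5 * (t_present.var() + t_absent.var()))
print(round(float(snr), 2))                           # task-based figure of merit
```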

  14. Real-world visual statistics and infants' first-learned object names.

    PubMed

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  15. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. Objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.
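
    As a simple illustration of the kind of objective statistics such a verification produces, the sketch below (synthetic forecast-observation pairs, not 45 WS data) computes the bias and RMSE of a forecast temperature against observations.

```python
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(0, 85, 6)
obs = 25.0 + 3.0 * np.sin(2 * np.pi * hours / 24) + rng.normal(0.0, 0.5, hours.size)
fcst = obs + 0.8 + rng.normal(0.0, 1.0, hours.size)   # a model with a warm bias

bias = float(np.mean(fcst - obs))                     # mean error
rmse = float(np.sqrt(np.mean((fcst - obs) ** 2)))     # root-mean-square error
print(round(bias, 2), round(rmse, 2))
```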

  16. Detection of reflecting surfaces by a statistical model

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Chee-Hung H.

    2009-02-01

    Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. How to automatically extract useful features and segment interesting objects from digital images, including remote sensing imagery, becomes a critical task for image understanding. Unfortunately, current research on automated feature extraction is ignorant of contextual information. As a result, the fidelity of populating attributes corresponding to interesting features and objects cannot be satisfied. In this paper, we present an exploration of meaningful object extraction integrating reflecting surfaces. Detection of specular reflecting surfaces can be useful in target identification and can then be applied to environmental monitoring, disaster prediction and analysis, military, and counter-terrorism. Our method is based on a statistical model that captures the statistical properties of specular reflecting surfaces; the reflecting surfaces are then detected through cluster analysis.

  17. Secular Extragalactic Parallax and Geometric Distances with Gaia Proper Motions

    NASA Astrophysics Data System (ADS)

    Paine, Jennie; Darling, Jeremiah K.

    2018-06-01

    The motion of the Solar System with respect to the cosmic microwave background (CMB) rest frame creates a well measured dipole in the CMB, which corresponds to a linear solar velocity of about 78 AU/yr. This motion causes relatively nearby extragalactic objects to appear to move compared to more distant objects, an effect that can be measured in the proper motions of nearby galaxies. An object at 1 Mpc and perpendicular to the CMB apex will exhibit a secular parallax, observed as a proper motion, of 78 µas/yr. The relatively large peculiar motions of galaxies make the detection of secular parallax challenging for individual objects. Instead, a statistical parallax measurement can be made for a sample of objects with proper motions, where the global parallax signal is modeled as an E-mode dipole that diminishes linearly with distance. We present preliminary results of applying this model to a sample of nearby galaxies with Gaia proper motions to detect the statistical secular parallax signal. The statistical measurement can be used to calibrate the canonical cosmological “distance ladder.”
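
    The quoted figures follow from a few lines of arithmetic; the sketch below (standard constants, not survey data) converts the Sun's CMB-frame velocity to AU/yr and derives the corresponding secular parallax for an object 1 Mpc away perpendicular to the CMB apex.

```python
# Solar velocity with respect to the CMB, expressed as a baseline growth rate.
v_sun_km_s = 369.0
seconds_per_year = 3.156e7
au_km = 1.496e8
baseline_au_per_yr = v_sun_km_s * seconds_per_year / au_km
print(round(baseline_au_per_yr, 1))          # ~78 AU/yr

# A 1 AU baseline at 1 pc subtends 1 arcsec, so a 78 AU/yr baseline at 1 Mpc
# (10^6 pc) subtends ~78 microarcseconds per year, falling off as 1/distance.
distance_pc = 1.0e6
parallax_uas_per_yr = baseline_au_per_yr / distance_pc * 1e6
print(round(parallax_uas_per_yr, 1))         # ~78 µas/yr
```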

  18. Modeling of LEO Orbital Debris Populations in Centimeter and Millimeter Size Regimes

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Hill, . M.; Horstman, M.; Krisko, P. H.; Liou, J.-C.; Matney, M.; Stansbery, E. G.

    2010-01-01

    The building of the NASA Orbital Debris Engineering Model, whether ORDEM2000 or its recently updated version ORDEM2010, uses as its foundation a number of model debris populations, each truncated at a minimum object-size ranging from 10 micron to 1 m. This paper discusses the development of the ORDEM2010 model debris populations in LEO (low Earth orbit), focusing on centimeter (smaller than 10 cm) and millimeter size regimes. Primary data sets used in the statistical derivation of the cm- and mm-size model populations are from the Haystack radar operated in a staring mode. Unlike cataloged objects of sizes greater than approximately 10 cm, ground-based radars monitor smaller-size debris only in a statistical manner instead of tracking every piece. The mono-static Haystack radar can detect debris as small as approximately 5 mm at moderate LEO altitudes. Estimation of millimeter debris populations (for objects smaller than approximately 6 mm) rests largely on Goldstone radar measurements. The bi-static Goldstone radar can detect 2- to 3-mm objects. The modeling of the cm- and mm-debris populations follows the general approach to developing other ORDEM2010-required model populations for various components and types of debris. It relies on appropriate reference populations to provide necessary prior information on the orbital structures and other important characteristics of the debris objects. NASA's LEO-to-GEO Environment Debris (LEGEND) model is capable of furnishing such reference populations in the desired size range. A Bayesian statistical inference process, commonly adopted in ORDEM2010 model-population derivations, changes a priori distribution into a posteriori distribution and thus refines the reference populations in terms of data. This paper describes key elements and major steps in the statistical derivations of the cm- and mm-size debris populations and presents results. Due to lack of data for near 1-mm sizes, the model populations of 1- to 3.16-mm objects are an empirical extension from larger debris. The extension takes into account the results of micro-debris (from 10 micron to 1 mm) population modeling that is based on shuttle impact data, in the hope of making a smooth transition between micron and millimeter size regimes. This paper also includes a brief discussion on issues and potential future work concerning the analysis and interpretation of Goldstone radar data.

  19. The Nature of Objectivity with the Rasch Model.

    ERIC Educational Resources Information Center

    Whitely, Susan E.; Dawis, Rene V.

    Although it has been claimed that the Rasch model leads to a higher degree of objectivity in measurement than has been previously possible, this model has had little impact on test development. Population-invariant item and ability calibrations along with the statistical equivalency of any two item subsets are supposedly possible if the item pool…

  20. Self-organized network of fractal-shaped components coupled through statistical interaction.

    PubMed

    Ugajin, R

    2001-09-01

    A dissipative dynamics is introduced to generate self-organized networks of interacting objects, which we call coupled-fractal networks. The growth model is constructed based on a growth hypothesis in which the growth rate of each object is a product of the probability of receiving source materials from far away and the probability of receiving adhesives from other grown objects, where each object grows to be a random fractal if isolated, but connects with others if glued. The network is governed by the statistical interaction between fractal-shaped components, which can only be identified in a statistical manner over ensembles. This interaction is investigated using the degree of correlation between fractal-shaped components, enabling us to determine whether it is attractive or repulsive.

  1. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data, to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.

  2. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit (AMU; Bauman et al., 2004) to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature (T) and dew point (Td), as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network shown in Table 1. These objective statistics give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  3. Environmental Interfaces in Teaching Economic Statistics

    ERIC Educational Resources Information Center

    Campos, Celso; Wodewotzki, Maria Lucia; Jacobini, Otavio; Ferrira, Denise

    2016-01-01

    The objective of this article is, based on the Critical Statistics Education assumptions, to value some environmental interfaces in teaching Statistics by modeling projects. Due to this, we present a practical case, one in which we address an environmental issue, placed in the context of the teaching of index numbers, within the Statistics…

  4. RooStatsCms: A tool for analysis modelling, combination and statistical studies

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Schott, G.; Quast, G.

    2010-04-01

    RooStatsCms is an object oriented statistical framework based on the RooFit technology. Its scope is to allow the modelling, statistical analysis and combination of multiple search channels for new phenomena in High Energy Physics. It provides a variety of methods described in literature implemented as classes, whose design is oriented to the execution of multiple CPU intensive jobs on batch systems or on the Grid.

  5. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
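
    As an illustration of the selection procedure, the sketch below (simulated overdispersed counts with excess zeros, not the insect datasets) fits two candidate count models with statsmodels and compares them by AIC and BIC; zero-inflated variants in statsmodels.discrete.count_model can be compared the same way.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
lam = rng.gamma(shape=1.0, scale=2.0, size=n)      # heterogeneity produces overdispersion
y = rng.poisson(lam)
y[rng.random(n) < 0.4] = 0                         # excess zeros, as in the field counts
X = np.ones((n, 1))                                # intercept-only design

poisson_fit = sm.Poisson(y, X).fit(disp=0)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=0)

# Lower AIC/BIC indicates the better-supported model.
print("Poisson           AIC, BIC:", round(poisson_fit.aic, 1), round(poisson_fit.bic, 1))
print("Negative binomial AIC, BIC:", round(negbin_fit.aic, 1), round(negbin_fit.bic, 1))
```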

  6. Evaluation of the 29-km Eta Model. Part 1; Objective Verification at Three Selected Stations

    NASA Technical Reports Server (NTRS)

    Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)

    1998-01-01

    This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are discussed in the companion paper by Manobianco and Nutter. Overall verification results presented here and in part two should establish a reasonable benchmark from which model users and developers may pursue the ongoing eta model verification strategies in the future.

  7. Enhanced detection and visualization of anomalies in spectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.; Messinger, David W.

    2009-05-01

    Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
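
    For reference, the sketch below (a random synthetic cube, not HYDICE imagery) implements the global RX detector mentioned as a baseline: each pixel is scored by the Mahalanobis distance of its spectrum from the scene background statistics.

```python
import numpy as np

rng = np.random.default_rng(9)
rows, cols, bands = 64, 64, 30
cube = rng.normal(0.0, 1.0, (rows, cols, bands))
cube[40, 40] += 4.0                                   # one anomalous (man-made) pixel

pixels = cube.reshape(-1, bands)
mu = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
d = pixels - mu
rx = np.einsum("ij,jk,ik->i", d, cov_inv, d).reshape(rows, cols)  # Mahalanobis distance per pixel
print(np.unravel_index(rx.argmax(), rx.shape))        # highest score at the anomalous pixel (40, 40)
```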

  8. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  9. Statistics of high-level scene context.

    PubMed

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.
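
    To make the bag of words level concrete, the sketch below (toy scenes, not the LabelMe annotations) represents each scene by its object counts and trains a linear classifier to predict the scene category.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each scene is a bag of words: counts of labeled objects, ignoring their locations.
scenes = [
    {"stove": 1, "sink": 1, "cabinet": 3},
    {"bed": 1, "lamp": 2, "pillow": 2},
    {"stove": 1, "counter": 2, "pot": 1},
    {"bed": 1, "dresser": 1, "window": 1},
]
labels = ["kitchen", "bedroom", "kitchen", "bedroom"]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(scenes)                       # scenes x object-count matrix
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(vec.transform([{"sink": 1, "pot": 2, "cabinet": 1}])))
```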

  10. A multi-object statistical atlas adaptive for deformable registration errors in anomalous medical image segmentation

    NASA Astrophysics Data System (ADS)

    Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.

    2017-02-01

    Statistical Atlases have played an important role towards automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted for by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as final segmentation. In this paper, we improve its delineation algorithm and extend the model to be a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres, and the cerebellum, without the brainstem, and evaluate it on MR-T1 images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show efficiency gain with statistically significant higher accuracy, using the mean Average Symmetric Surface Distance, with respect to the original approach.

  11. Three Dimensional Object Recognition Using an Unsupervised Neural Network: Understanding the Distinguishing Features

    DTIC Science & Technology

    1992-12-23

    predominance of structural models of recognition, of which a recent example is the Recognition By Components (RBC) theory (Biederman, 1987). Structural... related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived from a biologically motivated computational theory (Bienenstock et... dimensional object recognition (Intrator and Gold, 1991). The method is related to recent statistical theory (Huber, 1985; Friedman, 1987) and is derived

  12. A cloud and radiation model-based algorithm for rainfall retrieval from SSM/I multispectral microwave measurements

    NASA Technical Reports Server (NTRS)

    Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.

    1992-01-01

    A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.

  13. A 3D object-based model to simulate highly-heterogeneous, coarse, braided river deposits

    NASA Astrophysics Data System (ADS)

    Huber, E.; Huggenberger, P.; Caers, J.

    2016-12-01

    There is a critical need in hydrogeological modeling for geologically more realistic representation of the subsurface. Indeed, widely-used representations of the subsurface heterogeneity based on smooth basis functions such as cokriging or the pilot-point approach fail at reproducing the connectivity of high permeable geological structures that control subsurface solute transport. To realistically model the connectivity of high permeable structures of coarse, braided river deposits, multiple-point statistics and object-based models are promising alternatives. We therefore propose a new object-based model that, according to a sedimentological model, mimics the dominant processes of floodplain dynamics. Contrary to existing models, this object-based model possesses the following properties: (1) it is consistent with field observations (outcrops, ground-penetrating radar data, etc.), (2) it allows different sedimentological dynamics to be modeled that result in different subsurface heterogeneity patterns, (3) it is light in memory and computationally fast, and (4) it can be conditioned to geophysical data. In this model, the main sedimentological elements (scour fills with open-framework-bimodal gravel cross-beds, gravel sheet deposits, open-framework and sand lenses) and their internal structures are described by geometrical objects. Several spatial distributions are proposed that allow the horizontal position of the objects on the floodplain, as well as the net rate of sediment deposition, to be simulated. The model is grid-independent and any vertical section can be computed algebraically. Furthermore, model realizations can serve as training images for multiple-point statistics. The significance of this model is shown by its impact on the subsurface flow distribution that strongly depends on the sedimentological dynamics modeled. The code will be provided as a free and open-source R-package.

  14. Adopting adequate leaching requirement for practical response models of basil to salinity

    NASA Astrophysics Data System (ADS)

    Babazadeh, Hossein; Tabrizi, Mahdi Sarai; Darvishi, Hossein Hassanpour

    2016-07-01

    Several mathematical models are being used for assessing plant response to salinity of the root zone. Objectives of this study included quantifying the yield salinity threshold value of basil plants to irrigation water salinity and investigating the possibilities of using irrigation water salinity instead of saturated extract salinity in the available mathematical models for estimating yield. To achieve the above objectives, an extensive greenhouse experiment was conducted with 13 irrigation water salinity levels, namely 1.175 dS m-1 (control treatment) and 1.8 to 10 dS m-1. The results indicated that, among these models, the modified discount model (one of the most famous root water uptake models, which is based on statistics) produced more accurate results in simulating the basil yield reduction function using irrigation water salinities. Overall, the statistical model of Steppuhn et al. on the modified discount model and the math-empirical model of van Genuchten and Hoffman provided the best results. In general, all of the statistical models produced very similar results and their results were better than math-empirical models. It was also concluded that if enough leaching was present, there was no significant difference between the soil salinity saturated extract models and the models using irrigation water salinity.

  15. Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging

    NASA Astrophysics Data System (ADS)

    Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.

    2008-12-01

    Detection of buried underwater objects, and especially mines, is a current crucial strategic task. Images provided by sonar systems able to penetrate the sea floor, such as synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low and advanced information processing is required for a correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using the belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The interest of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performances and to validate the method.
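
    As an illustration of the input features, the sketch below (a synthetic speckle image, not SAS data) computes local first- to fourth-order statistics, i.e. mean, variance, skewness and kurtosis, over a sliding window.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_statistics(img, size=9):
    """Local mean, variance, skewness and kurtosis from windowed raw moments."""
    m1 = uniform_filter(img, size)
    m2 = uniform_filter(img ** 2, size)
    m3 = uniform_filter(img ** 3, size)
    m4 = uniform_filter(img ** 4, size)
    var = m2 - m1 ** 2
    mu3 = m3 - 3 * m1 * m2 + 2 * m1 ** 3                     # third central moment
    mu4 = m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4  # fourth central moment
    eps = 1e-12
    return m1, var, mu3 / np.power(var + eps, 1.5), mu4 / (var + eps) ** 2

rng = np.random.default_rng(0)
sonar = rng.rayleigh(1.0, size=(128, 128))     # speckle-like background
sonar[60:68, 60:68] += 2.0                     # weak echo from a buried object
mean, var, skew, kurt = local_statistics(sonar)
print(float(mean[64, 64]) > float(mean[10, 10]))   # the echo stands out in the local statistics
```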

  16. Combining inventories of land cover and forest resources with prediction models and remotely sensed data

    Treesearch

    Raymond L. Czaplewski

    1989-01-01

    It is difficult to design systems for national and global resource inventory and analysis that efficiently satisfy changing, and increasingly complex objectives. It is proposed that individual inventory, monitoring, modeling, and remote sensing systems be specialized to achieve portions of the objectives. These separate systems can be statistically linked to accomplish...

  17. Reentry survivability modeling

    NASA Astrophysics Data System (ADS)

    Fudge, Michael L.; Maher, Robert L.

    1997-10-01

    Statistical methods for expressing the impact risk posed to space systems in general [and the International Space Station (ISS) in particular] by other resident space objects have been examined. One of the findings of this investigation is that there are legitimate physical modeling reasons for the common statistical expression of the collision risk. A combination of statistical methods and physical modeling is also used to express the impact risk posed by re-entering space systems to objects of interest (e.g., people and property) on Earth. One of the largest uncertainties in expressing this risk is the estimation of the material that survives reentry to impact Earth's surface. This point was recently demonstrated in dramatic fashion by the impact of an intact expendable launch vehicle (ELV) upper stage near a private residence in the continental United States. Since approximately half of the missions supporting ISS will utilize ELVs, it is appropriate to examine the methods used to estimate the amount and physical characteristics of ELV debris surviving reentry to impact Earth's surface. This paper examines reentry survivability estimation methodology, including the specific methodology used by Caiman Sciences' 'Survive' model. Comparison between empirical results (observations of objects which have been recovered on Earth after surviving reentry) and Survive estimates are presented for selected upper stage or spacecraft components and a Delta launch vehicle second stage.

  18. OSSOS: X. How to use a Survey Simulator: Statistical Testing of Dynamical Models Against the Real Kuiper Belt

    NASA Astrophysics Data System (ADS)

    Lawler, Samantha M.; Kavelaars, J. J.; Alexandersen, Mike; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Shankman, Cory

    2018-05-01

    All surveys include observational biases, which makes it impossible to directly compare properties of discovered trans-Neptunian Objects (TNOs) with dynamical models. However, by carefully keeping track of survey pointings on the sky, detection limits, tracking fractions, and rate cuts, the biases from a survey can be modelled in Survey Simulator software. A Survey Simulator takes an intrinsic orbital model (from, for example, the output of a dynamical Kuiper belt emplacement simulation) and applies the survey biases, so that the biased simulated objects can be directly compared with real discoveries. This methodology has been used with great success in the Outer Solar System Origins Survey (OSSOS) and its predecessor surveys. In this chapter, we give four examples of ways to use the OSSOS Survey Simulator to gain knowledge about the true structure of the Kuiper Belt. We demonstrate how to statistically compare different dynamical model outputs with real TNO discoveries, how to quantify detection biases within a TNO population, how to measure intrinsic population sizes, and how to use upper limits from non-detections. We hope this will provide a framework for dynamical modellers to statistically test the validity of their models.

  19. Smaller = Denser, and the Brain Knows It: Natural Statistics of Object Density Shape Weight Expectations

    PubMed Central

    Peters, Megan A. K.; Balzer, Jonathan; Shams, Ladan

    2015-01-01

    If one nondescript object’s volume is twice that of another, is it necessarily twice as heavy? As larger objects are typically heavier than smaller ones, one might assume humans use such heuristics in preparing to lift novel objects if other informative cues (e.g., material, previous lifts) are unavailable. However, it is also known that humans are sensitive to statistical properties of our environments, and that such sensitivity can bias perception. Here we asked whether statistical regularities in properties of liftable, everyday objects would bias human observers’ predictions about objects’ weight relationships. We developed state-of-the-art computer vision techniques to precisely measure the volume of everyday objects, and also measured their weight. We discovered that for liftable man-made objects, “twice as large” doesn’t mean “twice as heavy”: Smaller objects are typically denser, following a power function of volume. Interestingly, this “smaller is denser” relationship does not hold for natural or unliftable objects, suggesting some ideal density range for objects designed to be lifted. We then asked human observers to predict weight relationships between novel objects without lifting them; crucially, these weight predictions quantitatively match typical weight relationships shown by similarly-sized objects in everyday environments. These results indicate that the human brain represents the statistics of everyday objects and that this representation can be quantitatively abstracted and applied to novel objects. Finally, that the brain possesses and can use precise knowledge of the nonlinear association between size and weight carries important implications for implementation of forward models of motor control in artificial systems. PMID:25768977
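
    The reported relationship can be estimated with a simple log-log regression; the sketch below (made-up measurements and an assumed exponent, not the authors' object database) recovers a power-law exponent linking density to volume.

```python
import numpy as np

rng = np.random.default_rng(3)
volume = np.exp(rng.uniform(np.log(1e-4), np.log(1e-2), 200))           # m^3, liftable range
true_b = -0.33                                                           # assumed exponent: smaller is denser
density = 50.0 * volume ** true_b * np.exp(rng.normal(0.0, 0.2, 200))    # kg/m^3 with scatter

# density = a * volume^b becomes a straight line in log-log space.
slope, intercept = np.polyfit(np.log(volume), np.log(density), 1)
print(round(float(slope), 2))    # recovers b; weight then scales as volume^(1 + b)
```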

  20. From fields to objects: A review of geographic boundary analysis

    NASA Astrophysics Data System (ADS)

    Jacquez, G. M.; Maruca, S.; Fortin, M.-J.

    Geographic boundary analysis is a relatively new approach unfamiliar to many spatial analysts. It is best viewed as a technique for defining objects - geographic boundaries - on spatial fields, and for evaluating the statistical significance of characteristics of those boundary objects. This is accomplished using null spatial models representative of the spatial processes expected in the absence of boundary-generating phenomena. Close ties to the object-field dialectic eminently suit boundary analysis to GIS data. The majority of existing spatial methods are field-based in that they describe, estimate, or predict how attributes (variables defining the field) vary through geographic space. Such methods are appropriate for field representations but not object representations. As the object-field paradigm gains currency in geographic information science, appropriate techniques for the statistical analysis of objects are required. The methods reviewed in this paper are a promising foundation. Geographic boundary analysis is clearly a valuable addition to the spatial statistical toolbox. This paper presents the philosophy of, and motivations for geographic boundary analysis. It defines commonly used statistics for quantifying boundaries and their characteristics, as well as simulation procedures for evaluating their significance. We review applications of these techniques, with the objective of making this promising approach accessible to the GIS-spatial analysis community. We also describe the implementation of these methods within geographic boundary analysis software: GEM.

  1. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition. PMID:24194723

  2. Inverse modeling with RZWQM2 to predict water quality

    USDA-ARS?s Scientific Manuscript database

    Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...

  3. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weight of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" for input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. Problems therefore arise when one has to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to i) evaluate the constraints on the input parameters and ii) test the validity of different existing inversion methods. The approach particularly focuses on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as a granite - and faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to ensure a-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). The distance of points with respect to a given type of object (polygons or polylines) is then given by a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that i) the mean polygon surface, the mean polyline length, the number of objects, and their clustering are critical, and ii) the validity of the different tested inversion methods depends strongly on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of the inferred point-distribution laws with respect to the quality of the input data set.

  4. Applying Objective Diagnostic Criteria to Students in a College Support Program for Learning Disabilities

    ERIC Educational Resources Information Center

    Sparks, Richard L.; Lovett, Benjamin J.

    2013-01-01

    This study examined whether a large group of postsecondary students participating in a support program for students classified as having learning disabilities (LD) met criteria for five objective diagnostic models for LD: IQ-achievement discrepancy (1.0 SD, 1.5 SD, and greater than 2.0 SD) models, a "Diagnostic and Statistical Manual of…

  5. Statistical Methodologies to Integrate Experimental and Computational Research

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.

  6. Parameterization of sparse vegetation in thermal images of natural ground landscapes

    NASA Astrophysics Data System (ADS)

    Agassi, Eyal; Ben-Yosef, Nissim

    1997-10-01

    The radiant statistics of thermal images of desert terrain scenes and their temporal behavior have been fully understood and well modeled. Unlike desert scenes, most natural terrestrial landscapes contain vegetative objects. A plant is a living object that regulates its temperature through evapotranspiration of leaf stomata, and plant interaction with the outside world is influenced by its physiological processes. Therefore, the heat balance equation for a vegetative object differs from that for an inorganic surface element. Despite this difficulty, plants can be incorporated into the desert surface model when an effective heat conduction parameter is associated with vegetation. Due to evapotranspiration, the effective heat conduction of plants during daytime is much higher than at night. As a result, plants (mainly trees and bushes) are usually the coldest objects in the scene in the daytime while they are not necessarily the warmest objects at night. The parameterization of vegetative objects in terms of effective heat conduction enables the extension of the desert terrain model to scenes with sparse vegetation and the estimation of their radiant statistics and their diurnal behavior. The effective heat conduction image can serve as a tool for vegetation type classification and assessment of the dominant physical process that determines thermal image properties.

  7. Automatic identification of bacterial types using statistical imaging methods

    NASA Astrophysics Data System (ADS)

    Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon

    2003-05-01

    The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology is enabling an increase in the number of phages used for typing. The statistical methodology presented in this work provides for an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.

  8. Standard Entropy of Crystalline Iodine from Vapor Pressure Measurements: A Physical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Harris, Ronald M.

    1978-01-01

    Presents material dealing with an application of statistical thermodynamics to the diatomic solid I2(s). The objective is to enhance the student's appreciation of the power of the statistical formulation of thermodynamics. The simple Einstein model is used. (Author/MA)

  9. Application of meandering centreline migration modelling and object-based approach of Long Nab member

    NASA Astrophysics Data System (ADS)

    Saadi, Saad

    2017-04-01

    Characterizing the complexity and heterogeneity of the geometries and deposits in a meandering river system is an important concern for the reservoir modelling of fluvial environments. Re-examination of the Long Nab member in the Scalby formation of the Ravenscar Group (Yorkshire, UK), integrating digital outcrop data and forward modelling approaches, will lead to a geologically realistic numerical model of the meandering river geometry. The methodology is based on extracting geostatistics from modern analogue meandering rivers that exemplify both the confined and non-confined meandering point bar deposits and morphodynamics of the Long Nab member. The parameters derived from the modern systems (i.e. channel width, amplitude, radius of curvature, sinuosity, wavelength, channel length and migration rate) are used as a statistical control for the forward simulation and the resulting object-oriented channel models. The statistical data derived from the modern analogues are multi-dimensional in nature, making analysis difficult. We apply data mining techniques such as parallel coordinates to investigate and identify the important relationships within the modern analogue data, which can then be used to drive the development of, and as input to, the forward model. This work will increase our understanding of meandering river morphodynamics, planform architecture and the stratigraphic signature of various fluvial deposits and features. We will then use these forward-modelled channel objects to build reservoir models, and compare the behaviour of the forward-modelled channels with traditional object modelling in hydrocarbon flow simulations.

  10. 75 FR 16202 - Notice of Issuance of Regulatory Guide

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-31

    ..., Revision 2, ``An Acceptable Model and Related Statistical Methods for the Analysis of Fuel Densification.... Introduction The U.S. Nuclear Regulatory Commission (NRC) is issuing a revision to an existing guide in the... nuclear power reactors. To meet these objectives, the guide describes statistical methods related to...

  11. Basic Statistical Concepts and Methods for Earth Scientists

    USGS Publications Warehouse

    Olea, Ricardo A.

    2008-01-01

    Statistics is the science of collecting, analyzing, interpreting, modeling, and displaying masses of numerical data primarily for the characterization and understanding of incompletely known systems. Over the years, these objectives have led to a fair amount of analytical work to achieve, substantiate, and guide descriptions and inferences.

  12. Encoding Dissimilarity Data for Statistical Model Building.

    PubMed

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005), A framework for kernel regularization with application to protein clustering, Proceedings of the National Academy of Sciences 102, 12332-1233; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009), Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models, Proceedings of the National Academy of Sciences 106, 8128-8133; and F. Lu, Y. Lin and G. Wahba, Robust manifold unfolding with kernel regularization, TR 1008, Department of Statistics, University of Wisconsin-Madison.
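
    The papers above embed objects via convex cone optimization on noisy, incomplete dissimilarities; as a loosely related, hedged illustration only, classical multidimensional scaling shows the basic idea of recovering Euclidean coordinates from a complete dissimilarity matrix:

      import numpy as np

      def classical_mds(D, dim=2):
          # Embed objects in R^dim from a symmetric dissimilarity matrix D (classical MDS).
          n = D.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
          B = -0.5 * J @ (D ** 2) @ J                  # double-centered squared dissimilarities
          eigvals, eigvecs = np.linalg.eigh(B)
          order = np.argsort(eigvals)[::-1][:dim]      # keep the largest eigenvalues
          L = np.sqrt(np.clip(eigvals[order], 0, None))
          return eigvecs[:, order] * L                 # n x dim coordinates

      # Toy example: dissimilarities computed from known 2-D points are recovered up to rotation.
      pts = np.random.default_rng(0).normal(size=(10, 2))
      D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      X = classical_mds(D, dim=2)
      print(np.allclose(np.sort(np.linalg.norm(X[:, None] - X[None, :], axis=-1), axis=None),
                        np.sort(D, axis=None), atol=1e-6))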

  13. Statistical characteristics of trajectories of diamagnetic unicellular organisms in a magnetic field.

    PubMed

    Gorobets, Yu I; Gorobets, O Yu

    2015-01-01

    A statistical model is proposed in this paper for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field connected with their metabolism. The statistical model is applicable when the energy of the thermal motion of the bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e. randomizing motion of a non-thermal nature, for example movement by means of flagella. The energy of this randomizing active self-motion of the bacteria is characterized by a new statistical parameter for biological objects; this parameter replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Use of microcomputers for planning and managing silviculture habitat relationships.

    Treesearch

    B.G. Marcot; R.S. McNay; R.E. Page

    1988-01-01

    Microcomputers aid in monitoring, modeling, and decision support for integrating objectives of silviculture and wildlife habitat management. Spreadsheets, data bases, statistics, and graphics programs are described for use in monitoring. Stand growth models, modeling languages, area and geobased information systems, and optimization models are discussed for use in...

  15. [Road Extraction in Remote Sensing Images Based on Spectral and Edge Analysis].

    PubMed

    Zhao, Wen-zhi; Luo, Li-qun; Guo, Zhou; Yue, Jun; Yu, Xue-ying; Liu, Hui; Wei, Jing

    2015-10-01

    Roads are typically man-made objects in urban areas. Road extraction from high-resolution images has important applications for urban planning and transportation development. However, due to the confusion of spectral characteristics, it is difficult to distinguish roads from other objects by merely using traditional classification methods that depend mainly on spectral information. Edge is an important feature for the identification of linear objects (e.g., roads). The distribution patterns of edges vary greatly among different objects. It is therefore crucial to combine edge statistical information with spectral information. In this study, a new method that combines spectral information and edge statistical features has been proposed. First, edge detection is conducted by using a self-adaptive mean-shift algorithm on the panchromatic band, which can greatly reduce pseudo-edges and noise effects. Then, edge statistical features are obtained from the edge statistical model, which measures the length and angle distribution of edges. Finally, by integrating the spectral and edge statistical features, the SVM algorithm is used to classify the image and roads are ultimately extracted. A series of experiments are conducted and the results show that the overall accuracy of the proposed method is 93%, compared with only 78% for the traditional approach. The results demonstrate that the proposed method is efficient and valuable for road extraction, especially for high-resolution images.
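
    A schematic, hedged sketch of the final feature-fusion and classification step described above (the per-pixel spectral bands and edge statistics here are synthetic placeholders, and the paper's mean-shift edge detection is omitted):

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_pixels = 1000

      # Placeholder per-pixel features: 4 spectral bands plus 2 edge statistics
      # (e.g., local edge length and dominant edge angle), all synthetic here.
      spectral = rng.normal(size=(n_pixels, 4))
      edge_stats = rng.normal(size=(n_pixels, 2))
      labels = rng.integers(0, 2, size=n_pixels)       # 1 = road, 0 = other (synthetic)

      features = np.hstack([spectral, edge_stats])     # fuse spectral and edge information
      X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

      clf = SVC(kernel="rbf").fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))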

  16. Improvements to an earth observing statistical performance model with applications to LWIR spectral variability

    NASA Astrophysics Data System (ADS)

    Zhao, Runchen; Ientilucci, Emmett J.

    2017-05-01

    Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model and a processing model. Instead of using mean surface reflectance only as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature / emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.

  17. Final Report - Enhanced LAW Glass Property - Composition Models - Phase 1 VSL-13R2940-1, Rev. 0, dated 9/27/2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, Albert A.; Muller, I.; Gilbo, K.

    2013-11-13

    The objectives of this work are aimed at the development of enhanced LAW property-composition models that expand the composition region covered by the models. The models of interest include PCT, VHT, viscosity and electrical conductivity. This is planned as a multi-year effort that will be performed in phases, with the following objectives for the current phase: incorporate property-composition data from the new glasses into the database; assess the database and identify composition spaces in the database that need augmentation; develop statistically-designed composition matrices to cover the composition regions identified in the above analysis; prepare crucible melts of glass compositions from the statistically-designed composition matrix and measure the properties of interest; incorporate the above property-composition data into the database; and assess existing models against the complete dataset and, as necessary, start development of new models.

  18. Invariant visual object recognition: a model, with lighting invariance.

    PubMed

    Rolls, Edmund T; Stringer, Simon M

    2006-01-01

    How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focuses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, and size, and, as we show in this paper, also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
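
    The temporal-continuity mechanism mentioned above is commonly implemented as a "trace" variant of Hebbian learning; the following is a minimal, hedged sketch of such a rule (the architecture, constants, and normalization are illustrative assumptions, not taken from the paper):

      import numpy as np

      def trace_learning(inputs, n_outputs=10, eta=0.01, tau=0.8, seed=0):
          # Hebbian learning with an exponential memory trace on the postsynaptic activity.
          # inputs: array (n_timesteps, n_inputs), e.g., successive transformed views of
          # the same object presented in temporal order.
          rng = np.random.default_rng(seed)
          n_inputs = inputs.shape[1]
          W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))
          trace = np.zeros(n_outputs)
          for x in inputs:
              y = W @ x                               # simple linear activation
              trace = tau * trace + (1 - tau) * y     # short-term memory trace of activity
              W += eta * np.outer(trace, x)           # associate current input with traced activity
              W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
          return W

      views = np.random.default_rng(1).normal(size=(50, 64))  # toy sequence of input vectors
      print(trace_learning(views).shape)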

  19. Does objective cluster analysis serve as a useful precursor to seasonal precipitation prediction at local scale? Application to western Ethiopia

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Moges, Semu; Block, Paul

    2018-01-01

    Prediction of seasonal precipitation can provide actionable information to guide management of various sectoral activities. For instance, it is often translated into hydrological forecasts for better water resources management. However, many studies assume homogeneity in precipitation across an entire study region, which may prove ineffective for operational and local-level decisions, particularly for locations with high spatial variability. This study proposes advancing local-level seasonal precipitation predictions by first conditioning on regional-level predictions, as defined through objective cluster analysis, for western Ethiopia. To our knowledge, this is the first study predicting seasonal precipitation at high resolution in this region, where lives and livelihoods are vulnerable to precipitation variability given the high reliance on rain-fed agriculture and limited water resources infrastructure. The combination of objective cluster analysis, spatially high-resolution prediction of seasonal precipitation, and a modeling structure spanning statistical and dynamical approaches makes clear advances in prediction skill and resolution, as compared with previous studies. The statistical model improves versus the non-clustered case or dynamical models for a number of specific clusters in northwestern Ethiopia, with clusters having regional average correlation and ranked probability skill score (RPSS) values of up to 0.5 and 33 %, respectively. The general skill (after bias correction) of the two best-performing dynamical models over the entire study region is superior to that of the statistical models, although the dynamical models issue predictions at a lower resolution and the raw predictions require bias correction to guarantee comparable skills.

  20. Objects and categories: feature statistics and object processing in the ventral stream.

    PubMed

    Tyler, Lorraine K; Chiu, Shannon; Zhuang, Jie; Randall, Billi; Devereux, Barry J; Wright, Paul; Clarke, Alex; Taylor, Kirsten I

    2013-10-01

    Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions-the fusiform gyrus and aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning. After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.

  1. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  2. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
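
    A compressed, hedged sketch of the sensitivity pipeline described above, with a toy function standing in for the forecast model and a single scalar object feature as the response; the clustering step that identifies objects and the multiple-testing control are omitted for brevity:

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(0)

      # 1) Latin hypercube sample of three hypothetical model parameters in [0, 1]^3.
      sampler = qmc.LatinHypercube(d=3, seed=0)
      params = sampler.random(n=60)

      # 2) Toy stand-in for "run the model and measure an object feature".
      def object_feature(p):
          return 2.0 * p[0] - 0.5 * p[1] + rng.normal(scale=0.1)   # p[2] has no effect

      feature = np.array([object_feature(p) for p in params])

      # 3) Multiple regression of the object feature on the parameters.
      X = np.column_stack([np.ones(len(params)), params])
      coef, *_ = np.linalg.lstsq(X, feature, rcond=None)
      print("estimated sensitivities (intercept, p0, p1, p2):", np.round(coef, 2))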

  3. An astronomer's guide to period searching

    NASA Astrophysics Data System (ADS)

    Schwarzenberg-Czerny, A.

    2003-03-01

    We concentrate on the analysis of unevenly sampled time series, interrupted by periodic gaps, as often encountered in astronomy. While some of our conclusions may appear surprising, all are based on the classical statistical principles of Fisher and his successors. Except for the discussion of resolution issues, it is best for the reader to forget temporarily about Fourier transforms and to concentrate on the problem of fitting a time series with a model curve. According to their statistical content we divide the issues into several sections, consisting of: (ii) statistical and numerical aspects of model fitting, (iii) evaluation of fitted models as hypothesis testing, (iv) the role of orthogonal models in signal detection, (v) conditions for equivalence of periodograms, and (vi) rating sensitivity by test power. An experienced observer working with individual objects would benefit little from a formalized statistical approach. However, we demonstrate the usefulness of this approach in evaluating the performance of periodograms and in the quantitative design of large variability surveys.
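
    As a minimal, hedged illustration of period searching on unevenly sampled data by least-squares fitting of a sinusoid at each trial frequency (here via SciPy's Lomb-Scargle routine; the data are synthetic):

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0, 100, size=300))          # uneven sampling times
      true_period = 7.3
      y = np.sin(2 * np.pi * t / true_period) + 0.5 * rng.normal(size=t.size)

      periods = np.linspace(2, 20, 5000)
      omega = 2 * np.pi / periods                          # angular trial frequencies
      power = lombscargle(t, y - y.mean(), omega)          # least-squares spectral power

      print("best period:", periods[np.argmax(power)])     # should be close to 7.3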

  4. Multiple Versus Single Set Validation of Multivariate Models to Avoid Mistakes.

    PubMed

    Harrington, Peter de Boves

    2018-01-02

    Validation of multivariate models is of current importance for a wide range of chemical applications. Although important, it is neglected. The common practice is to use a single external validation set for evaluation. This approach is deficient and may mislead investigators with results that are specific to the single validation set of data. In addition, no statistics are available regarding the precision of a derived figure of merit (FOM). A statistical approach using bootstrapped Latin partitions is advocated. This validation method makes efficient use of the data because each object is used once for validation. It was reviewed a decade earlier, but primarily for the optimization of chemometric models; this review presents the reasons it should be used for generalized statistical validation. Average FOMs with confidence intervals are reported, and powerful matched-sample statistics may be applied for comparing models and methods. Examples demonstrate the problems with single validation sets.
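
    A hedged sketch of the general idea (repeated randomized partitions in which every object is validated exactly once per repetition, yielding a distribution and hence a confidence interval for the figure of merit), using stratified folds as a stand-in for true Latin partitions:

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold

      X, y = load_iris(return_X_y=True)
      n_bootstraps, n_partitions = 20, 4
      accuracies = []

      for b in range(n_bootstraps):
          # New random partitioning each repetition; every object is validated
          # exactly once within a repetition.
          skf = StratifiedKFold(n_splits=n_partitions, shuffle=True, random_state=b)
          fold_acc = []
          for train_idx, val_idx in skf.split(X, y):
              model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
              fold_acc.append(model.score(X[val_idx], y[val_idx]))
          accuracies.append(np.mean(fold_acc))

      accuracies = np.array(accuracies)
      print(f"accuracy = {accuracies.mean():.3f} "
            f"(95% interval {np.percentile(accuracies, 2.5):.3f}-{np.percentile(accuracies, 97.5):.3f})")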

  5. 2D Affine and Projective Shape Analysis.

    PubMed

    Bryner, Darshan; Klassen, Eric; Huiling Le; Srivastava, Anuj

    2014-05-01

    Current techniques for shape analysis tend to seek invariance to similarity transformations (rotation, translation, and scale), but certain imaging situations require invariance to larger groups, such as affine or projective groups. Here we present a general Riemannian framework for shape analysis of planar objects where metrics and related quantities are invariant to affine and projective groups. Highlighting two possibilities for representing object boundaries-ordered points (or landmarks) and parameterized curves-we study different combinations of these representations (points and curves) and transformations (affine and projective). Specifically, we provide solutions to three out of four situations and develop algorithms for computing geodesics and intrinsic sample statistics, leading up to Gaussian-type statistical models, and classifying test shapes using such models learned from training data. In the case of parameterized curves, we also achieve the desired goal of invariance to re-parameterizations. The geodesics are constructed by particularizing the path-straightening algorithm to geometries of current manifolds and are used, in turn, to compute shape statistics and Gaussian-type shape models. We demonstrate these ideas using a number of examples from shape and activity recognition.

  6. Effects of Instructional Design with Mental Model Analysis on Learning.

    ERIC Educational Resources Information Center

    Hong, Eunsook

    This paper presents a model for systematic instructional design that includes mental model analysis together with the procedures used in developing computer-based instructional materials in the area of statistical hypothesis testing. The instructional design model is based on the premise that the objective for learning is to achieve expert-like…

  7. Classical subjective expected utility.

    PubMed

    Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi

    2013-04-23

    We consider decision makers who know that payoff-relevant observations are generated by a process that belongs to a given class M, as postulated in Wald [Wald A (1950) Statistical Decision Functions (Wiley, New York)]. We incorporate this Waldean piece of objective information within an otherwise subjective setting à la Savage [Savage LJ (1954) The Foundations of Statistics (Wiley, New York)] and show that this leads to a two-stage subjective expected utility model that accounts for both state and model uncertainty.

  8. Verify MesoNAM Performance

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The AMU conducted an objective analysis of the MesoNAM forecasts compared to observed values from sensors at specified KSC/CCAFS wind towers by calculating the following statistics to verify the performance of the model: 1) bias (mean difference), 2) standard deviation of the bias, 3) root mean square error (RMSE), and 4) hypothesis test for bias = 0. The 45 WS LWOs use the MesoNAM to support launch weather operations. However, the actual performance of the model at KSC and CCAFS had not been measured objectively. The analysis compared the MesoNAM forecast winds, temperature and dew point to the observed values from the sensors on the wind towers. The data were stratified by tower sensor, month and onshore/offshore wind direction based on the orientation of the coastline to each tower's location. The model's performance statistics were then calculated for each wind tower based on sensor height and model initialization time. The period of record for the data used in this task was based on the operational start of the current MesoNAM in mid-August 2006, and so the task began with the first full month of data, September 2006, through May 2010. The analysis of model performance indicated: a) the accuracy decreased as the forecast valid time from the model initialization increased, b) there was a diurnal signal in T with a cool bias during the late night and a warm bias during the afternoon, c) there was a diurnal signal in Td with a low bias during the afternoon and a high bias during the late night, and d) the model parameters at each vertical level most closely matched the observed parameters at heights closest to those vertical levels. The AMU developed a GUI that consists of a multi-level drop-down menu written in JavaScript embedded within the HTML code. This tool allows the LWO to easily and efficiently navigate among the charts and spreadsheet files containing the model performance statistics. The objective statistics give the LWOs knowledge of the model's strengths and weaknesses, and the GUI allows quick access to the data, which will result in improved forecasts for operations.
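
    Given paired forecast and observed values, the four verification statistics listed above reduce to a few lines; a hedged sketch with synthetic data (not the actual tower observations):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      observed = rng.normal(25, 3, size=500)                   # e.g., tower temperatures (deg C)
      forecast = observed + rng.normal(0.4, 1.2, size=500)     # synthetic model forecasts

      diff = forecast - observed
      bias = diff.mean()                                       # 1) mean difference
      bias_sd = diff.std(ddof=1)                               # 2) standard deviation of bias
      rmse = np.sqrt(np.mean(diff ** 2))                       # 3) root mean square error
      t_stat, p_value = stats.ttest_1samp(diff, 0.0)           # 4) hypothesis test for bias = 0

      print(f"bias={bias:.2f}  sd={bias_sd:.2f}  rmse={rmse:.2f}  p(bias=0)={p_value:.3g}")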

  9. A modified F-test for evaluating model performance by including both experimental and simulation uncertainties

    USDA-ARS?s Scientific Manuscript database

    Experimental and simulation uncertainties have not been included in many of the statistics used in assessing agricultural model performance. The objectives of this study were to develop an F-test that can be used to evaluate model performance considering experimental and simulation uncertainties, an...

  10. Polychromatic wave-optics models for image-plane speckle. 2. Unresolved objects.

    PubMed

    Van Zandt, Noah R; Spencer, Mark F; Steinbock, Michael J; Anderson, Brian M; Hyde, Milo W; Fiorino, Steven T

    2018-05-20

    Polychromatic laser light can reduce speckle noise in many wavefront-sensing and imaging applications. To help quantify the achievable reduction in speckle noise, this study investigates the accuracy of three polychromatic wave-optics models under the specific conditions of an unresolved object. Because existing theory assumes a well-resolved object, laboratory experiments are used to evaluate model accuracy. The three models use Monte-Carlo averaging, depth slicing, and spectral slicing, respectively, to simulate the laser-object interaction. The experiments involve spoiling the temporal coherence of laser light via a fiber-based, electro-optic modulator. After the light scatters off of the rough object, speckle statistics are measured. The Monte-Carlo method is found to be highly inaccurate, while depth-slicing error peaks at 7.8% but is generally much lower in comparison. The spectral-slicing method is the most accurate, always producing results within the error bounds of the experiment.

  11. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, J; Chan, T; Lee, T

    2014-06-15

    Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulated radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7 cm was found to be the most predictive of the rectum weights. The LR model performs significantly better than the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights than the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.
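
    A hedged, simplified sketch of the idea of predicting an objective-function weight from an overlap volume ratio. The abstract's actual logistic regression model is not reproduced here; a fractional-logit-style fit on synthetic data stands in for it, with the OVR values, coefficients, and weights all invented for illustration:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)

      # Synthetic training data: overlap volume ratio (OVR) of rectum with a PTV
      # expansion, and the "ground truth" rectum weight in (0, 1) from inverse planning.
      ovr = rng.uniform(0.05, 0.6, size=51)
      rectum_weight = 1 / (1 + np.exp(-(2.5 * ovr - 1.0 + rng.normal(0, 0.1, ovr.size))))

      # Fractional-logit style fit: regress logit(weight) on OVR and predict through the
      # inverse logit so predictions stay in (0, 1); the bladder weight is the complement.
      logit = np.log(rectum_weight / (1 - rectum_weight))
      model = LinearRegression().fit(ovr.reshape(-1, 1), logit)

      new_ovr = np.array([[0.30]])
      pred_rectum = 1 / (1 + np.exp(-model.predict(new_ovr)[0]))
      print(f"predicted rectum weight {pred_rectum:.2f}, bladder weight {1 - pred_rectum:.2f}")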

  12. Statistical modeling of temperature, humidity and wind fields in the atmospheric boundary layer over the Siberian region

    NASA Astrophysics Data System (ADS)

    Lomakina, N. Ya.

    2017-11-01

    This work presents the results of an applied climatic division of the Siberian region into districts, based on a methodology for objective classification of atmospheric boundary layer climates by the "temperature-moisture-wind" complex, realized using the method of principal components and special similarity criteria for average profiles and the eigenvalues of correlation matrices. On the territory of Siberia, 14 homogeneous regions were identified for the winter season and 10 regions for summer. Local statistical models were constructed for each region. These include vertical profiles of mean values, mean square deviations, and matrices of interlevel correlation of temperature, specific humidity, and zonal and meridional wind velocity. The advantage of the obtained local statistical models over regional models is shown.

  13. Electromagnetic Induction E-Sensor for Underwater UXO Detection

    DTIC Science & Technology

    2011-12-01

    Statistical discrimination techniques based on model analysis, such as the Time-Domain Three Dipole (TD3D) model, can separate UXO-like objects

  14. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  15. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
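
    A minimal, hedged sketch of the inverse-probability-weighting idea for covariate adjustment in a randomized trial: the treatment-assignment model is fit without reference to outcomes, and the treatment effect is then estimated from weighted means. The data, covariates, and effect size below are synthetic, and this is not the authors' full two-stage procedure:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 1000

      # Synthetic randomized trial: covariates X, 1:1 randomized treatment A, outcome Y.
      X = rng.normal(size=(n, 3))
      A = rng.integers(0, 2, size=n)
      Y = 1.0 * A + X @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

      # Stage 1 (outcome-blind): model P(A=1 | X) and form inverse-probability weights.
      ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
      w = A / ps + (1 - A) / (1 - ps)

      # Stage 2: weighted difference in mean outcomes estimates the treatment effect.
      effect = (np.sum(w * A * Y) / np.sum(w * A)
                - np.sum(w * (1 - A) * Y) / np.sum(w * (1 - A)))
      print(f"IPW-adjusted treatment effect estimate: {effect:.2f} (true effect = 1.0)")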

  16. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, Max

    1993-01-01

    This thesis provides the design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.

  17. Multiple object tracking with non-unique data-to-object association via generalized hypothesis testing. [tracking several aircraft near each other or ships at sea

    NASA Technical Reports Server (NTRS)

    Porter, D. W.; Lefler, R. M.

    1979-01-01

    A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first associating data with objects in a statistically reasonable fashion and then tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, the object might be assumed to have a mean straight-line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.

  18. Identification of market trends with string and D2-brane maps

    NASA Astrophysics Data System (ADS)

    Bartoš, Erik; Pinčák, Richard

    2017-08-01

    Multidimensional string objects are introduced as a new alternative for the application of string models to time series forecasting in trading on financial markets. The objects are represented by an open string with two endpoints and a D2-brane, which are continuous enhancements of the 1-endpoint open string model. We show how the new object properties can change the statistics of the predictors, which makes them candidates for modeling a wide range of time series systems. String angular momentum is proposed as another tool, in addition to the historical volatility, for analyzing the stability of currency rates. To show the reliability of our approach with the application of string models to time series forecasting, we present the results of real demo simulations for four currency exchange pairs.

  19. A subdivision-based parametric deformable model for surface extraction and statistical shape modeling of the knee cartilages

    NASA Astrophysics Data System (ADS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2006-03-01

    Subdivision surfaces and parameterization are desirable for many algorithms that are commonly used in Medical Image Analysis. However, extracting an accurate surface and parameterization can be difficult for many anatomical objects of interest, due to noisy segmentations and the inherent variability of the object. The thin cartilages of the knee are an example of this, especially after damage is incurred from injuries or conditions like osteoarthritis. As a result, the cartilages can have different topologies or exist in multiple pieces. In this paper we present a topology preserving (genus 0) subdivision-based parametric deformable model that is used to extract the surfaces of the patella and tibial cartilages in the knee. These surfaces have minimal thickness in areas without cartilage. The algorithm inherently incorporates several desirable properties, including: shape based interpolation, sub-division remeshing and parameterization. To illustrate the usefulness of this approach, the surfaces and parameterizations of the patella cartilage are used to generate a 3D statistical shape model.

  20. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237

  1. Statistical Optimization of 1,3-Propanediol (1,3-PD) Production from Crude Glycerol by Considering Four Objectives: 1,3-PD Concentration, Yield, Selectivity, and Productivity.

    PubMed

    Supaporn, Pansuwan; Yeom, Sung Ho

    2018-04-30

    This study investigated the biological conversion of crude glycerol, generated as a by-product from a commercial biodiesel production plant, to 1,3-propanediol (1,3-PD). Statistical analysis was employed to derive a statistical model for the individual and interactive effects of glycerol, (NH4)2SO4, trace elements, pH, and cultivation time on the four objectives: 1,3-PD concentration, yield, selectivity, and productivity. Optimum conditions for each objective with its maximum value were predicted by statistical optimization, and experiments under the optimum conditions verified the predictions. In addition, by systematic analysis of the values of the four objectives, the optimum conditions for 1,3-PD concentration (49.8 g/L initial glycerol, 4.0 g/L of (NH4)2SO4, 2.0 mL/L of trace elements, pH 7.5, and 11.2 h of cultivation time) were determined to be the global optimum culture conditions for 1,3-PD production. Under these conditions, we could achieve a high 1,3-PD yield (47.4%), selectivity (88.8%), and productivity (2.1 g/L/h) as well as a high 1,3-PD concentration (23.6 g/L).

  2. Simulation and assimilation of satellite altimeter data at the oceanic mesoscale

    NASA Technical Reports Server (NTRS)

    Demay, P.; Robinson, A. R.

    1984-01-01

    An improved "objective analysis" technique is used along with an altimeter signal statistical model, an altimeter noise statistical model, an orbital model, and synoptic surface current maps in the POLYMODE-SDE area to evaluate the performance of various observational strategies in capturing the mesoscale variability at mid-latitudes. In particular, simulated repetitive nominal orbits of ERS-1, TOPEX, and SPOT/POSEIDON are examined. Results show the critical importance of the existence of a subcycle, scanning in either direction. Moreover, long repeat cycles (20 days) and short cross-track distances (300 km) seem preferable, since they match mesoscale statistics. Another goal of the study is to prepare and discuss sea-surface height (SSH) assimilation in quasigeostrophic models. Restored SSH maps are shown to meet that purpose, if an efficient extrapolation method or deep in-situ data (floats) are used on the vertical to start and update the model.

  3. Use of observational and model-derived fields and regime model output statistics in mesoscale forecasting

    NASA Technical Reports Server (NTRS)

    Forbes, G. S.; Pielke, R. A.

    1985-01-01

    Various empirical and statistical weather-forecasting studies which utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low-level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and the resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model output statistics, is discussed.

  4. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.
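
    The statistic proposed above is based on Brownian bridge functionals and is not reproduced here; as a loosely related, hedged illustration of the underlying contrast, the sketch below compares a constant-rate Poisson model with one whose rate drops after a candidate onset time, using a likelihood ratio scanned over onset times on a simulated catalog:

      import numpy as np

      def quiescence_lr(times, T, t0):
          # Likelihood ratio: constant-rate Poisson vs. a rate change at time t0,
          # each evaluated at its maximum-likelihood rate estimate(s).
          times = np.asarray(times)
          n = len(times)
          n1 = int(np.sum(times < t0))
          n2 = n - n1

          def ll(k, duration):
              if k == 0:
                  return 0.0
              rate = k / duration
              return k * np.log(rate) - k

          ll_null = ll(n, T)
          ll_alt = ll(n1, t0) + ll(n2, T - t0)
          return 2 * (ll_alt - ll_null)

      rng = np.random.default_rng(0)
      T, t_change = 100.0, 70.0
      # Simulated catalog: background rate 1.0 before t_change, quiescent (rate 0.2) after.
      events = np.concatenate([rng.uniform(0, t_change, rng.poisson(1.0 * t_change)),
                               rng.uniform(t_change, T, rng.poisson(0.2 * (T - t_change)))])

      candidates = np.linspace(10, 90, 81)
      lr = np.array([quiescence_lr(events, T, t0) for t0 in candidates])
      print("most likely quiescence onset:", candidates[np.argmax(lr)])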

  5. Investigating the impact of design characteristics on statistical efficiency within discrete choice experiments: A systematic survey.

    PubMed

    Vanniyasingam, Thuva; Daly, Caitlin; Jin, Xuejing; Zhang, Yuan; Foster, Gary; Cunningham, Charles; Thabane, Lehana

    2018-06-01

    This study reviews simulation studies of discrete choice experiments to determine (i) how survey design features affect statistical efficiency and (ii) to appraise their reporting quality. Statistical efficiency was measured using relative design (D-) efficiency, D-optimality, or D-error. For this systematic survey, we searched Journal Storage (JSTOR), Science Direct, PubMed, and OVID, which included a search within EMBASE. Searches were conducted up to 2016 for simulation studies investigating the impact of DCE design features on statistical efficiency. Studies were screened and data were extracted independently and in duplicate. Results for each included study were summarized by design characteristic. Previously developed criteria for the reporting quality of simulation studies were also adapted and applied to each included study. Of 371 potentially relevant studies, 9 were found to be eligible, with several varying in study objectives. Statistical efficiency improved when increasing the number of choice tasks or alternatives; decreasing the number of attributes and attribute levels; using an unrestricted continuous "manipulator" attribute; using model-based approaches with covariates incorporating response behaviour; using sampling approaches that incorporate previous knowledge of response behaviour; incorporating heterogeneity in a model-based design; correctly specifying Bayesian priors; minimizing parameter prior variances; and using an appropriate method to create the DCE design for the research question. The simulation studies performed well in terms of reporting quality. Improvement is needed with regard to clearly specifying study objectives, the number of failures, random number generators, starting seeds, and the software used. These results identify the best approaches to structuring a DCE. An investigator can manipulate design characteristics to help reduce response burden and increase statistical efficiency. Since studies varied in their objectives, conclusions were made on several design characteristics; however, the validity of each conclusion was limited. Further research should be conducted to explore all conclusions in various design settings and scenarios. Additional reviews exploring other statistical efficiency outcomes and databases could also be performed to enhance the conclusions identified in this review.
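
    As a hedged illustration of the efficiency measure referenced above, the D-error of a multinomial logit choice design can be computed from the model's information matrix evaluated at assumed prior coefficients (the design and priors below are arbitrary placeholders):

      import numpy as np

      def d_error(design, beta):
          # D-error of a discrete choice design under a multinomial logit model.
          # design: array (n_choice_tasks, n_alternatives, n_attributes) of coded attributes.
          # beta: assumed prior coefficients (length n_attributes).
          K = len(beta)
          info = np.zeros((K, K))
          for X in design:                            # one choice task at a time
              u = X @ beta
              p = np.exp(u) / np.exp(u).sum()         # choice probabilities
              info += X.T @ (np.diag(p) - np.outer(p, p)) @ X   # MNL information contribution
          return np.linalg.det(info) ** (-1.0 / K)    # lower D-error = more efficient design

      rng = np.random.default_rng(0)
      design = rng.integers(0, 2, size=(8, 3, 4)).astype(float)   # 8 tasks, 3 alternatives, 4 attributes
      print("D-error:", d_error(design, beta=np.zeros(4)))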

  6. Towards a Population Synthesis Model of Objects formed by Self-Gravitating Disc Fragmentation and Tidal Downsizing

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan; Rice, Ken

    2013-07-01

    Recently, the gravitational instability (GI) model of giant planet and brown dwarf formation has been revisited and recast into what is often referred to as the "tidal downsizing" hypothesis. The fragmentation of self-gravitating protostellar discs into gravitationally bound embryos - with masses of a few to tens of Jupiter masses, at semi-major axes beyond 30-40 AU - is followed by a combination of grain sedimentation inside the embryo, radial migration towards the central star and tidal disruption of the embryo's upper layers. The properties of the resultant object depend sensitively on the timescales on which each process occurs. Therefore, GI followed by tidal downsizing can theoretically produce objects spanning a large mass range, from terrestrial planets to giant planets and brown dwarfs. Whether such objects can be formed in practice, and what proportion of the observed population they would represent, requires a more involved statistical analysis. We present a simple population synthesis model of star and planet formation via GI and tidal downsizing. We couple a semi-analytic model of protostellar disc evolution to analytic calculations of fragmentation, initial embryo mass, grain growth and sedimentation, embryo migration and tidal disruption. While there are key pieces of physics yet to be incorporated, it represents a first step towards a mature statistical model of GI and tidal downsizing as a mode of star and planet formation. We show results from four runs of the population synthesis model, varying the opacity law and the strength of migration, as well as investigating the effect of disc truncation during the fragmentation process.

  7. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    NASA Technical Reports Server (NTRS)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
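
    A minimal sketch of the kind of curve fit and residual check described above, using hypothetical force-coefficient data (angle of attack vs. coefficient); the variables and noise level are illustrative, not taken from the report.

        import numpy as np

        rng = np.random.default_rng(2)
        alpha = np.linspace(-4.0, 12.0, 17)                       # angle of attack, deg
        force = 0.08 * alpha + 0.002 * alpha**2 + rng.normal(0.0, 0.01, alpha.size)

        def fit_and_r2(x, y, degree):
            """Least-squares polynomial fit plus an R^2 residual diagnostic."""
            coeffs = np.polyfit(x, y, degree)
            residuals = y - np.polyval(coeffs, x)
            ss_res = np.sum(residuals**2)
            ss_tot = np.sum((y - y.mean())**2)
            return coeffs, 1.0 - ss_res / ss_tot

        # Raising the order from 1 to 2 removes the quadratic trend in the residuals.
        for degree in (1, 2):
            _, r2 = fit_and_r2(alpha, force, degree)
            print(f"degree {degree}: R^2 = {r2:.4f}")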

  8. MesoNAM Verification Phase II

    NASA Technical Reports Server (NTRS)

    Watson, Leela R.

    2011-01-01

    The 45th Weather Squadron Launch Weather Officers use the 12-km resolution North American Mesoscale model (MesoNAM) forecasts to support launch weather operations. In Phase I, the performance of the model at KSC/CCAFS was measured objectively by conducting a detailed statistical analysis of model output compared to observed values. The objective analysis compared the MesoNAM forecast winds, temperature, and dew point to the observed values from the sensors in the KSC/CCAFS wind tower network. In Phase II, the AMU modified the current tool by adding an additional 15 months of model output to the database and recalculating the verification statistics. The bias, standard deviation of bias, Root Mean Square Error, and Hypothesis test for bias were calculated to verify the performance of the model. The results indicated that the accuracy decreased as the forecast progressed, there was a diurnal signal in temperature with a cool bias during the late night and a warm bias during the afternoon, and there was a diurnal signal in dewpoint temperature with a low bias during the afternoon and a high bias during the late night.
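
    The verification statistics named above are straightforward to compute; the sketch below (with hypothetical forecast/observation pairs) shows the bias, standard deviation of the bias, RMSE, and a one-sample t-test of the bias.

        import numpy as np
        from scipy import stats

        def verification_stats(forecast, observed):
            """Bias, standard deviation of the bias, RMSE, and a t-test of
            whether the mean forecast error differs from zero."""
            errors = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
            bias = errors.mean()
            bias_sd = errors.std(ddof=1)
            rmse = np.sqrt(np.mean(errors**2))
            t_stat, p_value = stats.ttest_1samp(errors, 0.0)
            return bias, bias_sd, rmse, t_stat, p_value

        # Hypothetical 2-m temperature forecasts vs. tower observations (deg C).
        fcst = [24.1, 25.3, 26.8, 27.0, 25.9, 24.4]
        obs = [23.8, 25.5, 26.1, 26.4, 25.2, 24.0]
        print(verification_stats(fcst, obs))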

  9. Experience and Sentence Processing: Statistical Learning and Relative Clause Comprehension

    PubMed Central

    Wells, Justine B.; Christiansen, Morten H.; Race, David S.; Acheson, Daniel J.; MacDonald, Maryellen C.

    2009-01-01

    Many explanations of the difficulties associated with interpreting object relative clauses appeal to the demands that object relatives make on working memory. MacDonald and Christiansen (2002) pointed to variations in reading experience as a source of differences, arguing that the unique word order of object relatives makes their processing more difficult and more sensitive to the effects of previous experience than the processing of subject relatives. This hypothesis was tested in a large-scale study manipulating reading experiences of adults over several weeks. The group receiving relative clause experience increased reading speeds for object relatives more than for subject relatives, whereas a control experience group did not. The reading time data were compared to performance of a computational model given different amounts of experience. The results support claims for experience-based individual differences and an important role for statistical learning in sentence comprehension processes. PMID:18922516

  10. TH-CD-202-07: A Methodology for Generating Numerical Phantoms for Radiation Therapy Using Geometric Attribute Distribution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Chen, H; Mutic, S

    Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of a known ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
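
    The GAD idea of training a principal-component shape model and then sampling statistically sound weights can be sketched as below; the alignment step, array shapes, and the ±3 standard deviation bound are illustrative assumptions rather than details from the study.

        import numpy as np

        def train_shape_model(shapes):
            """Fit a simple PCA (point-distribution) model to aligned, flattened
            organ contours of shape (n_patients, n_points * n_dims)."""
            mean_shape = shapes.mean(axis=0)
            centered = shapes - mean_shape
            _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
            eigenvalues = singular_values**2 / (shapes.shape[0] - 1)
            return mean_shape, components, eigenvalues

        def sample_shape(mean_shape, components, eigenvalues, n_modes=5, rng=None):
            """Generate a new, statistically plausible contour by weighting the
            leading modes, with weights bounded to +/- 3 standard deviations."""
            rng = np.random.default_rng() if rng is None else rng
            std = np.sqrt(eigenvalues[:n_modes])
            weights = np.clip(rng.normal(size=n_modes), -3.0, 3.0) * std
            return mean_shape + weights @ components[:n_modes]

        # Hypothetical training set: 20 aligned contours, 50 2-D points each.
        training = np.random.default_rng(3).normal(size=(20, 100))
        mean_s, comps, evals = train_shape_model(training)
        new_contour = sample_shape(mean_s, comps, evals)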

  11. Forecasting runout of rock and debris avalanches

    USGS Publications Warehouse

    Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.

    2006-01-01

    Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.

  12. Statistical framework for evaluation of climate model simulations by use of climate proxy data from the last millennium - Part 1: Theory

    NASA Astrophysics Data System (ADS)

    Sundberg, R.; Moberg, A.; Hind, A.

    2012-08-01

    A statistical framework for comparing the output of ensemble simulations from global climate models with networks of climate proxy and instrumental records has been developed, focusing on near-surface temperatures for the last millennium. This framework includes the formulation of a joint statistical model for proxy data, instrumental data and simulation data, which is used to optimize a quadratic distance measure for ranking climate model simulations. An essential underlying assumption is that the simulations and the proxy/instrumental series have a shared component of variability that is due to temporal changes in external forcing, such as volcanic aerosol load, solar irradiance or greenhouse gas concentrations. Two statistical tests have been formulated. Firstly, a preliminary test establishes whether a significant temporal correlation exists between instrumental/proxy and simulation data. Secondly, the distance measure is expressed in the form of a test statistic of whether a forced simulation is closer to the instrumental/proxy series than unforced simulations. The proposed framework allows any number of proxy locations to be used jointly, with different seasons, record lengths and statistical precision. The goal is to objectively rank several competing climate model simulations (e.g. with alternative model parameterizations or alternative forcing histories) by means of their goodness of fit to the unobservable true past climate variations, as estimated from noisy proxy data and instrumental observations.
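
    A toy illustration of the two tests described above, with hypothetical series: a Pearson correlation test for the shared forced component, and a simple mean-squared distance as a stand-in for the framework's optimized quadratic distance measure (which additionally weights by statistical precision).

        import numpy as np
        from scipy import stats

        def correlation_test(proxy, simulation):
            """Preliminary test: temporal correlation between a proxy/instrumental
            series and a simulation, with its p-value."""
            return stats.pearsonr(proxy, simulation)

        def quadratic_distance(proxy, simulation):
            """Unweighted mean-squared distance between two series."""
            return float(np.mean((np.asarray(proxy) - np.asarray(simulation))**2))

        # Hypothetical decadal-mean temperature anomalies sharing a forced signal.
        rng = np.random.default_rng(4)
        forcing = np.sin(np.linspace(0.0, 6.0, 100))
        proxy = forcing + rng.normal(0.0, 0.5, 100)
        forced_run = forcing + rng.normal(0.0, 0.3, 100)
        unforced_run = rng.normal(0.0, 0.3, 100)

        print(correlation_test(proxy, forced_run))
        print(quadratic_distance(proxy, forced_run), quadratic_distance(proxy, unforced_run))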

  13. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    ERIC Educational Resources Information Center

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  14. Determination of optimal imaging settings for urolithiasis CT using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR): a physical human phantom study

    PubMed Central

    Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In

    2016-01-01

    Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm³ uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated below a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while preserving the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542

  15. Effect of homeopathic Arnica montana on bruising in face-lifts: results of a randomized, double-blind, placebo-controlled clinical trial.

    PubMed

    Seeley, Brook M; Denton, Andrew B; Ahn, Min S; Maas, Corey S

    2006-01-01

    To design a model for performing reproducible, objective analyses of skin color changes and to apply this model to evaluate the efficacy of homeopathic Arnica montana as an antiecchymotic agent when taken perioperatively. Twenty-nine patients undergoing rhytidectomy at a tertiary care center were treated perioperatively with either homeopathic A. montana or placebo in a double-blind fashion. Postoperative photographs were analyzed using a novel computer model for color changes, and subjective assessments of postoperative ecchymosis were obtained. No subjective differences were noted between the treatment group and the control group, either by the patients or by the professional staff. No objective difference in the degree of color change was found. Patients receiving homeopathic A. montana were found to have a smaller area of ecchymosis on postoperative days 1, 5, 7, and 10. These differences were statistically significant (P<.05) only on postoperative days 1 (P<.005) and 7 (P<.001). This computer model provides an efficient, objective, and reproducible means with which to assess perioperative color changes, both in terms of area and degree. Patients taking perioperative homeopathic A. montana exhibited less ecchymosis, and that difference was statistically significant (P<.05) on 2 of the 4 postoperative data points evaluated.

  16. Computational modeling of the neural representation of object shape in the primate ventral visual system

    PubMed Central

    Eguchi, Akihiro; Mender, Bedeho M. W.; Evans, Benjamin D.; Humphreys, Glyn W.; Stringer, Simon M.

    2015-01-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognize the whole object. PMID:26300766

  17. Combustion Technology for Incinerating Wastes from Air Force Industrial Processes.

    DTIC Science & Technology

    1984-02-01

    The assumption of equilibrium between environmental compartments. * The statistical extrapolations yielding "safe" doses of various constituents...would be contacted to identify the assumptions and data requirements needed to design, construct and implement the model. The model’s primary objective...Recovery Planning Model (RRPLAN) is described. This section of the paper summarizes the model’s assumptions, major components and modes of operation

  18. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behaviour, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to, and the development and validation process of, risk prediction models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was conducted. This paper also reviews how researchers developed and validated risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, the artificial neural network approach to developing prediction models was more accurate than the statistical approach. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.
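
    A minimal comparison of the two approaches the review contrasts, using scikit-learn on synthetic data (the dataset, features, and network size are hypothetical; real studies would use clinical covariates and proper validation):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        # Hypothetical risk-factor data standing in for clinical covariates.
        X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                                   random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                            random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                            random_state=0),
        }
        for name, model in models.items():
            model.fit(X_train, y_train)
            auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
            print(f"{name}: validation AUC = {auc:.3f}")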

  19. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the author's techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The author's method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
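
    As a rough illustration (not the thesis's actual model), the sketch below tracks a statistically periodic load with a scalar Kalman filter under a random-walk state assumption, one simple way to combine a stochastic process model with Bayesian updating; all parameters and data are hypothetical.

        import numpy as np

        def kalman_load_estimate(measurements, process_var=0.5, meas_var=4.0):
            """Track a slowly varying global average load from noisy samples
            using a scalar Kalman filter with a random-walk state model."""
            estimate, variance = measurements[0], meas_var
            history = []
            for z in measurements:
                variance += process_var                   # predict: load drifts
                gain = variance / (variance + meas_var)   # update: Kalman gain
                estimate += gain * (z - estimate)
                variance *= (1.0 - gain)
                history.append(estimate)
            return np.array(history)

        # Hypothetical periodic load observed with sampling noise.
        t = np.arange(200)
        true_load = 50 + 10 * np.sin(2 * np.pi * t / 50)
        noisy = true_load + np.random.default_rng(5).normal(0, 2, t.size)
        smoothed = kalman_load_estimate(noisy)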

  20. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images which attempted to combine synergistically ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.

  1. Hyperspectral imaging simulation of object under sea-sky background

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating spectral images of objects under a sea-sky background is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions and sensor, it is possible to examine how changes in wind speed, atmospheric conditions and other environmental factors affect spectral image quality in complex sea scenes. First, the sea scattering model is established based on the Phillips sea spectral model, rough surface scattering theory and the volume scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmosphere backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering of the sea-surface object and the spectral image. Finally, the object spectral image is obtained through spatial transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters and the sensor parameters, providing a tool for payload demonstration and algorithm development.

  2. 77 FR 41986 - Division of Nursing, Public Health Nursing Community Based Model of PHN Case Management Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-17

    ... scientific literature related to epidemiology, statistics, surveillance, Healthy People 2020 Objectives, and... eligible projects or activities. This requirement applies whether the delinquency is attributable to the...

  3. Statistical modeling of an integrated boiler for coal fired thermal power plant.

    PubMed

    Chandrasekharan, Sreepradha; Panda, Rames Chandra; Swaminathan, Bhuvaneswari Natrajan

    2017-06-01

    Coal-fired thermal power plants play a major role in power production worldwide because coal is available in abundance. Many existing power plants are based on subcritical technology, which can produce power with an efficiency of around 33%, whereas newer plants are built on either supercritical or ultra-supercritical technology, whose efficiency can be up to 50%. The main objective of this work is to enhance the efficiency of existing subcritical power plants to compensate for increasing demand. To achieve this objective, statistical models of the boiler units, namely the economizer, drum and superheater, are first developed. The effectiveness of the developed models is tested using analysis methods such as R² analysis and ANOVA (analysis of variance). The dependence of the process variable (temperature) on different manipulated variables is analysed in the paper. Model validation is provided together with error analysis. Response surface methodology (RSM) supported by design of experiments (DOE) is implemented to optimize the operating parameters. Individual models, along with the integrated model, are used to study and design predictive control of the coal-fired thermal power plant.
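
    A compact sketch of the kind of quadratic response-surface fit and R² check described above, with hypothetical fuel-flow and air-flow data standing in for the boiler's manipulated variables:

        import numpy as np

        # Hypothetical steady-state data: normalized fuel flow and air flow
        # (manipulated variables) vs. superheater outlet temperature (deg C).
        rng = np.random.default_rng(6)
        fuel = rng.uniform(0.6, 1.0, 40)
        air = rng.uniform(0.5, 0.9, 40)
        temp = (500 + 80 * fuel + 60 * air - 30 * fuel**2 - 20 * air**2
                + 10 * fuel * air + rng.normal(0, 2, 40))

        # Quadratic response surface: T ~ b0 + b1*f + b2*a + b3*f^2 + b4*a^2 + b5*f*a
        X = np.column_stack([np.ones_like(fuel), fuel, air, fuel**2, air**2, fuel * air])
        beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

        # R^2 of the fitted surface.
        pred = X @ beta
        r2 = 1 - np.sum((temp - pred)**2) / np.sum((temp - temp.mean())**2)
        print(beta.round(2), round(r2, 3))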

  4. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally passed to the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate the model.

  5. Referenceless perceptual fog density prediction model

    NASA Astrophysics Data System (ADS)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.

    2014-02-01

    We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without side geographical camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the measured visibility in a foggy scene as measured by judgments taken in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.

  6. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.

  7. A Systems Analysis of the Library and Information Science Statistical Data System: The Preliminary Study. Interim Report.

    ERIC Educational Resources Information Center

    Hamburg, Morris; And Others

    The long-term goal of this investigation is to design and establish a national model for a system of library statistical data. This is a report on The Preliminary Study which was carried out over an 11-month period ending May, 1969. The objective of The Preliminary Study was to design and delimit The Research Investigation in the most efficient…

  8. The relationship between venture capital investment and macro economic variables via statistical computation method

    NASA Astrophysics Data System (ADS)

    Aygunes, Gunes

    2017-07-01

    The objective of this paper is to survey and determine the macroeconomic factors affecting the level of venture capital (VC) investment in a country. The literature focuses on venture capitalists' quality and countries' venture capital investments. The aim of this paper is to characterize the relationship between venture capital investment and macroeconomic variables via a statistical computation method. We investigate a set of countries and macroeconomic variables. Using this statistical computation method, we derive correlations between venture capital investments and macroeconomic variables. According to a logistic regression (logit) model, the macroeconomic variables are correlated with each other in three groups. Venture capitalists regard these correlations as an indicator. Finally, we give the correlation matrix of our results.

  9. Improving the Validity of Activity of Daily Living Dependency Risk Assessment

    PubMed Central

    Clark, Daniel O.; Stump, Timothy E.; Tu, Wanzhu; Miller, Douglas K.

    2015-01-01

    Objectives Efforts to prevent activity of daily living (ADL) dependency may be improved through models that assess older adults’ dependency risk. We evaluated whether cognition and gait speed measures improve the predictive validity of interview-based models. Method Participants were 8,095 self-respondents in the 2006 Health and Retirement Survey who were aged 65 years or over and independent in five ADLs. Incident ADL dependency was determined from the 2008 interview. Models were developed using random 2/3rd cohorts and validated in the remaining 1/3rd. Results Compared to a c-statistic of 0.79 in the best interview model, the model including cognitive measures had c-statistics of 0.82 and 0.80 while the best fitting gait speed model had c-statistics of 0.83 and 0.79 in the development and validation cohorts, respectively. Conclusion Two relatively brief models, one that requires an in-person assessment and one that does not, had excellent validity for predicting incident ADL dependency but did not significantly improve the predictive validity of the best fitting interview-based models. PMID:24652867

  10. The Spiral Arm Segments of the Galaxy within 3 kpc from the Sun: A Statistical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griv, Evgeny; Jiang, Ing-Guey; Hou, Li-Gang, E-mail: griv@bgu.ac.il

    As can be reasonably expected, upcoming large-scale APOGEE, GAIA, GALAH, LAMOST, and WEAVE stellar spectroscopic surveys will yield rather noisy Galactic distributions of stars. In view of the possibility of employing these surveys, our aim is to present a statistical method to extract information about the spiral structure of the Galaxy from currently available data, and to demonstrate the effectiveness of this method. The model differs from previous works studying how objects are distributed in space in its calculation of the statistical significance of the hypothesis that some of the objects are actually concentrated in a spiral. A statistical analysis of the distribution of cold dust clumps within molecular clouds, H ii regions, Cepheid stars, and open clusters in the nearby Galactic disk within 3 kpc from the Sun is carried out. As an application of the method, we obtain distances between the Sun and the centers of the neighboring Sagittarius arm segment, the Orion arm segment in which the Sun is located, and the Perseus arm segment. Pitch angles of the logarithmic spiral segments and their widths are also estimated. The hypothesis that the collected objects accidentally form spirals is refuted with almost 100% statistical confidence. We show that these four independent distributions of young objects lead to essentially the same results. We also demonstrate that our newly deduced values of the mean distances and pitch angles for the segments are not too far from those found recently by Reid et al. using VLBI-based trigonometric parallaxes of massive star-forming regions.

  11. Statistical Field Estimation for Complex Coastal Regions and Archipelagos (PREPRINT)

    DTIC Science & Technology

    2011-04-09

    and study the computational properties of these schemes. Specifically, we extend a multiscale Objective Analysis (OA) approach to complex coastal regions and... multiscale free-surface code builds on the primitive-equation model of the Harvard Ocean Prediction System (HOPS, Haley et al. (2009)). Additionally

  12. Thomas-Fermi model for a bulk self-gravitating stellar object in two dimensions

    NASA Astrophysics Data System (ADS)

    De, Sanchari; Chakrabarty, Somenath

    2015-09-01

    In this article we have solved a hypothetical problem related to the stability and gross properties of two-dimensional self-gravitating stellar objects using the Thomas-Fermi model. The formalism presented here is an extension of the standard three-dimensional problem discussed in the book on statistical physics, Part I by Landau and Lifshitz. Further, the formalism presented in this article may be considered a class problem for post-graduate-level students of physics or may be assigned as a part of their dissertation project.

  13. Object recognition with hierarchical discriminant saliency networks.

    PubMed

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.

  14. Statistical Interior Tomography

    PubMed Central

    Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.

    2011-01-01

    This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by the Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in the maximum a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function with an initial image from the direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low count data. PMID:21233044
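
    A sketch of the kind of MAP objective described above: a Poisson negative log-likelihood of the projection data plus a total-variation penalty. The system matrix, penalty weight, and image here are hypothetical, and the paper's alternating minimization solver is not reproduced.

        import numpy as np

        def map_objective(image, projections, system_matrix, beta=0.05, eps=1e-8):
            """Negative Poisson log-likelihood of the projections plus a
            total-variation penalty on the 2-D image."""
            img = np.asarray(image, dtype=float)
            expected = np.maximum(system_matrix @ img.ravel(), eps)   # forward projection
            neg_loglik = np.sum(expected - projections * np.log(expected))
            dx = np.diff(img, axis=1, append=img[:, -1:])             # horizontal gradients
            dy = np.diff(img, axis=0, append=img[-1:, :])             # vertical gradients
            tv = np.sum(np.sqrt(dx**2 + dy**2 + eps))
            return neg_loglik + beta * tv

        # Hypothetical tiny example: 8x8 image, random dense system matrix.
        rng = np.random.default_rng(7)
        img = rng.random((8, 8))
        A = rng.random((32, 64))
        proj = rng.poisson(A @ img.ravel())
        print(map_objective(img, proj, A))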

  15. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.

  16. Decision-making for foot-and-mouth disease control: Objectives matter

    USGS Publications Warehouse

    Probert, William J. M.; Shea, Katriona; Fonnesbeck, Christopher J.; Runge, Michael C.; Carpenter, Tim E.; Durr, Salome; Garner, M. Graeme; Harvey, Neil; Stevenson, Mark A.; Webb, Colleen T.; Werkman, Marleen; Tildesley, Michael J.; Ferrari, Matthew J.

    2016-01-01

    Formal decision-analytic methods can be used to frame disease control problems, the first step of which is to define a clear and specific objective. We demonstrate the imperative of framing clearly-defined management objectives in finding optimal control actions for control of disease outbreaks. We illustrate an analysis that can be applied rapidly at the start of an outbreak when there are multiple stakeholders involved with potentially multiple objectives, and when there are also multiple disease models upon which to compare control actions. The output of our analysis frames subsequent discourse between policy-makers, modellers and other stakeholders, by highlighting areas of discord among different management objectives and also among different models used in the analysis. We illustrate this approach in the context of a hypothetical foot-and-mouth disease (FMD) outbreak in Cumbria, UK using outputs from five rigorously-studied simulation models of FMD spread. We present both relative rankings and relative performance of controls within each model and across a range of objectives. Results illustrate how control actions change across both the base metric used to measure management success and across the statistic used to rank control actions according to said metric. This work represents a first step towards reconciling the extensive modelling work on disease control problems with frameworks for structured decision making.

  17. Impact of the calibration period on the conceptual rainfall-runoff model parameter estimates

    NASA Astrophysics Data System (ADS)

    Todorovic, Andrijana; Plavsic, Jasna

    2015-04-01

    A conceptual rainfall-runoff model is defined by its structure and parameters, which are commonly inferred through model calibration. Parameter estimates depend on the objective function(s), the optimisation method, and the calibration period. Model calibration over different periods may result in dissimilar parameter estimates, while model efficiency decreases outside the calibration period. The problem of model (parameter) transferability, which conditions the reliability of hydrologic simulations, has been investigated for decades. In this paper, the dependence of the parameter estimates and model performance on the calibration period is analysed. The main question addressed is: are there any changes in optimised parameters and model efficiency that can be linked to changes in hydrologic or meteorological variables (flow, precipitation and temperature)? The conceptual, semi-distributed HBV-light model is calibrated over five-year periods shifted by one year (sliding time windows). The length of the calibration periods is selected to enable identification of all parameters. One water year of model warm-up precedes every simulation, which starts at the beginning of a water year. The model is calibrated using the built-in GAP optimisation algorithm. The objective function used for calibration is composed of the Nash-Sutcliffe coefficient for flows and for logarithms of flows, and the volumetric error, all of which participate in the composite objective function with approximately equal weights. The same prior parameter ranges are used in all simulations. The model is calibrated against flows observed at the Slovac stream gauge on the Kolubara River in Serbia (records from 1954 to 2013). There are no trends in precipitation or flows; however, there is a statistically significant increasing trend in temperature in this catchment. Parameter variability across the calibration periods is quantified in terms of standard deviations of normalised parameters, enabling detection of the most variable parameters. Correlation coefficients between the optimised model parameters and total precipitation P, mean temperature T and mean flow Q are calculated to give insight into parameter dependence on the hydrometeorological drivers. The results reveal high sensitivity of almost all model parameters to the calibration period. The highest variability is displayed by the refreezing coefficient, the water holding capacity, and the temperature gradient. The only statistically significant (decreasing) trend is detected in the evapotranspiration reduction threshold. Statistically significant correlations are detected between the precipitation gradient and precipitation depth, and between the time-area histogram base and flows. All other correlations are not statistically significant, implying that changes in optimised parameters cannot generally be linked to changes in P, T or Q. As for model performance, the model reproduces the observed runoff satisfactorily, though runoff is slightly overestimated in wet periods. The Nash-Sutcliffe efficiency coefficient (NSE) ranges from 0.44 to 0.79. Higher NSE values are obtained over wetter periods, which is supported by a statistically significant correlation between NSE and flows. Overall, no systematic variations in parameters or model performance are detected. Parameter variability may therefore rather be attributed to errors in the data or inadequacies in the model structure. Further research is required to examine the impact of the calibration strategy or model structure on the temporal variability of optimised parameters.
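
    The Nash-Sutcliffe terms in the composite objective can be sketched as follows; the exact way the volumetric error enters the composite function is not given above, so the combination below is an illustrative assumption using the approximately equal weights mentioned, and the flow series are hypothetical.

        import numpy as np

        def nse(simulated, observed):
            """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean)^2)."""
            simulated, observed = np.asarray(simulated), np.asarray(observed)
            return 1.0 - np.sum((observed - simulated)**2) / np.sum((observed - observed.mean())**2)

        def composite_objective(simulated, observed, weights=(1/3, 1/3, 1/3)):
            """Equally weighted combination of NSE on flows, NSE on log flows,
            and (one minus) the absolute volumetric error, to be maximized."""
            nse_q = nse(simulated, observed)
            nse_log = nse(np.log(simulated), np.log(observed))
            vol_err = abs(np.sum(simulated) - np.sum(observed)) / np.sum(observed)
            return weights[0] * nse_q + weights[1] * nse_log + weights[2] * (1.0 - vol_err)

        # Hypothetical daily flows (m^3/s) over a five-year calibration window.
        rng = np.random.default_rng(8)
        obs = np.exp(rng.normal(2.0, 0.6, 1825))
        sim = obs * np.exp(rng.normal(0.0, 0.2, 1825))
        print(round(nse(sim, obs), 3), round(composite_objective(sim, obs), 3))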

  18. Sparse intervertebral fence composition for 3D cervical vertebra segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian

    2018-06-01

    Statistical shape models are capable of extracting shape prior information and are usually utilized to assist the segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposes a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset of CT images from 20 patients. A quantitative comparison against the corresponding reference vertebral segmentations yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance and completely eliminates inter-process overlap.
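
    The Dice similarity index reported above can be computed as in the following sketch; the masks here are random placeholders rather than actual vertebra segmentations.

        import numpy as np

        def dice_similarity(seg_a, seg_b):
            """Dice similarity index between two binary masks:
            2 * |A intersect B| / (|A| + |B|)."""
            a = np.asarray(seg_a, dtype=bool)
            b = np.asarray(seg_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            return 2.0 * intersection / (a.sum() + b.sum())

        # Hypothetical 3-D masks: automatic vs. reference vertebra segmentation.
        rng = np.random.default_rng(9)
        reference = rng.random((64, 64, 64)) > 0.7
        automatic = np.logical_xor(reference, rng.random((64, 64, 64)) > 0.98)
        print(f"Dice = {dice_similarity(automatic, reference):.4f}")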

  19. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America

  20. Mid-infrared interferometry of AGNs: A statistical view into the dusty nuclear environment of the Seyfert Galaxies.

    NASA Astrophysics Data System (ADS)

    Lopez-Gonzaga, N.

    2015-09-01

    The high resolution achieved by the MIDI instrument at the VLTI has allowed more detailed information to be obtained about the geometry and structure of the nuclear mid-infrared emission of AGNs, but due to the lack of real images, the interpretation of the results is not an easy task. To profit more from the high resolution data, we developed a statistical tool that allows these data to be interpreted using clumpy torus models. A statistical approach is needed to overcome effects such as the randomness in the position of the clouds and the uncertainty in the true position angle on the sky. Our results, obtained by studying the mid-infrared emission at the highest resolution currently available, suggest that the dusty environment of Type I objects is formed by a smaller number of clouds than that of Type II objects.

  1. Statistical modelling for recurrent events: an application to sports injuries

    PubMed Central

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. Objective This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Conclusions Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683

  2. 2-Point microstructure archetypes for improved elastic properties

    NASA Astrophysics Data System (ADS)

    Adams, Brent L.; Gao, Xiang

    2004-01-01

    Rectangular models of material microstructure are described by their 1- and 2-point (spatial) correlation statistics of the placement of local state. In the procedure described here the local state space is described in discrete form, and the focus is on the placement of local state within a finite number of cells comprising rectangular models. It is illustrated that effective elastic properties (generalized Hashin-Shtrikman bounds) can be obtained that are linear in components of the correlation statistics. Within this framework the concept of an eigen-microstructure within the microstructure hull is useful. Given the practical innumerability of the microstructure hull, however, we introduce a method for generating a sequence of archetypes of eigen-microstructure from the 2-point correlation statistics of local state, assuming that the 1-point statistics are stationary. The method is illustrated by obtaining an archetype for an imaginary two-phase material where the objective is to maximize the combination $C^{*}_{xxxx} + C^{*}_{xyxy}$.

  3. Exploring patient satisfaction predictors in relation to a theoretical model.

    PubMed

    Grøndahl, Vigdis Abrahamsen; Hall-Lord, Marie Louise; Karlsson, Ingela; Appelgren, Jari; Wilde-Larsson, Bodil

    2013-01-01

    The aim is to describe patients' care quality perceptions and satisfaction and to explore potential patient satisfaction predictors as person-related conditions, external objective care conditions and patients' perception of actual care received ("PR") in relation to a theoretical model. A cross-sectional design was used. Data were collected using one questionnaire combining questions from four instruments: Quality from patients' perspective; Sense of coherence; Big five personality trait; and Emotional stress reaction questionnaire (ESRQ), together with questions from previous research. In total, 528 patients (83.7 per cent response rate) from eight medical, three surgical and one medical/surgical ward in five Norwegian hospitals participated. Answers from 373 respondents with complete ESRQ questionnaires were analysed. Sequential multiple regression analysis with ESRQ as dependent variable was run in three steps: person-related conditions, external objective care conditions, and PR (p < 0.05). Step 1 (person-related conditions) explained 51.7 per cent of the ESRQ variance. Step 2 (external objective care conditions) explained an additional 2.4 per cent. Step 3 (PR) gave no significant additional explanation (0.05 per cent). Steps 1 and 2 contributed statistical significance to the model. Patients rated both quality-of-care and satisfaction highly. The paper shows that the theoretical model using an emotion-oriented approach to assess patient satisfaction can explain 54 per cent of patient satisfaction in a statistically significant manner.

  4. Parent Ratings of ADHD Symptoms: Generalized Partial Credit Model Analysis of Differential Item Functioning across Gender

    ERIC Educational Resources Information Center

    Gomez, Rapson

    2012-01-01

    Objective: Generalized partial credit model, which is based on item response theory (IRT), was used to test differential item functioning (DIF) for the "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.), inattention (IA), and hyperactivity/impulsivity (HI) symptoms across boys and girls. Method: To accomplish this, parents completed…

  5. Shallow Turbulence in Rivers and Estuaries

    DTIC Science & Technology

    2012-09-30

    objectives are to: 1. Determine spatial patterns of shallow turbulence from in-situ and remote sensing data and investigate the effects and...production through a model parameter study, and determine the optimal model configuration that statistically reproduces the shallow turbulence...more probable cause. According to Nezu et al. (1993), longitudinal vorticity streets would cause alternating upwelling (boils) and downwelling

  6. Analysis of Time-Series Quasi-Experiments. Final Report.

    ERIC Educational Resources Information Center

    Glass, Gene V.; Maguire, Thomas O.

    The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…

  7. Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation

    DTIC Science & Technology

    1990-05-01

    process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6. [Figure captions from the excerpt: statistical histograms and phase; truth object speckled via the NENI; histogram of speckle.]

  8. A Model for Indexing Medical Documents Combining Statistical and Symbolic Knowledge.

    PubMed Central

    Avillach, Paul; Joubert, Michel; Fieschi, Marius

    2007-01-01

    OBJECTIVES: To develop and evaluate an information processing method based on terminologies, in order to index medical documents in any given documentary context. METHODS: We designed a model using both symbolic general knowledge extracted from the Unified Medical Language System (UMLS) and statistical knowledge extracted from a domain of application. Using statistical knowledge allowed us to contextualize the general knowledge for every particular situation. For each document studied, the extracted terms are ranked to highlight the most significant ones. The model was tested on a set of 17,079 French standardized discharge summaries (SDSs). RESULTS: The most important ICD-10 term of each SDS was ranked 1st or 2nd by the method in nearly 90% of the cases. CONCLUSIONS: The use of several terminologies leads to more precise indexing. The improvement achieved in the model’s implementation performances as a result of using semantic relationships is encouraging. PMID:18693792

  9. The relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards: A Monte Carlo study

    PubMed Central

    Austin, Peter C.; Reeves, Mathew J.

    2015-01-01

    Background Hospital report cards, in which outcomes following the provision of medical or surgical care are compared across health care providers, are being published with increasing frequency. Essential to the production of these reports is risk-adjustment, which allows investigators to account for differences in the distribution of patient illness severity across different hospitals. Logistic regression models are frequently used for risk-adjustment in hospital report cards. Many applied researchers use the c-statistic (equivalent to the area under the receiver operating characteristic curve) of the logistic regression model as a measure of the credibility and accuracy of hospital report cards. Objectives To determine the relationship between the c-statistic of a risk-adjustment model and the accuracy of hospital report cards. Research Design Monte Carlo simulations were used to examine this issue. We examined the influence of three factors on the accuracy of hospital report cards: the c-statistic of the logistic regression model used for risk-adjustment, the number of hospitals, and the number of patients treated at each hospital. The parameters used to generate the simulated datasets came from analyses of patients hospitalized with a diagnosis of acute myocardial infarction in Ontario, Canada. Results The c-statistic of the risk-adjustment model had, at most, a very modest impact on the accuracy of hospital report cards, whereas the number of patients treated at each hospital had a much greater impact. Conclusions The c-statistic of a risk-adjustment model should not be used to assess the accuracy of a hospital report card. PMID:23295579
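    A minimal Monte Carlo sketch in the spirit of this design (not the authors' simulation code; the sample sizes, baseline risk, and effect sizes below are arbitrary assumptions): simulate hospitals with varying true quality, compute the c-statistic of the patient-level risk-adjustment model, and check how well an observed-to-expected ratio recovers the true hospital ranking.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_hosp, n_pat = 100, 500

true_quality = rng.normal(0.0, 0.3, n_hosp)          # hospital-specific shift in log-odds of death
x = rng.normal(0.0, 1.0, (n_hosp, n_pat))            # patient-level risk factor
beta = 1.0                                            # larger beta -> higher c-statistic
logit = -2.0 + beta * x + true_quality[:, None]
y = (rng.random((n_hosp, n_pat)) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# c-statistic of the (here, known) patient-level risk-adjustment model
c_stat = roc_auc_score(y.ravel(), (beta * x).ravel())

# hospital "report card": observed-to-expected deaths, compared with true quality
expected = (1.0 / (1.0 + np.exp(-(-2.0 + beta * x)))).sum(axis=1)
oe_ratio = y.sum(axis=1) / expected
rho, _ = spearmanr(oe_ratio, true_quality)

print(f"c-statistic = {c_stat:.2f}, rank correlation with true quality = {rho:.2f}")
```

Re-running the sketch while varying beta (the discrimination of the risk model) and n_pat mimics the factors examined in the simulation study.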

  10. Are some BL Lac objects artefacts of gravitational lensing?

    NASA Technical Reports Server (NTRS)

    Ostriker, J. P.; Vietri, M.

    1985-01-01

    It is proposed here that a significant fraction of BL Lac objects are optically violently variable quasars whose continuum emission has been greatly amplified, relative to the line emission, by pointlike gravitational lenses in intervening galaxies. Several anomalous physical and statistical properties of BL Lacs can be understood on the basis of this model, which is immediately testable on the basis of absorption line studies and by direct imaging.

  11. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
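    The GLUE procedure mentioned above can be illustrated with a toy model; in the sketch below an exponential recession curve stands in for HYMOD, the Nash-Sutcliffe efficiency serves as the informal likelihood, and the 0.7 behavioral threshold is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_model(theta, t):
    # stand-in for HYMOD: a two-parameter recession model
    q0, k = theta
    return q0 * np.exp(-k * t)

t = np.arange(50)
obs = toy_model((10.0, 0.08), t) + rng.normal(0, 0.3, t.size)

# GLUE: Monte Carlo sampling of parameters, informal likelihood, behavioral selection
thetas = np.column_stack([rng.uniform(5, 15, 5000), rng.uniform(0.01, 0.2, 5000)])
sims = np.array([toy_model(th, t) for th in thetas])
nse = 1 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

behavioral = sims[nse > 0.7]
weights = nse[nse > 0.7]
weights = weights / weights.sum()

# likelihood-weighted 5-95% prediction limits at each time step
lower, upper = np.empty(t.size), np.empty(t.size)
for j in range(t.size):
    order = np.argsort(behavioral[:, j])
    cdf = np.cumsum(weights[order])
    lower[j] = behavioral[order, j][np.searchsorted(cdf, 0.05)]
    upper[j] = behavioral[order, j][np.searchsorted(cdf, 0.95)]

print("behavioral sets:", len(weights), "| limit width at t=0:", round(upper[0] - lower[0], 2))
```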

  12. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.

  13. Selecting statistical model and optimum maintenance policy: a case study of hydraulic pump.

    PubMed

    Ruhi, S; Karim, M R

    2016-01-01

    Proper maintenance policy can play a vital role for effective investigation of product reliability. Every engineered object such as a product, plant or infrastructure needs preventive and corrective maintenance. In this paper we look at a real case study. It deals with the maintenance of hydraulic pumps used in excavators by a mining company. We obtain the data that the owner had collected and carry out an analysis, building models for pump failures. The data consist of both failure and censored lifetimes of the hydraulic pump. Different competitive mixture models are applied to analyze a set of maintenance data of a hydraulic pump. Various characteristics of the mixture models, such as the cumulative distribution function, reliability function, mean time to failure, etc. are estimated to assess the reliability of the pump. The Akaike Information Criterion, adjusted Anderson-Darling test statistic, Kolmogorov-Smirnov test statistic and root mean square error are considered to select the suitable models among a set of competitive models. The maximum likelihood estimation method via the EM algorithm is applied mainly for estimating the parameters of the models and reliability-related quantities. In this study, it is found that a threefold mixture model (Weibull-Normal-Exponential) fits well for the hydraulic pump failure data set. This paper also illustrates how a suitable statistical model can be applied to estimate the optimum maintenance period at minimum cost for a hydraulic pump.
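    A simplified sketch of the model-selection step: it assumes complete (uncensored) failure times and single-component candidate distributions rather than the paper's censored data and threefold mixtures, and the failure times below are invented for illustration.

```python
import numpy as np
from scipy import stats

# hypothetical complete (uncensored) pump failure times, in operating hours
t = np.array([1200., 1900., 2300., 2500., 3100., 3600., 4200., 5000., 6100., 7400.])

candidates = {
    "weibull":     (stats.weibull_min, dict(floc=0)),
    "lognormal":   (stats.lognorm,     dict(floc=0)),
    "exponential": (stats.expon,       dict(floc=0)),
}

for name, (dist, kw) in candidates.items():
    params = dist.fit(t, **kw)              # maximum likelihood fit
    loglik = dist.logpdf(t, *params).sum()
    k = len(params) - 1                      # location fixed at 0, so one fewer free parameter
    aic = 2 * k - 2 * loglik
    print(f"{name:12s} AIC = {aic:.1f}")     # smaller AIC -> preferred model
```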

  14. A Comparative Evaluation of Mixed Dentition Analysis on Reliability of Cone Beam Computed Tomography Image Compared to Plaster Model

    PubMed Central

    Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam

    2017-01-01

    Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained images over plaster models for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data, and P < 0.05 was considered statistically significant. Results: Statistically significant results were obtained on data comparison between CBCT-derived images and plaster models; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable than data obtained directly from plaster models for mixed dentition analysis. PMID:28852639

  15. Maxwell's color statistics: from reduction of visible errors to reduction to invisible molecules.

    PubMed

    Cat, Jordi

    2014-12-01

    This paper presents a cross-disciplinary and multi-disciplinary account of Maxwell's introduction of statistical models of molecules for the composition of gases. The account focuses on Maxwell's deployment of statistical models of data in his contemporaneous color researches as established in Cambridge mathematical physics, especially by Maxwell's seniors and mentors. The paper also argues that the cross-disciplinary, or cross-domain, transfer of resources from the natural and social sciences took place in both directions and relied on the complex intra-disciplinary, or intra-domain, dynamics of Maxwell's researches in natural sciences, in color theory, physical astronomy, electromagnetism and dynamical theory of gases, as well as involving a variety of types of communicating and mediating media, from material objects to concepts, techniques and institutions.

  16. Statistical Analyses of Satellite Cloud Object Data from CERES. Part II; Tropical Convective Cloud Objects During 1998 El Nino and Validation of the Fixed Anvil Temperature Hypothesis

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce a.; Parker, Lindsay; Lin, Bing; Eitzen, Zachary A.; Branson, Mark

    2006-01-01

    Characteristics of tropical deep convective cloud objects observed over the tropical Pacific during January-August 1998 are examined using the Tropical Rainfall Measuring Mission/Clouds and the Earth's Radiant Energy System single scanner footprint (SSF) data. These characteristics include the frequencies of occurrence and statistical distributions of cloud physical properties. Their variations with cloud-object size, sea surface temperature (SST), and satellite precessing cycle are analyzed in detail. A cloud object is defined as a contiguous patch of the Earth composed of satellite footprints within a single dominant cloud-system type. It is found that statistical distributions of cloud physical properties are significantly different among three size categories of cloud objects with equivalent diameters of 100 - 150 km (small), 150 - 300 km (medium), and > 300 km (large), respectively, except for the distributions of ice particle size. The distributions for the larger-size category of cloud objects are more skewed towards high SSTs, high cloud tops, low cloud-top temperature, large ice water path, high cloud optical depth, low outgoing longwave (LW) radiation, and high albedo than the smaller-size category. As SST varied from one satellite precessing cycle to another, the changes in macrophysical properties of cloud objects over the entire tropical Pacific were small for the large-size category of cloud objects, relative to those of the small- and medium-size categories. This result suggests that the fixed anvil temperature hypothesis of Hartmann and Larson may be valid for the large-size category. Combined with the result that a higher percentage of the large-size category of cloud objects occurs during higher SST subperiods, this implies that macrophysical properties of cloud objects would be less sensitive to further warming of the climate. On the other hand, when cloud objects are classified according to SSTs where large-scale dynamics plays important roles, statistical characteristics of cloud microphysical properties, optical depth and albedo are not sensitive to the SST, but those of cloud macrophysical properties are strongly dependent upon the SST. Frequency distributions of vertical velocity from the European Center for Medium-range Weather Forecasts model that is matched to each cloud object are used to interpret some of the findings in this study.

  17. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.

  18. Cross-cultural variation of memory colors of familiar objects.

    PubMed

    Smet, Kevin A G; Lin, Yandan; Nagy, Balázs V; Németh, Zoltan; Duque-Chica, Gloria L; Quintero, Jesús M; Chen, Hung-Shing; Luo, Ronnier M; Safi, Mahdi; Hanselaer, Peter

    2014-12-29

    The effect of cross-regional or cross-cultural differences on color appearance ratings and memory colors of familiar objects was investigated in seven different countries/regions - Belgium, Hungary, Brazil, Colombia, Taiwan, China and Iran. In each region the familiar objects were presented on a calibrated monitor in over 100 different colors to a test panel of observers that were asked to rate the similarity of the presented object color with respect to what they thought the object looks like in reality (memory color). For each object and region the mean observer ratings were modeled by a bivariate Gaussian function. A statistical analysis showed significant (p < 0.001) differences between the region average observers and the global average observer obtained by pooling the data from all regions. However, the effect size of geographical region or culture was found to be small. In fact, the differences between the region average observers and the global average observer were found to be of the same magnitude or smaller than the typical within-region inter-observer variability. Thus, although statistical differences in color appearance ratings and memory colors between regions were found, regional impact is not likely to be of practical importance.
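    The bivariate Gaussian modelling of mean ratings can be sketched as a nonlinear least-squares fit. The chromaticity coordinates and ratings below are synthetic, and the parameterization is one plausible choice rather than the authors' exact model; the fitted mean is the memory-color estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def bivariate_gaussian(xy, a, mu1, mu2, s1, s2, rho):
    x, y = xy
    z = ((x - mu1) / s1) ** 2 \
        - 2 * rho * (x - mu1) * (y - mu2) / (s1 * s2) \
        + ((y - mu2) / s2) ** 2
    return a * np.exp(-z / (2 * (1 - rho ** 2)))

# synthetic data: presented object colors in a chromaticity plane and mean similarity ratings
rng = np.random.default_rng(0)
ab = rng.uniform(-40, 40, size=(120, 2))
ratings = bivariate_gaussian(ab.T, 5.0, 10.0, 20.0, 12.0, 8.0, 0.3) + rng.normal(0, 0.2, 120)

p0 = [ratings.max(), 0.0, 0.0, 15.0, 15.0, 0.0]
popt, _ = curve_fit(bivariate_gaussian, ab.T, ratings, p0=p0,
                    bounds=([0, -50, -50, 1, 1, -0.99], [10, 50, 50, 50, 50, 0.99]))
print("memory-color estimate (mu1, mu2):", popt[1].round(1), popt[2].round(1))
```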

  19. Statistical Issues for Uncontrolled Reentry Hazards Empirical Tests of the Predicted Footprint for Uncontrolled Satellite Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2011-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, material, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. Because this information is used in making policy and engineering decisions, it is important that these assumptions be tested using empirical data. This study uses the latest database of known uncontrolled reentry locations measured by the United States Department of Defense. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors in the final stages of reentry - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and possibly change the probability of reentering over a given location. In this paper, the measured latitude and longitude distributions of these objects are directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.

  20. Functional annotation of regulatory pathways.

    PubMed

    Pandey, Jayesh; Koyutürk, Mehmet; Kim, Yohan; Szpankowski, Wojciech; Subramaniam, Shankar; Grama, Ananth

    2007-07-01

    Standardized annotations of biomolecules in interaction networks (e.g. Gene Ontology) provide comprehensive understanding of the function of individual molecules. Extending such annotations to pathways is a critical component of functional characterization of cellular signaling at the systems level. We propose a framework for projecting gene regulatory networks onto the space of functional attributes using multigraph models, with the objective of deriving statistically significant pathway annotations. We first demonstrate that annotations of pairwise interactions do not generalize to indirect relationships between processes. Motivated by this result, we formalize the problem of identifying statistically overrepresented pathways of functional attributes. We establish the hardness of this problem by demonstrating the non-monotonicity of common statistical significance measures. We propose a statistical model that emphasizes the modularity of a pathway, evaluating its significance based on the coupling of its building blocks. We complement the statistical model by an efficient algorithm and software, Narada, for computing significant pathways in large regulatory networks. Comprehensive results from our methods applied to the Escherichia coli transcription network demonstrate that our approach is effective in identifying known, as well as novel biological pathway annotations. Narada is implemented in Java and is available at http://www.cs.purdue.edu/homes/jpandey/narada/.

  1. Limited data tomographic image reconstruction via dual formulation of total variation minimization

    NASA Astrophysics Data System (ADS)

    Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong

    2011-03-01

    X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. Digital breast tomosynthesis (DBT), which measures several low-dose projections over a limited angle range, may be an alternative modality for breast imaging, since it allows visualization of cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on a statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inversion, it provides an efficient way to include the TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low-dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our finding.
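    As a rough sketch of the overall idea, the code below combines a data-fidelity gradient with the gradient of a smoothed total-variation term. It uses a Gaussian least-squares fidelity, a random matrix as a stand-in for the limited-angle projector, and plain gradient descent, so it is a simplification of the paper's statistical model and dual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy problem: recover a piecewise-constant image from noisy, undersampled linear measurements
n = 32
truth = np.zeros((n, n)); truth[8:20, 10:24] = 1.0
A = rng.normal(0, 1, (400, n * n)) / np.sqrt(n * n)     # stand-in for the projection operator
b = A @ truth.ravel() + rng.normal(0, 0.01, 400)

lam, eps, step = 0.002, 1e-3, 0.5
x = np.zeros(n * n)

def tv_grad(img):
    # gradient of the smoothed TV term sum(sqrt(dx^2 + dy^2 + eps))
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

for _ in range(300):
    grad = A.T @ (A @ x - b) + lam * tv_grad(x.reshape(n, n)).ravel()
    x -= step * grad

print("relative reconstruction error:",
      round(np.linalg.norm(x - truth.ravel()) / np.linalg.norm(truth), 3))
```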

  2. Statistical tools for analysis and modeling of cosmic populations and astronomical time series: CUDAHM and TSE

    NASA Astrophysics Data System (ADS)

    Loredo, Thomas; Budavari, Tamas; Scargle, Jeffrey D.

    2018-01-01

    This presentation provides an overview of open-source software packages addressing two challenging classes of astrostatistics problems. (1) CUDAHM is a C++ framework for hierarchical Bayesian modeling of cosmic populations, leveraging graphics processing units (GPUs) to enable applying this computationally challenging paradigm to large datasets. CUDAHM is motivated by measurement error problems in astronomy, where density estimation and linear and nonlinear regression must be addressed for populations of thousands to millions of objects whose features are measured with possibly complex uncertainties, potentially including selection effects. An example calculation demonstrates accurate GPU-accelerated luminosity function estimation for simulated populations of $10^6$ objects in about two hours using a single NVIDIA Tesla K40c GPU. (2) Time Series Explorer (TSE) is a collection of software in Python and MATLAB for exploratory analysis and statistical modeling of astronomical time series. It comprises a library of stand-alone functions and classes, as well as an application environment for interactive exploration of times series data. The presentation will summarize key capabilities of this emerging project, including new algorithms for analysis of irregularly-sampled time series.

  3. Covariation of depressive mood and spontaneous physical activity in major depressive disorder: toward continuous monitoring of depressive mood.

    PubMed

    Kim, Jinhyuk; Nakamura, Toru; Kikuchi, Hiroe; Yoshiuchi, Kazuhiro; Sasaki, Tsukasa; Yamamoto, Yoshiharu

    2015-07-01

    The objective evaluation of depressive mood is considered to be useful for the diagnosis and treatment of depressive disorders. Thus, we investigated psychobehavioral correlates, particularly the statistical associations between momentary depressive mood and behavioral dynamics measured objectively, in patients with major depressive disorder (MDD) and healthy subjects. Patients with MDD (n = 14) and healthy subjects (n = 43) wore a watch-type computer device and rated their momentary symptoms using ecological momentary assessment. Spontaneous physical activity in daily life, referred to as locomotor activity, was also continuously measured by an activity monitor built into the device. A multilevel modeling approach was used to model the associations between changes in depressive mood scores and the local statistics of locomotor activity simultaneously measured. We further examined the cross validity of such associations across groups. The statistical model established indicated that worsening of the depressive mood was associated with the increased intermittency of locomotor activity, as characterized by a lower mean and higher skewness. The model was cross validated across groups, suggesting that the same psychobehavioral correlates are shared by both healthy subjects and patients, although the latter had significantly higher mean levels of depressive mood scores. Our findings suggest the presence of robust as well as common associations between momentary depressive mood and behavioral dynamics in healthy individuals and patients with depression, which may lead to the continuous monitoring of the pathogenic processes (from healthy states) and pathological states of MDD.
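    A minimal sketch of the multilevel modelling approach, using synthetic data and hypothetical column names: the local statistics of each activity window (mean and skewness) enter as fixed effects, with a random intercept per subject.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import skew

rng = np.random.default_rng(0)

# synthetic data: for each momentary mood rating, summarize the preceding activity
# window by its mean and skewness (the "local statistics" of locomotor activity)
rows = []
for subj in range(20):
    for _ in range(30):
        activity = rng.gamma(shape=2.0, scale=1.0, size=360)   # stand-in activity counts
        mood = 3 - 0.5 * activity.mean() + 0.8 * skew(activity) + rng.normal(0, 0.5)
        rows.append(dict(subject=subj, mood=mood,
                         act_mean=activity.mean(), act_skew=skew(activity)))
df = pd.DataFrame(rows)

# multilevel (mixed-effects) model: fixed effects of the local statistics,
# random intercept for each subject
model = smf.mixedlm("mood ~ act_mean + act_skew", df, groups=df["subject"])
print(model.fit().summary())
```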

  4. Development of Turbulent Biological Closure Parameterizations

    DTIC Science & Technology

    2011-09-30

    LONG-TERM GOAL: The long-term goals of this project are: (1) to develop a theoretical framework to quantify turbulence-induced NPZ interactions; (2) to apply the theory to develop parameterizations to be used in realistic environmental physical-biological coupling numerical models. OBJECTIVES: Connect the Goodman and Robinson (2008) statistically based pdf theory to Advection Diffusion Reaction (ADR) modeling of NPZ interaction.

  5. A Model for Determining Teaching Efficacy through the Use of Qualitative Single Subject Design, Student Learning Outcomes and Associative Statistics

    ERIC Educational Resources Information Center

    Osler, James Edward, II; Mansaray, Mahmud

    2014-01-01

    Many universities and colleges are increasingly concerned about enhancing the comprehension and knowledge of their students, particularly in the classroom. One method of enhancing student success is teaching effectiveness. The objective of this research paper is to propose a novel research model which examines the relationship between…

  6. Test anxiety and academic performance in chiropractic students.

    PubMed

    Zhang, Niu; Henderson, Charles N R

    2014-01-01

    Objective: We assessed the level of students' test anxiety, and the relationship between test anxiety and academic performance. Methods: We recruited 166 third-quarter students. The Test Anxiety Inventory (TAI) was administered to all participants. Total scores from written examinations and objective structured clinical examinations (OSCEs) were used as response variables. Results: Multiple regression analysis shows that there was a modest, but statistically significant negative correlation between TAI scores and written exam scores, but not OSCE scores. Worry and emotionality were the best predictive models for written exam scores. Mean total anxiety and emotionality scores for females were significantly higher than those for males, but not worry scores. Conclusion: Moderate-to-high test anxiety was observed in 85% of the chiropractic students examined. However, total test anxiety, as measured by the TAI score, was a very weak predictive model for written exam performance. Multiple regression analysis demonstrated that replacing total anxiety (TAI) with worry and emotionality (TAI subscales) produces a much more effective predictive model of written exam performance. Sex, age, highest current academic degree, and ethnicity contributed little additional predictive power in either regression model. Moreover, TAI scores were not found to be statistically significant predictors of physical exam skill performance, as measured by OSCEs.

  7. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet

    PubMed Central

    Rolls, Edmund T.

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777

  8. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    PubMed

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  9. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  10. Mutual interference between statistical summary perception and statistical learning.

    PubMed

    Zhao, Jiaying; Ngo, Nhi; McKendrick, Ryan; Turk-Browne, Nicholas B

    2011-09-01

    The visual system is an efficient statistician, extracting statistical summaries over sets of objects (statistical summary perception) and statistical regularities among individual objects (statistical learning). Although these two kinds of statistical processing have been studied extensively in isolation, their relationship is not yet understood. We first examined how statistical summary perception influences statistical learning by manipulating the task that participants performed over sets of objects containing statistical regularities (Experiment 1). Participants who performed a summary task showed no statistical learning of the regularities, whereas those who performed control tasks showed robust learning. We then examined how statistical learning influences statistical summary perception by manipulating whether the sets being summarized contained regularities (Experiment 2) and whether such regularities had already been learned (Experiment 3). The accuracy of summary judgments improved when regularities were removed and when learning had occurred in advance. In sum, calculating summary statistics impeded statistical learning, and extracting statistical regularities impeded statistical summary perception. This mutual interference suggests that statistical summary perception and statistical learning are fundamentally related.

  11. Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size

    NASA Technical Reports Server (NTRS)

    Xu, Y. -l; Horstman, M.; Krisko, P. H.; Liou, J. -C; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.

    2008-01-01

    Orbital debris is a real concern for the safe operations of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes from 10 microns to greater than 1 meter. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (greater than or equal to 1 m and greater than or equal to 10 cm) populations by SSN catalog objects and the validation of the statistical approach. Also, it presents results for the populations with sizes of greater than or equal to 3.3 cm, greater than or equal to 1 cm, greater than or equal to 100 micrometers, and greater than or equal to 10 micrometers. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size-range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.

  12. Adaptable state based control system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Robert D. (Inventor); Dvorak, Daniel L. (Inventor); Gostelow, Kim P. (Inventor); Starbird, Thomas W. (Inventor); Gat, Erann (Inventor); Chien, Steve Ankuo (Inventor); Keller, Robert M. (Inventor)

    2004-01-01

    An autonomous controller, comprised of a state knowledge manager, a control executor, hardware proxies and a statistical estimator collaborates with a goal elaborator, with which it shares common models of the behavior of the system and the controller. The elaborator uses the common models to generate from temporally indeterminate sets of goals, executable goals to be executed by the controller. The controller may be updated to operate in a different system or environment than that for which it was originally designed by the replacement of shared statistical models and by the instantiation of a new set of state variable objects derived from a state variable class. The adaptation of the controller does not require substantial modification of the goal elaborator for its application to the new system or environment.

  13. A Confirmatory Factor Analysis of the Structure of Statistics Anxiety Measure: An examination of four alternative models

    PubMed Central

    Vahedi, Shahram; Farrokhi, Farahman

    2011-01-01

    Objective The aim of this study is to explore the confirmatory factor analysis results of the Persian adaptation of Statistics Anxiety Measure (SAM), proposed by Earp. Method The validity and reliability assessments of the scale were performed on 298 college students chosen randomly from Tabriz University in Iran. Confirmatory factor analysis (CFA) was carried out to determine the factor structures of the Persian adaptation of SAM. Results As expected, the second order model provided a better fit to the data than the three alternative models. Conclusions Hence, SAM provides an equally valid measure for use among college students. The study both expands and adds support to the existing body of math anxiety literature. PMID:22952530

  14. An Object-Relational Ifc Storage Model Based on Oracle Database

    NASA Astrophysics Data System (ADS)

    Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan

    2016-06-01

    As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. In order to adapt to this change, buildingSMART developed Industry Foundation Classes (IFC) to facilitate the interoperability between software platforms. However, IFC data are currently shared in the form of text files, which has drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. Firstly, establish the mapping rules between data types in the IFC specification and the Oracle database. Secondly, design the IFC database according to the relationships among IFC entities. Thirdly, parse the IFC file and extract IFC data. And lastly, store the IFC data into the corresponding tables in the IFC database. In the experiments, three different building models are selected to demonstrate the effectiveness of our storage model. The comparison of experimental statistics proves that IFC data are lossless during data exchange.

  15. ON A POSSIBLE SIZE/COLOR RELATIONSHIP IN THE KUIPER BELT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, R. E.; Kavelaars, J. J., E-mail: repike@uvic.ca

    2013-10-01

    Color measurements and albedo distributions introduce non-intuitive observational biases in size-color relationships among Kuiper Belt Objects (KBOs) that cannot be disentangled without a well characterized sample population with systematic photometry. Peixinho et al. report that the form of the KBO color distribution varies with absolute magnitude, H. However, Tegler et al. find that KBO color distributions are a property of object classification. We construct synthetic models of observed KBO colors based on two B-R color distribution scenarios: color distribution dependent on H magnitude (H-Model) and color distribution based on object classification (Class-Model). These synthetic B-R color distributions were modified to account for observational flux biases. We compare our synthetic B-R distributions to the observed "Hot" and "Cold" detected objects from the Canada-France Ecliptic Plane Survey and the Meudon Multicolor Survey. For both surveys, the Hot population color distribution rejects the H-Model, but is well described by the Class-Model. The Cold objects reject the H-Model, but the Class-Model (while not statistically rejected) also does not provide a compelling match for the data. Although we formally reject models where the structure of the color distribution is a strong function of H magnitude, we also do not find that a simple dependence of color distribution on orbit classification is sufficient to describe the color distribution of classical KBOs.

  16. Alternative approaches to predicting methane emissions from dairy cows.

    PubMed

    Mills, J A N; Kebreab, E; Yates, C M; Crompton, L A; Cammell, S B; Dhanoa, M S; Agnew, R E; France, J

    2003-12-01

    Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives that were all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake level. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
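    A modified Mitscherlich (monomolecular) curve of the general kind discussed can be fitted by nonlinear least squares. The intake and methane values below, and the particular three-parameter form, are illustrative assumptions rather than the paper's data or final equation.

```python
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(dmi, a, b, c):
    # monomolecular form: methane rises toward an asymptote a as intake increases
    return a - (a + b) * np.exp(-c * dmi)

# hypothetical calibration data: dry matter intake (kg/d) vs methane output (MJ/d)
dmi = np.array([8., 10., 12., 14., 16., 18., 20., 22.])
ch4 = np.array([12.5, 14.8, 16.9, 18.4, 19.8, 20.7, 21.6, 22.1])

popt, _ = curve_fit(mitscherlich, dmi, ch4, p0=[24.0, 8.0, 0.1])
pred = mitscherlich(dmi, *popt)
rmspe = 100 * np.sqrt(np.mean(((ch4 - pred) / ch4) ** 2))   # root mean square prediction error, %
print("a, b, c =", popt.round(3), "| RMSPE =", round(rmspe, 1), "%")
```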

  17. A Review of Statistical Failure Time Models with Application of a Discrete Hazard Based Model to 1Cr1Mo-0.25V Steel for Turbine Rotors and Shafts

    PubMed Central

    2017-01-01

    Producing predictions of the probabilistic risks of operating materials for given lengths of time at stated operating conditions requires the assimilation of existing deterministic creep life prediction models (that only predict the average failure time) with statistical models that capture the random component of creep. To date, these approaches have rarely been combined to achieve this objective. The first half of this paper therefore provides a summary review of some statistical models to help bridge the gap between these two approaches. The second half of the paper illustrates one possible assimilation using 1Cr1Mo-0.25V steel. The Wilshire equation for creep life prediction is integrated into a discrete hazard based statistical model—the former being chosen because of its novelty and proven capability in accurately predicting average failure times and the latter being chosen because of its flexibility in modelling the failure time distribution. Using this model it was found that, for example, if this material had been in operation for around 15 years at 823 K and 130 MPa, the chances of failure in the next year is around 35%. However, if this material had been in operation for around 25 years, the chance of failure in the next year rises dramatically to around 80%. PMID:29039773
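    The headline probabilities follow from a simple discrete-hazard calculation on a fitted lifetime distribution. The sketch below uses an arbitrary Weibull stand-in rather than the fitted Wilshire/discrete-hazard model, so its printed values will not reproduce the 35% and 80% figures; it only shows the conditional-probability mechanics.

```python
from scipy.stats import weibull_min

# illustrative stand-in lifetime distribution (in years) at a fixed stress and temperature
life = weibull_min(c=6.0, scale=20.0)

def prob_fail_next_year(t_years):
    # discrete hazard: P(fail in (t, t+1] | survived to t) = (S(t) - S(t+1)) / S(t)
    s_t, s_t1 = life.sf(t_years), life.sf(t_years + 1)
    return (s_t - s_t1) / s_t

for t in (15, 25):
    print(f"in service {t} y -> P(failure within the next year) = {prob_fail_next_year(t):.2f}")
```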

  18. The prior statistics of object colors.

    PubMed

    Koenderink, Jan J

    2010-02-01

    The prior statistics of object colors is of much interest because extensive statistical investigations of reflectance spectra reveal highly non-uniform structure in color space common to several very different databases. This common structure is due to the visual system rather than to the statistics of environmental structure. Analysis involves an investigation of the proper sample space of spectral reflectance factors and of the statistical consequences of the projection of spectral reflectances on the color solid. Even in the case of reflectance statistics that are translationally invariant with respect to the wavelength dimension, the statistics of object colors is highly non-uniform. The qualitative nature of this non-uniformity is due to trichromacy.

  19. Emergent kink statistics at finite temperature

    DOE PAGES

    Lopez-Ruiz, Miguel Angel; Yepez-Martinez, Tochtli; Szczepaniak, Adam; ...

    2017-07-25

    In this paper we use 1D quantum mechanical systems with a Higgs-like interaction potential to study the emergence of topological objects at finite temperature. Two different model systems are studied, the standard double-well potential model and a newly introduced discrete kink model. Using Monte-Carlo simulations as well as analytic methods, we demonstrate how kinks become abundant at low temperatures. These results may provide useful insights into how topological phenomena may occur in QCD.

  20. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) estimation problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, which is defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in real time without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4 video coding.
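    The pixel-level MAP decision can be illustrated with per-pixel Gaussian likelihoods for the background and player models; the means, standard deviations, and prior below are invented for the sketch, and the constant term of the log-likelihood is dropped because it cancels in the comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-pixel intensity models: background and player (foreground) as Gaussians
bg_mean, bg_std = 120.0, 10.0     # learned background statistics for one channel
fg_mean, fg_std = 60.0, 25.0      # player-object model
prior_fg = 0.2                    # prior probability that a pixel belongs to the player

def gaussian_loglik(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)

frame = rng.normal(bg_mean, bg_std, (120, 160))
frame[40:80, 60:100] = rng.normal(fg_mean, fg_std, (40, 40))   # synthetic player region

# pixel-level MAP decision: pick the label maximizing log-likelihood + log-prior
fg_score = gaussian_loglik(frame, fg_mean, fg_std) + np.log(prior_fg)
bg_score = gaussian_loglik(frame, bg_mean, bg_std) + np.log(1 - prior_fg)
mask = fg_score > bg_score        # True where the pixel is assigned to the player object

print("foreground fraction:", round(mask.mean(), 3))
```

Object-level processing (active regions, motion histograms, trajectory prediction) would then clean up this per-pixel mask.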

  1. Tooth-size discrepancy: A comparison between manual and digital methods

    PubMed Central

    Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge

    2014-01-01

    Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two of which by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using the Friedman test and no statistically significant differences were found between the two methods (P > 0.05), apart from the values found by the linear digital method, which revealed a slight but statistically non-significant difference. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529

  2. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
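    Composite scaled sensitivities and parameter correlation coefficients of the kind used here can be computed from a weighted sensitivity (Jacobian) matrix; the sketch below uses a random stand-in Jacobian and arbitrary weights rather than TOPKAPI output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 200, 5

# stand-in Jacobian: derivatives of simulated flows with respect to model parameters
jac = rng.normal(0, 1, (n_obs, n_par))
params = np.array([2.0, 0.5, 30.0, 0.01, 100.0])    # current parameter values
weights = np.full(n_obs, 1.0 / 0.25 ** 2)           # error-based weights (1 / observation variance)

# composite scaled sensitivity: how much information the whole data set carries per parameter
scaled = jac * params[None, :] * np.sqrt(weights)[:, None]
css = np.sqrt((scaled ** 2).mean(axis=0))

# parameter correlation coefficients from the weighted normal-equations matrix
cov = np.linalg.inv(scaled.T @ scaled)
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))

print("composite scaled sensitivities:", css.round(2))
print("largest off-diagonal |correlation|:", round(float(np.abs(corr - np.eye(n_par)).max()), 2))
```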

  3. APA's Learning Objectives for Research Methods and Statistics in Practice: A Multimethod Analysis

    ERIC Educational Resources Information Center

    Tomcho, Thomas J.; Rice, Diana; Foels, Rob; Folmsbee, Leah; Vladescu, Jason; Lissman, Rachel; Matulewicz, Ryan; Bopp, Kara

    2009-01-01

    Research methods and statistics courses constitute a core undergraduate psychology requirement. We analyzed course syllabi and faculty self-reported coverage of both research methods and statistics course learning objectives to assess the concordance with APA's learning objectives (American Psychological Association, 2007). We obtained a sample of…

  4. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
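    The core of the approach is keeping non-dominated (Pareto-optimal) candidates instead of collapsing the misfits into one weighted objective. A minimal dominance filter might look like the following; the random misfit values stand in for the misfits of candidate models against two hypothetical data sets.

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of non-dominated points, with every objective to be minimized."""
    f = np.asarray(objectives)
    nondominated = np.ones(f.shape[0], dtype=bool)
    for i in range(f.shape[0]):
        if nondominated[i]:
            # point j is dominated by i if it is >= in every objective and > in at least one
            dominated = np.all(f >= f[i], axis=1) & np.any(f > f[i], axis=1)
            nondominated[dominated] = False
    return nondominated

# hypothetical misfits of 200 candidate models against two data sets
rng = np.random.default_rng(0)
misfits = rng.random((200, 2))
front = misfits[pareto_front(misfits)]
print("non-dominated solutions:", len(front))
```

The shape and evolution of this front over the generations of the genetic algorithm is what the authors analyse to judge data-set compatibility.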

  5. Population dynamics of minimally cognitive individuals. Part I: Introducing knowledge into the dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmieder, R.W.

    The author presents a new approach for modeling the dynamics of collections of objects with internal structure. Based on the fact that the behavior of an individual in a population is modified by its knowledge of other individuals, a procedure for accounting for knowledge in a population of interacting objects is presented. It is assumed that each object has partial (or complete) knowledge of some (or all) other objects in the population. The dynamical equations for the objects are then modified to include the effects of this pairwise knowledge. This procedure has the effect of projecting out what the population will do from the much larger space of what it could do, i.e., filtering or smoothing the dynamics by replacing the complex detailed physical model with an effective model that produces the behavior of interest. The procedure therefore provides a minimalist approach for obtaining emergent collective behavior. The use of knowledge as a dynamical quantity, and its relationship to statistical mechanics, thermodynamics, information theory, and cognition microstructure are discussed.

  6. TH-CD-207B-12: Quantification of Clinical Feedback On Image Quality Differences Between Two CT Scanner Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bache, S; Liu, X; Loyer, E

    Purpose: This work sought to quantify a radiology team’s assessment of image quality differences between two CT scanner models currently in clinical use, with emphasis on noise and low-contrast detectability (LCD). Methods: A water phantom and a Kagaku anthropomorphic body phantom were scanned on GE Discovery CT750 HD and LightSpeed VCT scanners (4 each) with identical scan parameters and reconstructed to 2.5mm/5.0mm thicknesses. Images of the water phantom were analyzed at the scanner console with a built-in LCD tool that uses statistical methods to compute the requisite CT-number contrast for 95% confidence in detection of a user-defined object size. LCD value was computed for 5mm, 3mm, and 1mm objects. Analyses of standard deviation and LCD values were performed on Kagaku phantom images within liver, stomach, and spleen. LCD value was computed for 4mm, 3mm, and 1mm objects using a benchmarked MATLAB implementation of the GE scanner-console tool. Results: Water LCD values were larger (poorer performance) for all HD scanners compared to VCT scanners. Mean scanner model difference in requisite CT-number contrast for 5mm, 3mm, and 1mm objects for 5.0mm/2.5mm images was 3.0%/3.4% (p=0.02/p=0.10), 5.3%/5.7% (0.00002/0.02), and 8.5%/8.2% (0.0004/0.002), respectively. Mean standard deviations within Kagaku phantom ROIs were greater in HD compared to VCT images, with mean differences for the liver, stomach, and spleen for 5.0mm/2.5mm of 16%/12% (p=0.04/0.10), 8%/12% (0.15/0.11), and 16%/15% (0.05/0.11), respectively. Mean LCD value difference between HD and VCT scanners over all ROIs for 4mm, 3mm, and 1mm objects and 5.0mm/2.5mm was 34%/9%, 16%/8%, and 18%/10%, respectively. HD scanners outperformed VCT scanners only for the 4mm stomach object. Conclusion: Using both water and anthropomorphic phantoms, it was shown that HD scanners are outperformed by VCT scanners with respect to noise and LCD in a consistent and in most cases statistically significant manner. The relationship between statistical and clinical significance demands further work.

  7. Prediction of objectively measured physical activity and sedentariness among blue-collar workers using survey questionnaires.

    PubMed

    Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas

    2016-05-01

    We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work-related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (adjusted R²) of the variance of both objectively measured time spent sedentary and in physical activity, since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21% to 63%.
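
    A sketch of the modelling recipe described here, under stated assumptions: a least-squares prediction model plus a bootstrap optimism correction for internal validation. The six predictors are synthetic stand-ins for age, gender, BMI, job group, OPA and OST.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(2)
    n = 214                              # sample size reported in the abstract
    X = rng.normal(size=(n, 6))          # stand-ins for the questionnaire predictors
    y = X @ rng.normal(size=6) + rng.normal(scale=0.6, size=n)

    model = LinearRegression().fit(X, y)
    apparent_r2 = r2_score(y, model.predict(X))

    # Bootstrap optimism correction: refit on resamples, compare apparent vs test R².
    optimism = []
    for _ in range(200):
        idx = rng.integers(0, n, n)
        m = LinearRegression().fit(X[idx], y[idx])
        optimism.append(r2_score(y[idx], m.predict(X[idx])) - r2_score(y, m.predict(X)))
    validated_r2 = apparent_r2 - np.mean(optimism)
    ```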

  8. The impact on midlevel vision of statistically optimal divisive normalization in V1.

    PubMed

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-07-15

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
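
    For reference, canonical divisive normalization can be written as r_i = d_i^2 / (sigma^2 + sum_j w_j d_j^2), where d are linear filter outputs and w weights the surround pool. The sketch below implements only this canonical form; the statistically optimal variant in the paper additionally gates the pool by inferred image homogeneity, which is omitted here.

    ```python
    import numpy as np

    def divisive_normalization(drive, sigma=1.0, weights=None):
        """Each unit's squared drive divided by a weighted population pool."""
        drive = np.asarray(drive, dtype=float)
        if weights is None:
            weights = np.ones_like(drive) / drive.size
        pool = np.sum(weights * drive ** 2)
        return drive ** 2 / (sigma ** 2 + pool)

    # Hypothetical filter outputs for a small V1-like population:
    responses = divisive_normalization(np.array([0.2, 1.5, 0.7, 3.0]))
    ```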

  9. Infrared near-Earth-object survey modeling for observatories interior to the Earth's orbit

    NASA Astrophysics Data System (ADS)

    Buie, M.

    2014-07-01

    The search for and dynamical characterization of the near-Earth population of objects (NEOs) has been a busy topic for surveys for many years. Most of the work thus far has been from ground-based optical surveys such as the Catalina Sky Survey and LINEAR. These surveys have essentially reached a complete inventory of objects down to 1 km diameter and have shown that the known objects do not pose any significant impact threat. Smaller objects are correspondingly smaller threats, but there are more of them and fewer of them have so far been discovered. The next generation of surveys is looking to extend their reach down to much smaller sizes. From an impact risk perspective, objects as small as 30-40 m are still of interest (similar in size to the Tunguska bolide). Objects smaller than this are largely of interest for space-resource or in-situ analysis efforts. A recent mission concept promoted by the B612 Foundation and Ball Aerospace calls for an infrared survey telescope in a Venus-like orbit, known as the Sentinel Mission. This wide-field facility has been designed to complete the inventory down to a 140 m diameter while also providing substantial constraints on the NEO population down to a Tunguska-sized object. I have been working to develop a suite of tools to provide survey modeling for this class of survey telescope. The purpose of the tool is to uncover hidden complexities that govern mission design and operation while also working to quantitatively understand the orbit quality provided on its catalog of objects without additional follow-up assets. The baseline mission design calls for a 6.5 year survey lifetime. This survey model is a statistically based tool for establishing completeness as a function of object size and survey duration. Effects modeled include the ability to adjust the field-of-regard (including all pointing restrictions), field-of-view, focal plane array fill factor, and the observatory orbit. Consequences tracked include time-tagged detection times, from which orbit quality can be derived, and efficiency by dynamical class. The dominant noise term in the simulations comes from the noise in the background flux caused by thermal emission from zodiacal dust. The model used is sufficient for the study of reasonably low-inclination spacecraft orbits such as are being considered. Results to date are based on the 2002 Bottke NEA orbit-distribution model. The system can work with any orbit-distribution model and with any size-frequency distribution. This tool also serves to quantify the amount of data that will also be collected on main-belt objects by simply testing against the known catalog of bodies. The orbit quality work clearly shows the benefit of a self-follow-up survey such as Sentinel. Most objects discovered will be seen in multiple observing epochs, and the resulting orbits will preclude losing track of them for decades to come (or longer). All of the ephemeris calculations, including investigation of orbit determination quality, are done with the OpenOrb software package. The presentation for this meeting will be based on results of modeling the Sentinel Mission and other similar variants. The focus will be on evaluating the survey completion for different dynamical classes as well as for different-sized objects. Within the fidelity of such statistically based models, the planned Sentinel observatory is well capable of a huge step forward in the efforts to build a complete catalog of all objects that could pose future harm to planet Earth.

  10. Future perspectives - proposal for Oxford Physiome Project.

    PubMed

    Oku, Yoshitaka

    2010-01-01

    The Physiome Project is an effort to understand living creatures using an "analysis by synthesis" strategy, i.e., by reproducing their behaviors. In order to achieve its goal, sharing developed models between different computer languages and application programs, so that they can be incorporated into integrated models, is critical. To date, several XML-based markup languages have been developed for this purpose. However, source codes written with XML-based languages are very difficult to read and edit using text editors. An alternative way is to use an object-oriented meta-language, which can be translated to different computer languages and transplanted to different application programs. Object-oriented languages are suitable for describing structural organization through hierarchical classes and for taking advantage of statistical properties to reduce the number of parameters while keeping the complexity of behaviors. Using object-oriented languages to describe each element and posting the result to the public domain should be the next step in building up integrated models of the respiratory control system.

  11. Research Analysis on MOOC Course Dropout and Retention Rates

    ERIC Educational Resources Information Center

    Gomez-Zermeno, Marcela Gerogina; Aleman de La Garza, Lorena

    2016-01-01

    This research's objective was to identify the terminal efficiency of the Massive Online Open Course "Educational Innovation with Open Resources" offered by a Mexican private university. A quantitative methodology was used, combining descriptive statistics and probabilistic models to analyze the levels of retention, completion, and…

  12. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

    Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly fewer calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterization of the objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%. PMID:28316563

  13. Addressing economic development goals through innovative teaching of university statistics: a case study of statistical modelling in Nigeria

    NASA Astrophysics Data System (ADS)

    Oseloka Ezepue, Patrick; Ojo, Adegbola

    2012-12-01

    A challenging problem in some developing countries such as Nigeria is inadequate training of students in effective problem solving using the core concepts of their disciplines. Related to this is a disconnect between their learning and the socio-economic development agenda of their country. These problems are more vivid in statistics education, which is dominated by textbook examples and unbalanced assessment 'for' and 'of' learning within traditional curricula. The problems impede the achievement of socio-economic development objectives such as those stated in the Nigerian Vision 2020 blueprint and the United Nations Millennium Development Goals. They also impoverish the ability of (statistics) graduates to creatively use their knowledge in relevant business and industry sectors, thereby exacerbating mass graduate unemployment in Nigeria and similar developing countries. This article uses a case study in statistical modelling to discuss the nature of innovations in statistics education vital to producing new kinds of graduates who can link their learning to national economic development goals, create wealth and alleviate poverty through (self) employment. Wider implications of the innovations for repositioning mathematical sciences education globally are also explored in this article.

  14. A segmentation editing framework based on shape change statistics

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen

    2017-02-01

    Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by drawing only a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually drawn contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising in greatly decreasing the work expected from the user.

  15. Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images

    PubMed Central

    Srivastava, Anuj

    2010-01-01

    We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692

  16. Visual search in scenes involves selective and non-selective pathways

    PubMed Central

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  17. Fixations on objects in natural scenes: dissociating importance from salience

    PubMed Central

    't Hart, Bernard M.; Schmidt, Hannah C. E. F.; Roth, Christine; Einhäuser, Wolfgang

    2013-01-01

    The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's “importance” for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (“common”/“important”) or a rarely named (“rare”/“unimportant”) object, track the observers' eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading. PMID:23882251

  18. [Application of statistics on chronic-diseases-relating observational research papers].

    PubMed

    Hong, Zhi-heng; Wang, Ping; Cao, Wei-hua

    2012-09-01

    To study the application of statistics in observational research papers on chronic diseases recently published in Chinese Medical Association journals with an influence index above 0.5. Using a self-developed criterion, two investigators independently assessed the application of statistics in these journals; differences of opinion were resolved through discussion. A total of 352 papers from 6 journals, including the Chinese Journal of Epidemiology, Chinese Journal of Oncology, Chinese Journal of Preventive Medicine, Chinese Journal of Cardiology, Chinese Journal of Internal Medicine and Chinese Journal of Endocrinology and Metabolism, were reviewed. The rates of clearly stating the research objectives, target population, sampling issues, inclusion criteria and variable definitions were 99.43%, 98.57%, 95.43%, 92.86% and 96.87%, respectively. The rates of correctly describing quantitative and qualitative data were 90.94% and 91.46%, respectively. The rates of correctly expressing the results of statistical inference for quantitative data, qualitative data and modeling were 100%, 95.32% and 87.19%, respectively, and 89.49% of the conclusions directly responded to the research objectives. However, 69.60% of the papers did not state the exact name of the study design used, 11.14% lacked a statement of the exclusion criteria, only 5.16% clearly explained the sample size estimation, and only 24.21% clearly described the variable value assignment. The rate of describing the statistical procedures and database methods used was only 24.15%, and 18.75% of the papers did not express the statistical inference methods sufficiently. A quarter of the papers did not use 'standardization' appropriately. As for statistical inference, only 24.12% of the papers described the prerequisites of the statistical tests, while 9.94% did not employ the statistical inference methods that should have been used. The main deficiencies in the application of statistics in these papers were: missing sample-size determination, insufficient description of variable value assignment, unclear or improper description of statistical methods, and lack of consideration of the prerequisites for statistical inference.

  19. Object extraction in photogrammetric computer vision

    NASA Astrophysics Data System (ADS)

    Mayer, Helmut

    This paper discusses state and promising directions of automated object extraction in photogrammetric computer vision considering also practical aspects arising for digital photogrammetric workstations (DPW). A review of the state of the art shows that there are only few practically successful systems on the market. Therefore, important issues for a practical success of automated object extraction are identified. A sound and most important powerful theoretical background is the basis. Here, we particularly point to statistical modeling. Testing makes clear which of the approaches are suited best and how useful they are for praxis. A key for commercial success of a practical system is efficient user interaction. As the means for data acquisition are changing, new promising application areas such as extremely detailed three-dimensional (3D) urban models for virtual television or mission rehearsal evolve.

  20. Visual learning in drosophila: application on a roving robot and comparisons

    NASA Astrophysics Data System (ADS)

    Arena, P.; De Fiore, S.; Patané, L.; Termini, P. S.; Strauss, R.

    2011-05-01

    Visual learning is an important aspect of fly life. Flies are able to extract visual cues from objects, like colors, vertical and horizontal distributedness, and others, that can be used for learning to associate a meaning to specific features (i.e. a reward or a punishment). Interesting biological experiments show trained, stationarily flying flies avoiding flight towards specific visual objects appearing in the surrounding environment. Wild-type flies effectively learn to avoid those objects, but this is not the case for the learning mutant rutabaga, which is defective in the cyclic-AMP-dependent pathway for plasticity. A bio-inspired architecture has been proposed to model the fly behavior, and experiments on roving robots were performed. Statistical comparisons have been considered, and a mutant-like effect on the model has also been investigated.

  1. Statistical appearance models based on probabilistic correspondences.

    PubMed

    Krüger, Julia; Ehrhardt, Jan; Handels, Heinz

    2017-04-01

    Model-based image analysis is indispensable in medical image processing. One key aspect of building statistical shape and appearance models is the determination of one-to-one correspondences in the training data set. At the same time, the identification of these correspondences is the most challenging part of such methods. In our earlier work, we developed an alternative method using correspondence probabilities instead of exact one-to-one correspondences for a statistical shape model (Hufnagel et al., 2008). In this work, a new approach for statistical appearance models without one-to-one correspondences is proposed. A sparse image representation is used to build a model that combines point position and appearance information at the same time. Probabilistic correspondences between the derived multi-dimensional feature vectors are used to remove the need for extensive preprocessing to find landmarks and correspondences, as well as to reduce the dependence of the generated model on the landmark positions. Model generation and model fitting can now be expressed by optimizing a single global criterion derived from a maximum a-posteriori (MAP) approach with respect to model parameters that directly affect both shape and appearance of the considered objects inside the images. The proposed approach describes statistical appearance modeling in a concise and flexible mathematical framework. Besides eliminating the demand for costly correspondence determination, the method allows for additional constraints such as topological regularity in the modeling process. In the evaluation, the model was applied to segmentation and landmark identification in hand X-ray images. The results demonstrate the feasibility of the model to detect hand contours as well as the positions of the joints between finger bones for unseen test images. Further, we evaluated the model on brain data of stroke patients to show the ability of the proposed model to handle partially corrupted data and to demonstrate a possible employment of the correspondence probabilities to indicate these corrupted/pathological areas.

  2. Orientation estimation of anatomical structures in medical images for object recognition

    NASA Astrophysics Data System (ADS)

    Bağci, Ulaş; Udupa, Jayaram K.; Chen, Xinjian

    2011-03-01

    Recognition of anatomical structures is an important step in model-based medical image segmentation. It provides pose estimation of objects and information about "where" roughly the objects are in the image, distinguishing them from other object-like entities. In [1], we presented a general method of model-based multi-object recognition to assist in segmentation (delineation) tasks. It exploits the pose relationship that can be encoded, via the concept of ball scale (b-scale), between the binary training objects and their associated grey images. The goal was to place the model, in a single shot, close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. Unlike position and scale parameters, we observe that orientation parameters require more attention when estimating the pose of the model, as even small differences in orientation parameters can lead to inappropriate recognition. Motivated by the non-Euclidean nature of the pose information, we propose in this paper the use of non-Euclidean metrics to estimate the orientation of anatomical structures for more accurate recognition and segmentation. We statistically analyze and evaluate the following metrics for orientation estimation: Euclidean, Log-Euclidean, Root-Euclidean, Procrustes Size-and-Shape, and mean Hermitian metrics. The results show that the mean Hermitian and Cholesky decomposition metrics provide more accurate orientation estimates than the other Euclidean and non-Euclidean metrics.
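
    As a concrete example of one metric from the comparison, the sketch below computes the Log-Euclidean distance between symmetric positive-definite (orientation/scatter) matrices: the Frobenius norm of the difference of their matrix logarithms. The matrices are illustrative, not from the study.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_distance(A, B):
        """Frobenius norm of logm(A) - logm(B) for SPD matrices A, B."""
        D = logm(A) - logm(B)
        return np.linalg.norm(D.real, ord="fro")  # logm of an SPD matrix is real

    A = np.array([[2.0, 0.3], [0.3, 1.0]])    # hypothetical orientation tensors
    B = np.array([[1.5, -0.2], [-0.2, 1.2]])
    print(log_euclidean_distance(A, B))
    ```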

  3. Comparing geological and statistical approaches for element selection in sediment tracing research

    NASA Astrophysics Data System (ADS)

    Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon

    2015-04-01

    Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs increasingly important. Sediment fingerprinting techniques can be used to determine the relative contributions of different sources of sediment accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geological approach selected elements for modelling that provided expected, observed and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminatory Function Analysis (DFA). In particular, two different significance levels (0.05 & 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) was from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings, with only the Lexys Creek modelling results differing significantly (35%). The statistical model with expanded elemental selection (DFA0.35) differed from the geological model by an average of 30% for all 6 models. Elemental selection for sediment fingerprinting therefore has the potential to impact modelling results. Accordingly, it is important to incorporate both robust geological and statistical approaches when selecting elements for sediment fingerprinting. For the Baroon Pocket Dam, management should focus on reducing the supply of sediments derived from felsic sources in each of the subcatchments.
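
    A sketch of the Kruskal-Wallis screening step used in the statistical approaches: an element is retained for the mixing model only if its concentrations differ significantly across source groups. The subcatchment names follow the abstract, but the concentration values are invented.

    ```python
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(3)
    # Hypothetical concentrations of one element in source samples per subcatchment:
    sources = {name: rng.normal(loc=mu, scale=1.0, size=12)
               for name, mu in zip(["Obi Obi", "Lexys", "Falls", "Bridge"],
                                   [5.0, 6.0, 6.2, 5.8])}

    H, p = kruskal(*sources.values())
    if p < 0.05:   # significance level; the abstract also explores other cut-offs
        print(f"element discriminates between sources (H={H:.2f}, p={p:.3f})")
    ```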

  4. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
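
    A compact sketch of the procedure under simplifying assumptions: the three distance statistics on normalized histograms, and a bootstrap null built by resampling pooled per-footprint values (the paper resamples individual histograms; raw values are used here as a stand-in).

    ```python
    import numpy as np

    def euclidean(p, q):
        return np.linalg.norm(p - q)

    def jeffries_matusita(p, q):
        return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    def kuiper(p, q):
        d = np.cumsum(p) - np.cumsum(q)
        return d.max() - d.min()

    def summary_hist(x, bins):
        h = np.histogram(x, bins=bins)[0].astype(float)
        return h / h.sum()

    def bootstrap_pvalue(a, b, bins, dist, n_boot=2000, seed=0):
        """Significance of the histogram distance under the null of one population."""
        rng = np.random.default_rng(seed)
        observed = dist(summary_hist(a, bins), summary_hist(b, bins))
        pooled = np.concatenate([a, b])
        exceed = 0
        for _ in range(n_boot):
            ra = rng.choice(pooled, size=len(a), replace=True)
            rb = rng.choice(pooled, size=len(b), replace=True)
            exceed += dist(summary_hist(ra, bins), summary_hist(rb, bins)) >= observed
        return exceed / n_boot

    rng = np.random.default_rng(4)
    bins = np.linspace(0, 1, 21)
    p_val = bootstrap_pvalue(rng.random(500), rng.random(400) ** 1.2, bins, kuiper)
    ```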

  5. SOCR: Statistics Online Computational Resource

    PubMed Central

    Dinov, Ivo D.

    2011-01-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result, a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning. PMID:21451741

  6. An investigation on fatality of drivers in vehicle-fixed object accidents on expressways in China: Using multinomial logistic regression model.

    PubMed

    Peng, Yong; Peng, Shuangling; Wang, Xinghua; Tan, Shiyang

    2018-06-01

    This study aims to identify the effects of characteristics of vehicle, roadway, driver, and environment on fatality of drivers in vehicle-fixed object accidents on expressways in the Changsha-Zhuzhou-Xiangtan district of Hunan province in China by developing multinomial logistic regression models. For this purpose, 121 vehicle-fixed object accidents from 2011 to 2017 were included in the modeling process. First, descriptive statistical analysis was performed to understand the main characteristics of the vehicle-fixed object crashes. Then, 19 explanatory variables were selected, and pairwise correlation analysis was conducted to choose the variables to be included. Finally, five multinomial logistic regression models including different independent variables were compared, and the model with the best fit and prediction capability was chosen as the final model. The results showed that turning to avoid fixed objects raised the probability that drivers would die. About 64% of the drivers who died were found to have been ejected from the car, and half of these had not been wearing a seatbelt before the fatal accident. Drivers are likely to die when they encounter bad weather on the expressway. Drivers with less than 10 years of driving experience are more likely to die in these accidents. Fatigue or distracted driving is also a significant factor in fatality of drivers. Findings from this research provide insight into reducing driver fatalities in vehicle-fixed object accidents.
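
    A sketch of the model class used, a multinomial logit fitted with statsmodels. The predictor names and simulated data are hypothetical recodings for illustration, not the Hunan crash records.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 121                                   # number of crashes in the abstract
    X = pd.DataFrame({
        "bad_weather": rng.integers(0, 2, n),           # hypothetical binary predictors
        "seatbelt": rng.integers(0, 2, n),
        "experience_lt10yr": rng.integers(0, 2, n),
        "fatigue_or_distracted": rng.integers(0, 2, n),
    })
    y = rng.integers(0, 3, n)                 # 0 = no injury, 1 = injury, 2 = fatal

    model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
    print(model.summary())
    ```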

  7. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    NASA Astrophysics Data System (ADS)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on hypothesis testing, which required them to solve the problems using confidence-interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems, and it is listed as one of the concepts students find difficult to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence-interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, for reasons that include their lack of understanding of confidence intervals and probability values.

  8. Functional status predicts acute care readmission in the traumatic spinal cord injury population.

    PubMed

    Huang, Donna; Slocum, Chloe; Silver, Julie K; Morgan, James W; Goldstein, Richard; Zafonte, Ross; Schneider, Jeffrey C

    2018-03-29

    Context/objective: Acute care readmission has been identified as an important marker of healthcare quality. Most previous models assessing risk prediction of readmission incorporate variables for medical comorbidity. We hypothesized that functional status is a more robust predictor of readmission in the spinal cord injury population than medical comorbidities. Design: Retrospective cross-sectional analysis. Setting: Inpatient rehabilitation facilities, Uniform Data System for Medical Rehabilitation data from 2002 to 2012. Participants: Traumatic spinal cord injury patients. Outcome measures: A logistic regression model for predicting acute care readmission based on demographic variables and functional status (Functional Model) was compared with models incorporating demographics, functional status, and medical comorbidities (Functional-Plus) or models including demographics and medical comorbidities (Demographic-Comorbidity). The primary outcomes were 3- and 30-day readmission, and the primary measure of model performance was the c-statistic. Results: There were a total of 68,395 patients with 1,469 (2.15%) readmitted at 3 days and 7,081 (10.35%) readmitted at 30 days. The c-statistics for the Functional Model were 0.703 and 0.654 for 3 and 30 days. The Functional Model outperformed Demographic-Comorbidity models at 3 days (c-statistic difference: 0.066-0.096) and outperformed two of the three Demographic-Comorbidity models at 30 days (c-statistic difference: 0.029-0.056). The Functional-Plus models exhibited negligible improvements (0.002-0.010) in model performance compared to the Functional models. Conclusion: Readmissions are used as a marker of hospital performance. Function-based readmission models in the spinal cord injury population outperform models incorporating medical comorbidities. Readmission risk models for this population would benefit from the inclusion of functional status.
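
    A sketch of the comparison design: candidate logistic models scored by the c-statistic (area under the ROC curve). The data are simulated so that a single functional variable drives readmission, which mirrors the paper's finding by construction; all variable names are invented.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(6)
    n = 5000
    functional = rng.normal(size=(n, 1))   # stand-in for a functional-status score
    comorbid = rng.normal(size=(n, 3))     # stand-ins for comorbidity indicators
    logit = -2.0 + 1.2 * functional[:, 0] + 0.2 * comorbid[:, 0]
    readmit = rng.random(n) < 1 / (1 + np.exp(-logit))

    for name, X in [("Functional", functional),
                    ("Demographic-Comorbidity", comorbid),
                    ("Functional-Plus", np.hstack([functional, comorbid]))]:
        p = LogisticRegression().fit(X, readmit).predict_proba(X)[:, 1]
        print(f"{name}: c-statistic = {roc_auc_score(readmit, p):.3f}")
    ```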

  9. NIR calibration of soluble stem carbohydrates for predicting drought tolerance in spring wheat

    USDA-ARS?s Scientific Manuscript database

    Soluble stem carbohydrates are a component of drought response in wheat (Triticum aestivum L.) and other grasses. Near-infrared spectroscopy (NIR) can rapidly assay for soluble carbohydrates indirectly, but this requires a statistical model for calibration. The objectives of this study were: (i) to ...

  10. Do Algorithms Homogenize Students' Achievements in Secondary School Better than Teachers' Tracking Decisions?

    ERIC Educational Resources Information Center

    Klapproth, Florian

    2015-01-01

    Two objectives guided this research. First, this study examined how well teachers' tracking decisions contribute to the homogenization of their students' achievements. Second, the study explored whether teachers' tracking decisions would be outperformed in homogenizing the students' achievements by statistical models of tracking decisions. These…

  11. THE WATER BALANCE OF THE SUSQUEHANNA RIVER BASIN AND ITS RESPONSE TO CLIMATE CHANGE. (R824995)

    EPA Science Inventory

    Abstract

    Historical precipitation, temperature and streamflow data for the Susquehanna River Basin (SRB) are analyzed with the objective of developing simple statistical and water balance models of streamflow at the watershed's outlet. Annual streamflow is highly corre...

  12. [Application of a mathematical algorithm for the detection of electroneuromyographic results in the pathogenesis study of facial dyskinesia].

    PubMed

    Gribova, N P; Iudel'son, Ia B; Golubev, V L; Abramenkova, I V

    2003-01-01

    To carry out differential diagnosis between two facial dyskinesia (FD) models, facial hemispasm (FH) and facial paraspasm (FP), a combined program of electroneuromyographic (ENMG) examination was created, using statistical analyses that included object identification based on a hybrid neural network with an adaptive fuzzy logic method, as well as standard statistical tests (Wilcoxon, Student). In FH, a lesion of the peripheral facial neuromotor apparatus predominated, with augmentation of interneuron functions at segmental and upper-segmental stem levels. In FP, primary afferent strengthening in the mimic muscles was accompanied by increased motor neuron activity and reciprocal augmentation of the interneurons inhibiting the motor portion of the V pair. The mathematical algorithm for recognition of ENMG results worked out in the study provides a precise differentiation of the two FD models and opens possibilities for differential diagnosis of other facial motor disorders.

  13. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software is not only vital to the successful operation of the space station; it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number, as well as the types, of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives, and methods to estimate the expected number of fixes required, are also presented.
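
    The abstract does not name the model's functional form, so the sketch below uses a Goel-Okumoto reliability-growth curve as a plausible stand-in, with invented failure counts, to show how an "expected fixes remaining" termination criterion could be derived.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    weeks = np.arange(1, 13)                      # hypothetical test schedule
    cum_failures = np.array([5, 9, 14, 17, 20, 22, 25, 26, 27, 28, 28, 29])

    def mean_value(t, a, b):
        """Expected cumulative failures by time t: m(t) = a * (1 - exp(-b t))."""
        return a * (1 - np.exp(-b * t))

    (a, b), _ = curve_fit(mean_value, weeks, cum_failures, p0=(30.0, 0.1))
    remaining = a - mean_value(weeks[-1], a, b)   # expected fixes still required
    print(f"total failures a = {a:.1f}, expected remaining = {remaining:.1f}")
    ```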

  14. Segmentation-free statistical image reconstruction for polyenergetic x-ray computed tomography with experimental validation.

    PubMed

    Elbakri, Idris A; Fessler, Jeffrey A

    2003-08-07

    This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
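
    A sketch of the polyenergetic forward model described here: each voxel's attenuation is its unknown density times a weighted sum of energy-dependent mass attenuation coefficients, and expected counts integrate over the source spectrum. The spectrum, coefficients and ray geometry below are illustrative stand-ins; the penalized-likelihood iteration itself is not reproduced.

    ```python
    import numpy as np

    energies = np.array([40.0, 60.0, 80.0, 100.0])   # keV bins (illustrative)
    spectrum = np.array([0.2, 0.4, 0.3, 0.1])        # source fraction per bin
    mass_atten = {                                    # cm^2/g, made-up values
        "soft": np.array([0.27, 0.21, 0.18, 0.17]),
        "bone": np.array([0.67, 0.31, 0.22, 0.19]),
    }

    def expected_counts(densities, bone_frac, lengths, I0=1e5):
        """densities: g/cm^3 per voxel; bone_frac: bone weight per voxel (mixed
        pixels allowed); lengths: ray intersection length per voxel (cm)."""
        mu = np.array([d * (w * mass_atten["bone"] + (1 - w) * mass_atten["soft"])
                       for d, w in zip(densities, bone_frac)])   # (nvox, nE)
        line_integral = (lengths[:, None] * mu).sum(axis=0)      # per energy bin
        return I0 * np.sum(spectrum * np.exp(-line_integral))    # energy-summed

    y = expected_counts(np.array([1.0, 1.9]), np.array([0.0, 1.0]),
                        np.array([2.0, 1.0]))
    ```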

  15. Orbit-Attitude Changes of Objects in Near Earth Space Induced by Natural Charging

    DTIC Science & Technology

    2017-05-02

    depends upon Earth's magnetosphere. Typically, magnetosphere models can be grouped under two classes: statistical and physics-based. Models were primarily physics-based due to the unavailability of sufficient space data, but over the last three decades, with the availability of huge...

  16. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  17. Forecasting daily source air quality using multivariate statistical analysis and radial basis function networks.

    PubMed

    Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A

    2008-12-01

    It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network would result in the reduction of the model structure complexity, minimize computation cost, and eliminate model overfitting problems. The obtained results of RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. The good results indicated the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
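
    A sketch of the two-stage pipeline under stated assumptions: PCA to decorrelate the inputs (the abstract reports two components explaining more than 94% of variability), followed by a minimal RBF network built from Gaussian units at k-means centres with a linear readout. Inputs and targets are synthetic.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(7)
    X = rng.normal(size=(300, 4))   # stand-ins: temperatures, animal units, ventilation
    y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=300)

    Z = PCA(n_components=2).fit_transform(X)          # decorrelated inputs

    centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(Z).cluster_centers_
    dists = np.linalg.norm(Z[:, None] - centers[None], axis=2)
    width = np.median(dists)                           # shared Gaussian width
    Phi = np.exp(-dists ** 2 / (2 * width ** 2))       # RBF hidden-layer features
    readout = Ridge(alpha=1e-3).fit(Phi, y)            # linear output weights
    pred = readout.predict(Phi)
    ```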

  18. The effect of a major cigarette price change on smoking behavior in california: a zero-inflated negative binomial model.

    PubMed

    Sheu, Mei-Ling; Hu, Teh-Wei; Keeler, Theodore E; Ong, Michael; Sung, Hai-Yen

    2004-08-01

    The objective of this paper is to determine the price sensitivity of smokers in their consumption of cigarettes, using evidence from a major increase in California cigarette prices due to Proposition 10 and the Tobacco Settlement. The study sample consists of individual survey data from the Behavioral Risk Factor Survey (BRFS) and price data from the Bureau of Labor Statistics between 1996 and 1999. A zero-inflated negative binomial (ZINB) regression model was applied for the statistical analysis. The statistical model showed that price did not have an effect on reducing the estimated prevalence of smoking. However, it indicated that among smokers the price elasticity was -0.46 and statistically significant. Since smoking prevalence is significantly lower than it was a decade ago, price increases are becoming less effective as an inducement for hard-core smokers to quit, although they may respond by decreasing consumption. For those who only smoke occasionally (many of them young adults), price increases alone may not be an effective inducement to quit smoking. Additional underlying behavioral factors need to be identified so that more effective anti-smoking strategies can be developed.
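
    A sketch of fitting a zero-inflated negative binomial with statsmodels; the price values and consumption counts are simulated, and the single-predictor design is a simplification of the BRFS analysis.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

    rng = np.random.default_rng(8)
    n = 1000
    price = rng.normal(3.0, 0.5, n)     # hypothetical pack price, dollars
    X = sm.add_constant(price)

    # Simulate many structural zeros (non-smokers) plus a count process (smokers).
    smoker = rng.random(n) < 0.25
    cigs_per_day = np.where(smoker, rng.negative_binomial(5, 0.3, n), 0)

    zinb = ZeroInflatedNegativeBinomialP(cigs_per_day, X, exog_infl=X, p=2)
    result = zinb.fit(disp=False, maxiter=200)
    print(result.summary())
    ```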

  19. An object-based approach to weather analysis and its applications

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Diederich, Malte; Horvath, Akos; Simmer, Clemens; Kumjian, Matthew

    2013-04-01

    The research group 'Object-based Analysis and SEamless prediction' (OASE) within the Hans Ertel Centre for Weather Research programme (HErZ) pursues an object-based approach to weather analysis. The object-based tracking approach adopts the Lagrange perspective by identifying and following the development of convective events over the course of their lifetime. Prerequisites of the object-based analysis are a high-resolution observational database and a tracking algorithm. A near real-time radar and satellite remote sensing-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. A 3D scale-space tracking identifies convective rain events in the dual composite and monitors their development over the course of their lifetime. The OASE group exploits the object-based approach in several fields of application: (1) for a better understanding and analysis of the precipitation processes responsible for extreme weather events, (2) in nowcasting, (3) as a novel approach for validation of meso-γ atmospheric models, and (4) in data assimilation. Results from the different fields of application will be presented. The basic idea of the object-based approach is to identify a small set of radar- and satellite-derived descriptors which characterize the temporal development of the precipitation systems that constitute the objects. So-called proxies of the precipitation process are, for example, the temporal change of the bright band, vertically extensive columns of enhanced differential reflectivity ZDR, or the cloud-top temperature and heights identified in the 4D field of ground-based radar reflectivities and satellite retrievals generated by a cell during its lifetime. They quantify (micro-)physical differences among rain events and relate to the precipitation yield. Analyses of the informative content of ZDR columns as precursors of storm evolution, for example, will be presented to demonstrate the use of such system-oriented predictors for nowcasting. Columns of differential reflectivity ZDR measured by polarimetric weather radars are prominent signatures associated with thunderstorm updrafts. Since greater vertical velocities can loft larger drops and water-coated ice particles to higher altitudes above the environmental freezing level, the integrated ZDR column above the freezing level increases with increasing updraft intensity. Validation of atmospheric models concerning precipitation representation or prediction is usually confined to comparisons of precipitation fields or their temporal and spatial statistics. A comparison of the rain rates alone, however, does not immediately explain discrepancies between models and observations, because similar rain rates might be produced by different processes. Within the event-based approach for validation of models, both observed and modeled rain events are analyzed by means of proxies of the precipitation process. Both sets of descriptors represent the basis for model validation, since different leading descriptors (in a statistical sense) hint at process formulations potentially responsible for model failures.

  20. Irradiation-hyperthermia in canine hemangiopericytomas: large-animal model for therapeutic response.

    PubMed

    Richardson, R C; Anderson, V L; Voorhees, W D; Blevins, W E; Inskeep, T K; Janas, W; Shupe, R E; Babbs, C F

    1984-11-01

    Results of irradiation-hyperthermia treatment in 11 dogs with naturally occurring hemangiopericytoma were reported. Similarities of canine and human hemangiopericytomas were described. Orthovoltage X-irradiation followed by microwave-induced hyperthermia resulted in a 91% objective response rate. A statistical procedure was given to evaluate quantitatively the clinical behavior of locally invasive, nonmetastatic tumors in dogs that were undergoing therapy for control of local disease. The procedure used a small sample size and demonstrated distribution of the data on a scaled response as well as transformation of the data through classical parametric and nonparametric statistical methods. These statistical methods set confidence limits on the population mean and placed tolerance limits on a population percentage. Application of the statistical methods to human and animal clinical trials was apparent.
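
    A sketch of the two statistical outputs mentioned: a t-based confidence interval for the population mean of a scaled response, and the confidence that the sample range, used as a nonparametric tolerance interval, covers a given population percentage. The scores are invented; n = 11 matches the cohort size.

    ```python
    import numpy as np
    from scipy import stats

    scores = np.array([0.90, 0.80, 1.00, 0.70, 0.95, 0.85,
                       0.90, 1.00, 0.75, 0.88, 0.92])   # hypothetical responses
    n = len(scores)

    # 95% confidence interval for the population mean (classical t interval).
    half = stats.t.ppf(0.975, n - 1) * scores.std(ddof=1) / np.sqrt(n)
    ci = (scores.mean() - half, scores.mean() + half)

    # Coverage of the sample range (min, max) follows Beta(n-1, 2), so the
    # confidence that it contains at least 75% of the population is:
    conf = stats.beta.sf(0.75, n - 1, 2)
    print(ci, conf)
    ```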

  1. Re-entry survivability and risk

    NASA Astrophysics Data System (ADS)

    Fudge, Michael L.

    1998-11-01

    This paper is the culmination of the research effort which was reported on last year while still in progress. As previously reported, statistical methods for expressing the impact risk posed to space systems in general [and the International Space Station (ISS) in particular] by other resident space objects have been examined. One of the findings of this investigation is that there are legitimate physical modeling reasons for the common statistical expression of the collision risk. A combination of statistical methods and physical modeling is also used to express the impact risk posed by reentering space systems to objects of interest (e.g., people and property) on Earth. One of the largest uncertainties in expressing this risk is the estimation of the material which survives reentry to impact Earth's surface. This point was demonstrated in dramatic fashion in January 1997 by the impact of an intact expendable launch vehicle (ELV) upper stage near a private residence in the continental United States. Since approximately half of the missions supporting ISS will utilize ELVs, it is appropriate to examine the methods used to estimate the amount and physical characteristics of ELV debris surviving reentry to impact Earth's surface. This report details reentry survivability estimation methodology, including the specific methodology used by ITT Systems' (formerly Kaman Sciences) 'SURVIVE' model. The major change to the model in the last twelve months has been an increase in the fidelity with which upper-atmospheric aerodynamics is modeled. This has resulted in an adjustment of the factor relating the amount of kinetic energy loss to the amount of heating entering the reentering body, and has also validated and removed the necessity for certain empirically based adjustments made to the theoretical heating expressions. Comparisons between empirical results (observations of objects which have been recovered on Earth after surviving reentry) and SURVIVE estimates are presented for selected generic upper stage or spacecraft components, a Soyuz launch vehicle second stage, and for a Delta II launch vehicle second stage and its significant components. Significant similarity is demonstrated between the type and dispersion pattern of the recovered debris from the January 1997 Delta II second-stage event and the simulation of that reentry and breakup.

  2. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
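
    As a hedged illustration of the two evaluation tools discussed above, the sketch below fits a logistic risk model to simulated data, reports the c-statistic (area under the ROC curve), and tabulates predicted versus observed events over risk deciles; the covariates, coefficients, and event rate are invented for the example.

```python
# Hypothetical sketch: a logistic risk-adjustment model on simulated data,
# scored by the c-statistic (ROC area) and a decile calibration table.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))                       # stand-in case-mix covariates
logit = X @ np.array([1.2, -0.8, 0.5, 0.0]) - 2.5    # true risk on the logit scale
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # observed events

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print("c-statistic:", round(roc_auc_score(y, p), 3))

# Calibration: predicted vs observed event counts across risk deciles.
deciles = np.digitize(p, np.quantile(p, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    in_decile = deciles == d
    print(d, round(float(p[in_decile].sum()), 1), int(y[in_decile].sum()))
```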

  3. Dose response explorer: an integrated open-source tool for exploring and modelling radiotherapy dose volume outcome relationships

    NASA Astrophysics Data System (ADS)

    El Naqa, I.; Suneja, G.; Lindsay, P. E.; Hope, A. J.; Alaly, J. R.; Vicic, M.; Bradley, J. D.; Apte, A.; Deasy, J. O.

    2006-11-01

    Radiotherapy treatment outcome models are a complicated function of treatment, clinical and biological factors. Our objective is to provide clinicians and scientists with an accurate, flexible and user-friendly software tool to explore radiotherapy outcomes data and build statistical tumour control or normal tissue complications models. The software tool, called the dose response explorer system (DREES), is based on Matlab, and uses a named-field structure array data type. DREES/Matlab in combination with another open-source tool (CERR) provides an environment for analysing treatment outcomes. DREES provides many radiotherapy outcome modelling features, including (1) fitting of analytical normal tissue complication probability (NTCP) and tumour control probability (TCP) models, (2) combined modelling of multiple dose-volume variables (e.g., mean dose, max dose, etc) and clinical factors (age, gender, stage, etc) using multi-term regression modelling, (3) manual or automated selection of logistic or actuarial model variables using bootstrap statistical resampling, (4) estimation of uncertainty in model parameters, (5) performance assessment of univariate and multivariate analyses using Spearman's rank correlation and chi-square statistics, boxplots, nomograms, Kaplan-Meier survival plots, and receiver operating characteristics curves, and (6) graphical capabilities to visualize NTCP or TCP prediction versus selected variable models using various plots. DREES provides clinical researchers with a tool customized for radiotherapy outcome modelling. DREES is freely distributed. We expect to continue developing DREES based on user feedback.

  4. Model Checking Techniques for Assessing Functional Form Specifications in Censored Linear Regression Models.

    PubMed

    León, Larry F; Cai, Tianxi

    2012-04-01

    In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power of detecting misspecification while at the same time controlling the size of the test.

  5. The impact on midlevel vision of statistically optimal divisive normalization in V1

    PubMed Central

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-01-01

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950

  6. North American Extreme Temperature Events and Related Large Scale Meteorological Patterns: A Review of Statistical Methods, Dynamics, Modeling, and Trends

    NASA Technical Reports Server (NTRS)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael G.; Gershunov, Alexander; Gutowski, William J., Jr.; Gyakum, John R.; Katz, Richard W.

    2015-01-01

    The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.

  7. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  8. Uranium resource assessment through statistical analysis of exploration geochemical and other data. Final report. [Codes EVAL, SURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, G.S. Jr.; Howarth, R.J.; Schuenemeyer, J.H.

    1981-02-01

    We have developed a procedure that can help quadrangle evaluators to systematically summarize and use hydrogeochemical and stream sediment reconnaissance (HSSR) and occurrence data. Although we have not provided an independent estimate of uranium endowment, we have devised a methodology that will provide this independent estimate when additional calibration is done by enlarging the study area. Our statistical model for evaluation (system EVAL) ranks uranium endowment for each quadrangle. Because using this model requires experience in geology, statistics, and data analysis, we have also devised a simplified model, presented in the package SURE, a System for Uranium Resource Evaluation. We have developed and tested these models for the four quadrangles in southern Colorado that comprise the study area; to investigate their generality, the models should be applied to other quadrangles. Once they are calibrated with accepted uranium endowments for several well-known quadrangles, the models can be used to give independent estimates for less-known quadrangles. The point-oriented models structure the objective comparison of the quadrangles on the basis of: (1) Anomalies (a) derived from stream sediments, (b) derived from waters (stream, well, pond, etc.), (2) Geology (a) source rocks, as defined by the evaluator, (b) host rocks, as defined by the evaluator, and (3) Aerial radiometric anomalies.

  9. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    PubMed

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the comparable applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation on the first week of admission and again six months later. All data was primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
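
    A rough sketch of the kind of comparison described above, on synthetic collinear predictors; scikit-learn does not provide OPLS, so ordinary PLS regression stands in for it here, and all variable names, sizes, and data are invented.

```python
# Hedged sketch on synthetic data: ordinary least squares next to PLS regression
# (a stand-in, since scikit-learn provides PLS but not OPLS) for relating
# correlated TCD-like predictors to an outcome score.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
latent = rng.normal(size=(116, 1))                                       # one driving factor
X = latent @ rng.normal(size=(1, 8)) + 0.3 * rng.normal(size=(116, 8))   # collinear predictors
y = latent[:, 0] + 0.5 * rng.normal(size=116)                            # outcome score

ols = LinearRegression().fit(X, y)
pls = PLSRegression(n_components=2).fit(X, y)
print("OLS R^2:", round(ols.score(X, y), 3), " PLS R^2:", round(pls.score(X, y), 3))
```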

  10. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research in which inverse predictions, along with an assessment of predictive capability, are desired.
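
    A minimal sketch, on made-up data, of two standard ways to invert a fitted forward model Y = g(X) + error when g is linear: classical calibration (fit Y on X, then invert the fit) and inverse regression (fit X on Y directly). Comparing such methods against each other is the kind of cross-check the paper advocates.

```python
# Illustrative sketch: classical calibration vs inverse regression for
# predicting the source characteristic x from a new observable y.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                      # unknown source characteristic
y = 2.0 + 0.7 * x + rng.normal(0, 0.5, 200)      # measured observable

# Classical calibration: regress y on x, then solve the fitted line for x.
slope, intercept = np.polyfit(x, y, 1)
y_new = 6.0                                      # a new observable to invert
x_classical = (y_new - intercept) / slope

# Inverse regression: regress x on y and predict directly.
slope_inv, intercept_inv = np.polyfit(y, x, 1)
x_inverse = intercept_inv + slope_inv * y_new

# Agreement between the methods is one informal check of predictive capability.
print(round(x_classical, 2), round(x_inverse, 2))
```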

  11. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research in which inverse predictions, along with an assessment of predictive capability, are desired.

  12. Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.

    2017-12-01

    The General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliabilities. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, simultaneously achieving satisfactory values of all these statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and any single model output during the historical simulation period. For the future prediction, the proposed ensemble framework identified that the largest SAT change would occur in the South Central China under RCP 2.6 scenario, North Eastern China under RCP 4.5 scenario, and North Western China under RCP 8.5 scenario, while the smallest SAT change would occur in the Inner Mongolia under RCP 2.6 scenario, South Central China under RCP 4.5 scenario, and South Central China under RCP 8.5 scenario.

  13. Study of photon correlation techniques for processing of laser velocimeter signals

    NASA Technical Reports Server (NTRS)

    Mayo, W. T., Jr.

    1977-01-01

    The objective was to provide the theory and a system design for a new type of photon counting processor for low level dual scatter laser velocimeter (LV) signals which would be capable of both the first order measurements of mean flow and turbulence intensity and also the second order time statistics: cross-correlation, auto-correlation, and related spectra. A general Poisson process model for low level LV signals and noise which is valid from the photon-resolved regime all the way to the limiting case of nonstationary Gaussian noise was used. Computer simulation algorithms and higher order statistical moment analysis of Poisson processes were derived and applied to the analysis of photon correlation techniques. A system design using a unique dual correlate and subtract frequency discriminator technique is postulated and analyzed. Expectation analysis indicates that the objective measurements are feasible.

  14. Multiwavelength and Statistical Research in Space Astrophysics

    NASA Technical Reports Server (NTRS)

    Feigelson, Eric D.

    1997-01-01

    The accomplishments in the following three research areas are summarized: multiwavelength study of active galactic nuclei; magnetic activity of young stellar objects; and statistical methodology for astronomical data analysis. The research is largely based on observations of the ROSAT and ASCA X-ray observatories, complemented by ground-based optical and radio studies. Major findings include: discovery of inverse Compton X-ray emission from radio galaxy lobes; creation of the largest and least biased available sample of BL Lac objects; characterization of X-ray and nonthermal radio emission from T Tauri stars; obtaining an improved census of young stars in a star forming region and modeling the star formation history and kinematics; discovery of X-ray emission from protostars; development of linear regression methods and codes for interpreting astronomical data; and organization of the first cross-disciplinary conferences for astronomers and statisticians.

  15. A diagnostic model for chronic hypersensitivity pneumonitis

    PubMed Central

    Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R

    2017-01-01

    The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist’s diagnostic impression. Candidate models were developed then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. PMID:27245779

  16. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from the position at t to t + 1 by a simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.

  17. Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter.

    PubMed

    Lord, Dominique

    2006-07-01

    There has been considerable research conducted on the development of statistical models for predicting crashes on highway facilities. Despite numerous advancements made for improving the estimation tools of statistical models, the most common probabilistic structure used for modeling motor vehicle crashes remains the traditional Poisson and Poisson-gamma (or Negative Binomial) distribution; when crash data exhibit over-dispersion, the Poisson-gamma model is usually the model of choice most favored by transportation safety modelers. Crash data collected for safety studies often have the unusual attributes of being characterized by low sample mean values. Studies have shown that the goodness-of-fit of statistical models produced from such datasets can be significantly affected. This issue has been defined as the "low mean problem" (LMP). Despite recent developments on methods to circumvent the LMP and test the goodness-of-fit of models developed using such datasets, no work has so far examined how the LMP affects the fixed dispersion parameter of Poisson-gamma models used for modeling motor vehicle crashes. The dispersion parameter plays an important role in many types of safety studies and should, therefore, be reliably estimated. The primary objective of this research project was to verify whether the LMP affects the estimation of the dispersion parameter and, if so, to determine the magnitude of the problem. The secondary objective consisted of determining the effects of an unreliably estimated dispersion parameter on common analyses performed in highway safety studies. To accomplish the objectives of the study, a series of Poisson-gamma distributions were simulated using different values describing the mean, the dispersion parameter, and the sample size. Three estimators commonly used by transportation safety modelers for estimating the dispersion parameter of Poisson-gamma models were evaluated: the method of moments, the weighted regression, and the maximum likelihood method. In an attempt to complement the outcome of the simulation study, Poisson-gamma models were fitted to crash data collected in Toronto, Ont., characterized by a low sample mean and small sample size. The study shows that a low sample mean combined with a small sample size can seriously affect the estimation of the dispersion parameter, no matter which estimator is used within the estimation process. The probability that the dispersion parameter becomes unreliably estimated increases significantly as the sample mean and sample size decrease. Consequently, the results show that an unreliably estimated dispersion parameter can significantly undermine empirical Bayes (EB) estimates as well as the estimation of confidence intervals for the gamma mean and predicted response. The paper ends with recommendations about minimizing the likelihood of producing Poisson-gamma models with an unreliable dispersion parameter for modeling motor vehicle crashes.
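
    To make two of the estimators concrete, the sketch below simulates Poisson-gamma (NB2) counts with a low mean and small sample size and recovers the dispersion parameter alpha, defined by Var(y) = mu + alpha*mu^2, by the method of moments and by maximum likelihood; the weighted-regression estimator is omitted, and all parameter values and starting points are assumptions.

```python
# Sketch with simulated data: method-of-moments vs maximum-likelihood estimates
# of the NB2 dispersion parameter alpha under a low mean and small sample size.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
mu_true, alpha_true, n_sites = 0.8, 1.2, 50
r_true = 1.0 / alpha_true
y = rng.negative_binomial(r_true, r_true / (r_true + mu_true), size=n_sites)

# Method of moments: alpha = (s^2 - mean) / mean^2, floored at a small value.
m, s2 = y.mean(), y.var(ddof=1)
alpha_mom = max((s2 - m) / m**2, 1e-6)

# Maximum likelihood over (mu, alpha), optimized on the log scale for positivity.
def negloglik(theta):
    mu, alpha = np.exp(theta)
    r = 1.0 / alpha
    return -stats.nbinom.logpmf(y, r, r / (r + mu)).sum()

res = optimize.minimize(negloglik, x0=np.log([m + 1e-3, alpha_mom + 1e-3]),
                        method="Nelder-Mead")
mu_mle, alpha_mle = np.exp(res.x)
print("alpha (MoM):", round(alpha_mom, 2), " alpha (MLE):", round(alpha_mle, 2))
```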

  18. Incorporating big data into treatment plan evaluation: Development of statistical DVH metrics and visualization dashboards.

    PubMed

    Mayo, Charles S; Yao, John; Eisbruch, Avraham; Balter, James M; Litzenberg, Dale W; Matuszak, Martha M; Kessler, Marc L; Weyburn, Grant; Anderson, Carlos J; Owen, Dawn; Jackson, William C; Haken, Randall Ten

    2017-01-01

    To develop statistical dose-volume histogram (DVH)-based metrics and a visualization method to quantify the comparison of treatment plans with historical experience and among different institutions. The descriptive statistical summary (ie, median, first and third quartiles, and 95% confidence intervals) of volume-normalized DVH curve sets of past experiences was visualized through the creation of statistical DVH plots. Detailed distribution parameters were calculated and stored in JavaScript Object Notation files to facilitate management, including transfer and potential multi-institutional comparisons. In the treatment plan evaluation, structure DVH curves were scored against computed statistical DVHs and weighted experience scores (WESs). Individual, clinically used, DVH-based metrics were integrated into a generalized evaluation metric (GEM) as a priority-weighted sum of normalized incomplete gamma functions. Historical treatment plans for 351 patients with head and neck cancer, 104 with prostate cancer who were treated with conventional fractionation, and 94 with liver cancer who were treated with stereotactic body radiation therapy were analyzed to demonstrate the usage of statistical DVH, WES, and GEM in a plan evaluation. A shareable dashboard plugin was created to display statistical DVHs and integrate GEM and WES scores into a clinical plan evaluation within the treatment planning system. Benchmarking with normal tissue complication probability scores was carried out to compare the behavior of GEM and WES scores. DVH curves from historical treatment plans were characterized and presented, with difficult-to-spare structures (ie, frequently compromised organs at risk) identified. Quantitative evaluations by GEM and/or WES compared favorably with the normal tissue complication probability Lyman-Kutcher-Burman model, transforming a set of discrete threshold-priority limits into a continuous model reflecting physician objectives and historical experience. Statistical DVH offers an easy-to-read, detailed, and comprehensive way to visualize the quantitative comparison with historical experiences and among institutions. WES and GEM metrics offer a flexible means of incorporating discrete threshold-prioritizations and historic context into a set of standardized scoring metrics. Together, they provide a practical approach for incorporating big data into clinical practice for treatment plan evaluations.
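
    A simplified sketch of the "statistical DVH" idea only: given a stack of historical volume-normalized DVH curves (randomly generated stand-ins here), the median and quartile curves are computed on a common dose grid. The WES and GEM scoring described in the abstract is not reproduced, and all numbers are assumptions.

```python
# Illustrative sketch: summarize a set of historical DVH curves by their
# quartiles on a common dose grid (a bare-bones "statistical DVH").
import numpy as np

rng = np.random.default_rng(6)
dose_grid = np.linspace(0, 70, 141)                       # Gy, 0.5 Gy spacing
d50 = rng.normal(30, 5, size=50)                          # per-plan half-volume dose
# Each row: fraction of structure volume receiving at least the grid dose.
dvhs = 1.0 / (1.0 + np.exp((dose_grid - d50[:, None]) / 4.0))

stat_dvh = {q: np.percentile(dvhs, q, axis=0) for q in (25, 50, 75)}
i35 = 70                                                  # index of 35 Gy on the grid
print({q: round(float(curve[i35]), 3) for q, curve in stat_dvh.items()})
```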

  19. Attendance and Substance Use Outcomes for the Seeking Safety Program: Sometimes Less Is More

    ERIC Educational Resources Information Center

    Hien, Denise A.; Morgan-Lopez, Antonio A.; Campbell, Aimee N. C.; Saavedra, Lissette M.; Wu, Elwin; Cohen, Lisa; Ruglass, Lesia; Nunes, Edward V.

    2012-01-01

    Objective: This study uses data from the largest effectiveness trial to date on treatment of co-occurring posttraumatic stress and substance use disorders, using advances in statistical methodology for modeling treatment attendance and membership turnover in rolling groups. Method: Women receiving outpatient substance abuse treatment (N = 353)…

  20. Borderline Personality Traits and Disorder: Predicting Prospective Patient Functioning

    ERIC Educational Resources Information Center

    Hopwood, Christopher J.; Zanarini, Mary C.

    2010-01-01

    Objective: Decisions about the composition of personality assessment in the "Diagnostic and Statistical Manual of Mental Disorders" (5th ed.; DSM-V) will be heavily influenced by the clinical utility of candidate constructs. In this study, we addressed 1 aspect of clinical utility by testing the incremental validity of 5-factor model (FFM)…

  1. Analysis of Open-Ended Statistics Questions with Many Facet Rasch Model

    ERIC Educational Resources Information Center

    Güler, Nese

    2014-01-01

    Problem Statement: The most significant disadvantage of open-ended items that allow the valid measurement of upper level cognitive behaviours, such as synthesis and evaluation, is scoring. The difficulty associated with objectively scoring the answers to the items contributes to the reduction of the reliability of the scores. Moreover, other…

  2. Trabecular bone analysis in CT and X-ray images of the proximal femur for the assessment of local bone quality.

    PubMed

    Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer

    2009-10-01

    Currently, conventional X-ray and CT images as well as invasive methods performed during the surgical intervention are used to judge the local quality of a fractured proximal femur. However, these approaches are either dependent on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, in this work a method for the individual analysis of local bone quality in the proximal femur, based on model-based analysis of CT and X-ray images of femur specimens, is proposed. A combined representation of shape and spatial intensity distribution of an object and different statistical approaches for dimensionality reduction are used to create a statistical appearance model in order to assess the local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It is shown that the tools and algorithms presented herein are highly adequate to automatically and objectively predict bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
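
    A hedged sketch of the dimensionality-reduction step behind a statistical appearance model: concatenated shape and intensity features for each specimen (random stand-ins here) are reduced with PCA. The feature definitions and sizes are assumptions, not the paper's pipeline.

```python
# Sketch: combined shape/intensity feature vectors reduced with PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_specimens = 28
shape_feats = rng.normal(size=(n_specimens, 60))        # e.g. landmark coordinates
intensity_feats = rng.normal(size=(n_specimens, 200))   # e.g. sampled intensity profiles

appearance = np.hstack([shape_feats, intensity_feats])  # combined representation
pca = PCA(n_components=5).fit(appearance)
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
```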

  3. Statistical analysis plan for the family-led rehabilitation after stroke in India (ATTEND) trial: A multicenter randomized controlled trial of a new model of stroke rehabilitation compared to usual care.

    PubMed

    Billot, Laurent; Lindley, Richard I; Harvey, Lisa A; Maulik, Pallab K; Hackett, Maree L; Murthy, Gudlavalleti Vs; Anderson, Craig S; Shamanna, Bindiganavale R; Jan, Stephen; Walker, Marion; Forster, Anne; Langhorne, Peter; Verma, Shweta J; Felix, Cynthia; Alim, Mohammed; Gandhi, Dorcas Bc; Pandian, Jeyaraj Durai

    2017-02-01

    Background: In low- and middle-income countries, few patients receive organized rehabilitation after stroke, yet the burden of chronic diseases such as stroke is increasing in these countries. Affordable models of effective rehabilitation could have a major impact. The ATTEND trial is evaluating a family-led caregiver-delivered rehabilitation program after stroke. Objective: To publish the detailed statistical analysis plan for the ATTEND trial prior to trial unblinding. Methods: Based upon the published registration and protocol, the blinded steering committee and management team, led by the trial statistician, have developed a statistical analysis plan. The plan has been informed by the chosen outcome measures, the data collection forms and knowledge of key baseline data. Results: The resulting statistical analysis plan is consistent with best practice and will allow open and transparent reporting. Conclusions: Publication of the trial statistical analysis plan reduces potential bias in trial reporting, and clearly outlines pre-specified analyses. Clinical Trial Registrations: India CTRI/2013/04/003557; Australian New Zealand Clinical Trials Registry ACTRN1261000078752; Universal Trial Number U1111-1138-6707.

  4. A statistical approach to quasi-extinction forecasting.

    PubMed

    Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric

    2007-12-01

    Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for all the various parameters. Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
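
    A minimal sketch of the stochastic-exponential-growth (diffusion) approximation on a made-up count series: the drift and variance are estimated from log growth rates and plugged into the standard first-passage formula for the probability of declining to a quasi-extinction threshold within T years. The counts, threshold, and horizon below are assumptions.

```python
# Sketch: diffusion approximation to quasi-extinction risk from a count series.
import numpy as np
from scipy.stats import norm

counts = np.array([220, 205, 240, 180, 160, 170, 150, 130, 140, 120], float)
log_growth = np.diff(np.log(counts))     # yearly log growth rates

mu = log_growth.mean()                   # estimated drift
sigma2 = log_growth.var(ddof=1)          # estimated process variance

N0, N_threshold, T = counts[-1], 30.0, 20.0
xd = np.log(N0 / N_threshold)            # distance to the threshold on the log scale

# First-passage probability of Brownian motion with drift mu reaching -xd by time T.
s = np.sqrt(sigma2 * T)
p_quasi = (norm.cdf((-xd - mu * T) / s)
           + np.exp(-2 * mu * xd / sigma2) * norm.cdf((-xd + mu * T) / s))
print("P(decline to threshold within 20 years) ~", round(float(p_quasi), 3))
```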

  5. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. To produce an analysis requires the knowledge of observation and model errors, as well as its spatial correlation. This paper is devoted to the development of methods of estimation of these error variances and the characteristic length-scale of the model error correlation for its operational use in the Canadian objective analysis system. We first argue in favor of using compact support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and perform an estimation of both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can capture accurately the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single and globally valid correlation length. We examine whether any statistics of the local HL correlation lengths could be a useful estimate, or whether other global estimation methods such as by the global HL, ML, or [Formula: see text] should be used. We found in both 1D simulation and using real data that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values. This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component to the analysis scheme.
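
    A heavily simplified, hedged sketch of the Hollingsworth-Lönnberg idea only: binned innovation covariances (synthetic here) are fitted with an exponential correlation model, whose extrapolation toward zero separation gives the background error variance and whose scale parameter is the correlation length. The operational method and the compact-support functions discussed in the paper are not reproduced, and all numbers are assumptions.

```python
# Simplified HL-style fit of a correlation length to binned innovation covariances.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)
# Synthetic binned statistics (separation in km, covariance in ppb^2); made up.
dist = np.array([10, 30, 60, 100, 150, 220, 300, 400], float)
cov = 4.0 * np.exp(-dist / 120.0) + rng.normal(0, 0.1, dist.size)

def background_cov(d, sigma_b2, L):
    return sigma_b2 * np.exp(-d / L)

(sigma_b2, L), _ = curve_fit(background_cov, dist, cov, p0=(1.0, 100.0))
print(f"background error variance ~ {sigma_b2:.2f}, correlation length ~ {L:.0f} km")
```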

  6. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which was not observed in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459
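
    As a rough, hedged illustration of the alternating idea for the two-mode (matrix normal) special case only: with one mode's precision held fixed, the other mode's rescaled sample covariance is formed and a graphical lasso is applied to it, then the roles are swapped. The penalty, data, and dimensions below are assumptions, and this is not the paper's algorithm.

```python
# Simplified alternating sparse precision estimation for matrix-valued data.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(11)
n, p, q = 100, 6, 8
X = rng.normal(size=(n, p, q))             # stand-in matrix-valued observations

omega1, omega2 = np.eye(p), np.eye(q)      # initial precision matrices
for _ in range(5):
    S1 = sum(Xi @ omega2 @ Xi.T for Xi in X) / (n * q)
    _, omega1 = graphical_lasso(S1, alpha=0.05)
    S2 = sum(Xi.T @ omega1 @ Xi for Xi in X) / (n * p)
    _, omega2 = graphical_lasso(S2, alpha=0.05)

print(np.round(omega1, 2))
```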

  7. Intelligent Systems Approaches to Product Sound Quality Analysis

    NASA Astrophysics Data System (ADS)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach. Next, an unsupervised jury clustering algorithm is used to identify and classify subgroups within a jury who have conflicting preferences. In addition, a nested Artificial Neural Network (ANN) architecture is developed to predict subjective preference based on objective sound quality metrics, in the presence of non-linear preferences. Finally, statistical decomposition and correlation algorithms are reviewed that can help an analyst establish a clear understanding of the variability of the product sounds used as inputs into the jury study and to identify correlations between preference scores and sound quality metrics in the presence of non-linearities.

  8. Fractional properties of geophysical field variability on the example of hydrochemical parameters

    NASA Astrophysics Data System (ADS)

    Shevtsov, Boris; Shevtsova, Olga

    2017-10-01

    Using the properties of the compound Poisson process and its fractional generalizations, statistical models of geophysical field variability are considered, taking the system of hydrochemical parameters as an example. These models are universal for describing objects of different nature and allow us to explain various pulsing regimes. Manifestations of non-conservatism in the system of hydrochemical parameters and the advantages of the system approach in describing geophysical field variability are discussed.

  9. Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning.

    PubMed

    Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn

    2017-06-01

    To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
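
    A small sketch of the quantities involved, on an invented voxel-dose array: the dose-at-volume D2% and the corresponding upper mean-tail-dose, i.e. the average dose in the hottest 2% of the structure, which is the convex surrogate the proposed planning objectives optimize. The dose distribution and the 2% level are assumptions.

```python
# Dose-at-volume and the corresponding upper mean-tail-dose from a voxel dose array.
import numpy as np

rng = np.random.default_rng(3)
dose = rng.gamma(shape=9.0, scale=2.0, size=10_000)   # voxel doses in Gy (illustrative)

v = 0.02                                              # evaluate at 2% volume
d_at_v = np.quantile(dose, 1.0 - v)                   # D2%: dose to the hottest 2%
mean_tail = dose[dose >= d_at_v].mean()               # upper mean-tail-dose at 2%

print(f"D2% = {d_at_v:.1f} Gy, mean-tail-dose = {mean_tail:.1f} Gy (always >= D2%)")
```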

  10. Specialized data analysis of SSME and advanced propulsion system vibration measurements

    NASA Technical Reports Server (NTRS)

    Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi

    1993-01-01

    The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. This study was to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.

  11. An Integrative Account of Constraints on Cross-Situational Learning

    PubMed Central

    Yurovsky, Daniel; Frank, Michael C.

    2015-01-01

    Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052
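
    A toy sketch of the "graded statistical accumulation" end of the debate: word-object co-occurrence counts are accumulated across ambiguous trials and the highest-count referent is read off. The vocabulary is invented, and neither the memory nor the attention components of the paper's model are represented.

```python
# Accumulate word-object co-occurrence counts across ambiguous trials.
from collections import defaultdict

trials = [({"ball", "dog"}, {"BALL", "DOG"}),
          ({"ball", "cup"}, {"BALL", "CUP"}),
          ({"dog", "cup"}, {"DOG", "CUP"})]

counts = defaultdict(lambda: defaultdict(int))
for words, objects in trials:
    for w in words:
        for o in objects:
            counts[w][o] += 1          # every word co-occurs with every present object

for w, row in counts.items():
    best = max(row, key=row.get)       # referent with the highest co-occurrence count
    print(w, dict(row), "->", best)
```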

  12. Coherent spectroscopic methods for monitoring pathogens, genetically modified products and nanostructured materials in colloidal solution

    NASA Astrophysics Data System (ADS)

    Moguilnaya, T.; Suminov, Y.; Botikov, A.; Ignatov, S.; Kononenko, A.; Agibalov, A.

    2017-01-01

    We developed the new automatic method that combines the method of forced luminescence and stimulated Brillouin scattering. This method is used for monitoring pathogens, genetically modified products and nanostructured materials in colloidal solution. We carried out the statistical spectral analysis of pathogens, genetically modified soy and nano-particles of silver in water from different regions in order to determine the statistical errors of the method. We studied spectral characteristics of these objects in water to perform the initial identification with 95% probability. These results were used for creation of the model of the device for monitor of pathogenic organisms and working model of the device to determine the genetically modified soy in meat.

  13. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    PubMed

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although there have been questions concerning its observer subjectivity and limited statistical support. In this paper, based on signal processing and extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of CMS number provides a basis for ballistics correlations with large databases and further statistical and probability analysis. Experimental results in this report using bullets fired from ten consecutively manufactured barrels support this developed model. Published by Elsevier Ireland Ltd.
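
    A hypothetical sketch of the counting step only: given two aligned binary striae profiles (invented here), runs of consecutive matching striae are counted, which is the quantity the CMS criteria score. The signal processing used to extract such profiles from bullet lands is not shown.

```python
# Count runs of consecutive matching striae in two aligned binary profiles.
ref = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1]   # 1 = stria present at this position
test = [1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0]

runs, current = [], 0
for r, t in zip(ref, test):
    if r == 1 and t == 1:
        current += 1
    else:
        if current:
            runs.append(current)
        current = 0
if current:
    runs.append(current)

print("matching runs:", runs, "-> longest run of consecutive matching striae:", max(runs))
```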

  14. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
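
    A short simulation in the spirit of the argument above (all parameters are assumptions): sites with heterogeneous, low exposure generate independent Poisson counts, and the observed fraction of zeros exceeds what a single Poisson with the same overall mean would predict, with no dual safe/unsafe state involved.

```python
# Excess zeros from heterogeneous low-mean Poisson counts, not a dual-state process.
import numpy as np

rng = np.random.default_rng(4)
site_means = rng.gamma(shape=0.3, scale=1.0, size=20_000)   # heterogeneous exposure/risk
crashes = rng.poisson(site_means)                           # independent Poisson counts

observed_zeros = (crashes == 0).mean()
single_poisson_zeros = np.exp(-crashes.mean())              # P(0) under one Poisson mean
print(f"observed P(0) = {observed_zeros:.3f}, single-Poisson P(0) = {single_poisson_zeros:.3f}")
```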

  15. Stochastic Individual-Based Modeling of Bacterial Growth and Division Using Flow Cytometry.

    PubMed

    García, Míriam R; Vázquez, José A; Teixeira, Isabel G; Alonso, Antonio A

    2017-01-01

    A realistic description of the variability in bacterial growth and division is critical to produce reliable predictions of safety risks along the food chain. Individual-based modeling of bacteria provides the theoretical framework to deal with this variability, but it requires information about the individual behavior of bacteria inside populations. In this work, we overcome this problem by estimating the individual behavior of bacteria from population statistics obtained with flow cytometry. For this objective, a stochastic individual-based modeling framework is defined based on standard assumptions during division and exponential growth. The unknown single-cell parameters required for running the individual-based modeling simulations, such as the cell size growth rate, are estimated from the flow cytometry data. Instead of using the individual-based model directly, we make use of a modified Fokker-Planck equation; this single equation simulates the population statistics as a function of the unknown single-cell parameters. We test the validity of the approach by modeling the growth and division of Pediococcus acidilactici within the exponential phase. Estimations reveal the statistics of cell growth and division using only data from flow cytometry at a given time. From the relationship between the mother and daughter volumes, we also predict that P. acidilactici divides in two successive parallel planes.
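
    A minimal sketch of the individual-based side of such a framework (not the authors' model or parameter values): each cell grows exponentially in size and divides symmetrically once it passes a noisy size threshold, after which population-level statistics of the kind a flow cytometry snapshot summarizes are read off.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_population(n0=500, growth_rate=0.02, division_size=2.0,
                              division_cv=0.1, dt=1.0, n_steps=200):
          """Each cell grows exponentially in size and splits into two equal daughters
          once it exceeds a noisy size threshold (assumptions of this sketch only)."""
          sizes = rng.uniform(1.0, 2.0, n0)               # initial sizes, arbitrary units
          for _ in range(n_steps):
              sizes = sizes * np.exp(growth_rate * dt)    # exponential single-cell growth
              thresholds = division_size * (1.0 + division_cv * rng.standard_normal(sizes.size))
              dividing = sizes > thresholds
              daughters = sizes[dividing] / 2.0           # symmetric division assumption
              sizes = np.concatenate([sizes[~dividing], daughters, daughters])
          return sizes

      sizes = simulate_population()
      # population statistics comparable to what flow cytometry would summarize
      print(f"cells: {sizes.size}, mean size: {sizes.mean():.2f}, "
            f"CV of size: {sizes.std() / sizes.mean():.2f}")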

  16. Functional Status Outperforms Comorbidities as a Predictor of 30-Day Acute Care Readmissions in the Inpatient Rehabilitation Population.

    PubMed

    Shih, Shirley L; Zafonte, Ross; Bates, David W; Gerrard, Paul; Goldstein, Richard; Mix, Jacqueline; Niewczyk, Paulette; Greysen, S Ryan; Kazis, Lewis; Ryan, Colleen M; Schneider, Jeffrey C

    2016-10-01

    Functional status is associated with patient outcomes, but is rarely included in hospital readmission risk models. The objective of this study was to determine whether functional status is a better predictor of 30-day acute care readmission than traditionally investigated variables including demographics and comorbidities. Retrospective database analysis between 2002 and 2011. 1158 US inpatient rehabilitation facilities. 4,199,002 inpatient rehabilitation facility admissions comprising patients from 16 impairment groups within the Uniform Data System for Medical Rehabilitation database. Logistic regression models predicting 30-day readmission were developed based on age, gender, comorbidities (Elixhauser comorbidity index, Deyo-Charlson comorbidity index, and Medicare comorbidity tier system), and functional status [Functional Independence Measure (FIM)]. We hypothesized that (1) function-based models would outperform demographic- and comorbidity-based models and (2) the addition of demographic and comorbidity data would not significantly enhance function-based models. For each impairment group, Function Only Models were compared against Demographic-Comorbidity Models and Function Plus Models (Function-Demographic-Comorbidity Models). The primary outcome was 30-day readmission, and the primary measure of model performance was the c-statistic. All-cause 30-day readmission rate from inpatient rehabilitation facilities to acute care hospitals was 9.87%. C-statistics for the Function Only Models were 0.64 to 0.70. For all 16 impairment groups, the Function Only Model demonstrated better c-statistics than the Demographic-Comorbidity Models (c-statistic difference: 0.03-0.12). The best-performing Function Plus Models exhibited negligible improvements in model performance compared to Function Only Models, with c-statistic improvements of only 0.01 to 0.05. Readmissions are currently used as a marker of hospital performance, with recent financial penalties to hospitals for excessive readmissions. Function-based readmission models outperform models based only on demographics and comorbidities. Readmission risk models would benefit from the inclusion of functional status as a primary predictor. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
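
    The model comparison can be mimicked on synthetic data; the sketch below is illustrative only, with an invented data-generating process in which functional status dominates. It fits a function-only and a demographic-comorbidity logistic model and compares their c-statistics via the ROC AUC.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n = 20000

      fim = rng.normal(80, 15, n)            # synthetic stand-in for the FIM score
      age = rng.normal(70, 10, n)
      comorbidities = rng.poisson(2, n)

      # Invented data-generating process in which functional status carries most signal
      logit = -1.5 - 0.04 * (fim - 80) + 0.01 * (age - 70) + 0.05 * comorbidities
      readmit = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

      X_func = fim.reshape(-1, 1)
      X_demo = np.column_stack([age, comorbidities])
      Xf_tr, Xf_te, Xd_tr, Xd_te, y_tr, y_te = train_test_split(
          X_func, X_demo, readmit, test_size=0.3, random_state=0)

      for name, X_tr, X_te in [("Function Only", Xf_tr, Xf_te),
                               ("Demographic-Comorbidity", Xd_tr, Xd_te)]:
          model = LogisticRegression().fit(X_tr, y_tr)
          c_stat = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          print(f"{name} model c-statistic: {c_stat:.3f}")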

  17. An integrated system for rainfall induced shallow landslides modeling

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Capparelli, Giovanna; Rigon, Riccardo; Versace, Pasquale

    2014-05-01

    Rainfall induced shallow landslides (RISL) cause significant damage involving loss of life and property. Predicting susceptible locations for RISL is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. In this work an open source (OS), 3-D, fully distributed hydrological model was integrated in an OS modeling framework (Object Modeling System). The chain is closed by linking the system to a component for safety factor computation with the infinite slope approximation, able to take into account layered soils and the contribution of suction to hillslope stability. The model composition was tested for a case study in Calabria (Italy) in order to simulate the triggering of a landslide that occurred in the Cosenza Province. The integration in OMS allows the use of other components such as a GIS to manage input-output processes, and automatic calibration algorithms to estimate model parameters. Finally, model performance was quantified by comparing observed and simulated trigger times. This research is supported by the Ambito/Settore AMBIENTE E SICUREZZA (PON01_01503) project.
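
    The stability component itself is not reproduced here, but the generic infinite slope factor of safety it builds on can be sketched as follows; soil parameters and pore pressures are invented, and layering and suction effects are omitted in this simplified version.

      import numpy as np

      def infinite_slope_fs(c_eff, phi_eff_deg, gamma, depth, slope_deg, pore_pressure):
          """Generic infinite-slope factor of safety with pore-water pressure u:
             FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / (gamma*z*sin(beta)*cos(beta))"""
          beta = np.radians(slope_deg)
          phi = np.radians(phi_eff_deg)
          normal_stress = gamma * depth * np.cos(beta) ** 2 - pore_pressure
          resisting = c_eff + normal_stress * np.tan(phi)
          driving = gamma * depth * np.sin(beta) * np.cos(beta)
          return resisting / driving

      # Illustrative values only: 1 m of soil on a 35 degree slope, rising pore pressure
      for u in (0.0, 3.0, 6.0):  # kPa
          fs = infinite_slope_fs(c_eff=5.0, phi_eff_deg=30.0, gamma=18.0,
                                 depth=1.0, slope_deg=35.0, pore_pressure=u)
          print(f"u = {u:.0f} kPa -> FS = {fs:.2f}")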

  18. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach

    PubMed Central

    Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.

    2015-01-01

    People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704

  19. Sex-Specific Prediction Models for Sleep Apnea From the Hispanic Community Health Study/Study of Latinos.

    PubMed

    Shah, Neomi; Hanna, David B; Teng, Yanping; Sotres-Alvarez, Daniela; Hall, Martica; Loredo, Jose S; Zee, Phyllis; Kim, Mimi; Yaggi, H Klar; Redline, Susan; Kaplan, Robert C

    2016-06-01

    We developed and validated the first-ever sleep apnea (SA) risk calculator in a large population-based cohort of Hispanic/Latino subjects. Cross-sectional data on adults from the Hispanic Community Health Study/Study of Latinos (2008-2011) were analyzed. Subjective and objective sleep measurements were obtained. Clinically significant SA was defined as an apnea-hypopnea index ≥ 15 events per hour. Using logistic regression, four prediction models were created: three sex-specific models (female-only, male-only, and a sex × covariate interaction model to allow differential predictor effects), and one overall model with sex included as a main effect only. Models underwent 10-fold cross-validation and were assessed by using the C statistic. SA and its predictive variables; a total of 17 variables were considered. A total of 12,158 participants had complete sleep data available; 7,363 (61%) were women. The population-weighted prevalence of SA (apnea-hypopnea index ≥ 15 events per hour) was 6.1% in female subjects and 13.5% in male subjects. Male-only (C statistic, 0.808) and female-only (C statistic, 0.836) prediction models had the same predictor variables (ie, age, BMI, self-reported snoring). The sex-interaction model (C statistic, 0.836) contained sex, age, age × sex, BMI, BMI × sex, and self-reported snoring. The final overall model (C statistic, 0.832) contained age, BMI, snoring, and sex. We developed two websites for our SA risk calculator: one in English (https://www.montefiore.org/sleepapneariskcalc.html) and another in Spanish (http://www.montefiore.org/sleepapneariskcalc-es.html). We created an internally validated, highly discriminating, well-calibrated, and parsimonious prediction model for SA. Contrary to the study hypothesis, the variables did not have different predictive magnitudes in male and female subjects. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.

  20. Two approaches to incorporate clinical data uncertainty into multiple criteria decision analysis for benefit-risk assessment of medicinal products.

    PubMed

    Wen, Shihua; Zhang, Lanju; Yang, Bo

    2014-07-01

    The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. The objective of this article was to provide solutions to incorporate the uncertainty from clinical data into the MCDA model when evaluating the overall benefit-risk profiles among different treatment options. Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between clinical parameters (criteria) in the MCDA model and are straightforward to implement. The two proposed approaches were applied to a case study to evaluate the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo. The case study demonstrated a straightforward way to quantify the impact of the uncertainty from clinical data on the benefit-risk assessment and enabled statistical inference for evaluating the overall benefit-risk profiles among different treatment options. The δ-method approach provides a closed-form expression to quantify the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield its true sampling distribution for statistical inference. The obtained confidence intervals and other probabilistic measures from the two approaches enhance the benefit-risk decision making of medicinal products. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
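
    A hedged sketch of the Monte-Carlo approach (not the paper's implementation): criterion estimates for each treatment are resampled from a multivariate normal carrying their correlation, the weighted MCDA score is recomputed per draw, and a confidence interval and exceedance probability are read from the simulated score differences. Weights, estimates and correlations below are invented.

      import numpy as np

      rng = np.random.default_rng(3)

      # Invented example: two criteria (one benefit, one risk) already mapped to 0-1 utilities
      weights = np.array([0.6, 0.4])          # MCDA weights (sum to 1)
      est_drug = np.array([0.70, 0.30])
      est_placebo = np.array([0.40, 0.10])
      se = np.array([0.05, 0.04])
      corr = np.array([[1.0, 0.3], [0.3, 1.0]])
      cov = corr * np.outer(se, se)           # correlation structure between criteria

      n_sim = 100_000
      drug_draws = rng.multivariate_normal(est_drug, cov, n_sim)
      placebo_draws = rng.multivariate_normal(est_placebo, cov, n_sim)

      score_diff = (drug_draws @ weights) - (placebo_draws @ weights)
      ci_low, ci_high = np.percentile(score_diff, [2.5, 97.5])
      print(f"benefit-risk score difference: {score_diff.mean():.3f} "
            f"(95% CI {ci_low:.3f} to {ci_high:.3f}); "
            f"P(drug > placebo) = {(score_diff > 0).mean():.3f}")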

  1. Optimal hemodynamic response model for functional near-infrared spectroscopy

    PubMed Central

    Kamran, Muhammad A.; Jeong, Myung Yung; Mannan, Malik M. N.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique and measures brain activities by means of near-infrared light of 650–950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them to model the shape and the other two for scale and baseline, respectively). The measured signal is assumed to be a linear combination of the HRF, baseline, and physiological noises (the amplitudes and frequencies of the physiological noises are assumed to be unknown). An objective function is developed as the square of the residuals with constraints on the 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis (i.e., t-value > t-critical and p-value < 0.05). PMID:26136668
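
    A simplified sketch of the two-Gamma HRF and a residual-squares fit, restricted to the six shape/scale/baseline parameters; the paper's full 12-parameter model with physiological noise terms is not reproduced, and the fixed undershoot ratio is an assumption of this example only.

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.stats import gamma

      def double_gamma_hrf(t, a1, a2, b1, b2, scale, baseline):
          """Two-Gamma HRF: main response minus a 1/6-weighted undershoot (assumed ratio),
          plus an overall scale and baseline."""
          h = gamma.pdf(t, a1, scale=b1) - gamma.pdf(t, a2, scale=b2) / 6.0
          return scale * h + baseline

      t = np.arange(0.0, 30.0, 0.1)
      rng = np.random.default_rng(4)
      true_params = [6.0, 16.0, 1.0, 1.0, 1.0, 0.02]      # invented "ground truth"
      signal = double_gamma_hrf(t, *true_params) + 0.02 * rng.standard_normal(t.size)

      def residuals(p):
          return double_gamma_hrf(t, *p) - signal

      fit = least_squares(residuals, x0=[5.0, 14.0, 1.2, 1.2, 0.8, 0.0],
                          bounds=([1, 5, 0.1, 0.1, 0.0, -1], [10, 25, 3, 3, 5, 1]))
      print("estimated parameters:", np.round(fit.x, 2))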

  2. Optimal hemodynamic response model for functional near-infrared spectroscopy.

    PubMed

    Kamran, Muhammad A; Jeong, Myung Yung; Mannan, Malik M N

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique and measures brain activities by means of near-infrared light of 650-950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them to model the shape and the other two for scale and baseline, respectively). The measured signal is assumed to be a linear combination of the HRF, baseline, and physiological noises (the amplitudes and frequencies of the physiological noises are assumed to be unknown). An objective function is developed as the square of the residuals with constraints on the 12 free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis (i.e., t-value > t-critical and p-value < 0.05).

  3. Using LabView for real-time monitoring and tracking of multiple biological objects

    NASA Astrophysics Data System (ADS)

    Nikolskyy, Aleksandr I.; Krasilenko, Vladimir G.; Bilynsky, Yosyp Y.; Starovier, Anzhelika

    2017-04-01

    The real-time study and tracking of the movement dynamics of various biological objects is important and widely researched today. The features of the objects, the conditions of their visualization, and the model parameters strongly influence the choice of optimal methods and algorithms for a specific task. Therefore, to automate the adaptation of recognition and tracking algorithms, several LabVIEW tracker projects are considered in this article. The projects allow templates for training and retraining the system to be changed quickly, and they adapt to the speed of the objects and to the statistical characteristics of noise in the images. New functions for comparing images or their features, descriptors, and pre-processing methods are also discussed. Experiments carried out to test the trackers on real video files are presented and analyzed.

  4. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, magnitude and timing of stream flow peak). An automated calibration process that allows real-time updating of data/models, allowing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, only need a small amount of input data, and only output a small amount of statistical information for each calibration run. A typical auto calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two Auto Calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed computing, cross-platform environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.

  5. What You Learn is What You See: Using Eye Movements to Study Infant Cross-Situational Word Learning

    PubMed Central

    Smith, Linda

    2016-01-01

    Recent studies show that both adults and young children possess powerful statistical learning capabilities to solve the word-to-world mapping problem. However, the underlying mechanisms that make statistical learning possible and powerful are not yet known. With the goal of providing new insights into this issue, the research reported in this paper used an eye tracker to record the moment-by-moment eye movement data of 14-month-old babies in statistical learning tasks. Various measures are applied to such fine-grained temporal data, such as looking duration and shift rate (the number of shifts in gaze from one visual object to the other) trial by trial, showing different eye movement patterns between strong and weak statistical learners. Moreover, an information-theoretic measure is developed and applied to gaze data to quantify the degree of learning uncertainty trial by trial. Next, a simple associative statistical learning model is applied to eye movement data and these simulation results are compared with empirical results from young children, showing strong correlations between these two. This suggests that an associative learning mechanism with selective attention can provide a cognitively plausible model of cross-situational statistical learning. The work represents the first steps to use eye movement data to infer underlying real-time processes in statistical word learning. PMID:22213894

  6. Generation of future potential scenarios in an Alpine Catchment by applying bias-correction techniques, delta-change approaches and stochastic Weather Generators at different spatial scale. Analysis of their influence on basic and drought statistics.

    NASA Astrophysics Data System (ADS)

    Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio

    2017-04-01

    Assessing impacts of potential future climate change scenarios on precipitation and temperature is essential to design adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate future potential scenarios in an Alpine Catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches consists of the historical climatic data (taken from the Spain02 project for the period 1971-2000 with a spatial resolution of 12.5 km) and the future series provided by climatic models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different regional climate models (RCMs) nested within four different global climate models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) by the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using distribution-derived transformation, and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series were proposed to obtain more representative potential future climate scenarios to be employed to study potential impacts. In this work we propose a non-equifeasible combination of the future series, giving more weight to those models (delta change approaches) or combinations of models and techniques that provide a better approximation to the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit to reproduce the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the theory of runs. This analysis allows us to discriminate the best RCM and the best combination of model and correction technique in the bias-correction method. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79%, 31.79%, 31.03% and 31.74% for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles respectively, and for precipitation they are -25.48%, -28.49%, -26.42% and -27.35% respectively.
Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank Spain02 and CORDEX projects for the data provided for this study and the R package qmap.
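
    Among the transformation techniques listed in the record above, quantile mapping with empirical quantiles can be sketched as follows; the gamma-distributed "precipitation" series and all parameter values are invented for illustration.

      import numpy as np

      def empirical_quantile_mapping(obs_hist, mod_hist, mod_fut, n_quantiles=100):
          """Map future model values through the empirical quantile correspondence
          between observed and modelled historical series."""
          q = np.linspace(0.01, 0.99, n_quantiles)
          obs_q = np.quantile(obs_hist, q)
          mod_q = np.quantile(mod_hist, q)
          # rank each future value within the modelled historical climate, then
          # read off the corresponding observed quantile
          fut_ranks = np.interp(mod_fut, mod_q, q, left=q[0], right=q[-1])
          return np.interp(fut_ranks, q, obs_q)

      rng = np.random.default_rng(5)
      obs_hist = rng.gamma(2.0, 2.5, 360)     # invented monthly precipitation, historical
      mod_hist = rng.gamma(2.0, 3.2, 360)     # RCM historical run (biased wet)
      mod_fut = rng.gamma(1.8, 3.0, 360)      # RCM future run
      corrected = empirical_quantile_mapping(obs_hist, mod_hist, mod_fut)
      print(f"raw future mean: {mod_fut.mean():.2f}, bias-corrected mean: {corrected.mean():.2f}")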

  7. Content-Related Issues Pertaining to Teaching Statistics: Making Decisions about Educational Objectives in Statistics Courses.

    ERIC Educational Resources Information Center

    Bliss, Leonard B.; Tashakkori, Abbas

    This paper discusses the objectives that would be appropriate for statistics classes for students who are not majoring in statistics, evaluation, or quantitative research design. These "non-majors" should be able to choose appropriate analytical methods for specific sets of data based on the research question and the nature of the data, and they…

  8. The skeletal maturation status estimated by statistical shape analysis: axial images of Japanese cervical vertebra

    PubMed Central

    Shin, S M; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B

    2015-01-01

    Objectives: To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. Methods: The sample included 24 female and 19 male patients with hand–wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Results: Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Conclusions: Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index. PMID:25411713
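
    A rough sketch of the shape-analysis pipeline described above (alignment followed by principal component analysis); for simplicity it aligns every outline to the first one with ordinary Procrustes rather than running full generalized Procrustes analysis, and the landmark data are synthetic.

      import numpy as np
      from scipy.spatial import procrustes
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(6)

      # Invented landmark data: 43 subjects x 10 (x, y) landmarks on a vertebral outline
      angles = np.linspace(0, 2 * np.pi, 10, endpoint=False)
      base = np.column_stack([np.cos(angles), np.sin(angles)])
      shapes = [base + 0.05 * rng.standard_normal(base.shape) for _ in range(43)]

      # Pairwise Procrustes alignment to the first shape (a simplification of the
      # generalized Procrustes step described in the abstract)
      reference = shapes[0]
      aligned = [procrustes(reference, s)[1] for s in shapes]

      X = np.array([a.ravel() for a in aligned])        # flattened aligned coordinates
      pca = PCA(n_components=3).fit(X)
      print("variance explained by first PCs:", np.round(pca.explained_variance_ratio_, 3))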

  9. The use of higher-order statistics in rapid object categorization in natural scenes.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-02-04

    We can rapidly and efficiently recognize many types of objects embedded in complex scenes. What information supports this object recognition is a fundamental question for understanding our visual processing. We investigated the eccentricity-dependent role of shape and statistical information for ultrarapid object categorization, using the higher-order statistics proposed by Portilla and Simoncelli (2000). Synthesized textures computed by their algorithms have the same higher-order statistics as the originals, while the global shapes were destroyed. We used the synthesized textures to manipulate the availability of shape information separately from the statistics. We hypothesized that shape makes a greater contribution to central vision than to peripheral vision and that statistics show the opposite pattern. Results did not show contributions clearly biased by eccentricity. Statistical information demonstrated a robust contribution not only in peripheral but also in central vision. For shape, the results supported the contribution in both central and peripheral vision. Further experiments revealed some interesting properties of the statistics. They are available for a limited time, attributable to the presence or absence of animals without shape, and predict how easily humans detect animals in original images. Our data suggest that when facing the time constraint of categorical processing, higher-order statistics underlie our significant performance for rapid categorization, irrespective of eccentricity. © 2015 ARVO.

  10. Examining the Association between Patient-Reported Symptoms of Attention and Memory Dysfunction with Objective Cognitive Performance: A Latent Regression Rasch Model Approach.

    PubMed

    Li, Yuelin; Root, James C; Atkinson, Thomas M; Ahles, Tim A

    2016-06-01

    Patient-reported cognition generally exhibits poor concordance with objectively assessed cognitive performance. In this article, we introduce latent regression Rasch modeling and provide a step-by-step tutorial for applying Rasch methods as an alternative to traditional correlation to better clarify the relationship of self-report and objective cognitive performance. An example analysis using these methods is also included. An introduction to latent regression Rasch modeling is provided, together with a tutorial on implementing it in the JAGS programming language to obtain Bayesian posterior parameter estimates. In an example analysis, data from a longitudinal neurocognitive outcomes study of 132 breast cancer patients and 45 non-cancer matched controls that included self-report and objective performance measures pre- and post-treatment were analyzed using both conventional and latent regression Rasch model approaches. Consistent with previous research, in the conventional analysis the correlations between neurocognitive decline and self-reported problems were generally near zero. In contrast, application of latent regression Rasch modeling found statistically reliable associations between objective attention and processing speed measures and self-reported Attention and Memory scores. Latent regression Rasch modeling, together with correlation of specific self-reported cognitive domains with neurocognitive measures, helps to clarify the relationship of self-report with objective performance. While the majority of patients attribute their cognitive difficulties to memory decline, the Rasch modeling suggests the importance of processing speed and initial learning. To encourage the use of this method, a step-by-step guide and programming language for implementation is provided. Implications of this method in cognitive outcomes research are discussed. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
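
    For readers unfamiliar with the model, the core Rasch item response function, and the way a latent regression layer would enter, can be sketched as follows (illustrative values only; the analysis in the paper itself is estimated with JAGS).

      import numpy as np

      def rasch_prob(theta, b):
          """Rasch model: probability that a person with latent trait theta endorses
          an item with difficulty b."""
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      # In a latent regression Rasch model, theta is itself regressed on covariates,
      # e.g. theta_i = beta0 + beta1 * objective_score_i + error_i (illustrative form only).
      theta = np.array([-1.0, 0.0, 1.5])        # invented person parameters
      b = np.array([-0.5, 0.0, 0.8, 1.2])       # invented item difficulties
      print(np.round(rasch_prob(theta[:, None], b[None, :]), 2))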

  11. Space Debris Symposium (A6.) Measurements and Space Surveillance (1.): Measurements of the Small Particle Debris Cloud from the 11 January, 2007 Chinese Anti-satellite Test

    NASA Technical Reports Server (NTRS)

    Matney, Mark J.; Stansbery, Eugene; J.-C Liou; Stokely, Christopher; Horstman, Matthew; Whitlock, David

    2008-01-01

    On January 11, 2007, the Chinese military conducted a test of an anti-satellite (ASAT) system, destroying their own Fengyun-1C spacecraft with an interceptor missile. The resulting hypervelocity collision created an unprecedented number of tracked debris objects - more than 2500. These objects represent only those large enough for the US Space Surveillance Network (SSN) to track - typically objects larger than about 5-10 cm in diameter. There are expected to be even more debris objects at sizes too small to be seen and tracked by the SSN. Because of the altitude of the target satellite (865 x 845 km orbit), many of the debris fragments are expected to have long orbital lifetimes and to contribute to the orbital debris environment for decades to come. In the days and weeks following the ASAT test, NASA was able to use Lincoln Laboratory's Haystack radar on several occasions to observe portions of the ASAT debris cloud. Haystack has the capability of detecting objects down to less than one centimeter in diameter, and a large number of centimeter-sized particles corresponding to the ASAT cloud were clearly seen in the data. While Haystack cannot track these objects, the statistical sampling procedures NASA uses can give an accurate statistical picture of the characteristics of the debris from a breakup event. For years computer models based on data from ground hypervelocity collision tests (e.g., the SOCIT test) and orbital collision experiments (e.g., the P-78 and Delta-180 on-orbit collisions) have been used to predict the extent and characteristics of such hypervelocity collision debris clouds, but until now there have not been good ways to verify these models in the centimeter size regime. It is believed that unplanned collisions of objects in space, similar to the ASAT test, will drive the long-term future evolution of the debris environment in near-Earth space. Therefore, the Chinese ASAT test provides an excellent opportunity to test the models used to predict the future debris environment. For this study, Haystack detection events are compared to model predictions to test the model assumptions, including debris size distribution, velocity distribution, and assumptions about momentum transfer between the target and interceptor. In this paper we will present the results of these and other measurements on the size and extent of collisional breakup debris clouds.

  12. Structure of Herbig AeBe disks at the milliarcsecond scale. A statistical survey in the H band using PIONIER-VLTI

    NASA Astrophysics Data System (ADS)

    Lazareff, B.; Berger, J.-P.; Kluska, J.; Le Bouquin, J.-B.; Benisty, M.; Malbet, F.; Koen, C.; Pinte, C.; Thi, W.-F.; Absil, O.; Baron, F.; Delboulbé, A.; Duvert, G.; Isella, A.; Jocou, L.; Juhasz, A.; Kraus, S.; Lachaume, R.; Ménard, F.; Millan-Gabet, R.; Monnier, J. D.; Moulin, T.; Perraut, K.; Rochat, S.; Soulez, F.; Tallon, M.; Thiébaut, E.; Traub, W.; Zins, G.

    2017-03-01

    Context. It is now generally accepted that the near-infrared excess of Herbig AeBe stars originates in the dust of a circumstellar disk. Aims: The aims of this article are to infer the radial and vertical structure of these disks at scales of order 1 au, and the properties of the dust grains. Methods: The program objects (51 in total) were observed with the H-band (1.6 μm) PIONIER/VLTI interferometer. The largest baselines allowed us to resolve (at least partially) structures of a few tenths of an au at typical distances of a few hundred parsecs. Dedicated UBVRIJHK photometric measurements were also obtained. Spectral and 2D geometrical parameters are extracted via fits of a few simple models: ellipsoids and broadened rings with azimuthal modulation. Model bias is mitigated by parallel fits of physical disk models. Sample statistics were evaluated against similar statistics for the physical disk models to infer properties of the sample objects as a group. Results: We find that dust at the inner rim of the disk has a sublimation temperature Tsub ≈ 1800 K. A ring morphology is confirmed for approximately half the resolved objects; these rings are wide δr/r ≥ 0.5. A wide ring favors a rim that, on the star-facing side, looks more like a knife edge than a doughnut. The data are also compatible with the combination of a narrow ring and an inner disk of unspecified nature inside the dust sublimation radius. The disk inner part has a thickness z/r ≈ 0.2, flaring to z/r ≈ 0.5 in the outer part. We confirm the known luminosity-radius relation; a simple physical model is consistent with both the mean luminosity-radius relation and the ring relative width; however, a significant spread around the mean relation is present. In some of the objects we find a halo component, fully resolved at the shortest interferometer spacing, that is related to the HAeBe class. Full Tables B1-B3, as well as results of other parametric fits, are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/599/A85 The calibrated interferometric data can be found as FITS format files at http://oidb.jmmc.fr/collection.html?id=HAeBeLP

  13. Information Measures for Statistical Orbit Determination

    ERIC Educational Resources Information Center

    Mashiku, Alinda K.

    2013-01-01

    The current Situational Space Awareness (SSA) is faced with a huge task of tracking the increasing number of space objects. The tracking of space objects requires frequent and accurate monitoring for orbit maintenance and collision avoidance using methods for statistical orbit determination. Statistical orbit determination enables us to obtain…

  14. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
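
    A sketch of the kind of LMM specification described above, on invented reaction-time data and using the statsmodels formula interface rather than R's lme4 (which such studies often use), with a by-subject random intercept and a random slope for the spatial effect.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      rows = []
      for subject in range(30):
          subj_intercept = rng.normal(0, 40)
          subj_spatial = rng.normal(0, 15)        # by-subject variation in the spatial effect
          for _ in range(60):
              spatial = rng.integers(0, 2)        # invented condition codes, not the Egly design
              object_sw = rng.integers(0, 2)
              rt = (500 + subj_intercept + (30 + subj_spatial) * spatial
                    + 45 * object_sw + rng.normal(0, 50))
              rows.append(dict(subject=subject, spatial=spatial, object_sw=object_sw, rt=rt))
      df = pd.DataFrame(rows)

      # Fixed effects for the two cue-validity contrasts; random intercept and a
      # random slope for the spatial effect, varying by subject
      fit = smf.mixedlm("rt ~ spatial + object_sw", df, groups="subject",
                        re_formula="~spatial").fit()
      print(fit.summary())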

  15. Treatment of control data in lunar phototriangulation. [application of statistical procedures and development of mathematical and computer techniques

    NASA Technical Reports Server (NTRS)

    Wong, K. W.

    1974-01-01

    In lunar phototriangulation, there is a complete lack of accurate ground control points. The accuracy analysis of the results of lunar phototriangulation must, therefore, be completely dependent on statistical procedures. It was the objective of this investigation to examine the validity of the commonly used statistical procedures, and to develop both mathematical techniques and computer software for evaluating (1) the accuracy of lunar phototriangulation; (2) the contribution of the different types of photo support data to the accuracy of lunar phototriangulation; (3) the accuracy of absolute orientation as a function of the accuracy and distribution of both the ground and model points; and (4) the relative slope accuracy between any triangulated pass points.

  16. A study of the continuum of integration of mathematics content with science concepts at the middle school level in West Virginia

    NASA Astrophysics Data System (ADS)

    Meisel, Edna Marie

    The purpose of this study was to examine the practices and perceptions of regular education seventh grade middle school mathematics teachers in West Virginia concerning the integration of mathematics objectives with science concepts. In addition, this study also emphasized the use of integrated curriculum continuum models to study mathematics teachers' practices and perceptions for teaching mathematics objectives in connection with science concepts. It was argued that the integrated curriculum continuum model can be used to help educators begin to form a common definition of integrated curriculum. The population was described as the regular education seventh grade middle school mathematics teachers in West Virginia. The entire population (N = 173) was used as the participants in this study. Data were collected using an integrated curriculum practices and perceptions survey constructed by the researcher. This was a descriptive study that incorporated the chi-square statistic to show trends in teacher practices and perceptions. Also, an ex post facto design that incorporated the Mann-Whitney U statistic was used to compare practices and perceptions between teachers grouped according to factors that influence teaching practices and perceptions. These factors included teaching certificate endorsement and teacher professional preparation. Results showed that the regular education seventh grade middle school mathematics teachers of West Virginia are teaching mathematics objectives mainly at a discipline-based level with no formal attempt at integration with science concepts. However, these teachers perceived that many of the mathematics objectives should be taught at varying levels of integration with science concepts. It was also shown that teachers who experienced professional preparation courses that emphasized integrated curriculum did teach many of the mathematics objectives at higher levels of integration with science than those teachers who did not experience integrated curriculum courses.

  17. Investigation using data from ERTS to develop and implement utilization of living marine resources

    NASA Technical Reports Server (NTRS)

    Stevenson, W. H. (Principal Investigator); Pastula, E. J., Jr.

    1973-01-01

    The author has identified the following significant results. The feasibility of utilizing ERTS-1 data in conjunction with aerial remote sensing and sea truth information to predict the distribution of menhaden in the Mississippi Sound during a specific time frame has been demonstrated by employing a number of uniquely designed empirical regression models. The construction of these models was made possible through innovative statistical routines specifically developed to meet the stated objectives.

  18. Model-based approach to the detection and classification of mines in sidescan sonar.

    PubMed

    Reed, Scott; Petillot, Yvan; Bell, Judith

    2004-01-10

    This paper presents a model-based approach to mine detection and classification by use of sidescan sonar. Advances in autonomous underwater vehicle technology have increased the interest in automatic target recognition systems in an effort to automate a process that is currently carried out by a human operator. Current automated systems generally require training and thus produce poor results when the test data set is different from the training set. This has led to research into unsupervised systems, which are able to cope with the large variability in conditions and terrains seen in sidescan imagery. The system presented in this paper first detects possible minelike objects using a Markov random field model, which operates well on noisy images, such as sidescan, and allows a priori information to be included through the use of priors. The highlight and shadow regions of the object are then extracted with a cooperating statistical snake, which assumes these regions are statistically separate from the background. Finally, a classification decision is made using Dempster-Shafer theory, where the extracted features are compared with synthetic realizations generated with a sidescan sonar simulator model. Results for the entire process are shown on real sidescan sonar data. Similarities between the sidescan sonar and synthetic aperture radar (SAR) imaging processes ensure that the approach outlined here could also be applied to SAR image analysis.
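
    The final fusion step can be illustrated with Dempster's rule of combination on a two-element frame {mine, clutter}; the mass assignments below are invented, and the paper's feature-to-mass mapping is not reproduced.

      def dempster_combine(m1, m2):
          """Dempster's rule for a frame {mine, clutter}; masses are dicts over
          'mine', 'clutter', and 'either' (the full frame)."""
          def intersect(a, b):
              if a == "either":
                  return b
              if b == "either":
                  return a
              return a if a == b else None   # None = empty intersection (conflict)

          combined = {"mine": 0.0, "clutter": 0.0, "either": 0.0}
          conflict = 0.0
          for a, wa in m1.items():
              for b, wb in m2.items():
                  target = intersect(a, b)
                  if target is None:
                      conflict += wa * wb
                  else:
                      combined[target] += wa * wb
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      shadow_evidence = {"mine": 0.6, "clutter": 0.1, "either": 0.3}     # invented masses
      highlight_evidence = {"mine": 0.5, "clutter": 0.2, "either": 0.3}
      print(dempster_combine(shadow_evidence, highlight_evidence))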

  19. Towards a population synthesis model of objects formed by self-gravitating disc fragmentation and tidal downsizing

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan; Rice, Ken

    2013-07-01

    Recently, the gravitational instability (GI) model of giant planet and brown dwarf formation has been revisited and recast into what is often referred to as the `tidal downsizing' hypothesis. The fragmentation of self-gravitating protostellar discs into gravitationally bound embryos - with masses of a few to tens of Jupiter masses, at semimajor axes above 30-40 au - is followed by a combination of grain sedimentation inside the embryo, radial migration towards the central star and tidal disruption of the embryo's upper layers. The properties of the resultant object depend sensitively on the time-scales over which each process occurs. Therefore, GI followed by tidal downsizing can theoretically produce objects spanning a large mass range, from terrestrial planets to giant planets and brown dwarfs. Whether such objects can be formed in practice, and what proportions of the observed population they would represent, requires a more involved statistical analysis. We present a simple population synthesis model of star and planet formation via GI and tidal downsizing. We couple a semi-analytic model of protostellar disc evolution to analytic calculations of fragmentation, initial embryo mass, grain growth and sedimentation, embryo migration and tidal disruption. While there are key pieces of physics yet to be incorporated, it represents a first step towards a mature statistical model of GI and tidal downsizing as a mode of star and planet formation. We show results from four runs of the population synthesis model, varying the opacity law and the strength of migration, as well as investigating the effect of disc truncation during the fragmentation process. We find that a large fraction of disc fragments are completely destroyed by tidal disruption (typically 40 per cent of the initial population). The tidal downsizing process tends to prevent low-mass embryos from reaching small semimajor axes. The majority of surviving objects are brown dwarfs without solid cores of any kind. Around 40 per cent of surviving objects form solid cores of the order of 5-10 M⊕, and of this group a few do migrate to distances amenable to current exoplanet observations. Over a million disc fragments were simulated in this work, and only one resulted in the formation of a terrestrial planet (i.e. with a core mass of a few Earth masses and no gaseous envelope). These early results suggest that GI followed by tidal downsizing is not the principal mode of planet formation, but remains an excellent means of forming gas giant planets, brown dwarfs and low-mass stars at large semimajor axes.

  20. The Value of a Well-Being Improvement Strategy

    PubMed Central

    Guo, Xiaobo; Coberley, Carter; Pope, James E.; Wells, Aaron

    2015-01-01

    Objective: The objective of this study is to evaluate the effectiveness of a firm's 5-year strategy toward improving well-being while lowering health care costs amidst adoption of a Consumer-Driven Health Plan. Methods: Repeated measures statistical models were employed to test and quantify the association between key demographic factors, employment type, year, individual well-being, and outcomes of health care costs, obesity, smoking, absence, and performance. Results: Average individual well-being trended upward by 13.5% over 5 years, monthly allowed amount health care costs declined 5.2% on average per person per year, and obesity and smoking rates declined by 4.8 and 9.7%, respectively, on average each year. The results show that individual well-being was significantly associated with each outcome and in the expected direction. Conclusions: The firm's strategy was successful in driving statistically significant, longitudinal well-being, biometric and productivity improvements, and health care cost reduction. PMID:26461860

  1. Objective determination of image end-members in spectral mixture analysis of AVIRIS data

    NASA Technical Reports Server (NTRS)

    Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.

    1993-01-01

    Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members is selected from an image cube (image end-members) that best accounts for its spectral variance within a constrained, linear least-squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial-and-error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
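
    The constrained linear mixing step itself can be sketched as below: per-pixel end-member fractions are recovered by non-negative least squares with a softly enforced sum-to-one constraint. The end-member spectra are synthetic, and this is not the authors' end-member selection procedure.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(8)

      n_bands, n_endmembers = 50, 3
      endmembers = rng.uniform(0.05, 0.6, (n_bands, n_endmembers))      # invented spectra
      true_fracs = np.array([0.5, 0.3, 0.2])
      pixel = endmembers @ true_fracs + 0.005 * rng.standard_normal(n_bands)

      # Constrained unmixing: non-negative fractions via NNLS, with a sum-to-one
      # constraint enforced softly through a heavily weighted extra row.
      weight = 100.0
      A = np.vstack([endmembers, weight * np.ones((1, n_endmembers))])
      b = np.concatenate([pixel, [weight]])
      fractions, _ = nnls(A, b)
      print("estimated fractions:", np.round(fractions, 3), " true:", true_fracs)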

  2. Impact of a variational objective analysis scheme on a regional area numerical model: The Italian Air Force Weather Service experience

    NASA Astrophysics Data System (ADS)

    Bonavita, M.; Torrisi, L.

    2005-03-01

    A new data assimilation system has been designed and implemented at the National Center for Aeronautic Meteorology and Climatology of the Italian Air Force (CNMCA) in order to improve its operational numerical weather prediction capabilities and provide more accurate guidance to operational forecasters. The system, which is undergoing testing before operational use, is based on an “observation space” version of the 3D-VAR method for the objective analysis component, and on the High Resolution Regional Model (HRM) of the Deutscher Wetterdienst (DWD) for the prognostic component. Notable features of the system include a completely parallel (MPI+OMP) implementation of the solution of analysis equations by a preconditioned conjugate gradient descent method; correlation functions in spherical geometry with thermal wind constraint between mass and wind field; derivation of the objective analysis parameters from a statistical analysis of the innovation increments.
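
    A toy, one-dimensional sketch of an observation-space analysis of this kind: the innovation system (H B H^T + R) w = y - H x_b is solved with a conjugate gradient routine and the weights are mapped back to the grid with B H^T. Grid size, error covariances and the observation operator are all invented, and no preconditioning or balance constraints are included.

      import numpy as np
      from scipy.sparse.linalg import cg

      rng = np.random.default_rng(9)

      n_grid, n_obs = 200, 40
      x_truth = np.sin(np.linspace(0, 4 * np.pi, n_grid))
      x_b = x_truth + 0.3 * rng.standard_normal(n_grid)        # background (first guess)

      obs_idx = np.sort(rng.choice(n_grid, n_obs, replace=False))
      H = np.zeros((n_obs, n_grid))
      H[np.arange(n_obs), obs_idx] = 1.0                        # observation operator
      y = x_truth[obs_idx] + 0.1 * rng.standard_normal(n_obs)   # observations

      dist = np.abs(np.arange(n_grid)[:, None] - np.arange(n_grid)[None, :])
      B = 0.09 * np.exp(-(dist / 10.0) ** 2)                    # invented background-error covariance
      R = 0.01 * np.eye(n_obs)                                  # invented observation-error covariance

      innovation = y - H @ x_b
      w, info = cg(H @ B @ H.T + R, innovation)                 # solve in observation space
      x_a = x_b + B @ H.T @ w                                   # map the weights back to the grid

      rms = lambda e: np.sqrt(np.mean(e ** 2))
      print(f"CG converged: {info == 0}; background RMS error {rms(x_b - x_truth):.3f}, "
            f"analysis RMS error {rms(x_a - x_truth):.3f}")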

  3. Estimating migratory fish distribution from altitude and basin area: a case study in a large Neotropical river

    Treesearch

    Jose Ricardo Barradas; Lucas G. Silva; Bret C. Harvey; Nelson F. Fontoura

    2012-01-01

    1. The objective of this study was to identify longitudinal distribution patterns of large migratory fish species in the Uruguay River basin, southern Brazil, and construct statistical distribution models for Salminus brasiliensis, Prochilodus lineatus, Leporinus obtusidens and Pseudoplatystoma corruscans. 2. The sampling programme resulted in 202 interviews with old...

  4. Teacher Evaluation: Alternate Measures of Student Growth. Q&A with Brian Gill. REL Mid-Atlantic Webinar

    ERIC Educational Resources Information Center

    Regional Educational Laboratory Mid-Atlantic, 2013

    2013-01-01

    This webinar described the findings of our literature review on alternative measures of student growth that are used in teacher evaluation. The review focused on two types of alternative growth measures: statistical growth/value-added models and teacher-developed student learning objectives. This Q&A addressed the questions participants had…

  5. Evaluation of a weighted test in the analysis of ordinal gait scores in an additivity model for five OP pesticides.

    EPA Science Inventory

    Appropriate statistical analyses are critical for evaluating interactions of mixtures with a common mode of action, as is often the case for cumulative risk assessments. Our objective is to develop analyses for use when a response variable is ordinal, and to test for interaction...

  6. The National Evaluation of School Nutrition Programs. Final Report - Executive Summary.

    ERIC Educational Resources Information Center

    Radzikowski, Jack

    This is a summary of the final report of a study (begun in 1979) of the National School Lunch, School Breakfast, and Special Milk Programs. The major objectives of the evaluation were to (1) identify existing information on the school nutrition programs; (2) identify determinants of participation in the programs and develop statistical models for…

  7. Mapping fuels at multiple scales: landscape application of the fuel characteristic classification system.

    Treesearch

    D. McKenzie; C.L. Raymond; L.-K.B. Kellogg; R.A. Norheim; A.G. Andreu; A.C. Bayard; K.E. Kopper; E. Elman

    2007-01-01

    Fuel mapping is a complex and often multidisciplinary process, involving remote sensing, ground-based validation, statistical modeling, and knowledge-based systems. The scale and resolution of fuel mapping depend both on objectives and availability of spatial data layers. We demonstrate use of the Fuel Characteristic Classification System (FCCS) for fuel mapping at two...

  8. Adversarial Threshold Neural Computer for Molecular de Novo Design.

    PubMed

    Putin, Evgeny; Asadulaev, Arip; Vanhaelen, Quentin; Ivanenkov, Yan; Aladinskaya, Anastasia V; Aliper, Alex; Zhavoronkov, Alex

    2018-03-30

    In this article, we propose the deep neural network Adversarial Threshold Neural Computer (ATNC). The ATNC model is intended for the de novo design of novel small-molecule organic structures. The model is based on generative adversarial network architecture and reinforcement learning. ATNC uses a Differentiable Neural Computer as a generator and has a new specific block, called adversarial threshold (AT). AT acts as a filter between the agent (generator) and the environment (discriminator + objective reward functions). Furthermore, to generate more diverse molecules we introduce a new objective reward function named Internal Diversity Clustering (IDC). In this work, ATNC is tested and compared with the ORGANIC model. Both models were trained on the SMILES string representation of the molecules, using four objective functions (internal similarity, Muegge druglikeness filter, presence or absence of sp3-rich fragments, and IDC). The SMILES representations of 15K druglike molecules from the ChemDiv collection were used as a training data set. For the different functions, ATNC outperforms ORGANIC. Combined with the IDC, ATNC generates 72% of valid and 77% of unique SMILES strings, while ORGANIC generates only 7% of valid and 86% of unique SMILES strings. For each set of molecules generated by ATNC and ORGANIC, we analyzed distributions of four molecular descriptors (number of atoms, molecular weight, logP, and TPSA) and calculated five chemical statistical features (internal diversity, number of unique heterocycles, number of clusters, number of singletons, and number of compounds that have not been passed through medicinal chemistry filters). Analysis of key molecular descriptors and chemical statistical features demonstrated that the molecules generated by ATNC elicited better druglikeness properties. We also performed in vitro validation of the molecules generated by ATNC; results indicated that ATNC is an effective method for producing hit compounds.

  9. Geometry of the perceptual space

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Palmer, Stephen; Eghbalnia, Hamid; Carew, John

    1999-09-01

    The concept of space and geometry varies across the subjects. Following Poincare, we consider the construction of the perceptual space as a continuum equipped with a notion of magnitude. The study of the relationships of objects in the perceptual space gives rise to what we may call perceptual geometry. Computational modeling of objects and investigation of their deeper perceptual geometrical properties (beyond qualitative arguments) require a mathematical representation of the perceptual space. Within the realm of such a mathematical/computational representation, visual perception can be studied as in the well-understood logic-based geometry. This, however, does not mean that one could reduce all problems of visual perception to their geometric counterparts. Rather, visual perception as reported by a human observer, has a subjective factor that could be analytically quantified only through statistical reasoning and in the course of repetitive experiments. Thus, the desire to experimentally verify the statements in perceptual geometry leads to an additional probabilistic structure imposed on the perceptual space, whose amplitudes are measured through intervention by human observers. We propose a model for the perceptual space and the case of perception of textured surfaces as a starting point for object recognition. To rigorously present these ideas and propose computational simulations for testing the theory, we present the model of the perceptual geometry of surfaces through an amplification of theory of Riemannian foliation in differential topology, augmented by statistical learning theory. When we refer to the perceptual geometry of a human observer, the theory takes into account the Bayesian formulation of the prior state of the knowledge of the observer and Hebbian learning. We use a Parallel Distributed Connectionist paradigm for computational modeling and experimental verification of our theory.

  10. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-03-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step toward producing a good disease map. Libya was selected for this work to examine the geographical variation in its lung cancer incidence. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011. These are the Standardized Morbidity Ratio (SMR), the most popular statistic used in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study begins by providing a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main general results presented in this study show that the Poisson-gamma model, the BYM model, and the Mixture model can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across a range of models.
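
    As a worked illustration of the first two estimators discussed above, the sketch below computes the SMR (observed over expected cases) and a conjugate Poisson-gamma relative risk for a few hypothetical districts; the counts and the Gamma(a, b) prior values are invented for illustration and are not the Libyan data.

    ```python
    # Sketch: SMR vs. Poisson-gamma relative risk for hypothetical district counts.
    import numpy as np

    observed = np.array([0, 3, 7, 12])          # hypothetical observed lung cancer cases per district
    expected = np.array([1.8, 2.5, 6.0, 9.4])   # expected cases from age-standardised rates

    smr = observed / expected                   # breaks down (0 or unstable) for sparse districts

    # Conjugate Poisson-gamma smoothing: with a Gamma(a, b) prior on the relative risk,
    # the posterior mean is (O_i + a) / (E_i + b), which stays positive even when O_i = 0.
    a, b = 2.0, 2.0                             # illustrative prior hyperparameters
    rr_pg = (observed + a) / (expected + b)

    for i, (s, r) in enumerate(zip(smr, rr_pg)):
        print(f"district {i}: SMR = {s:.2f}, Poisson-gamma RR = {r:.2f}")
    ```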

  11. Risk Estimation for Lung Cancer in Libya: Analysis Based on Standardized Morbidity Ratio, Poisson-Gamma Model, BYM Model and Mixture Model

    PubMed Central

    Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley

    2017-01-01

    Cancer is the most rapidly spreading disease in the world, especially in developing countries, including Libya. Cancer represents a significant burden on patients, families, and their societies. This disease can be controlled if detected early. Therefore, disease mapping has recently become an important method in the fields of public health research and disease epidemiology. The correct choice of statistical model is a very important step toward producing a good disease map. Libya was selected for this work to examine the geographical variation in its lung cancer incidence. The objective of this paper is to estimate the relative risk for lung cancer. Four statistical models were used to estimate the relative risk for lung cancer, together with population censuses of the study area for the period 2006 to 2011. These are the Standardized Morbidity Ratio (SMR), the most popular statistic used in the field of disease mapping; the Poisson-gamma model, one of the earliest applications of Bayesian methodology; the Besag, York and Mollie (BYM) model; and the Mixture model. As an initial step, this study begins by providing a review of all proposed models, which we then apply to lung cancer data in Libya. Maps, tables, graphs, and goodness-of-fit (GOF) statistics, which are commonly used in statistical modelling to compare fitted models, were used to compare and present the preliminary results. The main general results presented in this study show that the Poisson-gamma model, the BYM model, and the Mixture model can overcome the problem of the first model (SMR) when there is no observed lung cancer case in certain districts. Results show that the Mixture model is the most robust and provides better relative risk estimates across a range of models. PMID:28440974

  12. The Relationship Between Procrastination, Learning Strategies and Statistics Anxiety Among Iranian College Students: A Canonical Correlation Analysis

    PubMed Central

    Vahedi, Shahrum; Farrokhi, Farahman; Gahramani, Farahnaz; Issazadegan, Ali

    2012-01-01

    Objective: Approximately 66-80% of graduate students experience statistics anxiety, and some researchers propose that many students identify statistics courses as the most anxiety-inducing courses in their academic curriculums. As such, it is likely that statistics anxiety is, in part, responsible for many students delaying enrollment in these courses for as long as possible. This paper proposes a canonical model treating academic procrastination (AP) and learning strategies (LS) as predictor variables and statistics anxiety (SA) as the explained variable. Methods: A questionnaire survey was used for data collection, and 246 female college students participated in this study. To examine the mutually independent relations between the procrastination, learning strategies and statistics anxiety variables, a canonical correlation analysis was computed. Results: Findings show that two canonical functions were statistically significant. The set of variables (metacognitive self-regulation, source management, preparing homework, preparing for tests and preparing term papers) helped predict changes in statistics anxiety with respect to fearful behavior, attitude towards math and class, and performance, but not anxiety. Conclusion: These findings could be used in educational and psychological interventions in the context of statistics anxiety reduction. PMID:24644468

  13. Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis

    NASA Technical Reports Server (NTRS)

    Hoffman, Ross N.

    2001-01-01

    We completed the formulation of the smoothness penalty functional this past quarter. We used a simplified procedure for estimating the statistics of the FCA solution spectral coefficients from the results of the unconstrained, low-truncation FCA (stopping criterion) solutions. During the current reporting period we have completed the calculation of GEOS-2 model-equivalent brightness temperatures for the 6.7 micron and 11 micron window channels used in the GOES imagery for all 10 cases from August 1999. These were simulated using the AER-developed Optimal Spectral Sampling (OSS) model.

  14. Parameter estimation and order selection for an empirical model of VO2 on-kinetics.

    PubMed

    Alata, O; Bernard, O

    2007-04-27

    In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly assumed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we evaluate, on simulated data, the performance of simulated annealing for estimating the model parameters and of information criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models and corrupted by white Gaussian noise. The performance is reported at various signal-to-noise ratios (SNR). Considering parameter estimation, results show that the confidence in the estimated parameters improves as the SNR of the fitted response increases. Considering model selection, results show that information criteria are well-suited statistical criteria for selecting the number of exponentials.
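
    To make the model-order question concrete, the sketch below fits hypothetical noisy VO2 data with one- and two-exponential versions of the offset-plus-delayed-exponential model and compares them with AIC; it uses ordinary least squares via scipy rather than the simulated annealing used in the paper, and all parameter values are illustrative.

    ```python
    # Sketch: order selection for a sum-of-delayed-exponentials VO2 model via AIC.
    import numpy as np
    from scipy.optimize import curve_fit

    def vo2_1exp(t, a0, a1, td1, tau1):
        return a0 + a1 * (1 - np.exp(-(t - td1) / tau1)) * (t >= td1)

    def vo2_2exp(t, a0, a1, td1, tau1, a2, td2, tau2):
        return (vo2_1exp(t, a0, a1, td1, tau1)
                + a2 * (1 - np.exp(-(t - td2) / tau2)) * (t >= td2))

    rng = np.random.default_rng(0)
    t = np.arange(0, 360, 1.0)                                   # seconds
    y = vo2_2exp(t, 0.5, 1.2, 15, 25, 0.6, 120, 90) + rng.normal(0, 0.08, t.size)

    def aic(model, p0):
        popt, _ = curve_fit(model, t, y, p0=p0, maxfev=20000)
        rss = np.sum((y - model(t, *popt)) ** 2)
        k = len(popt)
        return t.size * np.log(rss / t.size) + 2 * k             # Gaussian-noise AIC

    print("AIC 1 exp:", aic(vo2_1exp, [0.5, 1.0, 10, 30]))
    print("AIC 2 exp:", aic(vo2_2exp, [0.5, 1.0, 10, 30, 0.5, 100, 80]))
    ```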

  15. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
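
    One linear-algebra metric of the kind recommended above can be sketched as follows: the condition number of the column-scaled regressor matrix and per-term variance inflation factors both flag near-linear dependencies among candidate regression terms. The calibration-like data here are synthetic and the thresholds are common rules of thumb rather than the paper's own criteria.

    ```python
    # Sketch: detecting near-linear dependencies among candidate regression model terms.
    import numpy as np

    rng = np.random.default_rng(1)
    n1, n2 = rng.normal(size=200), rng.normal(size=200)          # two balance load components
    nearly_dep = 0.98 * n1 + 0.02 * n2 + 0.01 * rng.normal(size=200)  # almost a combination of n1, n2
    X = np.column_stack([n1, n2, n1 * n2, n1 ** 2, nearly_dep])

    Xs = X / np.linalg.norm(X, axis=0)                           # scale columns to unit length
    print("condition number:", np.linalg.cond(Xs))               # large value signals near-dependency

    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1 - resid.var() / X[:, j].var()
        print(f"term {j}: VIF = {1.0 / (1.0 - r2):.1f}")         # VIF >> 10 is a common warning sign
    ```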

  16. Model-based iterative reconstruction in low-dose CT colonography-feasibility study in 65 patients for symptomatic investigation.

    PubMed

    Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl

    2015-05-01

    To compare image quality of computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered back projection (FBP), adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled in the study and underwent STD and LD CTC with FBP, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image analyses, and polyp detection were assessed. Objective image noise analysis demonstrated significant noise reduction with the MBIR technique (P < .05) despite acquisition at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of the colonic wall (three-dimensional), where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Dose-length products were 257.7 for LD and 483.6 for STD. LD MBIR CTC objectively shows improved image noise with the parameters used in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference but, because of the small numbers, needs further validation. An average dose reduction of 47% can be achieved. This study confirms the feasibility of using MBIR for CTC in a symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  17. Computer aided weld defect delineation using statistical parametric active contours in radiographic inspection.

    PubMed

    Goumeidane, Aicha Baya; Nacereddine, Nafaa; Khamadja, Mohammed

    2015-01-01

    Accurate knowledge of the defect shape is decisive for the analysis step in automatic radiographic inspection. Image segmentation is carried out on radiographic images to extract defect indications. This paper deals with weld defect delineation in radiographic images. The proposed method is based on a new statistics-based explicit active contour. A combination of local and global modeling of the image pixel intensities is used to drive the model toward the desired boundaries. Furthermore, other strategies, such as restricting computation to a band around the active contour curve, are proposed to accelerate its evolution and make the convergence speed depend only on the defect size. The experimental results are very promising, since experiments on synthetic and radiographic images show the ability of the proposed model to extract a piecewise-homogeneous object from a very inhomogeneous background, even in poor-quality images.

  18. Near-equilibrium dumb-bell-shaped figures for cohesionless small bodies

    NASA Astrophysics Data System (ADS)

    Descamps, Pascal

    2016-02-01

    In a previous paper (Descamps, P. [2015]. Icarus 245, 64-79), we developed a specific method aimed at retrieving the main physical characteristics (shape, density, surface scattering properties) of highly elongated bodies from their rotational lightcurves through the use of dumb-bell-shaped equilibrium figures. The present work is a test of this method. For that purpose we introduce near-equilibrium dumb-bell-shaped figures, which are base dumb-bell equilibrium shapes modulated by lognormal statistics. Such synthetic irregular models are used to generate lightcurves to which our method is successfully applied. Shape statistical parameters of such near-equilibrium dumb-bell-shaped objects are in good agreement with those calculated, for example, for the Asteroid (216) Kleopatra from its dog-bone radar model. This may suggest that such bilobed and elongated asteroids can be approximated by equilibrium figures perturbed by the interplay with a substantial internal friction, modeled by a Gaussian random sphere.

  19. Automatic stage identification of Drosophila egg chamber based on DAPI images

    PubMed Central

    Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min

    2016-01-01

    The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176

  20. Multi-criterion model ensemble of CMIP5 surface air temperature over China

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming

    2018-05-01

    Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions, because GCMs' configurations, module characteristics, and dynamic forcings vary from one model to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R) and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, simultaneously satisfying all of these statistics cannot be guaranteed with many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained from a simple model average, as well as to any single GCM output. The improvements in the statistics vary across the different climatic regions of China. Future projections (2006-2100) with the proposed ensemble method identify that the largest (smallest) temperature changes will happen in South Central China (Inner Mongolia), North Eastern China (South Central China), and North Western China (South Central China) under the RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.
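
    The trade-off idea above can be illustrated with a small sketch: random weight vectors over a few GCM outputs are scored on RMSE and correlation against observations, and only the Pareto-nondominated weightings are kept. The data are synthetic, and this simple random search only stands in for the MOSPD evolutionary algorithm used in the paper.

    ```python
    # Sketch: Pareto screening of multi-model ensemble weights on RMSE and correlation.
    import numpy as np

    rng = np.random.default_rng(2)
    obs = rng.normal(15, 5, 200)                                   # synthetic "observed" SAT series
    gcms = np.stack([obs + rng.normal(b, s, 200)                   # three biased/noisy "GCMs"
                     for b, s in [(1.5, 2.0), (-2.0, 1.0), (0.5, 3.0)]])

    def score(w):
        ens = w @ gcms
        rmse = np.sqrt(np.mean((ens - obs) ** 2))
        cc = np.corrcoef(ens, obs)[0, 1]
        return rmse, cc

    cands = rng.dirichlet(np.ones(3), size=500)                    # random convex weightings
    scores = np.array([score(w) for w in cands])                   # columns: RMSE (minimize), CC (maximize)

    def dominated(i):
        better = (scores[:, 0] <= scores[i, 0]) & (scores[:, 1] >= scores[i, 1])
        strictly = (scores[:, 0] < scores[i, 0]) | (scores[:, 1] > scores[i, 1])
        return np.any(better & strictly)

    pareto = [i for i in range(len(cands)) if not dominated(i)]
    print(f"{len(pareto)} non-dominated ensemble weightings out of {len(cands)}")
    ```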

  1. The role of familiarity in binary choice inferences.

    PubMed

    Honda, Hidehito; Abe, Keiga; Matsuka, Toshihiko; Yamagishi, Kimihiko

    2011-07-01

    In research on the recognition heuristic (Goldstein & Gigerenzer, Psychological Review, 109, 75-90, 2002), knowledge of recognized objects has been categorized as "recognized" or "unrecognized" without regard to the degree of familiarity of the recognized object. In the present article, we propose a new inference model--familiarity-based inference. We hypothesize that when subjective knowledge levels (familiarity) of recognized objects differ, the degree of familiarity of recognized objects will influence inferences. Specifically, people are predicted to infer that the more familiar object in a pair of two objects has a higher criterion value on the to-be-judged dimension. In two experiments, using a binary choice task, we examined inferences about populations in a pair of two cities. Results support predictions of familiarity-based inference. Participants inferred that the more familiar city in a pair was more populous. Statistical modeling showed that individual differences in familiarity-based inference lie in the sensitivity to differences in familiarity. In addition, we found that familiarity-based inference can be generally regarded as an ecologically rational inference. Furthermore, when cue knowledge about the inference criterion was available, participants made inferences based on the cue knowledge about population instead of familiarity. Implications of the role of familiarity in psychological processes are discussed.

  2. Statistical approaches to account for missing values in accelerometer data: Applications to modeling physical activity.

    PubMed

    Yue Xu, Selene; Nelson, Sandahl; Kerr, Jacqueline; Godbole, Suneeta; Patterson, Ruth; Merchant, Gina; Abramson, Ian; Staudenmayer, John; Natarajan, Loki

    2018-04-01

    Physical inactivity is a recognized risk factor for many chronic diseases. Accelerometers are increasingly used as an objective means to measure daily physical activity. One challenge in using these devices is missing data due to device nonwear. We used a well-characterized cohort of 333 overweight postmenopausal breast cancer survivors to examine missing data patterns of accelerometer outputs over the day. Based on these observed missingness patterns, we created pseudo-simulated datasets with realistic missing data patterns. We developed statistical methods to design imputation and variance weighting algorithms to account for missing data effects when fitting regression models. The bias and precision of each method were evaluated and compared. Our results indicated that not accounting for missing data in the analysis yielded unstable estimates in the regression analysis. Incorporating variance weights and/or subject-level imputation improved precision by >50%, compared to ignoring missing data. We recommend that these simple, easy-to-implement statistical tools be used to improve analysis of accelerometer data.
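
    A minimal sketch of the two ideas mentioned above (subject-level imputation plus variance weighting) might look like the following: missing minutes of accelerometer counts are filled with the subject's own mean, and each subject enters a weighted regression with weight inversely proportional to the variance of their observed data. The data are simulated, and this is not the authors' exact algorithm.

    ```python
    # Sketch: subject-level mean imputation and inverse-variance weights for accelerometer data.
    import numpy as np

    rng = np.random.default_rng(3)
    n_subj, n_min = 50, 600
    activity = rng.gamma(2.0, 50.0, size=(n_subj, n_min))           # simulated counts/min
    activity[rng.random(activity.shape) < 0.3] = np.nan             # 30% nonwear (missing)

    subj_mean = np.nanmean(activity, axis=1)                         # subject-level imputation value
    filled = np.where(np.isnan(activity), subj_mean[:, None], activity)
    daily_total = filled.sum(axis=1)

    obs_var = np.nanvar(activity, axis=1) / np.sum(~np.isnan(activity), axis=1)
    weights = 1.0 / obs_var                                          # inverse-variance weights

    # Hypothetical outcome regressed on imputed activity with weighted least squares.
    bmi = 25 + 0.002 * (daily_total.mean() - daily_total) + rng.normal(0, 1.5, n_subj)
    X = np.column_stack([np.ones(n_subj), daily_total])
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ bmi)               # weighted least squares
    print("intercept, slope:", beta)
    ```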

  3. Estimation of social value of statistical life using willingness-to-pay method in Nanjing, China.

    PubMed

    Yang, Zhao; Liu, Pan; Xu, Xin

    2016-10-01

    Rational decision making regarding the safety related investment programs greatly depends on the economic valuation of traffic crashes. The primary objective of this study was to estimate the social value of statistical life in the city of Nanjing in China. A stated preference survey was conducted to investigate travelers' willingness to pay for traffic risk reduction. Face-to-face interviews were conducted at stations, shopping centers, schools, and parks in different districts in the urban area of Nanjing. The respondents were categorized into two groups, including motorists and non-motorists. Both the binary logit model and mixed logit model were developed for the two groups of people. The results revealed that the mixed logit model is superior to the fixed coefficient binary logit model. The factors that significantly affect people's willingness to pay for risk reduction include income, education, gender, age, drive age (for motorists), occupation, whether the charged fees were used to improve private vehicle equipment (for motorists), reduction in fatality rate, and change in travel cost. The Monte Carlo simulation method was used to generate the distribution of value of statistical life (VSL). Based on the mixed logit model, the VSL had a mean value of 3,729,493 RMB ($586,610) with a standard deviation of 2,181,592 RMB ($343,142) for motorists; and a mean of 3,281,283 RMB ($505,318) with a standard deviation of 2,376,975 RMB ($366,054) for non-motorists. Using the tax system to illustrate the contribution of different income groups to social funds, the social value of statistical life was estimated. The average social value of statistical life was found to be 7,184,406 RMB ($1,130,032). Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.

    2017-12-01

    Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the "true" numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.

  5. Using statistical and machine learning to help institutions detect suspicious access to electronic health records

    PubMed Central

    Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila

    2011-01-01

    Objective To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. Methods From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. Results The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. Limitations The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. Conclusion The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs. PMID:21672912
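
    A stripped-down version of the modelling step described above could use scikit-learn's logistic regression and SVM with 10-fold cross-validated AUC, as sketched below; the feature matrix and labels are synthetic stand-ins, not the study's 26 access-log features.

    ```python
    # Sketch: comparing logistic regression and SVM by cross-validated AUC on labeled access events.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    X = rng.normal(size=(1291, 26))                    # stand-in for 26 access-log features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1291) > 1.5).astype(int)  # "suspicious" label

    models = {
        "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    }
    for name, model in models.items():
        auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
        print(f"{name}: mean AUC = {auc.mean():.3f}")
    ```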

  6. Project-based introduction to aerospace engineering course: A model rocket

    NASA Astrophysics Data System (ADS)

    Jayaram, Sanjay; Boyer, Lawrence; George, John; Ravindra, K.; Mitchell, Kyle

    2010-05-01

    In this paper, a model rocket project suitable for sophomore aerospace engineering students is described. This project encompasses elements of drag estimation, thrust determination and analysis using digital data acquisition, statistical analysis of data, computer aided drafting, programming, team work and written communication skills. The student-built rockets are launched in the university baseball field with the objective of carrying a specific amount of payload so that the rocket achieves a specific altitude before the parachute is deployed. During the course of the project, the students are introduced to real-world engineering practice through written report submission of their designs. Over the years, the project has proven to enhance the learning objectives while remaining cost effective, and has provided good outcome measures.

  7. Gravitational lensing by a smoothly variable surface mass density

    NASA Technical Reports Server (NTRS)

    Paczynski, Bohdan; Wambsganss, Joachim

    1989-01-01

    The statistical properties of gravitational lensing due to smooth but nonuniform distributions of matter are considered. It is found that a majority of triple images had a parity characteristic for 'shear-induced' lensing. Almost all cases of triple or multiple imaging were associated with large surface density enhancements, and lensing objects were present between the images. Thus, the observed gravitational lens candidates for which no lensing object has been detected between the images are unlikely to be a result of asymmetric distribution of mass external to the image circle. In a model with smoothly variable surface mass density, moderately and highly amplified images tended to be single rather than multiple. An opposite trend was found in models which had singularities in the surface mass distribution.

  8. SPOTting Model Parameters Using a Ready-Made Python Package

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2017-04-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.
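
    A minimal calibration setup in the spirit of SPOTPY's tutorial interface is sketched below for the Rosenbrock function. The class/method names (parameters, simulation, evaluation, objectivefunction) and the sampler call reflect my reading of the package's documented conventions and should be checked against the SPOTPY documentation before use.

    ```python
    # Sketch (assumed SPOTPY tutorial-style interface; verify names against the SPOTPY docs).
    import spotpy

    class RosenbrockSetup:
        def __init__(self):
            self.params = [spotpy.parameter.Uniform('x', -5, 5),
                           spotpy.parameter.Uniform('y', -5, 5)]

        def parameters(self):
            return spotpy.parameter.generate(self.params)

        def simulation(self, vector):
            x, y = vector
            return [(1 - x) ** 2 + 100 * (y - x ** 2) ** 2]   # Rosenbrock value as the "simulation"

        def evaluation(self):
            return [0.0]                                       # known optimum of the Rosenbrock function

        def objectivefunction(self, simulation, evaluation):
            # Negative RMSE so that larger values are better for maximizing samplers.
            return -spotpy.objectivefunctions.rmse(evaluation, simulation)

    sampler = spotpy.algorithms.mc(RosenbrockSetup(), dbname='rosenbrock', dbformat='csv')
    sampler.sample(2000)                                       # number of model runs
    ```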

  9. SPOTting Model Parameters Using a Ready-Made Python Package.

    PubMed

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function.

  10. SPOTting Model Parameters Using a Ready-Made Python Package

    PubMed Central

    Houska, Tobias; Kraft, Philipp; Chamorro-Chavez, Alejandro; Breuer, Lutz

    2015-01-01

    The choice of a specific parameter estimation method is often driven more by its availability than by its performance. We developed SPOTPY (Statistical Parameter Optimization Tool), an open source Python package containing a comprehensive set of methods typically used to calibrate, analyze and optimize parameters for a wide range of ecological models. SPOTPY currently contains eight widely used algorithms, 11 objective functions, and can sample from eight parameter distributions. SPOTPY has a model-independent structure and can be run in parallel from the workstation to large computation clusters using the Message Passing Interface (MPI). We tested SPOTPY in five different case studies: parameterizing the Rosenbrock, Griewank and Ackley functions; a one-dimensional physically based soil moisture routine, where we searched for parameters of the van Genuchten-Mualem function; and a calibration of a biogeochemistry model with different objective functions. The case studies reveal that the implemented SPOTPY methods can be used for any model with just a minimal amount of code for maximal power of parameter optimization. They further show the benefit of having one package at hand that includes a number of well-performing parameter search methods, since not every case study can be solved sufficiently with every algorithm or every objective function. PMID:26680783

  11. Connection between two statistical approaches for the modelling of particle velocity and concentration distributions in turbulent flow: The mesoscopic Eulerian formalism and the two-point probability density function method

    NASA Astrophysics Data System (ADS)

    Simonin, Olivier; Zaichik, Leonid I.; Alipchenkov, Vladimir M.; Février, Pierre

    2006-12-01

    The objective of the paper is to elucidate a connection between two approaches that have been separately proposed for modelling the statistical spatial properties of inertial particles in turbulent fluid flows. One of the approaches proposed recently by Février, Simonin, and Squires [J. Fluid Mech. 533, 1 (2005)] is based on the partitioning of particle turbulent velocity field into spatially correlated (mesoscopic Eulerian) and random-uncorrelated (quasi-Brownian) components. The other approach stems from a kinetic equation for the two-point probability density function of the velocity distributions of two particles [Zaichik and Alipchenkov, Phys. Fluids 15, 1776 (2003)]. Comparisons between these approaches are performed for isotropic homogeneous turbulence and demonstrate encouraging agreement.

  12. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

    This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of GOF statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data, which have sometimes been observed in crash databases (while the NB distribution cannot, or has difficulty converging), the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
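
    For reference, the COM-Poisson probability mass function underlying the GLM above can be evaluated directly; the sketch below truncates the infinite normalizing sum, and the parameter values are illustrative only.

    ```python
    # Sketch: COM-Poisson pmf P(Y=y) = lambda^y / (y!)^nu / Z(lambda, nu), with Z truncated.
    import math

    def com_poisson_pmf(y, lam, nu, max_terms=200):
        log_terms = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(max_terms)]
        log_z = math.log(sum(math.exp(t) for t in log_terms))      # truncated normalizer Z(lambda, nu)
        log_p = y * math.log(lam) - nu * math.lgamma(y + 1) - log_z
        return math.exp(log_p)

    # nu < 1 gives over-dispersion, nu = 1 recovers the Poisson, nu > 1 gives under-dispersion.
    for nu in (0.5, 1.0, 2.0):
        pmf = [com_poisson_pmf(y, lam=3.0, nu=nu) for y in range(60)]
        mean = sum(y * p for y, p in enumerate(pmf))
        var = sum((y - mean) ** 2 * p for y, p in enumerate(pmf))
        print(f"nu={nu}: mean~{mean:.2f}, variance~{var:.2f}")
    ```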

  13. Learning Object Names at Different Hierarchical Levels Using Cross-Situational Statistics

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Zhang, Yayun; Yu, Chen

    2018-01-01

    Objects in the world usually have names at different hierarchical levels (e.g., "beagle," "dog," "animal"). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use…

  14. Mere exposure alters category learning of novel objects.

    PubMed

    Folstein, Jonathan R; Gauthier, Isabel; Palmeri, Thomas J

    2010-01-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  15. Mere Exposure Alters Category Learning of Novel Objects

    PubMed Central

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2010-01-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning. PMID:21833209

  16. External validation of ADO, DOSE, COTE and CODEX at predicting death in primary care patients with COPD using standard and machine learning approaches.

    PubMed

    Morales, Daniel R; Flynn, Rob; Zhang, Jianguo; Trucco, Emmanuel; Quint, Jennifer K; Zutis, Kris

    2018-05-01

    Several models for predicting the risk of death in people with chronic obstructive pulmonary disease (COPD) exist but have not undergone large-scale validation in primary care. The objective of this study was to externally validate these models using statistical and machine learning approaches. We used a primary care COPD cohort identified using data from the UK Clinical Practice Research Datalink. Age-standardised mortality rates were calculated for the population by gender, and the discrimination of ADO (age, dyspnoea, airflow obstruction), COTE (COPD-specific comorbidity test), DOSE (dyspnoea, airflow obstruction, smoking, exacerbations) and CODEX (comorbidity, dyspnoea, airflow obstruction, exacerbations) at predicting death over 1-3 years was measured using logistic regression and a support vector machine (SVM) learning method of analysis. The age-standardised mortality rate was 32.8 (95%CI 32.5-33.1) and 25.2 (95%CI 25.4-25.7) per 1000 person-years for men and women, respectively. Complete data were available for 54879 patients to predict 1-year mortality. ADO performed the best (c-statistic of 0.730) compared with DOSE (c-statistic 0.645), COTE (c-statistic 0.655) and CODEX (c-statistic 0.649) at predicting 1-year mortality. Discrimination of ADO and DOSE improved at predicting 1-year mortality when combined with COTE comorbidities (c-statistic 0.780 ADO + COTE; c-statistic 0.727 DOSE + COTE). Discrimination did not change significantly over 1-3 years. Comparable results were observed using SVM. In primary care, ADO appears superior at predicting death in COPD. Performance of ADO and DOSE improved when combined with COTE comorbidities, suggesting better models may be generated with additional data facilitated using novel approaches. Copyright © 2018. Published by Elsevier Ltd.

  17. Improving Ecological Response Monitoring of Environmental Flows

    NASA Astrophysics Data System (ADS)

    King, Alison J.; Gawne, Ben; Beesley, Leah; Koehn, John D.; Nielsen, Daryl L.; Price, Amina

    2015-05-01

    Environmental flows are now an important restoration technique in flow-degraded rivers, and with the increasing public scrutiny of their effectiveness and value, the importance of undertaking scientifically robust monitoring is now even more critical. Many existing environmental flow monitoring programs have poorly defined objectives, nonjustified indicator choices, weak experimental designs, poor statistical strength, and often focus on outcomes from a single event. These negative attributes make them difficult to learn from. We provide practical recommendations that aim to improve the performance, scientific robustness, and defensibility of environmental flow monitoring programs. We draw on the literature and knowledge gained from working with stakeholders and managers to design, implement, and monitor a range of environmental flow types. We recommend that (1) environmental flow monitoring programs should be implemented within an adaptive management framework; (2) objectives of environmental flow programs should be well defined, attainable, and based on an agreed conceptual understanding of the system; (3) program and intervention targets should be attainable, measurable, and inform program objectives; (4) intervention monitoring programs should improve our understanding of flow-ecological responses and related conceptual models; (5) indicator selection should be based on conceptual models, objectives, and prioritization approaches; (6) appropriate monitoring designs and statistical tools should be used to measure and determine ecological response; (7) responses should be measured within timeframes that are relevant to the indicator(s); (8) watering events should be treated as replicates of a larger experiment; (9) environmental flow outcomes should be reported using a standard suite of metadata. Incorporating these attributes into future monitoring programs should ensure their outcomes are transferable and measured with high scientific credibility.

  18. Long-term strategy for the statistical design of a forest health monitoring system

    Treesearch

    Hans T. Schreuder; Raymond L. Czaplewski

    1993-01-01

    A conceptual framework is given for a broad-scale survey of forest health that accomplishes three objectives: generate descriptive statistics; detect changes in such statistics; and simplify analytical inferences that identify, and possibly establish cause-effect relationships. Our paper discusses the development of sampling schemes to satisfy these three objectives,...

  19. Match statistics related to winning in the group stage of 2014 Brazil FIFA World Cup.

    PubMed

    Liu, Hongyou; Gomez, Miguel-Ángel; Lago-Peñas, Carlos; Sampaio, Jaime

    2015-01-01

    Identifying match statistics that strongly contribute to winning in football matches is a very important step towards a more predictive and prescriptive performance analysis. The current study aimed to determine relationships between 24 match statistics and the match outcome (win, loss and draw) in all games and close games of the group stage of the FIFA World Cup (2014, Brazil) by employing a generalised linear model. A cumulative logistic regression was run in the model, taking the value of each match statistic as the independent variable to predict the logarithm of the odds of winning. Relationships were assessed as the effects of a two-standard-deviation increase in the value of each variable on the change in the probability of a team winning a match. Non-clinical magnitude-based inferences were employed and were evaluated using the smallest worthwhile change. Results showed that, for all the games, nine match statistics had clearly positive effects on the probability of winning (Shot, Shot on Target, Shot from Counter Attack, Shot from Inside Area, Ball Possession, Short Pass, Average Pass Streak, Aerial Advantage and Tackle), four had clearly negative effects (Shot Blocked, Cross, Dribble and Red Card), and the other 12 statistics had either trivial or unclear effects. For the close games, the effects of Aerial Advantage and Yellow Card became trivial and clearly negative, respectively. Information from this tactical modelling can provide a more thorough and objective match understanding to coaches and performance analysts for evaluating post-match performances and for scouting upcoming oppositions.

  20. Feasibility Study on the Use of On-line Multivariate Statistical Process Control for Safeguards Applications in Natural Uranium Conversion Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd-Lively, Jennifer L

    2014-01-01

    The objective of this work was to determine the feasibility of using on-line multivariate statistical process control (MSPC) for safeguards applications in natural uranium conversion plants. Multivariate statistical process control is commonly used throughout industry for the detection of faults. For safeguards applications in uranium conversion plants, faults could include the diversion of intermediate products such as uranium dioxide, uranium tetrafluoride, and uranium hexafluoride. This study was limited to a 100 metric ton of uranium (MTU) per year natural uranium conversion plant (NUCP) using the wet solvent extraction method for the purification of uranium ore concentrate. A key component in the multivariate statistical methodology is the Principal Component Analysis (PCA) approach for the analysis of data, development of the base case model, and evaluation of future operations. The PCA approach was implemented through the use of singular value decomposition of the data matrix, where the data matrix represents normal operation of the plant. Component mole balances were used to model each of the process units in the NUCP. However, this approach could be applied to any data set. The monitoring framework developed in this research could be used to determine whether or not a diversion of material has occurred at an NUCP as part of an International Atomic Energy Agency (IAEA) safeguards system. This approach can be used to identify the key monitoring locations, as well as locations where monitoring is unimportant. Detection limits at the key monitoring locations can also be established using this technique. Several faulty scenarios were developed to test the monitoring framework after the base case or normal operating conditions of the PCA model were established. In all of the scenarios, the monitoring framework was able to detect the fault. Overall this study was successful at meeting the stated objective.
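
    One common way to implement the PCA monitoring framework sketched above is to build the model from normal-operation data via singular value decomposition and then score new observations with Hotelling's T² and the squared prediction error (Q). The process data below are simulated, and the control limits are simple empirical percentiles rather than the formal limits a safeguards system would use.

    ```python
    # Sketch: PCA-based multivariate statistical process control (T² and Q statistics).
    import numpy as np

    rng = np.random.default_rng(5)
    normal_ops = rng.normal(size=(500, 6)) @ rng.normal(size=(6, 6))   # correlated "mole balance" data

    mu, sd = normal_ops.mean(0), normal_ops.std(0)
    Xc = (normal_ops - mu) / sd
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)                  # PCA via SVD
    k = 3                                                              # retained principal components
    P = Vt[:k].T                                                       # loadings
    lam = (s[:k] ** 2) / (Xc.shape[0] - 1)                             # retained component variances

    def t2_q(x):
        z = (x - mu) / sd
        t = z @ P                                                      # scores
        t2 = np.sum(t ** 2 / lam)                                      # Hotelling's T²
        q = np.sum((z - t @ P.T) ** 2)                                 # squared prediction error
        return t2, q

    base = np.array([t2_q(x) for x in normal_ops])
    t2_lim, q_lim = np.percentile(base, 99, axis=0)                    # empirical 99% control limits

    fault = normal_ops[0] + np.array([0, 0, 4 * sd[2], 0, 0, 0])       # simulated diversion-like shift
    t2, q = t2_q(fault)
    print(f"T2={t2:.1f} (limit {t2_lim:.1f}), Q={q:.1f} (limit {q_lim:.1f})")
    ```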

  1. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  2. Statistical models for fever forecasting based on advanced body temperature monitoring.

    PubMed

    Jordan, Jorge; Miro-Martinez, Pau; Vargas, Borja; Varela-Entrecanales, Manuel; Cuesta-Frau, David

    2017-02-01

    Body temperature monitoring provides health carers with key clinical information about the physiological status of patients. Temperature readings are taken periodically to detect febrile episodes and consequently implement the appropriate medical countermeasures. However, fever is often difficult to assess at early stages, or remains undetected until the next reading, probably a few hours later. The objective of this article is to develop a statistical model to forecast fever before a temperature threshold is exceeded, to improve the therapeutic approach to the subjects involved. To this end, temperature series of 9 patients admitted to a general internal medicine ward were obtained with a continuous monitoring Holter device, collecting measurements of peripheral and core temperature once per minute. These series were used to develop different statistical models that could quantify the probability of having a fever spike in the following 60 minutes. A validation series was collected to assess the accuracy of the models. Finally, the results were compared with the analysis of some series by experienced clinicians. Two different models were developed: a logistic regression model and a linear discriminant analysis model. Both of them exhibited a fever peak forecasting accuracy greater than 84%. When compared with experts' assessment, both models identified 35 (97.2%) of 36 fever spikes. The models proposed are highly accurate in forecasting the appearance of fever spikes within a short period in patients with suspected or confirmed febrile-related illnesses. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Statistical characteristics of the spatial distribution of territorial contamination by radionuclides from the Chernobyl accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arutyunyan, R.V.; Bol`shov, L.A.; Vasil`ev, S.K.

    1994-06-01

    The objective of this study was to clarify a number of issues related to the spatial distribution of contaminants from the Chernobyl accident. The effects of local statistics were addressed by collecting and analyzing soil samples (for cesium-137) from a number of regions, and it was found that sample activity differed by a factor of 3-5. The effect of local non-uniformity was estimated by modeling the distribution of the average activity of a set of five samples for each of the regions, with the spread in the activities for a ±2 range being equal to 25%. The statistical characteristics of the distribution of contamination were then analyzed and found to follow a log-normal distribution with the standard deviation being a function of test area. All data for the Bryanskaya Oblast area were analyzed statistically and were adequately described by a log-normal function.
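
    The log-normal description above is easy to reproduce on any activity data set; the sketch below fits a log-normal to simulated sample activities with scipy and reports the spread. The values are synthetic, not the Bryanskaya Oblast measurements, and the units are hypothetical.

    ```python
    # Sketch: fitting a log-normal distribution to simulated soil-sample activities.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    activities = rng.lognormal(mean=np.log(200.0), sigma=0.45, size=120)   # simulated Cs-137 activities

    shape, loc, scale = stats.lognorm.fit(activities, floc=0)              # fix the location at zero
    print("median activity:", scale)                  # exp(mu) of the underlying normal
    print("geometric std (spread factor):", np.exp(shape))

    # With a geometric std around 1.5-1.7, the ratio between high and low samples can easily
    # reach a factor of 3-5, consistent with the sample-to-sample variation reported above.
    lo, hi = stats.lognorm.ppf([0.05, 0.95], shape, loc=0, scale=scale)
    print("90% range spans a factor of", hi / lo)
    ```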

  4. Estimation of absorption rate constant (ka) following oral administration by Wagner-Nelson, Loo-Riegelman, and statistical moments in the presence of a secondary peak.

    PubMed

    Mahmood, Iftekhar

    2004-01-01

    The objective of this study was to evaluate the performance of the Wagner-Nelson, Loo-Riegelman, and statistical moments methods in determining the absorption rate constant(s) in the presence of a secondary peak. These methods were also evaluated when there were two absorption rates without a secondary peak. Different sets of plasma concentration versus time data for a hypothetical drug following one- or two-compartment models were generated by simulation. The true ka was compared with the ka estimated by the Wagner-Nelson, Loo-Riegelman and statistical moments methods. The results of this study indicate that the Wagner-Nelson, Loo-Riegelman and statistical moments methods may not be used for the estimation of absorption rate constants in the presence of a secondary peak or when absorption takes place with two absorption rates.
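
    As a reminder of how one of these methods works, the Wagner-Nelson fraction absorbed at time t for a one-compartment model is (C(t) + ke·AUC(0-t)) / (ke·AUC(0-inf)), and ka can then be read off the terminal slope of ln(1 - F). The sketch below applies this to simulated one-compartment oral data with a single first-order absorption phase; it only illustrates the mechanics, not the secondary-peak case studied in the paper, and all parameter values are invented.

    ```python
    # Sketch: Wagner-Nelson estimation of ka from simulated one-compartment oral data.
    import numpy as np

    ka_true, ke = 1.2, 0.15                                    # 1/h, simulated "true" constants
    t = np.array([0.25, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 24.0])
    C = (ka_true / (ka_true - ke)) * (np.exp(-ke * t) - np.exp(-ka_true * t))  # unit dose/V

    auc_t = np.concatenate([[0], np.cumsum(np.diff(t) * (C[1:] + C[:-1]) / 2)])  # trapezoidal AUC(0-t)
    auc_t += t[0] * C[0] / 2                                   # triangle from time 0 to the first sample
    auc_inf = auc_t[-1] + C[-1] / ke                           # extrapolate to infinity

    F = (C + ke * auc_t) / (ke * auc_inf)                      # Wagner-Nelson fraction absorbed
    mask = (F > 0.05) & (F < 0.95)                             # use the informative portion
    slope = np.polyfit(t[mask], np.log(1 - F[mask]), 1)[0]
    print("estimated ka:", -slope, " (true:", ka_true, ")")
    ```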

  5. An empirical evaluation of landscape energetic models: Mallard and American black duck space use during the non-breeding period

    USGS Publications Warehouse

    Beatty, William S.; Webb, Elisabeth B.; Kesler, Dylan C.; Naylor, Luke W.; Raedeke, Andrew H.; Humburg, Dale D.; Coluccy, John M.; Soulliere, G.

    2015-01-01

    Bird conservation Joint Ventures are collaborative partnerships between public agencies and private organizations that facilitate habitat management to support waterfowl and other bird populations. A subset of Joint Ventures has developed energetic carrying capacity models (ECCs) to translate regional waterfowl population goals into habitat objectives during the non-breeding period. Energetic carrying capacity models consider food biomass, metabolism, and available habitat to estimate waterfowl carrying capacity within an area. To evaluate Joint Venture ECCs in the context of waterfowl space use, we monitored 33 female mallards (Anas platyrhynchos) and 55 female American black ducks (A. rubripes) using global positioning system satellite telemetry in the central and eastern United States. To quantify space use, we measured first-passage time (FPT: time required for an individual to transit across a circle of a given radius) at biologically relevant spatial scales for mallards (3.46 km) and American black ducks (2.30 km) during the non-breeding period, which included autumn migration, winter, and spring migration. We developed a series of models to predict FPT using Joint Venture ECCs and compared them to a biological null model that quantified habitat composition and a statistical null model, which included intercept and random terms. Energetic carrying capacity models predicted mallard space use more efficiently during autumn and spring migrations, but the statistical null was the top model for winter. For American black ducks, ECCs did not improve predictions of space use; the biological null was top ranked for winter and the statistical null was top ranked for spring migration. Thus, ECCs provided limited insight into predicting waterfowl space use during the non-breeding season. Refined estimates of spatial and temporal variation in food abundance, habitat conditions, and anthropogenic disturbance will likely improve ECCs and benefit conservation planners in linking non-breeding waterfowl habitat objectives with distribution and population parameters. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  6. A data-based conservation planning tool for Florida panthers

    USGS Publications Warehouse

    Murrow, Jennifer L.; Thatcher, Cindy A.; Van Manen, Frank T.; Clark, Joseph D.

    2013-01-01

    Habitat loss and fragmentation are the greatest threats to the endangered Florida panther (Puma concolor coryi). We developed a data-based habitat model and user-friendly interface so that land managers can objectively evaluate Florida panther habitat. We used a geographic information system (GIS) and the Mahalanobis distance statistic (D2) to develop a model based on broad-scale landscape characteristics associated with panther home ranges. Variables in our model were Euclidean distance to natural land cover, road density, distance to major roads, human density, amount of natural land cover, amount of semi-natural land cover, amount of permanent or semi-permanent flooded area–open water, and a cost–distance variable. We then developed a Florida Panther Habitat Estimator tool, which automates and replicates the GIS processes used to apply the statistical habitat model. The estimator can be used by persons with moderate GIS skills to quantify effects of land-use changes on panther habitat at local and landscape scales. Example applications of the tool are presented.
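
    The core of the habitat model described above is the Mahalanobis distance D² = (x − μ)ᵀ Σ⁻¹ (x − μ), computed from the mean vector and covariance of landscape variables at known panther home ranges. The sketch below shows the calculation on synthetic landscape variables; the variable values are invented, not the Florida data.

    ```python
    # Sketch: Mahalanobis distance (D²) habitat suitability from landscape covariates.
    import numpy as np

    rng = np.random.default_rng(7)
    # Rows: known panther home-range cells; columns: landscape variables (e.g., road density,
    # distance to natural cover, human density) -- synthetic values for illustration.
    home_ranges = rng.normal(loc=[1.0, 250.0, 10.0], scale=[0.3, 80.0, 4.0], size=(60, 3))

    mu = home_ranges.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(home_ranges, rowvar=False))

    def d2(cell):
        diff = np.asarray(cell) - mu
        return float(diff @ cov_inv @ diff)            # small D² = conditions similar to used habitat

    print("typical cell:", d2([1.1, 230.0, 9.0]))
    print("atypical cell:", d2([3.0, 900.0, 40.0]))
    ```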

  7. The impact of alcohol taxation on liver cirrhosis mortality.

    PubMed

    Ponicki, William R; Gruenewald, Paul J

    2006-11-01

    The objective of this study is to investigate the impact of distilled spirits, wine, and beer taxes on cirrhosis mortality using a large-panel data set and statistical models that control for various other factors that may affect that mortality. The analyses were performed on a panel of 30 U.S. license states during the period 1971-1998 (N = 840 state-by-year observations). Exogenous measures included current and lagged versions of beverage taxes and income, as well as controls for states' age distribution, religion, race, health care availability, urbanity, tourism, and local bans on alcohol sales. Regression analyses were performed using random-effects models with corrections for serial autocorrelation and heteroscedasticity among states. Cirrhosis rates were found to be significantly related to taxes on distilled spirits but not to taxation of wine and beer. Consistent results were found using different statistical models and model specifications. Consistent with prior research, cirrhosis mortality in the United States appears more closely linked to consumption of distilled spirits than to that of other alcoholic beverages.
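
    For readers who want the general flavor of such a panel analysis, a minimal random-intercept sketch is shown below; the column names are invented, and the paper's corrections for serial autocorrelation and heteroscedasticity are not reproduced here.

        # Hedged sketch of a state-level random-effects regression of cirrhosis
        # mortality on beverage taxes; columns are hypothetical placeholders.
        import statsmodels.formula.api as smf

        def fit_panel(df):
            model = smf.mixedlm(
                "cirrhosis_rate ~ spirits_tax + wine_tax + beer_tax + income",
                data=df,
                groups=df["state"],   # random intercept per state
            )
            return model.fit()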

  8. Learning Object Names at Different Hierarchical Levels Using Cross-Situational Statistics.

    PubMed

    Chen, Chi-Hsin; Zhang, Yayun; Yu, Chen

    2018-05-01

    Objects in the world usually have names at different hierarchical levels (e.g., beagle, dog, animal). This research investigates adults' ability to use cross-situational statistics to simultaneously learn object labels at individual and category levels. The results revealed that adults were able to use co-occurrence information to learn hierarchical labels in contexts where the labels for individual objects and labels for categories were presented in completely separated blocks, in interleaved blocks, or mixed in the same trial. Temporal presentation schedules significantly affected the learning of individual object labels, but not the learning of category labels. Learners' subsequent generalization of category labels indicated sensitivity to the structure of statistical input. Copyright © 2017 Cognitive Science Society, Inc.

  9. What can 35 years and over 700,000 measurements tell us about noise exposure in the mining industry?

    PubMed Central

    Roberts, Benjamin; Sun, Kan; Neitzel, Richard L.

    2017-01-01

    Objective: To analyze over 700,000 cross-sectional measurements from the Mine Safety and Health Administration (MSHA) and develop statistical models to predict noise exposure for a worker. Design: Descriptive statistics were used to summarize the data. Two linear regression models were used to predict noise exposure based on the MSHA permissible exposure limit (PEL) and action level (AL), respectively. Two-fold cross-validation was used to compare the exposure estimates from the models to actual measurements in the holdout data. The mean difference and t-statistic were calculated for each job title to determine whether the model exposure predictions were significantly different from the actual data. Study Sample: Measurements were acquired from MSHA through a Freedom of Information Act request. Results: From 1979 to 2014 the average noise measurement decreased. Measurements taken before the implementation of MSHA’s revised noise regulation in 2000 were on average 4.5 dBA higher than after the law came into effect. Both models produced mean exposure predictions that differed from the holdout data by less than 1 dBA. Conclusion: Overall noise levels in mines have been decreasing; however, this decrease has not been uniform across all mining sectors. The exposure predictions from the model will be useful for predicting hearing loss in workers in the mining industry. PMID:27871188
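
    As a concrete illustration of the modeling and validation steps above, the sketch below fits a linear exposure model and computes the held-out mean difference and t-statistic per job title; the column names (year, sector, job_title, dBA) are invented placeholders for the MSHA fields, not the study's actual variables.

        # Two-fold cross-validated noise-exposure model with per-job-title bias check.
        import numpy as np
        import pandas as pd
        from scipy import stats
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold

        def crossval_bias(df, predictors=("year", "sector"), target="dBA"):
            X = pd.get_dummies(df[list(predictors)], drop_first=True)
            y = df[target].to_numpy()
            diffs = np.empty(len(y), dtype=float)
            for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
                model = LinearRegression().fit(X.iloc[train], y[train])
                diffs[test] = y[test] - model.predict(X.iloc[test])   # held-out residuals
            rows = []
            for job, d in pd.Series(diffs).groupby(df["job_title"].to_numpy()):
                t, p = stats.ttest_1samp(d, 0.0)                      # is the mean difference zero?
                rows.append((job, d.mean(), t, p))
            return pd.DataFrame(rows, columns=["job_title", "mean_diff", "t", "p"])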

  10. Projecting wildfire area burned in the south-eastern United States, 2011-60

    Treesearch

    Jeffrey P. Prestemon; Uma Shankar; Aijun Xiu; K. Talgo; D. Yang; Ernest Dixon; Donald McKenzie; Karen L. Abt

    2016-01-01

    Future changes in society and climate are expected to affect wildfire activity in the south-eastern United States. The objective of this research was to understand how changes in both climate and society may affect wildfire in the coming decades. We estimated a three-stage statistical model of wildfire area burned by ecoregion province for lightning and human causes (...

  11. Health Service Access across Racial/Ethnic Groups of Children in the Child Welfare System

    ERIC Educational Resources Information Center

    Wells, Rebecca; Hillemeier, Marianne M.; Bai, Yu; Belue, Rhonda

    2009-01-01

    Objective: This study examined health service access among children of different racial/ethnic groups in the child welfare system in an attempt to identify and explain disparities. Methods: Data were from the National Survey of Child and Adolescent Well-Being (NSCAW). N for descriptive statistics = 2,505. N for multiple regression model = 537.…

  12. Genethics: project accountability via evaluation of teacher and student growth.

    PubMed

    Hendrix, J R; Mertens, T R

    1992-10-01

    Accountability through demonstrated learning is increasingly being demanded by agencies funding science education projects. For example, the National Science Foundation requires evidence of the educational impact of programs designed to increase the scientific understanding and competencies of teachers and their students. The purpose of this paper is to share our human genetics educational experiences and accountability model with colleagues interested in serving the genetics educational needs of in-service secondary school science teachers and their students. Our accountability model is facilitated through (1) identifying the educational needs of the population of teachers to be served, (2) articulating goals and measurable objectives to meet these needs, and (3) then designing and implementing pretest/posttest questions to measure whether the objectives have been achieved. Comparison of entry and exit levels of performance on a 50-item test showed that teacher-participants learned a statistically significant amount of genetics content in our NSF-funded workshops. Teachers, in turn, administered a 25-item pretest/posttest to their secondary school students, and collective data from 121 classrooms across the United States revealed statistically significant increases in student knowledge of genetics content. Methods describing our attempts to evaluate teachers' use of pedagogical techniques and bioethical decision-making skills are briefly addressed.

  13. The role of shape complexity in the detection of closed contours.

    PubMed

    Wilder, John; Feldman, Jacob; Singh, Manish

    2016-09-01

    The detection of contours in noise has been extensively studied, but the detection of closed contours, such as the boundaries of whole objects, has received relatively little attention. Closed contours pose substantial challenges not present in the simple (open) case, because they form the outlines of whole shapes and thus take on a range of potentially important configural properties. In this paper we consider the detection of closed contours in noise as a probabilistic decision problem. Previous work on open contours suggests that contour complexity, quantified as the negative log probability (Description Length, DL) of the contour under a suitably chosen statistical model, impairs contour detectability; more complex (statistically surprising) contours are harder to detect. In this study we extended this result to closed contours, developing a suitable probabilistic model of whole shapes that gives rise to several distinct though interrelated measures of shape complexity. We asked subjects to detect either natural shapes (Exp. 1) or experimentally manipulated shapes (Exp. 2) embedded in noise fields. We found systematic effects of global shape complexity on detection performance, demonstrating how aspects of global shape and form influence the basic process of object detection. Copyright © 2015 Elsevier Ltd. All rights reserved.
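
    The complexity measure referenced above has a compact definition; stated schematically rather than with the authors' full shape model,

        \mathrm{DL}(C) = -\log p(C \mid \text{shape model}),

    so contours that are statistically surprising under the model carry a longer description length and, per the results above, are harder to detect in noise.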

  14. Validation of sea ice models using an uncertainty-based distance metric for multiple model variables: NEW METRIC FOR SEA ICE MODEL VALIDATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.

    Here, we implement a variance-based distance metric (Dn) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ² statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
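
    The precise definition of the metric is given in the paper; the fragment below only illustrates the general idea of normalizing model-observation differences by their combined variances before aggregating over grid cells, with names and details that are assumptions rather than the published formula.

        # Illustrative variance-normalized mismatch statistic (not the exact Dn).
        import numpy as np

        def dn_like(model, obs, var_model, var_obs):
            """Average squared model-observation difference, each cell normalized
            by the combined model and observational variances."""
            z2 = (model - obs) ** 2 / (var_model + var_obs)
            return z2.mean()   # gamma-distributed under Gaussian error assumptions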

  15. Validation of sea ice models using an uncertainty-based distance metric for multiple model variables: NEW METRIC FOR SEA ICE MODEL VALIDATION

    DOE PAGES

    Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.; ...

    2017-04-01

    Here, we implement a variance-based distance metric (Dn) to objectively assess skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observations and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ² statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step to establish a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE) encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.

  16. Summary of hydrologic modeling for the Delaware River Basin using the Water Availability Tool for Environmental Resources (WATER)

    USGS Publications Warehouse

    Williamson, Tanja N.; Lant, Jeremiah G.; Claggett, Peter; Nystrom, Elizabeth A.; Milly, Paul C.D.; Nelson, Hugh L.; Hoffman, Scott A.; Colarullo, Susan J.; Fischer, Jeffrey M.

    2015-11-18

    The Water Availability Tool for Environmental Resources (WATER) is a decision support system for the nontidal part of the Delaware River Basin that provides a consistent and objective method of simulating streamflow under historical, forecasted, and managed conditions. In order to quantify the uncertainty associated with these simulations, however, streamflow and the associated hydroclimatic variables of potential evapotranspiration, actual evapotranspiration, and snow accumulation and snowmelt must be simulated and compared to long-term, daily observations from sites. This report details model development and optimization, statistical evaluation of simulations for 57 basins ranging from 2 to 930 km2 and 11.0 to 99.5 percent forested cover, and how this statistical evaluation of daily streamflow relates to simulating environmental changes and management decisions that are best examined at monthly time steps normalized over multiple decades. The decision support system provides a database of historical spatial and climatic data for simulating streamflow for 2001–11, in addition to land-cover and general circulation model forecasts that focus on 2030 and 2060. WATER integrates geospatial sampling of landscape characteristics, including topographic and soil properties, with a regionally calibrated hillslope-hydrology model, an impervious-surface model, and hydroclimatic models that were parameterized by using three hydrologic response units: forested, agricultural, and developed land cover. This integration enables the regional hydrologic modeling approach used in WATER without requiring site-specific optimization or those stationary conditions inferred when using a statistical model.

  17. Using decision trees to understand structure in missing data

    PubMed Central

    Tierney, Nicholas J; Harden, Fiona A; Harden, Maurice J; Mengersen, Kerrie L

    2015-01-01

    Objectives: Demonstrate the application of decision trees—classification and regression trees (CARTs), and their cousins, boosted regression trees (BRTs)—to understand structure in missing data. Setting: Data taken from employees at 3 different industrial sites in Australia. Participants: 7915 observations were included. Materials and methods: The approach was evaluated using an occupational health data set comprising results of questionnaires, medical tests and environmental monitoring. Statistical methods included standard statistical tests and the ‘rpart’ and ‘gbm’ packages for CART and BRT analyses, respectively, from the statistical software ‘R’. A simulation study was conducted to explore the capability of decision tree models in describing data with missingness artificially introduced. Results: CART and BRT models were effective in highlighting a missingness structure in the data, related to the type of data (medical or environmental), the site in which it was collected, the number of visits, and the presence of extreme values. The simulation study revealed that CART models were able to identify variables and values responsible for inducing missingness. There was greater variation in variable importance for unstructured as compared to structured missingness. Discussion: Both CART and BRT models were effective in describing structural missingness in data. CART models may be preferred over BRT models for exploratory analysis of missing data, and selecting variables important for predicting missingness. BRT models can show how values of other variables influence missingness, which may prove useful for researchers. Conclusions: Researchers are encouraged to use CART and BRT models to explore and understand missing data. PMID:26124509
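
    The same idea can be sketched outside R (the study used the 'rpart' and 'gbm' packages): fit a tree to an indicator of missingness in one variable, using the remaining variables as predictors, and read off the splits. The column handling below is simplified and hypothetical.

        # Decision tree describing the structure of missingness in one column.
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier, export_text

        def missingness_tree(df, target_col, max_depth=3):
            y = df[target_col].isna().astype(int)              # 1 = value is missing
            X = pd.get_dummies(df.drop(columns=[target_col]), dummy_na=True)
            X = X.fillna(X.median(numeric_only=True))          # crude fill for numeric gaps
            tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0).fit(X, y)
            return export_text(tree, feature_names=list(X.columns))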

  18. Analysis of Variance in Statistical Image Processing

    NASA Astrophysics Data System (ADS)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  19. The improved business valuation model for RFID company based on the community mining method.

    PubMed

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a large number of papers have addressed business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general valuation framework that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit and estimate the company value. Three improvements are made in the proposed valuation model. First, the model determines the credibility with which each node belongs to a community and clusters the network according to an evolutionary Bayesian method. Second, an improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, within IBFOA, a bi-objective method is used to assess prediction accuracy, and the two objectives are combined into a single objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuation tasks in a real-life simulation of RFID companies.

  20. Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2000-01-01

    The Applied Meteorology Unit is conducting an evaluation of the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for operations at the Cape Canaveral Air Force Station and the Kennedy Space Center in the event of an accidental hazardous material release or aborted vehicle launch. The prognostic data from RAMS is available to ERDAS for display and is used to initialize the 45th Range Safety (45 SW/SE) dispersion model. Thus, the accuracy of the 45 SW/SE dispersion model is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation task consists of an objective and subjective component for the Florida warm and cool seasons of 1999-2000. The objective evaluation includes gridded and point error statistics at surface and upper-level observational sites, a comparison of the model errors to a coarser grid configuration of RAMS, and a benchmark of RAMS against the widely accepted Eta model. The warm-season subjective evaluation involves a verification of the onset and movement of the Florida east coast sea breeze and RAMS forecast precipitation. This interim report provides a summary of the RAMS objective and subjective evaluation for the 1999 Florida warm season only.

  1. The improved business valuation model for RFID company based on the community mining method

    PubMed Central

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

    Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a large number of papers have addressed business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general valuation framework that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit and estimate the company value. Three improvements are made in the proposed valuation model. First, the model determines the credibility with which each node belongs to a community and clusters the network according to an evolutionary Bayesian method. Second, an improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, within IBFOA, a bi-objective method is used to assess prediction accuracy, and the two objectives are combined into a single objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on 3 different time-interval valuation tasks in a real-life simulation of RFID companies. PMID:28459815

  2. A Pre-Screening Questionnaire to Predict Non-24-Hour Sleep-Wake Rhythm Disorder (N24HSWD) among the Blind

    PubMed Central

    Flynn-Evans, Erin E.; Lockley, Steven W.

    2016-01-01

    Study Objectives: There is currently no questionnaire-based pre-screening tool available to detect non-24-hour sleep-wake rhythm disorder (N24HSWD) among blind patients. Our goal was to develop such a tool, derived from gold standard, objective hormonal measures of circadian entrainment status, for the detection of N24HSWD among those with visual impairment. Methods: We evaluated the contribution of 40 variables in their ability to predict N24HSWD among 127 blind women, classified using urinary 6-sulfatoxymelatonin period, an objective marker of circadian entrainment status in this population. We subjected the 40 candidate predictors to 1,000 bootstrapped iterations of a logistic regression forward selection model to predict N24HSWD, with model inclusion set at the p < 0.05 level. We removed any predictors that were not selected at least 1% of the time in the 1,000 bootstrapped models and applied a second round of 1,000 bootstrapped logistic regression forward selection models to the remaining 23 candidate predictors. We included all questions that were selected at least 10% of the time in the final model. We subjected the selected predictors to a final logistic regression model to predict N24HSWD over 1,000 bootstrapped models to calculate the concordance statistic and adjusted optimism of the final model. We used this information to generate a predictive model and determined the sensitivity and specificity of the model. Finally, we applied the model to a cohort of 1,262 blind women who completed the survey, but did not collect urine samples. Results: The final model consisted of eight questions. The concordance statistic, adjusted for bootstrapping, was 0.85. The positive predictive value was 88% and the negative predictive value was 79%. Applying this model to our larger dataset of women, we found that 61% of those without light perception, and 27% with some degree of light perception, would be referred for further screening for N24HSWD. Conclusions: Our model has predictive utility sufficient to serve as a pre-screening questionnaire for N24HSWD among the blind. Citation: Flynn-Evans EE, Lockley SW. A pre-screening questionnaire to predict non-24-hour sleep-wake rhythm disorder (N24HSWD) among the blind. J Clin Sleep Med 2016;12(5):703–710. PMID:26951421

  3. Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows

    NASA Astrophysics Data System (ADS)

    Srivastav, R. K.; Srinivasan, K.; Sudheer, K.

    2009-05-01

    Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The various kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: i) parametric models, which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR and PARMA), and disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) nonparametric models (examples are bootstrap/kernel-based methods such as k-nearest neighbor (k-NN) and the matched block bootstrap (MABB), as well as non-parametric disaggregation models), which characterize the laws of chance describing the streamflow process without recourse to prior assumptions as to the form or structure of these laws; and iii) hybrid models, which blend parametric and non-parametric models advantageously to model streamflows effectively. Despite the many developments in stochastic modeling of streamflows over the last four decades, accurate prediction of storage and critical drought characteristics has remained a persistent challenge for the stochastic modeler. This is partly because the stochastic streamflow model parameters are usually estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation), and the efficacy of the models is subsequently validated based on the accuracy with which they predict the water-use characteristics, which requires a large number of trial simulations and inspection of many plots and tables; even so, accurate prediction of the storage and critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (a blend of a simple parametric model, PAR(1), and the matched block bootstrap (MABB)) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained by searching over a multi-dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) and non-parametric (MABB) components). This is achieved using an efficient evolutionary search-based optimization tool, namely the non-dominated sorting genetic algorithm II (NSGA-II). This approach helps reduce the drudgery involved in manual selection of the hybrid model, in addition to accurately predicting the basic summary statistics, dependence structure, marginal distribution, and water-use characteristics. The proposed optimization framework is used to model the multi-season streamflows of the River Beaver and River Weber in the USA. For both rivers, the proposed GA-based hybrid model (where both parametric and non-parametric components are explored simultaneously) yields a much better prediction of the storage capacity than the MLE-based hybrid models (where hybrid model selection is done in two stages, probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
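
    The two objectives driving the NSGA-II search are simple to state; the sketch below shows them for the water-use statistic named above (reservoir storage capacity), with function and variable names invented for illustration.

        # Bi-objective evaluation of a candidate synthetic-streamflow model:
        # relative bias and relative RMSE of the storage capacity estimated
        # from synthetic replicates versus the value from the observed record.
        import numpy as np

        def storage_objectives(storage_from_replicates, storage_observed):
            s = np.asarray(storage_from_replicates, dtype=float)
            rel_bias = (s.mean() - storage_observed) / storage_observed
            rel_rmse = np.sqrt(np.mean((s - storage_observed) ** 2)) / storage_observed
            return rel_bias, rel_rmse   # both minimized by the NSGA-II search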

  4. Objective scoring of transformed foci in BALB/c 3T3 cell transformation assay by statistical image descriptors.

    PubMed

    Urani, C; Corvi, R; Callegaro, G; Stefanini, F M

    2013-09-01

    In vitro cell transformation assays (CTAs) have been shown to model important stages of in vivo carcinogenesis and have the potential to predict carcinogenicity in humans. Advantages of CTAs are their ability to reveal both genotoxic and non-genotoxic carcinogens while reducing both experimental costs and the number of animals used. The endpoint of the CTA is foci formation, which requires classification under light microscopy based on morphology. Thus, current limitations to the wide adoption of the assay stem partly from a fair degree of subjectivity in foci scoring. An objective evaluation may be obtained by separating foci from the background monolayer in the digital image and quantifying values of statistical descriptors selected to capture eye-scored morphological features. The aim of this study was to develop statistical descriptors for transformed BALB/c 3T3 foci that cover foci size, multilayering, and invasive cell growth into the background monolayer. The proposed descriptors were applied to a database of 407 foci images to explore the numerical features and to illustrate open problems and potential solutions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Predicting objective function weights from patient anatomy in prostate IMRT treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.

    2013-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l2 distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
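
    The geometry-to-weights mapping described above is essentially a one-variable regression plus a sum-to-one constraint. The sketch below illustrates that structure only; the 1% femoral-head weights and the overlap-ratio predictor follow the abstract, while the function names and fitting details are assumptions.

        # Fit w_rectum ≈ a + b * overlap_ratio from inverse-optimization (IOM) weights,
        # then derive the remaining weights so that all four sum to one.
        import numpy as np

        def fit_rectum_weight(overlap_ratio, rectum_weight_iom):
            b, a = np.polyfit(overlap_ratio, rectum_weight_iom, 1)   # slope, intercept
            return a, b

        def predict_weights(a, b, overlap_ratio, femoral_head=0.01):
            w_rectum = np.clip(a + b * overlap_ratio, 0.0, 1.0 - 2 * femoral_head)
            w_bladder = 1.0 - w_rectum - 2 * femoral_head
            return {"rectum": w_rectum, "bladder": w_bladder,
                    "femoral_head_left": femoral_head, "femoral_head_right": femoral_head}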

  7. Geospatial clustering in sugar-sweetened beverage consumption among Boston youth.

    PubMed

    Tamura, Kosuke; Duncan, Dustin T; Athens, Jessica K; Bragg, Marie A; Rienti, Michael; Aldstadt, Jared; Scott, Marc A; Elbel, Brian

    2017-09-01

    The objective was to detect geospatial clustering of sugar-sweetened beverage (SSB) intake in Boston adolescents (age = 16.3 ± 1.3 years [range: 13-19]; female = 56.1%; White = 10.4%, Black = 42.6%, Hispanics = 32.4%, and others = 14.6%) using spatial scan statistics. We used data on self-reported SSB intake from the 2008 Boston Youth Survey Geospatial Dataset (n = 1292). Two binary variables were created: consumption of SSB (never versus any) on (1) soda and (2) other sugary drinks (e.g., lemonade). A Bernoulli spatial scan statistic was used to identify geospatial clusters of soda and other sugary drinks in unadjusted models and models adjusted for age, gender, and race/ethnicity. There was no statistically significant clustering of soda consumption in the unadjusted model. In contrast, a cluster of non-soda SSB consumption emerged in the middle of Boston (relative risk = 1.20, p = .005), indicating that adolescents within the cluster had a 20% higher probability of reporting non-soda SSB intake than outside the cluster. The cluster was no longer significant in the adjusted model, suggesting spatial variation in non-soda SSB drink intake correlates with the geographic distribution of students by race/ethnicity, age, and gender.
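
    For readers unfamiliar with the Bernoulli spatial scan statistic, the fragment below sketches the log-likelihood ratio evaluated for a single candidate circle; a full scan maximizes this quantity over many circles and assesses significance by Monte Carlo replication. The function is illustrative, not the software used in the study.

        # Bernoulli scan log-likelihood ratio for one candidate cluster.
        import numpy as np

        def bernoulli_llr(cases_in, n_in, cases_total, n_total):
            c, n, C, N = cases_in, n_in, cases_total, n_total
            p_in, p_out, p_all = c / n, (C - c) / (N - n), C / N
            if p_in <= p_out:                      # only elevated-risk clusters are of interest
                return 0.0
            def ll(k, m, p):                       # binomial log-likelihood (constants dropped)
                return k * np.log(p) + (m - k) * np.log(1 - p) if 0 < p < 1 else 0.0
            return ll(c, n, p_in) + ll(C - c, N - n, p_out) - ll(C, N, p_all)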

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.

  9. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is formulated as the equally weighted sum of mean and covariance-matrix objective functions. The mean and covariance of the parameters are estimated simultaneously by minimizing the weighted objective function with a hybrid of particle-swarm and Nelder-Mead simplex optimization, thereby achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.

  10. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  11. Statistical and fractal analysis of autofluorescent myocardium images in posthumous diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.

    2014-08-01

    This research presents the results of an investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarization structure of autofluorescence images of biological tissue layers was detected and investigated. A model describing the formation of polarization-inhomogeneous autofluorescence images of optically anisotropic biological layers is proposed. On this basis, the method of laser autofluorescence polarimetry was justified analytically and tested experimentally, and its effectiveness in the postmortem diagnosis of infarction was analyzed. Objective criteria (statistical moments) for differentiating autofluorescence images of histological sections of the myocardium were defined, and the operational characteristics (sensitivity, specificity, accuracy) of the technique were determined.

  12. A Severe Sepsis Mortality Prediction Model and Score for Use with Administrative Data

    PubMed Central

    Ford, Dee W.; Goodwin, Andrew J.; Simpson, Annie N.; Johnson, Emily; Nadig, Nandita; Simpson, Kit N.

    2016-01-01

    Objective: Administrative data are used for research, quality improvement, and health policy in severe sepsis. However, there is no sepsis-specific tool applicable to administrative data with which to adjust for illness severity. Our objective was to develop, internally validate, and externally validate a severe sepsis mortality prediction model and an associated mortality prediction score. Design: Retrospective cohort study using 2012 administrative data from five US states. Three cohorts of patients with severe sepsis were created: 1) ICD-9-CM codes for severe sepsis/septic shock, 2) the ‘Martin’ approach, and 3) the ‘Angus’ approach. The model was developed and internally validated in the ICD-9-CM cohort and externally validated in the other cohorts. Integer point values for each predictor variable were generated to create a sepsis severity score. Setting: Acute care, non-federal hospitals in NY, MD, FL, MI, and WA. Subjects: Patients in one of three severe sepsis cohorts: 1) explicitly coded (n=108,448), 2) Martin cohort (n=139,094), and 3) Angus cohort (n=523,637). Interventions: None. Measurements and Main Results: Maximum likelihood estimation logistic regression was used to develop a predictive model for in-hospital mortality. Model calibration and discrimination were assessed via Hosmer-Lemeshow goodness-of-fit (GOF) and C-statistics, respectively. The primary cohort was subset into risk deciles and observed versus predicted mortality was plotted. GOF demonstrated p>0.05 for each cohort, indicating sound calibration. The C-statistic ranged from a low of 0.709 (sepsis severity score) to a high of 0.838 (Angus cohort), suggesting good to excellent model discrimination. Comparison of observed versus expected mortality was robust, although accuracy decreased in the highest risk decile. Conclusions: Our sepsis severity model and score provide reliable risk adjustment for administrative data. PMID:26496452
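
    The two model checks named above, discrimination via the C-statistic and calibration via Hosmer-Lemeshow goodness of fit, can be computed directly from predicted probabilities and observed outcomes; a minimal sketch, not the authors' code:

        # C-statistic (area under the ROC curve) and Hosmer-Lemeshow test.
        import numpy as np
        import pandas as pd
        from scipy import stats
        from sklearn.metrics import roc_auc_score

        def c_statistic(y, p):
            return roc_auc_score(y, p)

        def hosmer_lemeshow(y, p, groups=10):
            df = pd.DataFrame({"y": y, "p": p})
            df["g"] = pd.qcut(df["p"], groups, labels=False, duplicates="drop")
            t = df.groupby("g").agg(obs=("y", "sum"), exp=("p", "sum"),
                                    n=("y", "size"), pbar=("p", "mean"))
            chi2 = (((t.obs - t.exp) ** 2) / (t.n * t.pbar * (1 - t.pbar))).sum()
            dof = len(t) - 2
            return chi2, stats.chi2.sf(chi2, dof)   # p > 0.05 suggests adequate calibration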

  13. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption.

    PubMed

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Mardor, Yael; Miklavcic, Damijan

    2016-03-01

    Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes the dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of rats brains MRIs post electroporation treatments. Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of treatment pulses number (r(2) = 0.79; p < 0.008, r(2) = 0.91; p < 0.001). The results presented a strong plateau effect as the pulse number increased. The ratio between complete cell death and no cell death thresholds was relatively narrow (between 0.88-0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. The statistical model can be used to describe the dependence of treatment-effects on the number of pulses independent of the experimental setup.
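
    The Peleg-Fermi form referenced above is commonly written as follows (a schematic statement of the model family; the paper's fitted parameters and the coupling to the numerical field solution are not reproduced here):

        S(E, n) = \frac{1}{1 + \exp\!\left(\frac{E - E_c(n)}{A(n)}\right)}

    where S is the surviving (unaffected) cell fraction, E is the local electric field, n is the number of pulses, and the critical field E_c(n) and kinetic constant A(n) decrease as n grows, which is consistent with the plateau effect reported above.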

  14. Journal selection decisions: a biomedical library operations research model. I. The framework.

    PubMed Central

    Kraft, D H; Polacsek, R A; Soergel, L; Burns, K; Klair, A

    1976-01-01

    The problem of deciding which journal titles to select for acquisition in a biomedical library is modeled. The approach taken is based on cost/benefit ratios. Measures of journal worth, methods of data collection, and journal cost data are considered. The emphasis is on the development of a practical process for selecting journal titles, based on the objectivity and rationality of the model; and on the collection of the approprate data and library statistics in a reasonable manner. The implications of this process towards an overall management information system (MIS) for biomedical serials handling are discussed. PMID:820391

  15. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    NASA Astrophysics Data System (ADS)

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular for analyzing the effects of several covariates on survival time. However, the PH assumption of a constant hazard ratio over time is not always satisfied by the data, and its violation leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. The accelerated failure time (AFT) models, on the other hand, do not require this assumption and can be used as an alternative to the PH model when it is violated. The objective of this research was to compare the performance of the PH model and the AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. The discussion was limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best-fitted model (log-normal AFT) showed that covariates such as women’s educational level, husband’s educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
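
    The two model families being compared have compact standard forms; schematically,

        h(t \mid x) = h_0(t)\,\exp(\beta^\top x) \qquad \text{(proportional hazards)}

        \log T = \mu + \gamma^\top x + \sigma\,\varepsilon \qquad \text{(accelerated failure time)}

    where the distribution chosen for the error term ε determines the AFT member: an extreme-value error yields the Weibull model (with the exponential model as the special case σ = 1), and a normal error yields the log-normal model selected here by AIC.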

  16. A statistical method for predicting seizure onset zones from human single-neuron recordings

    NASA Astrophysics Data System (ADS)

    Valdez, André B.; Hickman, Erin N.; Treiman, David M.; Smith, Kris A.; Steinmetz, Peter N.

    2013-02-01

    Objective. Clinicians often use depth-electrode recordings to localize human epileptogenic foci. To advance the diagnostic value of these recordings, we applied logistic regression models to single-neuron recordings from depth-electrode microwires to predict seizure onset zones (SOZs). Approach. We collected data from 17 epilepsy patients at the Barrow Neurological Institute and developed logistic regression models to calculate the odds of observing SOZs in the hippocampus, amygdala and ventromedial prefrontal cortex, based on statistics such as the burst interspike interval (ISI). Main results. Analysis of these models showed that, for a single-unit increase in burst ISI ratio, the left hippocampus was approximately 12 times more likely to contain a SOZ; and the right amygdala, 14.5 times more likely. Our models were most accurate for the hippocampus bilaterally (at 85% average sensitivity), and performance was comparable with current diagnostics such as electroencephalography. Significance. Logistic regression models can be combined with single-neuron recording to predict likely SOZs in epilepsy patients being evaluated for resective surgery, providing an automated source of clinically useful information.

  17. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    NASA Astrophysics Data System (ADS)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.

  18. Production of biofuels and biochemicals: in need of an ORACLE.

    PubMed

    Miskovic, Ljubisa; Hatzimanikatis, Vassily

    2010-08-01

    The engineering of cells for the production of fuels and chemicals involves simultaneous optimization of multiple objectives, such as specific productivity, extended substrate range and improved tolerance - all under a great degree of uncertainty. The achievement of these objectives under physiological and process constraints will be impossible without the use of mathematical modeling. However, the limited information and the uncertainty in the available information require new methods for modeling and simulation that will characterize the uncertainty and will quantify, in a statistical sense, the expectations of success of alternative metabolic engineering strategies. We discuss these considerations toward developing a framework for the Optimization and Risk Analysis of Complex Living Entities (ORACLE) - a computational method that integrates available information into a mathematical structure to calculate control coefficients. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. Performance Analysis of Garbage Collection and Dynamic Reordering in a Lisp System. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Llames, Rene Lim

    1991-01-01

    Generation-based garbage collection and dynamic reordering of objects are two techniques for improving the efficiency of memory management in Lisp and similar dynamic language systems. An analysis of the effect of generation configuration is presented, focusing on the effects of the number of generations and generation capacities. Analytic timing and survival models are used to represent garbage collection runtime and to derive structural results on its behavior. The survival model provides bounds on the age of objects surviving a garbage collection at a particular level. Empirical results show that execution time is most sensitive to the capacity of the youngest generation. A technique called scanning for transport statistics, for evaluating the effectiveness of reordering independent of main memory size, is presented.

  20. Non-linear learning in online tutorial to enhance students’ knowledge on normal distribution application topic

    NASA Astrophysics Data System (ADS)

    Kartono; Suryadi, D.; Herman, T.

    2018-01-01

    This study aimed to analyze how non-linear learning (NLL) in online tutorial (OT) content enhances students' knowledge of normal distribution application (KONDA). KONDA is a competence expected to be achieved after students have studied the topic of normal distribution application in the course Education Statistics. The analysis used a quasi-experimental design: subjects were divided into an experimental class given OT content in the NLL model and a control class given OT content in the conventional learning (CL) model. The data were the results of online objective tests measuring students' statistical prior knowledge (SPK) and students' pre- and post-tests of KONDA. Statistical analysis of the KONDA gain scores showed that, for students with low and moderate SPK scores, students who learned OT content with the NLL model outperformed those who learned OT content with the CL model. For students with high SPK scores, the gain scores of the two groups were relatively similar. Based on these findings, it can be concluded that the NLL model applied to OT content can enhance KONDA for students at low and moderate SPK levels. A more challenging didactical situation is needed for students at the high SPK level to achieve a significant gain score.

  1. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    PubMed

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate properties of our originally formulated statistical model-based iterative approach to the image-reconstruction-from-projections problem that relate to its conditioning, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel-beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the reference algebraic methodology, which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    NASA Astrophysics Data System (ADS)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  3. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale, with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing the hydrogeological characteristics of the site. The physical resolution (e.g. the grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: the time associated with computational costs, the statistical convergence of the model predictions, and the physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a model of the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and of the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
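
    A minimal sketch of the trade-off described above, under assumed functional forms for the discretization error, the Monte Carlo error and the per-realization cost (the constants, exponents and cost model are illustrative, not the error model derived in the study):

      import numpy as np

      C_DISC, P_ORDER = 1.0, 2.0   # assumed discretization-error constant and order
      C_MC = 2.0                   # assumed Monte Carlo error constant
      BUDGET = 1e6                 # total allowed "cost units"

      def cost_per_run(h):
          # Assumed cost of one realization on a grid of characteristic size h (3D).
          return 1.0 / h**3

      def total_error(h, n_realizations):
          # Discretization error grows with h; statistical error shrinks with sqrt(N).
          return C_DISC * h**P_ORDER + C_MC / np.sqrt(n_realizations)

      # Brute-force search over grid sizes; for each h, spend the whole budget on realizations.
      best = None
      for h in np.linspace(0.02, 0.5, 200):
          n = max(int(BUDGET / cost_per_run(h)), 1)
          err = total_error(h, n)
          if best is None or err < best[0]:
              best = (err, h, n)

      err, h_opt, n_opt = best
      print(f"optimal grid size h = {h_opt:.3f}, realizations N = {n_opt}, error = {err:.4f}")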

  4. Application of statistical shape analysis for the estimation of bone and forensic age using the shapes of the 2nd, 3rd, and 4th cervical vertebrae in a young Japanese population.

    PubMed

    Rhee, Chang-Hoon; Shin, Sang Min; Choi, Yong-Seok; Yamaguchi, Tetsutaro; Maki, Koutaro; Kim, Yong-Il; Kim, Seong-Sik; Park, Soo-Byung; Son, Woo-Sung

    2015-12-01

    From computed tomographic images, the dentocentral synchondrosis can be identified in the second cervical vertebra. It demarcates the border between the odontoid process and the body of the 2nd cervical vertebra and makes this vertebra a good model for the prediction of bone and forensic age. Nevertheless, until now, the 2nd cervical vertebra delimited by the dentocentral synchondrosis has not been applied to age estimation. In this study, statistical shape analysis was used to build bone and forensic age estimation regression models. Following the principles of statistical shape analysis and principal components analysis, we used cone-beam computed tomography (CBCT) to evaluate a Japanese population (35 males and 45 females, from 5 to 19 years old). The narrowest prediction intervals among the multivariate regression models were 19.63 for bone age and 2.99 for forensic age. There was no significant difference between form space and shape space in the bone and forensic age estimation models. However, in the gender comparison, the bone and forensic age estimation models for males had higher explanatory power. This study derived an improved objective and quantitative method for bone and forensic age estimation based on only the 2nd, 3rd and 4th cervical vertebral shapes. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. An index-flood model for deficit volumes assessment

    NASA Astrophysics Data System (ADS)

    Strnad, Filip; Moravec, Vojtěch; Hanel, Martin

    2017-04-01

    The estimation of return periods of hydrological extreme events and the evaluation of risks related to such events are objectives of many water resources studies. The aim of this study is to develop a statistical model for drought indices using extreme value theory and the index-flood method, and to use this model to estimate return levels of maximum deficit volumes of total runoff and baseflow. Deficit volumes for one hundred and thirty-three catchments in the Czech Republic for the period 1901-2015, simulated by the hydrological model Bilan, are considered. The characteristics of the simulated deficit periods (severity, intensity and length) correspond well to those based on observed data. It is assumed that annual maximum deficit volumes in each catchment follow the generalized extreme value (GEV) distribution. The catchments are divided into three homogeneous regions considering long-term mean runoff, potential evapotranspiration and baseflow. In line with the index-flood method, it is further assumed that the deficit volumes within each homogeneous region are identically distributed after scaling with a site-specific factor. The goodness-of-fit of the statistical model is assessed by the Anderson-Darling statistic. For the estimation of critical values of the test, several resampling strategies allowing for appropriate handling of years without drought are presented. Finally, the significance of the trends in the deficit volumes is assessed by a likelihood ratio test.
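
    A minimal sketch of the per-catchment building block described above, assuming SciPy: fit a GEV distribution to annual maximum deficit volumes and read off return levels. The synthetic series stands in for Bilan-simulated deficit volumes; the site-specific index-flood scaling and the regional pooling are omitted.

      import numpy as np
      from scipy.stats import genextreme

      # Synthetic annual maximum deficit volumes for one catchment (115 "years").
      rng = np.random.default_rng(1)
      annual_max_deficit = genextreme.rvs(c=-0.1, loc=10.0, scale=3.0, size=115,
                                          random_state=rng)

      # Fit the GEV distribution (shape, location, scale) to the annual maxima.
      c, loc, scale = genextreme.fit(annual_max_deficit)

      # The return level for a T-year return period is the (1 - 1/T) quantile.
      for T in (10, 50, 100):
          level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
          print(f"{T:>3}-year return level: {level:.2f}")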

  6. Ultrasound detection of simulated intra-ocular foreign bodies by minimally trained personnel.

    PubMed

    Sargsyan, Ashot E; Dulchavsky, Alexandria G; Adams, James; Melton, Shannon; Hamilton, Douglas R; Dulchavsky, Scott A

    2008-01-01

    To test the ability of non-expert ultrasound operators of divergent backgrounds to detect the presence, size, location, and composition of foreign bodies in an ocular model. High school students (N = 10) and NASA astronauts (N = 4) completed a brief ultrasound training session which focused on basic ultrasound principles and the detection of foreign bodies. The operators used portable ultrasound devices to detect foreign objects of varying location, size (0.5-2 mm), and material (glass, plastic, metal) in a gelatinous ocular model. Operator findings were compared to known foreign object parameters and ultrasound experts (N = 2) to determine accuracy across and between groups. Ultrasound had high sensitivity (astronauts 85%, students 87%, and experts 100%) and specificity (astronauts 81%, students 83%, and experts 95%) for the detection of foreign bodies. All user groups were able to accurately detect the presence of foreign bodies in this model (astronauts 84%, students 81%, and experts 97%). Astronaut and student sensitivity results for material (64% vs. 48%), size (60% vs. 46%), and position (77% vs. 64%) were not statistically different. Experts' results for material (85%), size (90%), and position (98%) were higher; however, the small sample size precluded statistical conclusions. Ultrasound can be used by operators with varying training to detect the presence, location, and composition of intraocular foreign bodies with high sensitivity, specificity, and accuracy.

  7. Validity of Models for Predicting BRCA1 and BRCA2 Mutations

    PubMed Central

    Parmigiani, Giovanni; Chen, Sining; Iversen, Edwin S.; Friebel, Tara M.; Finkelstein, Dianne M.; Anton-Culver, Hoda; Ziogas, Argyrios; Weber, Barbara L.; Eisen, Andrea; Malone, Kathleen E.; Daling, Janet R.; Hsu, Li; Ostrander, Elaine A.; Peterson, Leif E.; Schildkraut, Joellen M.; Isaacs, Claudine; Corio, Camille; Leondaridis, Leoni; Tomlinson, Gail; Amos, Christopher I.; Strong, Louise C.; Berry, Donald A.; Weitzel, Jeffrey N.; Sand, Sharon; Dutson, Debra; Kerber, Rich; Peshkin, Beth N.; Euhus, David M.

    2008-01-01

    Background Deleterious mutations of the BRCA1 and BRCA2 genes confer susceptibility to breast and ovarian cancer. At least 7 models for estimating the probabilities of having a mutation are used widely in clinical and scientific activities; however, the merits and limitations of these models are not fully understood. Objective To systematically quantify the accuracy of the following publicly available models to predict mutation carrier status: BRCAPRO, family history assessment tool, Finnish, Myriad, National Cancer Institute, University of Pennsylvania, and Yale University. Design Cross-sectional validation study, using model predictions and BRCA1 or BRCA2 mutation status of patients different from those used to develop the models. Setting Multicenter study across Cancer Genetics Network participating centers. Patients 3 population-based samples of participants in research studies and 8 samples from genetic counseling clinics. Measurements Discrimination between individuals testing positive for a mutation in BRCA1 or BRCA2 from those testing negative, as measured by the c-statistic, and sensitivity and specificity of model predictions. Results The 7 models differ in their predictions. The better-performing models have a c-statistic around 80%. BRCAPRO has the largest c-statistic overall and in all but 2 patient subgroups, although the margin over other models is narrow in many strata. Outside of high-risk populations, all models have high false-negative and false-positive rates across a range of probability thresholds used to refer for mutation testing. Limitation Three recently published models were not included. Conclusions All models identify women who probably carry a deleterious mutation of BRCA1 or BRCA2 with adequate discrimination to support individualized genetic counseling, although discrimination varies across models and populations. PMID:17909205
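
    For readers less familiar with the validation metrics named above, the sketch below computes a c-statistic (area under the ROC curve) plus sensitivity and specificity at a referral threshold, using made-up carrier probabilities; it does not reproduce any of the seven models.

      import numpy as np
      from sklearn.metrics import confusion_matrix, roc_auc_score

      rng = np.random.default_rng(2)
      carrier = rng.integers(0, 2, size=500)                 # 1 = tested positive for a mutation
      predicted_prob = np.clip(0.3 * carrier + rng.normal(0.3, 0.2, 500), 0.0, 1.0)

      c_statistic = roc_auc_score(carrier, predicted_prob)   # discrimination

      threshold = 0.4                                        # illustrative referral threshold
      referred = (predicted_prob >= threshold).astype(int)
      tn, fp, fn, tp = confusion_matrix(carrier, referred).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"c-statistic={c_statistic:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")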

  8. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography.

    PubMed

    Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-12-01

    Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR ( P  = 0.004). The objective measures showed significant differences between FBP and 60% ASIR ( P  < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR.
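
    A minimal sketch of the objective measures named above (contrast, noise and contrast-to-noise ratio) computed from vessel and background regions of interest; the ROI values are simulated so that the ASIR series merely has lower noise, and are not taken from the study.

      import numpy as np

      def roi_quality(roi_vessel, roi_background):
          contrast = roi_vessel.mean() - roi_background.mean()
          noise = roi_background.std(ddof=1)
          return contrast, noise, contrast / noise   # contrast, noise, CNR

      rng = np.random.default_rng(3)
      fbp_vessel, fbp_bg = rng.normal(400, 35, 500), rng.normal(50, 35, 500)
      asir_vessel, asir_bg = rng.normal(400, 28, 500), rng.normal(50, 28, 500)  # lower noise

      for name, (vessel, bg) in {"FBP": (fbp_vessel, fbp_bg),
                                 "60% ASIR": (asir_vessel, asir_bg)}.items():
          contrast, noise, cnr = roi_quality(vessel, bg)
          print(f"{name:>8}: contrast={contrast:6.1f} HU  noise={noise:5.1f} HU  CNR={cnr:5.2f}")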

  9. Evaluating the Information Content of Newly Retrieved SAGEII NO2 Measurements in the Lower Stratosphere

    NASA Technical Reports Server (NTRS)

    Newchurch, Michael J.; Cunnold, Derek M.; Zawodny, Joseph M.

    2000-01-01

    The objective of this research project is: to calculate ozone trends in the stratosphere from Dobson Umkehr measurements, to determine the vertical profile of trends at Arosa by using a sophisticated statistical model (MARCH) to separate solar, aerosol, and QBO effects on Dobson Umkehr measurements, and to compare Umkehr trends with SBUV and SAGE I/II trends in the stratosphere.

  10. Manifold Matching: Joint Optimization of Fidelity and Commensurability

    DTIC Science & Technology

    2011-11-12

    identified separately in p◦m, will be geometrically incommensurate (see Figure 7). Thus the null distribution of the test statistic will be inflated...into the objective function obviates the geometric incommensurability phenomenon. Thus we can establish that, for a range of Dirichlet product model...from the geometric incommensurability phenomenon. Then q p implies that cca suffers from the spurious correlation phenomenon with high probability

  11. APL-UW Deep Water Propagation 2015-2017: Philippine Sea Data Analysis

    DTIC Science & Technology

    2015-09-30

    ...the fundamental statistics of broadband low-frequency acoustical signals evolve during propagation through a dynamically-varying deep ocean. OBJECTIVES...Current models of signal randomization over long ranges in the deep ocean were developed for and tested in the North Pacific Ocean gyre. The

  12. Content-Aware Video Adaptation under Low-Bitrate Constraint

    NASA Astrophysics Data System (ADS)

    Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin

    2007-12-01

    With the development of wireless network and the improvement of mobile device capability, video streaming is more and more widespread in such an environment. Under the condition of limited resource and inherent constraints, appropriate video adaptations have become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resource and improve visual perceptual quality. First, the attention model is derived from analyzing the characteristics of brightness, location, motion vector, and energy features in compressed domain to reduce computation complexity. Then, through the integration of attention model, capability of client device and correlational statistic model, attractive regions of video scenes are derived. The information object- (IOB-) weighted rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts video bitstream in frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.

  13. The use of subjective expert opinions in cost optimum design of aerospace structures. [probabilistic failure models

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1975-01-01

    The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.

  14. Forecasting experiments of a dynamical-statistical model of the sea surface temperature anomaly field based on the improved self-memorization principle

    NASA Astrophysics Data System (ADS)

    Hong, Mei; Chen, Xi; Zhang, Ren; Wang, Dong; Shen, Shuanghe; Singh, Vijay P.

    2018-04-01

    With the objective of tackling the problem of inaccurate long-term El Niño-Southern Oscillation (ENSO) forecasts, this paper develops a new dynamical-statistical forecast model of the sea surface temperature anomaly (SSTA) field. To avoid single initial prediction values, a self-memorization principle is introduced to improve the dynamical reconstruction model, thus making the model more appropriate for describing such chaotic systems as ENSO events. The improved dynamical-statistical model of the SSTA field is used to predict SSTA in the equatorial eastern Pacific and during El Niño and La Niña events. The long-term step-by-step forecast results and cross-validated retroactive hindcast results of time series T1 and T2 are found to be satisfactory, with a Pearson correlation coefficient of approximately 0.80 and a mean absolute percentage error (MAPE) of less than 15 %. The corresponding forecast SSTA field is accurate in that not only is the forecast shape similar to the actual field but also the contour lines are essentially the same. This model can also be used to forecast the ENSO index. The temporal correlation coefficient is 0.8062, and the MAPE value of 19.55 % is small. The difference between forecast results in spring and those in autumn is not high, indicating that the improved model can overcome the spring predictability barrier to some extent. Compared with six mature models published previously, the present model has an advantage in prediction precision and length, and is a novel exploration of the ENSO forecast method.
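
    A minimal sketch of the two skill scores quoted above, the Pearson correlation coefficient and the mean absolute percentage error (MAPE), evaluated on a synthetic forecast/observation pair rather than the model's SSTA output:

      import numpy as np

      rng = np.random.default_rng(4)
      observed = np.sin(np.linspace(0, 6 * np.pi, 120)) + 26.0     # stand-in SST series (deg C)
      forecast = observed + rng.normal(0.0, 0.3, observed.size)    # stand-in forecast

      pearson_r = np.corrcoef(observed, forecast)[0, 1]
      mape = np.mean(np.abs((forecast - observed) / observed)) * 100.0
      print(f"Pearson r = {pearson_r:.3f}, MAPE = {mape:.2f} %")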

  15. Detecting influential observations in nonlinear regression modeling of groundwater flow

    USGS Publications Warehouse

    Yager, Richard M.

    1998-01-01

    Nonlinear regression is used to estimate optimal parameter values in models of groundwater flow to ensure that differences between predicted and observed heads and flows do not result from nonoptimal parameter values. Parameter estimates can be affected, however, by observations that disproportionately influence the regression, such as outliers that exert undue leverage on the objective function. Certain statistics developed for linear regression can be used to detect influential observations in nonlinear regression if the models are approximately linear. This paper discusses the application of Cook's D, which measures the effect of omitting a single observation on a set of estimated parameter values, and the statistical parameter DFBETAS, which quantifies the influence of an observation on each parameter. The influence statistics were used to (1) identify the influential observations in the calibration of a three-dimensional, groundwater flow model of a fractured-rock aquifer through nonlinear regression, and (2) quantify the effect of omitting influential observations on the set of estimated parameter values. Comparison of the spatial distribution of Cook's D with plots of model sensitivity shows that influential observations correspond to areas where the model heads are most sensitive to certain parameters, and where predicted groundwater flow rates are largest. Five of the six discharge observations were identified as influential, indicating that reliable measurements of groundwater flow rates are valuable data in model calibration. DFBETAS are computed and examined for an alternative model of the aquifer system to identify a parameterization error in the model design that resulted in overestimation of the effect of anisotropy on horizontal hydraulic conductivity.
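
    A minimal sketch of the influence diagnostics named above, computed with statsmodels on a toy linear regression; for an approximately linear groundwater-flow model the same statistics would be obtained from the regression used in calibration.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      X = sm.add_constant(rng.normal(size=(40, 2)))
      y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(0.0, 0.5, 40)
      y[5] += 8.0                                    # inject one influential "observation"

      influence = sm.OLS(y, X).fit().get_influence()
      cooks_d, _ = influence.cooks_distance          # effect of omitting each observation
      dfbetas = influence.dfbetas                    # per-parameter influence of each observation

      worst = int(np.argmax(cooks_d))
      print("most influential observation (Cook's D):", worst)
      print("its DFBETAS:", np.round(dfbetas[worst], 2))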

  16. Objective Lightning Forecasting at Kennedy Space Center and Cape Canaveral Air Force Station using Cloud-to-Ground Lightning Surveillance System Data

    NASA Technical Reports Server (NTRS)

    Lambert, Winfred; Wheeler, Mark; Roeder, William

    2005-01-01

    The 45th Weather Squadron (45 WS) at Cape Canaveral Air Force Station (CCAFS) in Florida issues a probability of lightning occurrence in their daily 24-hour and weekly planning forecasts. This information is used for general planning of operations at CCAFS and Kennedy Space Center (KSC). These facilities are located in east-central Florida at the east end of a corridor known as 'Lightning Alley', an indication that lightning has a large impact on space-lift operations. Much of the current lightning probability forecast is based on a subjective analysis of model and observational data and an objective forecast tool developed over 30 years ago. The 45 WS requested that a new lightning probability forecast tool, based on statistical analysis of more recent historical warm season (May-September) data, be developed in order to increase the objectivity of the daily thunderstorm probability forecast. The resulting tool is a set of statistical lightning forecast equations, one for each month of the warm season, that provide a lightning occurrence probability for the day by 1100 UTC (0700 EDT) during the warm season.
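
    A minimal sketch of the kind of statistical forecast equation described above: a logistic regression that turns morning predictors into a daily lightning probability. The predictors, data and threshold are illustrative assumptions, not the 45 WS tool or its actual predictor set.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(6)
      n_days = 400
      predictors = np.column_stack([
          rng.normal(20, 5, n_days),      # e.g. morning surface dewpoint (hypothetical)
          rng.normal(1500, 400, n_days),  # e.g. a CAPE-like instability proxy (hypothetical)
          rng.normal(0, 1, n_days),       # e.g. a flow-regime index (hypothetical)
      ])
      lightning = (0.004 * predictors[:, 1] + rng.normal(0, 2, n_days) > 6).astype(int)

      equation = LogisticRegression(max_iter=1000).fit(predictors, lightning)
      today = np.array([[22.0, 1800.0, 0.5]])
      print("forecast lightning probability:", round(equation.predict_proba(today)[0, 1], 2))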

  17. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, a high dimensional feature vector is fused into a single visual comfort score by performing random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
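
    A minimal sketch of the two-stage idea described above: saliency-weighted disparity statistics as comfort-aware features, followed by a random forest that maps the feature vector to a comfort score. The disparity maps, saliency maps and comfort labels are synthetic placeholders, and the adaptive 3D saliency model is not reproduced.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(7)

      def comfort_features(disparity, saliency):
          w = saliency / saliency.sum()
          mean = np.sum(w * disparity)                         # saliency-weighted mean disparity
          std = np.sqrt(np.sum(w * (disparity - mean) ** 2))   # saliency-weighted spread
          return np.array([mean, std, disparity.max(), disparity.min()])

      features, scores = [], []
      for _ in range(200):                                     # 200 synthetic stereoscopic images
          disparity = rng.normal(0.0, rng.uniform(0.5, 3.0), (48, 64))
          saliency = rng.random((48, 64))
          feat = comfort_features(disparity, saliency)
          features.append(feat)
          scores.append(5.0 - 0.8 * feat[1] + rng.normal(0.0, 0.2))  # toy MOS-like label

      forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(features, scores)
      print("predicted visual comfort score:", round(forest.predict([features[0]])[0], 2))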

  18. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
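
    A minimal sketch of the baseline described above: a Gaussian mixture model fitted over a six-dimensional spatiotemporal feature space, with a simple per-frame average log-likelihood used as a crude stand-in for the paper's frame saliency. The modified expectation-maximization algorithm itself is not reproduced.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(8)
      n_frames, pixels_per_frame = 30, 400
      features, frame_ids = [], []
      for t in range(n_frames):
          xy = rng.random((pixels_per_frame, 2))                       # spatial coordinates
          colour = rng.normal(0.5, 0.1, (pixels_per_frame, 3)) + (0.3 if t > 15 else 0.0)
          time_col = np.full(pixels_per_frame, t / n_frames)           # temporal coordinate
          features.append(np.column_stack([xy, time_col, colour]))     # 6-D feature vectors
          frame_ids.append(np.full(pixels_per_frame, t))
      features, frame_ids = np.vstack(features), np.concatenate(frame_ids)

      gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
      loglik = gmm.score_samples(features)
      per_frame_fit = np.array([loglik[frame_ids == t].mean() for t in range(n_frames)])
      print("frames least well explained by the GMM:", np.argsort(per_frame_fit)[:5])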

  19. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  20. The Evaluation of the Regional Atmospheric Modeling System in the Eastern Range Dispersion Assessment System

    NASA Technical Reports Server (NTRS)

    Case, Jonathan

    2001-01-01

    The Applied Meteorology Unit (AMU) evaluated the Regional Atmospheric Modeling System (RAMS) contained within the Eastern Range Dispersion Assessment System (ERDAS). ERDAS provides emergency response guidance for Cape Canaveral Air Force Station and Kennedy Space Center operations in the event of an accidental hazardous material release or aborted vehicle launch. The RAMS prognostic data are available to ERDAS for display and are used to initialize the 45th Space Wing/Range Safety dispersion model. Thus, the accuracy of the dispersion predictions is dependent upon the accuracy of RAMS forecasts. The RAMS evaluation consisted of an objective and subjective component for the 1999 and 2000 Florida warm seasons, and the 1999-2000 cool season. In the objective evaluation, the AMU generated model error statistics at surface and upper-level observational sites, compared RAMS errors to a coarser RAMS grid configuration, and benchmarked RAMS against the nationally-used Eta model. In the subjective evaluation, the AMU compared forecast cold fronts, low-level temperature inversions, and precipitation to observations during the 1999-2000 cool season, verified the development of the RAMS forecast east coast sea breeze during both warm seasons, and examined the RAMS daily thunderstorm initiation and precipitation patterns during the 2000 warm season. This report summarizes the objective and subjective verification for all three seasons.

  1. Dissociation kinetics of metal clusters on multiple electronic states including electronic level statistics into the vibronic soup

    NASA Astrophysics Data System (ADS)

    Shvartsburg, Alexandre A.; Siu, K. W. Michael

    2001-06-01

    Modeling the delayed dissociation of clusters has, over the last decade, been a frontline development area in chemical physics. It is of fundamental interest how statistical kinetics methods previously validated for regular molecules and atomic nuclei may apply to clusters, as this would help in understanding the transferability of statistical models for the disintegration of complex systems across various classes of physical objects. From a practical perspective, accurate simulation of unimolecular decomposition is critical for the extraction of true thermochemical values from measurements on the decay of energized clusters. Metal clusters are particularly challenging because of the multitude of low-lying electronic states that are coupled to vibrations. This has previously been accounted for by assuming the average electronic structure of a conducting cluster, approximated by the levels of an electron in a cavity. While this provides a reasonable time-averaged description, it ignores the distribution of instantaneous electronic structures in a "boiling" cluster around that average. Here we set up a new treatment that incorporates the statistical distribution of electronic levels around the average picture using random matrix theory. This approach faithfully reflects the completely chaotic "vibronic soup" nature of hot metal clusters. We found that the consideration of electronic level statistics significantly promotes electronic excitation and thus increases the magnitude of its effect. As this excitation always depresses the decay rates, the inclusion of level statistics results in slower dissociation of metal clusters.

  2. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    PubMed

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  3. The Value of a Well-Being Improvement Strategy: Longitudinal Success across Subjective and Objective Measures Observed in a Firm Adopting a Consumer-Driven Health Plan.

    PubMed

    Guo, Xiaobo; Coberley, Carter; Pope, James E; Wells, Aaron

    2015-10-01

    The objective of this study is to evaluate effectiveness of a firm's 5-year strategy toward improving well-being while lowering health care costs amidst adoption of a Consumer-Driven Health Plan. Repeated measures statistical models were employed to test and quantify association between key demographic factors, employment type, year, individual well-being, and outcomes of health care costs, obesity, smoking, absence, and performance. Average individual well-being trended upward by 13.5% over 5 years, monthly allowed amount health care costs declined 5.2% on average per person per year, and obesity and smoking rates declined by 4.8 and 9.7%, respectively, on average each year. The results show that individual well-being was significantly associated with each outcome and in the expected direction. The firm's strategy was successful in driving statistically significant, longitudinal well-being, biometric and productivity improvements, and health care cost reduction.

  4. Optimization of the p-xylene oxidation process by a multi-objective differential evolution algorithm with adaptive parameters co-derived with the population-based incremental learning algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Zhan; Yan, Xuefeng

    2018-04-01

    Different operating conditions of p-xylene oxidation have different influences on the product, purified terephthalic acid. It is necessary to obtain the optimal combination of reaction conditions to ensure the quality of the products, cut down on consumption and increase revenues. A multi-objective differential evolution (MODE) algorithm co-evolved with the population-based incremental learning (PBIL) algorithm, called PBMODE, is proposed. The PBMODE algorithm was designed as a co-evolutionary system. Each individual has its own parameter individual, which is co-evolved by PBIL. PBIL uses statistical analysis to build a model based on the corresponding symbiotic individuals of the superior original individuals during the main evolutionary process. The results of simulations and statistical analysis indicate that the overall performance of the PBMODE algorithm is better than that of the compared algorithms and it can be used to optimize the operating conditions of the p-xylene oxidation process effectively and efficiently.

  5. MANCOVA for one way classification with homogeneity of regression coefficient vectors

    NASA Astrophysics Data System (ADS)

    Mokesh Rayalu, G.; Ravisankar, J.; Mythili, G. Y.

    2017-11-01

    The MANOVA and MANCOVA techniques are the extensions of the univariate ANOVA and ANCOVA techniques to multidimensional, or vector valued, observations. In their statistical models, the assumption of a Gaussian distribution is replaced with a multivariate Gaussian distribution for the observation vectors and the residual terms. The objective of MANCOVA is to determine whether statistically reliable mean differences can be demonstrated between groups after adjusting the dependent variables for the covariates. When random assignment of samples or subjects to groups is not possible, multivariate analysis of covariance (MANCOVA) provides statistical matching of groups by adjusting the dependent variables as if all subjects had scored the same on the covariates. In this research article, the MANCOVA technique is extended to a larger number of covariates, and the homogeneity of the regression coefficient vectors is also tested.
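
    A minimal sketch of a MANCOVA-style analysis in Python, assuming statsmodels is available: two dependent variables, one grouping factor and one covariate entered through the model formula. The data are simulated, and the homogeneity-of-regression-slopes test discussed above is not shown.

      import numpy as np
      import pandas as pd
      from statsmodels.multivariate.manova import MANOVA

      rng = np.random.default_rng(9)
      n = 90
      group = np.repeat(["A", "B", "C"], n // 3)
      covariate = rng.normal(50, 10, n)
      y1 = 0.4 * covariate + (group == "B") * 3.0 + rng.normal(0, 2, n)
      y2 = 0.2 * covariate + (group == "C") * 2.0 + rng.normal(0, 2, n)
      data = pd.DataFrame({"group": group, "cov": covariate, "y1": y1, "y2": y2})

      # Multivariate tests for the group factor, adjusted for the covariate.
      fit = MANOVA.from_formula("y1 + y2 ~ C(group) + cov", data=data)
      print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc.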

  6. The probability of object-scene co-occurrence influences object identification processes.

    PubMed

    Sauvé, Geneviève; Harmand, Mariane; Vanni, Léa; Brodeur, Mathieu B

    2017-07-01

    Contextual information allows the human brain to make predictions about the identity of objects that might be seen and irregularities between an object and its background slow down perception and identification processes. Bar and colleagues modeled the mechanisms underlying this beneficial effect suggesting that the brain stocks information about the statistical regularities of object and scene co-occurrence. Their model suggests that these recurring regularities could be conceptualized along a continuum in which the probability of seeing an object within a given scene can be high (probable condition), moderate (improbable condition) or null (impossible condition). In the present experiment, we propose to disentangle the electrophysiological correlates of these context effects by directly comparing object-scene pairs found along this continuum. We recorded the event-related potentials of 30 healthy participants (18-34 years old) and analyzed their brain activity in three time windows associated with context effects. We observed anterior negativities between 250 and 500 ms after object onset for the improbable and impossible conditions (improbable more negative than impossible) compared to the probable condition as well as a parieto-occipital positivity (improbable more positive than impossible). The brain may use different processing pathways to identify objects depending on whether the probability of co-occurrence with the scene is moderate (rely more on top-down effects) or null (rely more on bottom-up influences). The posterior positivity could index error monitoring aimed to ensure that no false information is integrated into mental representations of the world.

  7. Adolescents' Sedentary Behaviors in Two European Cities.

    PubMed

    Aibar Solana, Alberto; Bois, Julien E; Zaragoza, Javier; Bru, Noëlle; Paillard, Thierry; Generelo, Eduardo

    2015-01-01

    The aim of this study was to determine and compare the correlates of objective sedentary behavior (SB) and nonschool self-reported SB in adolescents from 2 midsized cities, 1 in France (Tarbes) and 1 in Spain (Huesca). Stability of objective SB and nonschool self-reported SB were also assessed at different time points during 1 academic year. Starting with a total of 829 participants and after applying inclusion criteria, objective SB was assessed for 646 adolescents (Mage = 14.30 ± 0.71 years) with GT3X accelerometers for 7 days at 2 time points. Nonschool self-reported SB was measured for 781 adolescents (Mage = 14.46 ± 0.76 years) at 3 time points by means of a questionnaire. Data were analyzed using multiple regression analysis. Gender and ambient temperature emerged as the main statistically significant correlates in all objective SB models, showing higher objective SB levels in girls and lower objective SB levels when ambient temperature was higher. According to nonschool self-reported SB, a gender effect was found in almost all behaviors. Whereas boys spent more time playing with video games as well as games on their mobile phones, girls spent more time studying and using their computers and mobile phones to communicate with each other. The findings showed a statistically significant city effect on study time (Huesca > Tarbes) and video games and telephone communication time (Tarbes > Huesca). Nonschool self-reported SB patterns were different in Huesca and Tarbes. Intervention programs should be adapted to target the reduction of adolescents' SB according to different contexts.

  8. The influence of contextual reward statistics on risk preference

    PubMed Central

    Rigoli, Francesco; Rutledge, Robb B.; Dayan, Peter; Dolan, Raymond J.

    2016-01-01

    Decision theories mandate that organisms should adjust their behaviour in the light of the contextual reward statistics. We tested this notion using a gambling choice task involving distinct contexts with different reward distributions. The best fitting model of subjects' behaviour indicated that the subjective values of options depended on several factors, including a baseline gambling propensity, a gambling preference dependent on reward amount, and a contextual reward adaptation factor. Combining this behavioural model with simultaneous functional magnetic resonance imaging we probed neural responses in three key regions linked to reward and value, namely ventral tegmental area/substantia nigra (VTA/SN), ventromedial prefrontal cortex (vmPFC) and ventral striatum (VST). We show that activity in the VTA/SN reflected contextual reward statistics to the extent that context affected behaviour, activity in the vmPFC represented a value difference between chosen and unchosen options while VST responses reflected a non-linear mapping between the actual objective rewards and their subjective value. The findings highlight a multifaceted basis for choice behaviour with distinct mappings between components of this behaviour and value sensitive brain regions. PMID:26707890

  9. Modelling a real-world buried valley system with vertical non-stationarity using multiple-point statistics

    NASA Astrophysics Data System (ADS)

    He, Xiulan; Sonnenborg, Torben O.; Jørgensen, Flemming; Jensen, Karsten H.

    2017-03-01

    Stationarity has traditionally been a requirement of geostatistical simulations. A common way to deal with non-stationarity is to divide the system into stationary sub-regions and subsequently merge the realizations for each region. Recently, the so-called partition approach that has the flexibility to model non-stationary systems directly was developed for multiple-point statistics simulation (MPS). The objective of this study is to apply the MPS partition method with conventional borehole logs and high-resolution airborne electromagnetic (AEM) data, for simulation of a real-world non-stationary geological system characterized by a network of connected buried valleys that incise deeply into layered Miocene sediments (case study in Denmark). The results show that, based on fragmented information of the formation boundaries, the MPS partition method is able to simulate a non-stationary system including valley structures embedded in a layered Miocene sequence in a single run. Besides, statistical information retrieved from the AEM data improved the simulation of the geology significantly, especially for the deep-seated buried valley sediments where borehole information is sparse.

  10. Examination of the Ovarian Reserve after Generation of Unilateral Rudimentary Uterine Horns in Rats

    PubMed Central

    Toyganözü, Hasan; Nazik, Hakan; Narin, Raziye; Satar, Deniz; Narin, Mehmet Ali; Büyüknacar, Sinem; Api, Murat; Aytan, Hakan

    2014-01-01

    Objective. The purpose of this experimental rat model study is to evaluate the changes in the ovarian environment after excision of the rudimentary horn. Methods. Ten female Wistar albino rats were used in this study. One cm of right uterine horn length was excised in the first operation. Two months after the first operation, all animals were sacrificed to obtain ovaries for histological examination. Mann-Whitney U test and Student's t-test were used for statistical analysis purposes. Statistical significance was defined as P < 0.005. Results. The number of primordial follicles (P = 0.415), primary follicles (P = 0.959), preantral follicles (P = 0.645), antral follicles (P = 0.328), and Graafian follicles (P = 0.721) was decreased and the number of atretic follicles (P = 0.374) increased in the right ovarian side. However, this difference was not found to be statistically significant. Conclusion. The results of this experimental rat model study suggest that the excision of rudimentary horn could have negative effects on ipsilateral ovarian functions. PMID:24672393

  11. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation - Evolutionary Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
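
    A minimal sketch of the optimization set-up described above, assuming the pycma package: CMA-ES searches over transformation-like parameters with an image-similarity objective (normalized cross-correlation here), using a synthetic Gaussian-blob image in place of an SDM atlas instance and the patient volume.

      import numpy as np
      import cma  # pycma; assumed available as a stand-in CMA-ES implementation

      yy, xx = np.mgrid[0:64, 0:64]

      def render(cx, cy, scale):
          # Synthetic "atlas instance": a Gaussian blob parameterized by position and size.
          return np.exp(-(((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * scale ** 2)))

      target = render(40.0, 25.0, 6.0)   # stand-in for the patient's intensity volume

      def ncc(a, b):
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      def objective(params):
          cx, cy, scale = params
          return -ncc(render(cx, cy, abs(scale) + 1e-3), target)   # maximize similarity

      es = cma.CMAEvolutionStrategy([32.0, 32.0, 4.0], 5.0, {"verbose": -9})
      es.optimize(objective)
      print("recovered parameters:", np.round(es.result.xbest, 2))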

  12. T2* Mapping Provides Information That Is Statistically Comparable to an Arthroscopic Evaluation of Acetabular Cartilage.

    PubMed

    Morgan, Patrick; Nissi, Mikko J; Hughes, John; Mortazavi, Shabnam; Ellerman, Jutta

    2017-07-01

    Objectives The purpose of this study was to validate T2* mapping as an objective, noninvasive method for the prediction of acetabular cartilage damage. Methods This is the second step in the validation of T2*. In a previous study, we established a quantitative predictive model for identifying and grading acetabular cartilage damage. In this study, the model was applied to a second cohort of 27 consecutive hips to validate the model. A clinical 3.0-T imaging protocol with T2* mapping was used. Acetabular regions of interest (ROI) were identified on magnetic resonance and graded using the previously established model. Each ROI was then graded in a blinded fashion by arthroscopy. Accurate surgical location of ROIs was facilitated with a 2-dimensional map projection of the acetabulum. A total of 459 ROIs were studied. Results When T2* mapping and arthroscopic assessment were compared, 82% of ROIs were within 1 Beck group (of a total 6 possible) and 32% of ROIs were classified identically. Disease prediction based on receiver operating characteristic curve analysis demonstrated a sensitivity of 0.713 and a specificity of 0.804. Model stability evaluation required no significant changes to the predictive model produced in the initial study. Conclusions These results validate that T2* mapping provides statistically comparable information regarding acetabular cartilage when compared to arthroscopy. In contrast to arthroscopy, T2* mapping is quantitative, noninvasive, and can be used in follow-up. Unlike research quantitative magnetic resonance protocols, T2* takes little time and does not require a contrast agent. This may facilitate its use in the clinical sphere.

  13. Statistical Analysis of CO 2 Exposed Wells to Predict Long Term Leakage through the Development of an Integrated Neural-Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Boyun; Duguid, Andrew; Nygaard, Ronar

    The objective of this project is to develop a computerized statistical model with the Integrated Neural-Genetic Algorithm (INGA) for predicting the probability of long-term leaks of wells in CO 2 sequestration operations. This objective has been accomplished by conducting research in three phases: 1) data mining of CO 2-exposed wells, 2) INGA computer model development, and 3) evaluation of the predictive performance of the computer model with data from field tests. Data mining was conducted for 510 wells in two CO 2 sequestration projects in the Texas Gulf Coast region: the Hasting West field and the Oyster Bayou field in southern Texas. Missing wellbore integrity data were estimated using an analytical and Finite Element Method (FEM) model. The INGA was first tested for convergence and computing efficiency with the obtained high-dimensional data set. It was concluded that the INGA can handle the gathered data set with good accuracy and reasonable computing time after a reduction of dimension with a grouping mechanism. A computerized statistical model with the INGA was then developed based on data pre-processing and grouping. Comprehensive training and testing of the model were carried out to ensure that the model is accurate and efficient enough for predicting the probability of long-term leaks of wells in CO 2 sequestration operations. The Cranfield field in southern Mississippi was selected as the test site. Observation wells CFU31F2 and CFU31F3 were used for pressure testing, formation logging, and cement sampling. Tools run in the wells included the Isolation Scanner, Slim Cement Mapping Tool (SCMT), Cased Hole Formation Dynamics Tester (CHDT), and Mechanical Sidewall Coring Tool (MSCT). Analyses of the obtained data indicate no leak of CO 2 across the cap zone, while it is evident that the well cement sheath was invaded by CO 2 from the storage zone. This observation is consistent with the result predicted by the INGA model, which indicates the well has a CO 2 leak-safe probability of 72%. This comparison implies that the developed INGA model is valid for future use in predicting well leak probability.

  14. Real-time human versus animal classification using pyro-electric sensor array and Hidden Markov Model

    NASA Astrophysics Data System (ADS)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2014-03-01

    In this paper, we propose a real-time human versus animal classification technique using a pyro-electric sensor array and a Hidden Markov Model (HMM). The technique starts with variational energy functional level set segmentation to separate the object from the background. After segmentation, we convert the segmented object to a signal by considering column-wise pixel values and then finding the wavelet coefficients of the signal. HMMs are trained to statistically model the wavelet features of individuals through an expectation-maximization learning process. Human versus animal classifications are made by evaluating a set of new wavelet feature data against the trained HMMs using the maximum-likelihood criterion. Human and animal data acquired using a pyro-electric sensor in different terrains are used for performance evaluation of the algorithms. The computationally efficient SURF-feature-based approach developed in our previous research fails on the distorted images produced when the object moves very fast or when the temperature difference between target and background is insufficient to profile the object accurately. We show that wavelet-based HMMs work well for handling some of the distorted profiles in the data set. Further, the HMM achieves an improved classification rate over the SURF algorithm with almost the same computational time.
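
    A minimal sketch of the maximum-likelihood classification rule described above, assuming the hmmlearn package: one Gaussian HMM is trained per class on feature sequences, and a new sequence takes the label of the higher-scoring model. The synthetic sequences stand in for the wavelet features derived from the pyro-electric sensor signals.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM  # assumed available as the HMM implementation

      rng = np.random.default_rng(11)

      def sequences(drift, n_seq=20, length=60):
          # Synthetic 2-D feature sequences with a class-dependent drift.
          return [np.cumsum(rng.normal(drift, 1.0, (length, 2)), axis=0) for _ in range(n_seq)]

      human_train, animal_train = sequences(0.05), sequences(-0.05)

      def fit_hmm(seqs):
          X = np.vstack(seqs)
          lengths = [len(s) for s in seqs]
          return GaussianHMM(n_components=3, covariance_type="diag", n_iter=50,
                             random_state=0).fit(X, lengths)

      human_hmm, animal_hmm = fit_hmm(human_train), fit_hmm(animal_train)

      test_sequence = sequences(0.05, n_seq=1)[0]
      label = "human" if human_hmm.score(test_sequence) > animal_hmm.score(test_sequence) else "animal"
      print("maximum-likelihood label:", label)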

  15. Whole vertebral bone segmentation method with a statistical intensity-shape model based approach

    NASA Astrophysics Data System (ADS)

    Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer

    2011-03-01

    An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, and 0.939 mm for the cervical, upper and lower thoracic, and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.

  16. PyEvolve: a toolkit for statistical modelling of molecular evolution.

    PubMed

    Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A

    2004-01-05

    Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences - ignoring the biological significance of sequence differences. A suite of sophisticated likelihood based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpG's, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10 species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real world performance for parameter rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution, or the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter rich likelihood functions solvable within hours on multi-cpu hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.

  17. Predicting network modules of cell cycle regulators using relative protein abundance statistics.

    PubMed

    Oguz, Cihan; Watson, Layne T; Baumann, William T; Tyson, John J

    2017-02-28

    Parameter estimation in systems biology is typically done by enforcing experimental observations through an objective function as the parameter space of a model is explored by numerical simulations. Past studies have shown that one usually finds a set of "feasible" parameter vectors that fit the available experimental data equally well, and that these alternative vectors can make different predictions under novel experimental conditions. In this study, we characterize the feasible region of a complex model of the budding yeast cell cycle under a large set of discrete experimental constraints in order to test whether the statistical features of relative protein abundance predictions are influenced by the topology of the cell cycle regulatory network. Using differential evolution, we generate an ensemble of feasible parameter vectors that reproduce the phenotypes (viable or inviable) of wild-type yeast cells and 110 mutant strains. We use this ensemble to predict the phenotypes of 129 mutant strains for which experimental data is not available. We identify 86 novel mutants that are predicted to be viable and then rank the cell cycle proteins in terms of their contributions to cumulative variability of relative protein abundance predictions. Proteins involved in "regulation of cell size" and "regulation of G1/S transition" contribute most to predictive variability, whereas proteins involved in "positive regulation of transcription involved in exit from mitosis," "mitotic spindle assembly checkpoint" and "negative regulation of cyclin-dependent protein kinase by cyclin degradation" contribute the least. These results suggest that the statistics of these predictions may be generating patterns specific to individual network modules (START, S/G2/M, and EXIT). To test this hypothesis, we develop random forest models for predicting the network modules of cell cycle regulators using relative abundance statistics as model inputs. Predictive performance is assessed by the areas under receiver operating characteristics curves (AUC). Our models generate an AUC range of 0.83-0.87 as opposed to randomized models with AUC values around 0.50. By using differential evolution and random forest modeling, we show that the model prediction statistics generate distinct network module-specific patterns within the cell cycle network.
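
    A minimal sketch of the final modelling step described above: a random forest classifier that predicts membership of a network module from relative-abundance statistics, scored by cross-validated ROC AUC. The features and module labels are simulated, not outputs of the budding yeast cell cycle ensemble.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(10)
      n_proteins = 300
      abundance_stats = np.column_stack([
          rng.normal(0, 1, n_proteins),     # e.g. mean relative abundance across feasible vectors
          rng.gamma(2.0, 1.0, n_proteins),  # e.g. abundance variability
          rng.normal(0, 1, n_proteins),     # e.g. correlation with a reference regulator
      ])
      in_module = (abundance_stats[:, 1] + rng.normal(0, 0.5, n_proteins) > 2.0).astype(int)

      clf = RandomForestClassifier(n_estimators=300, random_state=0)
      prob = cross_val_predict(clf, abundance_stats, in_module, cv=5,
                               method="predict_proba")[:, 1]
      print("cross-validated AUC:", round(roc_auc_score(in_module, prob), 3))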

  18. Indiana chronic disease management program risk stratification analysis.

    PubMed

    Li, Jingjin; Holmes, Ann M; Rosenman, Marc B; Katz, Barry P; Downs, Stephen M; Murray, Michael D; Ackermann, Ronald T; Inui, Thomas S

    2005-10-01

    The objective of this study was to compare the ability of risk stratification models derived from administrative data to classify groups of patients for enrollment in a tailored chronic disease management program. This study included 19,548 Medicaid patients with chronic heart failure or diabetes in the Indiana Medicaid data warehouse during 2001 and 2002. To predict costs (total claims paid) in FY 2002, we considered candidate predictor variables available in FY 2001, including patient characteristics, the number and type of prescription medications, laboratory tests, pharmacy charges, and utilization of primary, specialty, inpatient, emergency department, nursing home, and home health care. We built prospective models to identify patients with different levels of expenditure. Model fit was assessed using R² statistics, whereas discrimination was assessed using the weighted kappa statistic, predictive ratios, and the area under the receiver operating characteristic curve. We found that a simple least-squares regression model, in which logged total charges in FY 2002 were regressed on the log of total charges in FY 2001, the number of prescriptions filled in FY 2001, and the FY 2001 eligibility category, performed as well as more complex models. This simple 3-parameter model had an R² of 0.30 and, in terms of classification efficiency, had a sensitivity of 0.57, a specificity of 0.90, an area under the receiver operating characteristic curve of 0.80, and a weighted kappa statistic of 0.51. This simple model based on readily available administrative data stratified Medicaid members according to predicted future utilization as well as more complicated models.
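
    A hedged sketch of such a 3-predictor cost model and the reported metrics is given below, using simulated placeholder data rather than Indiana Medicaid records: logged FY 2002 charges are regressed on logged FY 2001 charges, the FY 2001 prescription count and an eligibility category, and the predictions are scored with R², AUC and a weighted kappa for a "high-cost" classification.

        import numpy as np
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score, cohen_kappa_score

        rng = np.random.default_rng(1)
        n = 1000
        log_prior = rng.normal(8, 1, n)                    # log of FY 2001 total charges (simulated)
        rx_count = rng.poisson(10, n)                      # prescriptions filled in FY 2001 (simulated)
        elig = rng.integers(0, 3, n)                       # eligibility category, coded 0..2 (simulated)
        log_future = 0.6 * log_prior + 0.02 * rx_count + 0.1 * elig + rng.normal(0, 1, n)

        X = sm.add_constant(np.column_stack([log_prior, rx_count, elig]))
        fit = sm.OLS(log_future, X).fit()
        pred = fit.predict(X)

        high_cost = (log_future > np.quantile(log_future, 0.9)).astype(int)   # top decile of actual cost
        pred_high = (pred > np.quantile(pred, 0.9)).astype(int)               # top decile of predicted cost
        print(f"R^2 = {fit.rsquared:.2f}, "
              f"AUC = {roc_auc_score(high_cost, pred):.2f}, "
              f"weighted kappa = {cohen_kappa_score(high_cost, pred_high, weights='linear'):.2f}")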

  19. Collaborative Project: The problem of bias in defining uncertainty in computationally enabled strategies for data-driven climate model development. Final Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huerta, Gabriel

    The objective of the project is to develop strategies for better representing scientific sensibilities within statistical measures of model skill that then can be used within a Bayesian statistical framework for data-driven climate model development and improved measures of model scientific uncertainty. One of the thorny issues in model evaluation is quantifying the effect of biases on climate projections. While no bias is desirable, only those biases that affect feedbacks affect scatter in climate projections. The effort at the University of Texas is to analyze previously calculated ensembles of CAM3.1 with perturbed parameters to discover how biases affect projections of global warming. The hypothesis is that compensating errors in the control model can be identified by their effect on a combination of processes and that developing metrics that are sensitive to dependencies among state variables would provide a way to select versions of climate models that may reduce scatter in climate projections. Gabriel Huerta at the University of New Mexico is responsible for developing statistical methods for evaluating these field dependencies. The UT effort will incorporate these developments into MECS, which is a set of python scripts being developed at the University of Texas for managing the workflow associated with data-driven climate model development over HPC resources. This report reflects the main activities at the University of New Mexico where the PI (Huerta) and the Postdocs (Nosedal, Hattab and Karki) worked on the project.

  20. Harmonic statistics

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
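
    The harmonic Poisson process is straightforward to simulate on a finite window: with intensity lambda(x) = c/x on [a, b], the number of points is Poisson with mean c*ln(b/a) and each point is log-uniform on [a, b]. The sketch below assumes that construction; the parameter values are arbitrary.

        import numpy as np

        def harmonic_poisson(c, a, b, rng=np.random.default_rng()):
            """Sample a Poisson process with intensity c/x on the window [a, b]."""
            n = rng.poisson(c * np.log(b / a))     # expected number of points in [a, b]
            u = rng.uniform(size=n)
            return a * (b / a) ** u                # inverse-CDF sampling of the density ~ 1/x

        points = harmonic_poisson(c=10.0, a=1e-3, b=1e3)
        print(len(points), np.sort(points)[:5])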

  1. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure is more than fitting parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e. the modelling objective, the characteristics and the scale of the system under investigation, as well as the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually justify the use of a proposed ensemble of structures). This holds both in the situation of choosing between a limited set of different structures and in the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine-learning or statistics amongst others. These methods also need an evaluation metric (objective function) to compare the model outcome with some observed data. However, for current methods described in the literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code, and the popularity (and number of applications) of the methods is sometimes more dependent on their availability than on their merits. Moreover, new implementations of existing methods are difficult to verify, and the different theoretical backgrounds make it difficult for environmental scientists to decide about the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding about the most appropriate method. Hence, it enables users to apply and compare different methods simultaneously and on a fair basis. We developed and present pySTAN (python framework for STructure Analysis), a python package containing a set of functions for model structure evaluation to provide the analysis of (hydrological) model structures. A selected set of algorithms for optimization, uncertainty and sensitivity analysis is currently available, together with a set of evaluation (objective) functions and input distributions to sample from. The methods are implemented in a model-independent way, and the python language provides the wrapper functions needed to administer external model codes. Different objective functions can be considered simultaneously, with both statistical metrics and more hydrology-specific metrics. By using so-called reStructuredText (sphinx documentation generator) and Python documentation strings (docstrings), the generation of manual pages is semi-automated and a specific environment is available to enhance both the readability and transparency of the code. It thereby enables a larger group of users to apply and compare these methods and to extend the functionalities.
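
    Two standard evaluation (objective) functions of the kind such a toolbox bundles are sketched below; these are the usual textbook definitions written in plain Python/NumPy, not pySTAN's actual implementations.

        import numpy as np

        def rmse(obs, sim):
            """Root-mean-square error between observed and simulated series."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return float(np.sqrt(np.mean((obs - sim) ** 2)))

        def nash_sutcliffe(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, below 0 is worse than the observed mean."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

        obs = [1.0, 2.0, 4.0, 3.0]
        sim = [1.2, 1.8, 3.5, 3.1]
        print(rmse(obs, sim), nash_sutcliffe(obs, sim))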

  2. Gas Hydrate Stability at Low Temperatures and High Pressures with Applications to Mars and Europa

    NASA Technical Reports Server (NTRS)

    Marion, G. M.; Kargel, J. S.; Catling, D. C.

    2004-01-01

    Gas hydrates are implicated in the geochemical evolution of both Mars and Europa [1- 3]. Most models developed for gas hydrate chemistry are based on the statistical thermodynamic model of van der Waals and Platteeuw [4] with subsequent modifications [5-8]. None of these models are, however, state-of-the-art with respect to gas hydrate/electrolyte interactions, which is particularly important for planetary applications where solution chemistry may be very different from terrestrial seawater. The objectives of this work were to add gas (carbon dioxide and methane) hydrate chemistries into an electrolyte model parameterized for low temperatures and high pressures (the FREZCHEM model) and use the model to examine controls on gas hydrate chemistries for Mars and Europa.

  3. Toward Global Comparability of Sexual Orientation Data in Official Statistics: A Conceptual Framework of Sexual Orientation for Health Data Collection in New Zealand's Official Statistics System

    PubMed Central

    Gray, Alistair; Veale, Jaimie F.; Binson, Diane; Sell, Randell L.

    2013-01-01

    Objective. Effectively addressing health disparities experienced by sexual minority populations requires high-quality official data on sexual orientation. We developed a conceptual framework of sexual orientation to improve the quality of sexual orientation data in New Zealand's Official Statistics System. Methods. We reviewed conceptual and methodological literature, culminating in a draft framework. To improve the framework, we held focus groups and key-informant interviews with sexual minority stakeholders and producers and consumers of official statistics. An advisory board of experts provided additional guidance. Results. The framework proposes working definitions of the sexual orientation topic and measurement concepts, describes dimensions of the measurement concepts, discusses variables framing the measurement concepts, and outlines conceptual grey areas. Conclusion. The framework proposes standard definitions and concepts for the collection of official sexual orientation data in New Zealand. It presents a model for producers of official statistics in other countries, who wish to improve the quality of health data on their citizens. PMID:23840231

  4. Statistical and Machine Learning forecasting methods: Concerns and ways forward

    PubMed Central

    Makridakis, Spyros; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784

  5. Guidelines for Assessment and Instruction in Statistics Education (GAISE): extending GAISE into nursing education.

    PubMed

    Hayat, Matthew J

    2014-04-01

    Statistics coursework is usually a core curriculum requirement for nursing students at all degree levels. The American Association of Colleges of Nursing (AACN) establishes curriculum standards for academic nursing programs. However, the AACN provides little guidance on statistics education and does not offer standardized competency guidelines or recommendations about course content or learning objectives. Published standards may be used in the course development process to clarify course content and learning objectives. This article includes suggestions for implementing and integrating recommendations given in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) report into statistics education for nursing students. Copyright 2014, SLACK Incorporated.

  6. A diagnostic model for chronic hypersensitivity pneumonitis.

    PubMed

    Johannson, Kerri A; Elicker, Brett M; Vittinghoff, Eric; Assayag, Deborah; de Boer, Kaïssa; Golden, Jeffrey A; Jones, Kirk D; King, Talmadge E; Koth, Laura L; Lee, Joyce S; Ley, Brett; Wolters, Paul J; Collard, Harold R

    2016-10-01

    The objective of this study was to develop a diagnostic model that allows for a highly specific diagnosis of chronic hypersensitivity pneumonitis using clinical and radiological variables alone. Chronic hypersensitivity pneumonitis and other interstitial lung disease cases were retrospectively identified from a longitudinal database. High-resolution CT scans were blindly scored for radiographic features (eg, ground-glass opacity, mosaic perfusion) as well as the radiologist's diagnostic impression. Candidate models were developed then evaluated using clinical and radiographic variables and assessed by the cross-validated C-statistic. Forty-four chronic hypersensitivity pneumonitis and eighty other interstitial lung disease cases were identified. Two models were selected based on their statistical performance, clinical applicability and face validity. Key model variables included age, down feather and/or bird exposure, radiographic presence of ground-glass opacity and mosaic perfusion and moderate or high confidence in the radiographic impression of chronic hypersensitivity pneumonitis. Models were internally validated with good performance, and cut-off values were established that resulted in high specificity for a diagnosis of chronic hypersensitivity pneumonitis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
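
    The evaluation strategy described above can be sketched generically as follows: fit a logistic model, obtain cross-validated predicted probabilities, report the C-statistic, and pick a cut-off that yields high specificity. The predictors and outcome below are synthetic placeholders, not the study's clinical and radiographic variables.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(2)
        X = rng.normal(size=(124, 4))            # e.g. age, exposure, ground-glass opacity, mosaic perfusion
        y = (X[:, 0] + X[:, 1] + rng.normal(size=124) > 1).astype(int)

        prob = cross_val_predict(LogisticRegression(), X, y, cv=5, method="predict_proba")[:, 1]
        print(f"cross-validated C-statistic: {roc_auc_score(y, prob):.2f}")

        fpr, tpr, thresholds = roc_curve(y, prob)
        ok = np.where(fpr <= 0.05)[0]            # operating points with at least 95% specificity
        print(f"cut-off for ~95% specificity: {thresholds[ok[-1]]:.2f}")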

  7. The Active Side of Stereopsis: Fixation Strategy and Adaptation to Natural Environments.

    PubMed

    Gibaldi, Agostino; Canessa, Andrea; Sabatini, Silvio P

    2017-03-20

    Depth perception in near viewing strongly relies on the interpretation of binocular retinal disparity to obtain stereopsis. Statistical regularities of retinal disparities have been claimed to greatly impact on the neural mechanisms that underlie binocular vision, both to facilitate perceptual decisions and to reduce computational load. In this paper, we designed a novel and unconventional approach in order to assess the role of fixation strategy in conditioning the statistics of retinal disparity. We integrated accurate realistic three-dimensional models of natural scenes with binocular eye movement recording, to obtain accurate ground-truth statistics of retinal disparity experienced by a subject in near viewing. Our results evidence how the organization of human binocular visual system is finely adapted to the disparity statistics characterizing actual fixations, thus revealing a novel role of the active fixation strategy over the binocular visual functionality. This suggests an ecological explanation for the intrinsic preference of stereopsis for a close central object surrounded by a far background, as an early binocular aspect of the figure-ground segregation process.

  8. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
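
    A rough, simplified illustration of the idea (not the paper's exact Gaussian-process construction) is sketched below: cumulative sums of residuals ordered by a covariate are compared against reference curves obtained by randomly sign-flipping the residuals, and a supremum-type p-value is read off.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        x = rng.uniform(0, 1, 200)
        y = 1.0 + 2.0 * x**2 + rng.normal(0, 0.2, 200)       # data with curvature
        fit = sm.OLS(y, sm.add_constant(x)).fit()            # deliberately misspecified linear fit

        order = np.argsort(x)
        resid = fit.resid[order]
        W_obs = np.cumsum(resid) / np.sqrt(resid.size)       # observed cumulative-residual process

        W_null = np.array([np.cumsum(rng.choice([-1, 1], resid.size) * resid)
                           for _ in range(1000)]) / np.sqrt(resid.size)
        p_value = np.mean(np.abs(W_null).max(axis=1) >= np.abs(W_obs).max())
        print(f"approximate supremum-test p-value: {p_value:.3f}")   # a small value suggests misspecification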

  9. Identifying On-Orbit Test Targets for Space Fence Operational Testing

    NASA Astrophysics Data System (ADS)

    Pechkis, D.; Pacheco, N.; Botting, T.

    2014-09-01

    Space Fence will be an integrated system of two ground-based, S-band (2 to 4 GHz) phased-array radars located in Kwajalein and perhaps Western Australia [1]. Space Fence will cooperate with other Space Surveillance Network sensors to provide space object tracking and radar characterization data to support U.S. Strategic Command space object catalog maintenance and other space situational awareness needs. We present a rigorous statistical test design intended to test Space Fence to the letter of the program requirements as well as to characterize the system performance across the entire operational envelope. The design uses altitude, size, and inclination as independent factors in statistical tests of dependent variables (e.g., observation accuracy) linked to requirements. The analysis derives the type and number of necessary test targets. Comparing the resulting sample sizes with the number of currently known targets, we identify those areas where modelling and simulation methods are needed. Assuming hypothetical Kwajalein radar coverage and a conservative number of radar passes per object per day, we conclude that tests involving real-world space objects should take no more than 25 days to evaluate all operational requirements; almost 60 percent of the requirements can be tested in a single day and nearly 90 percent can be tested in one week or less. Reference: [1] L. Haines and P. Phu, Space Fence PDR Concept Development Phase, 2011 AMOS Conference Technical Papers.

  10. Derivation and Validation of the Surgical Site Infections Risk Model Using Health Administrative Data.

    PubMed

    van Walraven, Carl; Jackson, Timothy D; Daneman, Nick

    2016-04-01

    OBJECTIVE Surgical site infections (SSIs) are common hospital-acquired infections. Tracking SSIs is important to monitor their incidence, and this process requires primary data collection. In this study, we derived and validated a method using health administrative data to predict the probability that a person who had surgery would develop an SSI within 30 days. METHODS All patients enrolled in the National Surgical Quality Improvement Program (NSQIP) from 2 sites were linked to population-based administrative datasets in Ontario, Canada. We derived a multivariate model, stratified by surgical specialty, to determine the independent association of SSI status with patient and hospitalization covariates as well as physician claim codes. This SSI risk model was validated in 2 cohorts. RESULTS The derivation cohort included 5,359 patients with a 30-day SSI incidence of 6.0% (n=118). The SSI risk model predicted the probability that a person had an SSI based on 7 covariates: index hospitalization diagnostic score; physician claims score; emergency visit diagnostic score; operation duration; surgical service; and potential SSI codes. More than 90% of patients had predicted SSI risks lower than 10%. In the derivation group, model discrimination and calibration were excellent (C statistic, 0.912; Hosmer-Lemeshow [H-L] statistic, P=.47). In the 2 validation groups, performance decreased slightly (C statistics, 0.853 and 0.812; H-L statistics, 26.4 [P=.0009] and 8.0 [P=.42]), but low-risk patients were accurately identified. CONCLUSION Health administrative data can effectively identify postoperative patients with a very low risk of surgical site infection within 30 days of their procedure. Records of higher-risk patients can be reviewed to confirm SSI status.
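
    The two validation metrics named above can be sketched as follows: the C-statistic via the area under the ROC curve, and a basic decile-of-risk Hosmer-Lemeshow statistic. The predicted probabilities and outcomes below are simulated placeholders, not the study's model or data.

        import numpy as np
        from scipy.stats import chi2
        from sklearn.metrics import roc_auc_score

        def hosmer_lemeshow(y, p, groups=10):
            """Decile-of-risk Hosmer-Lemeshow chi-square statistic and p-value."""
            order = np.argsort(p)
            y, p = np.asarray(y)[order], np.asarray(p)[order]
            hl = 0.0
            for g_y, g_p in zip(np.array_split(y, groups), np.array_split(p, groups)):
                obs, exp, n = g_y.sum(), g_p.sum(), len(g_p)
                pbar = exp / n
                hl += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
            return hl, chi2.sf(hl, groups - 2)

        rng = np.random.default_rng(4)
        p = rng.beta(1, 15, size=5000)           # predicted 30-day SSI probabilities (placeholder)
        y = rng.binomial(1, p)                   # simulated outcomes consistent with the predictions
        print("C-statistic:", round(roc_auc_score(y, p), 3))
        print("Hosmer-Lemeshow (chi2, p):", tuple(round(v, 3) for v in hosmer_lemeshow(y, p)))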

  11. Quantifying and Generalizing Hydrologic Responses to Dam Regulation using a Statistical Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A

    2014-01-01

    Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at large spatial scales and to generalize the directionality of hydrologic responses.

  12. Comparing Self-Reported Health Status and Diagnosis-Based Risk Adjustment to Predict 1- and 2 to 5-Year Mortality

    PubMed Central

    Pietz, Kenneth; Petersen, Laura A

    2007-01-01

    Objectives To compare the ability of two diagnosis-based risk adjustment systems and health self-report to predict short- and long-term mortality. Data Sources/Study Setting Data were obtained from the Department of Veterans Affairs (VA) administrative databases. The study population was 78,164 VA beneficiaries at eight medical centers during fiscal year (FY) 1998, 35,337 of whom completed a 36-Item Short Form Health Survey for veterans (SF-36V) survey. Study Design We tested the ability of Diagnostic Cost Groups (DCGs), Adjusted Clinical Groups (ACGs), SF-36V Physical Component score (PCS) and Mental Component Score (MCS), and eight SF-36V scales to predict 1- and 2–5-year all-cause mortality. The additional predictive value of adding PCS and MCS to ACGs and DCGs was also evaluated. Logistic regression models were compared using Akaike's information criterion, the c-statistic, and the Hosmer–Lemeshow test. Principal Findings The c-statistics for the eight scales combined with age and gender were 0.766 for 1-year mortality and 0.771 for 2–5-year mortality. For DCGs with age and gender the c-statistics for 1- and 2–5-year mortality were 0.778 and 0.771, respectively. Adding PCS and MCS to the DCG model increased the c-statistics to 0.798 for 1-year and 0.784 for 2–5-year mortality. Conclusions The DCG model showed slightly better performance than the eight-scale model in predicting 1-year mortality, but the two models showed similar performance for 2–5-year mortality. Health self-report may add health risk information in addition to age, gender, and diagnosis for predicting longer-term mortality. PMID:17362210

  13. Ultra-low-dose Ultrasound Molecular Imaging for the Detection of Angiogenesis in a Mouse Murine Tumor Model: How Little Can We See?

    PubMed Central

    Wang, Shiying; Herbst, Elizabeth B.; Mauldin, F. William; Diakova, Galina B.; Klibanov, Alexander L.; Hossack, John A.

    2016-01-01

    Objectives The objective of this study is to evaluate the minimum microbubble dose for ultrasound molecular imaging to achieve statistically significant detection of angiogenesis in a mouse model. Materials and Methods The pre-burst minus post-burst method was implemented on a Verasonics ultrasound research scanner using a multi-frame compounding pulse inversion imaging sequence. Biotinylated lipid (distearoyl phosphatidylcholine, DSPC-based) microbubbles that were conjugated with anti-vascular endothelial growth factor 2 (VEGFR2) antibody (MBVEGFR2) or isotype control antibody (MBControl) were injected into mice carrying adenocarcinoma xenografts. Different injection doses ranging from 5 × 104 to 1 × 107 microbubbles per mouse were evaluated to determine the minimum diagnostically effective dose. Results The proposed imaging sequence was able to achieve statistically significant detection (p < 0.05, n = 5) of VEGFR2 in tumors with a minimum MBVEGFR2 injection dose of only 5 × 104 microbubbles per mouse (DSPC at 0.053 ng/g mouse body mass). Non-specific adhesion of MBControl at the same injection dose was negligible. Additionally, the targeted contrast ultrasound signal of MBVEGFR2 decreased with lower microbubble doses, while non-specific adhesion of MBControl increased with higher microbubble doses. Conclusions 5 × 104 microbubbles per animal is now the lowest injection dose on record for ultrasound molecular imaging to achieve statistically significant detection of molecular targets in vivo. Findings in this study provide us with further guidance for future developments of clinically translatable ultrasound molecular imaging applications using a lower dose of microbubbles. PMID:27654582

  14. Contrasting effects of feature-based statistics on the categorisation and identification of visual objects

    PubMed Central

    Taylor, Kirsten I.; Devereux, Barry J.; Acres, Kadia; Randall, Billi; Tyler, Lorraine K.

    2013-01-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. PMID:22137770

  15. An Overview of the Orbital Debris and Meteoroid Environments, Their Effects on Spacecraft, and What Can We Do About It?

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2017-01-01

    Because of the high speeds needed for orbital space flight, hypervelocity impacts with objects in space are a constant risk to spacecraft. This includes natural debris - meteoroids - and the debris remnants of our own activities in space. A number of space surveillance assets are used to measure and track spacecraft, used upper stages, and breakup debris. However, much of the debris and meteoroids encountered by spacecraft in Earth orbit is not easily measured or tracked. For every man-made object that we can track, there are hundreds of small debris that are too small to be tracked but still large enough to damage spacecraft. In addition, even if we knew today's environment with perfect knowledge, the debris environment is dynamic and would change tomorrow. This means that much of the risk from both meteoroids and anthropogenic debris is statistical in nature. NASA uses and maintains a number of instruments to statistically monitor the meteoroid and orbital debris environments, and uses this information to compute statistical models for use by spacecraft designers and operators. Because orbital debris is a result of human activities, NASA has led the US government in formulating national and international strategies that space users can employ to limit the growth of debris in the future. This talk will summarize the history and current state of meteoroid and space debris measurements and modeling, how the environment influences spacecraft design and operations, how we are designing the experiments of tomorrow to improve our knowledge, and how we are working internationally to preserve the space environment for the future.

  16. Evolution of the luminosity function of extragalactic objects

    NASA Technical Reports Server (NTRS)

    Petrosian, V.

    1985-01-01

    A nonparametric procedure for determination of the evolution of the luminosity function of extragalactic objects and use of this for prediction of expected redshift and luminosity distribution of objects is described. The relation between this statistical evolution of the population and their physical evolution, such as the variation with cosmological epoch of their luminosity and formation rate is presented. This procedure when applied to a sample of optically selected quasars with redshifts less than two shows that the luminosity function evolves more strongly for higher luminosities, indicating a larger quasar activity at earlier epochs and a more rapid evolution of the objects during their higher luminosity phases. It is also shown that absence of many quasars at redshifts greater than three implies slowing down of this evolution in the conventional cosmological models, perhaps indicating that this is near the epoch of the birth of the quasar (and galaxies).

  17. X-ray phase contrast imaging of objects with subpixel-size inhomogeneities: a geometrical optics model.

    PubMed

    Gasilov, Sergei V; Coan, Paola

    2012-09-01

    Several x-ray phase contrast extraction algorithms use a set of images acquired along the rocking curve of a perfect flat analyzer crystal to study the internal structure of objects. By measuring the angular shift of the rocking curve peak, one can determine the local deflections of the x-ray beam propagated through a sample. Additionally, some objects determine a broadening of the crystal rocking curve, which can be explained in terms of multiple refraction of x rays by many subpixel-size inhomogeneities contained in the sample. This fact may allow us to differentiate between materials and features characterized by different refraction properties. In the present work we derive an expression for the beam broadening in the form of a linear integral of the quantity related to statistical properties of the dielectric susceptibility distribution function of the object.

  18. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  19. Controlling the motion of multiple objects on a Chladni plate

    NASA Astrophysics Data System (ADS)

    Zhou, Quan; Sariola, Veikko; Latifi, Kourosh; Liimatainen, Ville

    2016-09-01

    The origin of the idea of moving objects by acoustic vibration can be traced back to 1787, when Ernst Chladni reported the first detailed studies on the aggregation of sand onto nodal lines of a vibrating plate. Since then and to this date, the prevailing view has been that the particle motion out of nodal lines is random, implying uncontrollability. But how random really is the out-of-nodal-lines motion on a Chladni plate? Here we show that the motion is sufficiently regular to be statistically modelled, predicted and controlled. By playing carefully selected musical notes, we can control the position of multiple objects simultaneously and independently using a single acoustic actuator. Our method allows independent trajectory following, pattern transformation and sorting of multiple miniature objects in a wide range of materials, including electronic components, water droplets loaded on solid carriers, plant seeds, candy balls and metal parts.

  20. Study on inverse estimation of radiative properties from directional radiances by using statistical RPSO algorithm

    NASA Astrophysics Data System (ADS)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk; Shin, Jong-Jin

    2016-09-01

    Infrared signals are widely used to discriminate objects against the background. Prediction of the infrared signal from an object surface is essential in evaluating the detectability of the object. An appropriate and easy method of procuring radiative properties such as the surface emissivity and bidirectional reflectivity is important in estimating infrared signals. Direct measurement can be a good choice, but it is a costly and time-consuming way of obtaining the radiative properties for surfaces coated with many different newly developed paints. In particular, measurement of the bidirectional reflectivity, usually expressed by the bidirectional reflectance distribution function (BRDF), is the most costly task. In this paper we present an inverse estimation method for the radiative properties that uses the directional radiances from the surface of concern. The inverse estimation method used in this study is the statistical repulsive particle swarm optimization (RPSO) algorithm, which uses randomly picked directional radiance data emitted and reflected from the surface. We test the proposed inverse method by considering the radiation from a steel plate surface coated with different paints under clear sunny-day conditions. For convenience, the directional radiance data from the steel plate within a spectral band of concern are obtained from simulation using the commercial software RadthermIR, instead of field measurement. A widely used BRDF model called the Sandford-Robertson (S-R) model is considered, and the RPSO process is then used to find the best-fitted model parameters for the S-R model. The results obtained from this study show excellent agreement with the reference property data used in the directional radiance simulation. The proposed process can be a useful way of obtaining the radiative properties from field-measured directional radiance data for surfaces coated with or without various kinds of paints of unknown radiative properties.
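
    A generic particle swarm optimisation sketch is given below to illustrate the inverse-fitting idea (plain PSO, not the statistical repulsive RPSO variant used in the paper). The forward "model" and the "measured" directional radiances are simple stand-ins, not the S-R BRDF or RadthermIR output.

        import numpy as np

        rng = np.random.default_rng(5)
        angles = np.linspace(0.1, 1.4, 30)
        true_params = np.array([0.8, 2.5])
        measured = true_params[0] * np.exp(-true_params[1] * angles)    # placeholder "radiance" data

        def cost(params):
            model = params[0] * np.exp(-params[1] * angles)
            return np.sum((model - measured) ** 2)

        n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
        pos = rng.uniform(0, 5, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_cost.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.uniform(size=(2, n_particles, 2))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 5.0)            # keep particles inside the search box
            costs = np.array([cost(p) for p in pos])
            better = costs < pbest_cost
            pbest[better], pbest_cost[better] = pos[better], costs[better]
            gbest = pbest[pbest_cost.argmin()].copy()

        print("recovered parameters:", gbest)             # should land close to [0.8, 2.5]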

  1. Performance analysis of different tuning rules for an isothermal CSTR using integrated EPC and SPC

    NASA Astrophysics Data System (ADS)

    Roslan, A. H.; Karim, S. F. Abd; Hamzah, N.

    2018-03-01

    This paper demonstrates the integration of Engineering Process Control (EPC) and Statistical Process Control (SPC) for the control of product concentration of an isothermal CSTR. The objectives of this study are to evaluate the performance of Ziegler-Nichols (Z-N), Direct Synthesis (DS), and Internal Model Control (IMC) tuning methods and determine the most effective method for this process. The simulation model was obtained from past literature and reconstructed using MATLAB SIMULINK to evaluate the process response. Additionally, the process stability, capability and normality were analyzed using Process Capability Sixpack reports in Minitab. Based on the results, DS gives the best response, having the smallest rise time, settling time, overshoot, undershoot, Integral Time Absolute Error (ITAE) and Integral Square Error (ISE). Also, based on statistical analysis, DS emerges as the best tuning method, as it exhibits the highest process stability and capability.
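
    The two error integrals used above to rank the tuning methods are simple to compute numerically; the sketch below evaluates ISE and ITAE for a placeholder closed-loop error signal rather than the paper's actual CSTR response.

        import numpy as np

        t = np.linspace(0, 50, 2001)
        error = np.exp(-0.3 * t) * np.cos(0.8 * t)       # stand-in for setpoint minus response
        dt = t[1] - t[0]

        ise = np.sum(error**2) * dt                      # Integral Square Error
        itae = np.sum(t * np.abs(error)) * dt            # Integral Time Absolute Error
        print(f"ISE = {ise:.3f}, ITAE = {itae:.3f}")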

  2. DATA QUALITY OBJECTIVES AND STATISTICAL DESIGN SUPPORT FOR DEVELOPMENT OF A MONITORING PROTOCOL FOR RECREATIONAL WATERS

    EPA Science Inventory

    The purpose of this report is to describe the outputs of the Data Quality Objectives (DQOs) Process and discussions about developing a statistical design that will be used to implement the research study of recreational beach waters.

  3. Risk Decision Making Model for Reservoir Floodwater Resources Utilization

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2017-12-01

    Floodwater resources utilization (FRU) can alleviate the shortage of water resources, but there are risks. In order to safely and efficiently utilize the floodwater resources, it is necessary to study the risk of reservoir FRU. In this paper, the risk rate of exceeding the design flood water level and the risk rate of exceeding safety discharge are estimated. Based on the principle of the minimum risk and the maximum benefit of FRU, a multi-objective risk decision making model for FRU is constructed. Probability theory and mathematical statistics are used to calculate the risk rate; the C-D production function method and emergy analysis method are used to calculate the risk benefit; the risk loss is related to flood inundation area and unit area loss; the multi-objective decision making problem of the model is solved by the constraint method. Taking the Shilianghe reservoir in Jiangsu Province as an example, the optimal equilibrium solution of FRU of the Shilianghe reservoir is found by using the risk decision making model, and the validity and applicability of the model are verified.

  4. A Statistical Model for Multilingual Entity Detection and Tracking

    DTIC Science & Technology

    2004-01-01

    ...Automatic Content Extraction (ACE) evaluation achieved top-tier results in all three evaluation languages. Detecting entities, whether named...of combining the detected mentions into groups of references to the same object. The work presented here is motivated by the ACE evaluation...Entropy (MaxEnt henceforth) (Berger et al., 1996) and Robust Risk Minimization (RRM henceforth)...

  5. Projecting wildfire area burned in the south-eastern United States, 2011-60

    Treesearch

    Jeff Prestemon; Uma Shankar; Aijun Xiu; K. Talgo; D. Yang; Ernest Dixon IV; Donald McKenzie; Karen L. Abt

    2016-01-01

    Future changes in society and climate are expected to affect wildfire activity in the south-eastern United States. The objective of this research was to understand how changes in both climate and society may affect wildfire in the coming decades. We estimated a three-stage statistical model of wildfire area burned by ecoregion province for lightning and human causes (...

  6. A statistical model for determining impact of wildland fires on Particulate Matter (PM2.5) in Central California aided by satellite imagery of smoke

    Treesearch

    Haiganoush K. Preisler; Donald Schweizer; Ricardo Cisneros; Trent Procter; Mark Ruminski; Leland Tarnay

    2015-01-01

    As the climate in California warms and wildfires become larger and more severe, satellite-based observational tools are frequently used for studying the impact of those fires on air quality. However, little objective work has been done to quantify the skill these satellite observations of smoke plumes have in predicting impacts to PM2.5 concentrations...

  7. BagMOOV: A novel ensemble for heart disease prediction bootstrap aggregation with multi-objective optimized voting.

    PubMed

    Bashir, Saba; Qamar, Usman; Khan, Farhan Hassan

    2015-06-01

    Conventional clinical decision support systems are based on individual classifiers or simple combinations of these classifiers, which tend to show moderate performance. This research paper presents a novel classifier ensemble framework based on an enhanced bagging approach with a multi-objective weighted voting scheme for prediction and analysis of heart disease. The proposed model overcomes the limitations of conventional performance by utilizing an ensemble of five heterogeneous classifiers: Naïve Bayes, linear regression, quadratic discriminant analysis, an instance-based learner and support vector machines. Five different datasets are used for experimentation, evaluation and validation. The datasets are obtained from publicly available data repositories. Effectiveness of the proposed ensemble is investigated by comparison of results with several classifiers. Prediction results of the proposed ensemble model are assessed by ten-fold cross-validation and ANOVA statistics. The experimental evaluation shows that the proposed framework deals with all types of attributes and achieves a high diagnosis accuracy of 84.16%, with 93.29% sensitivity, 96.70% specificity, and an 82.15% f-measure. An F-ratio higher than F-critical and a p-value below 0.05 at the 95% confidence level indicate that the results are highly statistically significant for most of the datasets.
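
    In the same spirit, a heterogeneous soft-voting ensemble can be assembled in a few lines with scikit-learn, as sketched below. The paper's multi-objective weight optimisation is not reproduced here; the voting weights and the stand-in dataset are placeholders.

        from sklearn.datasets import load_breast_cancer
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.ensemble import VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)    # stand-in dataset, not the heart-disease data

        ensemble = VotingClassifier(
            estimators=[
                ("nb", GaussianNB()),
                ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
                ("qda", QuadraticDiscriminantAnalysis()),
                ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),  # instance-based learner
                ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
            ],
            voting="soft",
            weights=[1, 2, 1, 1, 2],                  # placeholder weights, not optimised
        )
        print("10-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=10).mean().round(3))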

  8. Trackable life: Data, sequence, and organism in movement ecology.

    PubMed

    Benson, Etienne S

    2016-06-01

    Over the past decade an increasing number of ecologists have begun to frame their work as a contribution to the emerging research field of movement ecology. This field's primary object of research is the movement track, which is usually operationalized as a series of discrete "steps and stops" that represent a portion of an animal's "lifetime track." Its practitioners understand their field as dependent on recent technical advances in tracking organisms and analyzing their movements. By making movement their primary object of research, rather than simply an expression of deeper biological phenomena, movement ecologists are able to generalize across the movement patterns of a wide variety of species and to draw on statistical techniques developed to model the movements of non-living things. Although it can trace its roots back to a long tradition of statistical models of movement, the field relies heavily on metaphors from genomics; in particular, movement tracks have been seen as similar to DNA sequences. Though this has helped movement ecology consolidate around a shared understanding of movement, the field may need to broaden its understanding of movement beyond the sequence if it is to realize its potential to address urgent concerns such as biodiversity loss. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A statistical model describing combined irreversible electroporation and electroporation-induced blood-brain barrier disruption

    PubMed Central

    Sharabi, Shirley; Kos, Bor; Last, David; Guez, David; Daniels, Dianne; Harnof, Sagi; Miklavcic, Damijan

    2016-01-01

    Background Electroporation-based therapies such as electrochemotherapy (ECT) and irreversible electroporation (IRE) are emerging as promising tools for treatment of tumors. When applied to the brain, electroporation can also induce transient blood-brain-barrier (BBB) disruption in volumes extending beyond IRE, thus enabling efficient drug penetration. The main objective of this study was to develop a statistical model predicting cell death and BBB disruption induced by electroporation. This model can be used for individual treatment planning. Material and methods Cell death and BBB disruption models were developed based on the Peleg-Fermi model in combination with numerical models of the electric field. The model calculates the electric field thresholds for cell kill and BBB disruption and describes the dependence on the number of treatment pulses. The model was validated using in vivo experimental data consisting of MRIs of rat brains following electroporation treatments. Results Linear regression analysis confirmed that the model described the IRE and BBB disruption volumes as a function of the number of treatment pulses (r2 = 0.79; p < 0.008, r2 = 0.91; p < 0.001). The results showed a strong plateau effect as the pulse number increased. The ratio between complete cell death and no cell death thresholds was relatively narrow (between 0.88 and 0.91) even for small numbers of pulses and depended weakly on the number of pulses. For BBB disruption, the ratio increased with the number of pulses. BBB disruption radii were on average 67% ± 11% larger than IRE volumes. Conclusions The statistical model can be used to describe the dependence of treatment effects on the number of pulses, independent of the experimental setup. PMID:27069447
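
    A hedged sketch of a Fermi-type dose-response curve of the kind used in the Peleg-Fermi framework is given below: the probability of cell kill is a logistic function of the local electric field, with a critical field Ec(n) and a kinetic constant A(n) that decay with the pulse number n. All parameter values are arbitrary illustrations, not fitted values from this study.

        import numpy as np

        def cell_kill_probability(E, n, Ec0=1200.0, k1=0.03, A0=200.0, k2=0.03):
            """P(kill) at field E (V/cm) after n pulses; Ec and A shrink as pulses accumulate."""
            Ec = Ec0 * np.exp(-k1 * n)     # critical field decreases with more pulses
            A = A0 * np.exp(-k2 * n)       # transition width also decreases
            return 1.0 / (1.0 + np.exp((Ec - E) / A))

        fields = np.array([400.0, 800.0, 1200.0])
        print(np.round(cell_kill_probability(fields, n=90), 3))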

  10. Real-time monitoring of a coffee roasting process with near infrared spectroscopy using multivariate statistical analysis: A feasibility study.

    PubMed

    Catelani, Tiago A; Santos, João Rodrigo; Páscoa, Ricardo N M J; Pezza, Leonardo; Pezza, Helena R; Lopes, João A

    2018-03-01

    This work proposes the use of near infrared (NIR) spectroscopy in diffuse reflectance mode and multivariate statistical process control (MSPC) based on principal component analysis (PCA) for real-time monitoring of the coffee roasting process. The main objective was the development of an MSPC methodology able to detect disturbances to the roasting process at an early stage, using real-time acquisition of NIR spectra. A total of fifteen roasting batches were defined according to an experimental design to develop the MSPC models. This methodology was tested on a set of five batches in which disturbances of different natures were imposed to simulate real faulty situations. Some of these batches were used to optimize the model while the remaining ones were used to test the methodology. A modelling strategy based on a time sliding window provided the best results in terms of distinguishing batches with and without disturbances, using typical MSPC charts: Hotelling's T2 and squared prediction error statistics. A PCA model encompassing a time window of four minutes with three principal components was able to efficiently detect all disturbances assayed. NIR spectroscopy combined with the MSPC approach proved to be an adequate auxiliary tool for coffee roasters to detect faults in a conventional roasting process in real-time. Copyright © 2017 Elsevier B.V. All rights reserved.
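
    A minimal sketch of the PCA-based MSPC monitoring described above follows: a PCA model is fitted to in-control spectra, and Hotelling's T2 and the squared prediction error (SPE/Q) are computed for new spectra. The spectra here are synthetic placeholders, not NIR measurements of roasting coffee.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)
        X_ref = rng.normal(size=(200, 50))            # "normal operation" spectra (placeholder)
        X_new = rng.normal(size=(10, 50)) + 0.5       # new spectra with an imposed shift

        pca = PCA(n_components=3).fit(X_ref)

        def t2_and_spe(X):
            scores = pca.transform(X)                                        # centred and projected
            t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)         # Hotelling's T2
            spe = np.sum((X - pca.inverse_transform(scores)) ** 2, axis=1)   # squared prediction error (Q)
            return t2, spe

        t2, spe = t2_and_spe(X_new)
        print("T2: ", np.round(t2, 2))
        print("SPE:", np.round(spe, 2))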

  11. Penalized likelihood and multi-objective spatial scans for the detection and inference of irregular clusters

    PubMed Central

    2010-01-01

    Background Irregularly shaped spatial clusters are difficult to delineate. A cluster found by an algorithm often spreads through large portions of the map, impacting its geographical meaning. Penalized likelihood methods for Kulldorff's spatial scan statistics have been used to control the excessive freedom of the shape of clusters. Penalty functions based on cluster geometry and non-connectivity have been proposed recently. Another approach involves the use of a multi-objective algorithm to maximize two objectives: the spatial scan statistics and the geometric penalty function. Results & Discussion We present a novel scan statistic algorithm employing a function based on the graph topology to penalize the presence of under-populated disconnection nodes in candidate clusters, the disconnection nodes cohesion function. A disconnection node is defined as a region within a cluster, such that its removal disconnects the cluster. By applying this function, the most geographically meaningful clusters are sifted through the immense set of possible irregularly shaped candidate cluster solutions. To evaluate the statistical significance of solutions for multi-objective scans, a statistical approach based on the concept of attainment function is used. In this paper we compare different penalized likelihoods employing the geometric and non-connectivity regularity functions and the novel disconnection nodes cohesion function. We also build multi-objective scans using those three functions and compare them with the previous penalized likelihood scans. An application is presented using comprehensive state-wide data for Chagas' disease in puerperal women in Minas Gerais state, Brazil. Conclusions We show that, compared to the other single-objective algorithms, multi-objective scans present better performance, regarding power, sensitivity and positive predictive value. The multi-objective non-connectivity scan is faster and better suited for the detection of moderately irregularly shaped clusters. The multi-objective cohesion scan is most effective for the detection of highly irregularly shaped clusters. PMID:21034451
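
    For orientation, the core quantity being maximised and penalised above is Kulldorff's log-likelihood ratio for a candidate zone under the Poisson model. The sketch below assumes the usual convention that expected counts are scaled so the total expected equals the total observed; the numbers are arbitrary.

        import numpy as np

        def kulldorff_llr(c_in, e_in, c_total):
            """Poisson scan-statistic LLR for a zone with c_in observed and e_in expected cases."""
            if c_in <= e_in:                       # only excess-risk clusters are of interest
                return 0.0
            c_out, e_out = c_total - c_in, c_total - e_in
            return c_in * np.log(c_in / e_in) + c_out * np.log(c_out / e_out)

        print(kulldorff_llr(c_in=40, e_in=20, c_total=500))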

  12. Mediation Analysis with Survival Outcomes: Accelerated Failure Time vs. Proportional Hazards Models

    PubMed Central

    Gelfand, Lois A.; MacKinnon, David P.; DeRubeis, Robert J.; Baraldi, Amanda N.

    2016-01-01

    Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration. Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings. Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG. Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results. PMID:27065906

  13. Statistical Reference Datasets

    National Institute of Standards and Technology Data Gateway

    Statistical Reference Datasets (Web, free access)   The Statistical Reference Datasets is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.

  14. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324

  15. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes

    PubMed Central

    2017-01-01

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926–1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745–2750; Thiessen & Yee 2010 Child Development 81, 1287–1303; Saffran 2002 Journal of Memory and Language 47, 172–196; Misyak & Christiansen 2012 Language Learning 62, 302–331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246–263; Thiessen et al. 2013 Psychological Bulletin 139, 792–814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310–343). This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences'. PMID:27872374

  16. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    PubMed

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310-343). This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  17. Predicting human skin absorption of chemicals: development of a novel quantitative structure activity relationship.

    PubMed

    Luo, Wen; Medrek, Sarah; Misra, Jatin; Nohynek, Gerhard J

    2007-02-01

    The objective of this study was to construct and validate a quantitative structure-activity relationship model for skin absorption. Such models are valuable tools for screening and prioritization in safety and efficacy evaluation, and risk assessment of drugs and chemicals. A database of 340 chemicals with percutaneous absorption data was assembled. Two models were derived from the training set consisting of 306 chemicals (90/10 random split). In addition to the experimental K(ow) values, over 300 2D and 3D atomic and molecular descriptors were analyzed using MDL's QsarIS computer program. Subsequently, the models were validated using both internal (leave-one-out) and external validation (test set) procedures. Using stepwise regression analysis, three molecular descriptors were determined to have significant statistical correlation with K(p) (R2 = 0.8225): logK(ow), X0 (quantification of both molecular size and the degree of skeletal branching), and SsssCH (count of aromatic carbon groups). In conclusion, two models to estimate skin absorption were developed. When compared to other skin absorption QSAR models in the literature, our model incorporated more chemicals and explored a large number of descriptors. Additionally, our models are reasonably predictive and have met both internal and external statistical validations.

  18. Evaluation of the implementation of an integrated program for musculoskeletal system care.

    PubMed

    Larrañaga, Igor; Soto-Gordoa, Myriam; Arrospide, Arantzazu; Jauregi, María Luz; Millas, Jesús; San Vicente, Ricardo; Aguirrebeña, Jabier; Mar, Javier

    The chronic nature of musculoskeletal diseases requires integrated care involving Primary Care and the specialities of Rheumatology, Traumatology and Rehabilitation. The aim of this study was to assess the implementation of an integrated organizational model in osteoporosis, low back pain, shoulder disease and knee disease using Deming's continuous improvement process and considering referrals and resource consumption. A simulation model was used in the planning stage to predict the evolution of musculoskeletal disease resource consumption and to carry out a Budget Impact Analysis from 2012 to 2020 in the Goierri-Alto Urola region. In the checking stage the status of the process in 2014 was evaluated using statistical analysis to check the degree of achievement of the objectives for each speciality. Simulation models showed that the population with musculoskeletal disease in Goierri-Alto Urola will increase by 4.4% by 2020. As a result, expenses under a conventional healthcare system will have increased by 5.9%. However, if the intervention reaches its objectives the budget would decrease by 8.5%. The statistical analysis showed a decline in referrals to the Traumatology service and a reduction in successive consultations in all specialities. The implementation of the integrated organizational model in osteoporosis, low back pain, shoulder disease and knee disease is still at an early stage. However, the empowerment of Primary Care has improved patient referrals and reduced costs. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  19. Continuum radiation from active galactic nuclei: A statistical study

    NASA Technical Reports Server (NTRS)

    Isobe, T.; Feigelson, E. D.; Singh, K. P.; Kembhavi, A.

    1986-01-01

    The physics of the continuum spectrum of active galactic nuclei (AGNs) was examined using a large data set and rigorous statistical methods. A database was constructed for 469 objects, which include radio selected quasars, optically selected quasars, X-ray selected AGNs, BL Lac objects, and optically unidentified compact radio sources. Each object has measurements of its radio, optical, and X-ray core continuum luminosities, though many of these are upper limits. Since many radio sources have extended components, the core component was carefully separated from the total radio luminosity. With survival analysis statistical methods, which can treat upper limits correctly, these data can yield better statistical results than those previously obtained. A variety of statistical tests are performed, such as the comparison of the luminosity functions in different subsamples, and linear regressions of luminosities in different bands. Interpretation of the results leads to the following tentative conclusions: the main emission mechanism of optically selected quasars and X-ray selected AGNs is thermal, while that of BL Lac objects is synchrotron; radio selected quasars may have two different emission mechanisms in the X-ray band; BL Lac objects appear to be special cases of the radio selected quasars; some compact radio sources show the possibility of synchrotron self-Compton (SSC) in the optical band; and the spectral index between the optical and the X-ray bands depends on the optical luminosity.

  20. Facial resemblance to emotions: group differences, impression effects, and race stereotypes.

    PubMed

    Zebrowitz, Leslie A; Kikuchi, Masako; Fellous, Jean-Marc

    2010-02-01

    The authors used connectionist modeling to extend previous research on emotion overgeneralization effects. Study 1 demonstrated that neutral expression male faces objectively resemble angry expressions more than female faces do, female faces objectively resemble surprise expressions more than male faces do, White faces objectively resemble angry expressions more than Black or Korean faces do, and Black faces objectively resemble happy and surprise expressions more than White faces do. Study 2 demonstrated that objective resemblance to emotion expressions influences trait impressions even when statistically controlling possible confounding influences of attractiveness and babyfaceness. It further demonstrated that emotion overgeneralization is moderated by face race and that racial differences in emotion resemblance contribute to White perceivers' stereotypes of Blacks and Asians. These results suggest that intergroup relations may be strained not only by cultural stereotypes but also by adaptive responses to emotion expressions that are overgeneralized to groups whose faces subtly resemble particular emotions. Copyright 2009 APA, all rights reserved

  1. Objective tropical cyclone extratropical transition detection in high-resolution reanalysis and climate model data

    DOE PAGES

    Zarzycki, Colin M.; Thatcher, Diana R.; Jablonowski, Christiane

    2017-01-22

    This paper describes an objective technique for detecting the extratropical transition (ET) of tropical cyclones (TCs) in high-resolution gridded climate data. The algorithm is based on previous observational studies using phase spaces to define the symmetry and vertical thermal structure of cyclones. Storm tracking is automated, allowing for direct analysis of climate data. Tracker performance in the North Atlantic is assessed using 23 years of data from the variable-resolution Community Atmosphere Model (CAM) at two different resolutions (DX 55 km and 28 km), the Climate Forecast System Reanalysis (CFSR, DX 38 km), and the ERA-Interim Reanalysis (ERA-I, DX 80 km). The mean spatiotemporal climatologies and seasonal cycles of objectively detected ET in the observationally constrained CFSR and ERA-I are well matched to previous observational studies, demonstrating the capability of the scheme to adequately find events. High-resolution CAM reproduces TC and ET statistics that are in general agreement with reanalyses. One notable model bias, however, is a significantly longer time between ET onset and ET completion in CAM, particularly for TCs that lose symmetry prior to developing a cold-core structure and becoming extratropical cyclones, demonstrating the capability of this method to expose model biases in simulated cyclones beyond the tropical phase.

  2. Bootstrap study of genome-enabled prediction reliabilities using haplotype blocks across Nordic Red cattle breeds.

    PubMed

    Cuyabano, B C D; Su, G; Rosa, G J M; Lund, M S; Gianola, D

    2015-10-01

    This study compared the accuracy of genome-enabled prediction models using individual single nucleotide polymorphisms (SNP) or haplotype blocks as covariates when using either a single breed or a combined population of Nordic Red cattle. The main objective was to compare predictions of breeding values of complex traits using a combined training population with haplotype blocks, with predictions using a single breed as training population and individual SNP as predictors. To compare the prediction reliabilities, bootstrap samples were taken from the test data set. With the bootstrapped samples of prediction reliabilities, we built and graphed confidence ellipses to allow comparisons. Finally, measures of statistical distances were used to calculate the gain in predictive ability. Our analyses are innovative in the context of assessment of predictive models, allowing a better understanding of prediction reliabilities and providing a statistical basis to effectively calibrate whether one prediction scenario is indeed more accurate than another. An ANOVA indicated that use of haplotype blocks produced significant gains mainly when Bayesian mixture models were used but not when Bayesian BLUP was fitted to the data. Furthermore, when haplotype blocks were used to train prediction models in a combined Nordic Red cattle population, we obtained up to a statistically significant 5.5% average gain in prediction accuracy, over predictions using individual SNP and training the model with a single breed. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
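
    The bootstrap comparison of prediction reliabilities can be sketched in a few lines of Python; the data below are simulated stand-ins (the true accuracies, sample size and correlation-based reliability proxy are all assumptions), not the Nordic Red cattle data.

        import numpy as np

        rng = np.random.default_rng(1)
        n_test, n_boot = 500, 2000

        # Illustrative stand-ins for observed values and two competing sets of predictions
        y = rng.normal(0, 1, n_test)
        pred_snp   = y * 0.60 + rng.normal(0, 0.8, n_test)   # e.g. single-breed, individual-SNP model
        pred_block = y * 0.65 + rng.normal(0, 0.8, n_test)   # e.g. combined-population, haplotype-block model

        acc = np.empty((n_boot, 2))
        for b in range(n_boot):
            idx = rng.integers(0, n_test, n_test)             # resample the test animals with replacement
            acc[b, 0] = np.corrcoef(y[idx], pred_snp[idx])[0, 1]
            acc[b, 1] = np.corrcoef(y[idx], pred_block[idx])[0, 1]

        gain = acc[:, 1] - acc[:, 0]
        lo, hi = np.percentile(gain, [2.5, 97.5])
        print(f"mean gain in accuracy: {gain.mean():.3f}  (95% bootstrap interval {lo:.3f} to {hi:.3f})")

    Plotting the paired bootstrap accuracies against each other would give the confidence-ellipse view the abstract describes.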

  3. Computational statistics using the Bayesian Inference Engine

    NASA Astrophysics Data System (ADS)

    Weinberg, Martin D.

    2013-09-01

    This paper introduces the Bayesian Inference Engine (BIE), a general, parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of the Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical and download details are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.

  4. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, and first-order and second-order analytical theories, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
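
    A toy illustration of the hybrid idea, not the authors' propagators: an analytical approximation deliberately misses a periodic perturbation, and an additive Holt-Winters model (here statsmodels' ExponentialSmoothing) is trained on the residual and used to correct the forecast. Signal shapes, the period and the noise level are assumptions.

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(1)
        t = np.arange(400)                           # epochs along the orbit
        period = 50

        # "Truth": secular drift plus a periodic perturbation the low-order theory leaves out
        truth = 0.002 * t + 0.05 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.002, t.size)
        analytic = 0.002 * t                         # low-order analytical approximation

        n_train = 300
        residual = truth[:n_train] - analytic[:n_train]

        # Additive Holt-Winters models the missing (quasi-periodic) dynamics in the residual
        hw = ExponentialSmoothing(residual, trend="add",
                                  seasonal="add", seasonal_periods=period).fit()
        correction = hw.forecast(t.size - n_train)

        hybrid = analytic[n_train:] + correction
        print("mean abs error, analytic only:", np.abs(truth[n_train:] - analytic[n_train:]).mean().round(4))
        print("mean abs error, hybrid:       ", np.abs(truth[n_train:] - hybrid).mean().round(4))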

  5. Poisson-process generalization for the trading waiting-time distribution in a double-auction mechanism

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Ponta, Linda; Raberto, Marco; Scalas, Enrico

    2005-05-01

    In this paper, empirical analyses and computational experiments are presented on high-frequency data for a double-auction (book) market. The main objective of the paper is to generalize the order waiting-time process in order to properly model this empirical evidence. The empirical study is performed on the best bid and best ask data of 7 U.S. financial markets, for 30-stock time series. In particular, statistical properties of trading waiting times have been analyzed and the quality of fits is evaluated by suitable statistical tests, i.e., by comparing empirical distributions with theoretical models. Starting from the statistical studies on real data, attention has been focused on the reproducibility of such results in an artificial market. The computational experiments have been performed within the Genoa Artificial Stock Market. In the market model, heterogeneous agents trade one risky asset in exchange for cash. Agents have zero intelligence and issue random limit or market orders depending on their budget constraints. The price is cleared by means of a limit order book. The order generation is modelled with a renewal process. Based on empirical trading estimation, the distribution of waiting times between two consecutive orders is modelled by a mixture of exponential processes. Results show that the empirical waiting-time distribution can be considered as a generalization of a Poisson process. Moreover, the renewal process can approximate real data and implementation in the artificial stock market can reproduce the trading activity in a realistic way.
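
    A brief sketch, using simulated rather than market data, of the kind of test the abstract refers to: waiting times drawn from a mixture of exponentials are compared against the single-exponential (plain Poisson process) null with a Kolmogorov-Smirnov test. The mixture weights and scales are illustrative assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        n = 5000

        # Waiting times drawn from a mixture of two exponentials (fast vs slow trading regimes)
        fast = rng.exponential(0.5, n)
        slow = rng.exponential(5.0, n)
        waits = np.where(rng.random(n) < 0.7, fast, slow)

        # Null hypothesis: a plain Poisson process, i.e. a single exponential with the same mean
        scale = waits.mean()
        ks_stat, p_value = stats.kstest(waits, "expon", args=(0, scale))
        print(f"KS statistic {ks_stat:.3f}, p-value {p_value:.2e}")   # small p: single exponential rejected

        # A coefficient of variation above 1 is another quick signature of the non-Poisson mixture
        print(f"coefficient of variation: {waits.std() / waits.mean():.2f}")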

  6. Reconciling intuitive physics and Newtonian mechanics for colliding objects.

    PubMed

    Sanborn, Adam N; Mansinghka, Vikash K; Griffiths, Thomas L

    2013-04-01

    People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by the uncertainty they have for physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.
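
    The following sketch illustrates the noisy-Newton idea in one dimension, with all numbers assumed for illustration: the posterior over the mass ratio of two elastically colliding objects is computed on a grid from noisy percepts of the post-collision velocities, combining Newtonian constraints with Gaussian sensory noise.

        import numpy as np

        rng = np.random.default_rng(3)

        def final_velocities(r, u=1.0):
            """Elastic 1-D collision: a moving object (mass ratio r = m1/m2) hits a resting one."""
            v1 = (r - 1.0) / (r + 1.0) * u
            v2 = 2.0 * r / (r + 1.0) * u
            return v1, v2

        # True situation and noisy percepts of the post-collision velocities
        true_r = 2.0
        sigma = 0.15                                   # assumed perceptual noise on velocity
        v1, v2 = final_velocities(true_r)
        obs1, obs2 = v1 + rng.normal(0, sigma), v2 + rng.normal(0, sigma)

        # Posterior over the mass ratio on a grid, with a broad log-normal prior
        grid = np.linspace(0.1, 10.0, 2000)
        prior = np.exp(-0.5 * np.log(grid) ** 2) / grid
        m1, m2 = final_velocities(grid)
        loglik = -0.5 * ((obs1 - m1) ** 2 + (obs2 - m2) ** 2) / sigma ** 2
        post = prior * np.exp(loglik - loglik.max())
        post /= np.trapz(post, grid)

        print(f"posterior mean mass ratio: {np.trapz(grid * post, grid):.2f} (true {true_r})")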

  7. Simulation of Micron-Sized Debris Populations in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Hyde, J. L.; Prior, T.; Matney, Mark

    2010-01-01

    The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 μm and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys on the space-exposed surfaces of returned spacecraft. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter and damage laws, which relate impact damage with the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 μm.

  8. Simulation of Micron-Sized Debris Populations in Low Earth Orbit

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Matney, M.; Liou, J.-C.; Hyde, J. L.; Prior, T. G.

    2010-01-01

    The update of ORDEM2000, the NASA Orbital Debris Engineering Model, to its new version, ORDEM2010, is nearly complete. As a part of the ORDEM upgrade, this paper addresses the simulation of micro-debris (greater than 10 micron and smaller than 1 mm in size) populations in low Earth orbit. The principal data used in the modeling of the micron-sized debris populations are in-situ hypervelocity impact records, accumulated in post-flight damage surveys on the space-exposed surfaces of returned spacecraft. The development of the micro-debris model populations follows the general approach to deriving other ORDEM2010-required input populations for various components and types of debris. This paper describes the key elements and major steps in the statistical inference of the ORDEM2010 micro-debris populations. A crucial step is the construction of a degradation/ejecta source model to provide prior information on the micron-sized objects (such as orbital and object-size distributions). Another critical step is to link model populations with data, which is rather involved. It demands detailed information on area-time/directionality for all the space-exposed elements of a shuttle orbiter and damage laws, which relate impact damage with the physical properties of a projectile and impact conditions such as impact angle and velocity. Also needed are model-predicted debris fluxes as a function of object size and impact velocity from all possible directions. In spite of the very limited quantity of the available shuttle impact data, the population-derivation process is satisfactorily stable. Final modeling results obtained from shuttle window and radiator impact data are reasonably convergent and consistent, especially for the debris populations with object-size thresholds at 10 and 100 micron.

  9. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
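
    The Gauss-Newton update used in the Parameter-Estimation Process can be written compactly; the sketch below applies it to a toy two-parameter forward model with a weighted least-squares objective. The forward model, weights and starting values are invented for illustration and have nothing to do with MODFLOW's internals.

        import numpy as np

        # Toy "model": simulated heads at observation points as a function of two parameters
        # (loosely, a log hydraulic conductivity and a recharge rate); purely illustrative.
        x_obs = np.linspace(0.0, 1.0, 12)

        def simulate_heads(p):
            logK, rch = p
            return 10.0 - np.exp(-logK) * x_obs + rch * x_obs ** 2

        true_p = np.array([0.5, 1.5])
        weights = np.full(x_obs.size, 4.0)                 # 1 / observation variance
        y_obs = simulate_heads(true_p) + np.random.default_rng(4).normal(0, 0.5, x_obs.size)

        def jacobian(p, eps=1e-6):
            base = simulate_heads(p)
            cols = [(simulate_heads(p + eps * np.eye(p.size)[j]) - base) / eps for j in range(p.size)]
            return np.column_stack(cols)

        # Gauss-Newton iterations on the weighted least-squares objective
        p = np.array([0.0, 0.0])
        for it in range(20):
            r = y_obs - simulate_heads(p)                  # residuals
            J = jacobian(p)
            W = np.diag(weights)
            step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
            p = p + step
            if np.linalg.norm(step) < 1e-8:
                break

        sse = float(weights @ (y_obs - simulate_heads(p)) ** 2)
        print(f"estimated parameters {p.round(3)}, weighted objective {sse:.3f}")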

  10. Application of multivariable statistical techniques in plant-wide WWTP control strategies analysis.

    PubMed

    Flores, X; Comas, J; Roda, I R; Jiménez, L; Gernaey, K V

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques make it possible to (i) determine natural groups or clusters of control strategies with similar behaviour, (ii) find and interpret hidden, complex and causal relationships in the data set, and (iii) identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for the analysis and interpretation of complex multicriteria data sets and allows an improved use of information for effective evaluation of control strategies.
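
    A compact sketch of the PCA and cluster-analysis part of such a study on a made-up evaluation matrix (30 hypothetical control strategies scored on 8 criteria); it uses scikit-learn rather than whatever software the authors employed, and the group structure is injected artificially so the clustering has something to find.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)

        # Illustrative evaluation matrix: 30 control strategies x 8 performance criteria
        # (effluent quality, aeration energy, pumping energy, sludge production, ...)
        X = rng.normal(0, 1, (30, 8))
        X[15:] += 1.5                                  # second group of strategies behaving differently

        Z = StandardScaler().fit_transform(X)

        # Principal components summarise the criteria into a few latent factors
        pca = PCA(n_components=2)
        scores = pca.fit_transform(Z)
        print("explained variance ratios:", pca.explained_variance_ratio_.round(2))

        # Cluster analysis groups strategies with similar behaviour in the reduced space
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
        print("cluster sizes:", np.bincount(labels))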

  11. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  12. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.

  13. Influence of adaptive statistical iterative reconstruction algorithm on image quality in coronary computed tomography angiography

    PubMed Central

    Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess

    2016-01-01

    Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR. PMID:28405477

  14. A statistical metadata model for clinical trials' data management.

    PubMed

    Vardaki, Maria; Papageorgiou, Haralambos; Pentaris, Fragkiskos

    2009-08-01

    We introduce a statistical, process-oriented metadata model to describe the process of medical research data collection, management, results analysis and dissemination. Our approach explicitly provides a structure for pieces of information used in Clinical Study Data Management Systems, enabling a more active role for any associated metadata. Using the object-oriented paradigm, we describe the classes of our model that participate during the design of a clinical trial and the subsequent collection and management of the relevant data. The advantage of our approach is that we focus on presenting the structural inter-relation of these classes when used during dataset manipulation by proposing certain transformations that model the simultaneous processing of both data and metadata. Our solution reduces the possibility of human errors and allows for the tracking of all changes made during the dataset lifecycle. The explicit modeling of processing steps improves data quality and assists in the problem of handling data collected in different clinical trials. The case study illustrates the applicability of the proposed framework, demonstrating conceptually the simultaneous handling of datasets collected during two randomized clinical studies. Finally, we provide the main considerations for implementing the proposed framework into a modern Metadata-enabled Information System.

  15. Time Series Analysis Based on Running Mann Whitney Z Statistics

    USDA-ARS?s Scientific Manuscript database

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
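
    Since the abstract is truncated, the sketch below fills in one plausible reading of the procedure, labeled as such: Mann-Whitney U statistics are computed for each moving window against the remainder of the series and converted to Z scores with the large-sample normal approximation (the original apparently normalizes via a Monte Carlo step instead). The window length and test series are assumptions.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def running_mw_z(series, window):
            """Mann-Whitney U of each moving window against the rest of the record, as Z scores."""
            series = np.asarray(series, dtype=float)
            z = np.full(series.size, np.nan)
            for i in range(series.size - window + 1):
                inside = series[i:i + window]
                outside = np.concatenate([series[:i], series[i + window:]])
                u = mannwhitneyu(inside, outside).statistic
                n1, n2 = inside.size, outside.size
                mu = n1 * n2 / 2.0
                sd = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)   # normal approximation, no tie correction
                z[i + window // 2] = (u - mu) / sd             # centre the score on the window midpoint
            return z

        # Example: a step change halfway through an otherwise stationary series
        rng = np.random.default_rng(6)
        y = np.concatenate([rng.normal(0, 1, 100), rng.normal(1.2, 1, 100)])
        z = running_mw_z(y, window=25)
        print("max |Z| near the change point:", np.nanmax(np.abs(z)).round(2))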

  16. Sexual network drivers of HIV and herpes simplex virus type 2 transmission

    PubMed Central

    Omori, Ryosuke; Abu-Raddad, Laith J.

    2017-01-01

    Objectives: HIV and herpes simplex virus type 2 (HSV-2) infections are sexually transmitted and propagate in sexual networks. Using mathematical modeling, we aimed to quantify the effects of key network statistics on infection transmission, and the extent to which HSV-2 prevalence can be a proxy of HIV prevalence. Design/methods: An individual-based simulation model was constructed to describe sex partnering and infection transmission, and was parameterized with representative natural history, transmission, and sexual behavior data. Correlations were assessed on model outcomes (HIV/HSV-2 prevalences) and multiple linear regressions were conducted to estimate adjusted associations and effect sizes. Results: HIV prevalence was one-third or less of HSV-2 prevalence. HIV and HSV-2 prevalences were associated with a Spearman's rank correlation coefficient of 0.64 (95% confidence interval: 0.58–0.69). Collinearities among network statistics were detected, most notably between concurrency versus mean and variance of number of partners. Controlling for confounding, unmarried mean/variance of number of partners (or alternatively concurrency) were the strongest predictors of HIV prevalence. Meanwhile, unmarried/married mean/variance of number of partners (or alternatively concurrency), and clustering coefficient were the strongest predictors of HSV-2 prevalence. HSV-2 prevalence was a strong predictor of HIV prevalence by proxying effects of network statistics. Conclusion: Network statistics produced similar and differential effects on HIV/HSV-2 transmission, and explained most of the variation in HIV and HSV-2 prevalences. HIV prevalence reflected primarily mean and variance of number of partners, but HSV-2 prevalence was affected by a range of network statistics. HSV-2 prevalence (as a proxy) can forecast a population's HIV epidemic potential, thereby informing interventions. PMID:28514276

  17. Cognitive Complaints After Breast Cancer Treatments: Examining the Relationship With Neuropsychological Test Performance

    PubMed Central

    2013-01-01

    Background Cognitive complaints are reported frequently after breast cancer treatments. Their association with neuropsychological (NP) test performance is not well-established. Methods Early-stage, posttreatment breast cancer patients were enrolled in a prospective, longitudinal, cohort study prior to starting endocrine therapy. Evaluation included an NP test battery and self-report questionnaires assessing symptoms, including cognitive complaints. Multivariable regression models assessed associations among cognitive complaints, mood, treatment exposures, and NP test performance. Results One hundred eighty-nine breast cancer patients, aged 21–65 years, completed the evaluation; 23.3% endorsed higher memory complaints and 19.0% reported higher executive function complaints (>1 SD above the mean for healthy control sample). Regression modeling demonstrated a statistically significant association of higher memory complaints with combined chemotherapy and radiation treatments (P = .01), poorer NP verbal memory performance (P = .02), and higher depressive symptoms (P < .001), controlling for age and IQ. For executive functioning complaints, multivariable modeling controlling for age, IQ, and other confounds demonstrated statistically significant associations with better NP visual memory performance (P = .03) and higher depressive symptoms (P < .001), whereas combined chemotherapy and radiation treatment (P = .05) approached statistical significance. Conclusions About one in five post–adjuvant treatment breast cancer patients had elevated memory and/or executive function complaints that were statistically significantly associated with domain-specific NP test performances and depressive symptoms; combined chemotherapy and radiation treatment was also statistically significantly associated with memory complaints. These results and other emerging studies suggest that subjective cognitive complaints in part reflect objective NP performance, although their etiology and biology appear to be multifactorial, motivating further transdisciplinary research. PMID:23606729

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  19. Measurement of Satellite Impact Test Fragments for Modeling Orbital Debris

    NASA Technical Reports Server (NTRS)

    Hill, Nicole M.

    2009-01-01

    There are over 13,000 catalogued objects 10 cm and larger in orbit around Earth [ODQN, January 2009, p12]. More than 6000 of these objects are fragments from explosions and collisions. As the Earth-orbiting object count increases, debris-generating collisions in the future become a statistical inevitability. To aid in understanding this collision risk, the NASA Orbital Debris Program Office has developed computer models that calculate quantity and orbits of debris both currently in orbit and in future epochs. In order to create a reasonable computer model of the orbital debris environment, it is important to understand the mechanics of creation of debris as a result of a collision. The measurement of the physical characteristics of debris resulting from ground-based, hypervelocity impact testing aids in understanding the sizes and shapes of debris produced from potential impacts in orbit. To advance the accuracy of fragment shape/size determination, the NASA Orbital Debris Program Office recently implemented a computerized measurement system. The goal of this system is to improve knowledge and understanding of the relation between commonly used dimensions and overall shape. The technique developed involves scanning a single fragment with a hand-held laser device, measuring its size properties using a sophisticated software tool, and creating a three-dimensional computer model to demonstrate how the object might appear in orbit. This information is used to aid optical techniques in shape determination. This more automated and repeatable method provides higher accuracy in the size and shape determination of debris.

  20. Modelling solute dispersion in periodic heterogeneous porous media: Model benchmarking against intermediate scale experiments

    NASA Astrophysics Data System (ADS)

    Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham

    2018-06-01

    This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high-quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.

  1. Relational event models for longitudinal network data with an application to interhospital patient transfers.

    PubMed

    Vu, Duy; Lomi, Alessandro; Mascia, Daniele; Pallotti, Francesca

    2017-06-30

    The main objective of this paper is to introduce and illustrate relational event models, a new class of statistical models for the analysis of time-stamped data with complex temporal and relational dependencies. We outline the main differences between recently proposed relational event models and more conventional network models based on the graph-theoretic formalism typically adopted in empirical studies of social networks. Our main contribution involves the definition and implementation of a marked point process extension of currently available models. According to this approach, the sequence of events of interest is decomposed into two components: (a) event time and (b) event destination. This decomposition transforms the problem of selection of event destination in relational event models into a conditional multinomial logistic regression problem. The main advantages of this formulation are the possibility of controlling for the effect of event-specific data and a significant reduction in the estimation time of currently available relational event models. We demonstrate the empirical value of the model in an analysis of interhospital patient transfers within a regional community of health care organizations. We conclude with a discussion of how the models we presented help to overcome some of the limitations of statistical models for networks that are currently available. Copyright © 2017 John Wiley & Sons, Ltd.
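
    The destination-choice component described above reduces to a conditional (multinomial) logit; the sketch below fits one by maximum likelihood on simulated events, where each candidate destination carries its own covariates and one destination is chosen per event. The covariate names, counts and coefficients are illustrative assumptions, not the authors' data.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import logsumexp

        rng = np.random.default_rng(10)
        n_events, n_dest, n_feat = 500, 8, 3

        # Each event has 8 candidate destinations, each with its own covariates
        # (e.g. distance, past transfer volume, spare capacity); one destination is chosen.
        X = rng.normal(0, 1, (n_events, n_dest, n_feat))
        beta_true = np.array([1.0, -0.5, 0.3])
        util = X @ beta_true
        prob = np.exp(util - util.max(axis=1, keepdims=True))
        prob /= prob.sum(axis=1, keepdims=True)
        choice = np.array([rng.choice(n_dest, p=p) for p in prob])

        def negloglik(beta):
            u = X @ beta                                        # utility of every candidate destination
            return -(u[np.arange(n_events), choice] - logsumexp(u, axis=1)).sum()

        fit = minimize(negloglik, x0=np.zeros(n_feat), method="BFGS")
        print("estimated destination-choice coefficients:", fit.x.round(2))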

  2. Consistent integration of experimental and ab initio data into molecular and coarse-grained models

    NASA Astrophysics Data System (ADS)

    Vlcek, Lukas

    As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
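
    One concrete, simplified instance of matching a coarse model to target statistics by minimizing a statistical distance: here the squared Hellinger distance between binned target data and a two-parameter Boltzmann-like model is minimized with scipy. The target distribution, the model family and the choice of Hellinger distance are assumptions made for this sketch, not the framework's actual loss function.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)

        # "Target" statistics: histogram of some observable from reference (ab initio / experimental) data
        target_samples = rng.normal(1.0, 0.6, 20000)
        bins = np.linspace(-2, 4, 41)
        p_target, _ = np.histogram(target_samples, bins=bins, density=True)
        p_target = p_target * np.diff(bins)                   # bin probabilities

        centers = 0.5 * (bins[:-1] + bins[1:])

        def model_probs(theta):
            """Coarse-grained model: Boltzmann weights of a harmonic 'energy' over the same bins."""
            x0, k = theta
            w = np.exp(-k * (centers - x0) ** 2)
            return w / w.sum()

        def stat_distance(theta):
            """Squared Hellinger distance between model and target bin probabilities."""
            q = model_probs(theta)
            return 1.0 - np.sum(np.sqrt(q * p_target))

        res = minimize(stat_distance, x0=[0.0, 1.0], method="Nelder-Mead")
        print("optimised model parameters:", res.x.round(3), "distance:", round(res.fun, 4))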

  3. [Functional assessment of patients with vertigo and dizziness in occupational medicine].

    PubMed

    Zamysłowska-Szmytke, Ewa; Szostek-Rogula, Sylwia; Śliwińska-Kowalska, Mariola

    2018-03-09

    Balance assessment relies on symptoms, clinical examination and functional assessment and their verification in objective tests. Our study aimed to assess the compatibility between questionnaires, functional scales and objective vestibular and balance examinations. A group of 131 patients (including 101 women; mean age: 59±14 years) of the audiology outpatient clinic was examined. Benign paroxysmal positional vertigo, phobic vertigo and central dizziness were the most common diseases observed in the study group. Patients' symptoms were tested using the questionnaire on Cawthorne-Cooksey exercises (CC), Dizziness Handicap Inventory (DHI) and Duke Anxiety-Depression Scale. Berg Balance Scale (BBS), Dynamic Gait Index (DGI), the Tinetti test, Timed Up and Go test (TUG), and Dynamic Visual Acuity (DVA) were used for the functional balance assessment. Objective evaluation included the videonystagmography caloric test and static posturography. The study results revealed statistically significant but moderate compatibility between the functional tests (BBS, DGI, TUG, DVA) and caloric results (Kendall's W = 0.29), and higher compatibility for posturography (W = 0.33). The agreement between questionnaires and objective tests was very low (W = 0.08-0.11). The positive predictive values of BBS were 42% for caloric and 62% for posturography tests, of DGI - 46% and 57%, respectively. The results of functional tests (BBS, DGI, TUG, DVA) revealed statistically significant correlations with objective balance tests, but low predictive values did not allow these tests to be used for vestibular damage screening. Only half of the patients with functional disturbances revealed abnormal caloric or posturography tests. Qualification for work based on objective tests alone ignores the functional state of the worker, which may influence the ability to work. Med Pr 2018;69(2):179-189. This work is available under the Open Access model and licensed under a CC BY-NC 3.0 PL license.

  4. Spatial analyses for nonoverlapping objects with size variations and their application to coral communities.

    PubMed

    Muko, Soyoka; Shimatani, Ichiro K; Nozawa, Yoko

    2014-07-01

    Spatial distributions of individuals are conventionally analysed by representing objects as dimensionless points, in which spatial statistics are based on centre-to-centre distances. However, if organisms expand without overlapping and show size variations, such as is the case for encrusting corals, interobject spacing is crucial for spatial associations where interactions occur. We introduced new pairwise statistics using minimum distances between objects and demonstrated their utility when examining encrusting coral community data. We also calculated the conventional point process statistics and the grid-based statistics to clarify the advantages and limitations of each spatial statistical method. For simplicity, coral colonies were approximated by disks in these demonstrations. Focusing on short-distance effects, the use of minimum distances revealed that almost all coral genera were aggregated at a scale of 1-25 cm. However, when fragmented colonies (ramets) were treated as a genet, a genet-level analysis indicated weak or no aggregation, suggesting that most corals were randomly distributed and that fragmentation was the primary cause of colony aggregations. In contrast, point process statistics showed larger aggregation scales, presumably because centre-to-centre distances included both intercolony spacing and colony sizes (radius). The grid-based statistics were able to quantify the patch (aggregation) scale of colonies, but the scale was strongly affected by the colony size. Our approach quantitatively showed repulsive effects between an aggressive genus and a competitively weak genus, while the grid-based statistics (covariance function) also showed repulsion although the spatial scale indicated from the statistics was not directly interpretable in terms of ecological meaning. The use of minimum distances together with previously proposed spatial statistics helped us to extend our understanding of the spatial patterns of nonoverlapping objects that vary in size and the associated specific scales. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
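
    Approximating colonies by disks, the minimum (edge-to-edge) distance of the abstract is simply the centre-to-centre distance minus the two radii; the short sketch below contrasts nearest-neighbour summaries under the two definitions on simulated data. Quadrat size, colony counts and radii are assumed values, not the coral data.

        import numpy as np

        rng = np.random.default_rng(8)

        # Illustrative colony data: centres (x, y) in a 1 m x 1 m quadrat and radii in metres
        n = 60
        centres = rng.uniform(0, 1, (n, 2))
        radii = rng.uniform(0.01, 0.05, n)

        # Pairwise minimum (edge-to-edge) distances: centre distance minus the two radii,
        # floored at zero for touching colonies
        d_centre = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
        d_edge = np.maximum(d_centre - radii[:, None] - radii[None, :], 0.0)
        np.fill_diagonal(d_edge, np.inf)
        np.fill_diagonal(d_centre, np.inf)

        # Nearest-neighbour summaries under the two definitions of distance
        print("mean nearest-neighbour distance, centre-to-centre:", d_centre.min(axis=1).mean().round(4))
        print("mean nearest-neighbour distance, edge-to-edge:   ", d_edge.min(axis=1).mean().round(4))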

  5. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms.

    PubMed

    Tang, Jie; Nett, Brian E; Chen, Guang-Hong

    2009-10-07

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  6. Learning-based stochastic object models for characterizing anatomical variations

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to only a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
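
    The following sketch illustrates the general idea of sampling a learned shape model within its principal modes of variation; it uses plain PCA on fabricated training vectors and is only an assumed, simplified stand-in for the geometric attribute distributions described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: each row is a flattened shape descriptor
    # (e.g., landmark coordinates) for one patient's organ.
    training_shapes = rng.normal(size=(40, 60))

    # Learn the mean shape and principal modes of variation (PCA via SVD).
    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = (s ** 2) / (len(training_shapes) - 1)   # variance of each mode
    modes = Vt                                         # principal shape modes

    def sample_shape(n_modes=5, limit=3.0):
        """Draw one stochastic shape by sampling mode weights constrained to
        +/- limit standard deviations of the learned variation."""
        sd = np.sqrt(eigvals[:n_modes])
        b = rng.normal(0.0, sd, size=n_modes)
        b = np.clip(b, -limit * sd, limit * sd)
        return mean_shape + b @ modes[:n_modes]

    phantom_shape = sample_shape()
    print("sampled shape vector of length", phantom_shape.shape[0])
    ```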

  7. “Using Statistical Comparisons between SPartICus Cirrus Microphysical Measurements, Detailed Cloud Models, and GCM Cloud Parameterizations to Understand Physical Processes Controlling Cirrus Properties and to Improve the Cloud Parameterizations”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Sarah

    2015-12-01

    The dual objectives of this project were improving our basic understanding of processes that control cirrus microphysical properties and improvement of the representation of these processes in the parameterizations. A major effort in the proposed research was to integrate, calibrate, and better understand the uncertainties in all of these measurements.

  8. Incidence trend and risk factors for campylobacter infections in humans in Norway

    PubMed Central

    Sandberg, Marianne; Nygård, Karin; Meldal, Hege; Valle, Paul Steinar; Kruse, Hilde; Skjerve, Eystein

    2006-01-01

    Background The objectives of the study were to evaluate whether the increase in incidence of campylobacteriosis observed in humans in Norway from 1995 to 2001 was statistically significant and whether different biologically plausible risk factors were associated with the incidence of campylobacteriosis in the different counties in Norway. Methods To model the incidence of domestically acquired campylobacteriosis from 1995 to 2001, a population-average random-effects Poisson model was applied (the trend model). An equivalent model accounting for geographical clustering (the ecological model) was applied to the case data and assumed risk or protective factors from 2000 and 2001, such as sale of chicken, receipt of treated drinking water, density of dogs and grazing animals, occupation of people in the municipalities, and climatic factors. Results The increase in incidence of campylobacteriosis in humans in Norway from 1995 to 2001 was statistically significant from 1998. Treated water was a protective factor against Campylobacter infections in humans with an IRR of 0.78 per percentage increase in people supplied. The two-level modelling technique showed no evidence of clustering of campylobacteriosis in any particular county. Aggregation of data at the municipality level makes interpretation of the results at the individual level difficult. Conclusion The increase in incidence of Campylobacter infections in humans from 1995 to 2001 was statistically significant from 1998. Treated water was a protective factor against Campylobacter infections in humans with an IRR of 0.78 per percentage increase in people supplied. Campylobacter infections did not appear to be clustered in any particular county in Norway. PMID:16827925
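
    A single-level sketch of the kind of Poisson trend model described above (the case counts, population, and resulting rate are fabricated for illustration; the published model additionally includes population-average random effects):

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical yearly case counts and population at risk, 1995-2001.
    years = np.arange(1995, 2002)
    cases = np.array([180, 195, 210, 260, 300, 350, 410])
    population = np.full(len(years), 4.4e6)

    # Simple Poisson trend model:
    # log(E[cases]) = b0 + b1 * (year - 1995) + log(population).
    X = sm.add_constant(years - 1995)
    model = sm.GLM(cases, X, family=sm.families.Poisson(), offset=np.log(population))
    fit = model.fit()
    print(fit.summary())
    print("yearly incidence rate ratio:", np.exp(fit.params[1]))
    ```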

  9. Development of a Cadaveric Model for Arthrocentesis.

    PubMed

    MacIver, Melissa A; Johnson, Matthew

    2015-01-01

    This article reports the development of a novel cadaveric model for future use in teaching arthrocentesis. In the clinical setting, animal safety is essential and practice is thus limited. The objectives of the study were to develop a model and compare it to an unmodified cadaver by injecting one of two types of fluid to increase yield. The two fluids injected, mineral oil (MO) and hypertonic saline (HS), were compared to determine any difference in yield. Lastly, aspirations performed immediately after (T1) or three hours after (T2) injection were compared to determine any effect on diagnostic yield. Joints used included the stifle, elbow, and carpus in eight medium dog cadavers. Arthrocentesis was performed before injection (control) and yield measured. Test joints were injected with MO or HS and yield measured after range of motion (T1) and three hours post injection to simulate lab preparation (T2). Both models had statistically significantly higher yield compared with the unmodified cadaver in all joints at T1 and T2 (p<.05) with the exception of HST2 carpus. T2 aspiration had a statistically significantly lower yield when compared to T1HS carpus, T1HS elbow, and T1MO carpus. Overall, irrespective of fluid volume or type, percent yield was lower in T2 compared to T1. No statistically significant difference was seen between HS and MO in most joints with the exception of MOT1 stifle and HST2 elbow. Within the time frame assessed, both models were acceptable. However, HS arthrocentesis models proved appropriate for student trials due to the difficult aspirations with MO.

  10. A framework for developing objective and measurable recovery criteria for threatened and endangered species.

    PubMed

    Himes Boor, Gina K

    2014-02-01

    For species listed under the U.S. Endangered Species Act (ESA), the U.S. Fish and Wildlife Service and National Marine Fisheries Service are tasked with writing recovery plans that include "objective, measurable criteria" that define when a species is no longer at risk of extinction, but neither the act itself nor agency guidelines provide an explicit definition of objective, measurable criteria. Past reviews of recovery plans, including one published in 2012, show that many criteria lack quantitative metrics with clear biological rationale and are not meeting the measurable and objective mandate. I reviewed how objective, measurable criteria have been defined implicitly and explicitly in peer-reviewed literature, the ESA, other U.S. statutes, and legal decisions. Based on a synthesis of these sources, I propose the following 6 standards be used as minimum requirements for objective, measurable criteria: contain a quantitative threshold with calculable units, stipulate a timeframe over which they must be met, explicitly define the spatial extent or population to which they apply, specify a sampling procedure that includes sample size, specify a statistical significance level, and include justification by providing scientific evidence that the criteria define a species whose extinction risk has been reduced to the desired level. To meet these 6 standards, I suggest that recovery plans be explicitly guided by and organized around a population viability modeling framework even if data or agency resources are too limited to complete a viability model. When data and resources are available, recovery criteria can be developed from the population viability model results, but when data and resources are insufficient for model implementation, extinction risk thresholds can be used as criteria. A recovery-planning approach centered on viability modeling will also yield appropriately focused data-acquisition and monitoring plans and will facilitate a seamless transition from recovery planning to delisting. © 2013 Society for Conservation Biology.

  11. Structural Models that Manage IT Portfolio Affecting Business Value of Enterprise Architecture

    NASA Astrophysics Data System (ADS)

    Kamogawa, Takaaki

    This paper examines the structural relationships between Information Technology (IT) governance and Enterprise Architecture (EA), with the objective of enhancing business value in the enterprise society. Structural models consisting of four related hypotheses reveal the relationship between IT governance and EA in the improvement of business values. We statistically examined the hypotheses by analyzing validated questionnaire items from respondents within firms listed on the Japanese stock exchange who were qualified to answer them. We concluded that firms which have organizational ability controlled by IT governance are more likely to deliver business value based on IT portfolio management.

  12. Modeling Bose-Einstein correlations via elementary emitting cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utyuzh, Oleg; Wilk, Grzegorz; Wlodarczyk, Zbigniew

    2007-04-01

    We propose a method for the numerical modeling of Bose-Einstein correlations by using the notion of the elementary emitting cell (EEC). They are intermediary objects containing identical bosons and are supposed to be produced independently during the hadronization process. Only bosons in the EEC, which represents a single quantum state here, are subjected to the effects of Bose-Einstein (BE) statistics, which forces them to follow a geometrical distribution. There are no such effects between particles from different EECs. We illustrate our proposition by calculating a representative number of typical distributions and discussing their sensitivity to EECs and their characteristics.
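
    A toy sketch of the EEC picture as described above: each cell is populated with a geometrically distributed number of bosons and cells are filled independently (the number of cells and the geometric parameter are arbitrary illustrative choices, not values from the paper).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_event(n_cells=20, p_geom=0.4):
        """Toy EEC picture: each elementary emitting cell is a single quantum
        state whose boson multiplicity follows a geometric distribution
        (Bose-Einstein statistics within the cell); cells are independent."""
        # numpy's geometric distribution counts trials up to the first success
        # (values >= 1), so subtract 1 to allow empty cells.
        return rng.geometric(p_geom, size=n_cells) - 1

    # Distribution of the total event multiplicity over many simulated events.
    totals = np.array([sample_event().sum() for _ in range(10000)])
    print("mean multiplicity per event:", totals.mean())
    print("multiplicity variance:", totals.var())
    ```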

  13. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M., E-mail: nsanders@cfa.harvard.edu

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
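
    As a schematic illustration of the partial pooling that a hierarchical model provides, the following sketch shrinks noisy per-object parameter estimates toward a population mean with precision weights; it is a deliberately simplified normal-normal stand-in with fabricated data, not the authors' light-curve model or their Hamiltonian Monte Carlo implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical per-SN estimates of a light-curve parameter (e.g., plateau
    # duration in days) with heteroscedastic measurement errors.
    n_sne = 76
    truths = rng.normal(100.0, 15.0, n_sne)          # unknown true values
    sigma = rng.uniform(5.0, 40.0, n_sne)            # per-object uncertainty
    observed = rng.normal(truths, sigma)

    # Normal-normal partial pooling: each noisy estimate is shrunk toward the
    # population mean by a precision weight, so poorly observed objects borrow
    # strength from the population.
    mu, tau = observed.mean(), observed.std(ddof=1)  # crude empirical-Bayes plug-in
    weight = tau**2 / (tau**2 + sigma**2)
    shrunk = weight * observed + (1.0 - weight) * mu

    print("rms error, unpooled:        ", np.sqrt(np.mean((observed - truths) ** 2)))
    print("rms error, partially pooled:", np.sqrt(np.mean((shrunk - truths) ** 2)))
    ```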

  14. Structure of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition Criteria for Obsessive–Compulsive Personality Disorder in Patients With Binge Eating Disorder

    PubMed Central

    Ansell, Emily B; Pinto, Anthony; Edelen, Maria Orlando; Grilo, Carlos M

    2013-01-01

    Objective To examine 1-, 2-, and 3-factor model structures through confirmatory analytic procedures for Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) obsessive–compulsive personality disorder (OCPD) criteria in patients with binge eating disorder (BED). Method Participants were consecutive outpatients (n = 263) with binge eating disorder and were assessed with semi-structured interviews. The 8 OCPD criteria were submitted to confirmatory factor analyses in Mplus Version 4.2 (Los Angeles, CA) in which previously identified factor models of OCPD were compared for fit, theoretical relevance, and parsimony. Nested models were compared for significant improvements in model fit. Results Evaluation of indices of fit in combination with theoretical considerations suggest a multifactorial model is a significant improvement in fit over the current DSM-IV single-factor model of OCPD. Though the data support both 2- and 3-factor models, the 3-factor model is hindered by an underspecified third factor. Conclusion A multifactorial model of OCPD incorporating the factors perfectionism and rigidity represents the best compromise of fit and theory in modelling the structure of OCPD in patients with BED. A third factor representing miserliness may be relevant in BED populations but needs further development. The perfectionism and rigidity factors may represent distinct intrapersonal and interpersonal attempts at control and may have implications for the assessment of OCPD. PMID:19087485

  15. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver function and surface wave is an effective way to diminish the influences of the strong tradeoff among parameters and the different sensitivity to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in the model selection and optimization. If multiple objectives are involved and conflicting, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, the inversion to get only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observation, the geophysical complexities and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Competent genetic algorithms recently proposed are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one of the competent genetic algorithms, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver function.
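
    A minimal sketch of the Pareto-optimal preference mentioned above: given per-model misfits for the two data types, it returns the set of non-dominated models (the misfit values are fabricated for illustration).

    ```python
    import numpy as np

    def pareto_front(objectives):
        """Return indices of Pareto-optimal models given a (n_models, n_objectives)
        array of misfits to be minimized (e.g., dispersion and receiver-function
        misfits). A model is kept if no other model is at least as good in every
        objective and strictly better in one."""
        n = objectives.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            for j in range(n):
                if i != j and np.all(objectives[j] <= objectives[i]) \
                        and np.any(objectives[j] < objectives[i]):
                    keep[i] = False
                    break
        return np.where(keep)[0]

    # Hypothetical misfits: column 0 = surface-wave dispersion misfit,
    # column 1 = receiver-function misfit.
    misfits = np.array([[0.9, 0.2], [0.5, 0.5], [0.2, 0.9], [0.8, 0.8], [0.4, 0.6]])
    print("Pareto-optimal model indices:", pareto_front(misfits))
    ```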

  16. Visual saliency detection based on modeling the spatial Gaussianity

    NASA Astrophysics Data System (ADS)

    Ju, Hongbin

    2015-04-01

    In this paper, a novel salient object detection method based on modeling the spatial anomalies is presented. The proposed framework is inspired by the biological mechanism that human eyes are sensitive to the unusual and anomalous objects among complex background. It is supposed that a natural image can be seen as a combination of some similar or dissimilar basic patches, and there is a direct relationship between its saliency and anomaly. Some patches share high degree of similarity and have a vast number of quantity. They usually make up the background of an image. On the other hand, some patches present strong rarity and specificity. We name these patches "anomalies". Generally, an anomalous patch is a reflection of an edge or some special colors and textures in an image, and these patterns cannot be well "explained" by their surroundings. Human eyes show great interests in these anomalous patterns, and will automatically pick out the anomalous parts of an image as the salient regions. To better evaluate the anomaly degree of the basic patches and exploit their nonlinear statistical characteristics, a multivariate Gaussian distribution saliency evaluation model is proposed. In this way, objects with anomalous patterns usually appear as the outliers in the Gaussian distribution, and we identify these anomalous objects as salient ones. Experiments are conducted on the well-known MSRA saliency detection dataset. Compared with other recent developed visual saliency detection methods, our method suggests significant advantages.
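
    A compact sketch of the anomaly scoring idea described above, assuming each patch is summarized by a feature vector: patches are scored by their Mahalanobis distance from a multivariate Gaussian fitted to all patches, so rare patches receive high (salient) scores. The feature dimensions and data are fabricated.

    ```python
    import numpy as np

    def mahalanobis_saliency(patches):
        """Score each patch feature vector by its squared Mahalanobis distance
        from the multivariate Gaussian fitted to all patches; outliers
        (anomalous patches) receive high scores."""
        mean = patches.mean(axis=0)
        cov = np.cov(patches, rowvar=False)
        cov_inv = np.linalg.pinv(cov)          # pseudo-inverse for robustness
        diff = patches - mean
        return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

    # Hypothetical patch descriptors (e.g., mean color plus texture energy).
    rng = np.random.default_rng(3)
    background = rng.normal(0.0, 1.0, size=(500, 6))
    anomaly = rng.normal(4.0, 1.0, size=(5, 6))      # rare, unusual patches
    scores = mahalanobis_saliency(np.vstack([background, anomaly]))
    print("top-5 salient patch indices:", np.argsort(scores)[-5:])
    ```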

  17. An Extended Objective Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program

    NASA Technical Reports Server (NTRS)

    Nutter, Paul; Manobianco, John

    1998-01-01

    This report describes the Applied Meteorology Unit's objective verification of the National Centers for Environmental Prediction 29-km eta model during separate warm and cool season periods from May 1996 through January 1998. The verification of surface and upper-air point forecasts was performed at three selected stations important for 45th Weather Squadron, Spaceflight Meteorology Group, and National Weather Service, Melbourne operational weather concerns. The statistical evaluation identified model biases that may result from inadequate parameterization of physical processes. Since model biases are relatively small compared to the random error component, most of the total model error results from day-to-day variability in the forecasts and/or observations. To some extent, these nonsystematic errors reflect the variability in point observations that sample spatial and temporal scales of atmospheric phenomena that cannot be resolved by the model. On average, Meso-Eta point forecasts provide useful guidance for predicting the evolution of the larger scale environment. A more substantial challenge facing model users in real time is the discrimination of nonsystematic errors that tend to inflate the total forecast error. It is important that model users maintain awareness of ongoing model changes. Such changes are likely to modify the basic error characteristics, particularly near the surface.

  18. Wave and Wind Model Performance Metrics Tools

    NASA Astrophysics Data System (ADS)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to the assurance of battlespace environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model developments and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. For observational data to compare against, altimeter winds and waves along the tracks from past and current operational satellites as well as moored/drifting buoys can be used for global and regional coverage. Using data and model runs in previous trials such as the planned experiment, the Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of accumulated altimeter wind and wave data over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment provided a detailed picture of wind and wave model performance by using maps of cell-averaged statistical variables, with spatial statistics including slope, correlation, and scatter index to summarize model performance. Such a methodology is easily generalized to other regions and at global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. Recent inclusion of altimeter and buoy data into a format through the Naval Oceanographic Office's (NAVOCEANO) quality control system and the netCDF standards applicable to all model output makes possible the fusion of these data and direct model verification. Also, procedures were developed for the accumulation of match-ups of modelled and observed parameters to form a database from which statistics are readily calculated, for the short or long term. Such a system has potential for a quick transition to operations at NAVOCEANO.
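
    A small sketch of the bulk verification statistics mentioned above (bias, RMSE, correlation, scatter index) for collocated model and observed values; the wave heights are fabricated and the function is only an assumed form of such metrics, not NAVOCEANO's tooling.

    ```python
    import numpy as np

    def verification_stats(model, obs):
        """Bulk verification statistics commonly used in wave/wind model
        evaluation: bias, RMSE, linear correlation, and scatter index
        (RMSE normalised by the mean observation)."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        bias = np.mean(model - obs)
        rmse = np.sqrt(np.mean((model - obs) ** 2))
        corr = np.corrcoef(model, obs)[0, 1]
        scatter_index = rmse / np.mean(obs)
        return {"bias": bias, "rmse": rmse, "correlation": corr,
                "scatter_index": scatter_index}

    # Hypothetical collocated significant wave heights (m): model vs altimeter.
    model_hs = [1.1, 2.0, 0.8, 3.2, 1.7]
    altimeter_hs = [1.0, 2.3, 0.7, 3.0, 1.9]
    print(verification_stats(model_hs, altimeter_hs))
    ```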

  19. Mathematical modelling of tumour volume dynamics in response to stereotactic ablative radiotherapy for non-small cell lung cancer

    NASA Astrophysics Data System (ADS)

    Tariq, Imran; Humbert-Vidan, Laia; Chen, Tao; South, Christopher P.; Ezhil, Veni; Kirkby, Norman F.; Jena, Rajesh; Nisbet, Andrew

    2015-05-01

    This paper reports a modelling study of tumour volume dynamics in response to stereotactic ablative radiotherapy (SABR). The main objective was to develop a model that is adequate to describe tumour volume change measured during SABR, and at the same time is not so complex that it lacks support from the clinical data. To this end, various modelling options were explored, and a rigorous statistical method, the Akaike information criterion, was used to help determine a trade-off between model accuracy and complexity. The models were calibrated to the data from 11 non-small cell lung cancer patients treated with SABR. The results showed that it is feasible to model the tumour volume dynamics during SABR, opening up the potential for using such models in a clinical environment in the future.
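
    For illustration, a sketch of how the Akaike information criterion trades off fit against complexity for least-squares model fits; the candidate model names, residual sums of squares, and parameter counts are fabricated, not results from the paper.

    ```python
    import numpy as np

    def aic(rss, n_obs, n_params):
        """Akaike information criterion for least-squares fits with Gaussian
        errors: AIC = n * ln(RSS / n) + 2k. Lower values indicate a better
        trade-off between goodness of fit and model complexity."""
        return n_obs * np.log(rss / n_obs) + 2 * n_params

    # Hypothetical residual sums of squares for three tumour-volume models of
    # increasing complexity, all fitted to the same n = 12 measurements.
    n = 12
    candidates = {"exponential (k=2)": (4.1, 2),
                  "two-compartment (k=4)": (2.6, 4),
                  "full radiobiological (k=7)": (2.4, 7)}
    for name, (rss, k) in candidates.items():
        print(f"{name}: AIC = {aic(rss, n, k):.1f}")
    ```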

  20. Subjective Ratings of Beauty and Aesthetics: Correlations With Statistical Image Properties in Western Oil Paintings

    PubMed Central

    Lehmann, Thomas; Redies, Christoph

    2017-01-01

    For centuries, oil paintings have been a major segment of the visual arts. The JenAesthetics data set consists of a large number of high-quality images of oil paintings of Western provenance from different art periods. With this database, we studied the relationship between objective image measures and subjective evaluations of the images, especially evaluations on aesthetics (defined as artistic value) and beauty (defined as individual liking). The objective measures represented low-level statistical image properties that have been associated with aesthetic value in previous research. Subjective rating scores on aesthetics and beauty correlated not only with each other but also with different combinations of the objective measures. Furthermore, we found that paintings from different art periods vary with regard to the objective measures, that is, they exhibit specific patterns of statistical image properties. In addition, clusters of participants preferred different combinations of these properties. In conclusion, the results of the present study provide evidence that statistical image properties vary between art periods and subject matters and, in addition, they correlate with the subjective evaluation of paintings by the participants. PMID:28694958

  1. Attitude Accessibility as a Function of Emotionality.

    PubMed

    Rocklage, Matthew D; Fazio, Russell H

    2018-04-01

    Despite the centrality of both attitude accessibility and attitude basis to the last 30 years of theoretical and empirical work concerning attitudes, little work has systematically investigated their relation. The research that does exist provides conflicting results and is not at all conclusive given the methodology that has been used. The current research uses recent advances in statistical modeling and attitude measurement to provide the most systematic examination of the relation between attitude accessibility and basis to date. Specifically, we use mixed-effects modeling which accounts for variation across individuals and attitude objects in conjunction with the Evaluative Lexicon (EL)-a linguistic approach that allows for the simultaneous measurement of an attitude's valence, extremity, and emotionality. We demonstrate across four studies, over 10,000 attitudes, and nearly 50 attitude objects that attitudes based on emotion tend to be more accessible in memory, particularly if the attitude is positive.

  2. Comparison Analysis of Recognition Algorithms of Forest-Cover Objects on Hyperspectral Air-Borne and Space-Borne Images

    NASA Astrophysics Data System (ADS)

    Kozoderov, V. V.; Kondranin, T. V.; Dmitriev, E. V.

    2017-12-01

    The basic model for the recognition of natural and anthropogenic objects using their spectral and textural features is described in the problem of hyperspectral air-borne and space-borne imagery processing. The model is based on improvements of the Bayesian classifier that is a computational procedure of statistical decision making in machine-learning methods of pattern recognition. The principal component method is implemented to decompose the hyperspectral measurements on the basis of empirical orthogonal functions. Application examples are shown of various modifications of the Bayesian classifier and Support Vector Machine method. Examples are provided of comparing these classifiers and a metrical classifier that operates on finding the minimal Euclidean distance between different points and sets in the multidimensional feature space. A comparison is also carried out with the " K-weighted neighbors" method that is close to the nonparametric Bayesian classifier.

  3. Satellite Systems Design/Simulation Environment: A Systems Approach to Pre-Phase A Design

    NASA Technical Reports Server (NTRS)

    Ferebee, Melvin J., Jr.; Troutman, Patrick A.; Monell, Donald W.

    1997-01-01

    A toolset for the rapid development of small satellite systems has been created. The objective of this tool is to support the definition of spacecraft mission concepts to satisfy a given set of mission and instrument requirements. The objective of this report is to provide an introduction to understanding and using the SMALLSAT Model. SMALLSAT is a computer-aided Phase A design and technology evaluation tool for small satellites. SMALLSAT enables satellite designers, mission planners, and technology program managers to observe the likely consequences of their decisions in terms of satellite configuration, non-recurring and recurring cost, and mission life cycle costs and availability statistics. It was developed by Princeton Synergetic, Inc. and User Systems, Inc. as a revision of the previous TECHSAT Phase A design tool, which modeled medium-sized Earth observation satellites. Both TECHSAT and SMALLSAT were developed for NASA.

  4. Detection of Single Standing Dead Trees from Aerial Color Infrared Imagery by Segmentation with Shape and Intensity Priors

    NASA Astrophysics Data System (ADS)

    Polewski, P.; Yao, W.; Heurich, M.; Krzystek, P.; Stilla, U.

    2015-03-01

    Standing dead trees, known as snags, are an essential factor in maintaining biodiversity in forest ecosystems. Combined with their role as carbon sinks, this makes for a compelling reason to study their spatial distribution. This paper presents an integrated method to detect and delineate individual dead tree crowns from color infrared aerial imagery. Our approach consists of two steps which incorporate statistical information about prior distributions of both the image intensities and the shapes of the target objects. In the first step, we perform a Gaussian Mixture Model clustering in the pixel color space with priors on the cluster means, obtaining up to 3 components corresponding to dead trees, living trees, and shadows. We then refine the dead tree regions using a level set segmentation method enriched with a generative model of the dead trees' shape distribution as well as a discriminative model of their pixel intensity distribution. The iterative application of the statistical shape template yields the set of delineated dead crowns. The prior information enforces the consistency of the template's shape variation with the shape manifold defined by manually labeled training examples, which makes it possible to separate crowns located in close proximity and prevents the formation of large crown clusters. Also, the statistical information built into the segmentation gives rise to an implicit detection scheme, because the shape template evolves towards an empty contour if not enough evidence for the object is present in the image. We test our method on 3 sample plots from the Bavarian Forest National Park with reference data obtained by manually marking individual dead tree polygons in the images. Our results are scenario-dependent and range from a correctness/completeness of 0.71/0.81 up to 0.77/1, with an average center-of-gravity displacement of 3-5 pixels between the detected and reference polygons.

  5. Gift from statistical learning: Visual statistical learning enhances memory for sequence elements and impairs memory for items that disrupt regularities.

    PubMed

    Otsuka, Sachio; Saiki, Jun

    2016-02-01

    Prior studies have shown that visual statistical learning (VSL) enhances familiarity (a type of memory) of sequences. How do statistical regularities influence the processing of each triplet element and inserted distractors that disrupt the regularity? Given the increased attention to triplets induced by VSL and the inhibition of unattended triplets, we predicted that VSL would promote memory for each triplet constituent, and degrade memory for inserted stimuli. Across the first two experiments, we found that objects from structured sequences were more likely to be remembered than objects from random sequences, and that letters (Experiment 1) or objects (Experiment 2) inserted into structured sequences were less likely to be remembered than those inserted into random sequences. In the subsequent two experiments, we examined an alternative account for our results, whereby the difference in memory for inserted items between structured and random conditions is due to individuation of items within random sequences. Our findings replicated even when control letters (Experiment 3A) or objects (Experiment 3B) were presented before or after, rather than inserted into, random sequences. Our findings suggest that statistical learning enhances memory for each item in a regular set and impairs memory for items that disrupt the regularity. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Discovery of VHE γ -ray emission from the BL Lacertae object B3 2247+381 with the MAGIC telescopes

    DOE PAGES

    Aleksić, J.; Alvarez, E. A.; Antonelli, L. A.; ...

    2012-03-02

    Here, we study the non-thermal jet emission of the BL Lac object B3 2247+381 during a high optical state. The MAGIC telescopes observed the source during 13 nights between September 30th and October 30th 2010, collecting a total of 14.2 h of good quality very high energy (VHE) γ-ray data. Simultaneous multiwavelength data was obtained with X-ray observations by the Swift satellite and optical R-band observations at the KVA-telescope. We also use high energy γ-ray (HE, 0.1-100 GeV) data from the Fermi satellite. We detected the BL Lac object B3 2247+381 (z = 0.119), for the first time, at VHE γ-rays at a statistical significance of 5.6σ. A soft VHE spectrum with a photon index of -3.2 ± 0.6 was determined. No significant short term flux variations were found. Finally, we model the spectral energy distribution using a one-zone SSC-model, which can successfully describe our data.

  7. A Classification of Statistics Courses (A Framework for Studying Statistical Education)

    ERIC Educational Resources Information Center

    Turner, J. C.

    1976-01-01

    A classification of statistics courses is presented, with main categories of "course type," "methods of presentation," "objectives," and "syllabus." Examples and suggestions for uses of the classification are given. (DT)

  8. COLLABORATIVE RESEARCH:USING ARM OBSERVATIONS & ADVANCED STATISTICAL TECHNIQUES TO EVALUATE CAM3 CLOUDS FOR DEVELOPMENT OF STOCHASTIC CLOUD-RADIATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somerville, Richard

    2013-08-22

    The long-range goal of several past and current projects in our DOE-supported research has been the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data, and the implementation and testing of these parameterizations in global models. The main objective of the present project being reported on here has been to develop and apply advanced statistical techniques, including Bayesian posterior estimates, to diagnose and evaluate features of both observed and simulated clouds. The research carried out under this project has been novel in two important ways. The first is that it is a key step in the development of practical stochastic cloud-radiation parameterizations, a new category of parameterizations that offers great promise for overcoming many shortcomings of conventional schemes. The second is that this work has brought powerful new tools to bear on the problem, because it has been a collaboration between a meteorologist with long experience in ARM research (Somerville) and a mathematician who is an expert on a class of advanced statistical techniques that are well-suited for diagnosing model cloud simulations using ARM observations (Shen).

  9. Diversity of Poissonian populations.

    PubMed

    Eliazar, Iddo I; Sokolov, Igor M

    2010-01-01

    Populations represented by collections of points scattered randomly on the real line are ubiquitous in science and engineering. The statistical modeling of such populations leads naturally to Poissonian populations-Poisson processes on the real line with a distinguished maximal point. Poissonian populations are infinite objects underlying key issues in statistical physics, probability theory, and random fractals. Due to their infiniteness, measuring the diversity of Poissonian populations depends on the lower-bound cut-off applied. This research characterizes the classes of Poissonian populations whose diversities are invariant with respect to the cut-off level applied and establishes an elemental connection between these classes and extreme-value theory. The measures of diversity considered are variance and dispersion, Simpson's index and inverse participation ratio, Shannon's entropy and Rényi's entropy, and Gini's index.
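
    A short sketch computing several of the diversity measures named above for a finite, normalised weight vector (an idealisation; the paper's analysis concerns infinite Poissonian populations with a lower-bound cut-off):

    ```python
    import numpy as np

    def diversity_indices(weights):
        """Diversity measures for a population with normalised weights p:
        Simpson's index, its reciprocal (inverse participation ratio),
        Shannon entropy, and the Renyi entropy of order 2."""
        p = np.asarray(weights, float)
        p = p / p.sum()
        simpson = np.sum(p ** 2)                 # Simpson's index
        ipr = 1.0 / simpson                       # inverse participation ratio
        shannon = -np.sum(p * np.log(np.where(p > 0, p, 1.0)))
        renyi2 = -np.log(np.sum(p ** 2))          # Renyi entropy of order 2
        return {"simpson": simpson, "ipr": ipr, "shannon": shannon, "renyi_2": renyi2}

    # Illustrative population: a few dominant members and many small ones.
    print(diversity_indices([10, 5, 1, 1, 1, 1, 1]))
    ```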

  10. Towards Principled Experimental Study of Autonomous Mobile Robots

    NASA Technical Reports Server (NTRS)

    Gat, Erann

    1995-01-01

    We review the current state of research in autonomous mobile robots and conclude that there is an inadequate basis for predicting the reliability and behavior of robots operating in unengineered environments. We present a new approach to the study of autonomous mobile robot performance based on formal statistical analysis of independently reproducible experiments conducted on real robots. Simulators serve as models rather than experimental surrogates. We demonstrate three new results: 1) Two commonly used performance metrics (time and distance) are not as well correlated as is often tacitly assumed. 2) The probability distributions of these performance metrics are exponential rather than normal, and 3) a modular, object-oriented simulation accurately predicts the behavior of the real robot in a statistically significant manner.

  11. Coagulation-fragmentation for a finite number of particles and application to telomere clustering in the yeast nucleus

    NASA Astrophysics Data System (ADS)

    Hozé, Nathanaël; Holcman, David

    2012-01-01

    We develop a coagulation-fragmentation model to study a system composed of a small number of stochastic objects moving in a confined domain, that can aggregate upon binding to form local clusters of arbitrary sizes. A cluster can also dissociate into two subclusters with a uniform probability. To study the statistics of clusters, we combine a Markov chain analysis with a partition number approach. Interestingly, we obtain explicit formulas for the size and the number of clusters in terms of hypergeometric functions. Finally, we apply our analysis to study the statistical physics of telomeres (ends of chromosomes) clustering in the yeast nucleus and show that the diffusion-coagulation-fragmentation process can predict the organization of telomeres.

  12. Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects.

    PubMed

    Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K

    2012-03-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    PubMed Central

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
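
    A toy sketch of the dissociation exploited above: for a skewed weight distribution, the prediction that minimizes squared error (the mean) differs from the most likely weight (the mode). The distribution parameters are fabricated and only illustrate the contrast between the two strategies.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical skewed distribution of object weights (grams): the object
    # is usually light but occasionally much heavier.
    weights = np.concatenate([rng.normal(300, 20, 800), rng.normal(600, 20, 200)])

    # Minimal-squared-error strategy: predict the mean of the distribution.
    mse_prediction = weights.mean()

    # Maximum-a-posteriori-style strategy: predict the most likely weight
    # (the mode), estimated here from a histogram.
    counts, edges = np.histogram(weights, bins=40)
    i = np.argmax(counts)
    map_prediction = 0.5 * (edges[i] + edges[i + 1])

    print(f"minimize-squared-error prediction: {mse_prediction:.0f} g")
    print(f"most-likely-weight prediction:     {map_prediction:.0f} g")
    # With a skewed distribution these two predictions diverge, which is what
    # allows the lifting forces to dissociate the two strategies.
    ```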

  14. The integration of probabilistic information during sensorimotor estimation is unimpaired in children with Cerebral Palsy

    PubMed Central

    Sokhey, Taegh; Gaebler-Spira, Deborah; Kording, Konrad P.

    2017-01-01

    Background It is important to understand the motor deficits of children with Cerebral Palsy (CP). Our understanding of this motor disorder can be enriched by computational models of motor control. One crucial stage in generating movement involves combining uncertain information from different sources, and deficits in this process could contribute to reduced motor function in children with CP. Healthy adults can integrate previously-learned information (prior) with incoming sensory information (likelihood) in a close-to-optimal way when estimating object location, consistent with the use of Bayesian statistics. However, there are few studies investigating how children with CP perform sensorimotor integration. We compare sensorimotor estimation in children with CP and age-matched controls using a model-based analysis to understand the process. Methods and findings We examined Bayesian sensorimotor integration in children with CP, aged between 5 and 12 years old, with Gross Motor Function Classification System (GMFCS) levels 1–3 and compared their estimation behavior with age-matched typically-developing (TD) children. We used a simple sensorimotor estimation task which requires participants to combine probabilistic information from different sources: a likelihood distribution (current sensory information) with a prior distribution (learned target information). In order to examine sensorimotor integration, we quantified how participants weighed statistical information from the two sources (prior and likelihood) and compared this to the statistical optimal weighting. We found that the weighing of statistical information in children with CP was as statistically efficient as that of TD children. Conclusions We conclude that Bayesian sensorimotor integration is not impaired in children with CP and therefore, does not contribute to their motor deficits. Future research has the potential to enrich our understanding of motor disorders by investigating the stages of motor processing set out by computational models. Therapeutic interventions should exploit the ability of children with CP to use statistical information. PMID:29186196
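
    A minimal sketch of the optimal (precision-weighted) combination of prior and likelihood against which participants' weighting can be compared, assuming a Gaussian prior and Gaussian sensory likelihood; the numbers are illustrative, not task parameters from the study.

    ```python
    def optimal_estimate(prior_mean, prior_sd, sensed, sensed_sd):
        """Statistically optimal (Bayesian) combination of a learned prior and
        a noisy sensory cue for Gaussian prior and likelihood: each source is
        weighted by its precision (inverse variance)."""
        w_prior = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / sensed_sd**2)
        return w_prior * prior_mean + (1 - w_prior) * sensed, w_prior

    # Illustrative numbers: learned target location at 0 cm (sd 2 cm),
    # current sensory cue at 3 cm with sd 1 cm.
    estimate, w_prior = optimal_estimate(0.0, 2.0, 3.0, 1.0)
    print(f"optimal estimate = {estimate:.2f} cm (prior weight = {w_prior:.2f})")
    ```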

  15. Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.

    PubMed

    Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad

    2013-09-01

    We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to largely populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab and the estimated optimal locations are analyzed along with their respective 95% credible regions.

  16. Computational algorithms dealing with the classical and statistical mechanics of celestial scale polymers in space elevator technology

    NASA Astrophysics Data System (ADS)

    Knudsen, Steven; Golubovic, Leonardo

    Prospects to build Space Elevator (SE) systems have become realistic with ultra-strong materials such as carbon nano-tubes and diamond nano-threads. At cosmic length-scales, space elevators can be modeled as polymer-like floppy strings of tethered mass beads. A new avenue in SE science has emerged with the introduction of the Rotating Space Elevator (RSE) concept supported by novel algorithms discussed in this presentation. An RSE is a loopy string reaching into outer space. Unlike the classical geostationary SE concepts of Tsiolkovsky, Artsutanov, and Pearson, our RSE exhibits an internal rotation. Thanks to this, objects sliding along the RSE loop spontaneously oscillate between two turning points, one of which is close to the Earth whereas the other one is in outer space. The RSE concept thus solves a major problem in SE technology, which is how to supply energy to the climbers moving along space elevator strings. The investigation of the classical and statistical mechanics of a floppy string interacting with objects sliding along it required the development of subtle computational algorithms described in this presentation.

  17. Big heart data: advancing health informatics through data sharing in cardiovascular imaging.

    PubMed

    Suinesiaputra, Avan; Medrano-Gracia, Pau; Cowan, Brett R; Young, Alistair A

    2015-07-01

    The burden of heart disease is rapidly worsening due to the increasing prevalence of obesity and diabetes. Data sharing and open database resources for heart health informatics are important for advancing our understanding of cardiovascular function, disease progression and therapeutics. Data sharing enables valuable information, often obtained at considerable expense and effort, to be reused beyond the specific objectives of the original study. Many government funding agencies and journal publishers are requiring data reuse, and are providing mechanisms for data curation and archival. Tools and infrastructure are available to archive anonymous data from a wide range of studies, from descriptive epidemiological data to gigabytes of imaging data. Meta-analyses can be performed to combine raw data from disparate studies to obtain unique comparisons or to enhance statistical power. Open benchmark datasets are invaluable for validating data analysis algorithms and objectively comparing results. This review provides a rationale for increased data sharing and surveys recent progress in the cardiovascular domain. We also highlight the potential of recent large cardiovascular epidemiological studies enabling collaborative efforts to facilitate data sharing, algorithms benchmarking, disease modeling and statistical atlases.

  18. Characterization of ABS specimens produced via the 3D printing technology for drone structural components

    NASA Astrophysics Data System (ADS)

    Ferro, Carlo Giovanni; Brischetto, Salvatore; Torre, Roberto; Maggiore, Paolo

    2016-07-01

    The Fused Deposition Modelling (FDM) technology is widely used in rapid prototyping. 3D printers for home desktop applications are usually employed to make non-structural objects. When the mechanical stresses are not excessive, this technology can also be successfully employed to produce structural objects, not only in prototyping stage but also in the realization of series pieces. The innovative idea of the present work is the application of this technology, implemented in a desktop 3D printer, to the realization of components for aeronautical use, especially for unmanned aerial systems. For this purpose, the paper is devoted to the statistical study of the performance of a desktop 3D printer to understand how the process performs and which are the boundary limits of acceptance. Mechanical and geometrical properties of ABS (Acrylonitrile Butadiene Styrene) specimens, such as tensile strength and stiffness, have been evaluated. ASTM638 type specimens have been used. A capability analysis has been applied for both mechanical and dimensional performances. Statistically stable limits have been determined using experimentally collected data.

  19. Commentary on the statistical properties of noise and its implication on general linear models in functional near-infrared spectroscopy.

    PubMed

    Huppert, Theodore J

    2016-01-01

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.

  20. Robust Dynamic Multi-objective Vehicle Routing Optimization Method.

    PubMed

    Guo, Yi-Nan; Cheng, Jian; Luo, Sha; Gong, Dun-Wei

    2017-03-21

    For dynamic multi-objective vehicle routing problems, the waiting time of vehicles, the number of serving vehicles, and the total distance of routes are normally considered as the optimization objectives. In addition to these objectives, this paper focuses on fuel consumption, which leads to environmental pollution and energy consumption. Considering the vehicles' load and driving distance, a corresponding carbon emission model was built and set as an optimization objective. Dynamic multi-objective vehicle routing problems with hard time windows and randomly appearing dynamic customers were subsequently modeled. In existing planning methods, when a new service demand comes up, a global vehicle routing optimization method is triggered to find the optimal routes for non-served customers, which is time-consuming. Therefore, a robust dynamic multi-objective vehicle routing method with two phases is proposed. Three highlights of the novel method are: (i) after finding optimal robust virtual routes for all customers by adopting multi-objective particle swarm optimization in the first phase, static vehicle routes for static customers are formed by removing all dynamic customers from the robust virtual routes in the next phase; (ii) dynamically appearing customers are appended to be served according to their service time and the vehicles' status, and global vehicle routing optimization is triggered only when no suitable locations can be found for dynamic customers; (iii) a metric measuring the algorithm's robustness is given. The statistical results indicated that the routes obtained by the proposed method have better stability and robustness, but may be sub-optimal. Moreover, time-consuming global vehicle routing optimization is avoided as dynamic customers appear.

  1. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of simulated recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of the parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.

  2. Learning-based stochastic object models for use in optimizing imaging systems

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua

    2017-03-01

    It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer-simulation requires use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.

  3. Experimental Concepts for Testing Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  4. Asteroid shape and spin statistics from convex models

    NASA Astrophysics Data System (ADS)

    Torppa, J.; Hentunen, V.-P.; Pääkkönen, P.; Kehusmaa, P.; Muinonen, K.

    2008-11-01

    We introduce techniques for characterizing convex shape models of asteroids with a small number of parameters, and apply these techniques to a set of 87 models from convex inversion. We present three different approaches for determining the overall dimensions of an asteroid. With the first technique, we measured the dimensions of the shapes in the direction of the rotation axis and in the equatorial plane, and with the two other techniques we derived the best-fit ellipsoid. We also computed the inertia matrix of the model shape to test how well it represents the target asteroid, i.e., to find indications of possible non-convex features or albedo variegation, which the convex shape model cannot reproduce. We used shape models for 87 asteroids to perform statistical analyses and to study dependencies between shape and rotation period, size, and taxonomic type. We detected correlations, but more data are required, especially on small and large objects, as well as slow and fast rotators, to reach a more thorough understanding of the dependencies. Results show, e.g., that convex models of asteroids are not far from ellipsoids in a root-mean-square sense, even though clearly irregular features are present. We also present new spin and shape solutions for Asteroids (31) Euphrosyne, (54) Alexandra, (79) Eurynome, (93) Minerva, (130) Elektra, (376) Geometria, (471) Papagena, and (776) Berbericia. We used a so-called semi-statistical approach to obtain a set of possible spin state solutions. The number of solutions depends on the abundance of the data, which for Eurynome, Elektra, and Geometria was extensive enough to determine an unambiguous spin and shape solution. Data for Euphrosyne, on the other hand, provided a wide distribution of possible spin solutions, whereas the rest of the targets have two or three possible solutions.
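
    A rough sketch of the dimension and best-fit-ellipsoid step, under the simplifying assumption that the shape is summarized by its vertices treated as equal point masses (a rigorous inertia computation integrates over the polyhedron volume); the vertex data here are synthetic, not a real convex-inversion model.

```python
import numpy as np

def principal_dimensions(vertices):
    """Approximate overall dimensions and a best-fit ellipsoid for a convex
    shape model, treating the vertices as equal point masses."""
    v = vertices - vertices.mean(axis=0)
    inertia_like = v.T @ v / len(v)                 # covariance of vertex coordinates
    eigvals, eigvecs = np.linalg.eigh(inertia_like)
    semi_axes = np.sqrt(eigvals)[::-1]              # fitted ellipsoid axes, a >= b >= c
    extents = v @ eigvecs[:, ::-1]
    dims = extents.max(axis=0) - extents.min(axis=0)  # full extent along each principal axis
    return semi_axes, dims

rng = np.random.default_rng(1)
verts = rng.normal(size=(500, 3)) * np.array([3.0, 2.0, 1.5])   # synthetic "asteroid"
print(principal_dimensions(verts))
```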

  5. A note on statistical analysis of shape through triangulation of landmarks

    PubMed Central

    Rao, C. Radhakrishna

    2000-01-01

    In an earlier paper, the author jointly with S. Suryawanshi proposed statistical analysis of shape through triangulation of landmarks on objects. It was observed that the angles of the triangles are invariant to scaling, location, and rotation of objects. No distinction was made between an object and its reflection. The present paper provides the methodology of shape discrimination when reflection is also taken into account and makes suggestions for modifications to be made when some of the landmarks are collinear. PMID:10737780
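
    A small sketch of the landmark-triangulation idea: the angles of a triangle formed by three landmarks are unchanged by translation, rotation, and scaling, while the sign of the oriented area flips under reflection, giving one way to separate an object from its mirror image. The feature choice below is illustrative, not the paper's exact discrimination procedure.

```python
import numpy as np

def triangle_shape_features(a, b, c):
    """Angles of the triangle formed by three 2-D landmarks (invariant to
    translation, rotation, and scale) plus the sign of the oriented area,
    which flips under reflection and so distinguishes mirror images."""
    pts = np.array([a, b, c], dtype=float)
    angles = []
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    signed_area = 0.5 * ((pts[1, 0] - pts[0, 0]) * (pts[2, 1] - pts[0, 1])
                         - (pts[1, 1] - pts[0, 1]) * (pts[2, 0] - pts[0, 0]))
    return angles, np.sign(signed_area)

print(triangle_shape_features((0, 0), (4, 0), (1, 3)))
print(triangle_shape_features((0, 0), (4, 0), (1, -3)))   # reflected copy: same angles, opposite sign
```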

  6. Structural equation model analysis for the evaluation of overall driving performance: A driving simulator study focusing on driver distraction.

    PubMed

    Papantoniou, Panagiotis

    2018-04-03

    The present research has 2 main objectives. The first is to investigate whether latent model analysis through a structural equation model can be implemented on driving simulator data in order to define an unobserved driving performance variable. The second is to investigate and quantify the effect of several risk factors, including distraction sources, driver characteristics, and road and traffic environment, on overall driving performance rather than on independent driving performance measures. For the scope of the present research, 95 participants from all age groups were asked to drive under different types of distraction (conversation with passenger, cell phone use) in urban and rural road environments with low and high traffic volume in a driving simulator experiment. In the framework of the statistical analysis, a correlation table is presented to investigate statistical relationships between driving simulator measures, and a structural equation model is developed in which overall driving performance is estimated as a latent variable based on several individual driving simulator measures. Results confirm the suitability of the structural equation model and indicate that the selection of the specific performance measures that define overall performance should be guided by a rule of representativeness between the selected variables. Moreover, conversation with a passenger was not found to have a statistically significant effect, indicating that drivers do not change their performance while conversing with a passenger compared to undistracted driving. On the other hand, results support the hypothesis that cell phone use has a negative effect on driving performance. Furthermore, regarding driver characteristics, age, gender, and experience all have a significant effect on driving performance, indicating that driver-related characteristics play the most crucial role in overall driving performance. The findings of this study allow a new approach to the investigation of driving behavior in driving simulator experiments and in general. Through the successful implementation of the structural equation model, driving behavior can be assessed in terms of overall performance rather than through individual performance measures, an important step forward from piecemeal analyses to a sound combined analysis of the interrelationship between several risk factors and overall driving performance.
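
    A much-simplified stand-in for the latent-variable step: a one-factor model (scikit-learn's FactorAnalysis) recovers a single "overall performance" score from several correlated simulator measures. This is not the study's full structural equation model, and the variable names, loadings, and data are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical simulator measures for 95 drivers (columns could be mean speed,
# lateral position variability, headway, reaction time), all standardized.
n_drivers = 95
latent = rng.normal(size=n_drivers)                       # unobserved "overall performance"
loadings = np.array([0.8, -0.7, 0.6, -0.9])
measures = np.outer(latent, loadings) + 0.4 * rng.normal(size=(n_drivers, 4))

# One-factor model: recover a single latent score per driver from the measures.
fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(measures)

print(np.round(fa.components_, 2))                        # estimated loadings
print(np.round(abs(np.corrcoef(scores[:, 0], latent)[0, 1]), 2))  # latent recovery (up to sign)
```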

  7. Relative Contributions of Agricultural Drift, Para-Occupational ...

    EPA Pesticide Factsheets

    Background: Increased pesticide concentrations in house dust in agricultural areas have been attributed to several exposure pathways, including agricultural drift, para-occupational, and residential use. Objective: To guide future exposure assessment efforts, we quantified relative contributions of these pathways using meta-regression models of published data on dust pesticide concentrations. Methods: From studies in North American agricultural areas published from 1995-2015, we abstracted dust pesticide concentrations reported as summary statistics (e.g., geometric means (GM)). We analyzed these data using mixed-effects meta-regression models that weighted each summary statistic by its inverse variance. Dependent variables were either the log-transformed GM (drift) or the log-transformed ratio of GMs from two groups (para-occupational, residential use). Results: For the drift pathway, predicted GMs decreased sharply and nonlinearly, with GMs 64% lower in homes 250 m versus 23 m from fields (inter-quartile range of published data) based on 52 statistics from 7 studies. For the para-occupational pathway, GMs were 2.3 times higher (95% confidence interval [CI]: 1.5-3.3; 15 statistics, 5 studies) in homes of farmers who applied pesticides more versus less recently or frequently. For the residential use pathway, GMs were 1.3 (95%CI: 1.1-1.4) and 1.5 (95%CI: 1.2-1.9) times higher in treated versus untreated homes, when the probability that a pesticide was used for
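
    A minimal sketch of the inverse-variance weighting idea described above, using a fixed-effects weighted regression (statsmodels WLS) rather than the study's mixed-effects meta-regression; all numbers are synthetic stand-ins for abstracted summary statistics.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical abstracted summary statistics: log geometric mean dust
# concentration, distance to the nearest field (m), and the variance of each
# log-GM, one row per published statistic.
distance = rng.uniform(20, 300, size=30)
var_loggm = rng.uniform(0.05, 0.3, size=30)
log_gm = 2.0 - 0.004 * distance + rng.normal(scale=np.sqrt(var_loggm))

# Inverse-variance weighted regression of log-GM on distance (a fixed-effects
# simplification of the mixed-effects meta-regression described above).
X = sm.add_constant(distance)
fit = sm.WLS(log_gm, X, weights=1.0 / var_loggm).fit()
print(fit.params)                            # intercept and distance slope on the log scale
print(np.exp(fit.params[1] * (250 - 23)))    # ratio of predicted GMs at 250 m versus 23 m
```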

  8. 2.5-Year-Olds Use Cross-Situational Consistency to Learn Verbs under Referential Uncertainty

    ERIC Educational Resources Information Center

    Scott, Rose M.; Fisher, Cynthia

    2012-01-01

    Recent evidence shows that children can use cross-situational statistics to learn new object labels under referential ambiguity (e.g., Smith & Yu, 2008). Such evidence has been interpreted as support for proposals that statistical information about word-referent co-occurrence plays a powerful role in word learning. But object labels represent only…

  9. Impact of Assimilation on Heavy Rainfall Simulations Using WRF Model: Sensitivity of Assimilation Results to Background Error Statistics

    NASA Astrophysics Data System (ADS)

    Rakesh, V.; Kantharao, B.

    2017-03-01

    Data assimilation is considered one of the effective tools for improving the forecast skill of mesoscale models. However, for optimum utilization and effective assimilation of observations, many factors need to be taken into account while designing the data assimilation methodology. One of the critical components that determines how much observation information enters the analysis, and how it propagates, is the model background error statistics (BES). The objective of this study is to quantify how the BES used in data assimilation affect the simulation of heavy rainfall events over Karnataka, a southern state in India. Simulations of 40 heavy rainfall events were carried out using the Weather Research and Forecasting Model with and without data assimilation. The assimilation experiments were conducted using global and regional BES, while the experiment with no assimilation was used as the baseline for assessing the impact of data assimilation. The simulated rainfall was verified against high-resolution rain-gage observations over Karnataka. Statistical evaluation using several accuracy and skill measures shows that data assimilation improved the heavy rainfall simulation. Our results showed that the experiment using regional BES outperformed the one using global BES. Critical thermodynamic variables conducive to heavy rainfall, such as convective available potential energy, are simulated more realistically with the regional BES than with the global BES. These results have important practical implications for the design of forecast platforms and for decision-making during extreme weather events.
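
    To illustrate why the background error statistics matter, here is a toy single-observation analysis update in the standard optimal-interpolation/Kalman form (not WRF's variational system): the off-diagonal structure of the background error covariance B determines how far an observation increment spreads to unobserved grid points. All values are invented.

```python
import numpy as np

# Toy two-gridpoint analysis update:
#   x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b)
x_b = np.array([300.0, 302.0])          # background state (e.g., temperature at two points)
H = np.array([[1.0, 0.0]])              # the observation samples the first point only
y = np.array([301.5])
R = np.array([[0.25]])                  # observation error variance

for label, corr in [("weak correlation", 0.2), ("strong correlation", 0.9)]:
    B = np.array([[1.0, corr], [corr, 1.0]])      # background error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain
    x_a = x_b + K @ (y - H @ x_b)
    print(label, np.round(x_a, 3))                # second point moves more when corr is high
```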

  10. Addressing the mischaracterization of extreme rainfall in regional climate model simulations - A synoptic pattern based bias correction approach

    NASA Astrophysics Data System (ADS)

    Li, Jingwan; Sharma, Ashish; Evans, Jason; Johnson, Fiona

    2018-01-01

    Addressing systematic biases in regional climate model simulations of extreme rainfall is a necessary first step before assessing changes in future rainfall extremes. Commonly used bias correction methods are designed to match statistics of the overall simulated rainfall with observations. This assumes that a change in the mix of different types of extreme rainfall events (i.e., convective and non-convective) in a warmer climate is of little relevance to the estimation of overall change, an assumption that is not supported by empirical or physical evidence. This study proposes an alternative approach that accounts for the potential change of alternate rainfall types, characterized here by synoptic weather patterns (SPs) obtained from a self-organizing map classification. The objective of this study is to evaluate the added influence of SPs on the bias correction, which is achieved by comparing the corrected distribution of future extreme rainfall with that obtained using conventional quantile mapping. A comprehensive synthetic experiment is first defined to investigate the conditions under which the additional information of SPs makes a significant difference to the bias correction. Across over 600,000 synthetic cases, statistically significant differences are found in 46% of cases. This is followed by a case study over the Sydney region using a high-resolution run of the Weather Research and Forecasting (WRF) regional climate model, which indicates a small change in the proportions of the SPs and a statistically significant change in the extreme rainfall over the region, although the differences between the changes obtained from the two bias correction methods are not statistically significant.
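
    A rough sketch of the two variants being compared, assuming empirical quantile mapping and random stand-in data: conventional quantile mapping over all days versus the same mapping applied separately within each synoptic-pattern class. The SOM classification itself is replaced here by random labels, and the rainfall series are synthetic.

```python
import numpy as np

def quantile_map(model, obs, future):
    """Empirical quantile mapping: replace each future model value with the
    observed value at the same empirical quantile of the historical model."""
    model_sorted = np.sort(model)
    obs_sorted = np.sort(obs)
    q = np.searchsorted(model_sorted, future, side="right") / len(model_sorted)
    q = np.clip(q, 1e-6, 1 - 1e-6)
    return np.quantile(obs_sorted, q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 10.0, size=2000)        # hypothetical observed daily rainfall
model = rng.gamma(2.0, 13.0, size=2000)      # biased historical model rainfall
future = rng.gamma(2.2, 13.0, size=2000)     # future model rainfall

corrected_all = quantile_map(model, obs, future)      # conventional quantile mapping

# Pattern-conditioned variant: apply the mapping separately within each
# synoptic-pattern class (random labels stand in for an SOM classification).
patterns_obs = rng.integers(0, 3, size=2000)
patterns_model = rng.integers(0, 3, size=2000)
patterns_future = rng.integers(0, 3, size=2000)
corrected_sp = np.empty_like(future)
for p in range(3):
    corrected_sp[patterns_future == p] = quantile_map(
        model[patterns_model == p], obs[patterns_obs == p], future[patterns_future == p])

print(np.percentile(corrected_all, 99), np.percentile(corrected_sp, 99))
```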

  11. A heuristic statistical stopping rule for iterative reconstruction in emission tomography.

    PubMed

    Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D

    2013-01-01

    We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that, compared with the classical method, our technique yielded a significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest, as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
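
    The sketch below runs MLEM on a toy Poisson system with a generic relative-change stopping rule; it shows where a stopping criterion enters the iteration, but it is not the heuristic statistical criterion proposed in the paper, and the system matrix and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy emission-tomography problem: y ~ Poisson(A @ x_true).
n_pix, n_det = 32, 64
A = rng.uniform(0.0, 1.0, size=(n_det, n_pix))
x_true = rng.uniform(1.0, 10.0, size=n_pix)
y = rng.poisson(A @ x_true).astype(float)

sens = A.T @ np.ones(n_det)                 # detector sensitivity per pixel
x = np.ones(n_pix)
prev_ll = -np.inf
for it in range(500):
    proj = A @ x
    x *= (A.T @ (y / proj)) / sens          # MLEM multiplicative update
    ll = np.sum(y * np.log(proj) - proj)    # Poisson log-likelihood (up to a constant)
    if ll - prev_ll < 1e-4 * abs(ll):       # generic stopping rule, not the paper's criterion
        print("stopped at iteration", it)
        break
    prev_ll = ll
```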

  12. Improving phylogenetic analyses by incorporating additional information from genetic sequence databases.

    PubMed

    Liang, Li-Jung; Weiss, Robert E; Redelings, Benjamin; Suchard, Marc A

    2009-10-01

    Statistical analyses of phylogenetic data culminate in uncertain estimates of underlying model parameters. Lack of additional data hinders the ability to reduce this uncertainty, as the original phylogenetic dataset is often complete, containing the entire gene or genome information available for the given set of taxa. Informative priors in a Bayesian analysis can reduce posterior uncertainty; however, publicly available phylogenetic software specifies vague priors for model parameters by default. We build objective and informative priors using hierarchical random effect models that combine additional datasets whose parameters are not of direct interest but are similar to the analysis of interest. We propose principled statistical methods that permit more precise parameter estimates in phylogenetic analyses by creating informative priors for parameters of interest. Using additional sequence datasets from our lab or public databases, we construct a fully Bayesian semiparametric hierarchical model to combine datasets. A dynamic iteratively reweighted Markov chain Monte Carlo algorithm conveniently recycles posterior samples from the individual analyses. We demonstrate the value of our approach by examining the insertion-deletion (indel) process in the enolase gene across the Tree of Life using the phylogenetic software BALI-PHY; we incorporate prior information about indels from 82 curated alignments downloaded from the BAliBASE database.
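
    As a much-simplified, non-Bayesian stand-in for the hierarchical pooling idea, the following combines per-dataset estimates into an informative prior using a DerSimonian-Laird random-effects calculation; the estimates and standard errors are invented, and the paper's actual model is a semiparametric hierarchical analysis fitted by MCMC.

```python
import numpy as np

# Hypothetical per-dataset estimates of a model parameter (e.g., a log indel
# rate) and their standard errors, one per auxiliary alignment.
est = np.array([-2.1, -1.8, -2.4, -2.0, -1.6, -2.2])
se = np.array([0.30, 0.25, 0.40, 0.20, 0.35, 0.30])

# Method-of-moments (DerSimonian-Laird) estimate of the between-dataset
# variance, then an empirical-Bayes normal prior for a new analysis.
w = 1.0 / se**2
mu_fixed = np.sum(w * est) / np.sum(w)
Q = np.sum(w * (est - mu_fixed) ** 2)
k = len(est)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (se**2 + tau2)
prior_mean = np.sum(w_star * est) / np.sum(w_star)
prior_var = 1.0 / np.sum(w_star) + tau2       # uncertainty for a new dataset's parameter
print(prior_mean, prior_var)
```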

  13. The analysis of factors of management of safety of critical information infrastructure with use of dynamic models

    NASA Astrophysics Data System (ADS)

    Trostyansky, S. N.; Kalach, A. V.; Lavlinsky, V. V.; Lankin, O. V.

    2018-03-01

    A dynamic panel-data model by region was analyzed, combining fire statistics for supervised sites, a set of regional socio-economic indicators, and the rapid-response time of the state fire service. From the values of these indicators for the previous year, the model estimates the probability of fires at supervised sites and the risk of human death as a result of such fires. The results are consistent with those obtained by applying a rational-offender model to fire risks. An estimate of the economic equivalent of human life, derived for Russian supervised sites from the presented dynamic fire-risk model, agrees with values reported in the literature. This econometric approach to fire risks makes it possible to forecast fire risks at supervised sites in the regions of Russia and to develop management decisions that minimize such risks.

  14. Training models of anatomic shape variability

    PubMed Central

    Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang

    2008-01-01

    Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919
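
    A minimal point-distribution-model sketch of the general training idea (align the training shapes, then learn their distribution), assuming landmark correspondences are already established; the paper's joint geometric and statistical fitting is considerably more involved, so this only illustrates the alignment-and-PCA step on synthetic shapes.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)

# Hypothetical training shapes: k corresponding boundary landmarks per
# segmentation (correspondence is assumed already established).
k, n_shapes = 20, 15
angles = np.linspace(0, 2 * np.pi, k, endpoint=False)
base = np.column_stack([np.cos(angles), np.sin(angles)])
shapes = [base * rng.uniform(0.8, 1.2) + 0.05 * rng.normal(size=base.shape)
          for _ in range(n_shapes)]

# Align every shape to the first one (removing translation, scale, rotation),
# then learn the shape distribution with PCA on the aligned landmarks.
ref = shapes[0]
aligned = []
for s in shapes:
    _, s_aligned, _ = procrustes(ref, s)
    aligned.append(s_aligned.ravel())
aligned = np.array(aligned)

mean_shape = aligned.mean(axis=0)
U, svals, Vt = np.linalg.svd(aligned - mean_shape, full_matrices=False)
print((svals**2 / (svals**2).sum())[:3])   # fraction of shape variance per mode
```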

  15. Assessment of metal pollution based on multivariate statistical modeling of 'hot spot' sediments from the Black Sea.

    PubMed

    Simeonov, V; Massart, D L; Andreev, G; Tsakovski, S

    2000-11-01

    The paper deals with the application of different statistical methods: cluster analysis, principal components analysis (PCA), and partial least squares (PLS) modeling. These approaches are an efficient tool for achieving a better understanding of the contamination of two gulf regions of the Black Sea. The objects of study are a collection of marine sediment samples from the Varna and Bourgas "hot spot" gulf areas. In the present case, cluster analysis and PCA make it possible to separate three zones of the marine environment with different levels of pollution (Bourgas gulf, Varna gulf, and lake buffer zone) through interpretation of the sediment analyses. Further, the extraction of four latent factors offers a specific interpretation of the possible pollution sources and separates natural from anthropogenic factors, the latter originating from contamination by chemical, oil refinery, and steel-work enterprises. Finally, PLS modeling predicts contaminant concentrations from a tracer element (or elements) better than the one-dimensional baseline models. The results of the study are important not only locally, where they allow a quick response in finding solutions and making decisions, but also more broadly as a useful environmetric methodology.
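
    A compact sketch of the cluster/PCA part of such an analysis, on a synthetic sediment-chemistry table (scikit-learn): log-transform and standardize the concentrations, extract latent factors with PCA, and group stations into candidate pollution zones with k-means. The PLS prediction step is omitted, and the data are invented.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical sediment chemistry table: rows are sampling stations, columns
# are metal concentrations (e.g., Cu, Zn, Pb, Cd, Ni, Mn).
stations = rng.lognormal(mean=2.0, sigma=0.5, size=(30, 6))
stations[:10] *= 2.5                      # a contaminated "hot spot" group

X = StandardScaler().fit_transform(np.log(stations))
scores = PCA(n_components=3).fit_transform(X)          # latent factors
zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(zones)                                           # candidate pollution zones
```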

  16. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, although both strategies produced very similar solutions, the deterministic-model strategy suffered from problems such as lack of convergence and long computation times; the statistical-model strategy proved robust and fast, making it more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
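
    A hedged sketch of the nonlinear programming formulation, using SciPy's SLSQP solver on an invented two-variable surrogate (the actual study optimizes a detailed fermentation flowsheet model): productivity is maximized subject to a minimum substrate-conversion constraint and simple bounds.

```python
import numpy as np
from scipy.optimize import minimize

def productivity(x):
    """Toy surrogate for butanol productivity as a function of two decision
    variables (e.g., dilution rate and flash recycle ratio); purely illustrative."""
    d, r = x
    return d * (1.0 - np.exp(-3.0 * r)) * (1.0 - 0.5 * d)

def conversion(x):
    """Toy surrogate for substrate conversion."""
    d, r = x
    return 0.98 - 0.4 * d + 0.1 * r

target_conversion = 0.90
res = minimize(
    lambda x: -productivity(x),                       # maximize productivity
    x0=[0.3, 0.5],
    method="SLSQP",                                   # sequential quadratic programming
    bounds=[(0.05, 1.0), (0.0, 1.0)],
    constraints=[{"type": "ineq", "fun": lambda x: conversion(x) - target_conversion}],
)
print(res.x, -res.fun)
```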

  17. Geometric, Statistical, and Topological Modeling of Intrinsic Data Manifolds: Application to 3D Shapes

    DTIC Science & Technology

    2009-01-01

    representation to a simple curve in 3D by using the Whitney embedding theorem. In a very ludic way, we propose to combine phases one and two to… elimination principle which takes advantage of the designed parametrization. To further refine discrimination among objects, we introduce a post…

  18. Statistical analysis of road-vehicle-driver interaction as an enabler to designing behavioural models

    NASA Astrophysics Data System (ADS)

    Chakravarty, T.; Chowdhury, A.; Ghose, A.; Bhaumik, C.; Balamuralidhar, P.

    2014-03-01

    Telematics form an important technology enabler for intelligent transportation systems. On-board diagnostic devices record the signatures of vehicle vibration along with location and time. Detailed analyses of the collected signatures offer deep insights into the state of the objects under study. Towards that objective, we carried out experiments by deploying a telematics device in an office bus that ferries employees to the office and back. Data were collected from a 3-axis accelerometer and GPS, including speed and time, for all journeys. In this paper, we present initial results of the above exercise by applying statistical methods to derive information through systematic analysis of the data collected over four months. It is demonstrated that the higher-order derivative of the measured Z-axis acceleration samples displays the properties of a Weibull distribution when the time axis is replaced by the amplitude of the processed acceleration data. Such an observation offers us a method to predict future behaviour, where deviations from prediction are classified as context-based aberrations or progressive degradation of the system. In addition, we capture the relationship between vehicle speed and the median of the jerk energy samples using regression analysis. Such results offer an opportunity to develop a robust method for modelling road-vehicle interaction, enabling prediction of driving behaviour and condition-based maintenance.
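
    A small sketch of the Weibull observation on a synthetic accelerometer trace: differentiate the Z-axis acceleration to obtain jerk amplitudes and fit a Weibull distribution with SciPy. The sampling rate and signal content are invented, not the telematics data of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 50 Hz vertical (Z-axis) accelerometer trace from a vehicle.
fs = 50.0
t = np.arange(0, 60, 1 / fs)
accel_z = 0.3 * np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)

# Higher-order derivative of acceleration ("jerk"), then fit a Weibull
# distribution to its amplitudes, as suggested by the observation above.
jerk = np.abs(np.diff(accel_z) * fs)
shape, loc, scale = stats.weibull_min.fit(jerk, floc=0.0)
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}")

# Deviations of new data from this fitted model could then be flagged as
# context-based aberrations or progressive degradation of the system.
```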

  19. Neuronal Function in Male Sprague Dawley Rats During Normal Ageing.

    PubMed

    Idowu, A J; Olatunji-Bello, I I; Olagunju, J A

    2017-03-06

    During normal ageing, there are physiological changes, especially in high energy demanding tissues including the brain and skeletal muscles. Ageing may disrupt homeostasis and allow tissue vulnerability to disease. To establish an appropriate animal model that is readily available and useful for testing therapeutic strategies during normal ageing, we applied behavioral approaches to study age-related changes in memory and motor function as a basis for neuronal function in ageing in male Sprague Dawley rats. Male Sprague Dawley rats aged 3 months (n=5), 6 months (n=5), and 18 months (n=5) were tested using the Novel Object Recognition Task (NORT) and the Elevated Plus Maze (EPM) test. Data were analyzed by ANOVA and the Newman-Keuls post hoc test. The results showed a gradual decline in exploratory behavior and locomotor activity with increasing age across the 3-, 6-, and 18-month-old rats, although these changes were not statistically significant; grooming activity, however, increased significantly with age. Importantly, we established a novel finding that the minimum distance from the novel object differed significantly between 3-month-old and 18-month-old rats, which may be an index of age-related memory impairment in the NORT. Altogether, we conclude that male Sprague Dawley rats show age-related changes in neuronal function and may be a useful model for investigating the mechanisms involved in normal ageing.

  20. Simulation of the National Aerospace System for Safety Analysis

    NASA Technical Reports Server (NTRS)

    Pritchett, Amy; Goldsman, Dave; Statler, Irv (Technical Monitor)

    2002-01-01

    Work started on this project on January 1, 1999, the first year of the grant. Following the outline of the grant proposal, a simulator architecture has been established that can incorporate the variety of model types needed to accurately simulate national airspace dynamics. For the sake of efficiency, this architecture was based on an established single-aircraft flight simulator, the Reconfigurable Flight Simulator (RFS), already developed at Georgia Tech. In the first year, substantive changes and additions were made to the RFS to convert it into a simulation of the National Airspace System, with the flexibility to incorporate many types of models: aircraft models, controller models, airspace configuration generators, discrete event generators, embedded statistical functions, and display and data outputs. The architecture has been developed with the capability to accept any models of these types; due to its object-oriented structure, individual simulator components can be added and removed during run-time, and can be compiled separately. Simulation objects from other projects should be easy to convert to meet the architecture requirements, the intent being both that this project can incorporate established simulation components from other projects and that other projects can use this simulation without significant time investment.
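
    A toy sketch of the run-time component registry behaviour described above (not the RFS code, whose classes and interfaces are not given here): simulation objects implementing a common interface can be added to and removed from the running simulation between frames.

```python
class SimulationComponent:
    """Minimal interface every plug-in component (aircraft model, controller
    model, event generator, display) is assumed to implement."""
    def step(self, dt):
        raise NotImplementedError

class AirspaceSimulator:
    """Illustration of run-time add/remove: components live in a registry
    and are advanced each frame."""
    def __init__(self):
        self.components = {}

    def add(self, name, component):
        self.components[name] = component

    def remove(self, name):
        self.components.pop(name, None)

    def step(self, dt):
        for component in list(self.components.values()):
            component.step(dt)

class PointMassAircraft(SimulationComponent):
    def __init__(self, position, velocity):
        self.position, self.velocity = list(position), list(velocity)

    def step(self, dt):
        self.position = [p + v * dt for p, v in zip(self.position, self.velocity)]

sim = AirspaceSimulator()
sim.add("AC001", PointMassAircraft([0.0, 0.0], [250.0, 0.0]))
sim.step(1.0)
sim.remove("AC001")       # components can come and go during run-time
```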
