Mazerolle, M.J.
2006-01-01
In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, or stepwise selection). Although better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case of the Akaike information criterion (AIC), which is markedly superior to hypothesis-based approaches for model selection (i.e., variable selection). It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis, relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
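The model-averaging procedure this abstract describes can be sketched in a few lines. The AIC values, estimates, and standard errors below are hypothetical, and the unconditional-standard-error formula follows the common Burnham-Anderson form; this is an illustrative sketch, not the paper's own code.

```python
import numpy as np

# Hypothetical AIC values for three candidate regression models (lower is better)
aic = np.array([210.3, 212.1, 218.9])

# AIC differences relative to the best model
delta = aic - aic.min()

# Akaike weights: relative strength of evidence for each model, normalized
w = np.exp(-0.5 * delta)
w /= w.sum()

# Hypothetical estimates and standard errors of one predictor across the models
beta = np.array([1.8, 1.6, 2.2])
se = np.array([0.4, 0.5, 0.6])

# Model-averaged estimate and its unconditional standard error
beta_avg = np.sum(w * beta)
se_avg = np.sum(w * np.sqrt(se**2 + (beta - beta_avg)**2))
```

The unconditional standard error folds between-model variance into the usual within-model error, which is what makes the averaged estimate robust to the choice of any single model.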
ERIC Educational Resources Information Center
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. We discuss theoretical statistical results in regression and illustrate more important issues with novel simulations involving latent variable models including factor analysis, latent profile analysis, and factor mixture models. Asymptotically, the BIC is consistent, in that it will select the true model if, among other assumptions, the true model is among the candidate models considered. The AIC is not consistent under these circumstances. When the true model is not in the candidate model set, the AIC is efficient, in that it will asymptotically choose whichever model minimizes the mean squared error of prediction/estimation. The BIC is not efficient under these circumstances. Unlike the BIC, the AIC also has a minimax property, in that it can minimize the maximum possible risk in finite sample sizes. In sum, the AIC and BIC have quite different properties that require different assumptions, and applied researchers and methodologists alike will benefit from improved understanding of the asymptotic and finite-sample behavior of these criteria. The ultimate decision to use AIC or BIC depends on many factors, including the loss function employed, the study's methodological design, the substantive research question, and the notion of a true model and its applicability to the study at hand. PMID:22309957
Akaike information criterion to select well-fit resist models
NASA Astrophysics Data System (ADS)
Burbine, Andrew; Fryer, David; Sturtevant, John
2015-03-01
In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
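The reported correspondence between AIC ranking and K-fold cross validation can be illustrated on a toy polynomial calibration problem. Everything here (the cubic ground truth, the noise level, the least-squares AIC form n ln(RSS/n) + 2k) is an assumption for illustration, not the paper's resist modelforms.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy calibration data: cubic ground truth plus noise (illustrative only)
dose = np.linspace(0.0, 1.0, 80)
cd = 1.0 + 2.0 * dose - 1.5 * dose**2 + 2.0 * dose**3
cd += rng.normal(0.0, 0.05, dose.size)

def aic_ls(y, yhat, k):
    """AIC for least-squares fits under Gaussian errors: n ln(RSS/n) + 2k."""
    n = y.size
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

def kfold_mse(x, y, degree, folds=5):
    """Mean held-out MSE over interleaved folds; needs one fit per fold."""
    idx = np.arange(x.size) % folds
    errs = []
    for f in range(folds):
        coef = np.polyfit(x[idx != f], y[idx != f], degree)
        resid = y[idx == f] - np.polyval(coef, x[idx == f])
        errs.append(np.mean(resid ** 2))
    return float(np.mean(errs))

# AIC needs a single calibration per candidate model; CV needs one per fold
aic_score = {d: aic_ls(cd, np.polyval(np.polyfit(dose, cd, d), dose), d + 2)
             for d in (1, 2, 3, 4)}
cv_score = {d: kfold_mse(dose, cd, d) for d in (1, 2, 3, 4)}
```

The point of the comparison is the cost asymmetry: the AIC column requires one fit per candidate, the CV column five, while both penalize the under-fit low-degree models.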
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
Taki, Yasuyuki; Hashizume, Hiroshi; Thyreau, Benjamin; Sassa, Yuko; Takeuchi, Hikaru; Wu, Kai; Kotozaki, Yuka; Nouchi, Rui; Asano, Michiko; Asano, Kohei; Fukuda, Hiroshi; Kawashima, Ryuta
2013-08-01
We examined linear and curvilinear correlations of gray matter volume and density in cortical and subcortical gray matter with age using magnetic resonance images (MRI) in a large number of healthy children. We applied voxel-based morphometry (VBM) and region-of-interest (ROI) analyses with the Akaike information criterion (AIC), which was used to determine the best-fit model by selecting which predictor terms should be included. We collected data on brain structural MRI in 291 healthy children aged 5-18 years. Structural MRI data were segmented and normalized using a custom template by applying the diffeomorphic anatomical registration using exponentiated lie algebra (DARTEL) procedure. Next, we analyzed the correlations of gray matter volume and density with age in VBM with AIC by estimating linear, quadratic, and cubic polynomial functions. Several regions such as the prefrontal cortex, the precentral gyrus, and cerebellum showed significant linear or curvilinear correlations between gray matter volume and age on an increasing trajectory, and between gray matter density and age on a decreasing trajectory in VBM and ROI analyses with AIC. Because the trajectory of gray matter volume and density with age suggests the progress of brain maturation, our results may contribute to clarifying brain maturation in healthy children from the viewpoint of brain structure. PMID:22505237
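The linear-versus-quadratic-versus-cubic selection step used in the ROI analysis can be sketched as follows; the synthetic age-volume data and the least-squares form of the AIC are illustrative assumptions, not the study's MRI measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic age-volume data with a quadratic trajectory (illustrative only)
age = rng.uniform(5.0, 18.0, 291)
vol = 100.0 + 8.0 * age - 0.3 * age**2 + rng.normal(0.0, 2.0, age.size)

def aic_ls(y, yhat, n_params):
    """AIC under Gaussian errors for a least-squares fit: n ln(RSS/n) + 2k."""
    n = y.size
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * n_params

# Compare linear, quadratic, and cubic predictor terms, as in the VBM/ROI analyses
scores = {}
for degree in (1, 2, 3):
    coef = np.polyfit(age, vol, degree)
    # degree + 1 polynomial coefficients plus the residual variance
    scores[degree] = aic_ls(vol, np.polyval(coef, age), degree + 2)

best_degree = min(scores, key=scores.get)
```

On data with genuine curvature, the linear model loses clearly, while AIC's 2k penalty discourages the cubic term unless it buys a real likelihood gain.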
Contaminant source reconstruction by empirical Bayes and Akaike's Bayesian Information Criterion.
Zanini, Andrea; Woodbury, Allan D
2016-01-01
The objective of the paper is to present an empirical Bayesian method combined with Akaike's Bayesian Information Criterion (ABIC) to estimate the contaminant release history of a groundwater source starting from a few concentration measurements in space and/or in time. From the Bayesian point of view, the ABIC incorporates prior information on the unknown function, such as the prior distribution (assumed Gaussian) and the covariance function. The unknown statistical quantities, such as the noise variance and the covariance function parameters, are computed through the process; moreover, the method also quantifies the estimation error through confidence intervals. The methodology was successfully tested on three test cases: the classic Skaggs and Kabala release function, three sharp releases (both cases concern transport in a one-dimensional homogeneous medium), and data collected from laboratory equipment consisting of a two-dimensional homogeneous unconfined aquifer. The performance of the method was tested with two different covariance functions (Gaussian and exponential) and also with large measurement error. The results were discussed and compared to the geostatistical approach of Kitanidis (1995). PMID:26836200
Model Selection Information Criteria for Non-Nested Latent Class Models.
ERIC Educational Resources Information Center
Lin, Ting Hsiang; Dayton, C. Mitchell
1997-01-01
The use of three model selection information criteria for nonnested latent class models was studied: (1) Akaike's information criterion (AIC; H. Akaike, 1973); (2) the Schwarz information criterion (SIC; G. Schwarz, 1978); and (3) Bozdogan's consistent version of the AIC (CAIC; H. Bozdogan, 1987). Situations in which each is preferable…
Multidimensional Rasch Model Information-Based Fit Index Accuracy
ERIC Educational Resources Information Center
Harrell-Williams, Leigh M.; Wolfe, Edward W.
2013-01-01
Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criteria [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), the corrected AIC (AICC), the consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
AIC, BIC, Bayesian evidence against the interacting dark energy model
NASA Astrophysics Data System (ADS)
Szydłowski, Marek; Krawiec, Adam; Kurek, Aleksandra; Kamionka, Michał
2015-01-01
Recent astronomical observations have indicated that the Universe is in a phase of accelerated expansion. While there are many cosmological models which try to explain this phenomenon, we focus on the interacting ΛCDM model, where an interaction between the dark energy and dark matter sectors takes place. This model is compared to its simpler alternative, the ΛCDM model. To choose between these models, the likelihood ratio test was applied as well as model comparison methods (employing Occam's principle): the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the Bayesian evidence. Using the current astronomical data (type Ia supernovae (Union2.1), baryon acoustic oscillations, the Alcock-Paczynski test, and the cosmic microwave background data), we evaluated both models. The analyses based on the AIC indicated that there is less support for the interacting ΛCDM model when compared to the ΛCDM model, while those based on the BIC indicated that there is strong evidence against it in favor of the ΛCDM model. Given the weak or almost nonexistent support for the interacting ΛCDM model, and bearing in mind Occam's razor, we are inclined to reject this model.
NASA Astrophysics Data System (ADS)
Luo, Haipeng; Liu, Yang; Chen, Ting; Xu, Caijun; Wen, Yangmao
2016-01-01
We present a new method to derive 3-D surface deformation from an integration of interferometric synthetic aperture radar (InSAR) images and Global Navigation Satellite System (GNSS) observations based on Akaike's Bayesian Information Criterion (ABIC), considering the relationship between deformations at neighbouring locations. This method avoids interpolation errors by not resampling the GNSS observations to the spatial resolution of the InSAR images, and it weights the data sets and the prior smoothness constraints on surface deformation objectively and simultaneously by using ABIC, issues that were inherently unresolved in previous studies. In particular, we define a surface roughness measure of the degree of smoothing to evaluate the performance of the prior constraints, and we derive the formula for the covariance of the estimation errors to estimate the uncertainty of the modelled solution. We validate this method using synthetic tests and the 2008 Mw 7.9 Wenchuan earthquake. We find that the optimal weights associated with the ABIC minimum generally lie at trade-off locations that balance contributions from the InSAR and GNSS data sets and the prior constraints. We use this method to evaluate the influence of interpolation errors from the ordinary kriging algorithm on the derivation of surface deformation. Tests show that these errors may bias the weights imposed on kriged GNSS data towards very large values, suggesting that fixing the relative weights is required in this case. We also make a comparison with the SISTEM method, indicating that our method obtains better estimates even with sparse GNSS observations. In addition, this method can be generalized to situations where some types of data sets are lacking, and it can be exploited further to account for data sets such as displacements along radar lines of sight and offsets along satellite tracks.
Autonomic Intelligent Cyber Sensor (AICS) Version 1.0.1
2015-03-01
The Autonomic Intelligent Cyber Sensor (AICS) provides cyber security and industrial network state awareness for Ethernet based control network implementations. The AICS utilizes collaborative mechanisms based on Autonomic Research and a Service Oriented Architecture (SOA) to: 1) identify anomalous network traffic; 2) discover network entity information; 3) deploy deceptive virtual hosts; and 4) implement self-configuring modules. AICS achieves these goals by dynamically reacting to the industrial human-digital ecosystem in which it resides. Information is transported internally and externally on a standards based, flexible two-level communication structure.
Dynamic microphones M-87/AIC and M-101/AIC and earphone H-143/AIC. [for space shuttle
NASA Technical Reports Server (NTRS)
Reiff, F. H.
1975-01-01
The electrical characteristics of the M-87/AIC and M-101/AIC dynamic microphones and the H-143/AIC earphones were tested to establish the relative performance levels of units supplied by four vendors. The microphones and earphones were tested for frequency response, sensitivity, linearity, impedance and noise cancellation. Test results are presented and discussed.
Information criteria and selection of vibration models.
Ruzek, Michal; Guyader, Jean-Louis; Pézerat, Charles
2014-12-01
This paper presents a method of determining an appropriate equation of motion for two-dimensional plane structures such as membranes and plates from vibration response measurements. The local steady-state vibration field is used as input for the inverse problem, which approximately determines the dispersion curve of the structure. This dispersion curve is then treated statistically with the Akaike information criterion (AIC), which compares the experimentally measured curve to several candidate models (equations of motion). The model with the lowest AIC value is then chosen, and the utility of the other models can also be assessed. The method is applied to three experimental case studies: a red cedar wood plate for musical instruments, a thick paper subjected to unknown membrane tension, and a thick composite sandwich panel. These three cases present three different model selection situations. PMID:25480053
Regularization Parameter Selections via Generalized Information Criterion
Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling
2009-01-01
We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material. PMID:20676354
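A minimal version of GIC-based tuning can be shown with ridge regression, where the effective degrees of freedom have a closed form (the trace of the hat matrix). The paper treats nonconcave penalties such as SCAD; ridge and the simulated data below are used only to keep the sketch short, and are assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse regression problem (illustrative; not the paper's SCAD penalty)
n, p = 100, 8
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

def gic(y, yhat, df, n, kappa):
    """Generalized information criterion: n ln(RSS/n) + kappa * df.
    kappa = 2 gives an AIC-type selector; kappa = ln(n) a BIC-type one."""
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + kappa * df

def ridge_fit(X, y, lam):
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    # effective degrees of freedom = trace of the ridge hat matrix
    df = np.trace(X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T))
    return beta, df

# Scan the regularization path and pick lambda by a BIC-type GIC
lams = np.logspace(-2, 3, 30)
bic_scores = [gic(y, X @ b, df, n, np.log(n))
              for b, df in (ridge_fit(X, y, lam) for lam in lams)]
lam_best = float(lams[int(np.argmin(bic_scores))])
```

The BIC-type choice kappa = ln(n) is the one the paper shows to be consistent, while kappa = 2 reproduces the AIC-type selector that tends to overfit.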
NASA Astrophysics Data System (ADS)
Olbert, Kai; Meier, Thomas; Cristiano, Luigia
2015-04-01
A quick picking procedure is an important tool for processing large datasets in seismology. Identifying phases and determining precise onset times at seismological stations is essential not just for localization procedures but also for seismic body-wave tomography. An automated picking procedure should be fast, robust, precise and consistent. In manual processing, speed and consistency are not guaranteed, and unreproducible errors may therefore be introduced, especially for large amounts of data. In this work an offline P- and S-phase picker based on an autoregressive-prediction approach is optimized and applied to different data sets. The onset time can be described as the sum of the event source time, the theoretical travel time according to a reference velocity model, and a deviation from the theoretical travel time due to lateral heterogeneity or errors in the source location. With this approach the onset time at each station can be found around the theoretical travel time within a time window smaller than the maximum lateral heterogeneity. Around the theoretical travel time, an autoregressive prediction error is calculated from one or several components as the characteristic function (CF) of the waveform. The minimum of the Akaike Information Criterion (AIC) of the characteristic function identifies the phase. As was shown by Küperkoch et al. (2012), the AIC minimum tends to be too late. Therefore, an additional processing step is needed for precise picking. In the vicinity of the AIC minimum, a cost function is defined and used to find the optimal estimate of the arrival time. The cost function is composed of the CF and three side conditions. The idea behind the use of a cost function is to find the phase pick in the last minimum before the CF rises due to the phase onset. The final onset time is picked at the minimum of the cost function. The automatic picking procedure is applied to datasets recorded at stations of the
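The core AIC-picking step can be sketched with a simplified variance-based AIC (the Maeda form) on a synthetic trace; the picker described above uses an autoregressive prediction error as the characteristic function plus a cost-function refinement, both of which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic trace: background noise, then a stronger "phase arrival" at sample 300
trace = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(0.0, 5.0, 200)])

def aic_pick(x, guard=10):
    """Variance-based AIC over the window; its minimum marks the onset.
    AIC(k) = k ln var(x[:k]) + (n - k - 1) ln var(x[k:])."""
    n = x.size
    aic = np.full(n, np.inf)
    for k in range(guard, n - guard):  # guard against unstable edge variances
        aic[k] = (k * np.log(np.var(x[:k]))
                  + (n - k - 1) * np.log(np.var(x[k:])))
    return int(np.argmin(aic))

pick = aic_pick(trace)  # should land near the true onset at sample 300
```

On real data the minimum tends to land slightly after the onset, which is exactly the bias the cost-function step in the abstract is designed to correct.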
Depth-map and albedo estimation with superior information-theoretic performance
NASA Astrophysics Data System (ADS)
Harrison, Adam P.; Joseph, Dileepan
2015-02-01
Lambertian photometric stereo (PS) is a seminal computer vision method. However, using depth maps in the image formation model, instead of surface normals as in PS, reduces model parameters by a third, making it preferred from an information-theoretic perspective. The Akaike information criterion (AIC) quantifies this trade-off between goodness of fit and overfitting. Obtaining superior AIC values requires an effective maximum likelihood (ML) depth-map & albedo estimation method. Recently, the authors published an ML estimation method that uses a two-step approach based on PS. While effective, approximations of noise distributions and decoupling of depth-map & albedo estimation have limited its accuracy. Overcoming these limitations, this paper presents an ML method operating directly on images. The previous two-step ML method provides a robust initial solution, which kick starts a new nonlinear estimation process. An innovative formulation of the estimation task, including a separable nonlinear least-squares approach, reduces the computational burden of the optimization process. Experiments demonstrate visual improvements under noisy conditions by avoiding overfitting. As well, a comprehensive analysis shows that refined depth maps & albedos produce superior AIC metrics and enjoy better predictive accuracy than with literature methods. The results indicate that the new method is a promising means for depth-map & albedo estimation with superior information-theoretic performance.
Xu, Lei
2004-07-01
The nature of Bayesian Ying-Yang harmony learning is reexamined from an information-theoretic perspective. Not only is its ability for model selection and regularization explained with new insights, but its relations to and differences from studies of minimum description length (MDL), the Bayesian approach, the bits-back based MDL, the Akaike information criterion (AIC), maximum likelihood, information geometry, Helmholtz machines, and variational approximation are also discussed. Moreover, a generalized projection geometry is introduced for further understanding this new mechanism. Furthermore, new algorithms are developed for implementing Gaussian factor analysis (FA) and non-Gaussian factor analysis (NFA) such that the selection of appropriate factors is made automatically during parameter learning. PMID:15461081
Mission science value-cost savings from the Advanced Imaging Communication System (AICS)
NASA Technical Reports Server (NTRS)
Rice, R. F.
1984-01-01
An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi decoded downlink channel. The clean channel allowed AICS sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value or savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mar 2 series is shown.
Cheng, Wei; Zhang, Zhousuo; Cao, Hongrui; He, Zhengjia; Zhu, Guanwen
2014-01-01
This paper investigates one eigenvalue decomposition-based source number estimation method and three information-criterion-based methods, namely the Akaike Information Criterion (AIC), Minimum Description Length (MDL) and Bayesian Information Criterion (BIC), and extends the BIC into an Improved BIC (IBIC) that is more efficient and easier to calculate. The performance of these source number estimation methods is studied comparatively through numerical case studies, which include a linear superposition case and a case with both linear superposition and nonlinear modulation mixing. A test bed with three sound sources is constructed to test the performance of these methods on mechanical systems, and source separation is carried out to validate the effectiveness of the experimental studies. This work can benefit model order selection, complexity analysis of a system, and applications of source separation to mechanical systems for condition monitoring and fault diagnosis purposes. PMID:24776935
An information theory criteria based blind method for enumerating active users in DS-CDMA system
NASA Astrophysics Data System (ADS)
Samsami Khodadad, Farid; Abed Hodtani, Ghosheh
2014-11-01
In this paper, a new blind algorithm for active user enumeration in asynchronous direct-sequence code division multiple access (DS-CDMA) in a multipath channel scenario is proposed. The proposed method is based on information-theoretic criteria. Two main categories of information criteria are widely used in active user enumeration: the Akaike Information Criterion (AIC) and the Minimum Description Length (MDL) criterion. The main difference between these two criteria is their penalty functions. Due to this difference, MDL is a consistent enumerator and performs better at higher signal-to-noise ratios (SNR), whereas AIC is preferred at lower SNRs. We therefore propose an SNR-adaptive method, based on subspace analysis and a training genetic algorithm, that retains the advantages of both. Moreover, our method uses only a single antenna, unlike previous methods, which decreases hardware complexity. Simulation results show that the proposed method is capable of estimating the number of active users without any prior knowledge, and they confirm the efficiency of the method.
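The AIC and MDL enumerators mentioned here are typically computed from the eigenvalues of the sample covariance matrix in the Wax-Kailath form. The sketch below uses a simulated instantaneous mixture (not a multipath DS-CDMA signal) purely to show the two penalty functions at work.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated snapshots: p sensors, q active sources, white noise
p, q, N = 8, 3, 1000
A = rng.normal(size=(p, q))                 # mixing matrix
S = rng.normal(size=(q, N))                 # source waveforms
X = A @ S + 0.1 * rng.normal(size=(p, N))   # noisy observations

# Eigenvalues of the sample covariance, largest first
ev = np.sort(np.linalg.eigvalsh(X @ X.T / N))[::-1]

def itc_scores(ev, N, penalty):
    """Wax-Kailath information-theoretic criteria: the likelihood term is the
    log ratio of geometric to arithmetic mean of the smallest eigenvalues."""
    p = ev.size
    scores = []
    for k in range(p):
        tail = ev[k:]
        g = np.exp(np.mean(np.log(tail)))   # geometric mean
        a = np.mean(tail)                   # arithmetic mean
        loglik = N * (p - k) * np.log(g / a)
        free = k * (2 * p - k)              # free parameters for k sources
        scores.append(-2.0 * loglik + penalty(free))
    return scores

aic_k = int(np.argmin(itc_scores(ev, N, lambda f: 2.0 * f)))        # AIC penalty
mdl_k = int(np.argmin(itc_scores(ev, N, lambda f: f * np.log(N))))  # MDL penalty
```

The only difference between the two estimates is the penalty function, mirroring the abstract's point: MDL's ln(N)-scaled penalty makes it consistent, while AIC's fixed penalty leaves it prone to overestimation.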
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings for data sets with undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform PPCA for dimension reduction and supervised classification of cancer groups in high dimensions. PMID:25838836
The T cell-selective IL-2 mutant AIC284 mediates protection in a rat model of Multiple Sclerosis.
Weishaupt, Andreas; Paulsen, Daniela; Werner, Sandra; Wolf, Nelli; Köllner, Gabriele; Rübsamen-Schaeff, Helga; Hünig, Thomas; Kerkau, Thomas; Beyersdorf, Niklas
2015-05-15
Targeting regulatory T cells (Treg cells) with interleukin-2 (IL-2) constitutes a novel therapeutic approach for autoimmunity. As anti-cancer therapy with IL-2 has revealed substantial toxicities, a mutated human IL-2 molecule, termed AIC284 (formerly BAY 50-4798), has been developed to reduce these side effects. To assess whether AIC284 is efficacious in autoimmunity, we studied its therapeutic potential in an animal model of Multiple Sclerosis. Treatment of Lewis rats with AIC284 increased Treg cell numbers and protected the rats from Experimental Autoimmune Encephalomyelitis (EAE). AIC284 might thus also efficiently prevent progression of autoimmune diseases in humans. PMID:25903730
ERIC Educational Resources Information Center
Chapman, Rebekah; Buckley, Lisa; Sheehan, Mary
2011-01-01
The Extended Adolescent Injury Checklist (E-AIC), a self-report measure of injury based on the model of the Adolescent Injury Checklist (AIC), was developed for use in the evaluation of school-based interventions. The three stages of this development involved focus groups with adolescents and consultations with medical staff, pilot testing of the…
Test procedures, AN/AIC-27 system and component units. [for space shuttle
NASA Technical Reports Server (NTRS)
Reiff, F. H.
1975-01-01
The AN/AIC-27(V) intercommunication system is a 30-channel audio distribution system which consists of air crew station units, maintenance station units, and a central control unit. A test procedure for each of these units, as well as a test procedure for the complete system, is presented. The intent of the tests is to provide data for use in shuttle audio subsystem design.
AIC649 Induces a Bi-Phasic Treatment Response in the Woodchuck Model of Chronic Hepatitis B
Paulsen, Daniela; Weber, Olaf; Ruebsamen-Schaeff, Helga; Tennant, Bud C.; Menne, Stephan
2015-01-01
AIC649 has been shown to directly address the antigen-presenting cell arm of the host immune defense, leading to a regulated cytokine release and activation of T cell responses. In the present study we analyzed the antiviral efficacy of AIC649 as well as its potential to induce functional cure in animal models of chronic hepatitis B. Hepatitis B virus transgenic mice and chronically woodchuck hepatitis virus (WHV)-infected woodchucks were treated with AIC649. In the mouse system, AIC649 decreased the hepatitis B virus titer as effectively as the "gold standard", Tenofovir. Interestingly, AIC649-treated chronically WHV-infected woodchucks displayed a bi-phasic pattern of response: the marker for functional cure, hepatitis surface antigen, first increased but subsequently decreased, even after cessation of treatment, to significantly reduced levels. We hypothesize that the observed bi-phasic response pattern to AIC649 treatment reflects a physiologically "concerted", reconstituted immune response against WHV and therefore may indicate a potential for inducing functional cure in HBV-infected patients. PMID:26656974
Shinohara, Haruka; Kumazaki, Minami; Minami, Yosuke; Ito, Yuko; Sugito, Nobuhiko; Kuranaga, Yuki; Taniguchi, Kohei; Yamada, Nami; Otsuki, Yoshinori; Naoe, Tomoki; Akao, Yukihiro
2016-02-01
In Ph-positive leukemia, imatinib brought marked clinical improvement; however, further improvement is needed to prevent relapse. Cancer cells efficiently use limited energy sources, and drugs targeting cellular metabolism improve the efficacy of therapy. In this study, we characterized the effects of the novel anti-cancer fatty-acid derivative AIC-47 and imatinib, focusing on cancer-specific energy metabolism, in chronic myeloid leukemia cells. AIC-47 and imatinib in combination exhibited a significant synergic cytotoxicity. Imatinib inhibited only the phosphorylation of BCR-ABL, whereas AIC-47 suppressed the expression of the protein itself. Both AIC-47 and imatinib shifted the expression of pyruvate kinase M (PKM) isoforms from PKM2 to PKM1 through the down-regulation of polypyrimidine tract-binding protein 1 (PTBP1). PTBP1 functions as an alternative-splicing repressor of PKM1, resulting in expression of PKM2, which is an inactive form of pyruvate kinase for the last step of glycolysis. Although inactivation of BCR-ABL by imatinib strongly suppressed glycolysis, compensatory activation of fatty-acid oxidation (FAO) supported glucose-independent cell survival by up-regulating CPT1C, the rate-limiting FAO enzyme. In contrast, AIC-47 inhibited the expression of CPT1C and directly impaired fatty-acid metabolism. These findings were also observed in the CD34(+) fraction of Ph-positive acute lymphoblastic leukemia cells. These results suggest that AIC-47 in combination with imatinib strengthened the attack on cancer energy metabolism, in terms of both glycolysis and the compensatory activation of FAO. PMID:26607903
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Glosup, J.G.; Axelrod, M.C.
1994-08-12
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method. The problem involves a probability model for underwater noise due to distant shipping.
Tightening the Noose on LMXB Formation of MSPs: Need for AIC?
NASA Astrophysics Data System (ADS)
Grindlay, J. E.; Yi, I.
1997-12-01
The origin of millisecond pulsars (MSPs) remains an outstanding problem despite early and considerable evidence that they are the descendants of neutron stars spun up by accretion in low-mass X-ray binaries (LMXBs). The route from LMXBs to MSPs may pass through the high-luminosity Z-source LMXBs, but is severely limited by the small population (and apparent birth rate) of available Z-sources. The more numerous X-ray bursters, the Atoll sources, are still likely to be short in numbers or birth rate, and are now also found to be likely inefficient in the spin-up torques they can provide: accretion in these relatively low accretion rate systems is probably dominated by an advection-dominated flow, in which matter accretes onto the neutron star via sub-Keplerian flows that transfer correspondingly less angular momentum. We investigate the implications of possible ADAF flows in low-luminosity NS-LMXBs and find it unlikely that they can produce MSPs. The standard model can still be allowed if most NS-LMXBs are quiescent and undergo transient-like outbursts similar to the soft X-ray transients (SXTs, which mostly contain black holes). However, apart from Cen X-4 and Aql X-1, few such systems have been found, and the SXTs appear instead to be significantly deficient in neutron-star systems. Direct production of MSPs by the accretion-induced collapse (AIC) of white dwarfs has previously been suggested to solve the MSP vs. LMXB birth-rate problem. We re-examine AIC models in light of the new constraints on direct LMXB production, the additional difficulty imposed by ADAF flows, and constraints on SXT populations, and derive constraints on the progenitor WD spin and magnetic fields.
Chen, H.; Chen, L.; Albright, T.P.
2007-01-01
Invasive exotic species pose a growing threat to the economy, public health, and ecological integrity of nations worldwide. Explaining and predicting the spatial distribution of invasive exotic species is of great importance to prevention and early-warning efforts. We investigated the potential distribution of invasive exotic species, the environmental factors that influence these distributions, and the ability to predict them using statistical and information-theoretic approaches. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. For most species, however, absence data are not available. Presented with the challenge of developing a model based on presence-only information, we developed an improved logistic regression approach using information theory and frequency statistics to produce a relative suitability map. We generated a variety of distributions of ragweed (Ambrosia artemisiifolia L.) from logistic regression models applied to herbarium specimen location data and a suite of GIS layers including climatic, topographic, and land cover information. Our logistic regression model was based on Akaike's Information Criterion (AIC) applied to a suite of ecologically reasonable predictor variables. Based on the results, we provide a new frequency-statistical method to compartmentalize habitat suitability in the native range. Finally, we used the model and the compartmentalized criterion developed in native ranges to "project" a potential distribution onto the exotic ranges and build habitat-suitability maps. © Science in China Press 2007.
AN/AIC-22(V) Intercommunications Set (ICS) fiber optic link engineering analysis report
NASA Astrophysics Data System (ADS)
Minter, Richard; Blocksom, Roland; Ling, Christopher
1990-08-01
Electromagnetic interference (EMI) problems constitute a serious threat to operational Navy aircraft systems. The application of fiber optic technology is a potential solution to these problems. EMI problems reported in the P-3 patrol aircraft AN/AIC-22(V) Intercommunications System (ICS) were selected from an EMI problem database for investigation and possible application of fiber optic technology. A proof-of-concept experiment was performed to demonstrate the level of EMI immunity of fiber optics when used in an ICS. A full duplex single channel fiber optic audio link was designed and assembled from modified government furnished equipment (GFE) previously used in another Navy fiber optic application. The link was taken to the Naval Air Test Center (NATC) Patuxent River, Maryland, and temporarily installed in a Naval Research Laboratory (NRL) P-3A aircraft for a side-by-side comparison test with the installed ICS. With regard to noise reduction, the fiber optic link provided a qualitative improvement over the conventional ICS. In an effort to obtain a quantitative measure of comparison, audio frequency measurements were made both with and without operation of the aircraft VHF and UHF radio transmitters.
NASA Astrophysics Data System (ADS)
Bambang Avip Priatna, M.; Lukman, Sumiaty, Encum
2016-02-01
This paper aims to determine the properties of the Correspondence Analysis (CA) estimator for estimating latent variable models. The method used is the high-dimensional AIC (HAIC) method, with simulation of Bernoulli-distributed data. The stages are: (1) determine the CA matrix; (2) create a model of the CA estimator to estimate the latent variables using HAIC; (3) simulate Bernoulli-distributed data with 1,000,748 repetitions. The simulation results show that the CA estimator models work well.
Franchini, M; Coppola, A; Rocino, A; Zanon, E; Morfini, M; Accorsi, Arianna; Aru, Anna Brigida; Biasoli, Chiara; Cantori, Isabella; Castaman, Giancarlo; Cesaro, Simone; Ciabatta, Carlo; De Cristofaro, Raimondo; Delios, Grazia; Di Minno, Giovanni; D'Incà, Marco; Dragani, Alfredo; Ettorre, Cosimo Pietro; Gagliano, Fabio; Gamba, Gabriella; Gandini, Giorgio; Giordano, Paola; Giuffrida, Gaetano; Gresele, Paolo; Latella, Caterina; Luciani, Matteo; Margaglione, Maurizio; Marietta, Marco; Mazzucconi, Maria Gabriella; Messina, Maria; Molinari, Angelo Claudio; Notarangelo, Lucia Dora; Oliovecchio, Emily; Peyvandi, Flora; Piseddu, Gavino; Rossetti, Gina; Rossi, Vincenza; Santagostino, Elena; Schiavoni, Mario; Schinco, Piercarla; Serino, Maria Luisa; Tagliaferri, Annarita; Testa, Sophie
2014-03-01
Despite great advances in haemophilia care in the last 20 years, a number of questions on haemophilia therapy remain unanswered. These debated issues primarily involve the choice of product type (plasma-derived vs. recombinant) for patients with different characteristics: specifically, whether they have been infected by blood-borne viruses, and whether they bear a high or low risk of inhibitor development. In addition, choosing the most appropriate treatment regimen for non-inhibitor and inhibitor patients compels physicians operating at haemophilia treatment centres (HTCs) to take important therapeutic decisions, which are often based on personal clinical experience rather than on evidence-based recommendations from the published literature. To ascertain the opinions of the Italian expert physicians who are responsible for common clinical practice and therapeutic decisions on the most controversial aspects of haemophilia care, we conducted a survey among the Directors of the HTCs affiliated to the Italian Association of Haemophilia Centres (AICE). A questionnaire consisting of 19 questions covering the most important topics related to haemophilia treatment was sent to the Directors of all 52 Italian HTCs. Forty of the 52 Directors (76.9%) responded, accounting for the large majority of HTCs affiliated to the AICE throughout Italy. The results of this survey provide for the first time a picture of the attitudes towards clotting factor concentrate use and product selection of clinicians working at Italian HTCs. PMID:24533954
Teacher's Corner: Conducting Specification Searches with Amos
ERIC Educational Resources Information Center
Schumacker, Randall E.
2006-01-01
Amos 5.0 (Arbuckle, 2003) permits exploratory specification searches for the best theoretical model given an initial model, using the following fit function criteria: chi-square (C), chi-square minus df (C - df), the Akaike Information Criterion (AIC), the Browne-Cudeck criterion (BCC), the Bayesian Information Criterion (BIC), chi-square divided by the degrees of…
Hemerik, Lia; van der Hoeven, Nelly; van Alphen, Jacques J M
2002-01-01
Approximately three decades ago, the question of whether parasitoids are able to assess the number or origin of eggs in a host was first answered for a solitary parasitoid, Leptopilina heterotoma, by fitting theoretically derived distributions to empirical ones. We extend the set of theoretically postulated distributions of eggs among hosts by combining searching modes and abilities in assessing host quality. In the models, parasitoids search either randomly (Poisson) (1) or by vibrotaxis (negative binomial) (2). Parasitoids are assumed (a) to treat all hosts equally, (b) to distinguish only between unparasitised and parasitised hosts, (c) to distinguish hosts by the number of eggs they contain, or (d) to recognise their own eggs. Mathematically tractable combinations of searching mode (1 and 2) and abilities (a, b, c, d) result in seven different models (M1a, M1b, M1c, M1d, M2a, M2b and M2c). These models were simulated for a varying number of searching parasitoids and various mean numbers of eggs per host. Each resulting distribution is fitted to all theoretical models, and the model with the minimum Akaike's information criterion (AIC) is chosen as the best fitting for each simulated distribution. We thus investigate the power of the AIC, and for each distribution with a specified mean number of eggs per host we derive a frequency distribution for classification. Firstly, we discuss the simulations of models including random search (M1a, M1b, M1c and M1d). For M1a, M1c and M1d, the simulated distributions are correctly classified in at least 70% of all cases; model M1b, however, is only properly classified for intermediate mean numbers of eggs per host. The models including vibrotaxis as searching behaviour (M2a, M2b and M2c) cannot be distinguished from those with random search if the mean number of eggs per host is low. Among the models incorporating vibrotaxis, the three abilities are detected analogously as in models with random search.
NASA Astrophysics Data System (ADS)
Cama, Mariaelena; Cristi Nicu, Ionut; Conoscenti, Christian; Quénéhervé, Geraldine; Maerker, Michael
2016-04-01
Landslide susceptibility can be defined as the likelihood of a landslide occurring in a given area on the basis of local terrain conditions. In recent decades much research has focused on its evaluation by means of stochastic approaches, under the assumption that 'the past is the key to the future': if a model is able to reproduce a known landslide spatial distribution, it will be able to predict the future locations of new (i.e. unknown) slope failures. Among the various stochastic approaches, Binary Logistic Regression (BLR) is one of the most used because it calculates the susceptibility in probabilistic terms and its results are easily interpretable from a geomorphological point of view. However, very often little importance is given to the assessment of multicollinearity, whose effect is that the coefficient estimates are unstable, possibly with opposite signs, and therefore difficult to interpret. Multicollinearity should therefore be evaluated every time in order to obtain a model whose results are geomorphologically sound. In this study the effects of multicollinearity on the predictive performance and robustness of landslide susceptibility models are analyzed. In particular, multicollinearity is estimated by means of the Variance Inflation Factor (VIF), which is also used as a selection criterion for the independent variables (VIF stepwise selection) and compared to the more commonly used AIC stepwise selection. The robustness of the results is evaluated through 100 replicates of the dataset. The study area selected to perform this analysis is the Moldavian Plateau, where landslides are among the most frequent geomorphological processes. This area has an increasing trend of urbanization and a very high potential regarding cultural heritage, being the place of discovery of the largest settlement belonging to the Cucuteni Culture in Eastern Europe (which led to the development of the great Cucuteni-Trypillia complex). Therefore, identifying the areas susceptible to
Model weights and the foundations of multimodel inference
Link, W.A.; Barker, R.J.
2006-01-01
Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of the default priors associated with AIC. We note, however, that both procedures are only approximations to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
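As an editorial illustration of the model weights discussed in this abstract: both AIC-based Akaike weights and BIC-based approximate posterior weights take the same exponential form. Below is a minimal Python sketch with hypothetical criterion values, not values from the paper.

```python
import math

def ic_weights(ic_values):
    """Convert information-criterion values (AIC or BIC) into model weights.

    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
    where delta_i = IC_i - min(IC) is the criterion difference
    from the best (smallest-IC) model.
    """
    best = min(ic_values)
    rel = [math.exp(-0.5 * (v - best)) for v in ic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for three candidate models
aic = [100.0, 102.0, 110.0]
weights = ic_weights(aic)
print([round(w, 3) for w in weights])
```

Substituting BIC values for AIC values in `ic_weights` gives the approximate posterior model weights the authors compare against.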
Stanley, T.R.; Burnham, K.P.
1998-01-01
Specification of an appropriate model is critical to valid statistical inference. Given that the "true model" for the data is unknown, the goal of model selection is to select a plausible approximating model that balances model bias and sampling variance. Model selection based on information criteria such as AIC or its small-sample variant AICc, or criteria like CAIC, has proven useful in a variety of contexts, including the analysis of open-population capture-recapture data. These criteria have not been intensively evaluated for closed-population capture-recapture models, which are integer-parameter models used to estimate population size (N), and there is concern that they will not perform well. To address this concern, we evaluated AIC, AICc, and CAIC model selection for closed-population capture-recapture models by empirically assessing the quality of inference for the population size parameter N. We found that AIC-, AICc-, and CAIC-selected models had smaller relative mean squared errors than randomly selected models, but that confidence interval coverage on N was poor unless unconditional variance estimates (which incorporate model uncertainty) were used to compute confidence intervals. Overall, AIC and AICc outperformed CAIC, and are preferred to CAIC for selection among the closed-population capture-recapture models we investigated. A model-averaging approach to estimation, using AIC, AICc, or CAIC to estimate weights, was also investigated and proved superior to estimation using AIC-, AICc-, or CAIC-selected models. Our results suggest that, for model averaging, AIC or AICc should be favored over CAIC for estimating weights.
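The model-averaging with unconditional variance described in this abstract can be sketched as follows. This is an illustrative Python implementation in the style of the Burnham-Anderson unconditional standard error; the estimates of N are hypothetical, not the paper's data.

```python
import math

def model_average(estimates, std_errors, aic_values):
    """Model-averaged estimate and unconditional SE.

    Weights come from AIC differences; the unconditional SE adds a
    between-model variance term to each model's sampling variance:
        se_bar = sum_i w_i * sqrt(se_i^2 + (theta_i - theta_bar)^2)
    """
    best = min(aic_values)
    raw = [math.exp(-0.5 * (a - best)) for a in aic_values]
    w = [r / sum(raw) for r in raw]
    theta_bar = sum(wi * est for wi, est in zip(w, estimates))
    se_bar = sum(
        wi * math.sqrt(se ** 2 + (est - theta_bar) ** 2)
        for wi, est, se in zip(w, estimates, std_errors)
    )
    return theta_bar, se_bar

# Hypothetical N-hat estimates from three capture-recapture models
theta, se = model_average(
    estimates=[120.0, 135.0, 150.0],
    std_errors=[10.0, 12.0, 15.0],
    aic_values=[210.0, 211.0, 215.0],
)
```

Because the unconditional SE carries the between-model spread of the estimates, in this example it comes out larger than the best single model's SE of 10.0, which is exactly why intervals built from it achieve better coverage.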
End-to-end imaging information rate advantages of various alternative communication systems
NASA Technical Reports Server (NTRS)
Rice, R. F.
1982-01-01
The efficiency of various deep space communication systems required to transmit both imaging data and a typically error-sensitive class of data called general science and engineering (gse) data is compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems that include various channel coding and data compression alternatives. The system comparisons include an advanced imaging communication system (AICS) that exhibits the significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as two orders of magnitude increase in imaging information rate compared to a single-channel uncoded, uncompressed system, while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts, as well as efforts to apply them, are provided in support of the system analysis.
Time series ARIMA models for daily price of palm oil
NASA Astrophysics Data System (ADS)
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc) and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models being considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
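The three criteria named in this abstract differ only in their penalty terms. As a minimal sketch (the log-likelihood, parameter count, and sample size below are hypothetical, not values from this study):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """Small-sample corrected AIC: AICc = AIC + 2k(k+1) / (n - k - 1)."""
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(log_lik, k, n):
    """Bayesian information criterion: BIC = k ln(n) - 2 ln L."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical fit: log-likelihood -500.0, k = 4 parameters, n = 100 observations
print(aic(-500.0, 4))                  # 1008.0
print(round(aicc(-500.0, 4, 100), 3))  # 1008.421
print(round(bic(-500.0, 4, 100), 3))   # 1018.421
```

As n grows, the AICc correction vanishes and AICc converges to AIC, while the BIC penalty k ln(n) keeps growing, which is why BIC tends to pick smaller models on large samples.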
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion, and the particular criterion chosen may not be uniformly best. In this paper, we study Akaike information criteria (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended when attention is focussed on selecting the frailty structure rather than the fixed effects. PMID:17476647
Model Selection for Geostatistical Models
Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.
2006-02-01
We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space, whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply only under specific assumptions on the models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named the Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But the BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations extend to two famous penalized likelihood methods, the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since they are proved to be approximations of -2 log BF. In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating the expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point estimate.
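For reference, the Watanabe criterion mentioned at the end of this abstract is commonly written as follows (notation as in Gelman et al., 2014, with S posterior draws θ^(s) and n observations):

```latex
\mathrm{lppd} = \sum_{i=1}^{n} \log\!\left( \frac{1}{S} \sum_{s=1}^{S} p\!\left(y_i \mid \theta^{(s)}\right) \right),
\qquad
p_{\mathrm{WAIC}} = \sum_{i=1}^{n} \operatorname{Var}_{s}\!\left[ \log p\!\left(y_i \mid \theta^{(s)}\right) \right],
\qquad
\mathrm{WAIC} = -2\left( \mathrm{lppd} - p_{\mathrm{WAIC}} \right).
```

Here lppd is the log pointwise predictive density averaged over the posterior draws, and p_WAIC plays the role of the effective number of parameters, analogous to k in the AIC penalty.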
ERIC Educational Resources Information Center
Nuttonson, M. Y., Ed.
Twelve papers were translated from Russian: Automation of Information Processing Involved in Experimental Studies of Atmospheric Diffusion, Micrometeorological Characteristics of Atmospheric Pollution Conditions, Study of the Influence of Irregularities of the Earth's Surface on the Air Flow Characteristics in a Wind Tunnel, Use of Parameters of…
Variable selection with stepwise and best subset approaches
2016-01-01
While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are performed automatically by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the method for stepwise regression can be specified in the direction argument with character values "forward", "backward" and "both". The bestglm() function begins with a data frame containing explanatory variables and response variables; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion (AIC). PMID:27162786
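stepAIC() is an R function; as a language-neutral illustration of what a backward stepwise search by AIC does, here is a self-contained Python sketch using the Gaussian-likelihood form of AIC for ordinary least squares. The data are simulated for the example, not taken from the paper.

```python
import numpy as np

def ols_aic(X, y):
    """Gaussian AIC for OLS, up to an additive constant:
    AIC = n * ln(RSS / n) + 2 * (p + 1),
    with p fitted coefficients plus 1 for the error variance."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return float(n * np.log(rss / n) + 2 * (X.shape[1] + 1))

def backward_stepwise(X, y, names):
    """Drop one predictor at a time while AIC improves (intercept kept)."""
    keep = list(range(X.shape[1]))
    current = ols_aic(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in keep[1:]:  # never drop column 0, the intercept
            trial = [c for c in keep if c != j]
            trial_aic = ols_aic(X[:, trial], y)
            if trial_aic < current:
                current, keep, improved = trial_aic, trial, True
                break
    return [names[c] for c in keep], current

# Simulated data: y depends on x1 only; x2 is pure noise
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([np.ones(n), x1, x2])
selected, final_aic = backward_stepwise(X, y, ["intercept", "x1", "x2"])
print(selected)
```

Forward selection works the same way in reverse: start from the intercept-only model and add whichever predictor lowers AIC most, stopping when no addition improves it.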
Nielsen, Jakob T.; Eghbalnia, Hamid R.; Nielsen, Niels Chr.
2011-01-01
The exquisite sensitivity of chemical shifts as reporters of structural information, and the ability to measure them routinely and accurately, gives great import to formulations that elucidate the structure-chemical-shift relationship. Here we present a new and highly accurate, precise, and robust formulation for the prediction of NMR chemical shifts from protein structures. Our approach, shAIC (shift prediction guided by Akaike's Information Criterion), capitalizes on mathematical ideas and an information-theoretic principle to represent the functional form of the relationship between structure and chemical shift as a parsimonious sum of smooth analytical potentials, which optimally takes into account short-, medium-, and long-range parameters in a nuclei-specific manner to capture potential chemical shift perturbations caused by distant nuclei. shAIC outperforms the state-of-the-art methods that use analytical formulations. Moreover, for structures derived by NMR or structures with novel folds, shAIC delivers better overall results, even when compared to sophisticated machine-learning approaches. shAIC provides a computationally lightweight implementation that is unimpeded by molecular size, making it ideal for use as a force field. PMID:22293396
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
A hybrid model to simulate the annual runoff of the Kaidu River in northwest China
NASA Astrophysics Data System (ADS)
Xu, Jianhua; Chen, Yaning; Bai, Ling; Xu, Yiwen
2016-04-01
Fluctuating and complicated hydrological processes can result in uncertainty in runoff forecasting. Thus, it is necessary to apply multi-method integrated modeling approaches to simulate runoff. Integrating ensemble empirical mode decomposition (EEMD), a back-propagation artificial neural network (BPANN) and a nonlinear regression equation, we put forward a hybrid model to simulate the annual runoff (AR) of the Kaidu River in northwest China. We validate the simulated effects by using the coefficient of determination (R2) and the Akaike information criterion (AIC) based on observed data from 1960 to 2012 at the Dashankou hydrological station. The average absolute and relative errors show the high simulation accuracy of the hybrid model. R2 and AIC both illustrate that the hybrid model performs much better than the single BPANN. The hybrid model and integrated approach elicited by this study can be applied to simulate the annual runoff of similar rivers in northwest China.
Batth, S S; Sreeraman, R; Dienes, E; Beckett, L A; Daly, M E; Cui, J; Mathai, M; Purdy, J A
2013-01-01
Objective: To characterise the relationship between lacrimal gland dose and ocular toxicity among patients treated by intensity-modulated radiotherapy (IMRT) for sinonasal tumours. Methods: 40 patients with cancers involving the nasal cavity and paranasal sinuses were treated with IMRT to a median dose of 66.0 Gy. Toxicity was scored using the Radiation Therapy Oncology Group morbidity criteria based on conjunctivitis, corneal ulceration and keratitis. The paired lacrimal glands were contoured as organs at risk, and the mean dose, maximum dose, V10, V20 and V30 were determined. Statistical analysis was performed using logistic regression and the Akaike information criterion (AIC). Results: The maximum and mean dose to the ipsilateral lacrimal gland were 19.2 Gy (range, 1.4–75.4 Gy) and 14.5 Gy (range, 11.1–67.8 Gy), respectively. The mean V10, V20 and V30 values were 50%, 25% and 17%, respectively. The incidence of acute and late Grade 3+ toxicities was 23% and 19%, respectively. Based on logistic regression and AIC, the maximum dose to the ipsilateral lacrimal gland was identified as a more significant predictor of acute toxicity (AIC, 53.89) and late toxicity (AIC, 32.94) than the mean dose (AIC, 56.13 and 33.83, respectively). The V20 was identified as the most significant predictor of late toxicity (AIC, 26.81). Conclusion: A dose–response relationship between maximum dose to the lacrimal gland and ocular toxicity was established. Our data suggesting a threshold relationship may be useful in establishing dosimetric guidelines for IMRT planning that may decrease the risk of acute and late lacrimal toxicities in the future. Advances in knowledge: A threshold relationship between radiation dose to the lacrimal gland and ocular toxicity was demonstrated, which may aid in treatment planning and reducing the morbidity of radiotherapy for sinonasal tumours. PMID:24167183
Golbon, Reza; Ogutu, Joseph Ochieng; Cotter, Marc; Sauerborn, Joachim
2015-12-01
Linear mixed models were developed and used to predict rubber (Hevea brasiliensis) yield based on meteorological conditions to which rubber trees had been exposed for periods ranging from 1 day to 2 months prior to tapping events. Predictors included a range of moving averages of meteorological covariates spanning different windows of time before the date of the tapping events. Serial autocorrelation in the latex yield measurements was accounted for using random effects and a spatial generalization of the autoregressive error covariance structure suited to data sampled at irregular time intervals. Information-theoretic measures, specifically the Akaike information criterion (AIC), AIC corrected for small sample size (AICc), and Akaike weights, were used to select models with the greatest strength of support in the data from a set of competing candidate models. The predictive performance of the selected best model was evaluated using both leave-one-out cross-validation (LOOCV) and an independent test set. Moving averages of precipitation, minimum and maximum temperature, and maximum relative humidity with a 30-day lead period were identified as the best yield predictors. Prediction accuracy expressed in terms of the percentage of predictions within a measurement error of 5 g for cross-validation and also for the test dataset was above 99%. PMID:25824122
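The small-sample correction (AICc) and Akaike weights used above have closed forms; a minimal sketch:

```python
import math

def aicc(aic, k, n):
    """AIC corrected for small samples: AICc = AIC + 2k(k+1)/(n-k-1)."""
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(aics):
    """Akaike weights: w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2),
    where delta_i is each model's AIC minus the minimum AIC."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Three candidate models with illustrative AIC values
w = akaike_weights([100.0, 102.0, 110.0])  # weights sum to 1
```

Each weight can be read as the relative strength of support for that model within the candidate set, which is how the study screened its competing lag-window models.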
Samet, J.; Zeger, S.; Kelsall, J.; Xu, J.; Kalkstein, L.
1998-04-01
This report considers the consequences of using alternative approaches to controlling for weather and explores modification of air pollution effects by weather, as weather patterns could plausibly alter air pollution's effect on health. The authors analyzed 1973--1980 total mortality data for Philadelphia using four weather models and compared estimates of the effects of TSP and SO2 on mortality using a Poisson regression model. Two synoptic categories developed by Kalkstein were selected--the Temporal Synoptic Index (TSI) and the Spatial Synoptic Classification (SSC)--and compared with (1) descriptive models developed by Schwartz and Dockery (S-D); and (2) LOESS, a nonparametric function of the previous day's temperature and dew point. The authors considered model fit using Akaike's Information Criterion (AIC) and changes in the estimated effects of TSP and SO2. In the full-year analysis, S-D is better than LOESS at predicting mortality, and S-D and LOESS are better than TSI, as measured by AIC. When TSP or SO2 was fit alone, the results were qualitatively similar, regardless of how weather was controlled; when TSP and SO2 were fit simultaneously, the S-D and LOESS models give qualitatively different results than TSI, which attributes more of the pollution effect to SO2 than to TSP. Model fit is substantially poorer with TSI.
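Scoring Poisson regression fits by AIC, as done above, only requires the Poisson log-likelihood plus a parameter penalty. A toy sketch (invented daily counts and fitted rates, not the Philadelphia data):

```python
import math

def poisson_loglik(counts, rates):
    """Poisson log-likelihood: sum over days of y*ln(mu) - mu - ln(y!)."""
    return sum(y * math.log(mu) - mu - math.lgamma(y + 1)
               for y, mu in zip(counts, rates))

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

# Invented daily death counts and fitted Poisson means from two
# hypothetical weather models (values for illustration only).
counts = [4, 6, 5, 7]
weather_model = [4.5, 5.8, 5.2, 6.5]   # assume 4 fitted parameters
intercept_only = [5.5, 5.5, 5.5, 5.5]  # 1 fitted parameter
aic_weather = aic(poisson_loglik(counts, weather_model), 4)
aic_null = aic(poisson_loglik(counts, intercept_only), 1)
```

With such a tiny sample the parsimony penalty dominates; on the full mortality series the richer weather covariates would have to earn their extra parameters through likelihood, which is exactly the trade-off the report's AIC comparison measures.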
Algorithm for systematic peak extraction from atomic pair distribution functions.
Granlund, L; Billinge, S J L; Duxbury, P M
2015-07-01
The study presents an algorithm, ParSCAPE, for model-independent extraction of peak positions and intensities from atomic pair distribution functions (PDFs). It provides a statistically motivated method for determining parsimony of extracted peak models using the information-theoretic Akaike information criterion (AIC) applied to plausible models generated within an iterative framework of clustering and chi-square fitting. All parameters the algorithm uses are in principle known or estimable from experiment, though careful judgment must be applied when estimating the PDF baseline of nanostructured materials. ParSCAPE has been implemented in the Python program SrMise. Algorithm performance is examined on synchrotron X-ray PDFs of 16 bulk crystals and two nanoparticles using AIC-based multimodeling techniques, and particularly the impact of experimental uncertainties on extracted models. It is quite resistant to misidentification of spurious peaks coming from noise and termination effects, even in the absence of a constraining structural model. Structure solution from automatically extracted peaks using the Liga algorithm is demonstrated for 14 crystals and for C60. Special attention is given to the information content of the PDF, theory and practice of the AIC, as well as the algorithm's limitations. PMID:26131896
Shi, Jian-min; Fan, Cheng-fang; Liu, Yang; Yang, Qing-pei; Fang, Kai; Fan, Fang-li; Yang, Guang-yao
2015-12-01
To detect the ecological process of the succession series of Phyllostachys glauca forest in a limestone mountain, five niche models, i.e., broken stick model (BSM), niche preemption model (NPM), dominance preemption model (DPM), random assortment model (RAM) and overlapping niche model (ONM) were employed to describe the species-abundance distribution patterns (SDPs) of 15 samples. χ² test and Akaike information criterion (AIC) were used to test the fitting effects of the five models. The results showed that the optimal SDP models for P. glauca forest, bamboo-broadleaved mixed forest and broadleaved forest were DPM (χ² = 35.86, AIC = -69.77), NPM (χ² = 1.60, AIC = -94.68) and NPM (χ² = 0.35, AIC = -364.61), respectively. BSM also well fitted the SDP of bamboo-broadleaved mixed forest and broadleaved forest, while it was unsuitable to describe the SDP of P. glauca forest. The fittings of RAM and ONM in the three forest types were all rejected by the χ² test and AIC. With the development of community succession from P. glauca forest to broadleaved forest, the species richness and evenness increased, and the optimal SDP model changed from DPM to NPM. It was inferred that the change of ecological process from habitat filtration to interspecific competition was the main driving force of the forest succession. The results also indicated that the application of multiple SDP models and test methods would be beneficial to select the best model and deeply understand the ecological process of community succession. PMID:27111994
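Of the five niche models, the broken stick model (BSM) has a simple closed form for expected abundance by rank; a sketch under the usual formulation:

```python
def broken_stick(total_individuals, n_species):
    """Broken stick model: the expected abundance of the i-th ranked
    species among S species and N individuals is (N/S) * sum_{k=i..S} 1/k."""
    S, N = n_species, total_individuals
    return [N / S * sum(1.0 / k for k in range(i, S + 1))
            for i in range(1, S + 1)]

# Example: 100 individuals across 5 species
exp_abund = broken_stick(100, 5)
```

The expected abundances sum to N and decrease with rank; fitting then proceeds by comparing them with observed rank abundances via the χ² statistic and AIC, as in the study.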
Takekawa, J Y; Wainwright-De La Cruz, S E; Hothem, R L; Yee, J
2002-01-01
In wild waterfowl, poor winter body condition may negatively affect migration, survival, and reproduction. Environmental contaminants have been shown to adversely affect the body condition of captive birds, but few field studies have examined body condition and contaminants in wild birds during the winter. We assessed the body condition of carcasses from a collection of canvasbacks (Aythya valisineria) and lesser (A. affinis) and greater scaup (A. marila) wintering in coastal California. We used Akaike information criterion (AIC) to select the model with the best balance of parsimony and goodness of fit that related indices of body condition with concentrations of Cd, Cu, Hg, Se, and Zn. Total ash-free protein in canvasbacks decreased with increasing Se concentrations, and pancreas mass decreased with increasing Hg. We combined the closely related lesser and greater scaup in analyses and found that total carcass fat, pancreas mass, and carcass mass decreased with increasing Zn concentrations, and pancreas mass decreased with increasing Hg. Our AIC analysis indicated that some indices of body condition in diving ducks were inversely related to some environmental contaminants in this collection, but additional AIC analyses should be conducted across a wider range of contaminant concentrations to corroborate our findings. PMID:11706369
NASA Astrophysics Data System (ADS)
Sleeman, Reinoud; van Eck, Torild
1999-06-01
The onset of a seismic signal is determined through joint AR modeling of the noise and the seismic signal, and the application of the Akaike Information Criterion (AIC) using the onset time as parameter. This so-called AR-AIC phase picker has been tested successfully and implemented on the Z-component of the broadband station HGN to provide automatic P-phase picks for a rapid warning system. The AR-AIC picker is shown to provide accurate and robust automatic picks on a large experimental database. Out of 1109 P-phase onsets with signal-to-noise ratio (SNR) above 1 from local, regional and teleseismic earthquakes, our implementation detects 71% and gives a mean difference with manual picks of 0.1 s. An optimal version of the well-established picker of Baer and Kradolfer [Baer, M., Kradolfer, U., An automatic phase picker for local and teleseismic events, Bull. Seism. Soc. Am. 77 (1987) 1437-1445] detects less than 41% and gives a mean difference with manual picks of 0.3 s using the same dataset.
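A widely used simplification of the AR-AIC idea is the variance-based AIC picker (after Maeda), which places the onset where splitting the trace into a "noise" and a "signal" segment minimizes AIC. A sketch of that simplification, not the authors' joint-AR implementation:

```python
import math
import random

def aic_picker(x):
    """Variance-based AIC onset picker (Maeda-style simplification):
    AIC(k) = k*ln(var(x[:k])) + (n-k)*ln(var(x[k:]));
    the onset estimate is the split index k minimizing AIC(k)."""
    n = len(x)

    def var(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg) / len(seg) or 1e-12

    best_k, best_aic = None, float("inf")
    for k in range(2, n - 2):  # keep both segments non-trivial
        a = k * math.log(var(x[:k])) + (n - k) * math.log(var(x[k:]))
        if a < best_aic:
            best_k, best_aic = k, a
    return best_k

# Synthetic trace: quiet noise followed by a strong arrival at index 50
random.seed(0)
x = [random.gauss(0, 0.1) for _ in range(50)] + \
    [random.gauss(0, 2.0) for _ in range(50)]
onset = aic_picker(x)  # near index 50
```

The full AR-AIC picker replaces the raw segment variances with prediction-error variances from autoregressive models of the noise and signal, which is what makes it robust on real broadband data.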
Regnault, N.; Gillman, M. W.; Kleinman, K.; Rifas-Shiman, S.; Botton, J.
2016-01-01
Background/Aims The objective of our study was to compare the fit of four growth models for weight and height in contemporary US children between birth and 9 years. Methods In Project Viva, we collected weight and height growth data between birth and 9 years. We compared the Jenss model, the adapted Jenss model that adds a quadratic term, and the Reed 1st and 2nd order models. We used the log likelihood ratio test to compare nested models and the Akaike (AIC)/Bayesian information criterion (BIC) to compare nonnested models. Results For weight and height, the adapted Jenss model had a better fit than the Jenss model (for weight: p < 0.0001), and the Reed 2nd order model had a better fit than the Reed 1st order model (for weight: p < 0.0001). Compared with the Reed 2nd order model, the adapted Jenss model had a better fit for both weight (adapted Jenss vs. Reed 2nd order, AIC: 66,974 vs. 82,791, BIC: 67,066 vs. 82,883) and height (adapted Jenss vs. Reed 2nd order, AIC: 87,108 vs. 87,612, BIC: 87,196 vs. 87,700). Conclusions In this pre-birth study of children aged 0–9 years, for both weight and height the adapted Jenss model presented the best fit of all four tested models. PMID:25413655
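Comparing non-nested growth models by AIC and BIC, as done above, needs only each model's maximized log-likelihood. A sketch with invented values, illustrating how the two criteria can disagree because BIC penalizes parameters more heavily for large n:

```python
import math

def aic(loglik, k):
    """AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """BIC = k ln(n) - 2 ln(L)."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits: a 4-parameter and a 6-parameter growth model
# on n = 500 observations (illustrative numbers, not Project Viva data).
ll_small, ll_big, n = -1200.0, -1195.0, 500
# AIC favors the larger model here, while BIC favors the smaller one.
```

When AIC and BIC agree, as for the adapted Jenss model in the study, the conclusion is correspondingly more robust.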
Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models
Roshani, Daem; Ghaderi, Ebrahim
2016-01-01
Background and Objective: The Cox model is a popular model in survival analysis that assumes the covariate has a linear effect on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software, and significance levels were set at 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean square error than the other methods for selection of the smoothing parameter. The same result was obtained with real data. Conclusion: Penalized spline smoothing, with AIC used for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809
Measure the Semantic Similarity of GO Terms Using Aggregate Information Content.
Song, Xuebo; Li, Lin; Srimani, Pradip K; Yu, Philip S; Wang, James Z
2014-01-01
The rapid development of gene ontology (GO) and huge amount of biomedical data annotated by GO terms necessitate computation of semantic similarity of GO terms and, in turn, measurement of functional similarity of genes based on their annotations. In this paper we propose a novel and efficient method to measure the semantic similarity of GO terms. The proposed method addresses the limitations in existing GO term similarity measurement techniques; it computes the semantic content of a GO term by considering the information content of all of its ancestor terms in the graph. The aggregate information content (AIC) of all ancestor terms of a GO term implicitly reflects the GO term's location in the GO graph and also represents how human beings use this GO term and all its ancestor terms to annotate genes. We show that semantic similarity of GO terms obtained by our method closely matches the human perception. Extensive experimental studies show that this novel method also outperforms all existing methods in terms of the correlation with gene expression data. We have developed web services for measuring semantic similarity of GO terms and functional similarity of genes using the proposed AIC method and other popular methods. These web services are available at http://bioinformatics.clemson.edu/G-SESAME. PMID:26356015
Ground surface paleotemperature reconstruction using information measures and empirical Bayes
NASA Astrophysics Data System (ADS)
Woodbury, Allan D.; Ferguson, Grant
2006-03-01
We outline an empirical Bayesian approach to ground-surface temperature (GST) reconstruction that utilizes Akaike's Bayesian information criterion (ABIC). Typical unknown statistical quantities, such as the noise variance and so on, are automatically determined through the analysis. We compare the ABIC inversion to the singular value decomposition on a synthetic downhole temperature data set. In comparing the root mean square errors between the synthetic climatic signal and each of the reconstructions (singular value and ABIC) from 1900 to 2002, we see that the ABIC solution produced the 'best' reconstruction in a mean square sense. We also carry out an analysis of the Canadian borehole data set in which we use 221 temperature profiles. The reconstructed GST record shows warming between 1800 and 1949 of approximately 1.0 K, with the maximum rate of warming occurring between 1900 and 1949.
The optimum order of a Markov chain model for daily rainfall in Nigeria
NASA Astrophysics Data System (ADS)
Jimoh, O. D.; Webster, P.
1996-11-01
Markov type models are often used to describe the occurrence of daily rainfall. Although models of Order 1 have been successfully employed, there remains uncertainty concerning the optimum order for such models. This paper is concerned with estimation of the optimum order of Markov chains and, in particular, the use of objective criteria of the Akaike and Bayesian Information Criteria (AIC and BIC, respectively). Using daily rainfall series for five stations in Nigeria, it has been found that the AIC and BIC estimates vary with month as well as the value of the rainfall threshold used to define a wet day. There is no apparent system to this variation, although AIC estimates are consistently greater than or equal to BIC estimates, with values of the latter limited to zero or unity. The optimum order is also investigated through generation of synthetic sequences of wet and dry days using the transition matrices of zero-, first- and second-order Markov chains. It was found that the first-order model is superior to the zero-order model in representing the characteristics of the historical sequence as judged using frequency duration curves. There was no discernible difference between the model performance for first- and second-order models. There was no seasonal variation in the model performance, which contrasts with the optimum models identified using AIC and BIC estimates. It is concluded that caution is needed with the use of objective criteria for determining the optimum order of the Markov model and that the use of frequency duration curves can provide a robust alternative method of model identification. Comments are also made on the importance of record length and non-stationarity for model identification.
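The AIC-based choice of Markov order can be sketched as follows: estimate transition probabilities for each candidate order by maximum likelihood, then penalize the log-likelihood by the number of free parameters (2^k for a binary wet/dry chain of order k). A minimal sketch on a synthetic sequence, not the Nigerian rainfall data:

```python
import math
from collections import Counter

def markov_loglik(seq, order):
    """Maximized log-likelihood of a binary (0/1) sequence under a
    Markov chain of the given order, with MLE transition probabilities."""
    trans, hist = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx = tuple(seq[i - order:i])
        trans[(ctx, seq[i])] += 1
        hist[ctx] += 1
    return sum(c * math.log(c / hist[ctx]) for (ctx, _), c in trans.items())

def order_by_aic(seq, max_order=2):
    """Pick the order minimizing AIC = 2 * 2**k - 2 ln(L).
    (Sample size differs slightly by order; ignored in this sketch.)"""
    scores = {k: 2 * (2 ** k) - 2 * markov_loglik(seq, k)
              for k in range(max_order + 1)}
    return min(scores, key=scores.get)

# Strongly persistent sequence: wet days tend to follow wet days
seq = [0] * 10 + [1] * 10 + [0] * 10 + [1] * 10
best_order = order_by_aic(seq)
```

Here the persistence makes a first-order chain worth its extra parameters, while a second-order chain adds nothing, mirroring the paper's finding that order 1 usually suffices.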
Acceleration of the universe: a reconstruction of the effective equation of state
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-04-01
The present work is based upon a parametric reconstruction of the effective or total equation of state in a model for the universe with accelerated expansion. The constraints on the model parameters are obtained by maximum likelihood analysis using the supernova distance modulus data, observational Hubble data, baryon acoustic oscillation data and cosmic microwave background shift parameter data. For statistical comparison, the same analysis has also been carried out for the wCDM dark energy model. Different model selection criteria, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), give a clear indication that the reconstructed model is well consistent with the wCDM model. Both models (the weff(z) model and the wCDM model) have also been presented in the (q0, j0) parameter space. Tighter constraints on the present values of the dark energy equation of state parameter (wDE(z = 0)) and the cosmological jerk (j0) have been achieved for the reconstructed model.
Predicting road accidents: Structural time series approach
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-07-01
In this paper, a model for the occurrence of road accidents in Malaysia between 1970 and 2010 was developed, and the number of road accidents was predicted using the structural time series approach. The models were developed using a stepwise method, and the residuals of each step were analyzed. The accuracy of the model was assessed using the mean absolute percentage error (MAPE), and the best model was chosen based on the smallest Akaike information criterion (AIC) value. The structural time series approach found that the local linear trend model best represents the road accidents. This model allows the level and slope components to vary over time. In addition, this approach also provides useful information for improving the conventional time series method.
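The MAPE used above for accuracy checking has a one-line definition; a sketch:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent:
    (100/n) * sum(|actual - predicted| / |actual|)."""
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

# Illustrative accident counts vs. model predictions
err = mape([100, 200, 400], [110, 190, 400])  # 5.0 (percent)
```

MAPE measures forecast accuracy on the fitted series, while AIC trades fit against model complexity; the paper uses them for those complementary roles.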
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors caused by overlapping spots, this study proposes a classification method based on a Gaussian mixture model (GMM). The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features in this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
Power-law ansatz in complex systems: Excessive loss of information.
Tsai, Sun-Ting; Chang, Chin-De; Chang, Ching-Hao; Tsai, Meng-Xue; Hsu, Nan-Jung; Hong, Tzay-Ming
2015-12-01
The ubiquity of power-law relations in empirical data displays physicists' love of simple laws and uncovering common causes among seemingly unrelated phenomena. However, many reported power laws lack statistical support and mechanistic backings, not to mention discrepancies with real data are often explained away as corrections due to finite size or other variables. We propose a simple experiment and rigorous statistical procedures to look into these issues. Making use of the fact that the occurrence rate and pulse intensity of crumple sound obey a power law with an exponent that varies with material, we simulate a complex system with two driving mechanisms by crumpling two different sheets together. The probability function of the crumple sound is found to transit from two power-law terms to a bona fide power law as compaction increases. In addition to showing the vicinity of these two distributions in the phase space, this observation nicely demonstrates the effect of interactions to bring about a subtle change in macroscopic behavior and more information may be retrieved if the data are subject to sorting. Our analyses are based on the Akaike information criterion that is a direct measurement of information loss and emphasizes the need to strike a balance between model simplicity and goodness of fit. As a show of force, the Akaike information criterion also found the Gutenberg-Richter law for earthquakes and the scale-free model for a brain functional network, a two-dimensional sandpile, and solar flare intensity to suffer an excessive loss of information. They resemble more the crumpled-together ball at low compactions in that there appear to be two driving mechanisms that take turns occurring. PMID:26764792
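The statistical procedure described, fitting a power law by maximum likelihood and scoring it with AIC, can be sketched for the continuous one-parameter case (Clauset-style estimator; synthetic data, not the crumpling measurements):

```python
import math
import random

def powerlaw_alpha(data, xmin):
    """MLE of the continuous power-law exponent:
    alpha = 1 + n / sum(ln(x / xmin)) over x >= xmin."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

def powerlaw_aic(data, xmin):
    """AIC = 2*1 - 2 ln(L) for the one-parameter power-law fit,
    with density p(x) = ((alpha-1)/xmin) * (x/xmin)**(-alpha)."""
    tail = [x for x in data if x >= xmin]
    alpha = powerlaw_alpha(data, xmin)
    loglik = (len(tail) * math.log((alpha - 1) / xmin)
              - alpha * sum(math.log(x / xmin) for x in tail))
    return 2.0 - 2.0 * loglik

# Synthetic sample from a power law with alpha = 2.5 (inverse transform)
random.seed(1)
data = [(1.0 - random.random()) ** (-1.0 / 1.5) for _ in range(5000)]
alpha_hat = powerlaw_alpha(data, 1.0)  # close to 2.5
```

Competing models, such as the two-term power law the authors fit at low compaction, would be scored the same way with their own likelihoods and parameter counts, and the AIC difference measures the relative information loss.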
Heflin, Laura Elizabeth; Gibbs, Victoria K; Powell, Mickie L; Makowsky, Robert; Lawrence, Addison L; Lawrence, John M
2014-01-01
Small adult (19.50 ± 2.01 g wet weight) Lytechinus variegatus were fed eight formulated diets with different protein (12 to 36% dry weight as fed) and carbohydrate (21 to 39% dry weight) levels. Each sea urchin (n = 8 per treatment) was fed a daily ration of 1.5% of the average body weight of all individuals for 9 weeks. For each of eight physical growth responses, different mathematical models representing a priori hypotheses about dietary composition were compared using Akaike information criterion (AIC) scores. The AIC is one of many information-theoretic approaches that allows direct comparison of non-nested models with varying numbers of parameters. Dietary protein level and protein:energy ratio were the best models for prediction of test diameter increase. Dietary protein level was the best model of test with spines wet weight gain and test with spines dry matter production. When the Aristotle’s lantern was corrected for size of the test, there was an inverse relationship with dietary protein level. Log transformed lantern to test with spines index was also best associated with the dietary protein model. Dietary carbohydrate level was a poor predictor for growth parameters. However, the protein × carbohydrate interaction model was the best model of organic content (% dry weight) of the test without spines. These data suggest that there is a differential allocation of resources when dietary protein is limiting and the test with spines, but not the Aristotle’s lantern, is affected by availability of dietary nutrients. PMID:25431520
NASA Astrophysics Data System (ADS)
Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi
We investigate cross-correlations between typical Japanese stocks collected through the Yahoo!Japan website (http://finance.yahoo.co.jp/). By making use of multi-dimensional scaling (MDS) for the cross-correlation matrices, we draw two-dimensional scattered plots in which each point corresponds to each stock. To make a clustering for these data plots, we utilize the mixture of Gaussians to fit the data set to several Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to parameters in the mixture, we attempt to specify the best possible mixture of Gaussians. It might be naturally assumed that all the two-dimensional data points of stocks shrink into a single small region when some economic crisis takes place. The justification of this assumption is numerically checked for the empirical Japanese stock data, for instance, those around 11 March 2011.
Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.
Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua
2014-01-01
We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski's schemes with five and six data points, respectively. The adjusted coefficient of determination r², Akaike's information criterion (AIC), and geometric mean error ratio (GMER) were used to evaluate the model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of PSD models was affected by soil texture and classification of fraction schemes. The performance of PSD models also varied with clay content of soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best. PMID:25121108
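Two of the fit measures above have simple definitions; a sketch of adjusted r² and the geometric mean error ratio (GMER), under their usual formulations:

```python
import math

def adjusted_r2(r2, n, k):
    """Adjusted coefficient of determination for k predictors
    fitted to n observations."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def gmer(measured, modeled):
    """Geometric mean error ratio: exp(mean(ln(modeled / measured))).
    A value of 1 indicates no systematic bias; >1 overestimation,
    <1 underestimation."""
    logs = [math.log(p / m) for m, p in zip(measured, modeled)]
    return math.exp(sum(logs) / len(logs))
```

Adjusted r² discounts goodness of fit for model size much as AIC does, while GMER flags systematic over- or underestimation that r² alone would miss.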
A model of multisecond timing behaviour under peak-interval procedures.
Hasegawa, Takayuki; Sakata, Shogo
2015-04-01
In this study, the authors developed a fundamental theory of interval timing behaviour, inspired by the learning-to-time (LeT) model and the scalar expectancy theory (SET) model, and based on quantitative analyses of such timing behaviour. Our experiments used the peak-interval procedure with rats. The proposed model of timing behaviour comprises clocks, a regulator, a mixer, a response, and memory. Using our model, we calculated the basic clock speeds indicated by the subjects' behaviour under such peak procedures. In this model, the scalar property can be defined as a kind of transposition, which can then be measured quantitatively. The Akaike information criterion (AIC) values indicated that the current model fit the data slightly better than did the SET model. Our model may therefore provide a useful addition to SET for the analysis of timing behaviour. PMID:25539685
Particle-Size Distribution Models for the Conversion of Chinese Data to FAO/USDA System
Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua
2014-01-01
We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski's schemes with five and six data points, respectively. The adjusted coefficient of determination (R²), Akaike's information criterion (AIC), and geometric mean error ratio (GMER) were used to evaluate the model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of PSD models was affected by soil texture and classification of fraction schemes. The performance of PSD models also varied with clay content of soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best. PMID:25121108
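As a minimal sketch of how such comparisons work: for models fitted by least squares with Gaussian errors, AIC can be computed directly from the residual sum of squares. The RSS values below are illustrative only, not taken from the study.

```python
import math

def aic_from_rss(rss, n, k):
    """AIC for a model fitted by least squares with Gaussian errors:
    AIC = n*ln(RSS/n) + 2k, where n is the number of data points and
    k the number of fitted parameters (additive constants dropped)."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical residual sums of squares for two PSD models fitted
# to the same n = 6 fraction points.
aic_a = aic_from_rss(rss=0.002, n=6, k=4)
aic_b = aic_from_rss(rss=0.010, n=6, k=4)
best = "model A" if aic_a < aic_b else "model B"  # lower AIC is preferred
```

With equal parameter counts the comparison reduces to comparing RSS, but the 2k term matters as soon as the candidate models differ in complexity.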
Bivariate copula in fitting rainfall data
NASA Astrophysics Data System (ADS)
Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui
2014-07-01
The use of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is more suitable than standard bivariate modelling, as copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations: Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are from rain gauge stations located in the southern part of Peninsular Malaysia, covering the period from 1980 to 2011. The goodness-of-fit test in this study is based on the Akaike information criterion (AIC).
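A minimal sketch of AIC-based copula comparison, using the Clayton family with an illustrative dependence parameter and hypothetical pseudo-observations (none of the values come from the study):

```python
import math

def clayton_logpdf(u, v, theta):
    """Log-density of the bivariate Clayton copula (theta > 0)."""
    return (math.log(1 + theta)
            - (1 + theta) * (math.log(u) + math.log(v))
            - (2 + 1 / theta) * math.log(u**-theta + v**-theta - 1))

# Hypothetical pseudo-observations (ranks/n) from two rain gauge stations.
pairs = [(0.10, 0.12), (0.30, 0.28), (0.50, 0.55), (0.70, 0.66), (0.90, 0.88)]

theta = 2.0  # illustrative dependence parameter (in practice, fitted by MLE)
loglik = sum(clayton_logpdf(u, v, theta) for u, v in pairs)
aic_clayton = 2 * 1 - 2 * loglik  # one fitted parameter
aic_independence = 0.0            # independence copula: density 1, no parameters
```

For these positively dependent pairs the Clayton model attains a lower AIC than the independence benchmark; the same computation, repeated per family, ranks the six candidate copulas.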
NASA Astrophysics Data System (ADS)
Iwata, Takaki; Katao, Hiroshi
2006-04-01
We study the correlation between the phase of the moon and the occurrence of microearthquakes in the Tamba region, close to the fault of the 1995 Kobe earthquake. The existence of the correlation during the two-year period following the Kobe earthquake was suggested in a previous study. First, in this study, we investigate the statistical significance of such correlation. Using point-process modeling and AIC (Akaike Information Criterion), we confirm that the existence of the correlation is statistically significant. Second, we investigate the temporal variation of the correlation during the four-year period following the Kobe earthquake. The result of the second analysis indicates that the correlation is strongest just after the Kobe earthquake and that it then becomes weaker year by year.
A K-BKZ Formulation for Soft-Tissue Viscoelasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Diethelm, Kai
2005-01-01
A viscoelastic model of the K-BKZ (Kaye 1962; Bernstein et al. 1963) type is developed for isotropic biological tissues, and applied to the fat pad of the human heel. To facilitate this pursuit, a class of elastic solids is introduced through a novel strain-energy function whose elements possess strong ellipticity, and therefore lead to stable material models. The standard fractional-order viscoelastic (FOV) solid is used to arrive at the overall elastic/viscoelastic structure of the model, while the elastic potential via the K-BKZ hypothesis is used to arrive at the tensorial structure of the model. Candidate sets of functions are proposed for the elastic and viscoelastic material functions present in the model, including a regularized fractional derivative that was determined to be the best. The Akaike information criterion (AIC) is advocated for performing multi-model inference, enabling an objective selection of the best material function from within a candidate set.
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
Seasonal fractional integrated time series models for rainfall data in Nigeria
NASA Astrophysics Data System (ADS)
Yaya, Olaoluwa S.; Fashae, Olutoyin A.
2015-04-01
Rainfall variability, seasonality and extremity have many consequences for planning and decision making in every sphere of human endeavour, especially in Nigeria, where the majority of agricultural practice and planning depends on rainfed agriculture. For this reason, an extensive understanding of the rainfall regime is an important prerequisite for such planning. We take a time series approach. Seasonality and the possibility of long-term dependence in rainfall data are considered, and these have significant effects in explaining the distribution of rainfall in each state of the six geopolitical zones of Nigeria. The estimated seasonal autoregressive fractionally integrated moving average (SARFIMA) model for each of the six rainfall zones was found to perform better in predicting rainfall distribution than the corresponding seasonal autoregressive moving average (SARMA) model in terms of minimum Akaike information criterion (AIC) and other model diagnostic measures.
NASA Astrophysics Data System (ADS)
Liu, Qi-Chun; Li, Tie-Fu; Luo, Xiao-Qing; Zhao, Hu; Xiong, Wei; Zhang, Ying-Shan; Chen, Zhen; Liu, J. S.; Chen, Wei; Nori, Franco; Tsai, J. S.; You, J. Q.
2016-05-01
Electromagnetically induced transparency (EIT) has been realized in atomic systems, but fulfilling the EIT conditions for artificial atoms made from superconducting circuits is a more difficult task. Here we report an experimental observation of the EIT in a tunable three-dimensional transmon by probing the cavity transmission. To fulfill the EIT conditions, we tune the transmon to adjust its damping rates by utilizing the effect of the cavity on the transmon states. From the experimental observations, we clearly identify the EIT and Autler-Townes splitting (ATS) regimes as well as the transition regime in between. Also, the experimental data demonstrate that the threshold ΩAIC determined by the Akaike information criterion can describe the EIT-ATS transition better than the threshold ΩEIT given by the EIT theory.
Bayesian decision tree for the classification of the mode of motion in single-molecule trajectories.
Türkcan, Silvan; Masson, Jean-Baptiste
2013-01-01
Membrane proteins move in heterogeneous environments with spatially (sometimes temporally) varying friction and with biochemical interactions with various partners. It is important to reliably distinguish different modes of motion to improve our knowledge of the membrane architecture and to understand the nature of interactions between membrane proteins and their environments. Here, we present an analysis technique for single-molecule tracking (SMT) trajectories that can determine the preferred model of motion that best matches observed trajectories. The method is based on Bayesian inference to calculate the posterior probability of an observed trajectory according to a certain model. Information theory criteria, such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and modified AIC (AICc), are used to select the preferred model. The considered group of models includes free Brownian motion, and confined motion in 2nd- or 4th-order potentials. We determine the best information criteria for classifying trajectories. We tested its limits through simulations matching large sets of experimental conditions and we built a decision tree. This decision tree first uses the BIC to distinguish between free Brownian motion and confined motion. In a second step, it classifies the confining potential further using the AIC. We apply the method to experimental Clostridium perfringens ε-toxin (CPεT) receptor trajectories to show that these receptors are confined by a spring-like potential. An adaptation of this technique was applied on a sliding window in the temporal dimension along the trajectory. We applied this adaptation to experimental CPεT trajectories that lose confinement due to disaggregation of confining domains. This new technique adds another dimension to the discussion of SMT data. The mode of motion of a receptor might hold more biologically relevant information than the diffusion
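The three criteria used in this decision tree share a common form; a minimal sketch, with hypothetical log-likelihoods and parameter counts standing in for actual trajectory fits:

```python
import math

def aic(loglik, k):
    """Akaike information criterion."""
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    """Small-sample corrected AIC (requires n > k + 1)."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    """Bayesian information criterion."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits of two motion models to one trajectory of n = 50 steps:
# free Brownian motion (k = 1) vs. confinement in a 2nd-order potential (k = 3).
n = 50
crit_free = bic(loglik=-120.0, k=1, n=n)
crit_conf = bic(loglik=-105.0, k=3, n=n)
preferred = "confined" if crit_conf < crit_free else "free"
```

BIC penalizes extra parameters more heavily than AIC once ln(n) > 2, which is why the decision tree can reasonably assign the coarse free-vs-confined split to BIC and the finer potential-order choice to AIC.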
MMA, A Computer Code for Multi-Model Analysis
Eileen P. Poeter and Mary C. Hill
2007-08-20
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than do the other methods, which makes sense in many situations.
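Model ranking and model-averaged estimates of the kind MMA reports can be sketched from AIC values alone via Akaike weights; the AIC values and per-model parameter estimates below are hypothetical:

```python
import math

def akaike_weights(aic_values):
    """Evidence weights w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2),
    where Δ_i = AIC_i - min(AIC)."""
    best = min(aic_values)
    rel = [math.exp(-(a - best) / 2) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values and estimates of one parameter from three models.
aics = [100.0, 101.2, 104.5]
estimates = [2.0, 2.4, 3.1]
weights = akaike_weights(aics)
averaged = sum(w * e for w, e in zip(weights, estimates))  # model-averaged estimate
```

The weights sum to one and can be read as posterior model probabilities under the Kullback-Leibler-based criteria; the averaged estimate is pulled toward the best-supported models.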
Ternès, Nils; Rotolo, Federico; Michiels, Stefan
2016-07-10
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method has often a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to the reduction of the FDR without large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited raise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26970107
Ercanli, İlker; Kahriman, Aydın
2015-03-01
We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for the stand AGB values, including stand structural diversity indices and some stand attributes. The AGB prediction model, including only stand attributes, accounted for 85% of the total variance in AGB (R²) with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of the stand structural diversity into the model structure, considerable improvement was observed in statistical accuracy, including 97.5% of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing the stand structural diversity can be employed as significant independent variables to predict the AGB production of the Scots pine stand. Further, including the stand diversity indices in the AGB prediction model with the stand attributes provided important predictive contributions in estimating the total variance in AGB. PMID:25663395
An improved automatic time-of-flight picker for medical ultrasound tomography
Li, Cuiping; Huang, Lianjie; Duric, Nebojsa; Zhang, Haijiang; Rowe, Charlotte
2014-01-01
Objective and motivation: Time-of-flight (TOF) tomography used by a clinical ultrasound tomography device can efficiently and reliably produce sound-speed images of the breast for cancer diagnosis. Accurate picking of TOFs of transmitted ultrasound signals is extremely important to ensure high-resolution and high-quality ultrasound sound-speed tomograms. Since manual picking is time-consuming for large datasets, we developed an improved automatic TOF picker based on the Akaike information criterion (AIC), as described in this paper. Methods: We make use of an approach termed multi-model inference (model averaging), based on the calculated AIC values, to improve the accuracy of TOF picks. By using multi-model inference, our picking method incorporates all the information near the TOF of ultrasound signals. Median filtering and reciprocal pair comparison are also incorporated in our AIC picker to effectively remove outliers. Results: We validate our AIC picker using synthetic ultrasound waveforms, and demonstrate that our automatic TOF picker can accurately pick TOFs in the presence of random noise with absolute amplitudes up to 80% of the maximum absolute signal amplitude. We apply the new method to 1160 in vivo breast ultrasound waveforms, and compare the picked TOFs with manual picks and amplitude threshold picks. The mean value and standard deviation between our TOF picker and manual picking are 0.4 μs and 0.29 μs, while for the amplitude threshold picker the values are 1.02 μs and 0.9 μs, respectively. Tomograms for in vivo breast data with high signal-to-noise ratio (SNR) (~25 dB) and low SNR (~18 dB) clearly demonstrate that our AIC picker is much less sensitive to the SNRs of the data, compared to the amplitude threshold picker. Discussion and conclusions: The picking routine developed here is aimed at determining reliable quantitative values, necessary for adding diagnostic information to our clinical ultrasound tomography device – CURE. It has been
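An AIC-based onset picker of the general kind described here can be sketched as follows. This is the classic two-segment variance formulation (widely used for arrival-time picking), not necessarily the exact CURE implementation; the synthetic trace and onset index are illustrative.

```python
import math
import random

def aic_pick(x):
    """Pick an onset index by minimising the two-segment AIC
    AIC(t) = t*ln(var(x[:t])) + (n-t)*ln(var(x[t:])) over t.
    The minimum marks the change from noise to signal statistics."""
    n = len(x)
    def var(seg):
        m = sum(seg) / len(seg)
        return sum((s - m) ** 2 for s in seg) / len(seg)
    best_t, best_aic = None, float("inf")
    for t in range(2, n - 2):
        a = t * math.log(var(x[:t])) + (n - t) * math.log(var(x[t:]))
        if a < best_aic:
            best_t, best_aic = t, a
    return best_t

# Synthetic trace: low-amplitude noise, then a higher-amplitude arrival at i = 100.
random.seed(7)
trace = [random.gauss(0, 0.1) for _ in range(100)] + \
        [random.gauss(0, 1.0) for _ in range(100)]
onset = aic_pick(trace)
```

With a variance ratio of 100 between the noise and signal segments, the AIC minimum falls within a few samples of the true onset; multi-model averaging over AIC values near the minimum, as the paper describes, refines this point estimate further.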
Carstens, P D; Sharifi, A R; Brand, T S; Hoffman, L C
2014-01-01
1. Feeding costs are the largest expense in an ostrich production system, and protein is one of the more expensive components of the diet. This study evaluated the growth response of ostrich chicks on diets containing different concentrations of protein (amino acids). The diets were formulated to contain three concentrations of protein (one diet with 20% less protein than the conventional concentration, L; one diet with the conventional concentration of protein, M and one diet with 20% more protein than the conventional concentration, H) for each of the phase diets. The phase diets were pre-starter, starter, grower and finisher. 2. This study includes the analysis of ostrich body weight (BW) by modelling growth with linear polynomial and non-linear functions for all the data not separated for treatments. In total, 3378 BW recordings of 90 animals were collected weekly from hatch (d 0) to 287 d (41 weeks) of age. 3. Seven non-linear growth models and three linear polynomial models were fitted to the data. The growth functions were compared by using Akaike's information criterion (AIC). For the non-linear models, the Bridges and Janoschek models had the lowest AIC values for the H treatment, while the Richards curve had the lowest value for M and the von Bertalanffy for the L treatment. 4. For the linear polynomial models, the linear polynomial of the third degree had the lowest AIC values for all three treatments, thus making it the most suitable model for the data; therefore, the predictions of this model were used to interpret the growth data. Significant differences were found between treatments for growth data. 5. The results from this study can aid in describing the growth of ostriches subjected to optimum feeding conditions. This information can also be used in research when modelling the nutrient requirements of growing birds. PMID:25132424
Sakurai, Shinichiro; Takashima, Hiroaki; Waseda, Katsuhisa; Gosho, Masahiko; Kurita, Akiyoshi; Ando, Hirohiko; Maeda, Kazuyuki; Suzuki, Akihiro; Fujimoto, Masanobu; Amano, Tetsuya
2015-10-01
The aim of this study was to determine the correlation between the fractional flow reserve (FFR) values and volumetric intravascular ultrasound (IVUS) parameters derived from classic gray-scale IVUS and integrated backscatter (IB)-IVUS, taking into account known confounding factors. Patients with unstable angina pectoris with the frequent development of vulnerable plaques often showed the discrepancy between the FFR value and the quantitative coronary angiography findings. Our target population was 107 consecutive subjects with 114 isolated lesions who were scheduled for elective coronary angiography. The FFR was calculated as the mean distal coronary pressure divided by the mean aortic pressure during maximal hyperemia. Various volumetric parameters such as lipid plaque volume (LPV) and percentage of LPV (%LPV) were measured using IB-IVUS. Simple and multivariate linear regression analysis was employed to evaluate the correlation between FFR values and various classic gray-scale IVUS and IB-IVUS parameters. The Akaike information criterion (AIC) was used to compare the goodness of fit of each model. Both the %LPV (r = -0.24; p = 0.01) and LPV (r = -0.40; p < 0.01) were significantly correlated with the FFR value. Only the LPV (AIC = -147.0; p = 0.006) and %LPV (AIC = -152.9; p = 0.005) proved to be independent predictors for the FFR value even after the adjustment of known confounding factors. The volumetric assessment by IB-IVUS could provide better information in terms of the relationship between plaque morphology and the FFR values as compared to the classic IVUS 2-dimensional gray-scale analysis. PMID:26129657
Assessing bimodality to detect the presence of a dual cognitive process.
Freeman, Jonathan B; Dale, Rick
2013-03-01
Researchers have long sought to distinguish between single-process and dual-process cognitive phenomena, using responses such as reaction times and, more recently, hand movements. Analysis of a response distribution's modality has been crucial in detecting the presence of dual processes, because they tend to introduce bimodal features. Rarely, however, have bimodality measures been systematically evaluated. We carried out tests of readily available bimodality measures that any researcher may easily employ: the bimodality coefficient (BC), Hartigan's dip statistic (HDS), and the difference in Akaike's information criterion between one-component and two-component distribution models (AIC(diff)). We simulated distributions containing two response populations and examined the influences of (1) the distances between populations, (2) proportions of responses, (3) the amount of positive skew present, and (4) sample size. Distance always had a stronger effect than did proportion, and the effects of proportion greatly differed across the measures. Skew biased the measures by increasing bimodality detection, in some cases leading to anomalous interactive effects. BC and HDS were generally convergent, but a number of important discrepancies were found. AIC(diff) was extremely sensitive to bimodality and identified nearly all distributions as bimodal. However, all measures served to detect the presence of bimodality in comparison to unimodal simulations. We provide a validation with experimental data, discuss methodological and theoretical implications, and make recommendations regarding the choice of analysis. PMID:22806703
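The bimodality coefficient can be sketched from sample moments. Note this is a plain moment-estimator sketch; published formulations (e.g., the SAS version evaluated in such studies) typically use bias-corrected skewness and kurtosis, which differ slightly for small n. The example samples are illustrative.

```python
def bimodality_coefficient(x):
    """BC = (g1² + 1) / (g2 + 3(n-1)²/((n-2)(n-3))), where g1 is sample
    skewness and g2 sample excess kurtosis (moment estimators here).
    Values above ~5/9 ≈ 0.555 are conventionally taken to suggest bimodality."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    g1 = m3 / m2 ** 1.5
    g2 = m4 / m2 ** 2 - 3
    return (g1 ** 2 + 1) / (g2 + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

bimodal = [0.0] * 50 + [10.0] * 50            # two well-separated response populations
unimodal = [-2, -1, -1, 0, 0, 0, 0, 1, 1, 2]  # peaked, symmetric sample
```

Because skewness enters squared in the numerator, positively skewed unimodal data can push BC past the threshold, which is exactly the bias the article reports.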
Mejía-Falla, Paola A.; Cortés, Enric; Navia, Andrés F.; Zapata, Fernando A.
2014-01-01
We examined the age and growth of Urotrygon rogersi on the Colombian coast of the Eastern Tropical Pacific Ocean by directly estimating age using vertebral centra. We verified annual deposition of growth increments with marginal increment analysis. Eight growth curves were fitted to four data sets defined on the basis of the reproductive cycle (unadjusted or adjusted for age at first band) and size variables (disc width or total length). Model performance was evaluated using Akaike's Information Criterion (AIC), AIC weights and multi-model inference criteria. A two-phase growth function with adjusted age provided the best description of growth for females (based on five parameters, DW∞ = 20.1 cm, k = 0.22 yr–1) and males (based on four and five parameters, DW∞ = 15.5 cm, k = 0.65 yr–1). Median maturity of female and male U. rogersi is reached very fast (mean ± SE = 1.0 ± 0.1 year). This is the first age and growth study for a species of the genus Urotrygon and results indicate that U. rogersi attains a smaller maximum size and has a shorter lifespan and lower median age at maturity than species of closely related genera. These life history traits are in contrast with those typically reported for other elasmobranchs. PMID:24776963
Simple and Efficient Algorithm for Improving the MDL Estimator of the Number of Sources
Guimarães, Dayan A.; de Souza, Rausley A. A.
2014-01-01
We propose a simple algorithm for improving the MDL (minimum description length) estimator of the number of sources of signals impinging on multiple sensors. The algorithm is based on the norms of vectors whose elements are the normalized and nonlinearly scaled eigenvalues of the received signal covariance matrix and the corresponding normalized indexes. Such norms are used to discriminate the largest eigenvalues from the remaining ones, thus allowing for the estimation of the number of sources. The MDL estimate is used as the input data of the algorithm. Numerical results unveil that the so-called norm-based improved MDL (iMDL) algorithm can achieve performances that are better than those achieved by the MDL estimator alone. Comparisons are also made with the well-known AIC (Akaike information criterion) estimator and with a recently-proposed estimator based on the random matrix theory (RMT). It is shown that our algorithm can also outperform the AIC and the RMT-based estimator in some situations. PMID:25330050
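The eigenvalue-based MDL estimator that the proposed algorithm takes as input can be sketched in its standard (Wax-Kailath) form; the eigenvalues and snapshot count below are illustrative, not from the paper:

```python
import math

def mdl_estimate(eigs, n_snapshots):
    """MDL estimate of the number of sources from the descending
    eigenvalues of the sample covariance matrix: for each candidate k,
    compare the geometric and arithmetic means of the p-k smallest
    eigenvalues (equal only if the noise floor is flat) plus a
    complexity penalty, and return the minimising k."""
    p = len(eigs)
    best_k, best_mdl = 0, float("inf")
    for k in range(p):
        tail = eigs[k:]
        arith = sum(tail) / len(tail)
        geo = math.exp(sum(math.log(e) for e in tail) / len(tail))
        mdl = (-n_snapshots * len(tail) * math.log(geo / arith)
               + 0.5 * k * (2 * p - k) * math.log(n_snapshots))
        if mdl < best_mdl:
            best_k, best_mdl = k, mdl
    return best_k

# Illustrative spectrum: two dominant source eigenvalues over a flat
# noise floor, estimated from N = 1000 snapshots.
eigs = [10.0, 8.0, 1.0, 1.0, 1.0, 1.0]
k_hat = mdl_estimate(eigs, 1000)
```

The AIC variant replaces the 0.5·k(2p-k)·ln(N) penalty with k(2p-k), which penalizes complexity less and tends to overestimate the source count; the norm-based post-processing in the paper then refines whichever estimate is supplied.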
Pharmacokinetic Analysis of 64Cu-ATSM Dynamic PET in Human Xenograft Tumors in Mice
Li, Fan; Jørgensen, Jesper Tranekjær; Madsen, Jacob; Kjaer, Andreas
2015-01-01
The aim of this study was to evaluate the feasibility to perform voxel-wise kinetic modeling on datasets obtained from tumor-bearing mice that underwent dynamic PET scans with 64Cu-ATSM and extract useful physiological parameters. Methods: Tumor-bearing mice underwent 90-min dynamic PET scans with 64Cu-ATSM and CT scans with contrast. Irreversible and reversible two-tissue compartment models were fitted to time activity curves (TACs) obtained from whole tumor volumes and compared using the Akaike information criterion (AIC). Based on voxel-wise pharmacokinetic analysis, parametric maps of model rate constants k1, k3 and Ki were generated and compared to 64Cu-ATSM uptake. Results: Based on the AIC, an irreversible two-tissue compartment model was selected for voxel-wise pharmacokinetic analysis. Of the extracted parameters, k1 (~perfusion) showed a strong correlation with early tracer uptake (mean spearman R = 0.88) 5 min post injection (pi). Moreover, positive relationships were found between late tracer uptake (90 min pi) and both k3 and the net influx rate constant, Ki (mean spearman R = 0.56 and R = 0.86; respectively). Conclusion: This study shows the feasibility to extract relevant parameters from voxel-wise pharmacokinetic analysis to be used for preclinical validation of 64Cu-ATSM as a hypoxia-specific PET tracer. PMID:26854145
Double point source W-phase inversion: Real-time implementation and automated model selection
NASA Astrophysics Data System (ADS)
Nealy, Jennifer L.; Hayes, Gavin P.
2015-12-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
Archbald-Pannone, Laurie R.; McMurry, Timothy L.; Guerrant, Richard L.; Warren, Cirle A.
2015-01-01
Background: Clostridium difficile infection (CDI) severity has increased, especially among hospitalized elderly. We evaluated clinical factors to predict mortality following CDI. Methods: We collected data from inpatients diagnosed with CDI at a US academic medical center (HSR-IRB# 13630). We evaluated age, Charlson comorbidity index (CCI), admission from a long-term care facility (LTCF), intensive care unit (ICU) at time of diagnosis, white blood cell count (WBC), blood urea nitrogen (BUN), low body mass index (BMI), and delirium as possible predictors. A parsimonious predictive model was chosen using the Akaike information criterion (AIC) and a best-subsets model selection algorithm. Area under the ROC curve was used to assess the model's discriminative performance, with AIC as the selection criterion over all subsets to measure fit and control for over-fitting. Results: From 362 subjects, the selected model included CCI, WBC, BUN, ICU, and delirium. The logistic regression coefficients were converted to a points scale and calibrated so that each unit on the CCI contributed 2 points, ICU contributed 5, unit of WBC (natural log scale) contributed 3, unit of BUN contributed 5, and delirium contributed 11. Discussion: Our model shows substantial ability to predict short-term mortality in patients hospitalized with CDI. Conclusion: Patients who were diagnosed in the ICU and developed delirium are at highest risk for dying within 30 days of CDI diagnosis. PMID:25920706
Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.
2013-01-01
Habitat suitability maps are commonly created by modeling a species' environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species' niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used the modified area under the curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values than all approaches except Maxent, and better AIC values overall. HEMI created a model with only ten parameters, while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.
Costa, L R F; Barthem, R B; Albernaz, A L; Bittencourt, M M; Villacorta-Corrêa, M A
2013-05-01
The tambaqui, Colossoma macropomum, is one of the most commercially valuable Amazonian fish species, and in the floodplains of the region, they are caught in both rivers and lakes. Most growth studies on this species to date have adjusted only one growth model, the von Bertalanffy, without considering its possible uncertainties. In this study, four different models (von Bertalanffy, Logistic, Gompertz and the general model of Schnute-Richards) were adjusted to a data set of fish caught within lakes from the middle Solimões River. These models were adjusted by non-linear equations, using the sample size of each age class as its weight. The adjustment evaluation of each model was based on the Akaike Information Criterion (AIC), the variation of AIC between the models (Δi) and the evidence weights (wi). Both the Logistic (Δi = 0.0) and Gompertz (Δi = 1.12) models were supported by the data, but neither of them was clearly superior (wi, respectively 52.44 and 29.95%). Thus, we propose the use of an averaged model to estimate the asymptotic length (L∞). The averaged model, based on the Logistic and Gompertz models, resulted in an estimate of L∞ = 90.36, indicating that the tambaqui would take approximately 25 years to reach average size. PMID:23917568
Aydin, Zeynep; Marcussen, Thomas; Ertekin, Alaattin Selcuk; Oxelman, Bengt
2014-01-01
Coalescent-based inference of phylogenetic relationships among species takes into account gene tree incongruence due to incomplete lineage sorting, but for such methods to make sense, species have to be correctly delimited. Because alternative assignments of individuals to species result in different parametric models, model selection methods can be applied to optimise the model of species classification. In a Bayesian framework, Bayes factors (BF), based on marginal likelihood estimates, can be used to test a range of possible classifications for the group under study. Here, we explore BF and the Akaike Information Criterion (AIC) to discriminate between different species classifications in the flowering plant lineage Silene sect. Cryptoneurae (Caryophyllaceae). We estimated marginal likelihoods for different species classification models via the Path Sampling (PS), Stepping Stone sampling (SS), and Harmonic Mean Estimator (HME) methods implemented in BEAST. To select among alternative species classification models, a posterior simulation-based analogue of the AIC computed through Markov chain Monte Carlo analysis (AICM) was also applied. The results are compared to outcomes from the software BP&P. Our results agree with another recent study in finding that marginal likelihood estimates from the PS and SS methods are useful for comparing different species classifications, and strongly support the recognition of the newly described species S. ertekinii. PMID:25216034
An Investigation of State-Space Model Fidelity for SSME Data
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2008-01-01
In previous studies, a variety of unsupervised anomaly detection techniques were applied to SSME (Space Shuttle Main Engine) data. The observed results indicated that the identification of certain anomalies was specific to the algorithmic method under consideration. For this reason, one of the follow-on goals of those investigations was to build an architecture supporting the best capabilities of all algorithms. We appeal to that goal here by investigating a cascade, serial architecture for the best performing and most suitable candidates from previous studies. As a precursor to a formal ROC (Receiver Operating Characteristic) curve analysis for validation of the resulting anomaly detection algorithms, our primary focus here is to investigate model fidelity as measured by variants of the AIC (Akaike Information Criterion) for state-space based models. We show that placing constraints on a state-space model during or after training introduces a modest level of suboptimality. Furthermore, we compare the fidelity of all candidate models, including those embodying the cascade, serial architecture. We make recommendations on the most suitable candidates for application to subsequent anomaly detection studies as measured by AIC-based criteria.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique that can be efficiently implemented in real time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion, with centroid locations fixed at the single source solution location, can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed, with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to accurately select the most appropriate model, and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
Bermudez, Eduardo B.; Klerman, Elizabeth B.; Czeisler, Charles A.; Cohen, Daniel A.; Wyatt, James K.; Phillips, Andrew J. K.
2016-01-01
Sleep restriction causes impaired cognitive performance that can result in adverse consequences in many occupational settings. Individuals may rely on self-perceived alertness to decide if they are able to adequately perform a task. It is therefore important to determine the relationship between an individual’s self-assessed alertness and their objective performance, and how this relationship depends on circadian phase, hours since awakening, and cumulative lost hours of sleep. Healthy young adults (aged 18–34) completed an inpatient schedule that included forced desynchrony of sleep/wake and circadian rhythms with twelve 42.85-hour “days” and either a 1:2 (n = 8) or 1:3.3 (n = 9) ratio of sleep-opportunity:enforced-wakefulness. We investigated whether subjective alertness (visual analog scale), circadian phase (melatonin), hours since awakening, and cumulative sleep loss could predict objective performance on the Psychomotor Vigilance Task (PVT), an Addition/Calculation Test (ADD) and the Digit Symbol Substitution Test (DSST). Mathematical models that allowed nonlinear interactions between explanatory variables were evaluated using the Akaike Information Criterion (AIC). Subjective alertness was the single best predictor of PVT, ADD, and DSST performance. Subjective alertness alone, however, was not an accurate predictor of PVT performance. The best AIC scores for PVT and DSST were achieved when all explanatory variables were included in the model. The best AIC score for ADD was achieved with circadian phase and subjective alertness variables. We conclude that subjective alertness alone is a weak predictor of objective vigilant or cognitive performance. Predictions can, however, be improved by knowing an individual’s circadian phase, current wake duration, and cumulative sleep loss. PMID:27019198
NASA Astrophysics Data System (ADS)
He, Anhua; Singh, Ramesh P.; Sun, Zhaohua; Ye, Qing; Zhao, Gang
2016-05-01
The earth tide, atmospheric pressure, precipitation and earthquakes all influence water well levels; earthquakes in particular have a strong impact, and anomalous co-seismic changes in groundwater levels have been observed. In this paper, we have used four different models, simple linear regression (SLR), multiple linear regression (MLR), principal component analysis (PCA) and partial least squares (PLS), to compute the atmospheric pressure and earth tidal effects on water level. Furthermore, we have used the Akaike information criterion (AIC) to study the performance of the various models. Based on the lowest AIC and sum of squares for error values, the best estimate of the effects of atmospheric pressure and earth tide on water level is obtained with the MLR model. However, the MLR model does not account for multicollinearity between inputs; as a result, its atmospheric pressure and earth tidal response coefficients fail to reflect the mechanisms associated with the groundwater level fluctuations. Among the models that resolve the serious multicollinearity of the inputs, the PLS model shows the minimum AIC value, and its atmospheric pressure and earth tidal response coefficients closely match the observations. The atmospheric pressure and earth tidal response coefficients are found to be sensitive to the stress-strain state, based on the observed data for the period 1 April to 8 June 2008 from the Chuan 03# well. The transient enhancement of the porosity of the rock mass around the Chuan 03# well associated with the Wenchuan earthquake (Mw = 7.9, 12 May 2008), which returned to its original pre-seismic level after 13 days, indicates that the co-seismic sharp rise in well water level could have been induced by static stress change rather than by the development of new fractures.
A Test of the DSM-5 Severity Scale for Alcohol Use Disorder
Fazzino, Tera L.; Rose, Gail L.; Burt, Keith B.; Helzer, John E.
2014-01-01
BACKGROUND For the DSM-5-defined alcohol use disorder (AUD) diagnosis, a tri-categorized scale that designates mild, moderate, and severe AUD was selected over a fully dimensional scale to represent AUD severity. The purpose of this study was to test whether the DSM-5-defined AUD severity measure was as proficient a predictor of alcohol use following a brief intervention as a fully dimensional scale. METHODS Heavy drinking primary care patients (N=246) received a physician-delivered brief intervention (BI), and then reported daily alcohol consumption for six months using an Interactive Voice Response (IVR) system. The dimensional AUD measure we constructed was a summation of all AUD criteria met at baseline (mean = 6.5; SD = 2.5). A multi-model inference technique was used to determine whether the DSM-5 tri-categorized severity measure or the dimensional approach would provide a more precise prediction of change in weekly alcohol consumption following a BI. RESULTS The Akaike information criterion (AIC) for the dimensional AUD model (AIC=7623.88) was four points lower than for the tri-categorized model (AIC=7627.88), and weight-of-evidence calculations indicated an 88% likelihood that the dimensional model was the better approximating model. The dimensional model significantly predicted change in alcohol consumption (p = .04), whereas the DSM-5 tri-categorized model did not. CONCLUSION The dimensional AUD measure was superior, detecting treatment effects that were not apparent with the tri-categorized severity model defined by the DSM-5. We recommend using a dimensional measure for determining AUD severity. PMID:24893979
A Potential Role for Allostatic Load in Preeclampsia
Hux, Vanessa J.; Roberts, James M.
2014-01-01
Objective Preeclampsia is a multisystemic disorder of pregnancy associated with maternal and fetal complications as well as later-life cardiovascular disease. Its exact cause is not known. We developed a pregnancy-specific multisystem index score of physiologic risk and chronic stress, allostatic load (AL), early in pregnancy. Our objective was to determine whether AL measured early in pregnancy was associated with increased odds of developing preeclampsia. Methods Data were from a single-center, prospectively collected database in a 1:2 individually matched case-control study of women enrolled at <15 weeks gestation. We matched 38 preeclamptic cases to 75 uncomplicated, term deliveries on age, parity, and lifetime smoking status. AL was determined using 9 measures of cardiovascular, metabolic, and inflammatory function. Cases and matched controls were compared using conditional logistic regression. We compared the model's association with preeclampsia to that of obesity, a well-known risk factor for preeclampsia, by assessing goodness-of-fit with the Akaike information criterion (AIC), where a difference >1-2 suggests better fit. Results Early pregnancy AL was higher in women with preeclampsia (1.25 +/- 0.68 vs. 0.83 +/- 0.62, p=0.002); women with higher AL had increasing odds of developing preeclampsia (OR 2.91, 95% CI 1.50-5.65). The difference between the AIC for AL and for obesity was >2 (AIC 74.4 vs. 84.4), indicating AL had a stronger association with preeclampsia. Conclusion Higher allostatic load in early pregnancy is associated with increasing odds of preeclampsia. This work supports a possible role of multiple maternal systems and chronic stress early in pregnancy in the development of preeclampsia. PMID:24939173
Spatial Distribution of Black Bear Incident Reports in Michigan.
McFadden-Hiller, Jamie E; Beyer, Dean E; Belant, Jerrold L
2016-01-01
Interactions between humans and carnivores have existed for centuries due to competition for food and space. American black bears are increasing in abundance, and populations are expanding geographically in many portions of their range, including areas that are also increasing in human density, often resulting in associated increases in human-bear conflict (hereafter, bear incidents). We used public reports of bear incidents in Michigan, USA, from 2003-2011 to assess the relative contributions of ecological and anthropogenic variables in explaining the spatial distribution of bear incidents and estimated the potential risk of bear incidents. We used weighted Normalized Difference Vegetation Index mean as an index of primary productivity, region (i.e., Upper Peninsula or Lower Peninsula), primary and secondary road densities, and percentage land cover type within 6.5-km2 circular buffers around bear incidents and random points. We developed 22 a priori models and used generalized linear models and Akaike's Information Criterion (AIC) to rank models. The global model was the best compromise between model complexity and model fit (w = 0.99), with a ΔAIC of 8.99 units relative to the second-best performing model. We found that as deciduous forest cover increased, the probability of bear incident occurrence increased. Among the measured anthropogenic variables, cultivated crops and primary roads were the most important in our AIC-best model and were both positively related to the probability of bear incident occurrence. The spatial distribution of relative bear incident risk varied markedly throughout Michigan. Forest cover fragmented by agriculture and other anthropogenic activities presents an environment that likely facilitates bear incidents. Our map can help wildlife managers identify areas of bear incident occurrence, which in turn can be used to help develop strategies aimed at reducing incidents. Researchers and wildlife managers can use similar mapping techniques to
Comparison of manual and automatic onset time picking for local earthquakes in North Eastern Italy.
NASA Astrophysics Data System (ADS)
Spallarossa, D.; Tiberi, L.; Costa, G.
2012-04-01
Automatic estimates of earthquake parameters continue to be of considerable interest to the seismological community. The automatic processing of seismic data, whether for a real-time seismic warning system or for reprocessing large amounts of seismic recordings, is increasingly demanded by seismologists. In this study, a new method for automatic phase picking (P and S) is presented, which includes envelope function calculation, STA/LTA detectors, and AR picking algorithms based on the Akaike information criterion (AIC). The main characteristics of the proposed picking algorithm are: a) pre-filtering and envelope calculation to prearrange the onset; b) preliminary detection of the P onset using both the AIC-based picker and the STA/LTA picker; c) S/N analysis, P validation, filtering, and re-picking; d) preliminary earthquake location; e) detection of the S onset adopting the AIC-based picker; f) S/N analysis and S validation; g) earthquake location. The algorithm is applied to a reference data set composed of 200 events with very heterogeneous qualities of P and S onsets, acquired by the South Eastern Alps Transfrontier network from 01/01/2008 to 03/31/2008 in North Eastern Italy and surrounding regions. These data are collected using the software Antelope, an integrated collection of programs for data management and seismic data analysis. The reliability and robustness of the proposed algorithm are tested by comparing manually derived P and S readings (determined by an experienced seismic analyst), serving as reference picks, with the corresponding automatically estimated P and S arrival times. An additional analysis compares these automatic picks with the ones produced by Antelope, which uses only STA/LTA detectors, and finally studies the effect of these different sets of arrival times on the resulting locations for each database event. Preliminary results indicate that seismic detectors which integrate different techniques could improve the stability of the
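An AIC-based picker of the kind described above typically locates an onset by splitting a window at the point that minimizes a two-segment variance AIC. The following is a minimal Python sketch of that idea (the classic variance-based formulation, not the paper's exact implementation), applied to a synthetic trace.

```python
import math
import random

def aic_onset(x):
    """Variance-based AIC picker: for each split point k,
    AIC(k) = k*log(var(x[:k])) + (n-k-1)*log(var(x[k:])).
    The minimum of AIC(k) marks the most likely onset sample,
    i.e., the change point between noise and signal variance."""
    n = len(x)
    def var(seg):
        m = sum(seg) / len(seg)
        return sum((s - m) ** 2 for s in seg) / len(seg) or 1e-12
    best_k, best_aic = None, float("inf")
    for k in range(2, n - 2):  # skip degenerate one-sample segments
        a = k * math.log(var(x[:k])) + (n - k - 1) * math.log(var(x[k:]))
        if a < best_aic:
            best_k, best_aic = k, a
    return best_k

# Synthetic trace: low-variance noise followed by a high-variance "arrival"
# at sample 100 (illustrative data, not a real seismogram).
random.seed(1)
trace = [random.gauss(0, 0.1) for _ in range(100)] + \
        [random.gauss(0, 2.0) for _ in range(100)]
pick = aic_onset(trace)
```

In practice such a picker is run on a short window pre-selected by an STA/LTA trigger, as in the staged procedure the abstract outlines.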
A procedure for seiche analysis with Bayesian information criterion
NASA Astrophysics Data System (ADS)
Aichi, Masaatsu
2016-04-01
A seiche is a standing wave in an enclosed or semi-enclosed water body. Its amplitude changes irregularly in time due to weather conditions and other factors, so extracting the seiche signal is not easy with usual methods for time series analysis such as the fast Fourier transform (FFT). In this study, a new method for time series analysis with a Bayesian information criterion was developed to decompose seiche, tide, long-term trend, and residual components from time series data of tide stations. The method is based on maximum marginal likelihood estimation of the tide amplitudes, seiche amplitude, and trend components. The seiche amplitude and trend components were assumed to change gradually, i.e., their second derivatives in time were close to zero. These assumptions were incorporated as prior distributions. The variances of the prior distributions were estimated by minimizing the Akaike Bayesian information criterion (ABIC). The frequency of the seiche was determined by Newton's method, with an initial guess from the FFT. The accuracy of the proposed method was checked by analyzing synthetic time series data composed of known components; the original components were reproduced quite well. The proposed method was also applied to actual time series data of sea level observed by a tide station and of the strain of coastal rock masses observed by a fiber Bragg grating sensor in Aburatsubo Bay, Japan. The seiche in the bay and the response of the rock masses were successfully extracted.
Anterior Insular Cortex and Emotional Awareness
Gu, Xiaosi; Hof, Patrick R.; Friston, Karl J.; Fan, Jin
2014-01-01
This paper reviews the foundation for a role of the human anterior insular cortex (AIC) in emotional awareness, defined as the conscious experience of emotions. We first introduce the neuroanatomical features of AIC and existing findings on emotional awareness. Using empathy, the awareness and understanding of other people’s emotional states, as a test case, we then present evidence to demonstrate: 1) AIC and anterior cingulate cortex (ACC) are commonly coactivated as revealed by a meta-analysis, 2) AIC is functionally dissociable from ACC, 3) AIC integrates stimulus-driven and top-down information, and 4) AIC is necessary for emotional awareness. We propose a model in which AIC serves two major functions: integrating bottom-up interoceptive signals with top-down predictions to generate a current awareness state and providing descending predictions to visceral systems that provide a point of reference for autonomic reflexes. We argue that AIC is critical and necessary for emotional awareness. PMID:23749500
Difference image analysis: automatic kernel design using information criteria
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.
2016-03-01
We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components: a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods, including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
Information Economics: Valuing Information.
ERIC Educational Resources Information Center
Brinberg, Herbert R.
1989-01-01
Addresses the question of why previous articles and studies on the value of information have failed to provide meaningful techniques for measuring that value. The discussion covers four principal causes for confusion surrounding the valuation of information and draws conclusions about the value-added model of information. (seven references) (CLB)
Theorizing Information for Information Science.
ERIC Educational Resources Information Center
Cornelius, Ian
2002-01-01
Considers whether information science has a theory of information. Highlights include guides to information and its theory; constructivism; information outside information science; process theories; cognitive views of information; measuring information; meaning; and misinformation. (Contains 89 references.) (LRW)
Average Information Content Maximization—A New Approach for Fingerprint Hybridization and Reduction
Śmieja, Marek; Warszycki, Dawid
2016-01-01
Fingerprints, bit representations of compound chemical structure, have been widely used in cheminformatics for many years. Although fingerprints with the highest resolution display satisfactory performance in virtual screening campaigns, the presence of a relatively high number of irrelevant bits introduces noise into the data and makes their application more time-consuming. In this study, we present a new method of hybrid reduced fingerprint construction, the Average Information Content Maximization algorithm (AIC-Max algorithm), which selects the most informative bits from a collection of fingerprints. This methodology, applied to the ligands of five cognate serotonin receptors (5-HT2A, 5-HT2B, 5-HT2C, 5-HT5A, 5-HT6), proved that 100 bits selected from four non-hashed fingerprints reflect almost all the structural information required for a successful in silico discrimination test. A classification experiment indicated that the reduced representation is able to achieve even slightly better performance than the state-of-the-art 10-times-longer fingerprints, and in a significantly shorter time. PMID:26784447
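As a rough illustration of selecting informative fingerprint bits, the sketch below ranks bits by the Shannon entropy of their set-frequency across compounds: constant bits carry no information, bits set in about half the compounds carry the most. The tiny fingerprint set, the entropy-only score, and the 4-bit cutoff are all illustrative assumptions, not the authors' exact AIC-Max algorithm.

```python
import math

def bit_entropy(p):
    """Shannon entropy (in bits) of a fingerprint bit with set-frequency p;
    maximal (1.0) at p = 0.5, zero for bits that are constant."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical 8-bit fingerprints for six compounds (illustrative data).
fps = [
    [1, 0, 1, 1, 0, 0, 1, 1],
    [1, 0, 0, 1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0, 1, 1, 0],
]
n_bits = len(fps[0])
freq = [sum(fp[i] for fp in fps) / len(fps) for i in range(n_bits)]
# Keep the 4 most informative bits as the reduced representation.
ranked = sorted(range(n_bits), key=lambda i: bit_entropy(freq[i]), reverse=True)
top4 = ranked[:4]
```

Bits 0, 1, and 4 are constant across all compounds, so they contribute nothing to discrimination and are dropped first.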
Some novel growth functions and their application with reference to growth in ostrich.
Faridi, A; López, S; Ammar, H; Salwa, K S; Golian, A; Thornley, J H M; France, J
2015-06-01
Four novel growth functions, namely, Pareto, extreme value distribution (EVD), Lomolino, and cumulative β-P distribution (CBP), are derived, and their ability to describe ostrich growth curves is evaluated. The functions were compared with standard growth equations, namely, the monomolecular, Michaelis-Menten (MM), Gompertz, Richards, and generalized MM (gMM). For this purpose, 2 separate comparisons were conducted. In the first, all the functions were fitted to 40 individual growth curves (5 males and 35 females) of ostriches using nonlinear regression. In the second, performance of the functions was assessed when data from 71 individuals were composited (570 data points). This comparison was undertaken using nonlinear mixed models and considering 3 approaches: 1) models with no random effect, 2) a random effect incorporated as the intercept, and 3) a random effect incorporated into the asymptotic weight parameter (Wf). The results from the first comparison showed that the functions generally gave acceptable values of R2 and residual variance. On the basis of the Akaike information criterion (AIC), CBP gave the best fit, whereas the Gompertz and Lomolino equations were the preferred functions on the basis of the corrected AIC (AICc). Bias, accuracy factor, the Durbin-Watson statistic, and the number of runs of sign were used to analyze the residuals. CBP gave the best distribution of residuals but also produced more residual autocorrelation (significant Durbin-Watson statistic). The functions were applied to sample data for a more conventional farm species (2 breeds of cattle) to verify the results of the comparison of fit among functions and their applicability across species. In the second comparison, analysis of mixed models showed that incorporation of a random effect into Wf gave the best fit, resulting in smaller AIC and AICc values compared with those in the other 2 approaches. On the basis of AICc, best fit was achieved with CBP, followed by gMM, Lomolino, and
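The AIC/AICc disagreement described above stems from the small-sample correction term. A minimal sketch (with hypothetical residual sums of squares and parameter counts, not the ostrich fits) shows how AICc penalizes the more flexible model harder as the parameter count k approaches the sample size n.

```python
import math

def aic(n, rss, k):
    """Gaussian-likelihood AIC from a residual sum of squares:
    AIC = n*log(RSS/n) + 2k. Constant offsets cancel when comparing
    models fitted to the same data."""
    return n * math.log(rss / n) + 2 * k

def aicc(n, rss, k):
    """Small-sample corrected AIC; the extra term 2k(k+1)/(n-k-1)
    grows as k approaches n, penalizing flexible models on short series."""
    return aic(n, rss, k) + (2 * k * (k + 1)) / (n - k - 1)

# Illustrative comparison: a 3-parameter Gompertz-like fit vs. a
# 5-parameter CBP-like fit on n = 40 growth observations.
# The RSS values are hypothetical, not taken from the study.
n = 40
fits = {"gompertz": (3, 52.0), "cbp": (5, 48.0)}  # model -> (k, RSS)
scores = {m: (aic(n, rss, k), aicc(n, rss, k)) for m, (k, rss) in fits.items()}
```

With these illustrative numbers the extra parameters do not buy enough RSS reduction, so the simpler model wins on both criteria, and the AICc margin is wider than the AIC margin.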
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. The problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited in this study to serial data using time series techniques, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
Identification of sorption processes and parameters for radionuclide transport in fractured rock
NASA Astrophysics Data System (ADS)
Dai, Zhenxue; Wolfsberg, Andrew; Reimus, Paul; Deng, Hailin; Kwicklis, Edward; Ding, Mei; Ware, Doug; Ye, Ming
2012-01-01
Summary: Identification of chemical reaction processes in subsurface environments is a key issue for reactive transport modeling, because simulating different processes requires developing different chemical-mathematical models. In this paper, two sorption processes (equilibrium and kinetic) are considered for modeling neptunium and uranium sorption in fractured rock. Based on different conceptualizations of the two processes occurring in fracture and/or matrix media, seven dual-porosity, multi-component reactive transport models are developed. The process models are identified with a stepwise strategy using multi-tracer concentration data obtained from a series of transport experiments. In the first step, breakthrough data of a conservative tracer (tritium) obtained from four experiments are used to estimate the flow and non-reactive transport parameters (mean fluid residence time in the fracture, fracture aperture, and matrix tortuosity) common to all the reactive transport models. In the second and third steps, with the common non-reactive flow and transport parameters fixed, the sorption parameters (retardation factor, sorption coefficient, and kinetic rate constant) of each model are estimated using the breakthrough data of the reactive tracers, neptunium and uranium, respectively. Based on the inverse modeling results, the seven sorption-process models are discriminated using four model discrimination (or selection) criteria: the Akaike information criterion (AIC), modified Akaike information criterion (AICc), Bayesian information criterion (BIC), and Kashyap information criterion (KIC). These criteria favor the kinetic sorption process for modeling reactive transport of neptunium and uranium in both fracture and matrix. This conclusion is confirmed by two chemical criteria, the half-reaction time and the Damköhler number.
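The first three of these criteria differ only in the penalty added to the negative log-likelihood (KIC additionally involves the Fisher information matrix, so it is omitted here). A hedged stdlib sketch with hypothetical fit values, not numbers from the study:

```python
import math

def aic(loglik, k):
    """Akaike information criterion."""
    return -2.0 * loglik + 2.0 * k

def aicc(loglik, k, n):
    """Small-sample corrected AIC (requires n - k - 1 > 0)."""
    return aic(loglik, k) + 2.0 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    """Bayesian information criterion."""
    return -2.0 * loglik + k * math.log(n)

# Hypothetical fits: model A (3 parameters) vs. model B (5 parameters)
# on n = 40 breakthrough-curve observations.
n = 40
for name, ll, k in [("A", -52.1, 3), ("B", -49.8, 5)]:
    print(name, round(aic(ll, k), 2), round(aicc(ll, k, n), 2), round(bic(ll, k, n), 2))
```

With these invented numbers AIC prefers the richer model B while AICc and BIC, which penalize parameters more heavily at this sample size, prefer model A, which is why studies like this one report several criteria side by side.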
Rogers, K A; Ruppert, A S; Bingman, A; Andritsos, L A; Awan, F T; Blum, K A; Flynn, J M; Jaglowski, S M; Lozanski, G; Maddocks, K J; Byrd, J C; Woyach, J A; Jones, J A
2016-02-01
Chronic lymphocytic leukemia (CLL) is frequently complicated by secondary autoimmune cytopenias (AICs). Ibrutinib is an irreversible inhibitor of Bruton's tyrosine kinase approved for the treatment of relapsed CLL and CLL with del(17p). The effect of ibrutinib treatment on the incidence of AIC is currently unknown. We reviewed medical records of 301 patients treated with ibrutinib, as participants in therapeutic clinical trials at The Ohio State University Comprehensive Cancer Center between July 2010 and July 2014. Subjects were reviewed with respect to past history of AIC, and treatment-emergent AIC cases were identified. Before starting ibrutinib treatment, 26% of patients had experienced AIC. Information was available for a total of 468 patient-years of ibrutinib exposure, during which there were six cases of treatment-emergent AIC. This corresponds to an estimated incidence rate of 13 episodes for every 1000 patient-years of ibrutinib treatment. We further identified 22 patients receiving therapy for AIC at the time ibrutinib was started. Of these 22 patients, 19 were able to discontinue AIC therapy. We found that ibrutinib treatment is associated with a low rate of treatment-emergent AIC. Patients with an existing AIC have been successfully treated with ibrutinib and subsequently discontinued AIC therapy. PMID:26442611
Liang, Sisi; Panagiotaki, Eleftheria; Bongers, Andre; Shi, Peng; Sved, Paul; Watson, Geoffrey; Bourne, Roger
2016-05-01
This study compares the theoretical information content of single- and multi-compartment models of diffusion-weighted signal attenuation in prostate tissue. Diffusion-weighted imaging (DWI) was performed at 9.4 T with multiple diffusion times and an extended range of b values in four whole formalin-fixed prostates. Ten models, including different combinations of isotropic, anisotropic, and restricted components, were tested. Models were ranked using the Akaike information criterion. In all four prostates, two-component models, comprising an anisotropic Gaussian component and an isotropic restricted component, ranked highest in the majority of voxels. Single-component models, whether isotropic (apparent diffusion coefficient, ADC) or anisotropic (diffusion tensor imaging, DTI), consistently ranked lower than multi-component models. Model ranking trends were independent of voxel size and maximum b value in the ranges tested (1.6-16 mm^3 and 3000-10,000 s/mm^2). This study characterizes the two major water components previously identified by biexponential models and shows that models incorporating both anisotropic and restricted components provide more information-rich descriptions of DWI signals in prostate tissue than single- or multi-component anisotropic models and models that do not account for restricted diffusion. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999065
VLP Source Inversion and Evaluation of Error Analysis Techniques at Fuego Volcano, Guatemala
NASA Astrophysics Data System (ADS)
Brill, K. A.; Waite, G. P.
2015-12-01
In January of 2012, our team occupied 10 sites around Fuego volcano with broadband seismometers, two of which were collocated with infrasound microphone arrays and tiltmeters (see Figure 1 for full deployment details). Our radial coverage around Fuego during the 2012 campaign satisfies conditions outlined by Dawson et al. [2011] for good network coverage. Very-long-period (VLP) events that accompany small-scale explosions were classified by waveform and eruption style. We located these VLP event families, which have been persistent at Fuego since at least 2008, through inversion in the same manner employed by Lyons and Waite [2011], with improved radial coverage in our network. We compare results for source inversions performed with independent tilt data against inversions incorporating tilt data extracted from the broadband records. The current best-practice method for choosing an optimum solution for inversion results is based on each solution's residual error, the relevance of free parameters used in the model, and the physical significance of the source mechanism. Error analysis was performed through bootstrapping to explore the source location uncertainty and the significance of components of the moment tensor. The significance of the number of free parameters has mostly been evaluated by calculating Akaike's Information Criterion (AIC), but little has been done to evaluate the sensitivity of AIC or other criteria (e.g., the Bayesian Information Criterion) to the number of model parameters. We compare solutions chosen by these alternate methods with more standard techniques for our real data set, as well as through the use of synthetic data, and make recommendations as to best practices. Figure 1: a) Map of 2012 station network: stations highlighted in red were collocated with infrasound arrays. b) Location of Fuego within Guatemala and view of the complex from the west with different eruptive centers labeled. c) Operational times for each of the stations and cameras.
Using Post-Traumatic Amnesia To Predict Outcome after Traumatic Brain Injury.
Ponsford, Jennie L; Spitz, Gershon; McKenzie, Dean
2016-06-01
Duration of post-traumatic amnesia (PTA) has emerged as a strong measure of injury severity after traumatic brain injury (TBI). Despite the growing international adoption of this measure, there remains a lack of consistency in the way in which PTA duration is used to classify severity of injury. This study aimed to establish the classification of PTA that would best predict functional or productivity outcomes. We conducted a cohort study of 1041 persons recruited from inpatient admissions to a TBI rehabilitation center between 1985 and 2013. Participants had a primary diagnosis of TBI, emerged from PTA before discharge from inpatient hospital, and engaged in productive activities before injury. Eight models that classify duration of PTA were evaluated: six based on the literature and two statistically driven. Models were assessed using the area under the receiver operating characteristic curve (AUC) as well as model-based Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) statistics. All categorization models showed longer PTA to be associated with a greater likelihood of being nonproductive at 1 year after TBI. Classification systems with a greater number of categories performed better than two-category systems. The dimensional (continuous) form of PTA resulted in the greatest AUC, and the lowest AIC and BIC, of the classification systems examined. This finding indicates that the greatest accuracy in prognosis is likely to be achieved using PTA as a continuous variable, which enables the probability of productive outcomes to be estimated with far greater precision than is possible using a classification system. Categorizing PTA to classify severity of injury may be reducing the precision with which clinicians can plan the treatment of patients after TBI. PMID:26234939
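The AUC used above can be computed directly as a rank statistic. A small stdlib illustration (the PTA values and outcomes are invented) of why a continuous predictor can outperform a coarse two-category split:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case scores higher than a
    randomly chosen negative case (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Invented data: PTA duration (days) and nonproductive outcome at 1 year.
pta_days = [1, 3, 5, 8, 14, 30, 45, 60]
nonproductive = [0, 0, 0, 0, 1, 0, 1, 1]
continuous_auc = auc(pta_days, nonproductive)
binary_auc = auc([1 if d > 7 else 0 for d in pta_days], nonproductive)
print(continuous_auc, binary_auc)
```

Dichotomizing at a cut-point collapses distinct durations onto the same score, producing ties that lower the AUC relative to the continuous form, which mirrors the study's finding.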
Light, Theo; Marchetti, Michael P
2007-04-01
Many of California's native populations of freshwater fish are in serious decline, as are freshwater faunas worldwide. Habitat loss and alteration, hydrologic modification, water pollution, and invasions have been identified as major drivers of these losses. Because these potential causes of decline are frequently correlated, it is difficult to separate direct from indirect effects of each factor and to appropriately rank their importance for conservation action. Recently a few authors have questioned the conservation significance of invasions, suggesting that they are "passengers" rather than "drivers" of ecological change. We compiled an extensive, watershed-level data set of fish presence and conservation status, land uses, and hydrologic modifications in California and used an information theoretic approach (Akaike's information criterion, AIC) and path analysis to evaluate competing models of native fish declines. Hydrologic modification (impoundments and diversions), invasions, and proportion of developed land were all predictive of the number of extinct and at-risk native fishes in California watersheds in the AIC analysis. Although nonindigenous fish richness was the best single predictor (after native richness) of fishes of conservation concern, the combined ranking of models containing hydrologic modification variables was slightly higher than that of models containing nonindigenous richness. Nevertheless, the path analysis indicated that the effects of both hydrologic modification and development on fishes of conservation concern were largely indirect, through their positive effects on nonindigenous fish richness. The best-fitting path model was the driver model, which included no direct effects of abiotic disturbance on native fish declines. Our results suggest that, for California freshwater fishes, invasions are the primary direct driver of extinctions and population declines, whereas the most damaging effect of habitat alteration is the tendency of
NASA Astrophysics Data System (ADS)
Ibanez, C. A. G.; Carcellar, B. G., III; Paringit, E. C.; Argamosa, R. J. L.; Faelga, R. A. G.; Posilero, M. A. V.; Zaragosa, G. P.; Dimayacyac, N. A.
2016-06-01
Diameter-at-breast-height (DBH) estimation is a prerequisite in various allometric equations estimating important forestry indices like stem volume, basal area, biomass, and carbon stock. LiDAR technology provides a means of directly obtaining different forest parameters, except DBH, from the behavior and characteristics of the point cloud, which are unique to different forest classes. An extensive tree inventory was done on a two-hectare established sample plot in a natural growth forest on Mt. Makiling, Laguna. Coordinates, height, and canopy cover were measured, and species were identified for comparison with LiDAR derivatives. Multiple linear regression was used to obtain LiDAR-derived DBH by relating field-derived DBH to 27 LiDAR-derived parameters at 20 m, 10 m, and 5 m grid resolutions. To find the best combination of parameters for DBH estimation, all possible combinations of parameters were generated and evaluated automatically using Python scripts, with regression-related libraries such as NumPy, SciPy, and scikit-learn. The combination yielding the highest r-squared (coefficient of determination) and the lowest AIC (Akaike's Information Criterion) and BIC (Bayesian Information Criterion) was selected as the best equation. The best equation used 11 parameters at the 10 m grid size, with an r-squared of 0.604, an AIC of 154.04, and a BIC of 175.08. The best combination of parameters may differ among forest classes, a question for further studies. Additional statistical tests, such as the Kaiser-Meyer-Olkin (KMO) coefficient and Bartlett's test of sphericity, can be supplemented to help determine the correlation among parameters.
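The exhaustive combination sweep described above can be sketched in pure Python. The authors used NumPy/SciPy/scikit-learn, so this stdlib version (Gaussian AIC on the residual sum of squares, toy data) is only an assumed reconstruction of the idea, not their scripts:

```python
import itertools
import math

def ols_rss(X, y):
    """Residual sum of squares of the least-squares fit of y on the columns
    of X (intercept added), solving the normal equations by Gaussian
    elimination with partial pivoting."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for i in range(p):
        piv = max(range(i, p), key=lambda rr: abs(A[rr][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for k in range(i + 1, p):
            f = A[k][i] / A[i][i]
            A[k] = [akj - f * aij for akj, aij in zip(A[k], A[i])]
            b[k] -= f * b[i]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, p))) / A[i][i]
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

def best_subset_by_aic(predictors, y):
    """Score every non-empty combination of candidate predictors with the
    Gaussian AIC, n*ln(RSS/n) + 2k, and return (AIC, combination)."""
    n, m = len(y), len(predictors)
    best = None
    for r in range(1, m + 1):
        for combo in itertools.combinations(range(m), r):
            X = [[predictors[j][i] for j in combo] for i in range(n)]
            rss = ols_rss(X, y)
            k = r + 2  # slope terms + intercept + error variance
            score = n * math.log(rss / n) + 2 * k
            if best is None or score < best[0]:
                best = (score, combo)
    return best

# Toy data: y depends on x1 only; x2 is an irrelevant candidate predictor.
x1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x2 = [5, 3, 8, 1, 9, 2, 7, 4, 10, 6]
noise = [0.2, -0.1, 0.15, -0.2, 0.1, -0.15, 0.05, 0.2, -0.05, -0.1]
y = [2 * v + e for v, e in zip(x1, noise)]
print(best_subset_by_aic([x1, x2], y))
```

With 27 candidate parameters this brute-force search explodes combinatorially (2^27 - 1 subsets), which is presumably why the authors automated the sweep rather than hand-picking combinations.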
Validation of the Chinese Version of the Quality of Nursing Work Life Scale
Fu, Xia; Xu, Jiajia; Song, Li; Li, Hua; Wang, Jing; Wu, Xiaohua; Hu, Yani; Wei, Lijun; Gao, Lingling; Wang, Qiyi; Lin, Zhanyi; Huang, Huigen
2015-01-01
Quality of Nursing Work Life (QNWL) serves as a predictor of a nurse's intent to leave and of hospital nurse turnover. However, QNWL measurement tools validated for use in China are lacking. The present study evaluated the construct validity of the QNWL scale in China. A cross-sectional study using convenience sampling was conducted from June 2012 to January 2013 at five hospitals in Guangzhou, which employ 1938 nurses. The participants were asked to complete the QNWL scale and the World Health Organization Quality of Life abbreviated version (WHOQOL-BREF). A total of 1922 nurses provided the final data used for analyses. Sixty-five nurses from the first investigated division were re-measured two weeks later to assess the test-retest reliability of the scale. The internal consistency reliability of the QNWL scale was assessed using Cronbach's α. Test-retest reliability was assessed using the intra-class correlation coefficient (ICC). Criterion-related validity was assessed using the correlation of the total scores of the QNWL and the WHOQOL-BREF. Construct validity was assessed with the following indices: the χ2 statistic and degrees of freedom; the root mean square error of approximation (RMSEA); the Akaike information criterion (AIC); the consistent Akaike information criterion (CAIC); the goodness-of-fit index (GFI); the adjusted goodness-of-fit index; and the comparative fit index (CFI). The findings demonstrated high internal consistency (Cronbach's α = 0.912) and test-retest reliability (ICC = 0.74) for the QNWL scale. The chi-square test (χ2 = 13879.60, df = 813, P = 0.0001) was significant. The RMSEA value was 0.091, with AIC = 1806.00, CAIC = 7730.69, CFI = 0.93, and GFI = 0.74. The correlation coefficient between the QNWL total scores and the WHOQOL-BREF total scores was 0.605 (p < 0.01). The QNWL scale was reliable and valid in Chinese-speaking nurses and could be used as a clinical and research
NASA Technical Reports Server (NTRS)
Ricks, Wendell; Corker, Kevin
1990-01-01
Primary Flight Display (PFD) information management and cockpit display of information management research is presented in viewgraph form. The information management problem in the cockpit, information management burdens, the key characteristics of an information manager, the interface management system handling the flow of information and the dialogs between the system and the pilot, and overall system architecture are covered.
A universal approximate cross-validation criterion for regular risk functions.
Commenges, Daniel; Proust-Lima, Cécile; Samieri, Cécilia; Liquet, Benoit
2015-05-01
Selection of estimators is an essential task in modeling. A general framework is that the estimators of a distribution are obtained by minimizing a function (the estimating function) and assessed using another function (the assessment function). A classical case is that both functions estimate an information risk (specifically cross-entropy); this corresponds to using maximum likelihood estimators and assessing them by the Akaike information criterion (AIC). In more general cases, the assessment risk can be estimated by leave-one-out cross-validation. Since leave-one-out cross-validation is computationally very demanding, we propose in this paper a universal approximate cross-validation criterion under regularity conditions (UACVR). This criterion can be adapted to different types of estimators, including penalized likelihood and maximum a posteriori estimators, and also to different assessment risk functions, including information risk functions and the continuous ranked probability score (CRPS). UACVR reduces to the Takeuchi information criterion (TIC) when cross-entropy is the risk for both estimation and assessment. We provide the asymptotic distributions of UACVR and of a difference of UACVR values for two estimators. We validate UACVR using simulations and provide an illustration on real data in the psychometric context, comparing estimators of the distributions of ordered categorical data derived from threshold models with estimators from models based on continuous approximations. PMID:25849800
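The correspondence this work builds on, that AIC approximates leave-one-out cross-validation of the cross-entropy risk for a well-specified maximum-likelihood model, can be checked numerically on a toy Gaussian example (the data and the per-observation scaling convention are my own):

```python
import math

def neg_log_density(x, mu, s2):
    """Negative log-density of N(mu, s2) at x."""
    return 0.5 * math.log(2 * math.pi * s2) + (x - mu) ** 2 / (2 * s2)

def loocv_risk(x):
    """Leave-one-out estimate of the cross-entropy risk for a Gaussian with
    ML-estimated mean and variance: mean negative predictive log-density of
    each held-out point."""
    out = []
    for i in range(len(x)):
        rest = x[:i] + x[i + 1:]
        mu = sum(rest) / len(rest)
        s2 = sum((v - mu) ** 2 for v in rest) / len(rest)
        out.append(neg_log_density(x[i], mu, s2))
    return sum(out) / len(out)

def aic_risk(x):
    """AIC-based estimate of the same risk on a per-observation scale:
    (-loglik + k) / n, i.e. AIC / (2n), with k = 2 parameters."""
    n = len(x)
    mu = sum(x) / n
    s2 = sum((v - mu) ** 2 for v in x) / n
    loglik = -sum(neg_log_density(v, mu, s2) for v in x)
    return (-loglik + 2) / n

data = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4, 5.0, 4.6, 4.1, 5.3]
print(loocv_risk(data), aic_risk(data))
```

Even at n = 10 the two estimates nearly coincide here; UACVR generalizes this agreement to estimators and risks where the classical AIC argument does not apply.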
NASA Astrophysics Data System (ADS)
Liu, Zhiyong; Zhou, Ping; Chen, Gang; Guo, Ledong
2014-11-01
This study investigated the performance and potential of a hybrid model that combined the discrete wavelet transform and support vector regression (the DWT-SVR model) for daily and monthly streamflow forecasting. Three key factors of the wavelet decomposition phase (mother wavelet, decomposition level, and edge effect) were proposed to consider for improving the accuracy of the DWT-SVR model. The performance of DWT-SVR models with different combinations of these three factors was compared with the regular SVR model. The effectiveness of these models was evaluated using the root-mean-squared error (RMSE) and Nash-Sutcliffe model efficiency coefficient (NSE). Daily and monthly streamflow data observed at two stations in Indiana, United States, were used to test the forecasting skill of these models. The results demonstrated that the different hybrid models did not always outperform the SVR model for 1-day and 1-month lead time streamflow forecasting. This suggests that it is crucial to consider and compare the three key factors when using the DWT-SVR model (or other machine learning methods coupled with the wavelet transform), rather than choosing them based on personal preferences. We then combined forecasts from multiple candidate DWT-SVR models using a model averaging technique based upon Akaike's information criterion (AIC). This ensemble prediction was superior to the single best DWT-SVR model and regular SVR model for both 1-day and 1-month ahead predictions. With respect to longer lead times (i.e., 2- and 3-day and 2-month), the ensemble predictions using the AIC averaging technique were consistently better than the best DWT-SVR model and SVR model. Therefore, integrating model averaging techniques with the hybrid DWT-SVR model would be a promising approach for daily and monthly streamflow forecasting. Additionally, we strongly recommend considering these three key factors when using wavelet-based SVR models (or other wavelet-based forecasting models).
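The AIC-based averaging step above can be sketched as an Akaike-weighted combination of candidate forecasts. The forecasts and AIC values below are invented for illustration, not taken from the study:

```python
import math

def aic_average(forecasts, aics):
    """Akaike-weighted combination of competing model forecasts:
    w_i proportional to exp(-(AIC_i - AIC_min) / 2)."""
    best = min(aics)
    w = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(w)
    w = [wi / total for wi in w]
    return [sum(wi * f[t] for wi, f in zip(w, forecasts))
            for t in range(len(forecasts[0]))]

# Invented 1-day-ahead streamflow forecasts (m^3/s) from three candidate
# DWT-SVR configurations, with their (invented) AIC values.
forecasts = [[102.0, 98.5, 110.2],
             [100.4, 97.9, 108.8],
             [105.1, 99.6, 112.0]]
ensemble = aic_average(forecasts, [241.2, 240.1, 244.7])
print([round(v, 2) for v in ensemble])
```

Because the weights are a convex combination, the ensemble forecast always lies within the envelope of the candidate forecasts while leaning toward the lower-AIC models, which is consistent with the ensemble outperforming any single candidate in the study.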
NASA Astrophysics Data System (ADS)
Coelho, Luís Francisco Mello; Ribeiro, Milton Cezar; Pereira, Rodrigo Augusto Santinelo
2014-05-01
The success of fig trees in tropical ecosystems is evidenced by the great diversity (>750 species) and wide geographic distribution of the genus. We assessed the contribution of environmental variables to the species richness and density of fig trees in fragments of seasonal semideciduous forest (SSF) in Brazil. We assessed 20 forest fragments in three regions of São Paulo State, Brazil. Fig tree richness and density were estimated in rectangular plots, comprising 31.4 ha sampled. Both richness and fig tree density were linearly modeled as a function of variables representing (1) fragment metrics, (2) forest structure, and (3) landscape metrics expressing water drainage in the fragments. Model selection was performed by comparing the AIC values (Akaike Information Criterion) and the relative weight of each model (wAIC). Both species richness and fig tree density were better explained by the water availability in the fragment (meters of streams/ha): wAIC(richness) = 0.45, wAIC(density) = 0.96. The remaining variables, related to anthropic perturbation and forest structure, carried little weight in the models. The rainfall seasonality in SSF seems to select for both establishment strategies and morphological adaptations in the hemiepiphytic fig tree species. In the studied SSF, hemiepiphytes established at lower heights in their host trees than reported for fig trees in evergreen rainforests. Some hemiepiphytic fig species evolved superficial roots extending up to 100 m from their trunks, resulting in hectare-scale root zones that allow them to efficiently forage for water and soil nutrients. The community of fig trees was robust to variation in forest structure and conservation level of SSF fragments, making this group of plants an important element for the functioning of seasonal tropical forests.
Ghavi Hossein-Zadeh, N
2016-02-01
In order to describe the lactation curves of milk yield (MY) and composition in buffaloes, seven non-linear mathematical equations (Wood, Dhanoa, Sikka, Nelder, Brody, Dijkstra and Rook) were used. Data were 116,117 test-day records for MY and the fat (FP) and protein (PP) percentages of milk from the first three lactations of buffaloes, collected from 893 herds in the period from 1992 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly production records of dairy buffaloes using the NLIN and MODEL procedures in SAS and the parameters were estimated. The models were tested for goodness of fit using the adjusted coefficient of determination (adjusted R^2), root mean square error (RMSE), Durbin-Watson statistic, and Akaike's information criterion (AIC). The Dijkstra model provided the best fit of MY and PP of milk for the first three parities of buffaloes due to lower values of RMSE and AIC than the other models. For first-parity buffaloes, the Sikka and Brody models provided the best fit of FP, but for second- and third-parity buffaloes, the Sikka model and the Brody equation provided the best fit of the lactation curve for FP, respectively. The results of this study showed that the Wood and Dhanoa equations were able to estimate the time to peak MY more accurately than the other equations. In addition, the Nelder and Dijkstra equations were able to estimate the peak time at second and third parities more accurately than the other equations, respectively. The Brody function provided more accurate predictions of peak MY over the first three parities of buffaloes. There was generally a positive relationship between 305-day MY and persistency measures, and also between peak yield and 305-day MY, calculated by the different models within each lactation in the current study. Overall, evaluation of the different equations used in the current study indicated the potential of non-linear models for fitting monthly productive records of buffaloes. PMID:26354679
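Of the seven equations, the Wood curve has a convenient closed form for the peak time and peak yield discussed above. A small sketch with hypothetical parameter values (not estimates from the study):

```python
import math

def wood(t, a, b, c):
    """Wood (1967) incomplete-gamma lactation curve: y(t) = a * t**b * exp(-c*t)."""
    return a * t ** b * math.exp(-c * t)

def wood_peak(a, b, c):
    """Setting dy/dt = 0 gives the peak time t = b/c and, substituting back,
    the peak yield a * (b/c)**b * exp(-b)."""
    t_peak = b / c
    return t_peak, wood(t_peak, a, b, c)

# Hypothetical Wood parameters for a first-parity buffalo on a monthly scale.
t_peak, y_peak = wood_peak(a=7.5, b=0.25, c=0.04)
print(round(t_peak, 2), round(y_peak, 2))
```

Such closed-form peak expressions are one reason the Wood family is convenient for comparing peak-time estimates across parities, as done in the study.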
Fenwick, John D.; Pardo-Montero, Juan; Nahum, Alan E.; Malik, Zafar I.
2012-02-01
Purpose: To determine how modelled maximum tumor control rates achievable without exceeding mucositis tolerance (tcp_max-early) vary with schedule duration for head and neck squamous cell carcinoma (HNSCC). Methods and materials: Using maximum-likelihood techniques, we have fitted a range of tcp models to two HNSCC datasets (Withers' and British Institute of Radiology [BIR]), characterizing the dependence of tcp on duration and equivalent dose in 2 Gy fractions (EQD_2). Models likely to best describe future data have been selected using the Akaike information criterion (AIC) and its quasi-AIC extension for overdispersed data. Setting EQD_2 values in the selected tcp models to levels just tolerable for mucositis, we have plotted tcp_max-early against schedule duration. Results: While BIR dataset tcp fits describe dose levels isoeffective for tumor control as rising significantly with schedule protraction, indicative of accelerated tumor repopulation, repopulation terms in fits to Withers' dataset do not reach significance after accounting for overdispersion of the data. The tcp_max-early curves calculated from tcp fits to the overall Withers' and BIR datasets rise by 8% and 0-4%, respectively, between 20 and 50 days duration; likewise, tcp_max-early curves calculated for stage-specific cohorts also generally rise slowly with increasing duration. However, none of the increases in tcp_max-early calculated from the overall or stage-specific fits reach significance. Conclusions: Local control rates modeled for treatments which lie just within mucosal tolerance rise slowly but insignificantly with increasing schedule length. This finding suggests that whereas useful gains may be made by accelerating unnecessarily slow schedules until they approach early-reaction tolerance, little is achieved by shortening schedules further while reducing doses to remain within mucosal tolerance, an approach that may slightly worsen outcomes.
Røislien, Jo; Lossius, Hans Morten; Kristiansen, Thomas
2015-01-01
Background Trauma is a leading global cause of death. Trauma mortality rates are higher in rural areas, constituting a challenge for quality and equality in trauma care. The aim of the study was to explore population density and transport time to hospital care as possible predictors of geographical differences in mortality rates, and to what extent choice of statistical method might affect the analytical results and accompanying clinical conclusions. Methods Using data from the Norwegian Cause of Death registry, deaths from external causes 1998–2007 were analysed. Norway consists of 434 municipalities, and municipality population density and travel time to hospital care were entered as predictors of municipality mortality rates in univariate and multiple regression models of increasing model complexity. We fitted linear regression models with continuous and categorised predictors, as well as piecewise linear and generalised additive models (GAMs). Models were compared using Akaike's information criterion (AIC). Results Population density was an independent predictor of trauma mortality rates, while the contribution of transport time to hospital care was highly dependent on choice of statistical model. A multiple GAM or piecewise linear model was superior, and similar, in terms of AIC. However, while transport time was statistically significant in multiple models with piecewise linear or categorised predictors, it was not in GAM or standard linear regression. Conclusions Population density is an independent predictor of trauma mortality rates. The added explanatory value of transport time to hospital care is marginal and model-dependent, highlighting the importance of exploring several statistical models when studying complex associations in observational data. PMID:25972600
Effects of human recreation on the incubation behavior of American Oystercatchers
McGowan, C.P.; Simons, T.R.
2006-01-01
Human recreational disturbance and its effects on wildlife demographics and behavior is an increasingly important area of research. We monitored the nesting success of American Oystercatchers (Haematopus palliatus) in coastal North Carolina in 2002 and 2003. We also used video monitoring at nests to measure the response of incubating birds to human recreation. We counted the number of trips per hour made by adult birds to and from the nest, and we calculated the percent time that adults spent incubating. We asked whether human recreational activities (truck, all-terrain vehicle [ATV], and pedestrian traffic) were correlated with parental behavioral patterns. Eleven a priori models of nest survival and behavioral covariates were evaluated using Akaike's Information Criterion (AIC) to see whether incubation behavior influenced nest survival. Factors associated with birds leaving their nests (n = 548) included ATV traffic (25%), truck traffic (17%), pedestrian traffic (4%), aggression with neighboring oystercatchers or paired birds exchanging incubation duties (26%), airplane traffic (1%) and unknown factors (29%). ATV traffic was positively associated with the rate of trips to and away from the nest (β1 = 0.749, P < 0.001) and negatively correlated with percent time spent incubating (β1 = -0.037, P = 0.025). Other forms of human recreation apparently had little effect on incubation behaviors. Nest survival models incorporating the frequency of trips by adults to and from the nest, and the percentage of time adults spent incubating, were somewhat supported in the AIC analyses. A low frequency of trips to and from the nest and, counter to expectations, low percent time spent incubating were associated with higher daily nest survival rates. These data suggest that changes in incubation behavior might be one mechanism by which human recreation affects the reproductive success of American Oystercatchers.
Canada lynx Lynx canadensis habitat and forest succession in northern Maine, USA
Hoving, C.L.; Harrison, D.J.; Krohn, W.B.; Jakubas, W.J.; McCollough, M.A.
2004-01-01
The contiguous United States population of Canada lynx Lynx canadensis was listed as threatened in 2000. The long-term viability of lynx populations at the southern edge of their geographic range has been hypothesized to be dependent on old growth forests; however, lynx are a specialist predator on snowshoe hare Lepus americanus, a species associated with early-successional forests. To quantify the effects of succession and forest management on landscape-scale (100 km2) patterns of habitat occupancy by lynx, we compared landscape attributes in northern Maine, USA, where lynx had been detected on snow track surveys to landscape attributes where surveys had been conducted, but lynx tracks had not been detected. Models were constructed a priori and compared using logistic regression and Akaike's Information Criterion (AIC), which quantitatively balances data fit and parsimony. In the models with the lowest (i.e. best) AIC, lynx were more likely to occur in landscapes with much regenerating forest, and less likely to occur in landscapes with much recent clearcut, partial harvest and forested wetland. Lynx were not associated positively or negatively with mature coniferous forest. A probabilistic map of the model indicated a patchy distribution of lynx habitat in northern Maine. According to an additional survey of the study area for lynx tracks during the winter of 2003, the model correctly classified 63.5% of the lynx occurrences and absences. Lynx were more closely associated with young forests than mature forests; however, old-growth forests were functionally absent from the landscape. Lynx habitat could be reduced in northern Maine, given recent trends in forest management practices. Harvest strategies have shifted from clearcutting to partial harvesting. If this trend continues, future landscapes will shift away from extensive regenerating forests and toward landscapes dominated by pole-sized and larger stands. Because Maine presently supports the only verified
Monthly streamflow prediction in the Volta Basin of West Africa: A SISO NARMAX polynomial modelling
NASA Astrophysics Data System (ADS)
Amisigo, B. A.; van de Giesen, N.; Rogers, C.; Andah, W. E. I.; Friesen, J.
Single-input-single-output (SISO) non-linear system identification techniques were employed to model monthly catchment runoff at selected gauging sites in the Volta Basin of West Africa. NARMAX (Non-linear Autoregressive Moving Average with eXogenous Input) polynomial models were fitted to basin monthly rainfall and gauging station runoff data for each of the selected sites and used to predict monthly runoff at the sites. An error reduction ratio (ERR) algorithm was used to order regressors for various combinations of input, output and noise lags (various model structures), and the significant regressors for each model were selected by applying the Akaike Information Criterion (AIC) to independent rainfall-runoff validation series. Model parameters were estimated with the Matlab REGRESS function (an orthogonal least squares method). In each case, the sub-model without noise terms was fitted first, followed by a fitting of the noise model. The coefficient of determination (R²), the Nash-Sutcliffe Efficiency criterion (NSE) and the F statistic for the estimation (training) series were used to evaluate the significance of fit of each model to this series, while model selection from the range of models fitted for each gauging site was done by examining the NSEs and the AICs of the validation series. Monthly runoff predictions from the selected models were very good, and the polynomial models appeared to have captured a good part of the rainfall-runoff non-linearity. The results indicate that the NARMAX modelling framework is suitable for monthly river runoff prediction in the Volta Basin. The several good models made available by the NARMAX modelling framework could be useful in the selection of model structures that also provide insights into the physical behaviour of the catchment rainfall-runoff system.
Pharmacokinetic Modeling of Intranasal Scopolamine in Plasma Saliva and Urine
NASA Technical Reports Server (NTRS)
Wu, L.; Chow, D. S. L.; Tam, V.; Putcha, L.
2014-01-01
An intranasal gel formulation of scopolamine (INSCOP) was developed for the treatment of Space Motion Sickness. The bioavailability and pharmacokinetics (PK) were evaluated under the Food and Drug Administration guidelines for clinical trials for an Investigative New Drug (IND). The aim of this project was to develop a PK model that can predict the relationship between plasma, saliva and urinary scopolamine concentrations using data collected from the IND clinical trial with INSCOP. METHODS: Twelve healthy human subjects were administered three dose levels (0.1, 0.2 and 0.4 mg) of INSCOP. Serial blood, saliva and urine samples were collected from 5 min to 24 h after dosing, and scopolamine concentrations were measured using a validated LC-MS-MS assay. Compartmental pharmacokinetic models, using actual dosing and sampling times, were built using Phoenix (version 1.2). Model discrimination was performed by minimizing the Akaike Information Criterion (AIC), maximizing the coefficient of determination (r²) and comparing the quality-of-fit plots. RESULTS: The best structural model to describe scopolamine disposition after INSCOP administration (minimal AIC = 907.2) consisted of one compartment each for plasma, saliva and urine, inter-connected with different rate constants. The estimated values of the PK parameters were compiled in Table 1. The model fitting exercises revealed nonlinear PK for scopolamine between the plasma and saliva compartments for K21, Vmax and Km. CONCLUSION: A PK model for INSCOP was developed that, for the first time, satisfactorily predicted the PK of scopolamine in plasma, saliva and urine after INSCOP administration. Non-linear PK between the plasma and saliva compartments yielded the best structural model to describe scopolamine disposition, and its inclusion significantly improved the model fit. The model can be utilized to predict scopolamine plasma concentration using saliva and/or urine data that
Bunnell, David B.; Hale, R. Scott; Vanni, Michael J.; Stein, Roy A.
2006-01-01
Stock-recruit models typically use only spawning stock size as a predictor of recruitment to a fishery. In this paper, however, we used spawning stock size as well as larval density and key environmental variables to predict recruitment of white crappies Pomoxis annularis and black crappies P. nigromaculatus, a genus notorious for variable recruitment. We sampled adults and recruits from 11 Ohio reservoirs and larvae from 9 reservoirs during 1998-2001. We sampled chlorophyll as an index of reservoir productivity and obtained daily estimates of water elevation to determine the impact of hydrology on recruitment. Akaike's information criterion (AIC) revealed that Ricker and Beverton-Holt stock-recruit models that included chlorophyll best explained the variation in larval density and age-2 recruits. Specifically, spawning stock catch per effort (CPE) and chlorophyll explained 63-64% of the variation in larval density. In turn, larval density and chlorophyll explained 43-49% of the variation in age-2 recruit CPE. Finally, spawning stock CPE and chlorophyll were the best predictors of recruit CPE (i.e., 74-86%). Although larval density and recruitment increased with chlorophyll, neither was related to seasonal water elevation. Also, the AIC generally did not distinguish between Ricker and Beverton-Holt models. From these relationships, we concluded that crappie recruitment can be limited by spawning stock CPE and larval production when spawning stock sizes are low (i.e., CPE < 5 crappies/net-night). At higher levels of spawning stock sizes, spawning stock CPE and recruitment were less clearly related. To predict recruitment in Ohio reservoirs, managers should assess spawning stock CPE with trap nets and estimate chlorophyll concentrations. To increase crappie recruitment in reservoirs where recruitment is consistently poor, managers should use regulations to increase spawning stock size, which, in turn, should increase larval production and recruits to the fishery.
Assessing the wildlife habitat value of New England salt marshes: II. Model testing and validation.
McKinney, Richard A; Charpentier, Michael A; Wigand, Cathleen
2009-07-01
We tested a previously described model to assess the wildlife habitat value of New England salt marshes by comparing modeled habitat values and scores with bird abundance and species richness at sixteen salt marshes in Narragansett Bay, Rhode Island USA. As a group, wildlife habitat value assessment scores for the marshes ranged from 307-509, or 31-67% of the maximum attainable score. We recorded 6 species of wading birds (Ardeidae; herons, egrets, and bitterns) at the sites during biweekly surveys. Species richness (r²=0.24, F=4.53, p=0.05) and abundance (r²=0.26, F=5.00, p=0.04) of wading birds significantly increased with increasing assessment score. We optimized our assessment model for wading birds by using the Akaike information criterion (AIC) to compare a series of models comprising specific components and categories of our model that best reflect their habitat use. The model incorporating pre-classification, wading bird habitat categories, and natural land surrounding the sites was substantially supported by AIC analysis as the best model. The abundance of wading birds significantly increased with increasing assessment scores generated with the optimized model (r²=0.48, F=12.5, p=0.003), demonstrating that optimizing models can be helpful in improving the accuracy of the assessment for a given species or species assemblage. In addition to validating the assessment model, our results show that in spite of their urban setting, our study marshes provide substantial wildlife habitat value. This suggests that even small wetlands in highly urbanized coastal settings can provide important wildlife habitat value if key habitat attributes (e.g., natural buffers, habitat heterogeneity) are present. PMID:18597178
Houweling, Antonetta C.; Philippens, Marielle E.P.; Dijkema, Tim; Roesink, Judith M.; Terhaard, Chris H.J.; Schilstra, Cornelis; Ten Haken, Randall K.; Eisbruch, Avraham; Raaijmakers, Cornelis P.J.
2010-03-15
Purpose: The dose-response relationship of the parotid gland has been described most frequently using the Lyman-Kutcher-Burman model. However, various other normal tissue complication probability (NTCP) models exist. We evaluated in a large group of patients the value of six NTCP models that describe the parotid gland dose response 1 year after radiotherapy. Methods and Materials: A total of 347 patients with head-and-neck tumors were included in this prospective parotid gland dose-response study. The patients were treated with either conventional radiotherapy or intensity-modulated radiotherapy. Dose-volume histograms for the parotid glands were derived from three-dimensional dose calculations using computed tomography scans. Stimulated salivary flow rates were measured before and 1 year after radiotherapy. A threshold of 25% of the pretreatment flow rate was used to define a complication. The evaluated models included the Lyman-Kutcher-Burman model, the mean dose model, the relative seriality model, the critical volume model, the parallel functional subunit model, and the dose-threshold model. The goodness of fit (GOF) was determined by the deviance and a Monte Carlo hypothesis test. Ranking of the models was based on Akaike's information criterion (AIC). Results: None of the models was rejected based on the evaluation of the GOF. The mean dose model was ranked as the best model based on the AIC. The TD50 in these models was approximately 39 Gy. Conclusions: The mean dose model was preferred for describing the dose-response relationship of the parotid gland.
Preliminary analysis using multi-atlas labeling algorithms for tracing longitudinal change.
Kim, Regina E Y; Lourens, Spencer; Long, Jeffrey D; Paulsen, Jane S; Johnson, Hans J
2015-01-01
Multicenter longitudinal neuroimaging has great potential to provide efficient and consistent biomarkers for research of neurodegenerative diseases and aging. In rare disease studies it is of primary importance to have a reliable tool that performs consistently for data from many different collection sites to increase study power. A multi-atlas labeling algorithm is a powerful brain image segmentation approach that is becoming increasingly popular in image processing. The present study examined the performance of multi-atlas labeling tools for subcortical identification using two in-vivo image databases: the Traveling Human Phantom (THP) and PREDICT-HD. We compared the accuracy (Dice similarity coefficient, DSC, and intraclass correlation, ICC), multicenter reliability (coefficient of variation, CV), and longitudinal reliability (volume trajectory smoothness and Akaike Information Criterion, AIC) of three automated segmentation approaches: two multi-atlas labeling tools, MABMIS and MALF, and a machine-learning-based tool, BRAINSCut. In general, MALF showed the best performance (higher DSC and ICC; lower CV and AIC; smoother trajectory), with a couple of exceptions. First, the results for the accumbens, where BRAINSCut showed higher reliability, were still premature to interpret since their validity is still in doubt (DSC < 0.7, ICC < 0.7). For the caudate, BRAINSCut presented slightly better accuracy while MALF showed a significantly smoother longitudinal trajectory. We discuss advantages and limitations of these performance variations and conclude that improved segmentation quality can be achieved using multi-atlas labeling methods. While multi-atlas labeling methods are likely to help improve overall segmentation quality, caution has to be taken when one chooses an approach, as our results suggest that segmentation outcome can vary depending on research interest. PMID:26236182
Sarasa, Mathieu; Soriguer, Ramón C; Serrano, Emmanuel; Granados, José-Enrique; Pérez, Jesús M
2014-01-01
Most studies of lateralized behaviour have to date focused on active behaviour such as sensorial perception and locomotion and little is known about lateralized postures, such as lying, that can potentially magnify the effectiveness of lateralized perception and reaction. Moreover, the relative importance of factors such as sex, age and the stress associated with social status in laterality is now a subject of increasing interest. In this study, we assess the importance of sex, age and reproductive investment in females in lying laterality in the Iberian ibex (Capra pyrenaica). Using generalized additive models under an information-theoretic approach based on the Akaike information criterion, we analyzed lying laterality of 78 individually marked ibexes. Sex, age and nursing appeared as key factors associated, in interaction and non-linearly, with lying laterality. Beyond the benefits of studying laterality with non-linear models, our results highlight the fact that a combination of static factors such as sex, and dynamic factors such as age and stress associated with parental care, are associated with postural laterality. PMID:24611891
Human benzene metabolism following occupational and environmental exposures.
Rappaport, Stephen M; Kim, Sungkyoon; Lan, Qing; Li, Guilan; Vermeulen, Roel; Waidyanatha, Suramya; Zhang, Luoping; Yin, Songnian; Smith, Martyn T; Rothman, Nathaniel
2010-03-19
We previously reported evidence that humans metabolize benzene via two enzymes, including a hitherto unrecognized high-affinity enzyme that was responsible for an estimated 73% of total urinary metabolites [sum of phenol (PH), hydroquinone (HQ), catechol (CA), E,E-muconic acid (MA), and S-phenylmercapturic acid (SPMA)] in nonsmoking females exposed to benzene at sub-saturating (ppb) air concentrations. Here, we used the same Michaelis-Menten-like kinetic models to individually analyze urinary levels of PH, HQ, CA and MA from 263 nonsmoking Chinese women (179 benzene-exposed workers and 84 control workers) with estimated benzene air concentrations ranging from less than 0.001 to 299 ppm. One model depicted benzene metabolism as a single enzymatic process (1-enzyme model) and the other as two enzymatic processes which competed for access to benzene (2-enzyme model). We evaluated model fits based upon the difference in values of Akaike's Information Criterion (ΔAIC), and we gauged the weights of evidence favoring the two models based upon the associated Akaike weights and Evidence Ratios. For each metabolite, the 2-enzyme model provided a better fit than the 1-enzyme model, with ΔAIC values decreasing in the order 9.511 for MA, 7.379 for PH, 1.417 for CA, and 0.193 for HQ. The corresponding weights of evidence favoring the 2-enzyme model (Evidence Ratios) were: 116.2:1 for MA, 40.0:1 for PH, 2.0:1 for CA and 1.1:1 for HQ. These results indicate that our earlier findings from models of total metabolites were driven largely by MA, representing the ring-opening pathway, and by PH, representing the ring-hydroxylation pathway. The predicted percentage of benzene metabolized by the putative high-affinity enzyme at an air concentration of 0.001 ppm was 88% based upon urinary MA and was 80% based upon urinary PH. As benzene concentrations increased, the respective percentages of benzene metabolized to MA and PH by the high-affinity enzyme decreased successively to 66 and
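The conversion from ΔAIC to Akaike weights and evidence ratios used in this abstract is standard; for a two-model set the evidence ratio reduces to exp(ΔAIC/2), which reproduces the ratios quoted above from the reported ΔAIC values:

```python
import math

# For a two-model comparison, the Akaike weight of the better model is
# w = 1 / (1 + exp(-dAIC/2)), and the evidence ratio w/(1-w) = exp(dAIC/2).
delta_aic = {"MA": 9.511, "PH": 7.379, "CA": 1.417, "HQ": 0.193}

def evidence_ratio(d_aic):
    return math.exp(d_aic / 2.0)

for metabolite, d in delta_aic.items():
    w = 1.0 / (1.0 + math.exp(-d / 2.0))   # Akaike weight of the 2-enzyme model
    print(f"{metabolite}: {evidence_ratio(d):5.1f}:1 (weight {w:.3f})")
# MA comes out near 116:1 and HQ near 1.1:1, matching the reported values.
```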
Inferential Statistics from Black Hispanic Breast Cancer Survival Data
Khan, Hafiz M. R.; Saxena, Anshul; Ross, Elizabeth
2014-01-01
In this paper we test statistical probability models for breast cancer survival data by race and ethnicity. Data were collected from breast cancer patients diagnosed in the United States during the years 1973–2009. We selected a stratified random sample of Black Hispanic female patients from the Surveillance Epidemiology and End Results (SEER) database to derive the statistical probability models. We used three common model building criteria, the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Deviance Information Criterion (DIC), to measure goodness of fit, and found that the survival data of Black Hispanic female patients were best fit by the exponentiated exponential probability model. A novel Bayesian method was used to derive the posterior density function for the model parameters as well as to derive the predictive inference for future response. We specifically focused on the Black Hispanic race. The Markov Chain Monte Carlo (MCMC) method was used for obtaining the summary results of the posterior parameters. Additionally, we report predictive intervals for future survival times. These findings would be of great significance in treatment planning and healthcare resource allocation. PMID:24678273
Lu, Wu-sheng; Yan, Lu-nan; Xiao, Guang-qin; Jiang, Li; Yang, Jian; Yang, Jia-yin
2014-01-01
Purpose: This study evaluates the Hangzhou criteria (HC) for patients with HCC undergoing surgical resection and identifies whether this staging system is superior to other staging systems in predicting the survival of resectable HCC. Methods: 774 HCC patients who underwent surgical resection between 2007 and 2009 at West China Hospital were enrolled retrospectively. Predictors of survival were identified using the Kaplan–Meier method and the Cox model. The disease state was staged by the HC, as well as by the TNM and BCLC staging systems. Prognostic powers were quantified using a linear trend χ² test, c-index, and the likelihood ratio (LHR) χ² test and correlated using Cox's regression model adjusted using the Akaike information criterion (AIC). Results: Serum AFP level (P = 0.02), tumor size (P<0.001), tumor number (P<0.001), portal vein invasion (P<0.001), hepatic vein invasion (P<0.001), tumor differentiation (P<0.001), and distant organ (P = 0.016) and lymph node metastasis (P<0.001) were identified as independent risk factors of survival after resection by multivariate analysis. The comparison of the different staging systems showed that the BCLC system had the best homogeneity (likelihood ratio χ² test 151.119, P<0.001), the TNM system had the best monotonicity of gradients (linear trend χ² test 137.523, P<0.001), and discriminatory ability was highest for the BCLC (AUC for 1-year mortality, 0.759) and TNM staging systems (AUCs for 3- and 5-year mortality, 0.738 and 0.731, respectively). However, based on the c-index and AIC, the HC was the most informative staging system in predicting survival (c-index 0.6866, AIC 5924.4729). Conclusions: The HC can provide important prognostic information after surgery. The HC were shown to be a promising survival predictor in a Chinese cohort of patients with resectable HCC. PMID:25133493
The Influence of Physical Variables on Whole-Stream Metabolism in an Arctic Tundra River
NASA Astrophysics Data System (ADS)
Cappelletti, C. K.; Bowden, W.
2005-05-01
We examined the influence of light, temperature, nutrients, and discharge on whole-stream metabolism (WSM) in three experimental reaches of the Kuparuk River, Alaska, using the open-system, single-station method. Ambient PO4 levels in the reference reach were ~0.05μM, while addition of phosphoric acid since 1983 in the fertilized reach and since 2004 in the ultra-fertilized reach increased PO4 levels to ~0.30μM and ~0.90μM, respectively. Among all reaches, gross primary production (GPP) was positively correlated with light, temperature, and PO4 and negatively correlated with discharge. Temperature explained most of the variance in GPP. Among all reaches, community respiration (CR) was weakly correlated with light, temperature, PO4, and discharge. However, CR showed a greater response to temperature in the fertilized reaches. Benthic respiration by mosses in the fertilized reaches responds to temperature while heterotrophic respiration in the hyporheic zone is similar in all reaches and does not respond to temperature due to thermal buffering. Light, temperature, and discharge were moderately intercorrelated. An Information-Theoretic approach using Akaike's Information Criterion (AIC) was used to examine the relative importance of each physical variable on WSM and to develop a photosynthesis model.
Effect of ultrasound pre-treatment on the drying kinetics of brown seaweed Ascophyllum nodosum.
Kadam, Shekhar U; Tiwari, Brijesh K; O'Donnell, Colm P
2015-03-01
The effect of ultrasound pre-treatment on the drying kinetics of brown seaweed Ascophyllum nodosum under hot-air convective drying was investigated. Pre-treatments were carried out at ultrasound intensity levels ranging from 7.00 to 75.78 W cm⁻² for 10 min using an ultrasonic probe system. It was observed that ultrasound pre-treatments reduced the drying time required. The shortest drying times were obtained from samples pre-treated at 75.78 W cm⁻². The fit quality of 6 thin-layer drying models was also evaluated using the coefficient of determination (R²), root mean square error (RMSE), AIC (Akaike information criterion) and BIC (Bayesian information criterion). Drying kinetics were modelled using the Newton, Henderson and Pabis, Page, Wang and Singh, Midilli et al. and Weibull models. The Newton, Wang and Singh, and Midilli et al. models showed the best fit to the experimental drying data. The color of ultrasound pre-treated dried seaweed samples was lighter compared to control samples. It was concluded that ultrasound pre-treatment can be effectively used to reduce the energy cost and drying time for drying of A. nodosum. PMID:25454823
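For least-squares fits like these thin-layer models, AIC and BIC can be computed directly from the residual sum of squares. The sketch below fits the one-parameter Newton model, MR = exp(-at), to synthetic moisture-ratio data by a crude grid search; the data, rate constant and grid are invented for illustration, not taken from the seaweed study.

```python
import math
import random

# For least-squares fits with n points and k fitted parameters:
#   AIC = n*ln(RSS/n) + 2*k,   BIC = n*ln(RSS/n) + k*ln(n)
def info_criteria(rss, n, k):
    base = n * math.log(rss / n)
    return base + 2 * k, base + k * math.log(n)

# Synthetic drying curve: moisture ratio decaying with a made-up rate 0.35/h.
random.seed(1)
times = [0.5 * i for i in range(1, 21)]                     # hours
mr = [math.exp(-0.35 * t) + random.gauss(0.0, 0.01) for t in times]

def rss_newton(a):
    """Residual sum of squares of the Newton model MR = exp(-a*t)."""
    return sum((y - math.exp(-a * t)) ** 2 for t, y in zip(times, mr))

a_hat = min((i / 1000.0 for i in range(1, 1001)), key=rss_newton)
aic, bic = info_criteria(rss_newton(a_hat), len(times), 1)
print(f"a = {a_hat:.3f}, AIC = {aic:.1f}, BIC = {bic:.1f}")
```

A richer model such as Page's MR = exp(-a t^n) would be scored the same way with k = 2, and the model with the lowest AIC/BIC preferred.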
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold for detecting an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for the 2 years between 2011 and 2012. For the modeling, periodicities were first checked using the Fast Fourier Transform, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of the minimum Akaike information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of wholly or partially condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from the three producers with the highest rates of condemnation due to mycobacteriosis. PMID:25913899
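The decomposition step this abstract describes (a weekly ensemble average profile, with a model fitted to the residual) can be sketched as follows. The counts are invented for illustration, and the ARIMA stage itself is omitted.

```python
import statistics

# Three weeks of hypothetical daily condemnation counts, Monday first.
counts = [4, 6, 5, 7, 9, 2, 1,
          5, 7, 4, 8, 10, 1, 2,
          3, 8, 6, 6, 11, 2, 0]

# Weekly ensemble average: mean count for each day-of-week slot.
profile = [statistics.mean(counts[d::7]) for d in range(7)]

# Residual series left for the ARIMA model (observed minus weekly profile).
residual = [c - profile[i % 7] for i, c in enumerate(counts)]

print("weekly profile:", profile)
# By construction, the residuals average to zero within each weekday slot.
```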
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, variety of wind speed values and wind power curtailment.
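AIC and BIC share the likelihood term but penalize parameters differently (2k versus k ln n), so for large samples BIC tends to select fewer mixture components than AIC. The log-likelihoods and parameter counts below are invented to illustrate that divergence, not taken from the wind-power data:

```python
import math

n = 8760                                           # e.g. one year of hourly data
loglik = {1: -14250.0, 2: -14198.0, 3: -14193.0}   # components -> lnL (hypothetical)
kparams = {m: 3 * m - 1 for m in loglik}           # weight, scale, shape per component
                                                   # (weights sum to 1, hence the -1)

aic = {m: -2 * loglik[m] + 2 * kparams[m] for m in loglik}
bic = {m: -2 * loglik[m] + kparams[m] * math.log(n) for m in loglik}

best_aic = min(aic, key=aic.get)   # AIC accepts the small gain from a 3rd component
best_bic = min(bic, key=bic.get)   # BIC's ln(8760) ~ 9.1 per-parameter penalty does not
print(f"AIC selects {best_aic} components, BIC selects {best_bic}")
```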
The Interdependence between Rainfall and Temperature: Copula Analyses
Cong, Rong-Gang; Brady, Mark
2012-01-01
Rainfall and temperature are important climatic inputs for agricultural production, especially in the context of climate change. However, accurate analysis and simulation of the joint distribution of rainfall and temperature are difficult due to possible interdependence between them. As one possible approach to this problem, five families of copula models are employed to model the interdependence between rainfall and temperature. Scania is a leading agricultural province in Sweden and is affected by a maritime climate. Historical climatic data for Scania is used to demonstrate the modeling process. Heteroscedasticity and autocorrelation of sample data are also considered to eliminate the possibility of observation error. The results indicate that for Scania there are negative correlations between rainfall and temperature for the months from April to July and September. The Student's t copula is found to be the most suitable for modeling the bivariate distribution of rainfall and temperature based on the Akaike information criterion (AIC) and Bayesian information criterion (BIC). Using the Student's t copula, we simulate temperature and rainfall simultaneously. The resulting models can be integrated with research on agricultural production and planning to study the effects of changing climate on crop yields. PMID:23213286
Prediction and extension of curves of distillation of vacuum residue using probability functions
NASA Astrophysics Data System (ADS)
León, A. Y.; Riaño, P. A.; Laverde, D.
2016-02-01
The use of probability functions to predict crude distillation curves has been implemented in various characterization studies for refining processes. In this work, four probability functions (Weibull extreme, Weibull, Kumaraswamy and Riazi) were analyzed for fitting distillation curves of vacuum residues. After analysing the experimental data, the Weibull extreme function was selected as the best prediction function, and its fitting capability was validated using the AIC (Akaike information criterion), BIC (Bayesian information criterion), and correlation coefficient R² as estimation criteria. To cover a wide range of compositions, fifty-five (55) vacuum residues derived from different hydrocarbon mixtures were selected. The parameters of the Weibull extreme probability function were adjusted from simple measured properties such as Conradson Carbon Residue (CCR) and compositional SARA analysis (saturates, aromatics, resins and asphaltenes). The proposed method is an appropriate tool for describing the tendency of distillation curves and offers a practical approach to the classification of vacuum residues.
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Wang, Hua; Fan, Yiren; Cao, Yingchang; Chen, Hua; Huang, Rui
2016-01-01
With more information than the conventional one-dimensional (1D) longitudinal relaxation time (T1) and transversal relaxation time (T2) spectra, a two-dimensional (2D) T1-T2 spectrum in low-field nuclear magnetic resonance (NMR) is developed to discriminate the relaxation components of fluids such as water, oil and gas in porous rock. However, the accuracy and efficiency of the T1-T2 spectrum are limited by the existing inversion algorithms and data acquisition schemes. We introduce a joint method to invert the T1-T2 spectrum, which combines iterative truncated singular value decomposition (TSVD) and a parallel particle swarm optimization (PSO) algorithm to get fast computational speed and stable solutions. We reorganize the Fredholm integral equation of the first kind with two kernels into a nonlinear optimization problem with non-negative constraints, and then solve the ill-conditioned problem by iterative TSVD. Truncating positions of the two diagonal matrices are obtained by the Akaike information criterion (AIC). With the initial values obtained by TSVD, we use a PSO with parallel structure to get the global optimal solutions with a high computational speed. We use synthetic data with different signal-to-noise ratios (SNR) to test the performance of the proposed method. The result shows that the new inversion algorithm can achieve favorable solutions for signals with SNR larger than 10, and the inversion precision increases with the decrease of the components of the porous rock.
The kinetics of fluoride sorption by zeolite: Effects of cadmium, barium and manganese.
Cai, Qianqian; Turner, Brett D; Sheng, Daichao; Sloan, Scott
2015-01-01
Industrial wastewaters often consist of a complex chemical cocktail with treatment of target contaminants complicated by adverse chemical reactions. The impact of metal ions (Cd²⁺, Ba²⁺ and Mn²⁺) on the kinetics of fluoride removal from solution by natural zeolite was investigated. In order to better understand the kinetics, the pseudo-second order (PSO), Hill (Hill 4 and Hill 5) and intra-particle diffusion (IPD) models were applied. Model fitting was compared using the Akaike Information Criterion (AIC) and the Schwarz Bayesian Information Criterion (BIC). The Hill models (Hill 4 and Hill 5) were found to be superior in describing the fluoride removal processes due to the sigmoidal nature of the kinetics. Results indicate that the presence of Mn (100 mg L⁻¹) and Cd (100 mg L⁻¹) respectively increases the rate of fluoride sorption by a factor of ~28.3 and ~10.9, the maximum sorption capacity is increased by ~2.2 and ~1.7. The presence of Ba (100 mg L⁻¹) initially inhibited fluoride removal and very poor fits were obtained for all models. Fitting was best described with a biphasic sigmoidal model with the degree of inhibition decreasing with increasing temperature suggesting that at least two processes are involved with fluoride sorption onto natural zeolite in the presence of Ba. PMID:25909159
Flexible and fixed mathematical models describing growth patterns of chukar partridges
NASA Astrophysics Data System (ADS)
Aygün, Ali; Narinç, Doǧan
2016-04-01
In animal science, the nonlinear regression models used for growth curve analysis are separated into two groups, fixed and flexible, according to their point of inflection. The aims of this study were to compare fixed and flexible growth functions and to determine the best-fit model for the growth data of chukar partridges. With this aim, the growth data of partridges were modelled with widely used fixed models, such as Gompertz, Logistic and von Bertalanffy, as well as flexible functions, such as Richards, Janoschek and Levakovich. To evaluate the growth functions, the R2 (coefficient of determination), adjusted R2 (adjusted coefficient of determination), MSE (mean square error), AIC (Akaike's information criterion) and BIC (Bayesian information criterion) goodness-of-fit criteria were used. According to these criteria, the best-fit model for the chukar partridge growth data is the Janoschek function, which has a flexible structure. The Janoschek model is important not only because it has a higher number of parameters with biological meaning than the other functions (the mature weight and initial weight parameters), but also because it had not previously been used in modelling chukar partridge growth.
Evaluating Key Watershed Components of Low Flow Regimes in New England Streams.
Morrison, Alisa C; Gold, Arthur J; Pelletier, Marguerite C
2016-05-01
Water resource managers seeking to optimize stream ecosystem services and abstractions of water from watersheds need an understanding of the importance of land use, physical and climatic characteristics, and hydrography on different low flow components of stream hydrographs. Within 33 USGS gaged watersheds of southern New England, we assessed relationships between watershed variables and a set of low flow parameters by using an information-theoretic approach. The key variables identified by the Akaike Information Criterion (AIC) weighting factors as having positive relationships with low flow events included percent stratified drift, mean elevation, drainage area, and mean August precipitation. The extent of wetlands in the watershed was negatively related to low flow magnitudes. Of the various land use variables, the percentage of developed land was found to have the highest importance and a negative relationship with low flow magnitudes, but was less important than wetlands and physical and climatic features. Our results suggest that management practices aimed at sustaining low flows in fluvial systems can benefit from attention to specific watershed features. We draw attention to the finding that streams located in watersheds with high proportions of wetlands may require more stringent approaches to withdrawals to sustain fluvial ecosystems during drought periods, particularly in watersheds with extensive development and limited deposits of stratified drift. PMID:27136170
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes. PMID:27600968
Multimodel Predictive System for Carbon Dioxide Solubility in Saline Formation Waters
Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K
2013-02-05
The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO2 solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304–433 K, pressure range 74–500 bar, and salt concentration range 0–7 m (NaCl equivalent), using 173 published CO2 solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO2 solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO2 solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
Modeling Dark Energy Through AN Ising Fluid with Network Interactions
NASA Astrophysics Data System (ADS)
Luongo, Orlando; Tommasini, Damiano
2014-12-01
We show that the dark energy (DE) effects can be modeled by using an Ising perfect fluid with network interactions, whose low redshift equation of state (EoS), i.e. ω0, becomes ω0 = -1 as in the ΛCDM model. In our picture, DE is characterized by a barotropic fluid on a lattice in the equilibrium configuration. Thus, mimicking the spin interaction by replacing the spin variable with an occupational number, the pressure naturally becomes negative. We find that the corresponding EoS mimics the effects of a variable DE term, whose limiting case reduces to the cosmological constant Λ. This permits us to avoid introducing a vacuum energy as the DE source by hand, alleviating the coincidence and fine tuning problems. We find fairly good cosmological constraints, by performing three tests with supernovae Ia (SNeIa), baryonic acoustic oscillation (BAO) and cosmic microwave background (CMB) measurements. Finally, we apply the Akaike information criterion (AIC) and Bayesian information criterion (BIC) selection criteria, showing that our model is statistically favored with respect to the Chevallier-Polarsky-Linder (CPL) parametrization.
NASA Astrophysics Data System (ADS)
Sun, W.; Chiang, Y.; Chang, F.
2010-12-01
Evaporation is a substantial factor in the hydrological cycle and an important reference for the management of both water resources and agricultural irrigation. In general, evaporation can be directly measured by an evaporation pan; as for its estimation, traditional empirical equations are not very accurate. Therefore, in this study Dynamic Factor Analysis (DFA) is first applied to investigating the interaction and the tendency of each gauging station. Additionally, the analysis can effectively establish the common trend at each gauging station by evaluating the corresponding AIC (Akaike Information Criterion) values. Furthermore, meteorological factors such as relative humidity and temperature are also examined to identify the explanatory variables most closely related to evaporation. These variables are then used as inputs to a Back-Propagation Neural Network (BPNN) and are expected to provide meaningful information for successfully estimating evaporation. The applicability and reliability of the BPNN was demonstrated by comparing its performance with that of an empirical formula. Keywords: Evaporation, Dynamic Factor Analysis, Artificial Neural Network.
Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging
Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.
2006-03-31
One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.
Selecting best-fit models for estimating the body mass from 3D data of the human calcaneus.
Jung, Go-Un; Lee, U-Young; Kim, Dong-Ho; Kwak, Dai-Soon; Ahn, Yong-Woo; Han, Seung-Ho; Kim, Yi-Suk
2016-05-01
Body mass (BM) estimation can facilitate the interpretation of skeletal materials in terms of the individual's body size and physique in forensic anthropology. However, few metric studies have tried to estimate BM by focusing on prominent biomechanical properties of the calcaneus. The purpose of this study was to prepare best-fit models for estimating BM from the 3D human calcaneus by two major linear regression approaches (the heuristic statistical and all-possible-regressions techniques) and to validate the models through predicted residual sum of squares (PRESS) statistics. A metric analysis was conducted based on 70 human calcaneus samples (29 males and 41 females) taken from 3D models in the Digital Korean Database, and 10 variables were measured for each sample. Three best-fit models were postulated by F-statistics, Mallows' Cp, and the Akaike information criterion (AIC) and Bayes information criterion (BIC) for the available candidate models. The most accurate regression model yielded the lowest %SEE and an R(2) of 0.843. Through the application of leave-one-out cross-validation, the predictive power showed a high level of validation accuracy. This study also confirms that the equations for estimating BM using 3D models of the human calcaneus will be helpful for establishing identification in forensic cases with consistent reliability. PMID:26970867
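The PRESS statistic used for validation in studies like this is a leave-one-out quantity: each observation is predicted from a model fitted without it. A generic sketch with hypothetical toy data (not the calcaneus measurements):

```python
import numpy as np

def press(X, y):
    """Leave-one-out PRESS statistic for an ordinary least-squares fit:
    the sum of squared prediction errors, each point predicted from a
    fit that excludes it."""
    n = len(y)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i          # drop observation i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        total += (y[i] - X[i] @ beta) ** 2
    return total

# Toy data: y roughly linear in x (hypothetical values).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
X = np.column_stack([np.ones_like(x), x])  # intercept + slope design
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
print(round(float(press(X, y)), 3))
```

A smaller PRESS indicates better out-of-sample prediction; comparing it against an intercept-only design shows the gain from including the predictor.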
Calu-3 model under AIC and LCC conditions and application for protein permeability studies.
Marušić, Maja; Djurdjevič, Ida; Drašlar, Kazimir; Caserman, Simon
2014-01-01
Broad area of respiratory epithelium with mild surface conditions is an attractive possibility when trans-mucosal delivery of protein drugs is considered. A mucus and cellular barrier of respiratory epithelium can be modelled in vitro by Calu-3 cell line. We have monitored morphology and barrier properties of Calu-3 culture on permeable supports while developing into liquid covered or air interfaced and mucus lined cellular barrier. Besides morphological differences, cultures differed in electrical resistance and permeability to proteins as well. The accelerated permeability to proteins in these models, due to permeability modulator MP C16, was examined. The effect on electrical resistance of cellular layer was rapid in both cultures suggesting easy access of MP C16 to cells even though its overall impact on cell permeability was strongly reduced in mucus covered culture. Differences in properties of the two models enable better understanding of protein transmucosal permeability, suggesting route of transport and MP C16 modulator action. PMID:24664333
INFORMATION COLLECTION RULE INFORMATION SYSTEM
Resource Purpose: The Information Collection Rule (ICR) Information System was developed to store and distribute the information collected in the ICR for DBPs and microbiological research. It is a research database. The information system consists of four parts: laboratory...
2014-05-01
Effective disaster management requires systems for data acquisition and information management that enable responders to rapidly collect, process, interpret, distribute, and access the data and information required for disaster management. Effective information sharing depends on the types of users, the type of damage, alterations of the functional status of the affected society, and how the information is structured. Those in need of information should be provided with the information necessary for their tasks and not be overloaded with unnecessary information that could serve as a distraction. Such information systems must be designed and exercised. To disseminate and share data with the relevant users, all disaster responses must include effective and reliable information systems. This information includes that acquired from repeated assessments in terms of available and needed human and material resources, which resources no longer are needed, and the status of the relief and recovery workers. It is through this information system that vital decisions are made that are congruent with the overall picture as perceived by the most relevant coordination and control centre. It is essential that information systems be designed and tested regularly as part of preparedness. Such systems must have the capacity to acquire, classify, and present information in an organised and useful manner. PMID:24785813
NASA Astrophysics Data System (ADS)
Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.
2012-04-01
In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with
ERIC Educational Resources Information Center
Anderson, Byron
2007-01-01
As communication technologies change, so do libraries. Library instruction programs are now focused on teaching information literacy, a term that may just as well be referred to as information "literacies." The new media age involves information in a wide variety of mediums. Educators everywhere are realizing media's power to communicate and…
ERIC Educational Resources Information Center
Graves, Eric
2013-01-01
This dissertation introduces the concept of Information Integrity, which is the detection and possible correction of information manipulation by any intermediary node in a communication system. As networks continue to grow in complexity, information theoretic security has failed to keep pace. As a result many parties who want to communicate,…
Population demographics of two local South Carolina mourning dove populations
McGowan, D.P., Jr.; Otis, D.L.
1998-01-01
The mourning dove (Zenaida macroura) call-count index had a significant (P 2,300 doves and examined >6,000 individuals during harvest bag checks. An age-specific band recovery model with time- and area-specific recovery rates, and constant survival rates, was chosen for estimation via Akaike's Information Criterion (AIC), likelihood ratio, and goodness-of-fit criteria. After-hatching-year (AHY) annual survival rate was 0.359 (SE = 0.056), and hatching-year (HY) annual survival rate was 0.118 (SE = 0.042). Average estimated recruitment per adult female into the prehunting season population was 3.40 (SE = 1.25) and 2.32 (SE = 0.46) for the 2 study areas. Our movement data support earlier hypotheses of nonmigratory breeding and harvested populations in South Carolina. Low survival rates and estimated population growth rate in the study areas may be representative only of small-scale areas that are heavily managed for dove hunting. Source-sink theory was used to develop a model of region-wide populations that is composed of source areas with positive growth rates and sink areas of declining growth. We suggest management of mourning doves in the Southeast might benefit from improved understanding of local population dynamics, as opposed to regional-scale population demographics.
Copulation patterns in captive hamadryas baboons: a quantitative analysis.
Nitsch, Florian; Stueckle, Sabine; Stahl, Daniel; Zinner, Dietmar
2011-10-01
For primates, as for many other vertebrates, copulation which results in ejaculation is a prerequisite for reproduction. The probability of ejaculation is affected by various physiological and social factors, for example reproductive state of male and female and operational sex-ratio. In this paper, we present quantitative and qualitative data on patterns of sexual behaviour in a captive group of hamadryas baboons (Papio hamadryas), a species with a polygynous-monandric mating system. We observed more than 700 copulations and analysed factors that can affect the probability of ejaculation. Multilevel logistic regression analysis and Akaike's information criterion (AIC) model selection procedures revealed that the probability of successful copulation increased as the size of female sexual swellings increased, indicating increased probability of ovulation, and as the number of females per one-male unit (OMU) decreased. In contrast, occurrence of female copulation calls, sex of the copulation initiator, and previous male aggression toward females did not affect the probability of ejaculation. Synchrony of oestrus cycles also had no effect (most likely because the sample size was too small). We also observed 29 extra-group copulations by two non-adult males. Our results indicate that male hamadryas baboons copulated more successfully around the time of ovulation and that males in large OMUs with many females may be confronted by time or energy-allocation problems. PMID:21710159
Ru, Dafu; Mao, Kangshan; Zhang, Lei; Wang, Xiaojuan; Lu, Zhiqiang; Sun, Yongshuai
2016-06-01
Hybridization and introgression are believed to play important roles in plant evolution. However, few empirical studies have been designed to clarify the ways in which these processes complicate taxonomic delimitation. Recent phylogenetic studies based on a number of different DNA fragments have indicated that Picea brachytyla in the eastern Qinghai-Tibet Plateau is polyphyletic, a finding that contrasts with traditional taxonomy based on morphological traits. We aimed to test this conflict using transcriptomic data from 26 trees collected from multiple localities for this and related species. Our phylogenomic analyses suggest that the sampled trees of P. brachytyla cluster into two distinct lineages corresponding to the two taxonomically recognized intraspecific varieties: var. brachytyla and var. complanata. However, var. complanata nested within Picea likiangensis and was sister to one of its three varieties, while var. brachytyla comprised an isolated lineage. The polyphyletic origin hypothesis was further supported by likelihood tree comparisons using Akaike's information criterion (AIC) and by coalescent analyses under the SNAPP model. However, our ABBA-BABA and ∂a∂i analyses suggest that gene flow between these two independently evolved lineages has been extensive and bidirectional. Introgression, as well as parallel evolution in the arid habitats common to both lineages, may have given rise to their morphological similarity. Our study highlights the importance of genomic evidence and the use of newly developed coalescent analysis methods for clarifying the evolutionary complexity of certain plant taxa. PMID:27093071
Land-use and land-cover change in Western Ghats of India.
Kale, Manish P; Chavan, Manoj; Pardeshi, Satish; Joshi, Chitiz; Verma, Prabhakar A; Roy, P S; Srivastav, S K; Srivastava, V K; Jha, A K; Chaudhari, Swapnil; Giri, Yogesh; Krishna Murthy, Y V N
2016-07-01
The Western Ghats (WG) of India, one of the hottest biodiversity hotspots in the world, has witnessed major land-use and land-cover (LULC) change in recent times. The present research was aimed at studying the patterns of LULC change in WG during 1985-1995-2005, understanding the major drivers that caused such change, and projecting the future (2025) spatial distribution of forest using coupled logistic regression and Markov model. The International Geosphere Biosphere Program (IGBP) classification scheme was mainly followed in LULC characterization and change analysis. The single-step Markov model was used to project the forest demand. The spatial allocation of such forest demand was based on the predicted probabilities derived through logistic regression model. The R statistical package was used to set the allocation rules. The projection model was selected based on Akaike information criterion (AIC) and area under receiver operating characteristic (ROC) curve. The actual and projected areas of forest in 2005 were compared before making projection for 2025. It was observed that forest degradation has reduced from 1985-1995 to 1995-2005. The study obtained important insights about the drivers and their impacts on LULC simulations. To the best of our knowledge, this is the first attempt where projection of future state of forest in entire WG is made based on decadal LULC and socio-economic datasets at the Taluka (sub-district) level. PMID:27256392
Ramey, Andrew M.; Ely, Craig R.; Schmutz, Joel A.; Pearce, John M.; Heard, Darryl J.
2012-01-01
Tundra swans (Cygnus columbianus) are broadly distributed in North America, use a wide variety of habitats, and exhibit diverse migration strategies. We investigated patterns of hematozoa infection in three populations of tundra swans that breed in Alaska using satellite tracking to infer host movement and molecular techniques to assess the prevalence and genetic diversity of parasites. We evaluated whether migratory patterns and environmental conditions at breeding areas explain the prevalence of blood parasites in migratory birds by contrasting the fit of competing models formulated in an occupancy modeling framework and calculating the detection probability of the top model using the Akaike Information Criterion (AIC). We described genetic diversity of blood parasites in each population of swans by calculating the number of unique parasite haplotypes observed. Blood parasite infection was significantly different between populations of Alaska tundra swans, with the highest estimated prevalence occurring among birds occupying breeding areas with lower mean daily wind speeds and higher daily summer temperatures. Models including covariates of wind speed and temperature during summer months at breeding grounds better predicted hematozoa prevalence than those that included annual migration distance or duration. Genetic diversity of blood parasites in populations of tundra swans appeared to be related to hematozoa prevalence. Our results suggest ecological conditions at breeding grounds may explain differences of hematozoa infection among populations of tundra swans that breed in Alaska.
Gutreuter, S.; Vallazza, J.M.; Knights, B.C.
2006-01-01
We provide the first evidence for chronic effects of disturbance by commercial vessels on the spatial distribution and abundance of fishes in the channels of a large river. Most of the world's large rivers are intensively managed to satisfy increasing demands for commercial shipping, but little research has been conducted to identify and alleviate any adverse consequences of commercial navigation. We used a combination of a gradient sampling design incorporating quasicontrol areas with Akaike's information criterion (AIC)-weighted model averaging to estimate effects of disturbances by commercial vessels on fishes in the upper Mississippi River. Species density, which mainly measured species evenness, decreased with increasing disturbance frequency. The most abundant species - gizzard shad (Dorosoma cepedianum) and freshwater drum (Aplodinotus grunniens) - and the less abundant shovelnose sturgeon (Scaphirhynchus platorhynchus) and flathead catfish (Pylodictis olivaris) were seemingly unaffected by traffic disturbance. In contrast, the relative abundance of the toothed herrings (Hiodon spp.), redhorses (Moxostoma spp.), buffaloes (Ictiobus spp.), channel catfish (Ictalurus punctatus), sauger (Sander canadensis), and white bass (Morone chrysops) decreased with increasing traffic in the navigation channel. We hypothesized that the combination of alteration of hydraulic features within navigation channels and rehabilitation of secondary channels might benefit channel-dependent species. © 2006 NRC.
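AIC-weighted model averaging of the kind used here rests on Akaike weights computed from AIC differences across the candidate set. A minimal sketch with hypothetical AIC values and coefficient estimates (not the study's models):

```python
import math

def akaike_weights(aics):
    """Akaike weights: the relative likelihood of each model,
    normalised over the candidate set."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical candidate set: three models with these AIC scores.
w = akaike_weights([100.0, 102.0, 110.0])

# Model-averaged estimate of a coefficient shared by all three models
# (hypothetical per-model estimates).
betas = [0.50, 0.44, 0.20]
beta_avg = sum(wi * bi for wi, bi in zip(w, betas))
print([round(wi, 3) for wi in w], round(beta_avg, 3))
```

Models within ~2 AIC units of the best retain substantial weight, so the averaged estimate is dominated by the well-supported models rather than any single "winner".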
Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J.; Munch, Stephan; Skaug, Hans J.
2014-01-01
The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish. PMID:25211603
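The von Bertalanffy growth function referred to above has a standard closed form. A sketch with hypothetical parameter values (not the marble trout estimates):

```python
import math

def von_bertalanffy(t, l_inf, k, t0=0.0):
    """von Bertalanffy growth: size at age t, with asymptotic size l_inf,
    growth-rate parameter k, and theoretical age at zero size t0."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Hypothetical individuals sharing l_inf but differing in k,
# illustrating individual variation in growth rate.
for k in (0.3, 0.5):
    sizes = [round(von_bertalanffy(t, l_inf=30.0, k=k), 1) for t in range(1, 6)]
    print(k, sizes)
```

Size approaches l_inf asymptotically; a larger k means the individual reaches a given fraction of its asymptotic size at a younger age, which is how rank differences in growth trajectories arise.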
NASA Astrophysics Data System (ADS)
Wang, Y.; Jia, B.; Xie, Y.
2015-12-01
The Bermuda High (BH) is a key driver of large-scale circulation patterns for Southeastern Texas and other Gulf coast states in summer, with the expected influence on surface ozone through its modulation of marine air inflow with lower ozone background from the Gulf of Mexico. We develop a statistical relationship through multiple linear regression (MLR) to quantify the impact of the BH variations on surface ozone variability during the ozone season in the Houston-Galveston-Brazoria (HGB) area, a major ozone nonattainment region on the Gulf Coast. We find that the variability in BH location, represented by a longitude index of the BH west edge (BH-Lon) in the MLR, explains 50-60% of the year-to-year variability in monthly mean ozone over HGB for June and July during 1998-2013, and the corresponding figure for Aug and Sep is 20%. An additional 30-40% of the ozone variability for Aug and Sep can be explained by the variability in BH strength, represented by two BH intensity indices (BHI) in the MLR, but its contribution is only 5% for June and not significant for July. Through stepwise regression based on the Akaike Information Criterion (AIC), the MLR model captures 58-72% of monthly ozone variability during Jun-Sep with a cross-validation R2 of 0.5. This observation-derived statistical relationship will be valuable to constrain model simulations of ozone variability attributable to large-scale circulation patterns.
ToPS: a framework to manipulate probabilistic models of sequence data.
Kashiwabara, André Yoshiaki; Bonadio, Igor; Onuchic, Vitor; Amado, Felipe; Mathias, Rafael; Durham, Alan Mitchell
2013-01-01
Discrete Markovian models can be used to characterize patterns in sequences of values and have many applications in biological sequence analysis, including gene prediction, CpG island detection, alignment, and protein profiling. We present ToPS, a computational framework that can be used to implement different applications in bioinformatics analysis by combining eight kinds of models: (i) independent and identically distributed process; (ii) variable-length Markov chain; (iii) inhomogeneous Markov chain; (iv) hidden Markov model; (v) profile hidden Markov model; (vi) pair hidden Markov model; (vii) generalized hidden Markov model; and (viii) similarity based sequence weighting. The framework includes functionality for training, simulation and decoding of the models. Additionally, it provides two methods to help parameter setting: Akaike and Bayesian information criteria (AIC and BIC). The models can be used stand-alone, combined in Bayesian classifiers, or included in more complex, multi-model, probabilistic architectures using GHMMs. In particular the framework provides a novel, flexible, implementation of decoding in GHMMs that detects when the architecture can be traversed efficiently. PMID:24098098
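The AIC/BIC order-selection idea for Markovian sequence models can be sketched independently of ToPS (this is a toy illustration, not the ToPS API): fit fixed-order chains by transition counts and penalise the parameter count.

```python
import math
from collections import Counter

def markov_bic(seq, order):
    """BIC for a fixed-order Markov chain fitted to `seq` by maximum
    likelihood (transition counts)."""
    states = sorted(set(seq))
    ctx, trans = Counter(), Counter()
    for i in range(order, len(seq)):
        c = tuple(seq[i - order:i])        # preceding context
        ctx[c] += 1
        trans[(c, seq[i])] += 1
    loglik = sum(n * math.log(n / ctx[c]) for (c, s), n in trans.items())
    n_obs = len(seq) - order
    n_params = (len(states) ** order) * (len(states) - 1)
    return -2.0 * loglik + n_params * math.log(n_obs)

seq = "ab" * 100      # perfectly predictable from one symbol of context
bics = {k: markov_bic(seq, k) for k in (0, 1, 2)}
print(min(bics, key=bics.get))  # → 1 (order 1 wins)
```

Order 1 fits this sequence perfectly, and BIC's parameter penalty rules out the needlessly larger order-2 chain; the same trade-off drives model choice in real sequence analysis.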
Estimating annual survival and movement rates of adults within a metapopulation of roseate terns
Spendelow, J.A.; Nichols, J.D.; Nisbet, I.C.T.; Hays, H.; Cormons, G.D.; Burger, J.; Safina, C.; Hines, J.E.; Gochfeld, M.
1995-01-01
Several multistratum capture-recapture models were used to test various hypotheses about possible geographic and temporal variation in survival, movement, and recapture/resighting probabilities of 2399 adult Roseate Terns (Sterna dougallii) color-banded from 1988 to 1992 at the sites of the four largest breeding colonies of this species in the northeastern USA. Linear-logistic ultrastructural models also were developed to investigate possible correlates of geographic variation in movement probabilities. Based on goodness-of-fit tests and comparisons of Akaike's Information Criterion (AIC) values, the fully parameterized model (Model A), with time- and location-specific survival, movement, and capture probabilities, was selected as the most appropriate model for this metapopulation structure. With almost all movement accounted for, on average >90% of the surviving adults from each colony site returned to the same site the following year. Variations in movement probabilities were more closely associated with the identity of the destination colony site than with either the identity of the colony site of origin or the distance between colony sites. The average annual survival estimates (0.74-0.84) of terns from all four sites indicate a high rate of annual mortality relative to that of other species of marine birds.
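AIC comparisons across candidate models, as used here, are often converted into Akaike weights, which express the relative support for each model and underpin the model averaging discussed elsewhere in this collection. A minimal sketch with hypothetical AIC values:

```python
import math

def akaike_weights(aics):
    # w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2), Delta_i = AIC_i - min AIC
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# hypothetical AIC values for three capture-recapture models
weights = akaike_weights([100.0, 102.0, 110.0])
```

A model trailing the best by 2 AIC units retains roughly exp(-1) ≈ 0.37 times the best model's support, while one trailing by 10 units is effectively ruled out.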
Accretion Timescales from Kepler AGN
NASA Astrophysics Data System (ADS)
Kasliwal, Vishal P.; Vogeley, Michael S.; Richards, Gordon T.
2015-01-01
We constrain AGN accretion disk variability mechanisms using the optical light curves of AGN observed by Kepler. AGN optical fluxes are known to exhibit stochastic variations on timescales of hours, days, months and years. The excellent sampling properties of the original Kepler mission - high S/N ratio (~10^5), short sampling interval (30 minutes), and long sampling duration (~3.5 years) - allow for a detailed examination of the differences between the variability processes present in various sub-types of AGN such as Type I and II Seyferts, QSOs, and Blazars. We model the flux data using the Auto-Regressive Moving Average (ARMA) representation from the field of time series analysis. We use the Kalman filter to determine optimal model parameters and the Akaike Information Criterion (AIC) to select the optimal model. We find that optical light curves from Kepler AGN cannot be fit by low-order statistical models such as the popular AR(1) process or damped random walk. Kepler light curves exhibit complicated power spectra and are better modeled by higher-order ARMA processes. We find that Kepler AGN typically exhibit power spectra that change from a bending power law (PSD ~ 1/f^α) to a flat power spectrum on timescales in the range of ~5-100 days, consistent with the orbital and thermal timescales of a typical 10^7 solar-mass black hole.
NASA Astrophysics Data System (ADS)
Snedden, Gregg A.; Steyer, Gregory D.
2013-02-01
Understanding plant community zonation along estuarine stress gradients is critical for effective conservation and restoration of coastal wetland ecosystems. We related the presence of plant community types to estuarine hydrology at 173 sites across coastal Louisiana. Percent relative cover by species was assessed at each site near the end of the growing season in 2008, and hourly water level and salinity were recorded at each site Oct 2007-Sep 2008. Nine plant community types were delineated with k-means clustering, and indicator species were identified for each of the community types with indicator species analysis. An inverse relation between salinity and species diversity was observed. Canonical correspondence analysis (CCA) effectively segregated the sites across ordination space by community type, and indicated that salinity and tidal amplitude were both important drivers of vegetation composition. Multinomial logistic regression (MLR) and Akaike's Information Criterion (AIC) were used to predict the probability of occurrence of the nine vegetation communities as a function of salinity and tidal amplitude, and probability surfaces obtained from the MLR model corroborated the CCA results. The weighted kappa statistic, calculated from the confusion matrix of predicted versus actual community types, was 0.7 and indicated good agreement between observed community types and model predictions. Our results suggest that models based on a few key hydrologic variables can be valuable tools for predicting vegetation community development when restoring and managing coastal wetlands.
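The weighted kappa reported here can be computed directly from the confusion matrix of predicted versus actual community types. A minimal sketch with linear disagreement weights (the paper does not state its weighting scheme, so that choice is an assumption) and an invented 2x2 matrix:

```python
def weighted_kappa(cm):
    # Cohen's weighted kappa with linear weights w_ij = |i - j| / (k - 1)
    k = len(cm)
    n = sum(sum(row) for row in cm)
    row = [sum(r) for r in cm]
    col = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: abs(i - j) / (k - 1)
    # observed vs chance-expected weighted disagreement
    obs = sum(w(i, j) * cm[i][j] / n for i in range(k) for j in range(k))
    exp = sum(w(i, j) * row[i] * col[j] / n**2 for i in range(k) for j in range(k))
    return 1 - obs / exp

# hypothetical confusion matrix: rows = actual type, columns = predicted type
kappa = weighted_kappa([[8, 2], [1, 9]])
```

Perfect agreement (all counts on the diagonal) gives kappa = 1, chance-level agreement gives 0, and values near 0.7, as in the study, indicate good agreement.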
IDF relationships using bivariate copula for storm events in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Ariff, N. M.; Jemain, A. A.; Ibrahim, K.; Wan Zin, W. Z.
2012-11-01
Intensity-duration-frequency (IDF) curves are used in many hydrologic designs for water management and flood prevention. The IDF curves available in Malaysia are those obtained from a univariate analysis approach, which only considers the intensity of rainfall at fixed time intervals. As several rainfall variables are correlated with each other, such as intensity and duration, this paper aims to derive IDF points for storm events in Peninsular Malaysia by means of bivariate frequency analysis. This is achieved by modeling the relationship between storm intensities and durations using the copula method. Four types of copulas, namely the Ali-Mikhail-Haq (AMH), Frank, Gaussian and Farlie-Gumbel-Morgenstern (FGM) copulas, are considered because the correlation between storm intensity, I, and duration, D, is negative and these copulas are appropriate when the relationship between the variables is negative. The correlations are estimated by means of Kendall's τ. The analysis was performed on twenty rainfall stations with hourly data across Peninsular Malaysia. Using Akaike's Information Criterion (AIC) to test goodness of fit, both the Frank and Gaussian copulas are found to be suitable to represent the relationship between I and D. The IDF points found by the copula method are compared to the IDF curves yielded by the typical empirical IDF formula of the univariate approach. This study indicates that storm intensities obtained from both methods are in agreement with each other for any given storm duration and for various return periods.
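Of the four copula families considered, the FGM copula has the simplest closed form and illustrates how negative intensity-duration dependence enters through a single parameter; a minimal sketch (for FGM, Kendall's τ = 2θ/9, so negative θ encodes negative correlation):

```python
def fgm_copula(u, v, theta):
    # Farlie-Gumbel-Morgenstern copula on [0,1]^2, theta in [-1, 1]
    return u * v * (1 + theta * (1 - u) * (1 - v))

def fgm_kendall_tau(theta):
    # closed-form Kendall's tau for the FGM family
    return 2 * theta / 9

# negative theta models the negative intensity-duration correlation:
# the joint probability falls below the independence value u * v
joint = fgm_copula(0.5, 0.5, -0.8)
```

The uniform-margin property C(u, 1) = u holds for any θ, which is what makes a copula a pure description of dependence, separate from the marginal intensity and duration distributions.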
Negative binomial models for abundance estimation of multiple closed populations
Boyce, Mark S.; MacKenzie, Darry I.; Manly, Bryan F.J.; Haroldson, Mark A.; Moody, David W.
2001-01-01
Counts of uniquely identified individuals in a population offer opportunities to estimate abundance. However, for various reasons such counts may be burdened by heterogeneity in the probability of being detected. Theoretical arguments and empirical evidence demonstrate that the negative binomial distribution (NBD) is a useful characterization for counts from biological populations with heterogeneity. We propose a method that focuses on estimating the size of multiple populations simultaneously, using a suite of models derived from the NBD. We used this approach to estimate the number of female grizzly bears (Ursus arctos) with cubs-of-the-year in the Yellowstone ecosystem for each year, 1986-1998. Akaike's Information Criterion (AIC) indicated that a negative binomial model with a constant level of heterogeneity across all years was best for characterizing the sighting frequencies of female grizzly bears. A lack-of-fit test indicated the model adequately described the collected data. Bootstrap techniques were used to estimate standard errors and 95% confidence intervals. We provide a Monte Carlo technique, which confirms that the Yellowstone ecosystem grizzly bear population increased during the period 1986-1998.
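The heterogeneity argument can be illustrated by comparing Poisson and negative binomial fits to overdispersed counts via AIC. The sighting frequencies below are invented for illustration, and the NB parameters come from method-of-moments matching rather than the paper's likelihood machinery:

```python
import math

def poisson_loglik(counts, lam):
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in counts)

def nb_loglik(counts, r, p):
    # NB pmf: Gamma(x+r) / (Gamma(r) x!) * p^r * (1-p)^x
    return sum(math.lgamma(x + r) - math.lgamma(r) - math.lgamma(x + 1)
               + r * math.log(p) + x * math.log(1 - p) for x in counts)

counts = [0, 0, 1, 1, 1, 2, 2, 3, 5, 9]   # hypothetical overdispersed sightings
mean = sum(counts) / len(counts)
var = sum((x - mean) ** 2 for x in counts) / len(counts)
# method-of-moments NB: var = mean + mean^2 / r, p = r / (r + mean)
r = mean ** 2 / (var - mean)
p = r / (r + mean)
aic_pois = -2 * poisson_loglik(counts, mean) + 2 * 1   # one parameter
aic_nb = -2 * nb_loglik(counts, r, p) + 2 * 2          # two parameters
```

Because the sample variance far exceeds the mean, the extra NB parameter pays for itself and AIC favors the negative binomial, mirroring the heterogeneity finding above.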
Chu, Liang-Hui; Chen, Bor-Sen
2008-01-01
Background Cancer is caused by genetic abnormalities, such as mutations of oncogenes or tumor suppressor genes, which alter downstream signal transduction pathways and protein-protein interactions. Comparisons of the interactions of proteins in cancerous and normal cells can shed light on the mechanisms of carcinogenesis. Results We constructed initial networks of protein-protein interactions involved in the apoptosis of cancerous and normal cells by use of two human yeast two-hybrid data sets and four online databases. Next, we applied a nonlinear stochastic model, maximum likelihood parameter estimation, and the Akaike Information Criterion (AIC) to eliminate false-positive protein-protein interactions in our initial protein interaction networks by use of microarray data. Comparisons of the networks of apoptosis in HeLa (human cervical carcinoma) cells and in normal primary lung fibroblasts provided insight into the mechanism of apoptosis and allowed identification of potential drug targets. The potential targets include BCL2, caspase-3 and TP53. Our comparison of cancerous and normal cells also allowed derivation of several party hubs and date hubs in the human protein-protein interaction networks involved in caspase activation. Conclusion Our method allows identification of cancer-perturbed protein-protein interactions involved in apoptosis and identification of potential molecular targets for development of anti-cancer drugs. PMID:18590547
Spatiotemporal analysis of aquifers salinization in coastal area of Yunlin, Taiwan
NASA Astrophysics Data System (ADS)
Chen, P.-C.; Tan, Y.-C.
2012-04-01
In the past, the temporal and spatial characteristics of groundwater quality were often discussed separately. This study adopts regionalized variables theory and describes water quality in terms of its joint structure in time and space to assess conditions in Yunlin. We applied the Quantum Bayesian Maximum Entropy Toolbox (QtBME), a spatiotemporal statistics extension that can estimate and map non-stationary, non-homogeneous spatiotemporal processes within the Quantum GIS (QGIS) software platform. Assuming that a spatiotemporal process can be divided into high- and low-frequency components, a kernel smoothing method is used to separate the original process into a deterministic trend and a stationary, homogeneous spatiotemporal process. The covariance model for the high-frequency process is selected objectively by particle swarm optimization (PSO) and Akaike's information criterion (AIC). The Bayesian maximum entropy method is then applied to spatiotemporal mapping of the variable of interest. Using QtBME, we estimated the extent of aquifer salinization in the Yunlin coastal area from 1992 to 2010, and finally investigated the degree to which rainfall affects aquifer salinization.
Determination of Original Infection Source of H7N9 Avian Influenza by Dynamical Model
NASA Astrophysics Data System (ADS)
Zhang, Juan; Jin, Zhen; Sun, Gui-Quan; Sun, Xiang-Dong; Wang, You-Ming; Huang, Baoxu
2014-05-01
H7N9, a newly emerging virus in China, circulates among poultry and humans. Although H7N9 has not caused massive outbreaks, its recurrence in the second half of 2013 makes it essential to control its spread. It is believed that the most effective control measure is to locate the original infection source and cut humans off from it. However, the original infection source and the internal transmission mechanism of the new virus are not fully understood. To determine the original infection source of H7N9, we establish a dynamical model with migratory bird, resident bird, domestic poultry and human populations, and treat migratory birds, resident birds, and domestic poultry in turn as the original infection source to fit the true dynamics during the 2013 pandemic. By comparing the data-fitting results and the corresponding Akaike Information Criterion (AIC) values, we conclude that migratory birds are most likely the original infection source. In addition, we obtain the basic reproduction number in poultry and carry out a sensitivity analysis of some parameters.
Mapping the mean monthly precipitation of a small island using kriging with external drifts
NASA Astrophysics Data System (ADS)
Cantet, Philippe
2015-09-01
This study focuses on the spatial distribution of mean annual and monthly precipitation on a small island (1128 km²) named Martinique, located in the Lesser Antilles. Only 35 meteorological stations are available on the territory, which has a complex topography. With a digital elevation model (DEM), 17 covariates that are likely to explain precipitation were built. Several interpolation methods, such as regression-kriging (MLRK, PCRK, and PLSK) and external drift kriging (EDK), were tested using a cross-validation procedure. For the regression methods, predictors were chosen by established techniques, whereas a new approach is proposed to select external drifts in kriging, based on stepwise model selection by the Akaike Information Criterion (AIC). The prediction accuracy was assessed at validation sites with three different skill scores. Results show that methods with no predictors, such as inverse distance weighting (IDW) or universal kriging (UK), are inappropriate in such a territory. EDK appears to outperform the regression methods on all criteria, and selecting predictors by our approach improves the prediction of mean annual precipitation compared to kriging with only elevation as drift. Finally, predictive performance was also studied by varying the size of the training set, leading to less conclusive results for EDK. Nevertheless, the proposed method seems to be a good way to improve the mapping of climatic variables on a small island.
Lee-Carter state space modeling: Application to the Malaysia mortality data
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-06-01
This article presents an approach that formalizes the Lee-Carter (LC) model as a state space model. Maximum likelihood estimation via the Expectation-Maximization (EM) algorithm was used to fit the model. The methodology is applied to Malaysia's total-population mortality data, modeled from age-specific death rates (ASDR) for 1971-2009. The fitted ASDR are compared to the actual observed values. The comparison shows that the fitted values from the LC-SS model and the original LC model are quite close, and there is little difference between the two models' root mean squared error (RMSE) and Akaike information criterion (AIC) values. The LC-SS model estimated in this study can be extended to forecasting ASDR in Malaysia; the accuracy of the LC-SS model relative to the original LC model can then be further examined by verifying forecasting power with an out-of-sample comparison.
Extensions to minimum relative entropy inversion for noisy data
NASA Astrophysics Data System (ADS)
Ulrych, Tadeusz J.; Woodbury, Allan D.
2003-12-01
Minimum relative entropy (MRE) and Tikhonov regularization (TR) were compared by Neupauer et al. [Water Resour. Res. 36 (2000) 2469] on the basis of an example plume source reconstruction problem originally proposed by Skaggs and Kabala [Water Resour. Res. 30 (1994) 71] and a boxcar-like function. Although Neupauer et al. [Water Resour. Res. 36 (2000) 2469] were careful in their conclusions to note the basis of these comparisons, we show that TR does not perform well on problems in which delta-like sources are convolved with diffuse groundwater contamination response functions, particularly in the presence of noise. We also show that it is relatively easy to estimate an appropriate value for ɛ, the hyperparameter needed in the minimum relative entropy solution of the inverse problem in the presence of noise. This can be estimated in a variety of ways, including estimation from the data themselves, analysis of data residuals, and a rigorous approach using the real cepstrum and the Akaike Information Criterion (AIC). Regardless of the approach chosen, for the sample problem reported herein, excellent resolution of multiple delta-like spikes is produced by MRE from noisy, diffuse data. The usefulness of MRE for noisy inverse problems has been demonstrated.
Xia, Sheng-Xuan; Zhai, Xiang; Wang, Ling-Ling; Sun, Bin; Liu, Jian-Qiang; Wen, Shuang-Chun
2016-08-01
To achieve plasmonically induced transparency (PIT), typical near-field plasmonic systems based on couplings between localized plasmon resonances of nanostructures rely heavily on well-designed interantenna separations. However, implementing such devices and techniques is difficult, mainly due to the very small dimensions of the nanostructures and the gaps between them. Here, we propose and numerically demonstrate that PIT can be achieved using two graphene layers, an upper sinusoidally curved layer and a lower planar layer, avoiding any patterning of the graphene sheets. Both analytical fitting and the Akaike Information Criterion (AIC) method are employed to distinguish the induced window, which is found to be more likely caused by Autler-Townes splitting (ATS) than by electromagnetically induced transparency (EIT). Moreover, our results show that the resonant modes can be tuned dramatically not only by geometrically changing the grating amplitude and the interlayer spacing, but also by dynamically varying the Fermi energy of the graphene sheets. Potential applications of the proposed system can be expected in various photonic functional devices, including optical switches and plasmonic sensors. PMID:27505756
Harezlak, J; Cohen, R; Gongvatana, A; Taylor, M; Buchthal, S; Schifitto, G; Zhong, J; Daar, E S; Alger, J R; Brown, M; Singer, E J; Campbell, T B; McMahon, D; So, Y T; Yiannoutsos, C T; Navia, B A
2014-06-01
The reasons for persistent brain dysfunction in chronically HIV-infected persons on stable combined antiretroviral therapies (CART) remain unclear. Host and viral factors along with their interactions were examined in 260 HIV-infected subjects who underwent magnetic resonance spectroscopy (MRS). Metabolite concentrations (NAA/Cr, Cho/Cr, MI/Cr, and Glx/Cr) were measured in the basal ganglia, the frontal white matter, and gray matter, and the best predictive models were selected using a bootstrap-enhanced Akaike information criterion (AIC). Depending on the metabolite and brain region, age, race, HIV RNA concentration, ADC stage, duration of HIV infection, nadir CD4, and/or their interactions were predictive of metabolite concentrations, particularly the basal ganglia NAA/Cr and the mid-frontal NAA/Cr and Glx/Cr, whereas current CD4 and the CPE index rarely or did not predict these changes. These results show for the first time that host and viral factors related to both current and past HIV status contribute to persisting cerebral metabolite abnormalities and provide a framework for further understanding neurological injury in the setting of chronic and stable disease. PMID:24696364
Estimation of exposure to toxic releases using spatial interaction modeling
2011-01-01
Background The United States Environmental Protection Agency's Toxic Release Inventory (TRI) data are frequently used to estimate a community's exposure to pollution. However, this estimation process often uses underdeveloped geographic theory. Spatial interaction modeling provides a more realistic approach to this estimation process. This paper uses four sets of data: lung cancer age-adjusted mortality rates from the years 1990 through 2006 inclusive from the National Cancer Institute's Surveillance Epidemiology and End Results (SEER) database, TRI releases of carcinogens from 1987 to 1996, covariates associated with lung cancer, and the EPA's Risk-Screening Environmental Indicators (RSEI) model. Results The impact of the volume of carcinogenic TRI releases on each county's lung cancer mortality rates was calculated using six spatial interaction functions (containment, buffer, power decay, exponential decay, quadratic decay, and RSEI estimates) and evaluated with four multivariate regression methods (linear, generalized linear, spatial lag, and spatial error). Akaike Information Criterion values and P values of spatial interaction terms were computed. The impacts calculated from the interaction models were also mapped. Buffer and quadratic interaction functions had the lowest AIC values (22298 and 22525 respectively), although the gains from including the spatial interaction terms were diminished with spatial error and spatial lag regression. Conclusions The use of different methods for estimating the spatial risk posed by pollution from TRI sites can give different results about the impact of those sites on health outcomes. The most reliable estimates did not always come from the most complex methods. PMID:21418644
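The six spatial interaction functions evaluated can be sketched as weighting schemes applied to a release's volume as a function of distance. The functional forms below are plausible reconstructions (the paper does not give its exact parameterizations), with the decay rate beta and the buffer radius as hypothetical parameters:

```python
import math

def exposure(dist_km, volume, kind="exponential", beta=0.5, buffer_km=10.0):
    # hypothetical spatial interaction weights for a TRI release of given volume
    if kind == "containment":
        # release counts only for the unit containing the site
        return volume if dist_km == 0 else 0.0
    if kind == "buffer":
        # full weight inside a fixed radius, none outside
        return volume if dist_km <= buffer_km else 0.0
    if kind == "power":
        return volume / max(dist_km, 1e-6) ** beta
    if kind == "exponential":
        return volume * math.exp(-beta * dist_km)
    if kind == "quadratic":
        # weight falls to zero quadratically at the buffer radius
        return volume * max(0.0, 1 - (dist_km / buffer_km) ** 2)
    raise ValueError(kind)
```

A county's total estimated exposure is then the sum of these weights over all nearby release sites; comparing the AIC of regressions built on each scheme is what the study uses to decide which distance model is most plausible.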
Towards a Model Selection Rule for Quantum State Tomography
NASA Astrophysics Data System (ADS)
Scholten, Travis; Blume-Kohout, Robin
Quantum tomography on large and/or complex systems will rely heavily on model selection techniques, which permit on-the-fly selection of small efficient statistical models (e.g. small Hilbert spaces) that accurately fit the data. Many model selection tools, such as hypothesis testing or Akaike's AIC, rely implicitly or explicitly on the Wilks Theorem, which predicts the behavior of the loglikelihood ratio statistic (LLRS) used to choose between models. We used Monte Carlo simulations to study the behavior of the LLRS in quantum state tomography, and found that it disagrees dramatically with Wilks' prediction. We propose a simple explanation for this behavior; namely, that boundaries (in state space and between models) play a significant role in determining the distribution of the LLRS. The resulting distribution is very complex, depending strongly both on the true state and the nature of the data. We consider a simplified model that neglects anisotropy in the Fisher information, derive an almost analytic prediction for the mean value of the LLRS, and compare it to numerical experiments. While our simplified model outperforms the Wilks Theorem, it still does not predict the LLRS accurately, implying that alternative methods may be necessary for tomographic model selection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.
NASA Astrophysics Data System (ADS)
Tanaka, Shojiro; Nishii, Ryuei
2005-10-01
Deforestation results from complex causal chains in most cases, but identifying a limited number of factors can provide a general understanding of this vital phenomenon at a broad scale, as well as projections for the future. Only two factors -- human population size (N) and relief energy (R: the difference between the minimum and maximum altitude in a sampled area) -- were found to give sufficient elucidation of deforestation by nonlinear logit regression models, whose functional forms were suggested by step functions fitted to one-kilometer-square high-precision grid-cell data in Japan (n=6825). A likelihood with spatial dependency was derived, and several deforestation models were selected for application to East Asia by calculating their relative appropriateness to the data, using Akaike's Information Criterion (AIC) as the measure of appropriateness. The logit model is employed to avoid anomalies at the asymptotic lower and upper bounds; therefore the forest areal rate satisfies 0 < F < 1. To formulate the East Asian dataset, the land-cover dataset estimated from NOAA observations available at UNEP, Tsukuba, was used for F; the gridded population of the world from CIESIN, US, for N; and GTOPO30 from USGS for R. The resolutions were matched by taking their common multiple of 20 minutes square. It is suggested that data with full forest coverage, F=1.0, which were not used in the calculations because of the logit transformation, should play an important role in stabilizing parameter estimation.
Li, Cheng-Wei; Chen, Bor-Sen
2009-01-01
In order to investigate the possible mechanisms for eve stripe formation of the Drosophila embryo, a spatio-temporal gene/protein interaction network model is proposed to mimic the dynamic behaviors of protein synthesis, protein decay, mRNA decay, protein diffusion, transcription regulation and autoregulation, and to analyze the interplay of genes and proteins at different compartments in early embryogenesis. In this study, we use the maximum likelihood (ML) method to identify the stochastic 3-D Embryo Space-Time (3-DEST) dynamic model for the gene/protein interaction network via 3-D mRNA and protein expression data, and then use the Akaike Information Criterion (AIC) to prune the gene/protein interaction network. The identified gene/protein interaction network allows us not only to analyze the dynamic interplay of genes and proteins on the border of eve stripes but also to infer that eve stripes are established and maintained by network motifs built by the cooperation between transcription regulation and diffusion mechanisms in early embryogenesis. Literature references to wet-lab gene mutation experiments provide a clue for validating the identified network. The proposed spatio-temporal dynamic model can be extended to gene/protein network construction of different biological phenotypes, which depend on compartments, e.g. postnatal stem/progenitor cell differentiation. PMID:20054403
The optimal number of lymph nodes removed in maximizing the survival of breast cancer patients
NASA Astrophysics Data System (ADS)
Peng, Lim Fong; Taib, Nur Aishah; Mohamed, Ibrahim; Daud, Noorizam
2014-07-01
The number of lymph nodes removed is one of the important predictors of survival in breast cancer studies. Our aim is to determine the optimal number of lymph nodes to be removed to maximize the survival of breast cancer patients. The study population consists of 873 patients with at least one axillary node involved, among 1890 patients from the University of Malaya Medical Center (UMMC) breast cancer registry. The Chi-square test of independence is performed to determine significant associations between prognostic factors and survival status, while the Wilcoxon test is used to compare the estimated hazard functions of two or more groups at each observed event time. Logistic regression analysis is then conducted to identify important predictors of survival. In particular, Akaike's Information Criterion (AIC) values are calculated from the logistic regression model for all thresholds of nodes involved, as an alternative to the Wald statistic (χ2), in order to determine the optimal number of nodes that need to be removed to obtain the maximum differential in survival. The results from both measures are compared. It is recommended that, for this particular group, a minimum of 10 nodes be removed to maximize survival of breast cancer patients.
Jusko, Todd A; Oktapodas, Marina; Palkovičová Murinová, L'ubica; Babinská, Katarina; Babjaková, Jana; Verner, Marc-André; DeWitt, Jamie C; Thevenet-Morrison, Kelly; Čonka, Kamil; Drobná, Beata; Chovancová, Jana; Thurston, Sally W; Lawrence, B Paige; Dozier, Ann M; Järvinen, Kirsi M; Patayová, Henrieta; Trnovec, Tomáš; Legler, Juliette; Hertz-Picciotto, Irva; Lamoree, Marja H
2016-07-01
To determine demographic, reproductive, and maternal dietary factors that predict perfluoroalkyl substance (PFAS) concentrations in breast milk, we measured perfluorooctane sulfonic (PFOS) and perfluorooctanoic acid (PFOA) concentrations, using liquid chromatography-mass spectrometry, in 184 colostrum samples collected from women participating in a cohort study in Eastern Slovakia between 2002 and 2004. During their hospital delivery stay, mothers completed a food frequency questionnaire, and demographic and reproductive data were also collected. PFOS and PFOA predictors were identified by optimizing multiple linear regression models using Akaike's information criterion (AIC). The geometric mean concentration in colostrum was 35.3 pg/mL for PFOS and 32.8 pg/mL for PFOA. In multivariable models, parous women had 40% lower PFOS (95% CI: -56 to -17%) and 40% lower PFOA (95% CI: -54 to -23%) concentrations compared with nulliparous women. Moreover, fresh/frozen fish consumption, longer birth intervals, and Slovak ethnicity were associated with higher PFOS and PFOA concentrations in colostrum. These results will help guide the design of future epidemiologic studies examining milk PFAS concentrations in relation to health end points in children. PMID:27244128
Sartini, Claudio; Zauli Sajani, Stefano; Ricciardelli, Isabella; Delgado-Saborit, Juana Mari; Scotto, Fabiana; Trentini, Arianna; Ferrari, Silvia; Poluzzi, Vanes
2013-10-01
The aim of this study was to investigate the influence of an urban area on ultrafine particle (UFP) concentration in nearby surrounding areas. We assessed how downwind and upwind conditions affect the UFP concentration at a site placed a few kilometres from the city border. Secondarily, we investigated the relationship among other meteorological factors, temporal variables and UFP. Data were collected for 44 days during 2008 and 2009 at a rural site placed about 3 kilometres from Bologna, in northern Italy. Measurements were performed using a spectrometer (FMPS TSI 3091). The average UFP number concentration was 11 776 (±7836) particles per cm³. We analysed the effect of wind direction in a multivariate Generalized Additive Model (GAM) adjusted for the principal meteorological parameters and temporal trends. An increase of about 25% in UFP levels was observed when the site was downwind of the urban area, compared with the levels observed when the wind blew from rural areas. The size distribution of particles was also affected by the wind direction, showing a higher concentration of small particles when the wind blew from the urban area. The GAM showed a good fit to the data (R² = 0.81). Model choice was via the Akaike Information Criterion (AIC). The analysis also revealed that an approach based on meteorological data plus temporal trends improved the goodness of fit of the model. In addition, the findings contribute to evidence on the effects of exposure to ultrafine particles on a population living in city surroundings. PMID:24077061
NASA Astrophysics Data System (ADS)
Li, Gang; Chen, Xinjun; Feng, Bo
2008-11-01
Although chub mackerel (Scomber japonicus) is a primary pelagic fish species, we have only limited knowledge of its key life history processes. The present work studied the age and growth of chub mackerel in the East China and Yellow Seas. Age was determined by interpreting and counting growth rings on the sagitta otoliths of 252 adult fish caught by the Chinese commercial purse seine fleet during the period from November 2006 to January 2007, and of 150 juveniles from bottom trawl surveys on the spawning ground in May 2006. The difference between the assumed birth date of 1 April and the date of capture was used to adjust the age determined from counting the number of complete translucent rings. The parameters of three commonly used growth models, the von Bertalanffy, Logistic and Gompertz models, were estimated using the maximum likelihood method. Based on the Akaike Information Criterion (AIC), the von Bertalanffy growth model was found to be the most appropriate. The size-at-age and size-at-maturity values were also found to have decreased greatly compared with the results obtained in the 1950s, a change attributed to heavy exploitation over the last few decades.
Maakip, Ismail; Keegel, Tessa; Oakman, Jodi
2016-03-01
Musculoskeletal disorders (MSDs) are a major occupational health issue for workers in developed and developing countries, including Malaysia. Most research related to MSDs has been undertaken in developed countries; given the different regulatory and cultural practices, it is plausible that the contributions of hazard and risk factors may differ. A population of Malaysian public service office workers was surveyed (N = 417, 65.5% response rate) to determine the prevalence and associated predictors of MSD discomfort. The 6-month period prevalence of MSD discomfort was 92.8% (95% CI = 90.2-95.2%). Akaike's Information Criterion (AIC) analysis was used to compare a range of models and determine a model of best fit. Contributions associated with MSD discomfort in the final model consisted of physical demands (61%), workload (14%), gender (13%), work-home balance (9%) and psychosocial factors (3%). Factors associated with MSD discomfort were similar in developed and developing countries, but the relative contribution of each factor differed, providing insight into the future development of risk management strategies. PMID:26499952
Snedden, Gregg A.; Steyer, Gregory D.
2013-01-01
Understanding plant community zonation along estuarine stress gradients is critical for effective conservation and restoration of coastal wetland ecosystems. We related the presence of plant community types to estuarine hydrology at 173 sites across coastal Louisiana. Percent relative cover by species was assessed at each site near the end of the growing season in 2008, and hourly water level and salinity were recorded at each site Oct 2007–Sep 2008. Nine plant community types were delineated with k-means clustering, and indicator species were identified for each of the community types with indicator species analysis. An inverse relation between salinity and species diversity was observed. Canonical correspondence analysis (CCA) effectively segregated the sites across ordination space by community type, and indicated that salinity and tidal amplitude were both important drivers of vegetation composition. Multinomial logistic regression (MLR) and Akaike's Information Criterion (AIC) were used to predict the probability of occurrence of the nine vegetation communities as a function of salinity and tidal amplitude, and probability surfaces obtained from the MLR model corroborated the CCA results. The weighted kappa statistic, calculated from the confusion matrix of predicted versus actual community types, was 0.7 and indicated good agreement between observed community types and model predictions. Our results suggest that models based on a few key hydrologic variables can be valuable tools for predicting vegetation community development when restoring and managing coastal wetlands.
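The model-validation step described above relies on a weighted kappa computed from the confusion matrix of predicted versus observed community types. A minimal sketch follows; the 3-class confusion matrix is an invented toy example (the study used nine community types), and linear disagreement weights are an assumption.

```python
def weighted_kappa(confusion):
    """Linear-weighted Cohen's kappa for a square confusion matrix
    (rows = observed class, columns = predicted class)."""
    c = len(confusion)
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(c)) for j in range(c)]
    num = den = 0.0
    for i in range(c):
        for j in range(c):
            w = abs(i - j) / (c - 1)                 # linear disagreement weight
            num += w * confusion[i][j]               # observed weighted disagreement
            den += w * row_tot[i] * col_tot[j] / n   # disagreement expected by chance
    return 1.0 - num / den

# Toy 3-class confusion matrix (values illustrative).
cm = [[20, 4, 1],
      [3, 15, 2],
      [0, 3, 12]]
print(round(weighted_kappa(cm), 3))  # → 0.727
```

A value of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why the study's 0.7 is read as good agreement.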
Mixture regression models for closed population capture-recapture data.
Tounkara, Fodé; Rivest, Louis-Paul
2015-09-01
In capture-recapture studies, the use of individual covariates has been recommended to obtain stable population estimates. However, some residual heterogeneity might still exist, and ignoring such heterogeneity could lead to underestimating the population size (N). In this work, we explore two new models with capture probabilities depending on both covariates and unobserved random effects to estimate the size of a population. Inference techniques, including the Horvitz-Thompson estimate and confidence intervals for the population size, are derived. The selection of a particular model is carried out using the Akaike information criterion (AIC). First, we extend the random effect model of Darroch et al. (1993, Journal of the American Statistical Association 88, 1137-1148) to handle unit-level covariates and discuss its limitations. The second approach is a generalization of the traditional zero-truncated binomial model that includes a random effect to account for unobserved heterogeneity. This approach provides useful tools for inference about N, since key quantities such as moments, likelihood functions and estimates of N and their standard errors have closed-form expressions. Several models for the unobserved heterogeneity are available, and the marginal capture probability is expressed using the Logit and the complementary Log-Log link functions. The sensitivity of the inference to the specification of a model is also investigated through simulations. A numerical example is presented. We compare the performance of the proposed estimator with that obtained under model Mh of Huggins (1989, Biometrika 76, 130-140). PMID:25963047
Silva, Fabrício Drummond; dos Santos, Alcione Miranda; Corrêa, Rita da Graça Carvalhal Frazão; Caldas, Arlene de Jesus Mendes
2016-02-01
This study analyzed the relationship between rainfall, temperature and the occurrence of dengue cases. It was an ecological study of autochthonous dengue cases reported from 2003 to 2010 in São Luís, Maranhão. Rainfall and temperature data were collected monthly. The monthly incidence of dengue cases was calculated by year per 100,000 inhabitants. To identify the influence of climate variables on dengue cases, different distributed lag models using a negative binomial distribution were considered. Model selection was based on the lowest AIC (Akaike Information Criterion). Thirteen thousand, four hundred forty-four cases of dengue were reported between 2003 and 2010, with peaks in 2005, 2007 and 2010. The correlation between rainfall and the occurrence of dengue cases showed an increase in the first months after the rainy months. Dengue cases occurred throughout the study period. Only rainfall lagged by three months showed a positive association with the number of dengue cases. Thus, this municipality is considered both an endemic and an epidemic site. In addition, the relation between rainfall and dengue cases was significant with a lag of three months. These results should be useful for the future development of health policies for dengue prevention and control. PMID:26910171
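The lag-selection idea can be sketched with ordinary least squares standing in for the study's negative binomial distributed lag models (a simplifying assumption). The monthly rainfall and case series below are synthetic, constructed so that a three-month lag fits best; candidate lags are ranked by Gaussian AIC.

```python
import math

def ols_aic(x, y):
    """Fit y = a + b*x by least squares and return the Gaussian AIC
    (k = 3: intercept, slope, and error variance)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return n * math.log(rss / n) + 2 * 3

# Synthetic monthly series: cases respond to rainfall three months earlier.
rain = [10, 50, 200, 300, 250, 120, 60, 30, 20, 15, 80, 220,
        310, 260, 130, 70, 40, 25]
cases = [9.0, 7.0, 6.0, 6.8, 12.2, 35.3, 49.7, 42.8, 22.7, 14.3,
         9.2, 8.3, 6.95, 17.3, 37.7, 51.8, 43.7, 24.8]

# Regress cases at month t on rainfall at month t - L for each lag L.
best_lag = min(range(1, 5),
               key=lambda L: ols_aic(rain[:-L], cases[L:]))
print("lag with lowest AIC:", best_lag)
```

Because every lagged model has the same number of parameters, AIC selection here amounts to picking the lag with the smallest residual variance, mirroring the study's choice of a three-month rainfall lag.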
Resolution of direction of arrival and number of signal(s) in a highly noisy environment
NASA Astrophysics Data System (ADS)
Beyon, Jeffrey Y.; Thomopoulos, Stelios C.
1998-07-01
The majority of Direction-of-Arrival (DOA) estimation methods studied in the literature work effectively in relatively strong signal power environments [positive dB of Array-Signal-to-Noise-Ratio (ASNR)]. In weak signal power environments, conventional beamformer-based and subspace-based methods fail to estimate the DOA correctly. The MaxMax method maintains accurate estimates of the DOA even in extremely noisy environments (-10 dB of ASNR). The method is reviewed and its performance is compared with that of the Conventional Beamformer, Capon's Beamformer, MUSIC, ESPRIT, and Min-Norm methods. In contrast to the subspace-based methods, the MaxMax method does not depend entirely on a full-rank signal covariance matrix. Hence, its performance remains superior to that of the others without adjusting the algorithm to the characteristics of the source signals, such as multipath or singlepath propagation. If the signal power is so weak that its presence is almost negligible, Akaike's Information Criterion (AIC) and the Minimum Description Length (MDL) do not yield correct estimates of the number of signal paths. A new 'spatial sampling' technique and its performance are presented for estimating the number of signals in the case of strongly suppressed signal power.
Seismic hazard assessment in central Ionian Islands area (Greece) based on stress release models
NASA Astrophysics Data System (ADS)
Votsi, Irene; Tsaklidis, George; Papadimitriou, Eleftheria
2011-08-01
The long-term probabilistic seismic hazard of central Ionian Islands (Greece) is studied through the application of stress release models. In order to identify statistically distinct regions, the study area is divided into two subareas, namely Kefalonia and Lefkada, on the basis of seismotectonic properties. Previous results evidenced the existence of stress transfer and interaction between the Kefalonia and Lefkada fault segments. For the consideration of stress transfer and interaction, the linked stress release model is applied. A new model is proposed, where the hazard rate function in terms of X(t) has the form of the Weibull distribution. The fitted models are evaluated through residual analysis and the best of them is selected through the Akaike information criterion. Based on AIC, the results demonstrate that the simple stress release model fits the Ionian data better than the non-homogeneous Poisson and the Weibull models. Finally, the thinning simulation method is applied in order to produce simulated data and proceed to forecasting.
Nakamae, Mika; Yamashita, Mariko; Koh, Hideo; Nishimoto, Mitsutaka; Hayashi, Yoshiki; Nakane, Takahiko; Nakashima, Yasuhiro; Hirose, Asao; Hino, Masayuki; Nakamae, Hirohisa
2016-06-01
Some studies on the predictive value of determining pulmonary function prior to allogeneic hematopoietic cell transplantation (allo-HCT) have shown a significant association between pulmonary function test (PFT) parameters and pulmonary complications, and mortality. However, the percentage of patients showing abnormalities in pretransplant PFT parameters is low. We comprehensively evaluated the effect of pretransplant PFT parameters, including a marker of small airway disease (ratio of the airflow rate of 50% vital capacity to the airflow rate of 25% vital capacity (V˙50/V˙25), on outcomes in 206 evaluable patients who underwent allo-HCT at our institute. Notable among the significant parameters in a univariable analysis, V˙50/V˙25 was the most powerful indicator of survival following allo-HCT (delta-Akaike information criterion [∆AIC] = 12.47, ∆χ(2) = 14.47; P = 0.0001). Additionally, a pretransplant lung function score (pLFS) established by applying three parameters with superior predictive values including V˙50/V˙25 represented a better discriminating variable for the prediction of survival. Our data demonstrate that a pLFS incorporating a parameter of small airway disease, rather than the parameters of central airway obstruction, may be useful for predicting patient survival following allo-HCT. PMID:27018997
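The strength of evidence carried by a ΔAIC of the size reported above can be expressed as a Burnham-Anderson evidence ratio, exp(ΔAIC/2), i.e. the relative likelihood of the better model. A minimal sketch:

```python
import math

def evidence_ratio(delta_aic):
    """Relative likelihood of the better model versus a model whose
    AIC is delta_aic units higher (evidence ratio exp(dAIC/2))."""
    return math.exp(delta_aic / 2)

# dAIC = 12.47 was reported for V50/V25 in the univariable analysis.
print(round(evidence_ratio(12.47)))  # prints 510
```

A ratio of several hundred to one is why a ΔAIC above roughly 10 is conventionally read as essentially no support for the weaker model.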
Olken, Benjamin A.; Singhal, Monica
2011-01-01
Informal payments are a frequently overlooked source of local public finance in developing countries. We use microdata from ten countries to establish stylized facts on the magnitude, form, and distributional implications of this “informal taxation.” Informal taxation is widespread, particularly in rural areas, with substantial in-kind labor payments. The wealthy pay more, but pay less in percentage terms, and informal taxes are more regressive than formal taxes. Failing to include informal taxation underestimates household tax burdens and revenue decentralization in developing countries. We discuss various explanations for and implications of these observed stylized facts. PMID:22199993
"Information, Information Everywhere and Not..."
ERIC Educational Resources Information Center
Wright, Paula
Demographic and economic materials relevant to rural economic development are the focus of this description of the types of information that are collected by the U.S. Bureau of the Census and how this information can be accessed. Information provided on demographic materials includes collection methods--the census, surveys, and administrative…
Manion, F.; Hsieh, K.; Harris, M.
2015-01-01
Background: Despite efforts to provide standard definitions of terms such as “medical record”, “computer-based patient record”, “electronic medical record” and “electronic health record”, the terms are still used interchangeably. Initiatives like data and information governance, research biorepositories, and learning health systems require availability and reuse of data, as well as common understandings of the scope for specific purposes. Lacking widely shared definitions, utilization of the aforementioned terms in research informed consent documents calls into question whether all participants in the research process — patients, information technology and regulatory staff, and the investigative team — fully understand what data and information they are asking to obtain and agreeing to share.
Objectives: This descriptive study explored the terminology used in research informed consent documents when describing patient data and information, asking the question “Does the use of the term ‘medical record’ in the context of a research informed consent document accurately represent the scope of the data involved?”
Methods: Informed consent document templates found on 17 Institutional Review Board (IRB) websites with Clinical and Translational Science Awards (CTSA) were searched for terms that appeared to describe the data resources to be accessed. The National Library of Medicine’s (NLM) Terminology Services was searched for definitions provided by key standards groups that deposit terminologies with the NLM.
Discussion: The results suggest that research consent documents are using outdated terms to describe patient information, that health care terminology systems need to consider the context of research for use cases, and that there is significant work to be done to assure the HIPAA Omnibus Rule is applied to contemporary activities such as biorepositories and learning health systems.
Conclusions: “Medical record”, a term used extensively
MH2c: Characterization of major histocompatibility α-helices - an information criterion approach
NASA Astrophysics Data System (ADS)
Hischenhuber, B.; Frommlet, F.; Schreiner, W.; Knapp, B.
2012-07-01
Major histocompatibility proteins share a common overall structure or peptide binding groove. Two binding groove domains, on the same chain for major histocompatibility class I or on two different chains for major histocompatibility class II, contribute to that structure that consists of two α-helices (“wall”) and a sheet of eight anti-parallel beta strands (“floor”). Apart from the peptide presented in the groove, the major histocompatibility α-helices play a central role for the interaction with the T cell receptor. This study presents a generalized mathematical approach for the characterization of these helices. We employed polynomials of degree 1 to 7 and splines with 1 to 2 nodes based on polynomials of degree 1 to 7 on the α-helices projected on their principal components. We evaluated all models with a corrected Akaike Information Criterion to determine which model represents the α-helices in the best way without overfitting the data. This method is applicable for both the stationary and the dynamic characterization of α-helices. By deriving differential geometric parameters from these models one obtains a reliable method to characterize and compare α-helices for a broad range of applications. Catalogue identifier: AELX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 327 565 No. of bytes in distributed program, including test data, etc.: 17 433 656 Distribution format: tar.gz Programming language: Matlab Computer: Personal computer architectures Operating system: Windows, Linux, Mac (all systems on which Matlab can be installed) RAM: Depends on the trajectory size, min. 1 GB (Matlab) Classification: 2.1, 4.9, 4.14 External routines: Curve Fitting Toolbox and Statistic Toolbox of
Hartle, J.B. (Isaac Newton Institute for the Mathematical Sciences, University of Cambridge, Cambridge CB3 0EH)
1995-02-15
In usual quantum theory, the information available about a quantum system is defined in terms of the density matrix describing it on a spacelike surface. This definition must be generalized for extensions of quantum theory which neither require, nor always permit, a notion of state on a spacelike surface. In particular, it must be generalized for the generalized quantum theories appropriate when spacetime geometry fluctuates quantum mechanically or when geometry is fixed but not foliable by spacelike surfaces. This paper introduces a four-dimensional notion of the information available about a quantum system's boundary conditions in the various sets of decohering, coarse-grained histories it may display. This spacetime notion of information coincides with the familiar one when quantum theory is formulable in terms of states on spacelike surfaces but generalizes this notion when it cannot be so formulated. The idea of spacetime information is applied in several contexts: When spacetime geometry is fixed the information available through alternatives restricted to a fixed spacetime region is defined. The information available through histories of alternatives of general operators is compared to that obtained from the more limited coarse grainings of sum-over-histories quantum mechanics that refer only to coordinates. The definition of information is considered in generalized quantum theories. We consider as specific examples time-neutral quantum mechanics with initial and final conditions, quantum theories with nonunitary evolution, and the generalized quantum frameworks appropriate for quantum spacetime. In such theories complete information about a quantum system is not necessarily available on any spacelike surface but must be searched for throughout spacetime. The information loss commonly associated with the “evolution of pure states into mixed states” in black hole evaporation is thus not in conflict with the principles of generalized quantum mechanics.
NASA Technical Reports Server (NTRS)
Holden, Kritina; Sandor, A.; Thompson, S. G.; McCann, R. S.; Kaiser, M. K.; Begault, D. R.; Adelstein, B. D.; Beutter, B. R.; Stone, L. S.
2008-01-01
The goal of the Information Presentation Directed Research Project (DRP) is to address design questions related to the presentation of information to the crew on flight vehicles, surface landers and habitats, and during extra-vehicular activities (EVA). Designers of displays and controls for exploration missions must be prepared to select the text formats, label styles, alarms, electronic procedure designs, and cursor control devices that provide for optimal crew performance on exploration tasks. The major areas of work, or subtasks, within the Information Presentation DRP are: 1) Controls, 2) Displays, 3) Procedures, and 4) EVA Operations.
Rodríguez, C R; González Parra, E; Martínez Castelao, A
2008-01-01
- Basic law 41/2002 on patient autonomy regulates the rights and obligations of patients, users and professionals, as well as those of public and private health care centers and services. This regulation refers to patient autonomy, the right to information and essential clinical documentation. - This law establishes the minimum requirements for the information the patient should receive and the decision making in which the patient should take part. Diagnostic tests are performed and therapeutic decisions are taken in the ACKD unit in which patient information is an essential and mandatory requirement according to this law. PMID:19018748
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
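The model-averaging step that MMA performs from a discrimination criterion can be sketched as follows: each model's AIC is converted to a normalized weight, and a shared parameter is averaged across models with those weights. The AIC values and per-model estimates below are invented for illustration.

```python
import math

def akaike_weights(aics):
    """Model weights from AIC values: w_i is proportional to
    exp(-dAIC_i / 2), normalized to sum to one."""
    amin = min(aics)
    rel = [math.exp(-(a - amin) / 2) for a in aics]
    s = sum(rel)
    return [r / s for r in rel]

# Three alternative calibrated models of one system: their AICs and
# each model's estimate of a shared parameter (values illustrative).
aics = [100.0, 102.0, 107.0]
estimates = [4.2, 4.8, 3.9]

w = akaike_weights(aics)
averaged = sum(wi * e for wi, e in zip(w, estimates))
print([round(wi, 3) for wi in w], round(averaged, 2))
# → [0.715, 0.263, 0.022] 4.35
```

Substituting AICc, BIC, or KIC for AIC in the weight formula gives the other default averaging schemes the report describes; only the criterion values change, not the averaging step.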
CMT data inversion using a Bayesian information criterion to estimate seismogenic stress fields
NASA Astrophysics Data System (ADS)
Terakawa, Toshiko; Matsu'ura, Mitsuhiro
2008-02-01
We developed an inversion method to estimate the stress fields related to earthquake generation (seismogenic stress fields) from the centroid moment tensors (CMT) of seismic events by using Akaike's Bayesian information criterion (ABIC). On the idea that the occurrence of an earthquake releases some part of the seismogenic stress field around its hypocentre, we define the CMT of a seismic event by a weighted volume integral of the true but unknown seismogenic stress field. Representing each component of the seismogenic stress field by the superposition of a finite number of 3-D basis functions (tri-cubic B-splines), we obtain a set of linear observation equations to be solved for the expansion coefficients (model parameters). We introduce a prior constraint on the roughness of the seismogenic stress field and combine it with observed data to construct a Bayesian model with a hierarchic, highly flexible structure controlled by hyper-parameters. The optimum values of the hyper-parameters are objectively determined from observed data by using ABIC. Given the optimum values of the hyper-parameters, we can obtain the best estimates of the model parameters by using a maximum likelihood algorithm. We tested the validity of the inversion method through numerical experiments on two synthetic CMT data sets, assuming the distribution of fault orientations to be aligned with the maximum shear stress plane in one case and to be random in the other case. Then we applied the inversion method to actual CMT data in northeast Japan, and obtained a pattern of the seismogenic stress field consistent with geophysical and geological observations.
Hunt, D.N.
1997-02-01
The Information Engineering thrust area develops information technology to support the programmatic needs of Lawrence Livermore National Laboratory's Engineering Directorate. Progress in five programmatic areas is described in separate reports contained herein. These are entitled "Three-dimensional Object Creation, Manipulation, and Transport"; "Zephyr: A Secure Internet-Based Process to Streamline Engineering Procurements"; "Subcarrier Multiplexing: Optical Network Demonstrations"; "Parallel Optical Interconnect Technology Demonstration"; and "Intelligent Automation Architecture."
NASA Technical Reports Server (NTRS)
Holden, K.L.; Boyer, J.L.; Sandor, A.; Thompson, S.G.; McCann, R.S.; Begault, D.R.; Adelstein, B.D.; Beutter, B.R.; Stone, L.S.
2009-01-01
The goal of the Information Presentation Directed Research Project (DRP) is to address design questions related to the presentation of information to the crew. The major areas of work, or subtasks, within this DRP are: 1) Displays, 2) Controls, 3) Electronic Procedures and Fault Management, and 4) Human Performance Modeling. This DRP is a collaborative effort between researchers at Johnson Space Center and Ames Research Center.
Rodríguez Maniega, José Antonio; Trío Maseda, Reyes
2005-03-01
The arrival of victims of the terrorist attacks of 11 March at the hospital put the efficiency of its information systems to the test. To be most efficient, these systems should be simple and directed, above all, to the follow-up of victims and to providing the necessary information to patients and families. A specific and easy to use system is advisable. PMID:15771852
When Information Improves Information Security
NASA Astrophysics Data System (ADS)
Grossklags, Jens; Johnson, Benjamin; Christin, Nicolas
This paper presents a formal, quantitative evaluation of the impact of bounded-rational security decision-making subject to limited information and externalities. We investigate a mixed economy of an individual rational expert and several naïve near-sighted agents. We further model three canonical types of negative externalities (weakest-link, best shot and total effort), and study the impact of two information regimes on the threat level agents are facing.
ERIC Educational Resources Information Center
McKay, Martin D.; Stout, J. David
1999-01-01
Discusses access to Internet resources in school libraries, including the importance of evaluating content and appropriate use. The following online services that provide current factual information from legitimate resources are described: SIRS (Social Issues Resource Series), InfoTrac, EBSCO Host, SearchBank, and the Electric Library. (MES)
ERIC Educational Resources Information Center
Lloyd, Annemaree; Somerville, Margaret
2006-01-01
Purpose: The purpose of this article is to explore the contribution that an information literacy approach to the empirical study of workplace learning can make to how people understand and conceptualise workplace learning. Design/methodology/approach: Three cohorts of fire-fighters working in two regional locations in NSW, Australia were…
ERIC Educational Resources Information Center
Tufte, Edward R.
This book presents over 400 illustrations of complex data that show how the dimensionality and density of portrayals can be enhanced. Practical advice on how to explain complex materials by visual means is given, and examples illustrate the fundamental principles of information display. Design strategies presented are exemplified in maps, the…
ERIC Educational Resources Information Center
Jennings, Carol Ann; McDonald, Sandy
This publication contains instructional materials for teacher and student use for a course in information processing. The materials are written in terms of student performance using measurable objectives. The course includes 10 units. Each instructional unit contains some or all of the basic components of a unit of instruction: performance…
ERIC Educational Resources Information Center
Scofield, James
Newspaper librarians discussed the public use of their newspapers' libraries. Policies run the gamut from well-staffed public information services, within or outside the newspaper library, to no service at all to those outside the staff of the paper. Problems of dealing with tax and law enforcement agencies were covered, as well as cooperative…
Teaching Information Skills: Recording Information.
ERIC Educational Resources Information Center
Pappas, Marjorie L.
2002-01-01
Discusses how to teach students in primary and intermediate grades to record and organize information. Highlights include developing a research question; collaborative planning between teachers and library media specialists; consistency of data entry; and an example of a unit on animal migration based on an appropriate Web site. (LRW)
Information services and information processing
NASA Technical Reports Server (NTRS)
1975-01-01
Attempts made to design and extend space system capabilities are reported. Special attention was given to establishing user needs for information or services which might be provided by space systems. Data given do not attempt to detail scientific, technical, or economic bases for the needs expressed by the users.
Comparison of six statistical approaches in the selection of appropriate fish growth models
NASA Astrophysics Data System (ADS)
Zhu, Lixin; Li, Lifang; Liang, Zhenlin
2009-09-01
The performance of six statistical approaches, which can be used for selection of the best model to describe the growth of individual fish, was analyzed using simulated and real length-at-age data. The six approaches include the coefficient of determination (R²), adjusted coefficient of determination (adj.-R²), root mean squared error (RMSE), Akaike’s information criterion (AIC), the bias-corrected AIC (AICc) and the Bayesian information criterion (BIC). The simulation data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters in each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R² and RMSE have the same properties and perform worst. The sample size has an effect on the performance of adj.-R², AIC, AICc and BIC. Adj.-R² does better in small samples than in large samples. AIC is not suitable for use in small samples and tends to select a more complex model as the sample size becomes large. AICc and BIC have the best performance in small and large sample cases, respectively. Use of AICc or BIC is recommended for selection of a fish growth model according to the size of the length-at-age data.
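All six criteria can be computed from one model's fitted values once the residual sum of squares is known. The sketch below assumes the Gaussian additive-error likelihood used in the study; the toy data and the convention of counting the error variance as an extra parameter in the information criteria are illustrative assumptions.

```python
import math

def fit_stats(y, yhat, k):
    """R2, adjusted R2, RMSE, AIC, AICc and BIC for a fit with k
    estimated model parameters (the error variance is counted as
    one extra parameter in the information criteria)."""
    n = len(y)
    ybar = sum(y) / n
    rss = sum((a - b) ** 2 for a, b in zip(y, yhat))
    tss = sum((a - ybar) ** 2 for a in y)
    r2 = 1 - rss / tss
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    rmse = math.sqrt(rss / n)
    aic = n * math.log(rss / n) + 2 * (k + 1)
    aicc = aic + 2 * (k + 1) * (k + 2) / (n - k - 2)   # small-sample correction
    bic = n * math.log(rss / n) + (k + 1) * math.log(n)
    return r2, adj_r2, rmse, aic, aicc, bic

# Toy length-at-age data and fitted values from some 2-parameter model.
y = [1.0, 2.0, 3.0, 4.0, 5.0]
yhat = [1.1, 1.9, 3.2, 3.8, 5.1]
r2, adj_r2, rmse, aic, aicc, bic = fit_stats(y, yhat, k=2)
print(f"R2={r2:.3f} adj={adj_r2:.3f} RMSE={rmse:.3f}")
```

With n this small the AICc correction term dominates, which illustrates the abstract's point that AIC is unreliable in small samples while AICc penalizes extra parameters much more strongly.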
Information management - Assessing the demand for information
NASA Technical Reports Server (NTRS)
Rogers, William H.
1991-01-01
Information demand is defined in terms of both information content (what information) and form (when, how, and where it is needed). Providing the information richness required for flight crews to be informed without overwhelming their information processing capabilities will require a great deal of automated intelligence. It is seen that the essence of this intelligence is comprehending and capturing the demand for information.
Change in BMI Accurately Predicted by Social Exposure to Acquaintances
Oloritun, Rahman O.; Ouarda, Taha B. M. J.; Moturu, Sai; Madan, Anmol; Pentland, Alex (Sandy); Khayal, Inas
2013-01-01
Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using social interaction data automatically captured via mobile phones and survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, along with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p<0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study to take into account both interactions at different levels of social closeness and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also the significance of social interactions with people we are exposed to, even people we may not consider as close friends. PMID
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia
2003-01-01
The objective of the GRC CNIS/IE work is to build a plug-and-play infrastructure that provides the Grand Challenge Applications with a suite of tools for coupling codes together, numerical zooming between codes of different fidelity, and deploying these simulations onto the Information Power Grid. The GRC CNIS/IE work will streamline and improve this process by providing tighter integration of the various tools through object-oriented design of component models and data objects and through the use of CORBA (Common Object Request Broker Architecture).
NASA Astrophysics Data System (ADS)
Seiders, Barbara; McQuerry, Dennis; Ferryman, Thomas A.; Whitney, Paul D.; Rybka, Anthony
2002-07-01
Biological weapons are within reach of individuals, small groups, terrorist organizations, as well as nations. With pervasive integration of civilian and military populations worldwide, the ill winds of biological warfare stand to affect military troops and civilians alike. A variety of technologies are emerging - such as pathogen detection devices, streaming internet characterization tools, information exploitation techniques, automated feature extraction, and ubiquitous wireless communication - that can help. These technologies, if taken together within an integrated analytical framework, could make possible the monitoring of diverse parameters that may indicate a change in the state of health of a given population - either the emergence of a naturally occurring disease or the outbreak of a disease as a result of hostile intent. This presentation will discuss the application of new information surveillance tools and technologies as they apply to health and disease monitoring, particularly within the context of potential terrorist or hostile nation use of biological warfare. Although discussed within the specific context of health surveillance, the tools and processes described here are generally applicable within other domains of subject matter expertise.
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap detection probabilities for marked animals to independent space use from telemetry relocations, using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy, by estimating how camera detection probabilities are related to nearby telemetry relocations, and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly the best Akaike Information Criterion (AIC) values, suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (e.g., spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case
Stoecker, Nora Kathleen
2014-03-01
A Systems Analysis Group has existed at Sandia National Laboratories since at least the mid-1950s. Much of the group's work output (reports, briefing documents, and other materials) has been retained, along with large numbers of related documents. Over time the collection has grown to hundreds of thousands of unstructured documents in many formats, contained in one or more of several different shared drives or SharePoint sites, with perhaps five percent of the collection still existing only in print format. This presents a challenge: how can the group effectively find, manage, and build on information contained somewhere within such a large set of unstructured documents? In response, a project was initiated to identify tools that would be able to meet this challenge. This report documents the results found and recommendations made as of August 2013.
Steevenson, Grania
2006-08-01
Disclosure of information prior to consent is a very complex area of medical ethics. On the surface it would seem to be quite clear cut, but on closer inspection the scope for 'grey areas' is vast. In practice, it could be argued that the number of cases that result in complaint or litigation is comparatively small; however, this does not mean that wrong decisions or unethical scenarios do not occur. It would seem that in clinical practice these ethical grey areas concerning patients' full knowledge of their condition or treatment are quite common. One barometer of how much disclosure should be given prior to consent could be the feedback obtained from patients: are they asking relevant questions pertinent to their condition, and do they show a good understanding of the options available? This should be seen as a positive trait and should be welcomed by healthcare professionals. Ultimately it gives patients greater autonomy, and the healthcare professional can expand and build on the patient's knowledge as well as allay fears perhaps based on wrongly held information. Greater communication with the patient would help the healthcare professional pitch their explanations at the right level. Every case and scenario is different and unique and deserves to be treated as such. Studies have shown that most patients can understand their medical condition and treatment provided communication has been thorough (Gillon 1996). It is in patients' best interests to feel comfortable with the level of disclosure offered to them. It can only foster greater trust and respect between them and the healthcare profession, to the benefit of both parties. PMID:16939165
NASA Astrophysics Data System (ADS)
Villas Boas, M. D.; Olivera, F.; Azevedo, J. S.
2013-12-01
The evaluation of water quality through indexes is widely used in environmental sciences. There are a number of methods available for calculating water quality indexes (WQI), usually based on site-specific parameters. In Brazil, WQI were initially used in the 1970s and were adapted from the methodology developed in association with the National Science Foundation (Brown et al., 1970). Specifically, the WQI 'IQA/SCQA', developed by the Institute of Water Management of Minas Gerais (IGAM), is estimated from nine parameters: Temperature Range, Biochemical Oxygen Demand, Fecal Coliforms, Nitrate, Phosphate, Turbidity, Dissolved Oxygen, pH and Electrical Conductivity. The goal of this study was to develop a model for calculating the IQA/SCQA for the Piabanha River basin in the State of Rio de Janeiro (Brazil), using only the parameters measurable by the Multiparameter Water Quality Sonde (MWQS) available in the study area: Dissolved Oxygen, pH and Electrical Conductivity. The use of this model will allow the water quality monitoring network in the basin to be extended without requiring a significant increase in resources, since water quality measurement with a MWQS is less expensive than the laboratory analysis required for the other parameters. The water quality data used in the study were obtained by the Geological Survey of Brazil in partnership with other public institutions (i.e. universities and environmental institutes) as part of the project "Integrated Studies in Experimental and Representative Watersheds". Two models were developed to correlate the values of the three measured parameters with the IQA/SCQA values calculated from all nine parameters. The results were evaluated according to the following validation statistics: coefficient of determination (R2), Root Mean Square Error (RMSE), Akaike information criterion (AIC) and Final Prediction Error (FPE). The first model was a linear stepwise regression between three independent variables
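The four validation statistics named above can all be computed from a model's residuals. A minimal sketch under assumed definitions: a Gaussian-likelihood AIC, n·ln(RSS/n) + 2k, and Akaike's FPE, (RSS/n)·(n+k)/(n−k); the observed and predicted index values here are hypothetical, not the study's data.

```python
import numpy as np

def validation_stats(y, yhat, k):
    """R^2, RMSE, Gaussian-likelihood AIC, and Akaike's Final Prediction
    Error for a fitted model with k parameters."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    r2 = 1 - rss / tss
    rmse = np.sqrt(rss / n)
    aic = n * np.log(rss / n) + 2 * k
    fpe = (rss / n) * (n + k) / (n - k)
    return r2, rmse, aic, fpe

y = np.array([55.0, 62.0, 48.0, 70.0, 65.0, 58.0])     # hypothetical observed index values
yhat = np.array([54.0, 60.0, 50.0, 68.0, 66.0, 57.0])  # hypothetical model predictions
r2, rmse, aic, fpe = validation_stats(y, yhat, k=4)
print(f"R2={r2:.3f}  RMSE={rmse:.2f}  AIC={aic:.1f}  FPE={fpe:.2f}")
```

Candidate models would then be ranked on these statistics, with lower AIC and FPE (and lower RMSE, higher R²) preferred.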
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to the extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable: the former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square-root transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values of 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square-root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively
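The central comparison above, a standard Poisson versus a Poisson-gamma (negative binomial) assumption judged by AIC, can be sketched on synthetic overdispersed counts. This is not the study's data or fitting code: the counts are simulated, and the negative binomial is fit by method of moments rather than full maximum likelihood, which is enough to illustrate the AIC gap.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Overdispersed, zero-heavy counts: a Poisson-gamma (negative binomial) draw
counts = rng.negative_binomial(0.4, 0.1, size=959)

# Poisson: the MLE of the rate is the sample mean (1 parameter)
lam = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam).sum()
aic_pois = 2 * 1 - 2 * ll_pois

# Negative binomial via method of moments (2 parameters) -- an approximation
# to the MLE, used here only to illustrate the comparison
mean, var = counts.mean(), counts.var()
p = mean / var                  # valid because var > mean (overdispersion)
r = mean * p / (1 - p)
ll_nb = stats.nbinom.logpmf(counts, r, p).sum()
aic_nb = 2 * 2 - 2 * ll_nb

print(f"variance:mean ratio = {var / mean:.1f}")
print(f"AIC  Poisson: {aic_pois:.0f}   negative binomial: {aic_nb:.0f}")
```

With heavy overdispersion, the negative binomial's AIC falls far below the Poisson's, mirroring the large AIC improvement the study reports for the Poisson-gamma assumption.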
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-01-01
Background: Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. Tuberculosis, as a chronic and highly infectious disease, is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) and combined SARIMA-neural network auto-regression (SARIMA-NNAR) models of TB incidence and to analyse its seasonality in South Africa. Methods: TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used in analysing and predicting the TB data from 2010 to 2015. Performance measures of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess which model predicted better. Results: Although both models could predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) were 288.56, 308.31 and 299.09 respectively, lower than those of the SARIMA model, with corresponding values of 329.02, 327.20 and 341.99. The SARIMA-NNAR model forecast a slightly increasing seasonal trend in TB incidence compared with the single model. Conclusions: The combined model gave better TB incidence forecasts, with a lower AICc. The model also indicates the need for resolute
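The three criteria quoted (AIC, AICc, BIC) differ only in their penalty terms and follow directly from a model's maximized log-likelihood. A sketch with hypothetical inputs: the log-likelihood, parameter count, and sample size below are illustrative, not taken from the study.

```python
import math

def info_criteria(loglik, k, n):
    """AIC, small-sample-corrected AICc, and BIC from a model's maximized
    log-likelihood, parameter count k, and sample size n."""
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # correction vanishes as n grows
    bic = k * math.log(n) - 2 * loglik            # heavier penalty for large n
    return aic, aicc, bic

# Hypothetical: a seasonal model with 6 parameters fit to 72 monthly observations
aic, aicc, bic = info_criteria(loglik=-138.0, k=6, n=72)
print(f"AIC={aic:.1f}  AICc={aicc:.1f}  BIC={bic:.1f}")
```

Because all three share the −2·loglik term, comparing two models on any one criterion reduces to trading goodness of fit against the respective parameter penalty.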
Ventilation/Perfusion Positron Emission Tomography—Based Assessment of Radiation Injury to Lung
Siva, Shankar; Hardcastle, Nicholas; Kron, Tomas; Bressel, Mathias; Callahan, Jason; MacManus, Michael P.; Shaw, Mark; Plumridge, Nikki; Hicks, Rodney J.; Steinfort, Daniel; Ball, David L.; Hofman, Michael S.
2015-10-01
Purpose: To investigate ⁶⁸Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). Methods and Materials: In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with the Akaike Information Criterion (AIC). Results: A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r²=0.99, P<.01), with ventilation strongly negatively linear (r²=0.95, P<.01). Logistic models did not provide a better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with a greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7/8 did not sustain this effect after treatment. Conclusions: Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET
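The linear-versus-logistic AIC comparison described above can be sketched on synthetic dose-response data. Everything here is an assumption for illustration: the dose bins and losses are simulated at roughly the reported 0.75 %/Gy, the AIC uses a Gaussian-likelihood form, the logistic width is fixed at 10 Gy, and the logistic model is fit by a crude grid search rather than a real optimizer so the sketch cannot fail to converge.

```python
import numpy as np

def aic_gauss(y, yhat, k):
    """Gaussian-likelihood AIC: n*ln(RSS/n) + 2k (one common form)."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Hypothetical dose-bin centres (Gy) and percent perfusion loss,
# generated to be nearly linear at ~0.75 %/Gy
dose = np.arange(5.0, 65.0, 10.0)
rng = np.random.default_rng(2)
loss = 0.75 * dose + rng.normal(scale=1.0, size=dose.size)

# 1-parameter linear model through the origin: closed-form least squares
slope = np.sum(dose * loss) / np.sum(dose ** 2)
aic_lin = aic_gauss(loss, slope * dose, k=1)

# 2-parameter logistic model (plateau and midpoint), fit by grid search
best_rss, best_fit = np.inf, None
for plateau in np.linspace(20.0, 80.0, 61):
    for d50 in np.linspace(5.0, 60.0, 56):
        yhat = plateau / (1.0 + np.exp(-(dose - d50) / 10.0))
        rss = np.sum((loss - yhat) ** 2)
        if rss < best_rss:
            best_rss, best_fit = rss, yhat
aic_logistic = aic_gauss(loss, best_fit, k=2)

print(f"AIC linear: {aic_lin:.1f}  logistic: {aic_logistic:.1f}")
```

When the true relationship is linear, the logistic model's extra parameter buys little fit, so its AIC is not better, which is the pattern the study reports.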
Projecting climate-driven increases in North American fire activity
NASA Astrophysics Data System (ADS)
Wang, D.; Morton, D. C.; Collatz, G. J.
2013-12-01
Climate regulates fire activity through controls on vegetation productivity (fuels), lightning ignitions, and conditions governing fire spread. In many regions of the world, human management also influences the timing, duration, and extent of fire activity. These coupled interactions between human and natural systems make fire a complex component of the Earth system. Satellite data provide valuable information on the spatial and temporal dynamics of recent fire activity, as active fires, burned area, and land cover information can be combined to separate wildfires from intentional burning for agriculture and forestry. Here, we combined satellite-derived burned area data with land cover and climate data to assess fire-climate relationships in North America between 2000 and 2012. We used the latest versions of the Global Fire Emissions Database (GFED) burned area product and Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate data to develop regional relationships between burned area and potential evaporation (PE), an integrated dryness metric. Logistic regression models were developed to link burned area with PE and individual climate variables during and preceding the fire season, and optimal models were selected based on the Akaike Information Criterion (AIC). Overall, our model explained 85% of the variance in burned area since 2000 across North America. Fire-climate relationships from the era of satellite observations provide a blueprint for potential changes in fire activity under scenarios of climate change. We used that blueprint to evaluate potential changes in fire activity over the next 50 years based on twenty models from the Coupled Model Intercomparison Project Phase 5 (CMIP5). All models suggest an increase of PE under low and high emissions scenarios (Representative Concentration Pathways (RCP) 4.5 and 8.5, respectively), with the largest increases in projected burned area across the western US and central Canada. Overall, near
National Health Information Center
Predictive Information: Status or Alert Information?
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Bruneau, Daniel; Press, Hayes N.
2008-01-01
Previous research investigating the efficacy of predictive information for detecting and diagnosing aircraft system failures found that subjects like to have predictive information concerning when a parameter will reach an alert range. This research focused on where the predictive information should be located: whether the information should be more closely associated with the parameter information or with the alert information. Each subject saw 3 forms of predictive information: (1) none, (2) a predictive alert message, and (3) predictive information on the status display. Generally, subjects performed better with, and preferred to have, predictive information available, although the difference between status and alert predictive information was minimal. Overall, for detection and recalling what happened, status predictive information is best; however, for diagnosis, alert predictive information holds a slight edge.
NASA Technical Reports Server (NTRS)
Rice, R. F.
1978-01-01
Various communication systems were considered which are required to transmit both imaging data and a typically error-sensitive class of data called general science/engineering (gse) data over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an Advanced Imaging Communication System (AICS) which exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.
Detection of temporal changes in earthquake rates
NASA Astrophysics Data System (ADS)
Touati, S.
2012-12-01
Many statistical analyses of earthquake rates and time-dependent forecasting of future rates involve the detection of changes in the basic rate of events, independent of the fluctuations caused by aftershock sequences. We examine some of the statistical techniques for inferring these changes, using both real and synthetic earthquake data to check the statistical significance of these inferences. One common method is to use the Akaike Information Criterion (AIC) to choose between a single model and a double model with a changepoint; this criterion evaluates the strength of the fit and incorporates a penalty for the extra parameters. We test this method on many realisations of the ETAS model, with and without changepoints present, to see how often it chooses the correct model. A more rigorous method is to calculate the Bayesian evidence, or marginal likelihood, for each model and then compare these. The evidence is essentially the likelihood of the model integrated over the whole of the model space, giving a measure of how likely the data is for that model. It does not rely on estimation of best-fit parameters, making it a better comparator than the AIC; Occam's razor also arises naturally in this process due to the fact that more complex models tend to be able to explain a larger range of observations, and therefore the relative likelihood of any particular observations will be smaller than for a simpler model. Evidence can be calculated using Markov Chain Monte Carlo techniques. We compare these two approaches on synthetic data. We also look at the 1997-98 Colfiorito sequence in Umbria-Marche, Italy, using maximum likelihood to fit the ETAS model and then simulating the ETAS model to create synthetic versions of the catalogue for comparison. We simulate using ensembles of parameter values sampled from the posterior for each parameter, with the largest events artificially inserted, to compare the resultant event rates, inter-event time distributions and other
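The AIC changepoint test described above can be sketched for a pure Poisson event rate (ignoring the aftershock clustering that the ETAS model handles). The event counts are synthetic, with a rate change deliberately placed at day 100; the single-rate model has one parameter, and the changepoint model is charged three (two rates plus the changepoint itself, one common accounting).

```python
import numpy as np
from scipy.special import gammaln

def poisson_ll(counts, lam):
    """Log-likelihood of iid Poisson counts at rate lam."""
    return np.sum(counts * np.log(lam) - lam - gammaln(counts + 1))

rng = np.random.default_rng(3)
# Synthetic daily event counts with a rate change at day 100
counts = np.concatenate([rng.poisson(2.0, 100), rng.poisson(5.0, 100)])

# Model 1: single rate (1 parameter); the MLE of the rate is the mean
aic1 = 2 * 1 - 2 * poisson_ll(counts, counts.mean())

# Model 2: one changepoint, profiling the likelihood over its location
# (3 parameters: two rates plus the changepoint)
best_ll, best_cp = -np.inf, None
for cp in range(10, len(counts) - 10):
    ll = poisson_ll(counts[:cp], counts[:cp].mean()) + \
         poisson_ll(counts[cp:], counts[cp:].mean())
    if ll > best_ll:
        best_ll, best_cp = ll, cp
aic2 = 2 * 3 - 2 * best_ll

print(f"AIC single rate: {aic1:.1f}   changepoint at day {best_cp}: {aic2:.1f}")
```

The changepoint model wins on AIC only when the likelihood gain outweighs the penalty for its extra parameters; as the abstract notes, running this on many realisations without a true changepoint reveals how often the penalty is overcome by chance.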
2013-01-01
Background The primary study objective was to examine whether the presence of food retailers surrounding schools was associated with students’ lunchtime eating behaviours. The secondary objective was to determine whether measures of the food retail environment around schools captured using road network or circular buffers were more strongly related to eating behaviours while at school. Methods Grade 9 and 10 students (N=6,971) who participated in the 2009/10 Canadian Health Behaviour in School-Aged Children (HBSC) Survey were included in this study. The outcome was determined by students’ self-reports of where they typically ate their lunch during school days. Circular and road network-based buffers were created for a 1 km distance surrounding 158 schools participating in the HBSC. The addresses of fast food restaurants, convenience stores and coffee/donut shops were mapped within the buffers. Multilevel logistic regression was used to determine whether there was a relationship between the presence of food retailers near schools and students regularly eating their lunch at a fast food restaurant, snack-bar or café. The Akaike Information Criterion (AIC) value, a measure of goodness-of-fit, was used to determine the optimal buffer type. Results For the 1 km circular buffers, students with 1–2 (OR=1.10, 95% CI: 0.57-2.11), 3–4 (OR=1.45, 95% CI: 0.75-2.82) and ≥5 nearby food retailers (OR=2.94, 95% CI: 1.71-5.09) were more likely to eat lunch at a food retailer compared to students with no nearby food retailers. The relationships were slightly stronger when assessed via 1 km road network buffers, with a greater likelihood of eating at a food retailer for 1–2 (OR=1.20, 95% CI: 0.74-1.95), 3–4 (OR=3.19, 95% CI: 1.66-6.13) and ≥5 nearby food retailers (OR=3.54, 95% CI: 2.08-6.02). Road network buffers appeared to provide a better measure of the food retail environment, as indicated by a lower AIC value (3332 vs. 3346). Conclusions There was a strong
Human Benzene Metabolism Following Occupational and Environmental Exposures
Rappaport, Stephen M.; Kim, Sungkyoon; Lan, Qing; Li, Guilan; Vermeulen, Roel; Waidyanatha, Suramya; Zhang, Luoping; Yin, Songnian; Smith, Martyn T.; Rothman, Nathaniel
2011-01-01
We previously reported evidence that humans metabolize benzene via two enzymes, including a hitherto unrecognized high-affinity enzyme that was responsible for an estimated 73 percent of total urinary metabolites [sum of phenol (PH), hydroquinone (HQ), catechol (CA), E,E-muconic acid (MA), and S-phenylmercapturic acid (SPMA)] in nonsmoking females exposed to benzene at sub-saturating (ppb) air concentrations. Here, we used the same Michaelis-Menten-like kinetic models to individually analyze urinary levels of PH, HQ, CA and MA from 263 nonsmoking Chinese women (179 benzene-exposed workers and 84 control workers) with estimated benzene air concentrations ranging from less than 0.001 ppm to 299 ppm. One model depicted benzene metabolism as a single enzymatic process (1-enzyme model) and the other as two enzymatic processes which competed for access to benzene (2-enzyme model). We evaluated model fits based upon the difference in values of Akaike’s Information Criterion (ΔAIC), and we gauged the weights of evidence favoring the two models based upon the associated Akaike weights and Evidence Ratios. For each metabolite, the 2-enzyme model provided a better fit than the 1-enzyme model with ΔAIC values decreasing in the order 9.511 for MA, 7.379 for PH, 1.417 for CA, and 0.193 for HQ. The corresponding weights of evidence favoring the 2-enzyme model (Evidence Ratios) were: 116.2:1 for MA, 40.0:1 for PH, 2.0:1 for CA and 1.1:1 for HQ. These results indicate that our earlier findings from models of total metabolites were driven largely by MA, representing the ring-opening pathway, and by PH, representing the ring-hydroxylation pathway. The predicted percentage of benzene metabolized by the putative high-affinity enzyme at an air concentration of 0.001 ppm was 88% based upon urinary MA and was 80% based upon urinary PH. As benzene concentrations increased, the respective percentages of benzene metabolized to MA and PH by the high-affinity enzyme decreased successively
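The evidence ratios quoted above follow directly from the ΔAIC values via ER = exp(ΔAIC/2), and the corresponding Akaike weight of the better model in a two-model set is w = 1/(1 + exp(−ΔAIC/2)). A quick check using the paper's reported ΔAIC values:

```python
import math

# Reported ΔAIC values (1-enzyme minus 2-enzyme model) per metabolite
delta_aic = {"MA": 9.511, "PH": 7.379, "CA": 1.417, "HQ": 0.193}

for metab, d in delta_aic.items():
    er = math.exp(d / 2)              # evidence ratio favouring the 2-enzyme model
    w = 1 / (1 + math.exp(-d / 2))    # Akaike weight of the 2-enzyme model
    print(f"{metab}: evidence ratio {er:6.1f}:1, Akaike weight {w:.3f}")
```

This reproduces the reported ratios (116.2:1 for MA, 40.0:1 for PH, 2.0:1 for CA, 1.1:1 for HQ), confirming they are exp(ΔAIC/2) transforms of the ΔAIC values rather than independent quantities.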
Denis-Robichaud, J; Dubuc, J
2015-10-01
The objectives of this observational study were to identify the optimal diagnostic criteria for purulent vaginal discharge (PVD) and cytological endometritis (ENDO) using vaginal discharge, endometrial cytology, and leukocyte esterase (LE) tests, and to quantify their effect on subsequent reproductive performance. Data generated from 1,099 untreated Holstein cows (28 herds) enrolled in a randomized clinical trial were used in this study. Cows were examined at 35 (± 7) d in milk for PVD using vaginal discharge scoring and for ENDO using endometrial cytology and LE testing. Optimal combinations of diagnostic criteria were determined based on the lowest Akaike information criterion (AIC) to predict pregnancy status at first service. Once identified, these criteria were used to quantify the effect of PVD and ENDO on pregnancy risk at first service and on pregnancy hazard until 200 d in milk (survival analysis). Predicting ability of these diagnostic criteria was determined using area under the curve (AUC) values. The prevalence of PVD and ENDO was calculated as well as the agreement between endometrial cytology and LE. The optimal diagnostic criteria (lowest AIC) identified in this study were purulent vaginal discharge or worse (≥ 4), ≥ 6% polymorphonuclear leukocytes (PMNL) by endometrial cytology, and small amounts of leukocytes or worse (≥ 1) by LE testing. When using the combination of vaginal discharge and PMNL percentage as diagnostic tools (n = 1,099), the prevalences of PVD and ENDO were 17.1 and 36.2%, respectively. When using the combination of vaginal discharge and LE (n = 915), the prevalences of PVD and ENDO were 17.1 and 48.4%. The optimal strategies for predicting pregnancy status at first service were the use of LE only (AUC = 0.578) and PMNL percentage only (AUC = 0.575). Cows affected by PVD and ENDO had 0.36 and 0.32 times the odds, respectively, of being pregnant at first service when using PMNL percentage compared with that of unaffected