Sample records for optimizing statistical analyses

  1. Cocaine profiling for strategic intelligence, a cross-border project between France and Switzerland: part II. Validation of the statistical methodology for the profiling of cocaine.

    PubMed

    Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P

    2008-05-20

    Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that will provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FID) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set) followed by the Cosine or Pearson correlation coefficient was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. Centralising the analyses in a single laboratory is therefore no longer required to compare samples seized in different countries. This enables collaboration while preserving jurisdictional control over the data.
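
    As a rough illustration of the kind of comparison metric the abstract describes, the sketch below applies an N+S-style pre-treatment to synthetic GC-FID peak areas and then scores two seizures with the Cosine and Pearson coefficients; the data and any thresholds for calling samples "linked" are invented for illustration, not taken from the study.

    ```python
    import numpy as np
    from scipy.spatial.distance import cosine
    from scipy.stats import pearsonr

    # Hypothetical GC-FID peak areas: rows = seizures, columns = target alkaloid peaks.
    rng = np.random.default_rng(0)
    profiles = rng.lognormal(mean=2.0, sigma=0.5, size=(6, 8))

    # "N+S"-style pre-treatment (as described in the abstract): divide each peak area
    # by that peak's standard deviation computed over the whole data set.
    pretreated = profiles / profiles.std(axis=0, ddof=1)

    def cosine_similarity(a, b):
        return 1.0 - cosine(a, b)

    # Pairwise similarity between two seizures; high values suggest a chemical link.
    s_cos = cosine_similarity(pretreated[0], pretreated[1])
    s_pearson, _ = pearsonr(pretreated[0], pretreated[1])
    print(f"cosine similarity = {s_cos:.3f}, Pearson r = {s_pearson:.3f}")
    ```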

  2. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
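
    The approximate statistical moment method can be sketched on a toy function standing in for the CFD output: first-order mean and variance estimates built from sensitivity derivatives are checked against Monte Carlo, mirroring the comparison the abstract describes. The function, input statistics, and finite-difference derivatives below are illustrative assumptions, not the quasi 3-D Euler code or its adjoint machinery.

    ```python
    import numpy as np

    # Toy stand-in for a CFD output functional f(x) of independent normal inputs x.
    def f(x):
        return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[2]

    mu = np.array([1.0, 0.5, 2.0])       # input means
    sigma = np.array([0.05, 0.02, 0.1])  # input standard deviations

    # First-order sensitivity derivatives df/dx_i at the mean (finite differences here;
    # the paper obtains them far more efficiently from the CFD code itself).
    h = 1e-6
    grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2 * h) for e in np.eye(3)])

    # First-order approximate moments of the output.
    mean_fo = f(mu)                        # mean estimate
    var_fo = np.sum((grad * sigma) ** 2)   # variance estimate

    # Monte Carlo check of the approximation.
    samples = np.random.default_rng(1).normal(mu, sigma, size=(200_000, 3))
    mc = np.apply_along_axis(f, 1, samples)
    print("moment method:", mean_fo, var_fo ** 0.5)
    print("Monte Carlo  :", mc.mean(), mc.std())
    ```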

  3. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.

  4. Comparative effectiveness research methodology using secondary data: A starting user's guide.

    PubMed

    Sun, Maxine; Lipsitz, Stuart R

    2018-04-01

    The use of secondary data, such as claims or administrative data, in comparative effectiveness research has grown tremendously in recent years. We believe that the current review can help investigators relying on secondary data to (1) gain insight into both the methodologies and statistical methods, (2) better understand the necessity of rigorous planning before initiating a comparative effectiveness investigation, and (3) optimize the quality of their investigations. Specifically, we review concepts of adjusted analyses and confounders, methods of propensity score analyses and instrumental variable analyses, risk prediction models (logistic and time-to-event), decision-curve analysis, as well as the interpretation of the P value and hypothesis testing. Overall, we hope that this review helps investigators relying on secondary data for comparative effectiveness research to better understand the necessity of rigorous planning before study start and to gain better insight into the choice of statistical methods, so as to optimize the quality of the research study. Copyright © 2017 Elsevier Inc. All rights reserved.
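
    As a hedged illustration of one of the reviewed techniques, the sketch below estimates a propensity score with logistic regression and forms inverse-probability-of-treatment weights on synthetic claims-like data. The variable names and effect sizes are invented, and weighting is only one of several propensity-score approaches the review covers (matching and stratification are others).

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Synthetic "claims-like" data: treatment assignment depends on age and comorbidity,
    # so a crude treated/untreated comparison is confounded.
    rng = np.random.default_rng(0)
    n = 5000
    age = rng.normal(65, 10, n)
    comorbidity = rng.poisson(2, n)
    p_treat = 1 / (1 + np.exp(-(-3 + 0.03 * age + 0.3 * comorbidity)))
    treated = rng.binomial(1, p_treat)
    outcome = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.02 * age + 0.4 * comorbidity - 0.5 * treated))))
    df = pd.DataFrame({"age": age, "comorbidity": comorbidity, "treated": treated, "outcome": outcome})

    # 1) Propensity score: probability of treatment given measured confounders.
    ps_model = LogisticRegression(max_iter=1000).fit(df[["age", "comorbidity"]], df["treated"])
    ps = ps_model.predict_proba(df[["age", "comorbidity"]])[:, 1]

    # 2) Inverse-probability-of-treatment weights.
    w = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

    # 3) Weighted risk difference as a simple adjusted effect estimate.
    t = df["treated"] == 1
    rd = (np.average(df["outcome"][t], weights=w[t])
          - np.average(df["outcome"][~t], weights=w[~t]))
    print(f"IPTW-adjusted risk difference: {rd:.3f}")
    ```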

  5. Dark matter constraints from a joint analysis of dwarf Spheroidal galaxy observations with VERITAS

    DOE PAGES

    Archambault, S.; Archer, A.; Benbow, W.; ...

    2017-04-05

    We present constraints on the annihilation cross section of weakly interacting massive particle (WIMP) dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events.

  6. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.
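
    A minimal sketch of the chance-constraint idea (not the authors' launch-vehicle model): a toy terminating simulation is replicated to estimate the probability that a makespan exceeds a limit, and the smallest resource level satisfying P(violation) ≤ α is selected. The simulation, limit and α below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def campaign_makespans(n_crews, n_tasks=40, n_reps=2000):
        """Toy terminating simulation: n_tasks jobs with lognormal durations are served
        by n_crews parallel crews (greedy assignment); returns replicate makespans."""
        durations = rng.lognormal(mean=1.0, sigma=0.4, size=(n_reps, n_tasks))
        makespans = np.empty(n_reps)
        for r in range(n_reps):
            crews = np.zeros(n_crews)            # time at which each crew next frees up
            for d in durations[r]:
                i = crews.argmin()               # next job goes to the earliest-free crew
                crews[i] += d
            makespans[r] = crews.max()
        return makespans

    limit, alpha = 35.0, 0.05                    # chance constraint: P(makespan > limit) <= alpha
    for n_crews in range(2, 10):
        p_violate = (campaign_makespans(n_crews) > limit).mean()
        if p_violate <= alpha:                   # smallest resource level meeting the constraint
            print(f"{n_crews} crews, estimated violation probability {p_violate:.3f}")
            break
    ```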

  7. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data, indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  8. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.

  9. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing application in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
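
    A kriging metamodel of the kind surveyed here can be sketched with scikit-learn's Gaussian process regressor. The "expensive" code below is a stand-in function, and the kernel choice and tiny nugget are assumptions; the survey's caution about applying i.i.d.-error statistics to deterministic codes is why the nugget is kept near zero.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Stand-in for an expensive, deterministic analysis code.
    def expensive_code(x):
        return np.sin(3 * x) + 0.5 * x ** 2

    # Small design of experiments over the input range.
    X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
    y_train = expensive_code(X_train).ravel()

    # Kriging metamodel: interpolates the deterministic samples (alpha ~ 0, no noise term).
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-10, normalize_y=True)
    gp.fit(X_train, y_train)

    # Cheap predictions plus an uncertainty estimate at untried points.
    X_new = np.linspace(0.0, 2.0, 5).reshape(-1, 1)
    y_pred, y_std = gp.predict(X_new, return_std=True)
    print(np.c_[X_new.ravel(), y_pred, y_std])
    ```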

  10. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
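
    The LP relaxation itself (for the ordinary α = 2 vertex-cover case) is easy to reproduce with scipy's linprog on a small random graph; this sketch is only the optimization step, not the authors' lattice-gas or replica analysis, and the graph size and edge probability are arbitrary choices.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Small Erdős–Rényi-style random graph (ordinary edges, i.e. alpha = 2 in the paper's notation).
    rng = np.random.default_rng(3)
    n, p = 30, 0.08
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

    # LP relaxation of minimum vertex cover:
    #   minimize sum_i x_i  subject to  x_i + x_j >= 1 for every edge (i, j),  0 <= x_i <= 1.
    A_ub = np.zeros((len(edges), n))
    for k, (i, j) in enumerate(edges):
        A_ub[k, i] = A_ub[k, j] = -1.0          # encoded as -x_i - x_j <= -1
    b_ub = -np.ones(len(edges))
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")

    # Vertex-cover LP optima are half-integral (values in {0, 1/2, 1}); a large 1/2-mass
    # is the signature of the regime where the relaxation stops matching the IP optimum.
    print("LP optimum:", res.fun)
    print("fraction of half-integral variables:", np.mean(np.isclose(res.x, 0.5, atol=1e-6)))
    ```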

  11. Statistical mechanical analysis of linear programming relaxation for combinatorial optimization problems.

    PubMed

    Takabe, Satoshi; Hukushima, Koji

    2016-05-01

    Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.

  12. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  13. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  14. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
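
    The response-surface step can be sketched as fitting a full quadratic polynomial to a small designed sample of an "expensive" analysis and then optimizing the cheap surrogate. The analysis function, factor ranges and 3-level design below are illustrative assumptions, not the aero-propulsion model from the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical higher-fidelity analysis sampled on a small 3-level, 2-factor design.
    def analysis(x1, x2):
        return 4.0 + (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.2) ** 2 + 0.5 * x1 * x2

    levels = [-1.0, 0.0, 1.0]                             # coded factor levels
    X = np.array([(a, b) for a in levels for b in levels])
    y = np.array([analysis(a, b) for a, b in X])

    # Full quadratic response surface: y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    terms = lambda x1, x2: [1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2]
    A = np.array([terms(a, b) for a, b in X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # The cheap polynomial now stands in for the analysis during system-level optimization.
    rsm = lambda x: float(np.dot(beta, terms(x[0], x[1])))
    opt = minimize(rsm, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
    print("RSM optimum (coded units):", np.round(opt.x, 3), " predicted response:", round(opt.fun, 3))
    ```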

  15. permGPU: Using graphics processing units in RNA microarray association studies.

    PubMed

    Shterev, Ivo D; Jung, Sin-Ho; George, Stephen L; Owzar, Kouros

    2010-06-16

    Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
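
    The permutation-resampling core that permGPU accelerates can be written in a few lines of (CPU-bound) Python. The data are synthetic, and the max-statistic family-wise-error adjustment shown is just one way of using the permutation null; it is not a claim about permGPU's exact internals.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy expression matrix: 1000 probes x 20 samples, binary trait (10 vs 10).
    expr = rng.normal(size=(1000, 20))
    trait = np.array([0] * 10 + [1] * 10)
    expr[0, trait == 1] += 1.5            # plant one truly associated probe

    def t_stats(x, g):
        a, b = x[:, g == 0], x[:, g == 1]
        se = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1] + b.var(axis=1, ddof=1) / b.shape[1])
        return (b.mean(axis=1) - a.mean(axis=1)) / se

    obs = t_stats(expr, trait)

    # Permutation resampling of the trait labels; the max-|t| null distribution gives
    # family-wise-error-adjusted p-values. Each permutation is independent, which is
    # exactly the embarrassingly parallel structure a GPU implementation exploits.
    B = 2000
    max_null = np.empty(B)
    for b in range(B):
        max_null[b] = np.abs(t_stats(expr, rng.permutation(trait))).max()
    adj_p = np.array([(max_null >= abs(t)).mean() for t in obs])
    print("smallest adjusted p-value:", adj_p.min(), "at probe", adj_p.argmin())
    ```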

  16. Experimental Investigation and Optimization of Response Variables in WEDM of Inconel - 718

    NASA Astrophysics Data System (ADS)

    Karidkar, S. S.; Dabade, U. A.

    2016-02-01

    Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with higher strengths and capabilities are continually being developed to fulfil customers' needs. Inconel-718 is one such material, extensively used in aerospace applications such as gas turbines, rocket motors, and spacecraft, as well as in nuclear reactors and pumps. This paper deals with the experimental investigation of optimal machining parameters in WEDM for surface roughness, kerf width and dimensional deviation using design of experiments (DoE), specifically the Taguchi methodology with an L9 orthogonal array. With the peak current held constant at 70 A, the effects of the other process parameters on the above response variables were analysed. The experimental results were statistically analysed using Minitab-16 software. Analysis of Variance (ANOVA) shows pulse-on time to be the most influential parameter, followed by wire tension, whereas spark gap set voltage is observed to be a non-influencing parameter. The multi-objective optimization technique Grey Relational Analysis (GRA) identifies the optimal machining parameters as a pulse-on time of 108 machine units, a spark gap set voltage of 50 V and a wire tension of 12 g for the response variables considered in the experimental analysis.
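
    A grey relational analysis of the kind reported here reduces to a short calculation once the response table is available. The run values below are invented placeholders (all treated as smaller-is-better), not the paper's measurements, and the distinguishing coefficient ζ = 0.5 is the customary default.

    ```python
    import numpy as np

    # Hypothetical responses for 9 Taguchi runs (columns: surface roughness, kerf width,
    # dimensional deviation), all smaller-is-better; values are illustrative only.
    Y = np.array([
        [2.1, 0.32, 0.050], [2.4, 0.30, 0.042], [2.0, 0.34, 0.055],
        [1.8, 0.31, 0.038], [2.6, 0.35, 0.060], [2.2, 0.33, 0.047],
        [1.9, 0.29, 0.035], [2.5, 0.36, 0.058], [2.3, 0.30, 0.044],
    ])

    # 1) Grey relational normalization for smaller-is-better responses.
    norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))

    # 2) Grey relational coefficients (distinguishing coefficient zeta = 0.5).
    zeta = 0.5
    delta = np.abs(1.0 - norm)                       # deviation from the ideal sequence (all ones)
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

    # 3) Grey relational grade = mean coefficient per run; the best run maximizes it.
    grade = grc.mean(axis=1)
    print("grey relational grades:", np.round(grade, 3), "best run:", grade.argmax() + 1)
    ```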

  17. Influence of study goals on study design and execution.

    PubMed

    Kirklin, J W; Blackstone, E H; Naftel, D C; Turner, M E

    1997-12-01

    From the viewpoint of a clinician who makes recommendations to patients about choosing from the multiple possible management schemes, quantitative information derived from statistical analyses of observational studies is useful. Although random assignment of therapy is optimal, appropriately performed studies in which therapy has been nonrandomly "assigned" are considered acceptable, albeit occasionally with limitations in inferences. The analyses are considered most useful when they generate multivariable equations suitable for predicting time-related outcomes in individual patients. Graphic presentations improve communication with patients and facilitate truly informed consent.

  18. Prognostic value of inflammation-based scores in patients with osteosarcoma

    PubMed Central

    Liu, Bangjian; Huang, Yujing; Sun, Yuanjue; Zhang, Jianjun; Yao, Yang; Shen, Zan; Xiang, Dongxi; He, Aina

    2016-01-01

    Systemic inflammation responses have been associated with cancer development and progression. C-reactive protein (CRP), Glasgow prognostic score (GPS), neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), lymphocyte-monocyte ratio (LMR), and neutrophil-platelet score (NPS) have been shown to be independent risk factors in various types of malignant tumors. This retrospective analysis of 162 osteosarcoma cases was performed to estimate their predictive value for survival in osteosarcoma. All statistical analyses were performed with SPSS statistical software. Receiver operating characteristic (ROC) analysis was used to set optimal thresholds; the area under the curve (AUC) was used to show the discriminatory abilities of inflammation-based scores; Kaplan-Meier analysis was performed to plot the survival curves; Cox regression models were employed to determine the independent prognostic factors. The optimal cut-off points of NLR, PLR, and LMR were 2.57, 123.5 and 4.73, respectively. GPS and NLR had a markedly larger AUC than CRP, PLR and LMR. High levels of CRP, GPS, NLR, PLR, and a low level of LMR were significantly associated with adverse prognosis (P < 0.05). Multivariate Cox regression analyses revealed that GPS, NLR, and occurrence of metastasis were the top risk factors associated with death of osteosarcoma patients. PMID:28008988
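
    The optimal cut-off selection described (ROC analysis with AUC) is commonly done with Youden's J statistic, sketched below on synthetic scores. The cut-offs reported in the study (e.g. NLR = 2.57) come from the real cohort, not from this toy data, and the study does not state which cut-off criterion it used, so Youden's J is an assumption.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Synthetic stand-in: an inflammation score (NLR-like) that tends to be higher in
    # patients who later died.
    rng = np.random.default_rng(0)
    score = np.concatenate([rng.normal(2.2, 0.8, 110), rng.normal(3.2, 0.9, 52)])
    died = np.concatenate([np.zeros(110, dtype=int), np.ones(52, dtype=int)])

    fpr, tpr, thresholds = roc_curve(died, score)
    auc = roc_auc_score(died, score)

    # Youden's J picks the cut-off maximizing sensitivity + specificity - 1.
    j = tpr - fpr
    k = j.argmax()
    print(f"AUC = {auc:.3f}, optimal cut-off = {thresholds[k]:.2f}, "
          f"sensitivity = {tpr[k]:.2f}, specificity = {1 - fpr[k]:.2f}")
    ```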

  19. Chasing the peak: optimal statistics for weak shear analyses

    NASA Astrophysics Data System (ADS)

    Smit, Merijn; Kuijken, Konrad

    2018-01-01

    Context. Weak gravitational lensing analyses are fundamentally limited by the intrinsic distribution of galaxy shapes. It is well known that this distribution of galaxy ellipticity is non-Gaussian, and the traditional estimation methods, explicitly or implicitly assuming Gaussianity, are not necessarily optimal. Aims: We aim to explore alternative statistics for samples of ellipticity measurements. An optimal estimator needs to be asymptotically unbiased, efficient, and robust in retaining these properties for various possible sample distributions. We take the non-linear mapping of gravitational shear and the effect of noise into account. We then discuss how the distribution of individual galaxy shapes in the observed field of view can be modeled by fitting Fourier modes to the shear pattern directly. This allows scientific analyses using statistical information of the whole field of view, instead of locally sparse and poorly constrained estimates. Methods: We simulated samples of galaxy ellipticities, using both theoretical distributions and data for ellipticities and noise. We determined the possible bias Δe, the efficiency η and the robustness of the least absolute deviations, the biweight, and the convex hull peeling (CHP) estimators, compared to the canonical weighted mean. Using these statistics for regression, we have shown the applicability of direct Fourier mode fitting. Results: We find an improved performance of all estimators when iteratively reducing the residuals after de-shearing the ellipticity samples by the estimated shear, which removes the asymmetry in the ellipticity distributions. We show that these estimators are then unbiased in the absence of noise, and decrease noise bias by more than 30%. Our results show that the CHP estimator distribution is skewed, but still centered around the underlying shear, and its bias is least affected by noise. We find the least absolute deviations estimator to be the most efficient estimator in almost all cases, except in the Gaussian case, where it is still competitive (0.83 < η < 5.1) and therefore robust. These results hold when fitting Fourier modes, where amplitudes of variation in ellipticity are determined to the order of 10^-3. Conclusions: The peak of the ellipticity distribution is a direct tracer of the underlying shear and unaffected by noise, and we have shown that estimators that are sensitive to a central cusp perform more efficiently, potentially reducing uncertainties and significantly decreasing noise bias. These results become increasingly important as survey sizes increase and systematic issues in shape measurements decrease.
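
    The estimators compared in the paper can be tried on a synthetic heavy-tailed "ellipticity" sample. The biweight implementation below is a standard textbook version and the 1-D distributions are illustrative, not the distributions or the de-shearing iteration used by the authors.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic 1-D ellipticity-like samples: a narrow core plus heavy tails, shifted by
    # a small true shear g (values are illustrative only).
    rng = np.random.default_rng(1)
    g = 0.02
    e = g + np.concatenate([rng.normal(0, 0.25, 8000),
                            stats.t.rvs(df=3, size=2000, random_state=2) * 0.25])

    def biweight_location(x, c=6.0, tol=1e-8):
        """Tukey's biweight location, iterated to convergence (a standard robust M-estimator)."""
        m = np.median(x)
        for _ in range(50):
            mad = np.median(np.abs(x - m)) or tol
            u = (x - m) / (c * mad)
            w = (1 - u ** 2) ** 2 * (np.abs(u) < 1)
            m_new = np.sum(w * x) / np.sum(w)
            if abs(m_new - m) < tol:
                break
            m = m_new
        return m

    print("weighted mean     :", e.mean())
    print("median (L1 / LAD) :", np.median(e))        # minimizes least absolute deviations
    print("biweight location :", biweight_location(e))
    print("true shear        :", g)
    ```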

  20. A systematic optimization for graphene-based supercapacitors

    NASA Astrophysics Data System (ADS)

    Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho

    2017-08-01

    Increasing the energy-storage density for supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing their mixing rate for maximizing the performance of each candidate material have been insufficient, which hinders the progress in their technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be (rGO: AB: PVDF  =  0.95: 0.00: 0.05). The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.

  1. Analyses of global sea surface temperature 1856-1991

    NASA Astrophysics Data System (ADS)

    Kaplan, Alexey; Cane, Mark A.; Kushnir, Yochanan; Clement, Amy C.; Blumenthal, M. Benno; Rajagopalan, Balaji

    1998-08-01

    Global analyses of monthly sea surface temperature (SST) anomalies from 1856 to 1991 are produced using three statistically based methods: optimal smoothing (OS), the Kalman filter (KF) and optimal interpolation (OI). Each of these is accompanied by estimates of the error covariance of the analyzed fields. The spatial covariance function these methods require is estimated from the available data; the time-marching model is a first-order autoregressive model, again estimated from data. The data input for the analyses are monthly anomalies from the United Kingdom Meteorological Office historical sea surface temperature data set (MOHSST5) [Parker et al., 1994] of the Global Ocean Surface Temperature Atlas (GOSTA) [Bottomley et al., 1990]. These analyses are compared with each other, with GOSTA, and with an analysis generated by projection (P) onto a set of empirical orthogonal functions (as in Smith et al. [1996]). In theory, the quality of the analyses should rank in the order OS, KF, OI, P, and GOSTA. It is found that the first four give comparable results in the data-rich periods (1951-1991), but at times when data are sparse the first three differ significantly from P and GOSTA. At these times the latter two often have extreme and fluctuating values, prima facie evidence of error. The statistical schemes are also verified against data not used in any of the analyses (proxy records derived from corals and air temperature records from coastal and island stations). We also present evidence that the analysis error estimates are indeed indicative of the quality of the products. At most times the OS and KF products are close to the OI product, but at times of especially poor coverage their use of information from other times is advantageous. The methods appear to reconstruct the major features of the global SST field from very sparse data. Comparisons with other indications of the El Niño-Southern Oscillation cycle show that the analyses provide usable information on interannual variability as far back as the 1860s.
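
    The optimal interpolation (OI) update used by one of the three schemes has a compact closed form. The sketch below applies it to a toy 1-D anomaly field with an assumed Gaussian background-error covariance, which is far simpler than the data-estimated covariances and the global SST fields in the paper.

    ```python
    import numpy as np

    # Minimal 1-D OI analysis on a 10-point grid: 3 observations, Gaussian background-error
    # covariance with decorrelation length L, diagonal observation-error covariance.
    n, L, sigma_b, sigma_o = 10, 2.0, 1.0, 0.5
    x = np.arange(n, dtype=float)

    B = sigma_b ** 2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)  # background covariance
    obs_idx = np.array([1, 4, 8])
    H = np.zeros((3, n)); H[np.arange(3), obs_idx] = 1.0                    # observation operator
    R = sigma_o ** 2 * np.eye(3)

    xb = np.zeros(n)                           # background (first-guess) anomaly field
    yo = np.array([0.8, -0.3, 0.5])            # observed anomalies

    # OI / BLUE analysis:  xa = xb + K (yo - H xb),  K = B H^T (H B H^T + R)^-1
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    xa = xb + K @ (yo - H @ xb)
    Pa = (np.eye(n) - K @ H) @ B               # analysis-error covariance estimate
    print(np.round(xa, 2))
    print("analysis-error std:", np.round(np.sqrt(np.diag(Pa)), 2))
    ```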

  2. Assessing the significance of pedobarographic signals using random field theory.

    PubMed

    Pataky, Todd C

    2008-08-07

    Traditional pedobarographic statistical analyses are conducted over discrete regions. Recent studies have demonstrated that regionalization can corrupt pedobarographic field data through conflation when arbitrary dividing lines inappropriately delineate smooth field processes. An alternative is to register images such that homologous structures optimally overlap and then conduct statistical tests at each pixel to generate statistical parametric maps (SPMs). The significance of SPM processes may be assessed within the framework of random field theory (RFT). RFT is ideally suited to pedobarographic image analysis because its fundamental data unit is a lattice sampling of a smooth and continuous spatial field. To correct for the vast number of multiple comparisons inherent in such data, recent pedobarographic studies have employed a Bonferroni correction to retain a constant family-wise error rate. This approach unfortunately neglects the spatial correlation of neighbouring pixels, so provides an overly conservative (albeit valid) statistical threshold. RFT generally relaxes the threshold depending on field smoothness and on the geometry of the search area, but it also provides a framework for assigning p values to suprathreshold clusters based on their spatial extent. The current paper provides an overview of basic RFT concepts and uses simulated and experimental data to validate both RFT-relevant field smoothness estimations and RFT predictions regarding the topological characteristics of random pedobarographic fields. Finally, previously published experimental data are re-analysed using RFT inference procedures to demonstrate how RFT yields easily understandable statistical results that may be incorporated into routine clinical and laboratory analyses.

  3. Optimisation of critical medium components and culture conditions for enhanced biomass and lipid production in the oleaginous diatom Navicula phyllepta: a statistical approach.

    PubMed

    Sabu, Sanyo; Singh, Isaac Sarojini Bright; Joseph, Valsamma

    2017-12-01

    Diatoms hold great promise as potential sources of biofuel production. In the present study, the biomass and lipid production in the marine diatom Navicula phyllepta, isolated from Cochin estuary, India and identified as a potential biodiesel feedstock, were optimized using Plackett-Burman (PB) statistical experimental design followed by central composite design (CCD) and response surface methodology (RSM). The growth analyses of the isolate in different nitrogen sources, salinities and five different enriched sea water media showed the best growth in the cheapest medium with minimum components using urea as nitrogen source at salinity between 25 and 40 g kg⁻¹. Plackett-Burman experimental analyses for screening urea, sodium metasilicate, sodium dihydrogen phosphate, ferric chloride, salinity, temperature, pH and agitation influencing lipid and biomass production showed that silicate and temperature had a positive coefficient on biomass production, and temperature had a significant positive coefficient, while urea and phosphate showed a negative coefficient on lipid content. A 2⁴ factorial central composite design (FCCD) was used to optimize the concentration of the factors selected. The optimized media resulted in a 1.62-fold increase (64%) in biomass (1.2 ± 0.08 g L⁻¹) and a 1.2-fold increase (22%) in estimated total lipid production (0.11 ± 0.003 g L⁻¹) compared to the original media within 12 days of culturing. A significantly higher biomass and lipid production in the optimized medium demands further development of a two-stage strategy of biomass production followed by induction of high lipid production under nutrient limitation or varying culture conditions for large-scale production of biodiesel from the marine diatom.

  4. Dispositional optimism and physical wellbeing: the relevance of culture, gender, and socioeconomic status.

    PubMed

    Khallad, Yacoub

    2013-01-01

    The present study examined the relationship between dispositional optimism and physical wellbeing (as reflected in physical symptom reporting) in two groups of American and Jordanian college students. It also assessed moderation effects of culture, gender, and socioeconomic status (SES). Participants were administered a questionnaire consisting of items pertaining to dispositional optimism (as measured by the Revised Life Orientation Test, LOT-R) along with items assessing physical symptom reporting and sociodemographic factors (e.g., gender, socioeconomic status). The results revealed significant negative correlations between dispositional optimism and physical symptom reporting for both American and Jordanian participants, although the magnitude of the correlation for the American group was noticeably larger than that for the Jordanian group. The results also showed that women, especially Jordanians, were more likely than men to report physical symptoms. Among Jordanians, physical symptom reporting was more common among those of lower SES. No statistically significant differences in physical symptom reporting were found between American men and women or between the two cultural groups. Multiple regression analyses revealed no statistically significant interactions between optimism and cultural background, optimism and gender, or optimism and SES. Overall, the results suggest that optimism is the factor most predictive of physical symptom reporting, followed by SES and gender. These results corroborate previous findings on the relationship between dispositional optimism and physical wellbeing, and point to crosscultural differences in relationship patterns. These differences suggest that although personality characteristics such as optimism may play an important role in the physical wellbeing of both Western and non-Western groups, the influence of sociodemographic factors such as gender and SES and their interaction with cultural variables must not be overlooked.

  5. Perspectives on clinical trial data transparency and disclosure.

    PubMed

    Alemayehu, Demissie; Anziano, Richard J; Levenstein, Marcia

    2014-09-01

    The increased demand for transparency and disclosure of data from clinical trials sponsored by pharmaceutical companies poses considerable challenges and opportunities from a statistical perspective. A central issue is the need to protect patient privacy and adhere to Good Clinical and Statistical Practices, while ensuring access to patient-level data from clinical trials to the wider research community. This paper offers options to navigate this dilemma and balance competing priorities, with emphasis on the role of good clinical and statistical practices as proven safeguards for scientific integrity, the importance of adopting best practices for reporting of data from secondary analyses, and the need for optimal collaboration among stakeholders to facilitate data sharing. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Evaluating optimal therapy robustness by virtual expansion of a sample population, with a case study in cancer immunotherapy

    PubMed Central

    Barish, Syndi; Ochs, Michael F.; Sontag, Eduardo D.; Gevertz, Jana L.

    2017-01-01

    Cancer is a highly heterogeneous disease, exhibiting spatial and temporal variations that pose challenges for designing robust therapies. Here, we propose the VEPART (Virtual Expansion of Populations for Analyzing Robustness of Therapies) technique as a platform that integrates experimental data, mathematical modeling, and statistical analyses for identifying robust optimal treatment protocols. VEPART begins with time course experimental data for a sample population, and a mathematical model fit to aggregate data from that sample population. Using nonparametric statistics, the sample population is amplified and used to create a large number of virtual populations. At the final step of VEPART, robustness is assessed by identifying and analyzing the optimal therapy (perhaps restricted to a set of clinically realizable protocols) across each virtual population. As proof of concept, we have applied the VEPART method to study the robustness of treatment response in a mouse model of melanoma subject to treatment with immunostimulatory oncolytic viruses and dendritic cell vaccines. Our analysis (i) showed that every scheduling variant of the experimentally used treatment protocol is fragile (nonrobust) and (ii) discovered an alternative region of dosing space (lower oncolytic virus dose, higher dendritic cell dose) for which a robust optimal protocol exists. PMID:28716945

  7. Analysis methodology and development of a statistical tool for biodistribution data from internal contamination with actinides.

    PubMed

    Lamart, Stephanie; Griffiths, Nina M; Tchitchek, Nicolas; Angulo, Jaime F; Van der Meeren, Anne

    2017-03-01

    The aim of this work was to develop a computational tool that integrates several statistical analysis features for biodistribution data from internal contamination experiments. These data represent actinide levels in biological compartments as a function of time and are derived from activity measurements in tissues and excreta. These experiments aim at assessing the influence of different contamination conditions (e.g. intake route or radioelement) on the biological behavior of the contaminant. The ever increasing number of datasets and diversity of experimental conditions make the handling and analysis of biodistribution data difficult. This work sought to facilitate the statistical analysis of a large number of datasets and the comparison of results from diverse experimental conditions. Functional modules were developed using the open-source programming language R to facilitate specific operations: descriptive statistics, visual comparison, curve fitting, and implementation of biokinetic models. In addition, the structure of the datasets was harmonized using the same table format. Analysis outputs can be written in text files and updated data can be written in the consistent table format. Hence, a data repository is built progressively, which is essential for the optimal use of animal data. Graphical representations can be automatically generated and saved as image files. The resulting computational tool was applied using data derived from wound contamination experiments conducted under different conditions. In facilitating biodistribution data handling and statistical analyses, this computational tool ensures faster analyses and a better reproducibility compared with the use of multiple office software applications. Furthermore, re-analysis of archival data and comparison of data from different sources is made much easier. Hence this tool will help to understand better the influence of contamination characteristics on actinide biokinetics. Our approach can aid the optimization of treatment protocols and therefore contribute to the improvement of the medical response after internal contamination with actinides.

  8. The effect of statistical noise on IMRT plan quality and convergence for MC-based and MC-correction-based optimized treatment plans.

    PubMed

    Siebers, Jeffrey V

    2008-04-04

    Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose) for MC-optimized IMRT treatment plans. Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update or (2) a once-corrected (OC) MC-hybrid optimization, where a MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are re-computed with MC using 2% per-beam nominal statistical precision and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations from the combination of the 7 beams is ~3% with respect to maximum dose for voxels with D > 0.5Dmax. The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.

  9. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably to the measured experimental results than the other previously proposed simplified models evaluated.

  10. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The

    2015-11-01

    While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.

  11. Dose-mass inverse optimization for minimally moving thoracic lesions

    NASA Astrophysics Data System (ADS)

    Mihaylov, I. B.; Moros, E. G.

    2015-05-01

    In the past decade, several different radiotherapy treatment plan evaluation and optimization schemes have been proposed as viable approaches, aiming for dose escalation or an increase of healthy tissue sparing. In particular, it has been argued that dose-mass plan evaluation and treatment plan optimization might be viable alternatives to the standard of care, which is realized through dose-volume evaluation and optimization. The purpose of this investigation is to apply dose-mass optimization to a cohort of lung cancer patients and compare the achievable healthy tissue sparing to that one achievable through dose-volume optimization. Fourteen non-small cell lung cancer (NSCLC) patient plans were studied retrospectively. The range of tumor motion was less than 0.5 cm and motion management in the treatment planning process was not considered. For each case, dose-volume (DV)-based and dose-mass (DM)-based optimization was performed. Nine-field step-and-shoot IMRT was used, with all of the optimization parameters kept the same between DV and DM optimizations. Commonly used dosimetric indices (DIs) such as dose to 1% the spinal cord volume, dose to 50% of the esophageal volume, and doses to 20 and 30% of healthy lung volumes were used for cross-comparison. Similarly, mass-based indices (MIs), such as doses to 20 and 30% of healthy lung masses, 1% of spinal cord mass, and 33% of heart mass, were also tallied. Statistical equivalence tests were performed to quantify the findings for the entire patient cohort. Both DV and DM plans for each case were normalized such that 95% of the planning target volume received the prescribed dose. DM optimization resulted in more organs at risk (OAR) sparing than DV optimization. The average sparing of cord, heart, and esophagus was 23, 4, and 6%, respectively. For the majority of the DIs, DM optimization resulted in lower lung doses. On average, the doses to 20 and 30% of healthy lung were lower by approximately 3 and 4%, whereas lung volumes receiving 2000 and 3000 cGy were lower by 3 and 2%, respectively. The behavior of MIs was very similar. The statistical analyses of the results again indicated better healthy anatomical structure sparing with DM optimization. The presented findings indicate that dose-mass-based optimization results in statistically significant OAR sparing as compared to dose-volume-based optimization for NSCLC. However, the sparing is case-dependent and it is not observed for all tallied dosimetric endpoints.

  12. Artificial Intelligence versus Statistical Modeling and Optimization of Cholesterol Oxidase Production by using Streptomyces Sp.

    PubMed Central

    Niwas, Ram; Osama, Khwaja; Khan, Saif; Haque, Shafiul; Tripathi, C. K. M.; Mishra, B. N.

    2015-01-01

    Cholesterol oxidase (COD) is a bi-functional FAD-containing oxidoreductase which catalyzes the oxidation of cholesterol into 4-cholesten-3-one. The wider biological functions and clinical applications of COD have urged the screening, isolation and characterization of newer microbes from diverse habitats as a source of COD and optimization and over-production of COD for various uses. The practicability of statistical/ artificial intelligence techniques, such as response surface methodology (RSM), artificial neural network (ANN) and genetic algorithm (GA) have been tested to optimize the medium composition for the production of COD from novel strain Streptomyces sp. NCIM 5500. All experiments were performed according to the five factor central composite design (CCD) and the generated data was analysed using RSM and ANN. GA was employed to optimize the models generated by RSM and ANN. Based upon the predicted COD concentration, the model developed with ANN was found to be superior to the model developed with RSM. The RSM-GA approach predicted maximum of 6.283 U/mL COD production, whereas the ANN-GA approach predicted a maximum of 9.93 U/mL COD concentration. The optimum concentrations of the medium variables predicted through ANN-GA approach were: 1.431 g/50 mL soybean, 1.389 g/50 mL maltose, 0.029 g/50 mL MgSO4, 0.45 g/50 mL NaCl and 2.235 ml/50 mL glycerol. The experimental COD concentration was concurrent with the GA predicted yield and led to 9.75 U/mL COD production, which was nearly two times higher than the yield (4.2 U/mL) obtained with the un-optimized medium. This is the very first time we are reporting the statistical versus artificial intelligence based modeling and optimization of COD production by Streptomyces sp. NCIM 5500. PMID:26368924

  13. Artificial Intelligence versus Statistical Modeling and Optimization of Cholesterol Oxidase Production by using Streptomyces Sp.

    PubMed

    Pathak, Lakshmi; Singh, Vineeta; Niwas, Ram; Osama, Khwaja; Khan, Saif; Haque, Shafiul; Tripathi, C K M; Mishra, B N

    2015-01-01

    Cholesterol oxidase (COD) is a bi-functional FAD-containing oxidoreductase which catalyzes the oxidation of cholesterol into 4-cholesten-3-one. The wider biological functions and clinical applications of COD have urged the screening, isolation and characterization of newer microbes from diverse habitats as a source of COD and optimization and over-production of COD for various uses. The practicability of statistical/ artificial intelligence techniques, such as response surface methodology (RSM), artificial neural network (ANN) and genetic algorithm (GA) have been tested to optimize the medium composition for the production of COD from novel strain Streptomyces sp. NCIM 5500. All experiments were performed according to the five factor central composite design (CCD) and the generated data was analysed using RSM and ANN. GA was employed to optimize the models generated by RSM and ANN. Based upon the predicted COD concentration, the model developed with ANN was found to be superior to the model developed with RSM. The RSM-GA approach predicted maximum of 6.283 U/mL COD production, whereas the ANN-GA approach predicted a maximum of 9.93 U/mL COD concentration. The optimum concentrations of the medium variables predicted through ANN-GA approach were: 1.431 g/50 mL soybean, 1.389 g/50 mL maltose, 0.029 g/50 mL MgSO4, 0.45 g/50 mL NaCl and 2.235 ml/50 mL glycerol. The experimental COD concentration was concurrent with the GA predicted yield and led to 9.75 U/mL COD production, which was nearly two times higher than the yield (4.2 U/mL) obtained with the un-optimized medium. This is the very first time we are reporting the statistical versus artificial intelligence based modeling and optimization of COD production by Streptomyces sp. NCIM 5500.

  14. Optimizing the design of a reproduction toxicity test with the pond snail Lymnaea stagnalis.

    PubMed

    Charles, Sandrine; Ducrot, Virginie; Azam, Didier; Benstead, Rachel; Brettschneider, Denise; De Schamphelaere, Karel; Filipe Goncalves, Sandra; Green, John W; Holbech, Henrik; Hutchinson, Thomas H; Faber, Daniel; Laranjeiro, Filipe; Matthiessen, Peter; Norrgren, Leif; Oehlmann, Jörg; Reategui-Zirena, Evelyn; Seeland-Fremer, Anne; Teigeler, Matthias; Thome, Jean-Pierre; Tobor Kaplon, Marysia; Weltje, Lennart; Lagadic, Laurent

    2016-11-01

    This paper presents the results from two ring-tests addressing the feasibility, robustness and reproducibility of a reproduction toxicity test with the freshwater gastropod Lymnaea stagnalis (RENILYS strain). Sixteen laboratories (from inexperienced to expert laboratories in mollusc testing) from nine countries participated in these ring-tests. Survival and reproduction were evaluated in L. stagnalis exposed to cadmium, tributyltin, prochloraz and trenbolone according to an OECD draft Test Guideline. In total, 49 datasets were analysed to assess the practicability of the proposed experimental protocol, and to estimate the between-laboratory reproducibility of toxicity endpoint values. The statistical analysis of count data (number of clutches or eggs per individual-day) leading to ECx estimation was specifically developed and automated through a free web-interface. Based on a complementary statistical analysis, the optimal test duration was established and the most sensitive and cost-effective reproduction toxicity endpoint was identified, to be used as the core endpoint. This validation process and the resulting optimized protocol were used to consolidate the OECD Test Guideline for the evaluation of reproductive effects of chemicals in L. stagnalis. Copyright © 2016 Elsevier Inc. All rights reserved.
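
    The ECx estimation step can be sketched by fitting a log-logistic concentration-response curve to reproduction data. The concentrations and egg counts below are invented for illustration, and this simple least-squares fit is a stand-in for the fuller count-data analysis automated by the authors' web interface.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative reproduction data: mean eggs per individual-day at each test
    # concentration (mg/L); numbers are made up, not ring-test results.
    conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    eggs = np.array([10.2, 9.8, 8.9, 6.1, 2.7, 0.8])

    # Three-parameter log-logistic model: f(c) = top / (1 + (c / EC50)**slope)
    def loglogistic(c, top, ec50, slope):
        return top / (1.0 + (c / ec50) ** slope)

    popt, pcov = curve_fit(loglogistic, conc, eggs, p0=[eggs[0], 2.0, 2.0], bounds=(0, np.inf))
    top, ec50, slope = popt

    # ECx for any x% effect level follows from inverting the fitted curve.
    def ecx(x):
        return ec50 * (x / (100.0 - x)) ** (1.0 / slope)

    print(f"EC50 = {ec50:.2f} mg/L, EC10 = {ecx(10):.2f} mg/L")
    ```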

  15. Recovery and purification of chitosanase produced by Bacillus cereus using expanded bed adsorption and central composite design.

    PubMed

    de Araújo, Nathália Kelly; Pimentel, Vanessa Carvalho; da Silva, Nayane Macedo Portela; de Araújo Padilha, Carlos Eduardo; de Macedo, Gorete Ribeiro; Dos Santos, Everaldo Silvino

    2016-02-01

    This study presents a system for expanded bed adsorption for the purification of chitosanase from broth extract in a single step. A chitosanase-producing strain was isolated and identified as Bacillus cereus C-01 and used to produce chitosanases. The expanded bed adsorption conditions for chitosanase purification were optimized statistically using STREAMLINE(TM) DEAE and a homemade column (2.6 × 30.0 cm). Dependent variables were defined by the quality criteria purification factor (P) and enzyme yield to optimize the chromatographic process. Statistical analyses showed that the optimum conditions for the maximum P were 150 cm/h load flow velocity, 6.0 cm settled bed height, and 7.36 cm distributor height. Distributor height had a strong influence on the process, considerably affecting both the P and enzyme yield. Optimizing the purification variables resulted in an approximately 3.66-fold increase in the P compared with the value under nonoptimized conditions. This system is promising for the recovery of chitosanase from B. cereus C-01 and is economically viable because it promotes a reduction in process steps. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Beyond discrimination: A comparison of calibration methods and clinical usefulness of predictive models of readmission risk.

    PubMed

    Walsh, Colin G; Sharman, Kavya; Hripcsak, George

    2017-12-01

    Prior to implementing predictive models in novel settings, analyses of calibration and clinical usefulness remain as important as discrimination, but they are not frequently discussed. Calibration is a model's reflection of actual outcome prevalence in its predictions. Clinical usefulness refers to the utilities, costs, and harms of using a predictive model in practice. A decision analytic approach to calibrating and selecting an optimal intervention threshold may help maximize the impact of readmission risk and other preventive interventions. To select a pragmatic means of calibrating predictive models that requires a minimum amount of validation data and that performs well in practice. To evaluate the impact of miscalibration on utility and cost via clinical usefulness analyses. Observational, retrospective cohort study with electronic health record data from 120,000 inpatient admissions at an urban, academic center in Manhattan. The primary outcome was thirty-day readmission for three causes: all-cause, congestive heart failure, and chronic coronary atherosclerotic disease. Predictive modeling was performed via L1-regularized logistic regression. Calibration methods were compared including Platt Scaling, Logistic Calibration, and Prevalence Adjustment. Performance of predictive modeling and calibration was assessed via discrimination (c-statistic), calibration (Spiegelhalter Z-statistic, Root Mean Square Error [RMSE] of binned predictions, Sanders and Murphy Resolutions of the Brier Score, Calibration Slope and Intercept), and clinical usefulness (utility terms represented as costs). The amount of validation data necessary to apply each calibration algorithm was also assessed. C-statistics by diagnosis ranged from 0.7 for all-cause readmission to 0.86 (0.78-0.93) for congestive heart failure. Logistic Calibration and Platt Scaling performed best and this difference required analyzing multiple metrics of calibration simultaneously, in particular Calibration Slopes and Intercepts. Clinical usefulness analyses provided optimal risk thresholds, which varied by reason for readmission, outcome prevalence, and calibration algorithm. Utility analyses also suggested maximum tolerable intervention costs, e.g., $1720 for all-cause readmissions based on a published cost of readmission of $11,862. Choice of calibration method depends on availability of validation data and on performance. Improperly calibrated models may contribute to higher costs of intervention as measured via clinical usefulness. Decision-makers must understand underlying utilities or costs inherent in the use-case at hand to assess usefulness and will obtain the optimal risk threshold to trigger intervention with intervention cost limits as a result. Copyright © 2017 Elsevier Inc. All rights reserved.
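
    Platt-style scaling (one of the calibration methods compared) can be sketched as refitting a one-variable logistic model on held-out predictions. Everything below is synthetic, the L1-regularized base model only loosely mirrors the paper's setup, and in practice the recalibrated model would be evaluated on a further independent split rather than on the calibration data itself.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import brier_score_loss

    # Synthetic stand-in for a readmission model; the point is the recalibration step,
    # not the clinical features.
    X, y = make_classification(n_samples=6000, n_features=20, weights=[0.85], random_state=0)
    X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_dev, y_dev)
    raw = model.predict_proba(X_val)[:, 1]

    # Platt scaling / logistic calibration: fit a one-variable logistic model mapping the
    # raw score (here on the logit scale) to the outcome, using held-out validation data.
    logit = np.log(raw / (1 - raw)).reshape(-1, 1)
    calibrator = LogisticRegression().fit(logit, y_val)
    calibrated = calibrator.predict_proba(logit)[:, 1]

    print("Brier score, raw       :", round(brier_score_loss(y_val, raw), 4))
    print("Brier score, calibrated:", round(brier_score_loss(y_val, calibrated), 4))
    ```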

  17. Simplification of the Kalman filter for meteorological data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1991-01-01

    The paper proposes a new statistical method of data assimilation that is based on a simplification of the Kalman filter equations. The forecast error covariance evolution is approximated simply by advecting the mass-error covariance field, deriving the remaining covariances geostrophically, and accounting for external model-error forcing only at the end of each forecast cycle. This greatly reduces the cost of computation of the forecast error covariance. In simulations with a linear, one-dimensional shallow-water model and data generated artificially, the performance of the simplified filter is compared with that of the Kalman filter and the optimal interpolation (OI) method. The simplified filter produces analyses that are nearly optimal, and represents a significant improvement over OI.
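    For orientation, a minimal textbook Kalman filter forecast/analysis cycle is sketched below with NumPy; it is the full filter, not the simplified covariance-advection scheme proposed in the paper, and all matrices are placeholders.

```python
# Minimal textbook Kalman filter step (forecast + analysis) for a linear system,
# included only to fix notation; not the paper's simplified covariance scheme.
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One forecast/analysis cycle: x state, P covariance, z observation."""
    # Forecast
    x_f = F @ x
    P_f = F @ P @ F.T + Q
    # Analysis (update)
    S = H @ P_f @ H.T + R                 # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_a = x_f + K @ (z - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a

# Tiny usage example with a 2-state random-walk model and a scalar observation.
x0, P0 = np.zeros(2), np.eye(2)
F, Q = np.eye(2), 0.01 * np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
x1, P1 = kalman_step(x0, P0, F, Q, H, R, z=np.array([0.3]))
print(x1, np.diag(P1))
```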

  18. Optimization of delignification of two Pennisetum grass species by NaOH pretreatment using Taguchi and ANN statistical approach.

    PubMed

    Mohaptra, Sonali; Dash, Preeti Krishna; Behera, Sudhanshu Shekar; Thatoi, Hrudayanath

    2016-01-01

    In the bioconversion of lignocelluloses for bioethanol, pretreatment seems to be the most important step, as it improves the removal of the lignin and hemicellulose content and exposes cellulose to further hydrolysis. The present study discusses the application of statistical techniques, the Taguchi method and an artificial neural network (ANN), to the optimization of pretreatment of lignocellulosic biomasses such as Hybrid Napier grass (HNG) (Pennisetum purpureum) and Denanath grass (DG) (Pennisetum pedicellatum), using alkali sodium hydroxide. Using the Taguchi method, this study determined, with a small number of experiments, a parameter combination under which both substrates can be efficiently pretreated. The optimized parameters obtained from the L16 orthogonal array are soaking time (18 and 26 h), temperature (60°C and 55°C), and alkali concentration (1%) for HNG and DG, respectively. High performance liquid chromatography analysis of the optimized pretreated grass varieties confirmed the presence of glucan (47.94% and 46.50%), xylan (9.35% and 7.95%), arabinan (2.15% and 2.2%), and galactan/mannan (1.44% and 1.52%) for HNG and DG, respectively. Physicochemical characterization of the native and alkali-pretreated grasses by scanning electron microscopy and Fourier transform infrared spectroscopy revealed morphological differences between the native and optimized pretreated samples. Model validation by ANN showed good agreement between the experimental results and the predicted responses.

  19. Predation and fragmentation portrayed in the statistical structure of prey time series

    PubMed Central

    Hendrichsen, Ditte K; Topping, Chris J; Forchhammer, Mads C

    2009-01-01

    Background Statistical autoregressive analyses of direct and delayed density dependence are widespread in ecological research. The models suggest that changes in ecological factors affecting density dependence, like predation and landscape heterogeneity, are directly portrayed in the first- and second-order autoregressive parameters, and the models are therefore used to decipher complex biological patterns. However, independent tests of model predictions are complicated by the inherent variability of natural populations, where differences in landscape structure, climate or species composition prevent controlled repeated analyses. To circumvent this problem, we applied second-order autoregressive time series analyses to data generated by a realistic agent-based computer model. The model simulated life history decisions of individual field voles under controlled variations in predator pressure and landscape fragmentation. Analyses were made on three levels: comparisons between predated and non-predated populations, between populations exposed to different types of predators and between populations experiencing different degrees of habitat fragmentation. Results The results are unambiguous: changes in landscape fragmentation and the numerical response of predators are clearly portrayed in the statistical time series structure as predicted by the autoregressive model. Populations without predators displayed significantly stronger negative direct density dependence than did those exposed to predators, where direct density dependence was only moderately negative. The effects of predation versus no predation had an even stronger effect on the delayed density dependence of the simulated prey populations. In non-predated prey populations, the coefficients of delayed density dependence were distinctly positive, whereas they were negative in predated populations. Similarly, increasing the degree of fragmentation of optimal habitat available to the prey was accompanied by a shift in the delayed density dependence, from strongly negative to gradually less negative. Conclusion We conclude that statistical second-order autoregressive time series analyses are capable of deciphering interactions within and across trophic levels and their effect on direct and delayed density dependence. PMID:19419539
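    A sketch of the core statistical tool, assuming only NumPy: a second-order autoregressive model fitted by least squares, with the lag-1 and lag-2 coefficients read off as the direct and delayed autoregressive parameters. The series below is invented, not output from the agent-based model.

```python
# Fit x_t = a0 + a1*x_{t-1} + a2*x_{t-2} + noise to a (log) abundance series and
# inspect the lag-1 (direct) and lag-2 (delayed) coefficients. Illustrative data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200).cumsum() * 0.1 + 5.0   # stand-in log-abundance series

X = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])   # [1, x_{t-1}, x_{t-2}]
y = x[2:]
coef = np.linalg.lstsq(X, y, rcond=None)[0]
print("intercept, lag-1 (direct), lag-2 (delayed):", np.round(coef, 3))
```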

  20. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    AFRL-AFOSR-VA-TR-2015-0278: Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models. Katya Scheinberg. Grant FA9550-11-1-0239. Subject terms: optimization, derivative-free optimization, statistical machine learning.

  1. Assay optimization: a statistical design of experiments approach.

    PubMed

    Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B

    2007-03-01

    With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.
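    To make the idea concrete, here is a toy two-level full-factorial design with main-effect estimation in Python; the factor names and the response model are hypothetical and are not taken from the article.

```python
# Toy two-level full-factorial design and main-effect estimation, illustrating
# the kind of statistically designed experiment the article advocates.
import itertools
import numpy as np

factors = ["enzyme_conc", "incubation_time", "DMSO_pct"]        # hypothetical factors
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))  # 2^3 runs

rng = np.random.default_rng(2)
# Hypothetical assay signal: strong effect of factor 0, weak interaction, noise.
response = 100 + 12 * design[:, 0] + 3 * design[:, 1] * design[:, 2] \
           + rng.normal(0, 2, len(design))

# Main effect of each factor = mean(response at +1) - mean(response at -1).
for name, col in zip(factors, design.T):
    effect = response[col == 1].mean() - response[col == -1].mean()
    print(f"{name}: main effect ≈ {effect:.1f}")
```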

  2. Optimization of laser butt welding parameters with multiple performance characteristics

    NASA Astrophysics Data System (ADS)

    Sathiya, P.; Abdul Jaleel, M. Y.; Katherasan, D.; Shanmugarajan, B.

    2011-04-01

    This paper presents a study of 3.5 kW cooled slab laser welding of 904 L super austenitic stainless steel. Butt joints were welded with different shielding gases, namely argon, helium and nitrogen, at a constant flow rate. Super austenitic stainless steel (SASS) normally contains high amounts of Mo, Cr, Ni, N and Mn. The mechanical properties are controlled to obtain good welded joints. The quality of the joint is evaluated by studying the features of weld bead geometry, such as bead width (BW) and depth of penetration (DOP). In this paper, the tensile strength and bead profiles (BW and DOP) of laser welded butt joints made of AISI 904 L SASS are investigated. The Taguchi approach is used as a statistical design of experiments (DOE) technique for optimizing the selected welding parameters. Grey relational analysis and the desirability approach are applied to optimize the input parameters by considering multiple output variables simultaneously. Confirmation experiments have also been conducted for both analyses to validate the optimized parameters.
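    A minimal sketch of grey relational analysis for combining several weld responses into a single grade per run; the response values and the larger/smaller-is-better directions are assumptions made for illustration, not the authors' data.

```python
# Grey relational analysis (GRA): normalize each response, compute grey
# relational coefficients against the ideal sequence, and average them into a
# grade per experimental run. Numbers are invented.
import numpy as np

# rows = experimental runs; columns = tensile strength, bead width, penetration
y = np.array([[620., 1.9, 2.4],
              [655., 1.6, 2.9],
              [640., 1.7, 2.6],
              [668., 1.5, 3.1]])
larger_is_better = [True, False, True]   # assumed quality directions

norm = np.empty_like(y)
for j, bigger in enumerate(larger_is_better):
    lo, hi = y[:, j].min(), y[:, j].max()
    norm[:, j] = (y[:, j] - lo) / (hi - lo) if bigger else (hi - y[:, j]) / (hi - lo)

delta = 1.0 - norm                       # deviation from the ideal sequence
zeta = 0.5                               # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                 # grey relational grade per run
print("best run:", int(grade.argmax()), "grades:", np.round(grade, 3))
```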

  3. [Optimal versus maximal safety of the blood transfusion chain in The Netherlands; results of a conference. College for Blood Transfusion of the Dutch Red Cross].

    PubMed

    van der Poel, C L; de Boer, J E; Reesink, H W; Sibinga, C T

    1998-02-07

    An invitational conference was held on September 11, 1996 by the Medical Advisory Commission to the Blood Transfusion Council of the Netherlands Red Cross, addressing the issues of 'maximal' versus 'optimal' safety measures for the blood supply. Invited were blood transfusion specialists, clinicians, representatives of patient interest groups, the Ministry and Inspectorate of Health, and members of parliament. Transfusion experts and clinicians were found to advocate an optimal course, following strategies of evidence-based medicine, cost-benefit analyses and medical technology assessment. Patient groups that depend on blood products, such as haemophilia patients, would rather opt for maximal safety. Insurance companies would choose likewise, to exclude any risk if possible. Health care legal advisers would advise choosing optimal safety, while reserving funds to cover the difference with 'maximal safety' in case of litigation. Politicians and the general public would sooner choose maximal rather than optimal safety. The overall impression persists that however small the statistical risk may be, in the eyes of many it is unacceptable, and this view is deeply entrenched.

  4. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on how the integration time is distributed between the two measurements. Therefore, by optimizing this distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain an approximate closed-form solution for the optimal distribution of integration time by employing the Delta method and the Lagrange multiplier method. Theoretical analyses and real-world experiments show that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
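    A numerical counterpart to the closed-form analysis, under an assumed shot-noise (Poisson) error model: the Delta-method variance of the DOLP estimator is minimized over the split of a fixed total integration time by a simple grid search.

```python
# Choose how a fixed total integration time T is split between the two
# orthogonal intensity measurements so that the Delta-method variance of the
# DOLP estimator is minimal. The shot-noise error model is an assumption.
import numpy as np

def dolp_variance(t1, T, rate, P):
    """Delta-method variance of DOLP = (I1 - I2)/(I1 + I2) for split t1, T - t1."""
    t2 = T - t1
    I1, I2 = rate * (1 + P) / 2, rate * (1 - P) / 2      # mean intensities
    var_I1, var_I2 = I1 / t1, I2 / t2                    # shot-noise variances of rate estimates
    d1 = 2 * I2 / (I1 + I2) ** 2                         # partial derivatives of DOLP
    d2 = -2 * I1 / (I1 + I2) ** 2
    return d1 ** 2 * var_I1 + d2 ** 2 * var_I2

T, rate, P = 1.0, 1e5, 0.3
t1_grid = np.linspace(0.01, 0.99, 981) * T
best = t1_grid[np.argmin([dolp_variance(t1, T, rate, P) for t1 in t1_grid])]
print(f"optimal fraction of time on the first measurement: {best / T:.2f}")
```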

  5. OSPAR standard method and software for statistical analysis of beach litter data.

    PubMed

    Schulz, Marcus; van Loon, Willem; Fleet, David M; Baggelaar, Paul; van der Meulen, Eit

    2017-09-15

    The aim of this study is to develop standard statistical methods and software for the analysis of beach litter data. The optimal ensemble of statistical methods comprises the Mann-Kendall trend test, the Theil-Sen slope estimation, the Wilcoxon step trend test and basic descriptive statistics. The application of Litter Analyst, a tailor-made software for analysing the results of beach litter surveys, to OSPAR beach litter data from seven beaches bordering on the south-eastern North Sea, revealed 23 significant trends in the abundances of beach litter types for the period 2009-2014. Litter Analyst revealed a large variation in the abundance of litter types between beaches. To reduce the effects of spatial variation, trend analysis of beach litter data can most effectively be performed at the beach or national level. Spatial aggregation of beach litter data within a region is possible, but resulted in a considerable reduction in the number of significant trends. Copyright © 2017 Elsevier Ltd. All rights reserved.
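    A sketch of the trend statistics in the ensemble, applied to an invented annual litter-count series with SciPy: Kendall's tau against time as a stand-in for the Mann-Kendall test, the Theil-Sen slope with confidence bounds, and a two-period Mann-Whitney comparison standing in for the Wilcoxon step trend test. The dedicated Litter Analyst implementations are more elaborate.

```python
import numpy as np
from scipy import stats

years = np.arange(2009, 2015)
counts = np.array([210, 190, 175, 180, 150, 140])    # invented survey totals

tau, p_mk = stats.kendalltau(years, counts)          # monotonic trend test
slope, intercept, lo, hi = stats.theilslopes(counts, years)
u, p_step = stats.mannwhitneyu(counts[:3], counts[3:], alternative="two-sided")

print(f"Kendall tau={tau:.2f} (p={p_mk:.3f}); Theil-Sen slope={slope:.1f} "
      f"items/yr [{lo:.1f}, {hi:.1f}]; step test p={p_step:.3f}")
```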

  6. Investigation of antimicrobial activity and statistical optimization of Bacillus subtilis SPB1 biosurfactant production in solid-state fermentation.

    PubMed

    Ghribi, Dhouha; Abdelkefi-Mesrati, Lobna; Mnif, Ines; Kammoun, Radhouan; Ayadi, Imen; Saadaoui, Imen; Maktouf, Sameh; Chaabouni-Ellouze, Semia

    2012-01-01

    During the last years, several applications of biosurfactants with medical purposes have been reported. Biosurfactants are considered relevant molecules for applications in combating many diseases. However, their use is currently extremely limited due to their high cost in relation to that of chemical surfactants. Use of inexpensive substrates can drastically decrease their production cost. Here, twelve solid substrates were screened for the production of Bacillus subtilis SPB1 biosurfactant and the maximum yield was found with millet. A Plackett-Burman design was then used to evaluate the effects of five variables (temperature, moisture, initial pH, inoculum age, and inoculum size). Statistical analyses showed that temperature, inoculum age, and moisture content had a significantly positive effect on SPB1 biosurfactant production. Their values were further optimized using a central composite design and a response surface methodology. The optimal conditions of temperature, inoculum age, and moisture content obtained under the conditions of study were 37°C, 14 h, and 88%, respectively. The evaluation of the antimicrobial activity of this compound was carried out against 11 bacteria and 8 fungi. The results demonstrated that this biosurfactant exhibited an important antimicrobial activity against microorganisms with multidrug-resistant profiles. Its activity was very effective against Staphylococcus aureus, Staphylococcus xylosus, Enterococcus faecalis, Klebsiella pneumoniae, and so forth.

  7. Single-case research design in pediatric psychology: considerations regarding data analysis.

    PubMed

    Cohen, Lindsey L; Feinstein, Amanda; Masuda, Akihiko; Vowles, Kevin E

    2014-03-01

    Single-case research allows for an examination of behavior and can demonstrate the functional relation between intervention and outcome in pediatric psychology. This review highlights key assumptions, methodological and design considerations, and options for data analysis. Single-case methodology and guidelines are reviewed with an in-depth focus on visual and statistical analyses. Guidelines allow for the careful evaluation of design quality and visual analysis. A number of statistical techniques have been introduced to supplement visual analysis, but to date, there is no consensus on their recommended use in single-case research design. Single-case methodology is invaluable for advancing pediatric psychology science and practice, and guidelines have been introduced to enhance the consistency, validity, and reliability of these studies. Experts generally agree that visual inspection is the optimal method of analysis in single-case design; however, statistical approaches are becoming increasingly evaluated and used to augment data interpretation.

  8. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10⁴ simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.

  9. An initial-abstraction, constant-loss model for unit hydrograph modeling for applicable watersheds in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2007-01-01

    Estimation of representative hydrographs from design storms, which are known as design hydrographs, provides for cost-effective, risk-mitigated design of drainage structures such as bridges, culverts, roadways, and other infrastructure. During 2001–07, the U.S. Geological Survey (USGS), in cooperation with the Texas Department of Transportation, investigated runoff hydrographs, design storms, unit hydrographs, and watershed-loss models to enhance design hydrograph estimation in Texas. Design hydrographs ideally should mimic the general volume, peak, and shape of observed runoff hydrographs. Design hydrographs commonly are estimated in part by unit hydrographs. A unit hydrograph is defined as the runoff hydrograph that results from a unit pulse of excess rainfall uniformly distributed over the watershed at a constant rate for a specific duration. A time-distributed, watershed-loss model is required for modeling by unit hydrographs. This report develops a specific time-distributed, watershed-loss model known as an initial-abstraction, constant-loss model. For this watershed-loss model, a watershed is conceptualized to have the capacity to store or abstract an absolute depth of rainfall at and near the beginning of a storm. Depths of total rainfall less than this initial abstraction do not produce runoff. The watershed also is conceptualized to have the capacity to remove rainfall at a constant rate (loss) after the initial abstraction is satisfied. Additional rainfall inputs after the initial abstraction is satisfied contribute to runoff if the rainfall rate (intensity) is larger than the constant loss. The initial-abstraction, constant-loss model thus is a two-parameter model. The initial-abstraction, constant-loss model is investigated through detailed computational and statistical analysis of observed rainfall and runoff data for 92 USGS streamflow-gaging stations (watersheds) in Texas with contributing drainage areas from 0.26 to 166 square miles. The analysis is limited to a previously described, watershed-specific, gamma distribution model of the unit hydrograph. In particular, the initial-abstraction, constant-loss model is tuned to the gamma distribution model of the unit hydrograph. A complex computational analysis of observed rainfall and runoff for the 92 watersheds was done to determine, by storm, optimal values of initial abstraction and constant loss. Optimal parameter values for a given storm were defined as those values that produced a modeled runoff hydrograph with volume equal to the observed runoff hydrograph and also minimized the residual sum of squares of the two hydrographs. Subsequently, the means of the optimal parameters were computed on a watershed-specific basis. These means for each watershed are considered the most representative, are tabulated, and are used in further statistical analyses. Statistical analyses of watershed-specific, initial abstraction and constant loss include documentation of the distribution of each parameter using the generalized lambda distribution. The analyses show that watershed development has substantial influence on initial abstraction and limited influence on constant loss. The means and medians of the 92 watershed-specific parameters are tabulated with respect to watershed development; although they have considerable uncertainty, these parameters can be used for parameter prediction for ungaged watersheds.
The statistical analyses of watershed-specific, initial abstraction and constant loss also include development of predictive procedures for estimation of each parameter for ungaged watersheds. Both regression equations and regression trees for estimation of initial abstraction and constant loss are provided. The watershed characteristics included in the regression analyses are (1) main-channel length, (2) a binary factor representing watershed development, (3) a binary factor representing watersheds with an abundance of rocky and thin-soiled terrain, and (4) curve numb
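    A sketch of the two-parameter initial-abstraction, constant-loss model described above, with illustrative numbers: rainfall first fills the initial abstraction, after which a constant loss rate is subtracted and the remainder becomes rainfall excess.

```python
# Initial-abstraction, constant-loss model: compute rainfall excess (the
# runoff-producing depth) from a rainfall hyetograph. Values are illustrative.
import numpy as np

def rainfall_excess(rain, ia, constant_loss):
    """rain: depth per time step; ia: initial abstraction (depth);
    constant_loss: loss rate (depth per time step)."""
    excess = np.zeros_like(rain, dtype=float)
    abstracted = 0.0
    for i, r in enumerate(rain):
        # Fill the initial abstraction first.
        take = min(r, ia - abstracted)
        abstracted += take
        r -= take
        # After Ia is satisfied, remove the constant loss; the remainder is excess.
        excess[i] = max(r - constant_loss, 0.0)
    return excess

rain = np.array([0.1, 0.4, 0.9, 0.6, 0.2])        # depth per time step (hypothetical)
print(rainfall_excess(rain, ia=1.0, constant_loss=0.15))
```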

  10. Predicting the Effects of Powder Feeding Rates on Particle Impact Conditions and Cold Spray Deposited Coatings

    NASA Astrophysics Data System (ADS)

    Ozdemir, Ozan C.; Widener, Christian A.; Carter, Michael J.; Johnson, Kyle W.

    2017-10-01

    As the industrial application of the cold spray technology grows, the need to optimize both the cost and the quality of the process grows with it. Parameter selection techniques available today require solving a coupled system of equations to account for the losses due to particle loading in the gas stream. Such analyses cause a significant increase in computational time in comparison with calculations under isentropic flow assumptions. In cold spray operations, engineers and operators may, therefore, neglect the effects of particle loading to simplify the multiparameter optimization process. In this study, two-way coupled (particle-fluid) quasi-one-dimensional fluid dynamics simulations are used to test the particle loading effects under many potential cold spray scenarios. The output of the simulations is statistically analyzed to build regression models that estimate the changes in particle impact velocity and temperature due to particle loading. This approach eases particle loading optimization and allows a more complete analysis of deposition cost and time. The model was validated both numerically and experimentally. Further numerical analyses were completed to test the particle loading capacity and limitations of a nozzle with a commonly used throat size. Additional experimentation helped document the physical limitations to high-rate deposition.

  11. Tightening force and torque of nonlocking screws in a reverse shoulder prosthesis.

    PubMed

    Terrier, A; Kochbeck, S H; Merlini, F; Gortchacow, M; Pioletti, D P; Farron, A

    2010-07-01

    Reversed shoulder arthroplasty is an accepted treatment for glenohumeral arthritis associated with rotator cuff deficiency. For most reversed shoulder prostheses, the baseplate of the glenoid component is uncemented and its primary stability is provided by a central peg and peripheral screws. Because of the importance of primary stability for good osteo-integration of the baseplate, optimal fixation of the screws is crucial. In particular, the amplitude of the tightening force of the nonlocking screws is clearly associated with this stability. Since this force is unknown, it is currently not accounted for in experimental or numerical analyses. Thus, the primary goal of this work is to measure this tightening force experimentally. In addition, the tightening torque was also measured, to estimate an optimal surgical value. An experimental setup with an instrumented baseplate was developed to measure simultaneously the tightening force, tightening torque and screwing angle of the nonlocking screws of the Aquealis reversed prosthesis. In addition, the amount of bone volume around each screw was measured with a micro-CT. Measurements were performed on 6 human cadaveric scapulae. A statistically significant correlation (p<0.05, R=0.83) was obtained between the maximal tightening force and the bone volume. The relationship between the tightening torque and the bone volume was not statistically significant. The experimental relationship presented in this paper can be used in numerical analyses to improve the baseplate fixation in the glenoid bone. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  12. Fluoride content of tank water in Australia.

    PubMed

    Cochrane, N J; Hopcraft, M S; Tong, A C; Thean, H l; Thum, Y S; Tong, D E; Wen, J; Zhao, S C; Stanton, D P; Yuan, Y; Shen, P; Reynolds, E C

    2014-06-01

    The aims of this study were to: (1) analyse the fluoride content of tank water; (2) determine whether the method of water collection or storage influenced fluoride content; and (3) survey participant attitudes towards water fluoridation. Plastic tubes and a questionnaire were distributed through dentists to households with water tanks in Victoria. A midstream tank water sample was collected and fluoride was analysed in triplicate using ion chromatography. All samples (n = 123) contained negligible amounts of fluoride, with a mean fluoride concentration of <0.01 ppm (range: <0.01-0.18 ppm). No statistically significant association was found between fluoride content and the variables investigated, such as tank material, tank age, roof material and gutter material. Most people did not know whether their tank water contained fluoride and 40.8% preferred to have access to fluoridated water. The majority thought fluoride was safe and more than half of the respondents supported fluoridation. The fluoride content of tank water was well below the optimal levels for caries prevention. People who rely solely on tank water for drinking may require additional exposure to fluoride for optimal caries prevention. © 2014 Australian Dental Association.

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
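    A minimal sketch of two of the analyses discussed, Bland-Altman limits of agreement and simple linear regression, on invented paired assay outputs; Deming regression, which the paper uses for assay comparison, requires an errors-in-both-variables fit and is not shown here.

```python
import numpy as np
from scipy import stats

ref = np.array([5.1, 12.3, 24.8, 33.0, 41.2, 55.7, 68.4])   # reference assay (%)
new = np.array([5.9, 13.1, 24.1, 35.2, 42.8, 57.9, 70.1])   # assay under validation (%)

# Bland-Altman: bias (mean difference) and 95% limits of agreement.
diff = new - ref
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias={bias:.2f}, limits of agreement=({bias - 1.96*sd:.2f}, {bias + 1.96*sd:.2f})")

# Simple linear regression: intercept ~ constant error, slope ~ proportional error.
res = stats.linregress(ref, new)
print(f"slope={res.slope:.3f}, intercept={res.intercept:.2f}, R^2={res.rvalue**2:.3f}")
```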

  14. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. Highlights: a new method of clinker characterization; combination of the electron probe technique with cluster analysis; simultaneous assessment of phase abundance, composition and bulk chemistry; experimental validation performed on industrial clinkers.

  15. SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.

    PubMed

    Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S

    2012-06-01

    With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC's statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10⁸ to 10⁶, the dose-volume histograms are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
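    A toy version of the numerical experiment, assuming arbitrary dimensions and noise level: a fluence map is optimized with a quadratic penalty and projected gradient descent using a noisy DDC matrix, and the resulting dose is then evaluated with the noise-free matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_beamlets = 200, 50
D_true = np.abs(rng.normal(1.0, 0.3, (n_vox, n_beamlets)))      # noise-free DDC
D_noisy = D_true * (1 + 0.04 * rng.normal(size=D_true.shape))   # ~4% relative MC noise
d_presc = np.full(n_vox, 60.0)                                  # prescribed dose

x = np.full(n_beamlets, 1.0)                                    # fluence map
step = 1.0 / np.linalg.norm(D_noisy, 2) ** 2                    # safe gradient step
for _ in range(500):
    grad = D_noisy.T @ (D_noisy @ x - d_presc)                  # quadratic penalty gradient
    x = np.maximum(x - step * grad, 0.0)                        # project onto x >= 0

dose_actual = D_true @ x                                        # evaluate with noise-free DDC
print("relative dose error:",
      np.linalg.norm(dose_actual - d_presc) / np.linalg.norm(d_presc))
```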

  16. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
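    A worked numerical example of score compression for the simpler Gaussian case with parameter-dependent mean and fixed covariance, where the compressed statistic is t = (dμ/dθ)ᵀ C⁻¹ (d − μ) evaluated at a fiducial parameter value; the straight-line data model below is chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
C = 0.05 ** 2 * np.eye(len(x))                 # known noise covariance
theta_fid = np.array([1.0, 2.0])               # fiducial [amplitude, slope]

def mean(theta):
    return theta[0] + theta[1] * x

# dmu/dtheta has shape (n_data, n_params); here it is parameter independent.
dmu = np.column_stack([np.ones_like(x), x])

# Simulated observed data at some "true" parameters.
data = mean([1.1, 1.8]) + rng.multivariate_normal(np.zeros(len(x)), C)

Cinv = np.linalg.inv(C)
t = dmu.T @ Cinv @ (data - mean(theta_fid))    # score-compressed statistics (2 numbers)
F = dmu.T @ Cinv @ dmu                         # Fisher matrix (preserved information)
print("compressed statistics t:", t)
print("MLE shift from fiducial, F^{-1} t:", np.linalg.solve(F, t))
```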

  17. Integrated design and manufacturing for the high speed civil transport (a combined aerodynamics/propulsion optimization study)

    NASA Technical Reports Server (NTRS)

    Baecher, Juergen; Bandte, Oliver; DeLaurentis, Dan; Lewis, Kemper; Sicilia, Jose; Soboleski, Craig

    1995-01-01

    This report documents the efforts of a Georgia Tech High Speed Civil Transport (HSCT) aerospace student design team in completing a design methodology demonstration under NASA's Advanced Design Program (ADP). Aerodynamic and propulsion analyses are integrated into the synthesis code FLOPS in order to improve its prediction accuracy. Executing the integrated product and process development (IPPD) methodology proposed at the Aerospace Systems Design Laboratory (ASDL), an improved sizing process is described, followed by a combined aero-propulsion optimization, where the objective function, average yield per revenue passenger mile ($/RPM), is constrained by flight stability, noise, approach speed, and field length restrictions. Primary goals include successful demonstration of the application of the response surface methodology (RSM) to parameter design, introduction of higher fidelity disciplinary analysis than normally feasible at the conceptual and early preliminary level, and investigation of relationships between aerodynamic and propulsion design parameters and their effect on the objective function, $/RPM. A unique approach to aircraft synthesis is developed in which statistical methods, specifically design of experiments and the RSM, are used to more efficiently search the design space for optimum configurations. In particular, two uses of these techniques are demonstrated. First, response model equations are formed which represent complex analyses in the form of a regression polynomial. Next, a second regression equation is constructed, not for modeling purposes, but instead for the purpose of optimization at the system level. Such an optimization problem would normally be difficult with the given tools due to the need for hard connections between the various complex codes involved. The statistical methodology presents an alternative and is demonstrated via an example of aerodynamic modeling and planform optimization for an HSCT.

  18. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  19. Applications of spatial statistical network models to stream data

    USGS Publications Warehouse

    Isaak, Daniel J.; Peterson, Erin E.; Ver Hoef, Jay M.; Wenger, Seth J.; Falke, Jeffrey A.; Torgersen, Christian E.; Sowder, Colin; Steel, E. Ashley; Fortin, Marie-Josée; Jordan, Chris E.; Ruesch, Aaron S.; Som, Nicholas; Monestiez, Pascal

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for terrestrial applications and are not optimized for streams. A new class of spatial statistical model, based on valid covariance structures for stream networks, can be used with many common types of stream data (e.g., water quality attributes, habitat conditions, biological surveys) through application of appropriate distributions (e.g., Gaussian, binomial, Poisson). The spatial statistical network models account for spatial autocorrelation (i.e., nonindependence) among measurements, which allows their application to databases with clustered measurement locations. Large amounts of stream data exist in many areas where spatial statistical analyses could be used to develop novel insights, improve predictions at unsampled sites, and aid in the design of efficient monitoring strategies at relatively low cost. We review the topic of spatial autocorrelation and its effects on statistical inference, demonstrate the use of spatial statistics with stream datasets relevant to common research and management questions, and discuss additional applications and development potential for spatial statistics on stream networks. Free software for implementing the spatial statistical network models has been developed that enables custom applications with many stream databases.

  20. Optimization of a thermal hydrolysis process for sludge pre-treatment.

    PubMed

    Sapkaite, I; Barrado, E; Fdz-Polanco, F; Pérez-Elvira, S I

    2017-05-01

    At industrial scale, thermal hydrolysis (TH) is the most widely used process to enhance the biodegradability of the sludge produced in wastewater treatment plants. Through a statistically guided Box-Behnken experimental design, the present study analyses the effect of TH as a pre-treatment applied to activated sludge. The selected process variables were temperature (130-180 °C), time (5-50 min) and decompression mode (slow or steam-explosion effect), and the parameters evaluated were sludge solubilisation and methane production by anaerobic digestion. A quadratic polynomial model was generated to compare the process performance for the 15 different combinations of operating conditions obtained by modifying the process variables evaluated. The statistical analysis showed that methane production and solubility were significantly affected by pre-treatment time and temperature. During high intensity pre-treatment (high temperature and long times), the solubility increased sharply while the methane production exhibited the opposite behaviour, indicating the formation of some soluble but non-biodegradable materials. Therefore, solubilisation is not a reliable parameter to quantify the efficiency of a thermal hydrolysis pre-treatment, since it is not directly related to methane production. Based on the optimization of the operational parameters, the estimated optimal thermal hydrolysis conditions to enhance sewage sludge digestion were: 140-170 °C heating temperature, 5-35 min residence time, and one sudden decompression. Copyright © 2017 Elsevier Ltd. All rights reserved.
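    An illustrative quadratic response-surface fit of the kind produced from such a design, using invented runs over temperature and time only (the study also varied the decompression mode); the fitted second-order polynomial is then searched for its optimum on a grid.

```python
import numpy as np

# columns: temperature (deg C), time (min), methane yield (arbitrary units) - invented
runs = np.array([[130,  5, 0.80], [130, 50, 0.86], [180,  5, 0.92],
                 [180, 50, 0.78], [155, 27, 0.95], [155, 27, 0.94],
                 [130, 27, 0.84], [180, 27, 0.85], [155,  5, 0.88],
                 [155, 50, 0.87]])
T, t, y = runs.T

# Second-order model: y = b0 + b1*T + b2*t + b3*T^2 + b4*t^2 + b5*T*t
X = np.column_stack([np.ones_like(T), T, t, T**2, t**2, T * t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(coef, 5))

# Locate the predicted optimum on a grid inside the experimental region.
Tg, tg = np.meshgrid(np.linspace(130, 180, 51), np.linspace(5, 50, 46))
pred = coef[0] + coef[1]*Tg + coef[2]*tg + coef[3]*Tg**2 + coef[4]*tg**2 + coef[5]*Tg*tg
i, j = np.unravel_index(pred.argmax(), pred.shape)
print(f"predicted optimum: T={Tg[i, j]:.0f} C, t={tg[i, j]:.0f} min")
```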

  1. Learning optimal features for visual pattern recognition

    NASA Astrophysics Data System (ADS)

    Labusch, Kai; Siewert, Udo; Martinetz, Thomas; Barth, Erhardt

    2007-02-01

    The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. We here (i) discuss how the properties of different models of image coding, i.e. sparseness, decorrelation, and statistical independence, are related to each other, (ii) propose to evaluate the different models by verifiable performance measures, and (iii) analyse the classification performance on images of handwritten digits (MNIST database). We first employ the SPARSENET algorithm (Olshausen, 1998) to derive a local filter basis (on 13 × 13 pixel windows). We then filter the images in the database (28 × 28 pixel images of digits) and reduce the dimensionality of the resulting feature space by selecting the locally maximal filter responses. We then train a support vector machine on a training set to classify the digits and report results obtained on a separate test set. Currently, the best state-of-the-art result on the MNIST database has an error rate of 0.4%. This result, however, has been obtained by using explicit knowledge that is specific to the data (an elastic distortion model for digits). We here obtain an error rate of 0.55%, which is second best but does not use explicit data-specific knowledge. In particular, it outperforms by far all methods that do not use data-specific knowledge.

  2. Investigation of Antimicrobial Activity and Statistical Optimization of Bacillus subtilis SPB1 Biosurfactant Production in Solid-State Fermentation

    PubMed Central

    Ghribi, Dhouha; Abdelkefi-Mesrati, Lobna; Mnif, Ines; Kammoun, Radhouan; Ayadi, Imen; Saadaoui, Imen; Maktouf, Sameh; Chaabouni-Ellouze, Semia

    2012-01-01

    During the last years, several applications of biosurfactants with medical purposes have been reported. Biosurfactants are considered relevant molecules for applications in combating many diseases. However, their use is currently extremely limited due to their high cost in relation to that of chemical surfactants. Use of inexpensive substrates can drastically decrease their production cost. Here, twelve solid substrates were screened for the production of Bacillus subtilis SPB1 biosurfactant and the maximum yield was found with millet. A Plackett-Burman design was then used to evaluate the effects of five variables (temperature, moisture, initial pH, inoculum age, and inoculum size). Statistical analyses showed that temperature, inoculum age, and moisture content had a significantly positive effect on SPB1 biosurfactant production. Their values were further optimized using a central composite design and a response surface methodology. The optimal conditions of temperature, inoculum age, and moisture content obtained under the conditions of study were 37°C, 14 h, and 88%, respectively. The evaluation of the antimicrobial activity of this compound was carried out against 11 bacteria and 8 fungi. The results demonstrated that this biosurfactant exhibited an important antimicrobial activity against microorganisms with multidrug-resistant profiles. Its activity was very effective against Staphylococcus aureus, Staphylococcus xylosus, Enterococcus faecalis, Klebsiella pneumoniae, and so forth. PMID:22536017

  3. Illustrating the practice of statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Christina A; Hamada, Michael S

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative, by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his or her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.

  4. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    PubMed

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristics (ROC)-curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by a quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, straightforward, and overcomes a number of difficulties encountered in the Crespo-method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
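    A sketch of the KS comparison described, assuming simulated fluorescence intensities in place of gated flow-cytometry events: the B-cell ZAP-70 distribution is compared with the internal T-cell reference using SciPy's two-sample KS test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
t_cells = rng.lognormal(mean=3.0, sigma=0.4, size=5000)   # ZAP-70 bright reference
b_cells = rng.lognormal(mean=2.4, sigma=0.5, size=5000)   # CLL B cells

d_stat, p_value = stats.ks_2samp(t_cells, b_cells)
print(f"KS D = {d_stat:.3f}, p = {p_value:.2e}")
# A small D (B-cell distribution close to the T-cell reference) would suggest
# ZAP-70-positive CLL; the cut-off on D is set by ROC analysis against IgVH status.
```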

  5. Implementation of quality by design principles in the development of microsponges as drug delivery carriers: Identification and optimization of critical factors using multivariate statistical analyses and design of experiments studies.

    PubMed

    Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija

    2015-07-15

    Microsponges drug delivery system (MDDC) was prepared by double emulsion-solvent-diffusion technique using rotor-stator homogenization. Quality by design (QbD) concept was implemented for the development of MDDC with potential to be incorporated into semisolid dosage form (gel). Quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified, accordingly. Critical material attributes (CMA) and Critical process parameters (CPP) were identified using quality risk management (QRM) tool, failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis along with literature data, product and process knowledge and understanding. FMECA identified amount of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and water ratio in primary/multiple emulsions as CMA and rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between identified CPP and particle size as CQA was described in the design space using design of experiments - one-factor response surface method. Obtained results from statistically designed experiments enabled establishment of mathematical models and equations that were used for detailed characterization of influence of identified CPP upon MDDC particle size and particle size distribution and their subsequent optimization. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Did recent world record marathon runners employ optimal pacing strategies?

    PubMed

    Angus, Simon D

    2014-01-01

    We apply statistical analysis to high-frequency (1 km) split data for the two most recent world-record marathon runs: Run 1 (2:03:59, 28 September 2008) and Run 2 (2:03:38, 25 September 2011). Based on studies in the endurance cycling literature, we develop two principles to approximate 'optimal' pacing in field marathon running. Utilising GPS and weather data, we test for, and then de-trend, each athlete's response to gradient and headwind on the course, recovering standardised proxies for power-based pacing traces. The resultant traces were analysed to ascertain whether either runner followed optimal pacing principles, and to characterise any deviations from optimality. Whereas gradient was insignificant, headwind was a significant factor in running speed variability for both runners, with Runner 2 targeting the (optimal) parallel variation principle, whilst Runner 1 did not. After adjusting for these responses, neither runner followed the (optimal) 'even' power pacing principle: Runner 2's macro-pacing strategy fitted a sinusoidal oscillator with an exponentially expanding envelope, whilst Runner 1 followed a U-shaped, quadratic form. The study suggests that (a) a better pacing strategy could provide elite marathon runners with an economical pathway to significant performance improvements at world-record level; and (b) the data and analysis herein are consistent with a complex-adaptive model of power regulation.

  7. Testing theoretical relationships: factors influencing positive health practices (PHP) in Filipino college students.

    PubMed

    Ayres, Cynthia; Mahat, Ganga; Atkins, Robert

    2013-01-01

    To examine variables influencing the positive health practices (PHP) of Filipino college students to gain a better understanding of health practices in this ethnic/racial group. This cross-sectional study tested theoretical relationships postulated among (a) PHP, (b) social support (SS), (c) optimism, and (d) acculturation. A sample of Filipino college students (N = 226) aged 18 to 21 was obtained in June 2009. Participants completed 4 instruments. Statistical analyses were performed using SPSS 16.0. Positive correlations were found between PHP and SS (r = .39, p = .01) and between optimism and PHP (r = .36, p = .01). No correlation was found between PHP and acculturation. Optimism and SS predicted performance of PHP (R² = .18, F[2, 221] = 24.927, p < .001). A difference was found in acculturation levels between participants who grew up in the United States (t[223] = 4.5, p < .001) and those who did not. Findings help health practitioners and educators to better understand the underlying factors that influence PHP in this population.

  8. Growth kinetics of Chlorococcum humicola - A potential feedstock for biomass with biofuel properties.

    PubMed

    Thomas, Jibu; Jayachithra, E V

    2015-11-01

    Economically viable production facilities for microalgae depend on the optimization of growth parameters with regard to nutrient requirements. Using microalgae to treat industrial effluents containing heavy metals presents an alternative to the current practice of using physical and chemical methods. The present work focuses on the statistical optimization of the growth of Chlorococcum humicola to ascertain the maximum production of biomass. A Plackett-Burman design was carried out to screen the significant variables influencing biomass production. Further, response surface methodology was employed to optimize the effect of inoculum, light intensity and pH on net biomass yield. Optimum conditions for maximum biomass yield were identified as 15% inoculum, 1500 lx light intensity and pH 8.5. Theoretical and predicted values were in agreement and thus the model was found to be significant. Gas chromatography analyses of the FAME derivatives showed a high percentage of saturated fatty acids, confirming the biofuel properties of the oil derived from the algal biomass. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Are head-to-head trials of biologics needed? The role of value of information methods in arthritis research.

    PubMed

    Welton, Nicky J; Madan, Jason; Ades, Anthony E

    2011-09-01

    Reimbursement decisions are typically based on cost-effectiveness analyses. While a cost-effectiveness analysis can identify the optimum strategy, there is usually some degree of uncertainty around this decision. Sources of uncertainty include statistical sampling error in treatment efficacy measures, underlying baseline risk, utility measures and costs, as well as uncertainty in the structure of the model. The optimal strategy is therefore only optimal on average, and a decision to adopt this strategy might still be the wrong decision if all uncertainty could be eliminated. This means that there is a quantifiable expected (average) loss attaching to decisions made under uncertainty, and hence a value in collecting information to reduce that uncertainty. Value of information (VOI) analyses can be used to provide guidance on whether more research would be cost-effective, which particular model inputs (parameters) have the most bearing on decision uncertainty, and can also help with the design and sample size of further research. Here, we introduce the key concepts in VOI analyses, and highlight the inputs required to calculate it. The adoption of the new biologic treatments for RA and PsA tends to be based on placebo-controlled trials. We discuss the possible role of VOI analyses in deciding whether head-to-head comparisons of the biologic therapies should be carried out, illustrating with examples from other fields. We emphasize the need for a model of the natural history of RA and PsA, which reflects a consensus view.
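    A minimal Monte Carlo sketch of the central VOI quantity, the expected value of perfect information (EVPI), for a hypothetical two-biologic decision; the net-benefit model and parameter distributions are invented for illustration only.

```python
# EVPI = E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)], where NB is
# net benefit. All numbers below are hypothetical.
import numpy as np

rng = np.random.default_rng(6)
n_sim = 100_000

# Uncertain inputs drawn from their current (assumed) distributions.
effect_a = rng.normal(0.30, 0.05, n_sim)     # QALY gain, biologic A
effect_b = rng.normal(0.33, 0.10, n_sim)     # QALY gain, biologic B (more uncertain)
wtp = 30_000                                 # willingness to pay per QALY
cost_a, cost_b = 8_000, 9_500

nb = np.column_stack([wtp * effect_a - cost_a,   # net benefit of each decision
                      wtp * effect_b - cost_b])

ev_with_perfect_info = nb.max(axis=1).mean()     # decide after seeing theta
ev_current_info = nb.mean(axis=0).max()          # decide now, on expectations
print(f"EVPI per patient ≈ {ev_with_perfect_info - ev_current_info:,.0f}")
```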

  10. A cross-sectional study of vitamin D levels in a large cohort of patients with rheumatic diseases.

    PubMed

    Nikiphorou, Elena; Uksila, Jaakko; Sokka, Tuulikki

    2018-03-01

    The objective of this study is to examine 25-hydroxyvitamin D [25(OH)D] (D-25) levels and associations with patient- and disease-related factors in rheumatic diseases. This is a register-based study of D-25 levels in adult patients seen at the Central Finland Hospital rheumatology clinic (January 2011-April 2015). Demographic, clinical, laboratory, and patient-reported outcomes (PROs) were collected as part of the normal infrastructure of the outpatient clinic and examined for their association with D-25 level. Statistical analysis included descriptive statistics and univariable and multivariable regression analyses adjusting for age and gender. D-25 was measured in 3203 patients (age range 15-91 years, mean 54; 68% female) with diagnoses including RA (n = 1386), unspecified arthralgia/myalgia (n = 413), and connective tissues diseases (n = 213). The overall D-25 mean (SD) level was 78 (31) and median (IQR) 75 (55, 97). At baseline, 17.8% had D-25 deficiency, and only 1.6% severe deficiency  (< 25 nmol/l); 34%/49% had sufficient/optimal D-25 levels. Higher D-25 levels were associated with older age, lower BMI, and regular exercise (all p < 0.001) among other factors. In multivariable analyses, younger age, non-white background, higher BMI, smoking, less frequent exercise (p < 0.001), and first visit to the clinic (p = 0.033) remained significantly associated with D-25 deficiency. Among those with sub-optimal D-25 levels, 64% had improved to sufficient/optimal levels after a median (IQR) of 13 (7.8, 22) months. The proportion of patients with D-25 deficiency in this study was generally low. Older patients had considerably higher D-25 levels compared to younger patients. Lower physical exercise and higher BMI were associated with higher risk of deficiency. The study supports the benefit of strategies to help minimize the risk of D-25 deficiency.

  11. Optimizing Cold Water Immersion for Exercise-Induced Hyperthermia: A Meta-analysis.

    PubMed

    Zhang, Yang; Davis, Jon-Kyle; Casa, Douglas J; Bishop, Phillip A

    2015-11-01

    Cold water immersion (CWI) provides rapid cooling in events of exertional heat stroke. Optimal procedures for CWI in the field are not well established. This meta-analysis aimed to provide a structured analysis of the effectiveness of CWI on the cooling rate in healthy adults subjected to exercise-induced hyperthermia. An electronic search (December 2014) was conducted using PubMed and Web of Science. The mean difference of the cooling rate between CWI and passive recovery was calculated. Pooled analyses were based on a random-effects model. Sources of heterogeneity were identified through a mixed-effects model Q statistic. Inferential statistics aggregated the CWI cooling rate for extrapolation. Nineteen studies qualified for inclusion. Results demonstrate that CWI elicited a significant effect: mean difference, 0.03°C·min(-1); 95% confidence interval, 0.03-0.04°C·min(-1). A conservative, observed estimate of the CWI cooling rate was 0.08°C·min(-1) across various conditions. CWI cooled individuals twice as fast as passive recovery. Subgroup analyses revealed that cooling was more effective (Q test P < 0.10) when preimmersion core temperature ≥38.6°C, immersion water temperature ≤10°C, ambient temperature ≥20°C, immersion duration ≤10 min, and using torso plus limbs immersion. There is insufficient evidence of effect using forearms/hands CWI for rapid cooling: mean difference, 0.01°C·min(-1); 95% confidence interval, -0.01°C·min(-1) to 0.04°C·min(-1). A combined data summary, pertaining to 607 subjects from 29 relevant studies, was presented for referencing the weighted cooling rate and recovery time, to help practitioners better plan emergency procedures. An optimal procedure for yielding high cooling rates is proposed. Prompt vigorous CWI should be encouraged for treating exercise-induced hyperthermia whenever possible, using cold water (approximately 10°C) and maximizing body surface contact (whole-body immersion).
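
    As a rough illustration of the pooling step described above, the sketch below computes a DerSimonian-Laird random-effects mean difference from per-study cooling-rate differences. The study values are placeholders, not data from the nineteen included studies.

    ```python
    import numpy as np

    # Placeholder per-study mean differences (°C/min) and standard errors
    md = np.array([0.04, 0.02, 0.05, 0.03])
    se = np.array([0.010, 0.008, 0.015, 0.012])

    w_fixed = 1.0 / se**2                               # inverse-variance weights
    md_fixed = np.sum(w_fixed * md) / np.sum(w_fixed)

    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = np.sum(w_fixed * (md - md_fixed) ** 2)
    df = len(md) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)

    w_rand = 1.0 / (se**2 + tau2)                       # random-effects weights
    md_rand = np.sum(w_rand * md) / np.sum(w_rand)
    se_rand = np.sqrt(1.0 / np.sum(w_rand))
    lo, hi = md_rand - 1.96 * se_rand, md_rand + 1.96 * se_rand
    print(f"pooled MD = {md_rand:.3f} °C/min (95% CI {lo:.3f} to {hi:.3f}), tau^2 = {tau2:.5f}")
    ```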

  12. Statistically Optimized Inversion Algorithm for Enhanced Retrieval of Aerosol Properties from Spectral Multi-Angle Polarimetric Satellite Observations

    NASA Technical Reports Server (NTRS)

    Dubovik, O; Herman, M.; Holdak, A.; Lapyonok, T.; Taure, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.

    2011-01-01

    The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for thorough utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds one hundred over each pixel, so the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.

  13. Mathematical Analysis of Vehicle Delivery Scale of Bike-Sharing Rental Nodes

    NASA Astrophysics Data System (ADS)

    Zhai, Y.; Liu, J.; Liu, L.

    2018-04-01

    Motivated by the lack of sound methods for judging the vehicle delivery scale and by insufficient optimization of scheduling decisions, and based on the characteristics of bike-sharing usage, this paper analyses the applicability of a discrete-time, discrete-state Markov chain and proves that the chain is irreducible, aperiodic and positive recurrent. On this basis, the paper concludes that the limiting (steady-state) probability distribution of the bike-sharing Markov chain exists and is independent of the initial distribution. The paper then analyses the difficulty, in the traditional solution algorithm, of estimating the transition probability matrix and of solving the associated system of linear equations. To improve feasibility, the paper proposes a "virtual two-node vehicle scale solution" algorithm that aggregates all nodes other than the node being solved into a single virtual node, and provides the transition probability matrix, the steady-state linear equations, and the computational methods for the steady-state scale, steady-state arrival time and scheduling decision of the node in question. Finally, the paper evaluates the rationality and accuracy of the steady-state probabilities of the proposed algorithm by comparison with the traditional algorithm. By solving the steady-state scale of the nodes one by one, the proposed algorithm is shown to be highly feasible, because it lowers the computational difficulty and reduces the number of statistics that must be collected, which will help bike-sharing companies optimize the scale and scheduling of nodes.
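
    A minimal sketch of the steady-state computation the abstract relies on: for a hypothetical two-state chain (the node under study versus a single virtual node aggregating all other nodes), solve pi P = pi with the probabilities constrained to sum to one. The transition matrix below is invented for illustration.

    ```python
    import numpy as np

    # Hypothetical transition matrix: state 0 = node under study,
    # state 1 = virtual node aggregating all other nodes
    P = np.array([[0.7, 0.3],
                  [0.2, 0.8]])

    # Solve pi P = pi with sum(pi) = 1, i.e. (P^T - I) pi = 0 plus a normalisation row
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("steady-state distribution:", pi)   # [0.4, 0.6] for this matrix
    ```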

  14. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania.

    PubMed

    Bradford, Kathryn; Abrahams, Leslie; Hegglin, Miriam; Klima, Kelly

    2015-10-06

    With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare data sets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  15. A Heat Vulnerability Index and Adaptation Solutions for Pittsburgh, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Klima, K.; Abrahams, L.; Bradford, K.; Hegglin, M.

    2015-12-01

    With increasing evidence of global warming, many cities have focused attention on response plans to address their populations' vulnerabilities. Despite expected increased frequency and intensity of heat waves, the health impacts of such events in urban areas can be minimized with careful policy and economic investments. We focus on Pittsburgh, Pennsylvania and ask two questions. First, what are the top factors contributing to heat vulnerability and how do these characteristics manifest geospatially throughout Pittsburgh? Second, assuming the City wishes to deploy additional cooling centers, what placement will optimally address the vulnerability of the at-risk populations? We use national census data, ArcGIS geospatial modeling, and statistical analysis to determine a range of heat vulnerability indices and optimal cooling center placement. We find that while different studies use different data and statistical calculations, all methods tested locate additional cooling centers at the confluence of the three rivers (Downtown), the northeast side of Pittsburgh (Shadyside/Highland Park), and the southeast side of Pittsburgh (Squirrel Hill). This suggests that for Pittsburgh, a researcher could apply the same factor analysis procedure to compare datasets for different locations and times; factor analyses for heat vulnerability are more robust than previously thought.

  16. Assessment and improvement of statistical tools for comparative proteomics analysis of sparse data sets with few experimental replicates.

    PubMed

    Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard

    2013-09-06

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, the standard t test, the moderated t test (also known as limma), and rank products, for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using the limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
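
    The rank product statistic mentioned above can be sketched as follows: rank features within each replicate, skip missing values, and take the geometric mean of the available ranks. This simplified version is an illustration of the idea under those assumptions, not the authors' improved R implementation.

    ```python
    import numpy as np

    def rank_product(data):
        """data: features x replicates matrix of log-ratios, NaN = missing.
        Returns the geometric mean of within-replicate ranks (small = consistently low)."""
        n_rep = data.shape[1]
        ranks = np.full_like(data, np.nan, dtype=float)
        for j in range(n_rep):
            col = data[:, j]
            observed = ~np.isnan(col)
            order = np.argsort(col[observed])          # rank 1 = smallest value in replicate j
            r = np.empty(observed.sum())
            r[order] = np.arange(1, observed.sum() + 1)
            ranks[observed, j] = r
        # geometric mean over the replicates in which each feature was observed
        return np.exp(np.nanmean(np.log(ranks), axis=1))

    # Toy example: 5 features, 3 replicates, with missing values
    x = np.array([[-2.0, -1.8, np.nan],
                  [0.1, 0.0, 0.2],
                  [1.5, np.nan, 1.2],
                  [-0.3, -0.1, -0.2],
                  [0.4, 0.5, np.nan]])
    print(rank_product(x))
    ```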

  17. A sigmoidal model for biosorption of heavy metal cations from aqueous media.

    PubMed

    Özen, Rümeysa; Sayar, Nihat Alpagu; Durmaz-Sam, Selcen; Sayar, Ahmet Alp

    2015-07-01

    A novel multi-input single output (MISO) black-box sigmoid model is developed to simulate the biosorption of heavy metal cations by the fission yeast from aqueous medium. Validation and verification of the model are performed through statistical chi-squared hypothesis tests, and the model is evaluated by uncertainty and sensitivity analyses. The simulated results are in agreement with the data of the studied system, in which Schizosaccharomyces pombe biosorbs Ni(II) cations at various process conditions. Experimental data were obtained specifically for this work using dead cells of an adapted variant of S. pombe and are represented by Freundlich isotherms. A process optimization scheme is proposed using the present model to build a novel application of a cost-merit objective function, which would be useful for predicting optimal operating conditions. Copyright © 2015. Published by Elsevier Inc.

  18. Siemens Immulite Aspergillus-specific IgG assay for chronic pulmonary aspergillosis diagnosis.

    PubMed

    Page, Iain D; Richardson, Malcolm D; Denning, David W

    2018-05-14

    Chronic pulmonary aspergillosis (CPA) complicates underlying lung disease, including treated tuberculosis. Measurement of Aspergillus-specific immunoglobulin G (IgG) is a key diagnostic step. Cutoffs have been proposed based on receiver operating characteristic (ROC) curve analyses comparing CPA cases to healthy controls, but performance in at-risk populations with underlying lung disease is unclear. We evaluated optimal cutoffs for the Siemens Immulite Aspergillus-specific IgG assay for CPA diagnosis in relation to large groups of healthy and diseased controls with treated pulmonary tuberculosis. Sera from 241 patients with CPA attending the UK National Aspergillosis Centre, 299 Ugandan blood donors (healthy controls), and 398 Ugandans with treated pulmonary tuberculosis (diseased controls) were tested. Radiological screening removed potential CPA cases from diseased controls (234 screened diseased controls). ROC curve analyses were performed and optimal cutoffs identified by Youden J statistic. CPA versus control ROC area under curve (AUC) results were: healthy controls 0.984 (95% confidence interval 0.972-0.997), diseased controls 0.972 (0.959-0.985), screened diseased controls 0.979 (0.967-0.992). Optimal cutoffs were: healthy controls 15 mg/l (94.6% sensitivity, 98% specificity), unscreened diseased controls 15 mg/l (94.6% sensitivity, 94.5% specificity), screened diseased controls 25 mg/l (92.9% sensitivity, 98.7% specificity). Results were similar in healthy and diseased controls. We advocate a cutoff of 20 mg/l as this is the midpoint of the range of optimal cutoffs. Cutoffs calculated in relation to healthy controls for other assays are likely to remain valid for use in a treated tuberculosis population.
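
    A minimal sketch of the cutoff-selection procedure described above, using scikit-learn's ROC utilities and the Youden J statistic on made-up antibody levels; the simulated sera below are illustrative, not the study data, and the lognormal parameters are arbitrary.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(1)
    # Hypothetical Aspergillus-specific IgG levels (mg/l): cases tend to run higher
    controls = rng.lognormal(mean=1.8, sigma=0.5, size=300)
    cases = rng.lognormal(mean=3.6, sigma=0.5, size=240)

    y_true = np.concatenate([np.zeros_like(controls), np.ones_like(cases)])
    y_score = np.concatenate([controls, cases])

    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    best = np.argmax(tpr - fpr)                      # Youden J = sensitivity + specificity - 1
    print(f"AUC = {auc:.3f}, optimal cutoff = {thresholds[best]:.1f} mg/l "
          f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
    ```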

  19. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
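
    A sketch of a basic Bland-Altman comparison for two quantitative assays (for example, variant allele fractions from a reference method and the assay in validation); the paired values are invented, and the bias, limits of agreement and a simple check for proportional error are computed in the usual way.

    ```python
    import numpy as np

    # Invented paired measurements: reference method vs assay in validation
    ref = np.array([0.05, 0.12, 0.25, 0.33, 0.41, 0.48, 0.60, 0.75])
    new = np.array([0.06, 0.10, 0.27, 0.35, 0.40, 0.51, 0.58, 0.78])

    diff = new - ref
    mean_pair = (new + ref) / 2.0

    bias = diff.mean()                                # constant (systematic) error
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)        # 95% limits of agreement

    # A proportional error appears as a trend of the differences against the pair means
    slope, intercept = np.polyfit(mean_pair, diff, 1)

    print(f"bias = {bias:+.4f}, limits of agreement = {loa[0]:+.4f} to {loa[1]:+.4f}")
    print(f"difference-vs-mean slope = {slope:+.4f} (proportional error if far from 0)")
    ```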

  20. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic

    PubMed Central

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics, and it is important to detect abrupt changes in large-scale bioelectric signals quickly and efficiently. Currently, most existing methods, such as the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is helpful for extracting useful information from all kinds of bioelectric time-series signals. PMID:27413364
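
    For context, the brute-force baseline that BSTKS is designed to accelerate can be sketched directly: scan every admissible split point and keep the one that maximizes the two-sample Kolmogorov-Smirnov statistic. This is the slow reference approach applied to a synthetic signal, not the tree-based algorithm of the paper.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    def ks_change_point(x, min_seg=20):
        """Return the split index (and KS statistic) that best separates x into two segments."""
        best_idx, best_stat = None, -1.0
        for i in range(min_seg, len(x) - min_seg):
            stat = ks_2samp(x[:i], x[i:]).statistic
            if stat > best_stat:
                best_idx, best_stat = i, stat
        return best_idx, best_stat

    # Synthetic signal with an abrupt mean shift at index 300
    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.5, 1.0, 200)])
    idx, stat = ks_change_point(x)
    print(f"estimated change point at index {idx} (KS statistic {stat:.3f})")
    ```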

  1. A Fast Framework for Abrupt Change Detection Based on Binary Search Trees and Kolmogorov Statistic.

    PubMed

    Qi, Jin-Peng; Qi, Jie; Zhang, Qing

    2016-01-01

    Change-Point (CP) detection has attracted considerable attention in the fields of data mining and statistics, and it is important to detect abrupt changes in large-scale bioelectric signals quickly and efficiently. Currently, most existing methods, such as the Kolmogorov-Smirnov (KS) statistic, are time-consuming, especially for large-scale datasets. In this paper, we propose a fast framework for abrupt change detection based on binary search trees (BSTs) and a modified KS statistic, named BSTKS (binary search trees and Kolmogorov statistic). In this method, first, two binary search trees, termed BSTcA and BSTcD, are constructed by multilevel Haar Wavelet Transform (HWT); second, three search criteria are introduced in terms of the statistic and variance fluctuations in the diagnosed time series; last, an optimal search path is detected from the root to the leaf nodes of the two BSTs. Studies on both synthetic time series samples and real electroencephalograph (EEG) recordings indicate that the proposed BSTKS can detect abrupt change more quickly and efficiently than the KS, t-statistic (t), and Singular-Spectrum Analysis (SSA) methods, with the shortest computation time, the highest hit rate, the smallest error, and the highest accuracy of the four methods. This study suggests that the proposed BSTKS is helpful for extracting useful information from all kinds of bioelectric time-series signals.

  2. Statistics for X-chromosome associations.

    PubMed

    Özbek, Umut; Lin, Hui-Min; Lin, Yan; Weeks, Daniel E; Chen, Wei; Shaffer, John R; Purcell, Shaun M; Feingold, Eleanor

    2018-06-13

    In a genome-wide association study (GWAS), association between genotype and phenotype at autosomal loci is generally tested by regression models. However, X-chromosome data are often excluded from published analyses of autosomes because of the difference between males and females in number of X chromosomes. Failure to analyze X-chromosome data at all is obviously less than ideal, and can lead to missed discoveries. Even when X-chromosome data are included, they are often analyzed with suboptimal statistics. Several mathematically sensible statistics for X-chromosome association have been proposed. The optimality of these statistics, however, is based on very specific simple genetic models. In addition, while previous simulation studies of these statistics have been informative, they have focused on single-marker tests and have not considered the types of error that occur even under the null hypothesis when the entire X chromosome is scanned. In this study, we comprehensively tested several X-chromosome association statistics using simulation studies that include the entire chromosome. We also considered a wide range of trait models for sex differences and phenotypic effects of X inactivation. We found that models that do not incorporate a sex effect can have large type I error in some cases. We also found that many of the best statistics perform well even when there are modest deviations, such as trait variance differences between the sexes or small sex differences in allele frequencies, from assumptions. © 2018 WILEY PERIODICALS, INC.

  3. Elemental composition of edible nuts: fast optimization and validation procedure of an ICP-OES method.

    PubMed

    Tošić, Snežana B; Mitić, Snežana S; Velimirović, Dragan S; Stojanović, Gordana S; Pavlović, Aleksandra N; Pecev-Marinković, Emilija T

    2015-08-30

    An inductively coupled plasma-optical emission spectrometry (ICP-OES) method for the rapid simultaneous detection of 19 elements in edible nuts (walnuts: Juglans nigra; almonds: Prunus dulcis; hazelnuts: Corylus avellana; Brazil nuts: Bertholletia excelsa; cashews: Anacardium occidentale; pistachios: Pistacia vera; and peanuts: Arachis hypogaea) available on the Serbian market was optimized and validated through the selection of instrumental parameters and analytical lines free from spectral interference and with the lowest matrix effects. The analysed macro-elements were present in the following descending order: Na > Mg > Ca > K. Of all the trace elements, the tested samples showed the highest content of Fe. The micro-element Se was detected in all the samples of nuts. The toxic elements As, Cd and Pb were either not detected or their contents were below the limit of detection. One-way analysis of variance, Student's t-test, Tukey's HSD post hoc test and hierarchical agglomerative cluster analysis were applied in the statistical analysis of the results. Based on the detected contents of the analysed elements, it can be concluded that nuts may be a good additional source of minerals as micronutrients. © 2014 Society of Chemical Industry.

  4. Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning.

    PubMed

    Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn

    2017-06-01

    To formulate convex planning objectives of treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
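
    To make the mean-tail-dose idea concrete: the upper mean-tail-dose at level alpha is the average dose in the hottest alpha fraction of voxels (and analogously for the coldest fraction). The sketch below simply evaluates these quantities, alongside a conventional dose-at-volume statistic, on a random dose vector; it is an illustration of the concept, not the authors' optimization code, and the dose distribution is invented.

    ```python
    import numpy as np

    def upper_mean_tail_dose(dose, alpha):
        """Average dose of the hottest `alpha` fraction of voxels."""
        k = max(1, int(np.ceil(alpha * dose.size)))
        return np.sort(dose)[-k:].mean()

    def lower_mean_tail_dose(dose, alpha):
        """Average dose of the coldest `alpha` fraction of voxels."""
        k = max(1, int(np.ceil(alpha * dose.size)))
        return np.sort(dose)[:k].mean()

    def dose_at_volume(dose, v):
        """Conventional DVH statistic D_v: dose received by the hottest fraction v."""
        return np.quantile(dose, 1.0 - v)

    rng = np.random.default_rng(3)
    organ_dose = rng.normal(20.0, 5.0, size=10_000)   # hypothetical voxel doses (Gy)

    print("D_2%           :", round(dose_at_volume(organ_dose, 0.02), 2))
    print("upper MTD (2%) :", round(upper_mean_tail_dose(organ_dose, 0.02), 2))
    print("lower MTD (5%) :", round(lower_mean_tail_dose(organ_dose, 0.05), 2))
    ```

    Because the upper mean-tail-dose always lies at or above the corresponding dose-at-volume, constraining it also pushes that DVH statistic in the desired direction while keeping the planning objective convex.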

  5. Optimization of cold-adapted lysozyme production from the psychrophilic yeast Debaryomyces hansenii using statistical experimental methods.

    PubMed

    Wang, Quanfu; Hou, Yanhua; Yan, Peisheng

    2012-06-01

    Statistical experimental designs were employed to optimize culture conditions for cold-adapted lysozyme production by the psychrophilic yeast Debaryomyces hansenii. In the first step of optimization, using a Plackett-Burman design (PBD), peptone, glucose, temperature, and NaCl were identified as significant variables that affected lysozyme production; the medium was further optimized using a four-factor central composite design (CCD) to understand the interactions among these variables and to determine their optimal levels. A quadratic model was developed and validated. Compared to the initial level (18.8 U/mL), the maximum lysozyme production observed (65.8 U/mL) was approximately 3.5-fold higher under the optimized conditions. This is the first time cold-adapted lysozyme production has been optimized using statistical experimental methods. A 3.5-fold enhancement of microbial lysozyme was gained after optimization. Such improved production will facilitate the application of microbial lysozyme. Thus, D. hansenii lysozyme may be a good new resource for the industrial production of cold-adapted lysozymes. © 2012 Institute of Food Technologists®
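
    As a sketch of the second optimization step, the code below fits a full quadratic response-surface model to central-composite-style data for two coded factors and locates the stationary point of the fitted surface. The factor settings and responses are illustrative placeholders, not the study's actual design.

    ```python
    import numpy as np

    # Coded levels for two factors (a small face-centred central composite design)
    X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
    # Hypothetical lysozyme activities (U/mL) measured at those settings
    y = np.array([30.1, 45.2, 41.0, 60.3, 38.5, 58.0, 40.2, 52.1, 65.8])

    x1, x2 = X[:, 0], X[:, 1]
    # Full quadratic model: intercept, linear, interaction and squared terms
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    b0, b1, b2, b12, b11, b22 = coef

    # Stationary point of the fitted surface: set the gradient to zero
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    g = np.array([b1, b2])
    x_stat = np.linalg.solve(H, -g)
    print("fitted coefficients:", np.round(coef, 2))
    print("stationary point (coded units):", np.round(x_stat, 2))
    ```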

  6. Statistically optimal perception and learning: from behavior to neural representations

    PubMed Central

    Fiser, József; Berkes, Pietro; Orbán, Gergő; Lengyel, Máté

    2010-01-01

    Human perception has recently been characterized as statistical inference based on noisy and ambiguous sensory inputs. Moreover, suitable neural representations of uncertainty have been identified that could underlie such probabilistic computations. In this review, we argue that learning an internal model of the sensory environment is another key aspect of the same statistical inference procedure and thus perception and learning need to be treated jointly. We review evidence for statistically optimal learning in humans and animals, and reevaluate possible neural representations of uncertainty based on their potential to support statistically optimal learning. We propose that spontaneous activity can have a functional role in such representations leading to a new, sampling-based, framework of how the cortex represents information and uncertainty. PMID:20153683

  7. Quantifying side-chain conformational variations in protein structure

    PubMed Central

    Miao, Zhichao; Cao, Yang

    2016-01-01

    Protein side-chain conformation is closely related to a protein's biological function. Side-chain prediction is a key step in protein design, protein docking and structure optimization. However, side-chain polymorphism of various types is widespread in proteins and has long been overlooked by side-chain prediction. Such conformational variations have not been quantitatively studied, and the correlations between these variations and residue features remain vague. Here, we performed statistical analyses on large-scale data sets and found that side-chain conformational flexibility is closely related to solvent exposure, degrees of freedom and hydrophilicity. These analyses allowed us to quantify different types of side-chain variability in the PDB. The results underscore that protein side-chain conformation prediction is not a single-answer problem, leading us to reconsider the assessment approaches of side-chain prediction programs. PMID:27845406

  8. Quantifying side-chain conformational variations in protein structure

    NASA Astrophysics Data System (ADS)

    Miao, Zhichao; Cao, Yang

    2016-11-01

    Protein side-chain conformation is closely related to a protein's biological function. Side-chain prediction is a key step in protein design, protein docking and structure optimization. However, side-chain polymorphism of various types is widespread in proteins and has long been overlooked by side-chain prediction. Such conformational variations have not been quantitatively studied, and the correlations between these variations and residue features remain vague. Here, we performed statistical analyses on large-scale data sets and found that side-chain conformational flexibility is closely related to solvent exposure, degrees of freedom and hydrophilicity. These analyses allowed us to quantify different types of side-chain variability in the PDB. The results underscore that protein side-chain conformation prediction is not a single-answer problem, leading us to reconsider the assessment approaches of side-chain prediction programs.

  9. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources.

    PubMed

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-10-16

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without finite-key analyses are not suitable for use in practice. The finite-key securities of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), are analyzed in our paper. For each PDS QKD protocol, we establish an optimization program and obtain lower bounds on the finite-key rates. Under reasonable values of the quantum setup parameters, the lower bounds on the finite-key rates are simulated. The simulation results show that the effects of different fluctuations on the key rates vary with transmission distance. Moreover, the PDS QKD protocol with an unstable PDC source can tolerate larger intensity fluctuations and larger statistical fluctuations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spycher, Nicolas; Peiffer, Loic; Finsterle, Stefan

    GeoT implements the multicomponent geothermometry method developed by Reed and Spycher (1984, Geochim. Cosmochim. Acta 46 513–528) in a stand-alone computer program, to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions and the geothermometry computations are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization using existing parameter estimation software, such as iTOUGH2, PEST, or UCODE. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss.

  11. Quantifying side-chain conformational variations in protein structure.

    PubMed

    Miao, Zhichao; Cao, Yang

    2016-11-15

    Protein side-chain conformation is closely related to a protein's biological function. Side-chain prediction is a key step in protein design, protein docking and structure optimization. However, side-chain polymorphism of various types is widespread in proteins and has long been overlooked by side-chain prediction. Such conformational variations have not been quantitatively studied, and the correlations between these variations and residue features remain vague. Here, we performed statistical analyses on large-scale data sets and found that side-chain conformational flexibility is closely related to solvent exposure, degrees of freedom and hydrophilicity. These analyses allowed us to quantify different types of side-chain variability in the PDB. The results underscore that protein side-chain conformation prediction is not a single-answer problem, leading us to reconsider the assessment approaches of side-chain prediction programs.

  12. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.

  13. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  14. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
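
    A minimal sketch of the estimation idea, restricted to the two-parameter Weibull case for brevity: build the empirical distribution function from failure data and minimize the Kolmogorov-Smirnov discrepancy with a derivative-free Powell search. The failure stresses below are made up; the three-parameter case and the Anderson-Darling variant follow the same pattern.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    # Made-up failure stresses (MPa) for a brittle material
    data = np.sort(np.array([212., 234., 251., 260., 268., 275., 283., 290., 301., 320.]))
    n = data.size
    edf_hi = np.arange(1, n + 1) / n      # EDF just after each observation
    edf_lo = np.arange(0, n) / n          # EDF just before each observation

    def ks_statistic(params):
        shape, scale = params
        if shape <= 0 or scale <= 0:
            return 1e6                    # penalize infeasible parameters
        cdf = weibull_min.cdf(data, shape, scale=scale)
        return max(np.max(edf_hi - cdf), np.max(cdf - edf_lo))

    # Powell's derivative-free method, as in the reference (no gradients required)
    res = minimize(ks_statistic, x0=[5.0, 280.0], method="Powell")
    shape_hat, scale_hat = res.x
    print(f"Weibull modulus ~ {shape_hat:.2f}, scale ~ {scale_hat:.1f} MPa, "
          f"min KS statistic = {res.fun:.4f}")
    ```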

  15. Response surface methodology as an approach to determine optimal activities of lipase entrapped in sol-gel matrix using different vegetable oils.

    PubMed

    Pinheiro, Rubiane C; Soares, Cleide M F; de Castro, Heizir F; Moraes, Flavio F; Zanin, Gisella M

    2008-03-01

    The conditions for maximization of the enzymatic activity of lipase entrapped in sol-gel matrix were determined for different vegetable oils using an experimental design. The effects of pH, temperature, and biocatalyst loading on lipase activity were verified using a central composite experimental design leading to a set of 13 assays and the surface response analysis. For canola oil and entrapped lipase, statistical analyses showed significant effects for pH and temperature and also the interactions between pH and temperature and temperature and biocatalyst loading. For the olive oil and entrapped lipase, it was verified that the pH was the only variable statistically significant. This study demonstrated that response surface analysis is a methodology appropriate for the maximization of the percentage of hydrolysis, as a function of pH, temperature, and lipase loading.

  16. Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE

    PubMed Central

    Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh

    2014-01-01

    AIM: To compare 3D Black Blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both the 3D black blood magnetic resonance imaging SPACE sequence and conventional 2D multi-contrast TSE sequences, with a consolidated imaging approach in the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient, and linear regression and Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging; however, this may be due to coils not being optimized for femoral imaging. Quantitatively, higher mean total vessel area measurements for the 3D SPACE technique were observed across all three vascular beds. No significant differences in lumen area for either the right or left carotids were observed between the two techniques. Overall, a significant correlation existed between the measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between the 3D SPACE and 2D TSE techniques are comparable. 3D SPACE may be a feasible approach in the evaluation of cardiovascular patients. PMID:24876923

  17. Improvement of the coagulation/flocculation process using a combination of Moringa oleifera lam with anionic polymer in water treatment.

    PubMed

    Bongiovani, Milene Carvalho; Camacho, Franciele Pereira; Nishi, Letícia; Coldebella, Priscila Ferri; Valverde, Karina Cardoso; Vieira, Angélica Marquetotti Salcedo; Bergamasco, Rosângela

    2014-01-01

    The objective of this study is to investigate the impact of an anionic polymer as a flocculant aid on coagulation/flocculation performance with a saline solution of Moringa oleifera as the coagulant, in order to produce larger flocs and decrease the sedimentation time. For the tests, raw water was used from the Pirapó River Basin (Maringá, Paraná, Brazil). Optimization of the coagulation/flocculation tests was initially performed in a jar-test with a dosage of M. oleifera Lam coagulant (crude extract, MO; oil extracted with ethanol, MO (et); and with hexane, MO (hex); 1% m/v) that ranged from 10 to 60 mg L(-1) and a dosage of the anionic polymer (0.1%) as a flocculant aid that ranged from 0 to 0.4 mg L(-1). The parameters analysed were colour, turbidity and compounds with absorption at UV254nm. In view of the statistical analysis results, MO (hex) at a dosage of 30 mg L(-1) was chosen as the coagulant for the subsequent coagulation/flocculation tests. When the anionic polymer was used alone (0.0 mg L(-1) of MO (hex)), the parameters were not removed and no heavy flocs were generated, in contrast to the combination of MO (hex) with the anionic polymer. Statistical analysis showed that MO (hex) achieved the highest removals of the analysed parameters at lower dosages, and no significant increase in parameter removal was observed when the polymer dosage was increased. The efficacy of the coagulant-polymer combination was optimal when 30 mg L(-1) of MO (hex) was used as the coagulant and 0.1 mg L(-1) of the anionic polymer was used as a flocculant aid, decreasing the sedimentation time from 1 h to 15 min.

  18. Statistics for NAEG: past efforts, new results, and future plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given.

  19. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
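
    A sketch of the modelling and validation steps described above: fit a logistic regression to synthetic surgical records and report the C-index, which for a binary outcome equals the area under the ROC curve. The risk factors, coefficients and data below are invented for illustration only.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Synthetic risk factors: operation duration (h), ASA score, wound class, laparoscopy flag
    duration = rng.gamma(3.0, 1.0, n)
    asa = rng.integers(1, 5, n)
    wound = rng.integers(1, 4, n)
    lap = rng.integers(0, 2, n)
    X = np.column_stack([duration, asa, wound, lap])

    # Synthetic outcome: SSI risk rises with duration, ASA and wound class
    logit = -5.0 + 0.4 * duration + 0.5 * asa + 0.3 * wound - 0.6 * lap
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    c_index = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"C-index on held-out data: {c_index:.3f}")
    ```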

  20. Structural tailoring of counter rotation propfans

    NASA Technical Reports Server (NTRS)

    Brown, Kenneth W.; Hopkins, D. A.

    1989-01-01

    The STAT program was designed for the optimization of single rotation, tractor propfan designs. New propfan designs, however, generally consist of two counter rotating propfan rotors. STAT is constructed to contain two levels of analysis. An interior loop, consisting of accurate, efficient approximate analyses, is used to perform the primary propfan optimization. Once an optimum design has been obtained, a series of refined analyses are conducted. These analyses, while too computer time expensive for the optimization loop, are of sufficient accuracy to validate the optimized design. Should the design prove to be unacceptable, provisions are made for recalibration of the approximate analyses, for subsequent reoptimization.

  1. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
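
    The reliability-weighted combination rule tested in this line of work can be written as an inverse-variance weighted average. The sketch below compares the statistically optimal prediction with a hypothetical sub-optimal empirical weighting; the slant estimates, noise levels and observed weight are all made up.

    ```python
    import numpy as np

    # Made-up single-cue slant estimates (degrees) and their noise standard deviations
    visual_est, visual_sd = 32.0, 4.0     # texture cue
    haptic_est, haptic_sd = 26.0, 2.0     # haptic cue

    # Minimum-variance (statistically optimal) weights are proportional to 1/variance
    w_visual = (1 / visual_sd**2) / (1 / visual_sd**2 + 1 / haptic_sd**2)
    w_haptic = 1.0 - w_visual
    optimal_combined = w_visual * visual_est + w_haptic * haptic_est
    optimal_sd = np.sqrt(1.0 / (1 / visual_sd**2 + 1 / haptic_sd**2))

    # A reliability-sensitive but sub-optimal observer might under-weight the better cue
    observed_w_haptic = 0.65              # hypothetical empirical weight (optimal is 0.8 here)
    observed_combined = (1 - observed_w_haptic) * visual_est + observed_w_haptic * haptic_est

    print(f"optimal weights: visual {w_visual:.2f}, haptic {w_haptic:.2f}")
    print(f"optimal estimate {optimal_combined:.1f} deg (sd {optimal_sd:.2f})")
    print(f"sub-optimal observed estimate {observed_combined:.1f} deg")
    ```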

  2. A Large Scale (N=400) Investigation of Gray Matter Differences in Schizophrenia Using Optimized Voxel-based Morphometry

    PubMed Central

    Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.

    2008-01-01

    Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428

  3. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

    This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criterion of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with the minimax concave penalty (MCP) and show that, under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing a solution that satisfies the S3ONC admits an FPTAS.

  4. Spatial variation in the bacterial and denitrifying bacterial community in a biofilter treating subsurface agricultural drainage.

    PubMed

    Andrus, J Malia; Porter, Matthew D; Rodríguez, Luis F; Kuehlhorn, Timothy; Cooke, Richard A C; Zhang, Yuanhui; Kent, Angela D; Zilles, Julie L

    2014-02-01

    Denitrifying biofilters can remove agricultural nitrates from subsurface drainage, reducing nitrate pollution that contributes to coastal hypoxic zones. The performance and reliability of natural and engineered systems dependent upon microbially mediated processes, such as denitrifying biofilters, can be affected by the spatial structure of their microbial communities. Furthermore, our understanding of the relationship between microbial community composition and function is influenced by the spatial distribution of samples. In this study we characterized the spatial structure of bacterial communities in a denitrifying biofilter in central Illinois. Bacterial communities were assessed using automated ribosomal intergenic spacer analysis for bacteria and terminal restriction fragment length polymorphism of nosZ for denitrifying bacteria. Non-metric multidimensional scaling and analysis of similarity (ANOSIM) analyses indicated that bacteria showed statistically significant spatial structure by depth and transect, while denitrifying bacteria did not exhibit significant spatial structure. For determination of spatial patterns, we developed a package of automated functions for the R statistical environment that allows directional analysis of microbial community composition data using either ANOSIM or Mantel statistics. Applying this package to the biofilter data, the flow path correlation range for the bacterial community was 6.4 m at the shallower, periodically inundated depth and 10.7 m at the deeper, continually submerged depth. These spatial structures suggest a strong influence of hydrology on the microbial community composition in these denitrifying biofilters. Understanding such spatial structure can also guide optimal sample collection strategies for microbial community analyses.

  5. Finite-key security analyses on passive decoy-state QKD protocols with different unstable sources

    PubMed Central

    Song, Ting-Ting; Qin, Su-Juan; Wen, Qiao-Yan; Wang, Yu-Kun; Jia, Heng-Yue

    2015-01-01

    In quantum communication, passive decoy-state QKD protocols can eliminate many side channels, but protocols without finite-key analyses are not suitable for use in practice. The finite-key securities of passive decoy-state (PDS) QKD protocols with two different unstable sources, type-II parametric down-conversion (PDC) and phase-randomized weak coherent pulses (WCPs), are analyzed in our paper. For each PDS QKD protocol, we establish an optimization program and obtain lower bounds on the finite-key rates. Under reasonable values of the quantum setup parameters, the lower bounds on the finite-key rates are simulated. The simulation results show that the effects of different fluctuations on the key rates vary with transmission distance. Moreover, the PDS QKD protocol with an unstable PDC source can tolerate larger intensity fluctuations and larger statistical fluctuations. PMID:26471947

  6. Neuro-evolutionary computing paradigm for Painlevé equation-II in nonlinear optics

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Ahmad, Sufyan; Awais, Muhammad; Ul Islam Ahmad, Siraj; Asif Zahoor Raja, Muhammad

    2018-05-01

    The aim of this study is to investigate the numerical treatment of the Painlevé equation-II arising in physical models of nonlinear optics through artificial intelligence procedures, by incorporating a single-layer structure of neural networks optimized with genetic algorithms, sequential quadratic programming and active set techniques. We constructed a mathematical model for the nonlinear Painlevé equation-II with the help of neural networks by defining an error-based cost function in the mean-square sense. The performance of the proposed technique is validated through statistical analyses by means of one-way ANOVA tests conducted on a dataset generated by a large number of independent runs.

  7. Selected Bibliography on Optimizing Techniques in Statistics

    DTIC Science & Technology

    1981-08-01

    problems in business, industry and government are formulated as optimization problems. Topics in optimization constitute an essential area of study in...numerical, (iii) mathematical programming, and (iv) variational. We provide pertinent references with statistical applications in the above areas in Part I...TMS Advanced Studies in Management Sciences, North-Holland Publishing Company, Amsterdam. (To appear.) Spang, H. A. (1962). A review of minimization

  8. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
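
    As an illustration of the screening designs mentioned above, the sketch below constructs a two-level 2^(5-1) fractional factorial for five incubation-time factors by aliasing the fifth factor with the four-way interaction of the others. The factor names are hypothetical and the design is generic, not the published assay protocol.

    ```python
    import itertools
    import numpy as np

    # Base 2^4 full factorial in coded units (-1 / +1) for four factors
    base = np.array(list(itertools.product([-1, 1], repeat=4)))

    # 2^(5-1) fractional factorial: alias the fifth factor with the four-way interaction E = ABCD
    e = base.prod(axis=1, keepdims=True)
    design = np.hstack([base, e])

    factors = ["coat_time", "block_time", "primary_time", "secondary_time", "substrate_time"]
    print(" run  " + "  ".join(f"{name:>15s}" for name in factors))
    for i, row in enumerate(design, start=1):
        print(f"{i:4d}  " + "  ".join(f"{int(v):15d}" for v in row))
    print(f"\n{design.shape[0]} runs instead of {2**5} for the full factorial")
    ```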

  9. Confidence intervals for the between-study variance in random-effects meta-analysis using generalised heterogeneity statistics: should we use unequal tails?

    PubMed

    Jackson, Dan; Bowden, Jack

    2016-09-07

    Confidence intervals for the between study variance are useful in random-effects meta-analyses because they quantify the uncertainty in the corresponding point estimates. Methods for calculating these confidence intervals have been developed that are based on inverting hypothesis tests using generalised heterogeneity statistics. Whilst, under the random effects model, these new methods furnish confidence intervals with the correct coverage, the resulting intervals are usually very wide, making them uninformative. We discuss a simple strategy for obtaining 95 % confidence intervals for the between-study variance with a markedly reduced width, whilst retaining the nominal coverage probability. Specifically, we consider the possibility of using methods based on generalised heterogeneity statistics with unequal tail probabilities, where the tail probability used to compute the upper bound is greater than 2.5 %. This idea is assessed using four real examples and a variety of simulation studies. Supporting analytical results are also obtained. Our results provide evidence that using unequal tail probabilities can result in shorter 95 % confidence intervals for the between-study variance. We also show some further results for a real example that illustrates how shorter confidence intervals for the between-study variance can be useful when performing sensitivity analyses for the average effect, which is usually the parameter of primary interest. We conclude that using unequal tail probabilities when computing 95 % confidence intervals for the between-study variance, when using methods based on generalised heterogeneity statistics, can result in shorter confidence intervals. We suggest that those who find the case for using unequal tail probabilities convincing should use the '1-4 % split', where greater tail probability is allocated to the upper confidence bound. The 'width-optimal' interval that we present deserves further investigation.
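
    A sketch of a Q-profile style interval with an unequal tail split: invert the generalised Q statistic against chi-square quantiles, allocating (hypothetically) 1% to the lower tail and 4% to the upper tail in the spirit of the '1-4 % split' discussed above. The study estimates and standard errors are invented, and this is a simplified illustration rather than the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    # Invented study effect estimates y and within-study standard errors s
    y = np.array([0.55, 0.10, 0.62, 0.22, -0.05, 0.48])
    s = np.array([0.12, 0.10, 0.15, 0.09, 0.11, 0.14])
    k = len(y)

    def generalised_q(tau2):
        """Generalised Q statistic at a candidate between-study variance tau^2."""
        w = 1.0 / (s**2 + tau2)
        mu = np.sum(w * y) / np.sum(w)
        return np.sum(w * (y - mu) ** 2)

    def q_profile_bound(target, upper=50.0):
        """Solve generalised_q(tau2) = target; Q is decreasing in tau2."""
        if generalised_q(0.0) <= target:
            return 0.0                    # bound truncated at zero
        return brentq(lambda t: generalised_q(t) - target, 0.0, upper)

    alpha_lower, alpha_upper = 0.01, 0.04   # unequal split: more tail mass on the upper bound
    tau2_lower = q_profile_bound(chi2.ppf(1 - alpha_lower, k - 1))
    tau2_upper = q_profile_bound(chi2.ppf(alpha_upper, k - 1))
    print(f"95% CI for tau^2: [{tau2_lower:.4f}, {tau2_upper:.4f}]")
    ```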

  10. Vitamin B12 production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii: optimization of medium composition through statistical experimental designs.

    PubMed

    Kośmider, Alicja; Białas, Wojciech; Kubiak, Piotr; Drożdżyńska, Agnieszka; Czaczyk, Katarzyna

    2012-02-01

    A two-step statistical experimental design was employed to optimize the medium for vitamin B12 production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii. In the first step, using a Plackett-Burman design, five of 13 tested medium components (calcium pantothenate, NaH2PO4·2H2O, casein hydrolysate, glycerol and FeSO4·7H2O) were identified as factors having significant influence on vitamin production. In the second step, a central composite design was used to optimize levels of medium components selected in the first step. Valid statistical models describing the influence of significant factors on vitamin B12 production were established for each optimization phase. The optimized medium provided a 93% increase in final vitamin concentration compared to the original medium. Copyright © 2011 Elsevier Ltd. All rights reserved.
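
    The second (central composite) step amounts to fitting a second-order response surface and locating its stationary point. A small sketch with numpy, using made-up coded settings and titres for two of the significant factors; the data are illustrative, not the study's.

        import numpy as np

        # Hypothetical coded (-1..+1) settings for two factors and measured titres
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                      [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41],
                      [0, 0], [0, 0], [0, 0]])
        y = np.array([2.1, 2.6, 2.4, 3.0, 2.0, 2.9, 2.2, 2.7, 3.3, 3.2, 3.4])

        x1, x2 = X[:, 0], X[:, 1]
        # Design matrix of the full second-order model
        A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        b0, b1, b2, b12, b11, b22 = coef

        # Stationary point of the fitted quadratic surface: solve grad = 0
        H = np.array([[2 * b11, b12], [b12, 2 * b22]])
        opt = np.linalg.solve(H, -np.array([b1, b2]))
        print("fitted coefficients:", np.round(coef, 3))
        print("stationary point (coded units):", np.round(opt, 2))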

  11. Closed-Loop Optimal Control Implementations for Space Applications

    DTIC Science & Technology

    2016-12-01

    Through the analyses of a series of optimal control problems, several real-time optimal control algorithms are developed that continuously adapt to feedback on the...

  12. Optimizing Mass Spectrometry Analyses: A Tailored Review on the Utility of Design of Experiments.

    PubMed

    Hecht, Elizabeth S; Oberg, Ann L; Muddiman, David C

    2016-05-01

    Mass spectrometry (MS) has emerged as a tool that can analyze nearly all classes of molecules, with its scope rapidly expanding in the areas of post-translational modifications, MS instrumentation, and many others. Yet integration of novel analyte preparatory and purification methods with existing or novel mass spectrometers can introduce new challenges for MS sensitivity. The mechanisms that govern detection by MS are particularly complex and interdependent, including ionization efficiency, ion suppression, and transmission. Performance of both off-line and MS methods can be optimized separately or, when appropriate, simultaneously through statistical designs, broadly referred to as "design of experiments" (DOE). The following review provides a tutorial-like guide into the selection of DOE for MS experiments, the practices for modeling and optimization of response variables, and the available software tools that support DOE implementation in any laboratory. This review comes 3 years after the latest DOE review (Hibbert DB, 2012), which provided a comprehensive overview on the types of designs available and their statistical construction. Since that time, new classes of DOE, such as the definitive screening design, have emerged and new calls have been made for mass spectrometrists to adopt the practice. Rather than exhaustively cover all possible designs, we have highlighted the three most practical DOE classes available to mass spectrometrists. This review further differentiates itself by providing expert recommendations for experimental setup and defining DOE entirely in the context of three case-studies that highlight the utility of different designs to achieve different goals. A step-by-step tutorial is also provided.

  13. Optimizing Mass Spectrometry Analyses: A Tailored Review on the Utility of Design of Experiments

    PubMed Central

    Hecht, Elizabeth S.; Oberg, Ann L.; Muddiman, David

    2016-01-01

    SUMMARY Mass spectrometry (MS) has emerged as a tool that can analyze nearly all classes of molecules, with its scope rapidly expanding in the areas of post-translational modifications, MS instrumentation, and many others. Yet integration of novel analyte preparatory and purification methods with existing or novel mass spectrometers can introduce new challenges for MS sensitivity. The mechanisms that govern detection by MS are particularly complex and interdependent, including ionization efficiency, ion suppression, and transmission. Performance of both off-line and MS methods can be optimized separately or, when appropriate, simultaneously through statistical designs, broadly referred to as “design of experiments” (DOE). The following review provides a tutorial-like guide into the selection of DOE for MS experiments, the practices for modeling and optimization of response variables, and the available software tools that support DOE implementation in any laboratory. This review comes three years after the latest DOE review (Hibbert DB 2012), which provided a comprehensive overview on the types of designs available and their statistical construction. Since that time, new classes of DOE, such as the definitive screening design, have emerged and new calls have been made for mass spectrometrists to adopt the practice. Rather than exhaustively cover all possible designs, we have highlighted the three most practical DOE classes available to mass spectrometrists. This review further differentiates itself by providing expert recommendations for experimental setup and defining DOE entirely in the context of three case-studies that highlight the utility of different designs to achieve different goals. A step-by-step tutorial is also provided. PMID:26951559

  14. Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.

    PubMed

    Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone

    2017-12-26

    Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the new fragments case, but overall deep neural network models performed best, allowing reliable predictions also for the new static core and transformations scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high-quality predictions on various data sets and in different compound optimization scenarios.

  15. Phi Index: A New Metric to Test the Flush Early and Avoid the Rush Hypothesis

    PubMed Central

    Samia, Diogo S. M.; Blumstein, Daniel T.

    2014-01-01

    Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the “Flush Early and Avoid the Rush” (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis. PMID:25405872

  16. Phi index: a new metric to test the flush early and avoid the rush hypothesis.

    PubMed

    Samia, Diogo S M; Blumstein, Daniel T

    2014-01-01

    Optimal escape theory states that animals should counterbalance the costs and benefits of flight when escaping from a potential predator. However, in apparent contradiction with this well-established optimality model, birds and mammals generally initiate escape soon after beginning to monitor an approaching threat, a phenomenon codified as the "Flush Early and Avoid the Rush" (FEAR) hypothesis. Typically, the FEAR hypothesis is tested using correlational statistics and is supported when there is a strong relationship between the distance at which an individual first responds behaviorally to an approaching predator (alert distance, AD), and its flight initiation distance (the distance at which it flees the approaching predator, FID). However, such correlational statistics are both inadequate to analyze relationships constrained by an envelope (such as that in the AD-FID relationship) and are sensitive to outliers with high leverage, which can lead one to erroneous conclusions. To overcome these statistical concerns we develop the phi index (Φ), a distribution-free metric to evaluate the goodness of fit of a 1:1 relationship in a constraint envelope (the prediction of the FEAR hypothesis). Using both simulation and empirical data, we conclude that Φ is superior to traditional correlational analyses because it explicitly tests the FEAR prediction, is robust to outliers, and controls for the disproportionate influence of observations from large predictor values (caused by the constrained envelope in the AD-FID relationship). Importantly, by analyzing the empirical data we corroborate the strong effect that alertness has on flight as stated by the FEAR hypothesis.

  17. CPR methodology with new steady-state criterion and more accurate statistical treatment of channel bow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgartner, S.; Bieli, R.; Bergmann, U. C.

    2012-07-01

    An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an optimized CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set up for the analyses of Anticipated Operational Occurrences (AOOs) and accidents. In the Monte Carlo approach a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to all other uncertainties affecting CPR. Emphasis is put on quantifying the statistical distribution of channel bow throughout the core using measurement data. The optimized CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from the introduction at KKL. (authors)

  18. Correcting for Optimistic Prediction in Small Data Sets

    PubMed Central

    Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.

    2014-01-01

    The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
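
    For readers who want to try one of the unbiased approaches mentioned above, the following is a minimal sketch of Harrell-style bootstrap optimism correction of the C statistic (here the area under the ROC curve of a logistic model), using scikit-learn and entirely synthetic data; it illustrates the general recipe, not the authors' code.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)

        # Hypothetical small data set: 200 subjects, 5 predictors, ~12% events
        X = rng.normal(size=(200, 5))
        y = (rng.random(200) < 1 / (1 + np.exp(-(X[:, 0] - 2)))).astype(int)

        def apparent_c(X, y):
            model = LogisticRegression(max_iter=1000).fit(X, y)
            return model, roc_auc_score(y, model.predict_proba(X)[:, 1])

        model, c_app = apparent_c(X, y)

        # Bootstrap optimism: refit on each resample, then compare the resample
        # C statistic with the same model's C statistic on the original data.
        optimism = []
        for _ in range(200):
            idx = rng.integers(0, len(y), len(y))
            if len(np.unique(y[idx])) < 2:
                continue  # skip degenerate resamples containing a single class
            m, c_boot = apparent_c(X[idx], y[idx])
            c_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
            optimism.append(c_boot - c_orig)

        print("apparent C:", round(c_app, 3))
        print("optimism-corrected C:", round(c_app - float(np.mean(optimism)), 3))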

  19. Microarray-Based Comparative Genomic Hybridization Using Sex-Matched Reference DNA Provides Greater Sensitivity for Detection of Sex Chromosome Imbalances than Array-Comparative Genomic Hybridization with Sex-Mismatched Reference DNA

    PubMed Central

    Yatsenko, Svetlana A.; Shaw, Chad A.; Ou, Zhishuo; Pursley, Amber N.; Patel, Ankita; Bi, Weimin; Cheung, Sau Wai; Lupski, James R.; Chinault, A. Craig; Beaudet, Arthur L.

    2009-01-01

    In array-comparative genomic hybridization (array-CGH) experiments, the measurement of DNA copy number of sex chromosomal regions depends on the sex of the patient and the reference DNAs used. We evaluated the ability of bacterial artificial chromosome/P1-derived artificial chromosome and oligonucleotide array-CGH analyses to detect constitutional sex chromosome imbalances using sex-mismatched reference DNAs. Twenty-two samples with imbalances involving either the X or Y chromosome, including deletions, duplications, triplications, derivative or isodicentric chromosomes, and aneuploidy, were analyzed. Although concordant results were obtained for approximately one-half of the samples when using sex-mismatched and sex-matched reference DNAs, array-CGH analyses with sex-mismatched reference DNAs did not detect genomic imbalances that were detected using sex-matched reference DNAs in 6 of 22 patients. Small duplications and deletions of the X chromosome were most difficult to detect in female and male patients, respectively, when sex-mismatched reference DNAs were used. Sex-matched reference DNAs in array-CGH analyses provide optimal sensitivity and enable an automated statistical evaluation for the detection of sex chromosome imbalances when compared with an experimental design using sex-mismatched reference DNAs. Using sex-mismatched reference DNAs in array-CGH analyses may generate false-negative, false-positive, and ambiguous results for sex chromosome-specific probes, thus masking potential pathogenic genomic imbalances. Therefore, to optimize both detection of clinically relevant sex chromosome imbalances and ensure proper experimental performance, we suggest that alternative internal controls be developed and used instead of using sex-mismatched reference DNAs. PMID:19324990

  20. Quantitative analysis of trace levels of surface contamination by X-ray photoelectron spectroscopy Part I: statistical uncertainty near the detection limit.

    PubMed

    Hill, Shannon B; Faradzhev, Nadir S; Powell, Cedric J

    2017-12-01

    We discuss the problem of quantifying common sources of statistical uncertainties for analyses of trace levels of surface contamination using X-ray photoelectron spectroscopy. We examine the propagation of error for peak-area measurements using common forms of linear and polynomial background subtraction including the correlation of points used to determine both background and peak areas. This correlation has been neglected in previous analyses, but we show that it contributes significantly to the peak-area uncertainty near the detection limit. We introduce the concept of relative background subtraction variance (RBSV) which quantifies the uncertainty introduced by the method of background determination relative to the uncertainty of the background area itself. The uncertainties of the peak area and atomic concentration and of the detection limit are expressed using the RBSV, which separates the contributions from the acquisition parameters, the background-determination method, and the properties of the measured spectrum. These results are then combined to find acquisition strategies that minimize the total measurement time needed to achieve a desired detection limit or atomic-percentage uncertainty for a particular trace element. Minimization of data-acquisition time is important for samples that are sensitive to x-ray dose and also for laboratories that need to optimize throughput.

  1. Cyber Risk Management for Critical Infrastructure: A Risk Analysis Model and Three Case Studies.

    PubMed

    Paté-Cornell, M-Elisabeth; Kuypers, Marshall; Smith, Matthew; Keller, Philip

    2018-02-01

    Managing cyber security in an organization involves allocating the protection budget across a spectrum of possible options. This requires assessing the benefits and the costs of these options. The risk analyses presented here are statistical when relevant data are available, and system-based for high-consequence events that have not happened yet. This article presents, first, a general probabilistic risk analysis framework for cyber security in an organization to be specified. It then describes three examples of forward-looking analyses motivated by recent cyber attacks. The first one is the statistical analysis of an actual database, extended at the upper end of the loss distribution by a Bayesian analysis of possible, high-consequence attack scenarios that may happen in the future. The second is a systems analysis of cyber risks for a smart, connected electric grid, showing that there is an optimal level of connectivity. The third is an analysis of sequential decisions to upgrade the software of an existing cyber security system or to adopt a new one to stay ahead of adversaries trying to find their way in. The results are distributions of losses to cyber attacks, with and without some considered countermeasures in support of risk management decisions based both on past data and anticipated incidents. © 2017 Society for Risk Analysis.

  2. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics.

    PubMed

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotic behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.

  3. Portfolio optimization problem with nonidentical variances of asset returns using statistical mechanical informatics

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-12-01

    The portfolio optimization problem in which the variances of the return rates of assets are not identical is analyzed in this paper using the methodology of statistical mechanical informatics, specifically, replica analysis. We defined two characteristic quantities of an optimal portfolio, namely, minimal investment risk and investment concentration, in order to solve the portfolio optimization problem and analytically determined their asymptotic behaviors using replica analysis. Numerical experiments were also performed, and a comparison between the results of our simulation and those obtained via replica analysis validated our proposed method.
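
    The two quantities named in the abstract have simple finite-size counterparts that can be checked numerically. The sketch below uses synthetic returns and one common normalisation convention (these are assumptions, so only the structure matters): it computes the budget-constrained minimum-risk portfolio and its investment concentration.

        import numpy as np

        rng = np.random.default_rng(1)
        N, T = 50, 500                       # assets, return observations
        sigma = rng.uniform(0.5, 2.0, N)     # nonidentical return standard deviations
        returns = rng.normal(0.0, sigma, size=(T, N))

        C = np.cov(returns, rowvar=False)

        # Minimise w' C w subject to the budget constraint sum(w) = N.
        # The Lagrangian solution is w = N * C^{-1} 1 / (1' C^{-1} 1).
        ones = np.ones(N)
        Cinv_1 = np.linalg.solve(C, ones)
        w = N * Cinv_1 / (ones @ Cinv_1)

        min_risk = w @ C @ w / (2 * N)       # per-asset risk proxy (convention-dependent)
        concentration = (w @ w) / N          # investment concentration q_w
        print(round(min_risk, 4), round(concentration, 4))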

  4. Optimization of the transmission of observable expectation values and observable statistics in continuous-variable teleportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albano Farias, L.; Stephany, J.

    2010-12-15

    We analyze the statistics of observables in continuous-variable (CV) quantum teleportation in the formalism of the characteristic function. We derive expressions for average values of output-state observables, in particular, cumulants which are additive in terms of the input state and the resource of teleportation. Working with a general class of teleportation resources, the squeezed-bell-like states, which may be optimized in a free parameter for better teleportation performance, we discuss the relation between resources optimal for fidelity and those optimal for different observable averages. We obtain the values of the free parameter of the squeezed-bell-like states which optimize the central momenta and cumulants up to fourth order. For the cumulants the distortion between in and out states due to teleportation depends only on the resource. We obtain optimal parameters Δ(2)^opt and Δ(4)^opt for the second- and fourth-order cumulants, which do not depend on the squeezing of the resource. The second-order central momenta, which are equal to the second-order cumulants, and the photon number average are also optimized by the resource with Δ(2)^opt. We show that the optimal fidelity resource, which has been found previously to depend on the characteristics of input, approaches for high squeezing to the resource that optimizes the second-order momenta. A similar behavior is obtained for the resource that optimizes the photon statistics, which is treated here using the sum of the squared differences in photon probabilities of input versus output states as the distortion measure. This is interpreted naturally to mean that the distortions associated with second-order momenta dominate the behavior of the output state for large squeezing of the resource. Optimal fidelity resources and optimal photon statistics resources are compared, and it is shown that for mixtures of Fock states both resources are equivalent.

  5. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises

    PubMed Central

    Grama, Ion; Liu, Quansheng

    2017-01-01

    In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667

  6. Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.

    PubMed

    Jin, Qiyu; Grama, Ion; Liu, Quansheng

    2017-01-01

    In this paper we consider the problem of restoration of an image contaminated by a mixture of Gaussian and impulse noises. We propose a new statistic called ROADGI which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with the impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise.
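
    As background for the detector described above, the classical ROAD statistic that ROADGI builds on can be computed in a few lines; ROADGI itself, which adapts ROAD to the mixed Gaussian-plus-impulse setting, is not reproduced here. A small numpy sketch on a synthetic image:

        import numpy as np

        def road(image, m=4):
            """Rank-Ordered Absolute Differences: sum of the m smallest absolute
            differences between each pixel and its 8 neighbours (large values
            suggest impulse-corrupted pixels)."""
            h, w = image.shape
            padded = np.pad(image.astype(float), 1, mode="reflect")
            diffs = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                    diffs.append(np.abs(image - shifted))
            diffs = np.sort(np.stack(diffs, axis=0), axis=0)
            return diffs[:m].sum(axis=0)

        rng = np.random.default_rng(0)
        img = rng.normal(128, 5, size=(64, 64))
        img[10, 10] = 255                       # inject one impulse
        scores = road(img)
        print(scores[10, 10] > np.percentile(scores, 99))  # impulse stands out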

  7. Optimization and validation of CEDIA drugs of abuse immunoassay tests in serum on Hitachi 912.

    PubMed

    Kirschbaum, Katrin M; Musshoff, Frank; Schmithausen, Ricarda; Stockhausen, Sarah; Madea, Burkhard

    2011-10-10

    Due to the sensitive limits of detection of chromatographic methods and the low limit values for the screening of drugs under the terms of impairment in safe driving (§ 24a StVG, Street Traffic Law in Germany), preliminary immunoassay (IA) tests should also be able to detect low concentrations of legal and illegal drugs in serum in forensic cases. False negatives should be avoided, and the rate of false-positive samples should be kept low due to cost and time. An optimization of IA cutoff values and a validation of the assay are required for each laboratory. In a retrospective study, results for serum samples containing amphetamine, methylenedioxy derivatives, cannabinoids, benzodiazepines, cocaine (metabolites), methadone and opiates obtained with CEDIA drugs of abuse reagents on a Hitachi 912 autoanalyzer were compared with quantitative results of chromatographic methods (gas or liquid chromatography coupled with mass spectrometry (GC/MS or LC/MS)). First, sensitivity, specificity, positive and negative predictive values and overall misclassification rates were evaluated by contingency tables and compared to ROC analyses and Youden indices. Second, ideal cutoffs were calculated statistically on the basis of sensitivity and specificity as the decisive statistical criteria, with a focus on high sensitivity (low rates of false negatives), i.e. using the Youden index. Immunoassay (IA) and confirmatory results were available for 3014 blood samples. Sensitivity was 90% or more for nearly all analytes: amphetamines (IA cutoff 9.5 ng/ml), methylenedioxy derivatives (IA cutoff 5.5 ng/ml), cannabinoids (IA cutoff 14.5 ng/ml), benzodiazepines (IA cutoff >0 ng/ml). The opiate test showed a sensitivity of 86% for an IA cutoff value of >0 ng/ml. Values for specificity ranged between 33% (methadone, IA cutoff 10 ng/ml) and 90% (cocaine, IA cutoff 20 ng/ml). Cutoff values lower than those recommended by ROC analyses were chosen for most tests to decrease the rate of false negatives. The analyses enabled the definition of cutoff values with good sensitivity. Small rates of false positives can be accepted in forensic cases. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
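
    The cutoff-selection step can be reproduced in outline: sweep candidate cutoffs, compute sensitivity and specificity against the chromatographic confirmation, and pick the cutoff maximising the Youden index (J = sensitivity + specificity - 1). The data below are simulated, not the study's 3014 samples.

        import numpy as np

        # Hypothetical paired data: immunoassay signal and the chromatographic
        # (GC/MS or LC/MS) confirmation coded as 1 = positive, 0 = negative.
        rng = np.random.default_rng(2)
        truth = rng.integers(0, 2, 500)
        signal = np.where(truth == 1, rng.normal(30, 15, 500), rng.normal(5, 8, 500))
        signal = np.clip(signal, 0, None)

        def youden_cutoff(signal, truth, cutoffs):
            best = None
            for c in cutoffs:
                pred = signal >= c
                sens = np.mean(pred[truth == 1])
                spec = np.mean(~pred[truth == 0])
                j = sens + spec - 1              # Youden index
                if best is None or j > best[0]:
                    best = (j, c, sens, spec)
            return best

        j, cutoff, sens, spec = youden_cutoff(signal, truth, np.arange(0, 60, 0.5))
        print(f"cutoff={cutoff} J={j:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")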

  8. SU-F-T-187: Quantifying Normal Tissue Sparing with 4D Robust Optimization of Intensity Modulated Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newpower, M; Ge, S; Mohan, R

    Purpose: To report an approach to quantify the normal tissue sparing for 4D robustly-optimized versus PTV-optimized IMPT plans. Methods: We generated two sets of 90 DVHs from a patient’s 10-phase 4D CT set; one by conventional PTV-based optimization done in the Eclipse treatment planning system, and the other by an in-house robust optimization algorithm. The 90 DVHs were created for the following scenarios in each of the ten phases of the 4DCT: ± 5mm shift along x, y, z; ± 3.5% range uncertainty and a nominal scenario. A Matlab function written by Gay and Niemierko was modified to calculate EUD for each DVH for the following structures: esophagus, heart, ipsilateral lung and spinal cord. An F-test determined whether or not the variances of each structure’s DVHs were statistically different. Then a t-test determined if the average EUDs for each optimization algorithm were statistically significantly different. Results: T-test results showed each structure had a statistically significant difference in average EUD when comparing robust optimization versus PTV-based optimization. Under robust optimization all structures except the spinal cord received lower EUDs than PTV-based optimization. Using robust optimization the average EUDs decreased 1.45% for the esophagus, 1.54% for the heart and 5.45% for the ipsilateral lung. The average EUD to the spinal cord increased 24.86% but was still well below tolerance. Conclusion: This work has helped quantify a qualitative relationship noted earlier in our work: that robust optimization leads to plans with greater normal tissue sparing compared to PTV-based optimization. Except in the case of the spinal cord all structures received a lower EUD under robust optimization and these results are statistically significant. While the average EUD to the spinal cord increased to 25.06 Gy under robust optimization it is still well under the TD50 value of 66.5 Gy from Emami et al. Supported in part by the NCI U19 CA021239.
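
    The EUD values compared above come from DVHs via Niemierko's generalised EUD, the volume-weighted power mean of the dose bins; a compact sketch (with a hypothetical differential DVH, not the study's data):

        import numpy as np

        def gEUD(doses, volumes, a):
            """Generalised equivalent uniform dose (Niemierko): the volume-weighted
            power mean of the differential-DVH dose bins."""
            v = np.asarray(volumes, dtype=float)
            v = v / v.sum()                      # fractional volumes
            return float((v * np.asarray(doses, dtype=float) ** a).sum() ** (1.0 / a))

        # Hypothetical differential DVH for an ipsilateral lung (dose in Gy)
        doses = np.array([2.0, 10.0, 20.0, 40.0, 60.0])
        volumes = np.array([0.50, 0.25, 0.15, 0.07, 0.03])

        print(gEUD(doses, volumes, a=1.0))   # a = 1 reduces to the mean dose
        print(gEUD(doses, volumes, a=10.0))  # large a emphasises hot spots (serial organs)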

  9. Toward Enhancing the Enzymatic Activity of a Novel Fungal Polygalacturonase for Food Industry: Optimization and Biochemical Analyses.

    PubMed

    Shetaia, Yousseria M H; El-Baz, Ashraf F; ElMekawy, Ahmed M

    2017-08-11

    The review of literature and patents shows that enhancing PG production and activity is still required to fulfill the increasing demands. A dual optimization process, which involved a Plackett-Burman design (PBD), with seven factors, and response surface methodology, was applied to optimize the production of extracellular polygalacturonase (PG) enzyme produced by a novel strain of Aspergillus flavus isolated from rotten orange fruit. The fungal PG was purified and biochemically characterized. Three variables (harvesting time, pH and orange pomace concentration), which were verified to be significant by the PBD analysis, were comprehensively optimized via a Box-Behnken design. According to this optimization, the highest PG activity (4073 U/mL) was obtained under pH 7 after 48 h using 40 g/L orange pomace as a substrate, with a 51% enhancement in PG activity compared to the first PBD optimization step. The specific activity of the purified PG was 1608 U/mg with polygalacturonic acid and its molecular weight was 55 kDa. The optimum pH was 5 with relative thermal stability (80%) at 50°C after 30 min. The PG activity improved in the presence of Cu2+ and Ca2+, while Ba2+, Fe2+ and Zn2+ greatly inhibited the enzyme activity. The apparent Km and Vmax values were 0.8 mg/mL and 2000 µmol/min, respectively. This study is a starting point for initial research in the field of optimization and characterization of A. flavus PG. The statistical optimization of A. flavus PG and its biochemical characterization clearly revealed that this fungal strain can be a potential producer of PG, which has a wide range of industrial applications. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. Nonlinear Curve-Fitting Program

    NASA Technical Reports Server (NTRS)

    Everhart, Joel L.; Badavi, Forooz F.

    1989-01-01

    Nonlinear optimization algorithm helps in finding best-fit curve. Nonlinear Curve Fitting Program, NLINEAR, interactive curve-fitting routine based on description of quadratic expansion of X² statistic. Utilizes nonlinear optimization algorithm calculating best statistically weighted values of parameters of fitting function and X² minimized. Provides user with such statistical information as goodness of fit and estimated values of parameters producing highest degree of correlation between experimental data and mathematical model. Written in FORTRAN 77.
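
    NLINEAR itself is a FORTRAN 77 routine, but the underlying idea of a statistically weighted chi-square fit maps directly onto modern tools; a sketch with scipy.optimize.curve_fit on a made-up exponential model (the model and data are illustrative assumptions):

        import numpy as np
        from scipy.optimize import curve_fit

        def model(x, amplitude, rate, offset):
            # Hypothetical decaying-exponential fitting function
            return amplitude * np.exp(-rate * x) + offset

        rng = np.random.default_rng(3)
        x = np.linspace(0, 10, 40)
        sigma = np.full_like(x, 0.05)                 # per-point uncertainties
        y = model(x, 2.0, 0.7, 0.3) + rng.normal(0, sigma)

        popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0],
                               sigma=sigma, absolute_sigma=True)
        residuals = (y - model(x, *popt)) / sigma
        chi2 = float(residuals @ residuals)
        print("parameters:", np.round(popt, 3))
        print("1-sigma uncertainties:", np.round(np.sqrt(np.diag(pcov)), 3))
        print("chi-square / dof:", round(chi2 / (len(x) - len(popt)), 2))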

  11. The impact on midlevel vision of statistically optimal divisive normalization in V1.

    PubMed

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-07-15

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
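
    For concreteness, the canonical (non-image-adaptive) form of divisive normalization used as one of the comparison models can be written in a few lines; the statistically optimal variant studied in the paper additionally gates the surround pool by inferred image homogeneity and is not reproduced here. The uniform pool weights below are a placeholder assumption.

        import numpy as np

        def divisive_normalization(drives, weights, sigma=1.0):
            """Canonical divisive normalization: each unit's squared filter drive is
            divided by a weighted sum of the squared drives in its normalization pool."""
            d2 = np.asarray(drives, dtype=float) ** 2
            pool = weights @ d2                      # per-unit normalization signal
            return d2 / (sigma ** 2 + pool)

        rng = np.random.default_rng(4)
        n_units = 8
        drives = rng.normal(size=n_units)
        weights = np.full((n_units, n_units), 1.0 / n_units)   # uniform surround pool

        print(np.round(divisive_normalization(drives, weights), 3))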

  12. Design of order statistics filters using feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Maslennikova, Yu. S.; Bochkarev, V. V.

    2016-08-01

    In recent years significant progress has been made in the development of nonlinear data processing techniques. Such techniques are widely used in digital data filtering and image enhancement. Many of the most effective nonlinear filters are based on order statistics. The widely used median filter is the best-known order-statistic filter. A generalized form of these filters can be formulated based on Lloyd's statistics. Filters based on order statistics have excellent robustness properties in the presence of impulsive noise. In this paper, we present a special approach for the synthesis of order-statistics filters using artificial neural networks. Optimal Lloyd's statistics are used for selecting the initial weights of the neural network. The adaptive properties of neural networks provide opportunities to optimize order-statistics filters for data with asymmetric distribution functions. Different examples demonstrate the properties and performance of the presented approach.
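
    The family of filters being learned here generalises the median filter: sort the samples in each window and take a weighted combination of the order statistics (an L-filter), the weights being what the Lloyd-initialised network adapts. A plain numpy sketch of the fixed-weight case:

        import numpy as np

        def l_filter(signal, weights):
            """Order-statistic (L-) filter: sort each sliding window and take a
            weighted sum of the order statistics. A weight vector that is all zeros
            except a one in the middle position reproduces the median filter."""
            w = np.asarray(weights, dtype=float)
            n = len(w)
            half = n // 2
            padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
            out = np.empty(len(signal))
            for i in range(len(signal)):
                window = np.sort(padded[i:i + n])
                out[i] = w @ window
            return out

        x = np.array([1.0, 1.2, 0.9, 9.0, 1.1, 1.0, 0.8])   # one impulse at index 3
        median_weights = [0, 0, 1, 0, 0]
        print(l_filter(x, median_weights))                   # impulse removed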

  13. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions including exponential loss, the optimal ranking function can be represented as a ratio of weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods such as proportional odds model in statistics with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and preference learning. We illustrate our findings with simulation study and real data analysis.

  14. Optimization of extraction and antioxidant activity of polysaccharides from Salvia miltiorrhiza Bunge residue.

    PubMed

    Jiang, Yuanyuan; Wang, Long; Zhang, Li; Wang, Tao; Zhou, Yonghong; Ding, Chunbang; Yang, Ruiwu; Wang, Xiaoli; Yu, Lin

    2015-08-01

    In this study, the process of extracting polysaccharides from Salvia miltiorrhiza Bunge residue was optimized by using a Box-Behnken design. Statistical analysis of the results showed that the linear and quadratic terms of the three variables of the extraction process had significant effects. The optimal conditions are as follows: extraction time of 2.6 h, extraction temperature of 89 °C, and ratio of water to raw material of 32 mL/g. Moreover, a new polysaccharide with antioxidant activity [i.e., SMWP-1 (∼5.27×10^5 Da)] was isolated from S. miltiorrhiza residue. The carbohydrate, uronic acid, and protein contents of SMWP-1 were 90.11%, 0.13%, and 0.53%, respectively. SMWP-1 is composed of glucose, xylose, mannose, and galactose. The preliminary structural characterization of SMWP-1 was determined via Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM) analyses. This polysaccharide exhibited strong reducing power and free-radical scavenging activities in vitro against 2,2-diphenyl-1-picrylhydrazyl, superoxide anion, and hydroxyl radicals. Therefore, SMWP-1 can be investigated further as a novel natural antioxidant. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Optimization and physicochemical characterization of a triamcinolone acetonide-loaded NLC for ocular antiangiogenic applications.

    PubMed

    Araújo, J; Gonzalez-Mira, E; Egea, M A; Garcia, M L; Souto, E B

    2010-06-30

    The purpose of this study was to develop a novel nanostructured lipid carrier (NLC) for the intravitreal-targeting delivery of triamcinolone acetonide (TA) by direct ocular instillation. A five-level central composite rotatable design was used to study the influence of four different variables on the physicochemical characteristics of NLCs. The analysis of variance (ANOVA) statistical test was used to assess the optimization of NLC production parameters. The systems were produced by high pressure homogenization using Precirol ATO5 and squalene as solid and liquid lipids, respectively, and Lutrol F68 as surfactant. Homogenization at 600 bar for 3 cycles of the optimized formulation resulted in the production of small NLC (mean diameter < 200 nm) with a homogeneous particle size distribution (polydispersity index (PI) approximately 0.1), a negatively charged surface (approximately |45| mV) and high entrapment efficiency (approximately 95%). Surface morphology was assessed by SEM, which revealed a fairly spherical shape. DSC, WAXS and FT-IR analyses confirmed that TA was mostly entrapped into the NLC, characterized by an amorphous matrix. The in vivo Draize test showed no signs of ocular toxicity. 2010 Elsevier B.V. All rights reserved.

  16. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that incorporates non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452

  17. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that incorporates non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
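
    A simplified sketch of the same ingredients (discretised Laplace kernel, non-negativity constraints, and an L1 penalty), using a bound-constrained quasi-Newton solver instead of the PDCO primal-dual solver used in the article; for f >= 0 the L1 term reduces to a linear term, so the objective stays smooth. The decay data are synthetic.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical relaxation decay: signal(t) = sum_j f_j * exp(-t / T2_j) + noise
        rng = np.random.default_rng(5)
        t = np.linspace(0.001, 1.0, 200)
        T2_grid = np.logspace(-3, 0.5, 80)
        K = np.exp(-t[:, None] / T2_grid[None, :])          # Laplace kernel matrix

        f_true = np.zeros(len(T2_grid))
        f_true[[20, 55]] = [1.0, 0.6]                        # two relaxation components
        signal = K @ f_true + rng.normal(0, 0.01, len(t))

        lam = 0.05

        def objective(f):
            r = K @ f - signal
            return 0.5 * r @ r + lam * f.sum()               # L1 term = sum(f) when f >= 0

        def gradient(f):
            return K.T @ (K @ f - signal) + lam

        res = minimize(objective, np.zeros(len(T2_grid)), jac=gradient,
                       method="L-BFGS-B", bounds=[(0, None)] * len(T2_grid))
        print("recovered peaks near grid indices:", np.argsort(res.x)[-2:])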

  18. Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Sakata, Ayaka; Xu, Yingying

    2018-03-01

    We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data. We propose an approximate message passing (AMP) algorithm considering nonconvex regularization, namely SCAD-AMP, and analytically show that the stability condition corresponds to the de Almeida-Thouless condition in the spin glass literature. Through asymptotic analysis, we show the correspondence between the density evolution of SCAD-AMP and the replica symmetric (RS) solution. Numerical experiments confirm that for a sufficiently large system size, SCAD-AMP achieves the optimal performance predicted by the replica method. Through replica analysis, a phase transition between the RS and replica symmetry breaking (RSB) regions is found in the parameter space of SCAD. The appearance of an RS region for a nonconvex penalty is a significant advantage, as it indicates a region where the landscape of the optimization problem is smooth. Furthermore, we analytically show that the statistical representation performance of the SCAD penalty is better than that of the ℓ1-based approach.
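
    For reference, the SCAD penalty of Fan and Li (2001) that the AMP algorithm handles is piecewise: linear near zero, a quadratic blend, then constant; a small numpy implementation (the a = 3.7 default is the conventional choice, not a value taken from this paper):

        import numpy as np

        def scad_penalty(x, lam, a=3.7):
            """SCAD penalty: linear near zero, quadratic blending, then constant,
            so large coefficients are not over-shrunk."""
            absx = np.abs(np.asarray(x, dtype=float))
            return np.where(
                absx <= lam,
                lam * absx,
                np.where(
                    absx <= a * lam,
                    (2 * a * lam * absx - absx**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2,
                ),
            )

        x = np.linspace(-4, 4, 9)
        print(np.round(scad_penalty(x, lam=1.0), 3))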

  19. Assessment of water quality monitoring for the optimal sensor placement in lake Yahuarcocha using pattern recognition techniques and geographical information systems.

    PubMed

    Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo

    2018-03-30

    Pollution and the eutrophication process are increasing in lake Yahuarcocha and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
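
    The cluster-analysis step can be illustrated with scipy's hierarchical clustering: group the monitoring sites by their standardized parameter profiles and keep one representative sensor per group. The 7 x 14 data matrix below is synthetic and merely stands in for the study's sites and parameters.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Hypothetical standardized water-quality matrix: 7 sites x 14 parameters
        rng = np.random.default_rng(6)
        sites = rng.normal(size=(7, 14))
        sites[3:] += 1.5          # make the last four sites form a second group

        Z = linkage(sites, method="ward")          # agglomerative clustering of sites
        groups = fcluster(Z, t=2, criterion="maxclust")
        print("site groups:", groups)
        # One representative sensor per group is a simple proxy for "optimal" placement.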

  20. Microscopy as a statistical, Rényi-Ulam, half-lie game: a new heuristic search strategy to accelerate imaging.

    PubMed

    Drumm, Daniel W; Greentree, Andrew D

    2017-11-07

    Finding a fluorescent target in a biological environment is a common and pressing microscopy problem. This task is formally analogous to the canonical search problem. In ideal (noise-free, truthful) search problems, the well-known binary search is optimal. The case of half-lies, where one of two responses to a search query may be deceptive, introduces a richer, Rényi-Ulam problem and is particularly relevant to practical microscopy. We analyse microscopy in the contexts of Rényi-Ulam games and half-lies, developing a new family of heuristics. We show the cost of insisting on verification by positive result in search algorithms; for the zero-half-lie case bisectioning with verification incurs a 50% penalty in the average number of queries required. The optimal partitioning of search spaces directly following verification in the presence of random half-lies is determined. Trisectioning with verification is shown to be the most efficient heuristic of the family in a majority of cases.
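
    The verification penalty quoted above can be illustrated with a toy, noise-free bisection in which every positive response is asked a second time before it is trusted; this sketch shows the cost of verification only, not the heuristics developed in the paper.

        def bisection_search(target, low, high, tol=1e-3, verify=True):
            """Noise-free bisection for a point target in [low, high]; with verify=True
            every positive ('target is in the left half') response is asked again,
            a simple guard against a single half-lie."""
            queries = 0

            def in_left(mid):
                nonlocal queries
                queries += 1
                return target <= mid

            while high - low > tol:
                mid = (low + high) / 2
                response = in_left(mid)
                if verify and response:
                    response = in_left(mid)   # repeat the query to confirm
                if response:
                    high = mid
                else:
                    low = mid
            return (low + high) / 2, queries

        print(bisection_search(0.3721, 0.0, 1.0, verify=False))
        print(bisection_search(0.3721, 0.0, 1.0, verify=True))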

  1. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
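
    In the same spirit, a small Python analogue of such a "what if" analysis holds an observed effect size fixed and recomputes the two-sample t-test p-value for hypothetical sample sizes (Excel or R versions would follow the same arithmetic):

        import numpy as np
        from scipy import stats

        # Hold the observed effect fixed (Cohen's d = 0.30) and ask how the
        # two-sample t-test p-value would change if the sample size were larger.
        d = 0.30
        for n_per_group in (10, 25, 50, 100, 200, 400):
            t = d * np.sqrt(n_per_group / 2)       # t statistic implied by d and n
            df = 2 * n_per_group - 2
            p = 2 * stats.t.sf(abs(t), df)
            print(f"n/group={n_per_group:4d}  t={t:5.2f}  p={p:.4f}")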

  2. Histological Validity and Clinical Evidence for Use of Fractional Lasers for Acne Scars

    PubMed Central

    Sardana, Kabir; Garg, Vijay K; Arora, Pooja; Khurana, Nita

    2012-01-01

    Though fractional lasers are widely used for acne scars, very few clinical or histological data based on objective clinical assessment or on the depth of penetration of lasers in in vivo facial tissue are available. The depth is probably the most important aspect that predicts the improvement in acne scars, but the studies on histology have little uniformity in terms of the substrate (tissue) used, processing and stains used. The variability of the laser settings (dose, pulses and density) makes comparison of the studies difficult. It is easier to compare the end results, histological depth and clinical results. We analysed all the published clinical and histological studies on fractional lasers in acne scars and evaluated the data, both clinical and histological, with statistical software to decipher their significance. On statistical analysis, the depth was found to be variable, with the 1550-nm lasers achieving a depth of 679 μm versus the 10,600 nm (895 μm) and 2940 nm (837 μm) lasers. The mean depth of penetration (in μm) in relation to the energy used, in millijoules (mJ), varies depending on the laser studied. This was statistically found to be 12.9–28.5 for Er:glass, 3–54.38 for Er:YAG and 6.28–53.66 for CO2. The subjective clinical improvement was a modest 46%. The lack of objective evaluation of clinical improvement and scar-specific assessment, together with the lack of appropriate in vivo studies, is a case for combining conventional modalities like subcision, punch excision and needling with fractional lasers to achieve optimal results. PMID:23060702

  3. Optimal spectral tracking--adapting to dynamic regime change.

    PubMed

    Brittain, John-Stuart; Halliday, David M

    2011-01-30

    Real world data do not always obey the statistical restraints imposed upon them by sophisticated analysis techniques. In spectral analysis for instance, an ergodic process--the interchangeability of temporal for spatial averaging--is assumed for a repeat-trial design. Many evolutionary scenarios, such as learning and motor consolidation, do not conform to such linear behaviour and should be approached from a more flexible perspective. To this end we previously introduced the method of optimal spectral tracking (OST) in the study of trial-varying parameters. In this extension to our work we modify the OST routines to provide an adaptive implementation capable of reacting to dynamic transitions in the underlying system state. In so doing, we generalise our approach to characterise both slow-varying and rapid fluctuations in time-series, simultaneously providing a metric of system stability. The approach is first applied to a surrogate dataset and compared to both our original non-adaptive solution and spectrogram approaches. The adaptive OST is seen to display fast convergence and desirable statistical properties. All three approaches are then applied to a neurophysiological recording obtained during a study on anaesthetic monitoring. Local field potentials acquired from the posterior hypothalamic region of a deep brain stimulation patient undergoing anaesthesia were analysed. The characterisation of features such as response delay, time-to-peak and modulation brevity are considered. Copyright © 2010 Elsevier B.V. All rights reserved.

  4. A Data-Driven Solution for Performance Improvement

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Marketed as the "Software of the Future," Optimal Engineering Systems' P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user-friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.

  5. On optimal current patterns for electrical impedance tomography.

    PubMed

    Demidenko, Eugene; Hartov, Alex; Soni, Nirmal; Paulsen, Keith D

    2005-02-01

    We develop a statistical criterion for optimal patterns in planar circular electrical impedance tomography. These patterns minimize the total variance of the estimation for the resistance or conductance matrix. It is shown that trigonometric patterns (Isaacson, 1986), originally derived from the concept of distinguishability, are a special case of our optimal statistical patterns. New optimal random patterns are introduced. Recovering the electrical properties of the measured body is greatly simplified when optimal patterns are used. The Neumann-to-Dirichlet map and the optimal patterns are derived for a homogeneous medium with an arbitrary distribution of the electrodes on the periphery. As a special case, optimal patterns are developed for a practical EIT system with a finite number of electrodes. For a general nonhomogeneous medium, with no a priori restriction, the optimal patterns for the resistance and conductance matrix are the same. However, for a homogeneous medium, the best current pattern is the worst voltage pattern and vice versa. We study the effect of the number and the width of the electrodes on the estimate of resistivity and conductivity in a homogeneous medium. We confirm experimentally that the optimal patterns produce minimum conductivity variance in a homogeneous medium. Our statistical model is able to discriminate between a homogeneous agar phantom and one with a 2 mm air hole with error probability (p-value) 1/1000.
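
    The trigonometric patterns referred to above (Isaacson, 1986) are simple to construct: pattern k drives electrode l at angle theta_l with cos(k*theta_l), plus the corresponding sine patterns, giving L-1 independent, zero-net-current patterns for L electrodes. A short numpy sketch:

        import numpy as np

        def trigonometric_patterns(n_electrodes):
            """Trigonometric current patterns for a circular EIT electrode array:
            cosine and sine patterns of increasing spatial frequency, each summing
            to zero net injected current."""
            theta = 2 * np.pi * np.arange(n_electrodes) / n_electrodes
            patterns = []
            for k in range(1, n_electrodes // 2 + 1):
                patterns.append(np.cos(k * theta))
                if k < n_electrodes // 2:
                    patterns.append(np.sin(k * theta))
            return np.array(patterns)

        P = trigonometric_patterns(16)
        print(P.shape)                          # (15, 16) independent patterns
        print(np.allclose(P.sum(axis=1), 0))    # each pattern injects zero net current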

  6. Properties of nanocrystalline Si layers embedded in structure of solar cell

    NASA Astrophysics Data System (ADS)

    Jurečka, Stanislav; Imamura, Kentaro; Matsumoto, Taketoshi; Kobayashi, Hikaru

    2017-12-01

    Suppression of spectral reflectance from the surface of a solar cell is necessary for achieving a high energy conversion efficiency. We developed a simple method for forming nanocrystalline layers with ultralow reflectance in a broad range of wavelengths. The method is based on metal-assisted etching of the silicon surface. In this work, we prepared Si solar cell structures with embedded nanocrystalline layers. The microstructure of the embedded layer depends on the etching conditions. We examined the microstructure of the etched layers with a transmission electron microscope and analysed the experimental images by statistical and Fourier methods. The obtained results provide information on the applied treatment operations and can be used to optimize the solar cell forming procedure.

  7. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 2: clinical trial results.

    PubMed

    Chenel, Marylore; Bouzom, François; Cazade, Fanny; Ogungbenro, Kayode; Aarons, Leon; Mentré, France

    2008-12-01

    The aim was to compare results of population PK analyses obtained with a full empirical design (FD) and an optimal sparse design (MD) in a Drug-Drug Interaction (DDI) study aiming to evaluate the potential CYP3A4 inhibitory effect of a drug in development, SX, on a reference substrate, midazolam (MDZ). A secondary aim was to evaluate the interaction of SX on MDZ in the in vivo study. To compare designs, real data were analysed by population PK modelling using either FD or MD with NONMEM FOCEI for SX and with NONMEM FOCEI and MONOLIX SAEM for MDZ. When applicable, a Wald test was performed to compare model parameter estimates, such as apparent clearance (CL/F), across designs. To conclude on the potential interaction of SX on MDZ PK, a Student paired test was applied to compare the individual PK parameters (i.e. log(AUC) and log(Cmax)) obtained either by a non-compartmental approach (NCA) using FD or from empirical Bayes estimates (EBE) obtained after fitting the model separately on each treatment group using either FD or MD. For SX, whatever the design, CL/F was well estimated and no statistical differences were found between CL/F estimated values obtained with FD (CL/F = 8.2 l/h) and MD (CL/F = 8.2 l/h). For MDZ, only MONOLIX was able to estimate CL/F and to provide its standard error of estimation with MD. With MONOLIX, whatever the design and the administration setting, MDZ CL/F was well estimated and there were no statistical differences between CL/F estimated values obtained with FD (72 l/h and 40 l/h for MDZ alone and for MDZ with SX, respectively) and MD (77 l/h and 45 l/h for MDZ alone and for MDZ with SX, respectively). Whatever the approach, NCA or population PK modelling, and for the latter approach, whatever the design, MD or FD, comparison tests showed that there was a statistical difference (P < 0.0001) between individual MDZ log(AUC) obtained after MDZ administration alone and co-administered with SX. Regarding Cmax, there was a statistical difference (P < 0.05) between individual MDZ log(Cmax) obtained under the 2 administration settings in all cases, except with the sparse design with MONOLIX. However, the effect on Cmax was small. Finally, SX was shown to be a moderate CYP3A4 inhibitor, which at therapeutic doses increased MDZ exposure by a factor of 2 on average and had almost no effect on Cmax. The optimal sparse design enabled the estimation of CL/F of a CYP3A4 substrate and inhibitor when co-administered together and showed the interaction, leading to the same conclusion as the full empirical design.

  8. Drug-drug interaction predictions with PBPK models and optimal multiresponse sampling time designs: application to midazolam and a phase I compound. Part 2: clinical trial results

    PubMed Central

    Chenel, Marylore; Bouzom, François; Cazade, Fanny; Ogungbenro, Kayode; Aarons, Leon; Mentré, France

    2008-01-01

    Purpose To compare results of population PK analyses obtained with a full empirical design (FD) and an optimal sparse design (MD) in a Drug-Drug Interaction (DDI) study aiming to evaluate the potential CYP3A4 inhibitory effect of a drug in development, SX, on a reference substrate, midazolam (MDZ). A secondary aim was to evaluate the interaction of SX on MDZ in the in vivo study. Methods To compare designs, real data were analysed by population PK modelling using either FD or MD with NONMEM FOCEI for SX and with NONMEM FOCEI and MONOLIX SAEM for MDZ. When applicable, a Wald test was performed to compare model parameter estimates, such as apparent clearance (CL/F), across designs. To conclude on the potential interaction of SX on MDZ PK, a paired Student test was applied to compare the individual PK parameters (i.e. log(AUC) and log(Cmax)) obtained either by a non-compartmental approach (NCA) using FD or from empirical Bayes estimates (EBE) obtained after fitting the model separately on each treatment group using either FD or MD. Results For SX, whatever the design, CL/F was well estimated and no statistical differences were found between CL/F estimated values obtained with FD (CL/F = 8.2 L/h) and MD (CL/F = 8.2 L/h). For MDZ, only MONOLIX was able to estimate CL/F and to provide its standard error of estimation with MD. With MONOLIX, whatever the design and the administration setting, MDZ CL/F was well estimated and there were no statistical differences between CL/F estimated values obtained with FD (72 L/h and 40 L/h for MDZ alone and for MDZ with SX, respectively) and MD (77 L/h and 45 L/h for MDZ alone and for MDZ with SX, respectively). Whatever the approach, NCA or population PK modelling, and for the latter approach, whatever the design, MD or FD, comparison tests showed that there was a statistical difference (p<0.0001) between individual MDZ log(AUC) obtained after MDZ administration alone and co-administered with SX. Regarding Cmax, there was a statistical difference (p<0.05) between individual MDZ log(Cmax) obtained under the 2 administration settings in all cases, except with the sparse design with MONOLIX. However, the effect on Cmax was small. Finally, SX was shown to be a moderate CYP3A4 inhibitor, which at therapeutic doses increased MDZ exposure by a factor of 2 on average and had almost no effect on the Cmax. Conclusion The optimal sparse design enabled the estimation of the CL/F of a CYP3A4 substrate and inhibitor when co-administered, and demonstrated the interaction, leading to the same conclusion as the full empirical design. PMID:19130187

  9. Heterogeneity in soil water and light environments and dispersal limitation: what facilitates tree species coexistence in a temperate forest?

    PubMed

    Masaki, T; Hata, S; Ide, Y

    2015-03-01

    In the present study, we analysed the habitat association of tree species in an old-growth temperate forest across all life stages to test theories on the coexistence of tree species in forest communities. An inventory of trees was carried out in a 6-ha plot in Ogawa Forest Reserve for adults, juveniles, saplings and seedlings. Volumetric soil water content (SMC) and light levels were measured in 10-m grids. Relationships between the actual number of stems and environmental variables were determined for 35 major tree species, and the spatial correlations within and among species were analysed. The light level had no statistically significant effect on the distribution of saplings and seedlings of any species. In contrast, most species had specific optimal values along the SMC gradient. The optimal values were almost identical in earlier life stages, but were more variable among species in later life stages. However, no effective niche partitioning among the species was apparent even at the adult stage. Furthermore, the results of spatial analyses suggested that dispersal limitation was not sufficient to mitigate competition between species. This might result from well-scattered seed distribution via wind and bird dispersal, as well as conspecific density-dependent mortality of seeds and seedlings. Thus, both niche partitioning and dispersal limitation appeared less important for facilitating coexistence of species within this forest than expected in tropical forests. The tree species assembly in this temperate forest might be controlled through a neutral process at the spatial scale tested in this study. © 2014 German Botanical Society and The Royal Botanical Society of the Netherlands.

  10. A New Combinatorial Optimization Approach for Integrated Feature Selection Using Different Datasets: A Prostate Cancer Transcriptomic Study

    PubMed Central

    Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo

    2015-01-01

    Background The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics. Methods We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone. Results Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884

  11. A software platform for statistical evaluation of patient respiratory patterns in radiation therapy.

    PubMed

    Dunn, Leon; Kenny, John

    2017-10-01

    The aim of this work was to design and evaluate a software tool for analysis of a patient's respiration, with the goal of optimizing the effectiveness of motion management techniques during radiotherapy imaging and treatment. A software tool was developed to analyse patient respiratory data files (.vxp files) created by the Varian Real-Time Position Management (RPM) system. The software, called RespAnalysis, was created in MATLAB and provides four modules, one each for determining respiration characteristics, providing breathing coaching (biofeedback training), comparing pre- and post-training characteristics and performing a fraction-by-fraction assessment. The modules analyse respiratory traces to determine signal characteristics and specifically use a Sample Entropy algorithm as the key means to quantify breathing irregularity. Simulated respiratory signals, as well as 91 patient RPM traces, were analysed with RespAnalysis to test the viability of using the Sample Entropy for predicting breathing regularity. Retrospective assessment of patient data demonstrated that the Sample Entropy metric was a predictor of periodic irregularity in respiration data; however, it was found to be insensitive to amplitude variation. Additional waveform statistics assessing the distribution of signal amplitudes over time, coupled with the Sample Entropy method, were found to be useful in assessing breathing regularity. The RespAnalysis software tool presented in this work uses the Sample Entropy method to analyse patient respiratory data recorded for motion management purposes in radiation therapy. This is applicable during treatment simulation and during subsequent treatment fractions, providing a way to quantify breathing irregularity, as well as assess the need for breathing coaching. It was demonstrated that the Sample Entropy metric was correlated with the irregularity of the patient's respiratory motion in terms of periodicity, whilst other metrics, such as percentage deviation of inhale/exhale peak positions, provided insight into respiratory amplitude regularity. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
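
    The key metric in this record, Sample Entropy, can be sketched with a standard SampEn(m, r) implementation. This is the textbook formulation, not the RespAnalysis code (which is MATLAB-based); the example signals are synthetic.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Standard SampEn(m, r) of a 1-D signal; r is r_factor * std(x)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        n = len(x)

        def count_matches(m):
            # All overlapping templates of length m; count pairs whose
            # Chebyshev distance is below r (self-matches excluded).
            templates = np.array([x[i:i + m] for i in range(n - m)])
            count = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(d < r)
            return count

        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    # A regular breathing-like trace has lower SampEn than an irregular one
    t = np.arange(0, 60, 0.1)
    regular = np.sin(2 * np.pi * t / 4)
    irregular = regular + 0.4 * np.random.randn(t.size)
    print(sample_entropy(regular), sample_entropy(irregular))
    ```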

  12. Sb2Te3 and Its Superlattices: Optimization by Statistical Design.

    PubMed

    Behera, Jitendra K; Zhou, Xilin; Ranjan, Alok; Simpson, Robert E

    2018-05-02

    The objective of this work is to demonstrate the usefulness of fractional factorial design for optimizing the crystal quality of chalcogenide van der Waals (vdW) crystals. We statistically analyze the growth parameters of highly c-axis-oriented Sb2Te3 crystals and Sb2Te3-GeTe phase change vdW heterostructured superlattices. The statistical significance of the growth parameters of temperature, pressure, power, buffer materials, and buffer layer thickness was determined by fractional factorial design and response surface analysis. Temperature, pressure, power, and their second-order interactions are the major factors that significantly influence the quality of the crystals. Additionally, using tungsten rather than molybdenum as a buffer layer significantly enhances the crystal quality. Fractional factorial design minimizes the number of experiments that are necessary to find the optimal growth conditions, resulting in an order of magnitude improvement in the crystal quality. We highlight that statistical design of experiment methods, which are more commonly used in product design, should be considered more broadly by those designing and optimizing materials.
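
    As a loose illustration of the design-of-experiments idea (the exact design, levels and generator used in the study are not given in the abstract), a two-level half-fraction for five growth factors can be generated as follows; the factor names follow the abstract and the generator is an assumption.

    ```python
    # Minimal sketch of a two-level half-fraction (2^(5-1)) design, assuming the
    # generator E = ABCD; the study's actual design is not specified here.
    from itertools import product

    factors = ["temperature", "pressure", "power", "buffer_thickness", "buffer_material"]

    runs = []
    for a, b, c, d in product([-1, 1], repeat=4):
        e = a * b * c * d                      # defining relation I = ABCDE
        runs.append(dict(zip(factors, (a, b, c, d, e))))

    print(len(runs), "runs instead of", 2 ** len(factors))   # 16 instead of 32
    for r in runs[:4]:
        print(r)
    ```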

  13. Association analysis of multiple traits by an approach of combining P values.

    PubMed

    Chen, Lili; Wang, Yong; Zhou, Yajing

    2018-03-01

    Increasing evidence shows that one variant can affect multiple traits, which is a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most of these methods are suitable for detecting common variants associated with multiple traits. However, because of the low minor allele frequencies of rare variants, these methods are not optimal for rare-variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA), originally developed for a single trait, to test the association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test the association of each rare variant with the multiple traits and obtain the P value of the single-variant test. Further, we take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to different directions of effect of the causal variants.
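
    A hedged sketch of the kind of statistic described: a weighted combination of per-variant P values with a truncation threshold. The exact ADA weighting and truncation rules are not reproduced here, and in practice the null distribution of such a statistic would be obtained by permutation.

    ```python
    import numpy as np

    def combined_stat(pvalues, weights=None, truncation=0.05):
        """Weighted combination of per-variant P values (sketch, not the exact ADA rule):
        sum of -w_i * log(p_i) over variants with p_i below the truncation threshold."""
        p = np.asarray(pvalues, dtype=float)
        w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
        keep = p <= truncation                 # focus on the most promising variants
        if not keep.any():
            return 0.0
        return float(np.sum(-w[keep] * np.log(p[keep])))

    # Significance would be assessed by permuting trait values and recomputing
    # the per-variant P values; here we only evaluate the statistic once.
    print(combined_stat([0.01, 0.2, 0.03, 0.6], weights=[1.0, 0.5, 2.0, 1.0]))
    ```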

  14. Statistical analysis of the factors that influenced the mechanical properties improvement of cassava starch films

    NASA Astrophysics Data System (ADS)

    Monteiro, Mayra; Oliveira, Victor; Santos, Francisco; Barros Neto, Eduardo; Silva, Karyn; Silva, Rayane; Henrique, João; Chibério, Abimaelle

    2017-08-01

    In order to obtain cassava starch films with mechanical properties improved relative to synthetic polymers used in packaging production, a complete 2³ factorial design was carried out to investigate which factors significantly influence the tensile strength of the biofilm. The factors investigated were the cassava starch, glycerol and modified clay contents. Modified bentonite clay was used as the filler of the biofilm, and glycerol was the plasticizer used to thermoplasticize the cassava starch. The factorial analysis suggested a regression model capable of predicting the optimal mechanical property of the cassava starch film from the maximization of the tensile strength. The reliability of the regression model was tested against the experimental data using a Pareto chart. The modified clay was the factor with the greatest statistical significance for the observed response variable, and it contributed most to the improvement of the mechanical properties of the starch film. The factorial experiments showed that the interaction of glycerol with both modified clay and cassava starch was significant for the reduction of biofilm ductility. Modified clay and cassava starch contributed to the maximization of biofilm ductility, while glycerol contributed to its minimization.
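
    A minimal sketch of fitting a main-effects-plus-interactions regression to a coded 2³ factorial of this kind; the tensile-strength responses below are made-up placeholders, not the study's data.

    ```python
    import numpy as np
    from itertools import product

    # Coded 2^3 design: factors are starch, glycerol and modified clay content
    X_coded = np.array(list(product([-1, 1], repeat=3)), dtype=float)
    tensile = np.array([3.1, 3.4, 2.8, 3.0, 4.2, 4.6, 3.9, 4.4])  # placeholder responses (MPa)

    # Design matrix: intercept, main effects, and two-factor interactions
    A, B, C = X_coded.T
    X = np.column_stack([np.ones(8), A, B, C, A * B, A * C, B * C])
    coef, *_ = np.linalg.lstsq(X, tensile, rcond=None)

    labels = ["intercept", "starch", "glycerol", "clay",
              "starch*glycerol", "starch*clay", "glycerol*clay"]
    for name, c in zip(labels, coef):
        print(f"{name:>16s}: {c:+.3f}")
    ```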

  15. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  16. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    PubMed

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
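
    The IST-specific allocation formulas are not given in the abstract; as a classical baseline, Neyman allocation for stratified sampling (sample sizes proportional to stratum size times stratum standard deviation) can be sketched as follows. The stratum sizes and standard deviations below are illustrative guesses.

    ```python
    import numpy as np

    def neyman_allocation(N_h, S_h, n_total):
        """Classical Neyman allocation (baseline, not the IST-specific formulas):
        allocate n_total sample units proportionally to N_h * S_h per stratum."""
        N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
        weights = N_h * S_h
        n_h = n_total * weights / weights.sum()
        return np.round(n_h).astype(int)

    # Three strata with assumed sizes and standard deviations of the item sum
    print(neyman_allocation(N_h=[5000, 3000, 2000], S_h=[4.0, 2.5, 6.0], n_total=600))
    ```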

  17. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
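
    A hedged sketch of the parameter-uncertainty step described above: given a small dataset, plausible population means and covariances are drawn from a multivariate t-distribution and a Wishart distribution, respectively. The hyperparameter choices below are assumptions, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(size=(12, 3))          # limited experimental data: 12 obs, 3 variables
    n, d = data.shape
    xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

    # One joint realization of the population mean and covariance, reflecting
    # the uncertainty due to the small sample (hyperparameters are assumptions)
    Sigma_real = stats.wishart(df=n - 1, scale=S / (n - 1)).rvs(random_state=rng)
    mu_real = stats.multivariate_t(loc=xbar, shape=S / n, df=n - 1).rvs(random_state=rng)

    print("sampled population mean:", mu_real)
    print("sampled covariance diagonal:", np.diag(Sigma_real))
    ```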

  18. Receptor arrays optimized for natural odor statistics.

    PubMed

    Zwicker, David; Murugan, Arvind; Brenner, Michael P

    2016-05-17

    Natural odors typically consist of many molecules at different concentrations. It is unclear how the numerous odorant molecules and their possible mixtures are discriminated by relatively few olfactory receptors. Using an information theoretic model, we show that a receptor array is optimal for this task if it achieves two possibly conflicting goals: (i) Each receptor should respond to half of all odors and (ii) the response of different receptors should be uncorrelated when averaged over odors presented with natural statistics. We use these design principles to predict statistics of the affinities between receptors and odorant molecules for a broad class of odor statistics. We also show that optimal receptor arrays can be tuned to either resolve concentrations well or distinguish mixtures reliably. Finally, we use our results to predict properties of experimentally measured receptor arrays. Our work can thus be used to better understand natural olfaction, and it also suggests ways to improve artificial sensor arrays.

  19. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD).

    PubMed

    Rhoades, Seth D; Weljie, Aalim M

    2016-12-01

    Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method.

  20. Comprehensive Optimization of LC-MS Metabolomics Methods Using Design of Experiments (COLMeD)

    PubMed Central

    Rhoades, Seth D.

    2017-01-01

    Introduction Both reverse-phase and HILIC chemistries are deployed for liquid-chromatography mass spectrometry (LC-MS) metabolomics analyses, however HILIC methods lag behind reverse-phase methods in reproducibility and versatility. Comprehensive metabolomics analysis is additionally complicated by the physiochemical diversity of metabolites and array of tunable analytical parameters. Objective Our aim was to rationally and efficiently design complementary HILIC-based polar metabolomics methods on multiple instruments using Design of Experiments (DoE). Methods We iteratively tuned LC and MS conditions on ion-switching triple quadrupole (QqQ) and quadrupole-time-of-flight (qTOF) mass spectrometers through multiple rounds of a workflow we term COLMeD (Comprehensive optimization of LC-MS metabolomics methods using design of experiments). Multivariate statistical analysis guided our decision process in the method optimizations. Results LC-MS/MS tuning for the QqQ method on serum metabolites yielded a median response increase of 161.5% (p<0.0001) over initial conditions with a 13.3% increase in metabolite coverage. The COLMeD output was benchmarked against two widely used polar metabolomics methods, demonstrating total ion current increases of 105.8% and 57.3%, with median metabolite response increases of 106.1% and 10.3% (p<0.0001 and p<0.05 respectively). For our optimized qTOF method, 22 solvent systems were compared on a standard mix of physiochemically diverse metabolites, followed by COLMeD optimization, yielding a median 29.8% response increase (p<0.0001) over initial conditions. Conclusions The COLMeD process elucidated response tradeoffs, facilitating improved chromatography and MS response without compromising separation of isobars. COLMeD is efficient, requiring no more than 20 injections in a given DoE round, and flexible, capable of class-specific optimization as demonstrated through acylcarnitine optimization within the QqQ method. PMID:28348510

  1. The impact on midlevel vision of statistically optimal divisive normalization in V1

    PubMed Central

    Coen-Cagli, Ruben; Schwartz, Odelia

    2013-01-01

    The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950
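
    The canonical divisive normalization referred to above can be sketched in a few lines; the paper's statistically optimal, image-dependent variant is more involved, and the constant sigma and the pooling weights here are assumptions.

    ```python
    import numpy as np

    def divisive_normalization(drives, weights=None, sigma=1.0):
        """Canonical surround normalization (sketch): each unit's squared filter drive
        is divided by sigma^2 plus a weighted sum of the squared drives of the pool."""
        d2 = np.asarray(drives, float) ** 2
        w = np.ones_like(d2) if weights is None else np.asarray(weights, float)
        pool = w @ d2 if w.ndim == 2 else np.sum(w * d2)
        return d2 / (sigma ** 2 + pool)

    # Example: five model V1 units responding to the same image patch
    print(divisive_normalization([0.2, 1.5, 0.9, 0.1, 2.0], sigma=0.5))
    ```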

  2. Automation method to identify the geological structure of seabed using spatial statistic analysis of echo sounding data

    NASA Astrophysics Data System (ADS)

    Kwon, O.; Kim, W.; Kim, J.

    2017-12-01

    Recently, construction of subsea tunnels has increased globally. For safe construction of a subsea tunnel, identifying the geological structure, including faults, at the design and construction stages is extremely important. However, unlike for tunnels on land, it is very difficult to obtain data on the geological structure because of the limits of geological surveys at sea. This study addresses these difficulties by developing a technology to identify the geological structure of the seabed automatically using echo sounding data. When investigating a potential site for a deep subsea tunnel, borehole and geophysical investigations face technical and economic limits. In contrast, echo sounding data are easily obtainable, and their reliability is higher than that of the above approaches. This study aims to develop an algorithm that identifies large-scale geological structures of the seabed using a geostatistical approach. It is based on the structural-geology principle that topographic features indicate geological structure. The basic concept of the algorithm is as follows: (1) convert the seabed topography to grid data using the echo sounding data, (2) apply a moving window of optimal size to the grid data, (3) estimate the spatial statistics of the grid data in the window area, (4) set a percentile standard for the spatial statistics, (5) display the values satisfying the standard on the map, and (6) visualize the geological structure on the map. The important elements of this study include the optimal size of the moving window, the choice of optimal spatial statistics and the determination of the optimal percentile standard. To determine these optimal elements, numerous simulations were performed. Finally, a user program based on R was developed using the optimal analysis algorithm. The user program was designed to identify the variation of various spatial statistics; by making it easy to designate the type of spatial statistic and the percentile standard, it allows straightforward analysis of the geological structure as these settings vary. This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government (Project Number: 13 Construction Research T01).
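
    Steps (2)-(5) of the algorithm outlined above can be sketched briefly; this is an illustrative Python stand-in for the R program described, using a synthetic bathymetry grid with an artificial offset mimicking a lineament.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    # Placeholder gridded bathymetry (step 1 would build this from echo sounding
    # data); a smooth surface with a synthetic 3 m step mimicking a lineament.
    y, x = np.mgrid[0:200, 0:200]
    depth_grid = -50 - 0.02 * x - 3.0 * (x > 120)

    # Steps 2-3: moving window (here 5x5 cells) and a local spatial statistic
    local_std = generic_filter(depth_grid, np.std, size=5)

    # Steps 4-5: percentile standard and map of cells exceeding it
    threshold = np.percentile(local_std, 95)
    candidate_structure = local_std >= threshold
    print("cells flagged as potential structure:", int(candidate_structure.sum()))
    ```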

  3. Relationships between optimism, educational environment, career adaptability and career motivation in nursing undergraduates: A cross-sectional study.

    PubMed

    Fang, Wenjie; Zhang, Yanting; Mei, Jiaojiao; Chai, Xiaohui; Fan, Xiuzhen

    2018-06-01

    To address the problem of career abandonment among nursing undergraduates, it is important to understand their motivation to choose nursing as a career and its associated personal and situational factors. To examine the relationships between optimism, educational environment, career adaptability, and career motivation in nursing undergraduates using the career construction model of adaptation. This study adopted a cross-sectional design. A convenience sample of 1060 nursing undergraduates from three universities completed questionnaires for measuring optimism, educational environment, career adaptability, and career motivation. Confirmatory factor analyses, descriptive analyses, comparison analyses, correlation analyses, and mediation analyses were performed accordingly. Nursing undergraduates' career motivation was positively correlated with their career adaptability (r = 0.41, P < 0.01), the educational environment (r = 0.60, P < 0.01), and optimism (r = 0.26, P < 0.01). In addition, the effects of optimism and educational environment on career motivation were partially mediated by career adaptability in nursing undergraduates. In nursing undergraduates, the educational environment had a relatively strong positive association with career motivation, while optimism had a weak one. Career adaptability played a mediating role in the relationships. Targeted interventions may improve nursing undergraduates' career motivation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. A Language for Specifying Compiler Optimizations for Generic Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcock, Jeremiah J.

    2007-01-01

    Compiler optimization is important to software performance, and modern processor architectures make optimization even more critical. However, many modern software applications use libraries providing high levels of abstraction. Such libraries often hinder effective optimization: the libraries are difficult to analyze using current compiler technology. For example, high-level libraries often use dynamic memory allocation and indirectly expressed control structures, such as iterator-based loops. Programs using these libraries often cannot achieve an optimal level of performance. On the other hand, software libraries have also been recognized as potentially aiding in program optimization. One proposed implementation of library-based optimization is to allow the library author, or a library user, to define custom analyses and optimizations. Only limited systems have been created to take advantage of this potential, however. One problem in creating a framework for defining new optimizations and analyses is how users are to specify them: implementing them by hand inside a compiler is difficult and prone to errors. Thus, a domain-specific language for library-based compiler optimizations would be beneficial. Many optimization specification languages have appeared in the literature, but they tend to be either limited in power or unnecessarily difficult to use. Therefore, I have designed, implemented, and evaluated the Pavilion language for specifying program analyses and optimizations, designed for library authors and users. These analyses and optimizations can be based on the implementation of a particular library, its use in a specific program, or on the properties of a broad range of types, expressed through concepts. The new system is intended to provide a high level of expressiveness, even though the intended users are unlikely to be compiler experts.

  5. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  6. Design, characterization, and in vitro cellular inhibition and uptake of optimized genistein-loaded NLC for the prevention of posterior capsular opacification using response surface methodology.

    PubMed

    Zhang, Wenji; Li, Xuedong; Ye, Tiantian; Chen, Fen; Sun, Xiao; Kong, Jun; Yang, Xinggang; Pan, Weisan; Li, Sanming

    2013-09-15

    The aim of this study was to design an innovative nanostructured lipid carrier (NLC) for the delivery of genistein applied after cataract surgery for the prevention of posterior capsular opacification. NLC loaded with genistein (GEN-NLC) was produced with Compritol 888 ATO, Gelucire 44/14 and Miglyol 812N, stabilized by Solutol(®) HS15, using the melt emulsification method. A 2⁴ central composite design with four independent variables was used for optimization. Effects of drug concentration, Gelucire 44/14 concentration in total solid lipid, liquid lipid concentration, and surfactant concentration on the mean particle size, polydispersity index, zeta potential and encapsulation efficiency were investigated. An analysis of variance (ANOVA) was used to assess the optimization. The optimized GEN-NLC showed a homogeneous particle size of 90.16 nm (PI = 0.33), a negatively charged surface (-25.08 mV) and high encapsulation efficiency (91.14%). Particle morphology assessed by TEM revealed a spherical shape. DSC analyses confirmed that GEN was mostly entrapped in an amorphous state. In vitro release experiments indicated a prolonged and controlled genistein release over 72 h. An in vitro growth inhibition assay showed effective inhibition of human lens epithelial cell (HLEC) growth by GEN-NLCs. A preliminary cellular uptake test demonstrated enhanced penetration of genistein into HLECs when delivered in NLCs. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take an example of exponential distribution, and provide a simple method of finding the conditional optimal spacing.

  8. Optimizing α for better statistical decisions: a case study involving the pace-of-life syndrome hypothesis: optimal α levels set to minimize Type I and II errors frequently result in different conclusions from those using α = 0.05.

    PubMed

    Mudge, Joseph F; Penny, Faith M; Houlahan, Jeff E

    2012-12-01

    Setting optimal significance levels that minimize Type I and Type II errors allows for more transparent and well-considered statistical decision making compared to the traditional α = 0.05 significance level. We use the optimal α approach to re-assess conclusions reached by three recently published tests of the pace-of-life syndrome hypothesis, which attempts to unify occurrences of different physiological, behavioral, and life history characteristics under one theory, over different scales of biological organization. While some of the conclusions reached using optimal α were consistent with those previously reported using the traditional α = 0.05 threshold, opposing conclusions were also frequently reached. The optimal α approach reduced probabilities of Type I and Type II errors, and ensured statistical significance was associated with biological relevance. Biologists should seriously consider their choice of α when conducting null hypothesis significance tests, as there are serious disadvantages with consistent reliance on the traditional but arbitrary α = 0.05 significance level. Copyright © 2012 WILEY Periodicals, Inc.
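
    The optimal-α idea can be sketched for a one-sided two-sample z-test with an assumed effect size: choose the α that minimizes a weighted average of the Type I and Type II error rates. The test, weights and effect size below are assumptions for illustration, not the paper's exact settings.

    ```python
    import numpy as np
    from scipy import stats, optimize

    def average_error(alpha, effect_size, n_per_group, w1=1.0, w2=1.0):
        """Weighted average of Type I (alpha) and Type II (beta) error rates for a
        one-sided two-sample z-test with standardized effect size d and n per group."""
        z_crit = stats.norm.ppf(1 - alpha)
        beta = stats.norm.cdf(z_crit - effect_size * np.sqrt(n_per_group / 2))
        return (w1 * alpha + w2 * beta) / (w1 + w2)

    res = optimize.minimize_scalar(average_error, bounds=(1e-6, 0.5), method="bounded",
                                   args=(0.5, 30))      # assumed d = 0.5, n = 30 per group
    print(f"optimal alpha = {res.x:.3f}, average error = {res.fun:.3f}")
    ```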

  9. Phase Transitions in Combinatorial Optimization Problems: Basics, Algorithms and Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2005-10-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.

  10. Spectral Characteristics of Salinized Soils during Microbial Remediation Processes.

    PubMed

    Ma, Chuang; Shen, Guang-rong; Zhi, Yue-e; Wang, Zi-jun; Zhu, Yun; Li, Xian-hua

    2015-09-01

    In this study, the spectral reflectance of saline soils, the associated soil salt content (SSC) and the concentrations of salt ions were measured and analysed during container-based microbial remediation experiments on saline soil (the main salt being sodium chloride) from Dongying City, Shandong Province. The spectral reflectance bands of saline soils sensitive to SSC, Cl- and Na+ during microbial remediation were identified. The average-dimension reduction of these bands was conducted using a combination of the correlation coefficient and the decision coefficient, and by gradually narrowing the sampling interval. Results showed that the tendency and magnitude of the average spectral reflectance in all bands of saline soils during the total remediation processes were nearly consistent with SSC and with Cl- concentration, respectively. The degree of salinity of the soil, including SSC and salt ion concentrations, had a significant positive correlation with the spectral reflectance of all bands, particularly in the near-infrared band. The optimal spectral bands of SSC were 1370 to 1445 nm and 1447 to 1608 nm, whereas the optimal spectral bands of Cl- and Na+ were 1336 to 1461 nm and 1471 to 1561 nm, respectively. The relationship model among SSC, soil salt ion concentrations (Cl- and Na+) and soil spectral reflectance of the corresponding optimal spectral band was established. The largest R2 of the relationship model between SSC and the average reflectance of the associated optimal band reached 0.95, with RMSEC and RMSEP of 1.076 and 0.591, respectively. Statistical analysis of the salt factors and soil reflectance for the different microbial remediation processes revealed the spectral response characteristics and the sensitivity of soil reflectance to SSC, which implies the feasibility of hyperspectral monitoring of soil microbial remediation and provides a basis for quick, nondestructive monitoring of the soil bioremediation process via soil spectral reflectance.

  11. Reproducibility-optimized test statistic for ranking genes in microarray studies.

    PubMed

    Elo, Laura L; Filén, Sanna; Lahesmaa, Riitta; Aittokallio, Tero

    2008-01-01

    A principal goal of microarray studies is to identify the genes showing differential expression under distinct conditions. In such studies, the selection of an optimal test statistic is a crucial challenge, which depends on the type and amount of data under analysis. While previous studies on simulated or spike-in datasets do not provide practical guidance on how to choose the best method for a given real dataset, we introduce an enhanced reproducibility-optimization procedure, which enables the selection of a suitable gene-ranking statistic directly from the data. In comparison with existing ranking methods, the reproducibility-optimized statistic shows good performance consistently under various simulated conditions and on Affymetrix spike-in dataset. Further, the feasibility of the novel statistic is confirmed in a practical research setting using data from an in-house cDNA microarray study of asthma-related gene expression changes. These results suggest that the procedure facilitates the selection of an appropriate test statistic for a given dataset without relying on a priori assumptions, which may bias the findings and their interpretation. Moreover, the general reproducibility-optimization procedure is not limited to detecting differential expression only but could be extended to a wide range of other applications as well.

  12. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udey, Ruth Norma

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  13. Monitoring the quality consistency of Weibizhi tablets by micellar electrokinetic chromatography fingerprints combined with multivariate statistical analyses, the simple quantified ratio fingerprint method, and the fingerprint-efficacy relationship.

    PubMed

    Liu, Yingchun; Sun, Guoxiang; Wang, Yan; Yang, Lanping; Yang, Fangliang

    2015-06-01

    Micellar electrokinetic chromatography fingerprinting combined with quantification was successfully developed and applied to monitor the quality consistency of Weibizhi tablets, which is a classical compound preparation used to treat gastric ulcers. A background electrolyte composed of 57 mmol/L sodium borate, 21 mmol/L sodium dodecylsulfate and 100 mmol/L sodium hydroxide was used to separate compounds. To optimize capillary electrophoresis conditions, multivariate statistical analyses were applied. First, the most important factors influencing sample electrophoretic behavior were identified as background electrolyte concentrations. Then, a Box-Behnken design response surface strategy using resolution index RF as an integrated response was set up to correlate factors with response. RF reflects the effective signal amount, resolution, and signal homogenization in an electropherogram; thus, it was regarded as an excellent indicator. In fingerprint assessments, the simple quantified ratio fingerprint method was established for comprehensive quality discrimination of traditional Chinese medicines/herbal medicines from qualitative and quantitative perspectives, by which the quality of 27 samples from the same manufacturer were well differentiated. In addition, the fingerprint-efficacy relationship between fingerprints and antioxidant activities was established using partial least squares regression, which provided important medicinal efficacy information for quality control. The present study offered an efficient means for monitoring Weibizhi tablet quality consistency. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Optimization of the Hartmann-Shack microlens array

    NASA Astrophysics Data System (ADS)

    de Oliveira, Otávio Gomes; de Lima Monteiro, Davies William

    2011-04-01

    In this work we propose to optimize the microlens-array geometry for a Hartmann-Shack wavefront sensor. The optimization makes it possible to replace regular microlens arrays with a larger number of microlenses by arrays with fewer microlenses located at optimal sampling positions, with no increase in the reconstruction error. The goal is to propose a straightforward and widely accessible numerical method to calculate an optimized microlens array for known aberration statistics. The optimization comprises the minimization of the wavefront reconstruction error and/or of the number of microlenses necessary in the array. We numerically generate, sample and reconstruct the wavefront, and use a genetic algorithm to discover the optimal array geometry. Within an ophthalmological context, as a case study, we demonstrate that an array with only 10 suitably located microlenses can be used to produce reconstruction errors as small as those of a 36-microlens regular array. The same optimization procedure can be employed for any application where the wavefront statistics are known.

  15. A review of the efficacy of transcranial magnetic stimulation (TMS) treatment for depression, and current and future strategies to optimize efficacy.

    PubMed

    Loo, Colleen K; Mitchell, Philip B

    2005-11-01

    There is a growing interest in extending the use of repetitive transcranial magnetic stimulation (rTMS) beyond research centres to the widespread clinical treatment of depression. Thus it is timely to critically review the evidence for the efficacy of rTMS as an antidepressant treatment. Factors relevant to the efficacy of rTMS are discussed along with the implications of these for the further optimization of rTMS. Clinical trials of the efficacy of rTMS in depressed subjects are summarized and reviewed, focusing mainly on sham-controlled studies and meta-analyses published to date. There is fairly consistent statistical evidence for the superiority of rTMS over a sham control, though the degree of clinical improvement is not large. However, these data are derived mainly from two-week comparisons of rTMS versus sham, and evidence suggests greater efficacy with longer treatment courses. Studies so far have also varied greatly in approaches to rTMS stimulation (with respect to stimulation site, stimulus parameters, etc.), with little empirical evidence to inform on the relative merits of these approaches. Only studies published in English were reviewed. Many of the studies in the literature had small sample sizes and different methodologies, making comparisons between studies difficult. Current published studies and meta-analyses have evaluated the efficacy of rTMS as given in treatment paradigms that are almost certainly suboptimal (e.g. of two weeks' duration). While the data nevertheless support positive outcomes for rTMS, there is much scope for the further refinement and development of rTMS as an antidepressant treatment. Ongoing research is critical for optimizing the efficacy of rTMS.

  16. Emerging Techniques for Dose Optimization in Abdominal CT

    PubMed Central

    Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit

    2014-01-01

    Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277

  17. Separation of the atmospheric variability into non-Gaussian multidimensional sources by projection pursuit techniques

    NASA Astrophysics Data System (ADS)

    Pires, Carlos A. L.; Ribeiro, Andreia F. S.

    2017-02-01

    We develop an expansion of space-distributed time series into statistically independent uncorrelated subspaces (statistical sources) of low dimension that exhibit enhanced non-Gaussian probability distributions with geometrically simple chosen shapes (the projection pursuit rationale). The method relies upon a generalization of principal component analysis, which is optimal for mixed Gaussian signals, and of independent component analysis (ICA), which is optimized to separate non-Gaussian scalar sources. The proposed method, supported by information theory concepts and methods, is independent subspace analysis (ISA), which looks for multi-dimensional, intrinsically synergetic subspaces such as dyads (2D) and triads (3D) that are not separable by ICA. Basically, we optimize rotated variables maximizing certain nonlinear correlations (contrast functions) coming from the non-Gaussianity of the joint distribution. As a by-product, it provides nonlinear variable changes 'unfolding' the subspaces into nearly Gaussian scalars that are easier to post-process. Moreover, the new variables still work as nonlinear data exploratory indices of the non-Gaussian variability of the analysed climatic and geophysical fields. The method (ISA, followed by nonlinear unfolding) is tested on three datasets. The first one comes from the Lorenz'63 three-dimensional chaotic model, showing a clear separation into a non-Gaussian dyad plus an independent scalar. The second one is a mixture of propagating waves of random correlated phases in which the emergence of triadic wave resonances imprints a statistical signature in terms of a non-Gaussian non-separable triad. Finally the method is applied to the monthly variability of a high-dimensional quasi-geostrophic (QG) atmospheric model for the Northern Hemisphere winter. We find that strongly non-Gaussian dyads of parabolic shape perform much better than the unrotated variables as regards the separation of the model's four centroid regimes (positive and negative phases of the Arctic Oscillation and of the North Atlantic Oscillation). Triads are also likely in the QG model but of weaker expression than dyads due to the imposed shape and dimension. The study emphasizes the existence of nonlinear dyadic and triadic teleconnections.
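
    Off-the-shelf tools implement the scalar ICA step that ISA generalizes; the sketch below uses scikit-learn's FastICA on a toy two-source mixture. Searching for synergetic dyads or triads among the recovered components, the ISA step proper, is not shown.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 8, 2000)
    s1 = np.sign(np.sin(3 * t))                     # square wave (strongly non-Gaussian)
    s2 = rng.laplace(size=t.size)                   # heavy-tailed source
    S = np.column_stack([s1, s2])
    A = np.array([[1.0, 0.6], [0.4, 1.0]])          # mixing matrix
    X = S @ A.T                                     # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                    # recovered scalar independent components
    print("correlation of recovered vs. true sources:\n",
          np.corrcoef(S_est.T, S.T)[:2, 2:])
    ```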

  18. Task-based data-acquisition optimization for sparse image reconstruction systems

    NASA Astrophysics Data System (ADS)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  19. Wavefront-guided versus wavefront-optimized laser in situ keratomileusis: contralateral comparative study.

    PubMed

    Padmanabhan, Prema; Mrochen, Michael; Basuthkar, Subam; Viswanathan, Deepa; Joseph, Roy

    2008-03-01

    To compare the outcomes of wavefront-guided and wavefront-optimized treatment in fellow eyes of patients having laser in situ keratomileusis (LASIK) for myopia. Medical and Vision Research Foundation, Tamil Nadu, India. This prospective comparative study comprised 27 patients who had wavefront-guided LASIK in 1 eye and wavefront-optimized LASIK in the fellow eye. The Hansatome (Bausch & Lomb) was used to create a superior-hinged flap and the Allegretto laser (WaveLight Laser Technologie AG), for photoablation. The Allegretto wave analyzer was used to measure ocular wavefront aberrations and the Functional Acuity Contrast Test chart, to measure contrast sensitivity before and 1 month after LASIK. The refractive and visual outcomes and the changes in aberrations and contrast sensitivity were compared between the 2 treatment modalities. One month postoperatively, 92% of eyes in the wavefront-guided group and 85% in the wavefront-optimized group had uncorrected visual acuity of 20/20 or better; 93% and 89%, respectively, had a postoperative spherical equivalent refraction of +/-0.50 diopter. The differences between groups were not statistically significant. Wavefront-guided LASIK induced less change in 18 of 22 higher-order Zernike terms than wavefront-optimized LASIK, with the change in positive spherical aberration the only statistically significant one (P= .01). Contrast sensitivity improved at the low and middle spatial frequencies (not statistically significant) and worsened significantly at high spatial frequencies after wavefront-guided LASIK; there was a statistically significant worsening at all spatial frequencies after wavefront-optimized LASIK. Although both wavefront-guided and wavefront-optimized LASIK gave excellent refractive correction results, the former induced less higher-order aberrations and was associated with better contrast sensitivity.

  20. Early Warning Signs of Suicide in Service Members Who Engage in Unauthorized Acts of Violence

    DTIC Science & Technology

    2016-06-01

    Warning signs observable to military law enforcement personnel were grouped into four categories of indicators: (1) …, (2) Behavioral Change indicators, (3) Social indicators, and (4) Occupational indicators. Statistical analyses were conducted to test for differences in warning signs between cases of suicide, violence, or …

  1. Mean-variance portfolio optimization by using time series approaches based on logarithmic utility function

    NASA Astrophysics Data System (ADS)

    Soeryana, E.; Fadhlina, N.; Sukono; Rusyaman, E.; Supian, S.

    2017-01-01

    Investors in stocks are also faced with the issue of risk, because daily stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio consisting of several stocks is intended to obtain the optimal composition of the investment. This paper discusses Mean-Variance optimization of a stock portfolio in which the mean and volatility are not constant, based on a logarithmic utility function. The non-constant mean is analysed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analysed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyse several Islamic stocks in Indonesia, and the expected result is the proportion of investment in each of the analysed Islamic stocks.
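
    The Lagrange-multiplier step has a closed form for the budget-constrained mean-variance problem, sketched below. The expected returns and covariance matrix are placeholders standing in for ARMA-mean forecasts and a GARCH-based covariance, and the risk-aversion parameter tau is an assumption.

    ```python
    import numpy as np

    def mean_variance_weights(mu, Sigma, tau):
        """Maximize w'mu - (tau/2) w'Sigma w  subject to  sum(w) = 1
        via a Lagrange multiplier; returns the closed-form optimal weights."""
        mu, Sigma = np.asarray(mu, float), np.asarray(Sigma, float)
        ones = np.ones_like(mu)
        Sinv_mu = np.linalg.solve(Sigma, mu)
        Sinv_1 = np.linalg.solve(Sigma, ones)
        lam = (ones @ Sinv_mu - tau) / (ones @ Sinv_1)   # enforces the budget constraint
        return (Sinv_mu - lam * Sinv_1) / tau

    # Placeholder forecasts and covariance for three stocks
    mu = np.array([0.012, 0.009, 0.015])
    Sigma = np.array([[0.040, 0.006, 0.010],
                      [0.006, 0.030, 0.004],
                      [0.010, 0.004, 0.050]])
    w = mean_variance_weights(mu, Sigma, tau=3.0)
    print(w, w.sum())    # weights sum to 1
    ```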

  2. [Statistical analysis using freely-available "EZR (Easy R)" software].

    PubMed

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.

  3. Statistical principle and methodology in the NISAN system.

    PubMed Central

    Asano, C

    1979-01-01

    The NISAN system is a new interactive statistical analysis program package constructed by an organization of Japanese statisticians. The package is widely applicable to both statistical situations, confirmatory analysis and exploratory analysis, and is designed to draw on statistical wisdom and to help senior statisticians choose an optimal process of statistical analysis. PMID:540594

  4. Statistical physics of hard combinatorial optimization: Vertex cover problem

    NASA Astrophysics Data System (ADS)

    Zhao, Jin-Hua; Zhou, Hai-Jun

    2014-07-01

    Typical-case computation complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computation complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to other optimization problems.
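
    For readers who want a concrete baseline against which to compare statistical-physics solvers, here is a minimal greedy vertex-cover sketch on a random graph; it is not the message-passing algorithm discussed in the paper, just a simple reference heuristic.

    ```python
    import random

    # Illustrative baseline only (not the message-passing algorithm above):
    # greedily add the highest-degree vertex until every edge is covered.
    def greedy_vertex_cover(edges):
        uncovered = set(edges)
        cover = set()
        while uncovered:
            degree = {}
            for u, v in uncovered:
                degree[u] = degree.get(u, 0) + 1
                degree[v] = degree.get(v, 0) + 1
            best = max(degree, key=degree.get)
            cover.add(best)
            uncovered = {e for e in uncovered if best not in e}
        return cover

    # Erdos-Renyi style random graph with 50 vertices
    random.seed(0)
    edges = [(i, j) for i in range(50) for j in range(i + 1, 50) if random.random() < 0.08]
    print(len(greedy_vertex_cover(edges)), "vertices in the greedy cover")
    ```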

  5. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.

    PubMed

    Berkes, Pietro; Orbán, Gergo; Lengyel, Máté; Fiser, József

    2011-01-07

    The brain maintains internal models of its environment to interpret sensory inputs and to prepare actions. Although behavioral studies have demonstrated that these internal models are optimally adapted to the statistics of the environment, the neural underpinning of this adaptation is unknown. Using a Bayesian model of sensory cortical processing, we related stimulus-evoked and spontaneous neural activities to inferences and prior expectations in an internal model and predicted that they should match if the model is statistically optimal. To test this prediction, we analyzed visual cortical activity of awake ferrets during development. Similarity between spontaneous and evoked activities increased with age and was specific to responses evoked by natural scenes. This demonstrates the progressive adaptation of internal models to the statistics of natural stimuli at the neural level.

  6. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model for full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then apply this model to simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. When source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.

  7. HydroApps: An R package for statistical simulation to use in regional analysis

    NASA Astrophysics Data System (ADS)

    Ganora, D.

    2013-12-01

    The HydroApps package is a newly developed R extension initially created to support the use of a recent model for flood frequency estimation developed for applications in Northwestern Italy; it also contains some general tools for regional analyses and can be easily extended to include other statistical models. The package is currently at an experimental level of development. HydroApps is an offshoot of the SSEM project for regional flood frequency analysis, although it was developed independently to support various instances of regional analyses. Its aim is to provide a basis for interplay between statistical simulation and practical operational use. In particular, the main module of the package deals with building the confidence bands of flood frequency curves expressed by means of their L-moments. Other functions include pre-processing and visualization of hydrologic time series and analysis of the optimal design flood under uncertainty, as well as tools useful in water resources management for the estimation of flow duration curves and their sensitivity to water withdrawals. Particular attention is devoted to code granularity, i.e. the level of detail and aggregation of the code: greater detail means more low-level functions, which brings more flexibility but reduces the ease of use in practice. A balance between detail and simplicity is necessary and can be achieved with appropriate wrapping functions and specific help pages for each working block. From a more general viewpoint, the package does not yet have a user-friendly interface, but it runs on multiple operating systems and is easy to update, as are many other open-source projects. The HydroApps functions and their features are reported in order to share ideas and materials and to improve the technological and information transfer between scientific communities and final users such as policy makers.

  8. [Stapedotomy with the use of CO2 laser--"one shot" technique].

    PubMed

    Szyfter, Witold; Mielcarek-Kuchta, Daniela; Młodkowska, Anna; Miętkiewska-Leszniewska, Dorota; Obrębowska, Zofia; Łączkowska-Przybylska, Joanna

    2013-01-01

    The CO2 laser is used in stapes surgery because of its good water absorption and near-optimal ablation of bony structures without affecting inner ear parameters. The aim was to assess the influence of the CO2 "one shot" laser on hearing results in a group of patients with otosclerosis. The study was carried out on a group of patients after surgical treatment. The follow-up time was at least 6 months. The hearing results were described according to the guidelines of the American Committee on Hearing and Equilibrium. The obtained results were statistically analysed with the use of the Wilcoxon matched-pairs test. The CO2 "one shot" laser has been used in the Department of Otolaryngology in Poznań since July 2009, and 101 operations were carried out with this system by the end of December 2011. The analysed group comprised 54 patients: 40 women and 14 men, aged from 22 to 59. In the Wilcoxon matched-pairs test there was a statistically significant correlation between the value of the hearing threshold in pre- and post-operative examinations at 0.5, 1, 2 and 3 kHz for bone and air conduction. We also found a statistically significant correlation between the mean value of the air-bone gap before and after treatment. After surgery the cochlear reserve closed or decreased (p<0.001). Based on the hearing results, we found the CO2 "one shot" laser to be highly useful in stapes surgery. Copyright © 2012 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z o.o. All rights reserved.

  9. New low-spin states of 122Xe observed via high-statistics β-decay of 122Cs

    NASA Astrophysics Data System (ADS)

    Jigmeddorj, B.; Garrett, P. E.; Andreoiu, C.; Ball, G. C.; Bruhn, T.; Cross, D. S.; Garnsworthy, A. B.; Hadinia, B.; Moukaddam, M.; Park, J.; Pore, J. L.; Radich, A. J.; Rajabali, M. M.; Rand, E. T.; Rizwan, U.; Svensson, C. E.; Voss, P.; Wang, Z. M.; Wood, J. L.; Yates, S. W.

    2018-05-01

    Excited states of 122Xe were studied via the β+/EC decay of 122Cs with the 8π γ-ray spectrometer at the TRIUMF-ISAC facility. Compton-suppressed HPGe detectors were used for measurements of γ-ray intensities, γγ coincidences, and γ-γ angular correlations. Two sets of data were collected to optimize the decays of the ground (21.2 s) and isomeric (3.7 min) states of 122Cs. The data collected have enabled the observation of about 505 new transitions and about 250 new levels, including 51 new low-spin states. Spin assignments have been made for 58 low-spin states based on the deduced β-decay feeding and γ-γ angular correlation analyses.

  10. The 'wired' universe of organic chemistry.

    PubMed

    Grzybowski, Bartosz A; Bishop, Kyle J M; Kowalczyk, Bartlomiej; Wilmer, Christopher E

    2009-04-01

    The millions of reactions performed and compounds synthesized by organic chemists over the past two centuries connect to form a network larger than the metabolic networks of higher organisms and rivalling the complexity of the World Wide Web. Despite its apparent randomness, the network of chemistry has a well-defined, modular architecture. The network evolves in time according to trends that have not changed since the inception of the discipline, and thus project into chemistry's future. Analysis of organic chemistry using the tools of network theory enables the identification of the most 'central' organic molecules and the prediction of which, and how many, molecules will be made in the future. Statistical analyses based on network connectivity are useful in optimizing parallel syntheses, in estimating chemical reactivity, and more.
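
    The centrality analysis mentioned above can be sketched with a toy reaction network; the molecules and reactions below are invented placeholders, and degree centrality stands in for whatever centrality measure the study actually used.

    ```python
    import networkx as nx

    # Toy illustration only: molecules as nodes, reactions as directed edges
    # (substrate -> product). Entries are invented, not data from the study.
    G = nx.DiGraph()
    reactions = [
        ("benzene", "nitrobenzene"),
        ("nitrobenzene", "aniline"),
        ("aniline", "acetanilide"),
        ("benzene", "toluene"),
        ("toluene", "benzoic acid"),
        ("aniline", "azo dye"),
    ]
    G.add_edges_from(reactions)

    # Degree centrality as a simple proxy for how 'central' a molecule is
    centrality = nx.degree_centrality(G)
    for molecule, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(f"{molecule:>15s}  {score:.2f}")
    ```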

  11. Identifying optimal threshold statistics for elimination of hookworm using a stochastic simulation model.

    PubMed

    Truscott, James E; Werkman, Marleen; Wright, James E; Farrell, Sam H; Sarkar, Rajiv; Ásbjörnsdóttir, Kristjana; Anderson, Roy M

    2017-06-30

    There is an increased focus on whether mass drug administration (MDA) programmes alone can interrupt the transmission of soil-transmitted helminths (STH). Mathematical models can be used to model these interventions and are increasingly being implemented to inform investigators about the expected trial outcome and the choice of optimum study design. One key factor is the choice of threshold for detecting elimination. However, there are currently no thresholds defined for STH regarding breaking transmission. We develop a simulation of an elimination study, based on the DeWorm3 project, using an individual-based stochastic disease transmission model in conjunction with models of MDA, sampling, diagnostics and the construction of study clusters. The simulation is then used to analyse the relationship between the study end-point elimination threshold and whether elimination is achieved in the long term within the model. We analyse the quality of a range of statistics in terms of the positive predictive values (PPV) and how they depend on a range of covariates, including threshold values, baseline prevalence, measurement time point and how clusters are constructed. End-point infection prevalence performs well in discriminating between villages that achieve interruption of transmission and those that do not, although the quality of the threshold is sensitive to baseline prevalence and threshold value. The optimal post-treatment prevalence threshold for determining elimination is 2% or less when the baseline prevalence range is broad. For multiple clusters of communities, both the probability of elimination and the ability of thresholds to detect it are strongly dependent on the size of the cluster and the size distribution of the constituent communities. The number of communities in a cluster is a key indicator of the probability of elimination and of PPV. Extending the time after the study endpoint at which the threshold statistic is measured improves the PPV for discriminating between eliminating clusters and those that bounce back. The probability of elimination and PPV are very sensitive to baseline prevalence for individual communities. However, most studies and programmes are constructed on the basis of clusters. Since elimination occurs within smaller population sub-units, the construction of clusters introduces new sensitivities of elimination threshold values to cluster size and the underlying population structure. Study simulation offers an opportunity to investigate key sources of sensitivity for elimination studies and programme designs in advance and to tailor interventions to prevailing local or national conditions.
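
    The threshold/PPV logic can be illustrated with a deliberately simplified simulation (not the individual-based DeWorm3 model): each synthetic cluster has a true long-term outcome and a noisy end-point prevalence, and the PPV of a candidate threshold is the fraction of flagged clusters that truly eliminate. All distributions below are invented.

    ```python
    import numpy as np

    # Toy sketch of the threshold/PPV idea, with made-up prevalence distributions.
    rng = np.random.default_rng(1)
    n_clusters = 2000
    truly_eliminated = rng.random(n_clusters) < 0.4
    # end-point prevalence: near zero if eliminating, higher if bouncing back
    prevalence = np.where(
        truly_eliminated,
        rng.beta(1, 60, n_clusters),   # mostly below a few per cent
        rng.beta(3, 25, n_clusters),   # mostly well above
    )

    for threshold in [0.01, 0.02, 0.05, 0.10]:
        flagged = prevalence < threshold
        ppv = truly_eliminated[flagged].mean() if flagged.any() else float("nan")
        print(f"threshold {threshold:4.0%}: PPV = {ppv:.2f}")
    ```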

  12. Cumulative psychosocial stress, coping resources, and preterm birth.

    PubMed

    McDonald, Sheila W; Kingston, Dawn; Bayrampour, Hamideh; Dolan, Siobhan M; Tough, Suzanne C

    2014-12-01

    Preterm birth constitutes a significant international public health issue, with implications for child and family well-being. High levels of psychosocial stress and negative affect before and during pregnancy are contributing factors to shortened gestation and preterm birth. We developed a cumulative psychosocial stress variable and examined its association with early delivery controlling for known preterm birth risk factors and confounding environmental variables. We further examined this association among subgroups of women with different levels of coping resources. Utilizing the All Our Babies (AOB) study, an ongoing prospective pregnancy cohort study in Alberta, Canada (n = 3,021), multinomial logistic regression was adopted to examine the independent effect of cumulative psychosocial stress and preterm birth subgroups compared to term births. Stratified analyses according to categories of perceived social support and optimism were undertaken to examine differential effects among subgroups of women. Cumulative psychosocial stress was a statistically significant risk factor for late preterm birth (OR = 1.73; 95 % CI = 1.07, 2.81), but not for early preterm birth (OR = 2.44; 95 % CI = 0.95, 6.32), controlling for income, history of preterm birth, pregnancy complications, reproductive history, and smoking in pregnancy. Stratified analyses showed that cumulative psychosocial stress was a significant risk factor for preterm birth at <37 weeks gestation for women with low levels of social support (OR = 2.09; 95 % CI = 1.07, 4.07) or optimism (OR = 1.87; 95 % CI = 1.04, 3.37). Our analyses suggest that early vulnerability combined with current anxiety symptoms in pregnancy confers risk for preterm birth. Coping resources may mitigate the effect of cumulative psychosocial stress on the risk for early delivery.

  13. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
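
    In schematic form (our notation, not necessarily the paper's), the lack-of-fit cost is a time integral of an ensemble-averaged, weighted, squared norm of the Liouville residual along a path of model densities rho_t:

    ```latex
    % Schematic only; W denotes an arbitrary weighting operator and \rho_t the
    % quasi-equilibrium density parametrized by the resolved variables.
    \Sigma[\rho] \;=\; \int_0^T
      \Big\langle \big\| \partial_t \rho_t + \{\rho_t, H\} \big\|_W^2 \Big\rangle \, dt ,
    \qquad
    \hat{\rho} \;=\; \arg\min_{\rho}\, \Sigma[\rho].
    ```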

  14. Environmental statistics and optimal regulation

    NASA Astrophysics Data System (ADS)

    Sivak, David; Thomson, Matt

    2015-03-01

    The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
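
    A minimal sketch of such a Bayesian decision rule, with invented priors, noise levels and cost/benefit values, might look as follows: the cell expresses the enzyme only when the posterior probability of nutrient abundance makes the expected benefit exceed the expression cost.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Hypothetical numbers throughout; this is an illustration of the decision
    # rule, not the authors' model.
    P_HIGH = 0.3            # prior probability that the nutrient is abundant
    MU_LOW, MU_HIGH = 1.0, 3.0
    SIGMA_MEAS = 1.0        # measurement noise
    COST = 0.5              # cost of expressing the enzyme
    BENEFIT = 2.0           # benefit of the enzyme when the nutrient is abundant

    def express_enzyme(measurement):
        """Return True if the expected benefit of expression exceeds its cost."""
        like_high = norm.pdf(measurement, MU_HIGH, SIGMA_MEAS) * P_HIGH
        like_low = norm.pdf(measurement, MU_LOW, SIGMA_MEAS) * (1 - P_HIGH)
        p_high = like_high / (like_high + like_low)   # posterior probability
        return BENEFIT * p_high > COST

    for m in [0.5, 1.5, 2.0, 2.5, 3.5]:
        print(f"measurement {m:.1f}: express = {express_enzyme(m)}")
    ```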

  15. Comparative one-factor-at-a-time, response surface (statistical) and bench-scale bioreactor level optimization of thermoalkaline protease production from a psychrotrophic Pseudomonas putida SKG-1 isolate.

    PubMed

    Singh, Santosh K; Singh, Sanjay K; Tripathi, Vinayak R; Khare, Sunil K; Garg, Satyendra K

    2011-12-28

    Production of alkaline protease from various bacterial strains using statistical methods is customary nowadays. The present work is the first attempt at production optimization of a solvent-stable thermoalkaline protease by a psychrotrophic Pseudomonas putida isolate using conventional, response surface, and fermentor-level optimization. The pre-screening medium amended with optimized (w/v) 1.0% glucose, 2.0% gelatin and 0.5% yeast extract produced 278 U protease ml(-1) at 72 h incubation. Enzyme production increased to 431 Uml(-1) when Mg2+ (0.01%, w/v) was supplemented. Optimization of physical factors further enhanced protease production to 514 Uml(-1) at pH 9.0, 25°C and 200 rpm within 60 h. The combined effect of the conventionally optimized variables (glucose, yeast extract, MgSO4 and pH), subsequently predicted by response surface methodology, yielded 617 U protease ml(-1) at glucose 1.25% (w/v), yeast extract 0.5% (w/v), MgSO4 0.01% (w/v) and pH 8.8. Bench-scale bioreactor level optimization resulted in enhanced production of 882 U protease ml(-1) at 0.8 vvm aeration and 150 rpm agitation during only 48 h incubation. The optimization of fermentation variables using conventional and statistical approaches and aeration/agitation at the fermentor level resulted in a ~13.5-fold increase (882 Uml(-1)) in protease production compared to un-optimized conditions (65 Uml(-1)). This is the highest level of thermoalkaline protease reported so far for any psychrotrophic bacterium.
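
    The response-surface step can be sketched generically (synthetic data, not the study's measurements): fit a full second-order polynomial to the design points by least squares and locate the stationary point of the fitted surface.

    ```python
    import numpy as np

    # Illustrative quadratic response-surface fit: y mimics protease activity,
    # x1/x2 are two coded factors (e.g. glucose and pH); all values invented.
    rng = np.random.default_rng(0)
    x1 = rng.uniform(-1, 1, 30)
    x2 = rng.uniform(-1, 1, 30)
    y = 600 - 50 * (x1 - 0.25) ** 2 - 80 * (x2 + 0.1) ** 2 + rng.normal(0, 5, 30)

    # Design matrix for the full second-order model
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b12, b11, b22 = beta

    # Stationary point of the fitted surface (candidate optimum)
    B = np.array([[2 * b11, b12], [b12, 2 * b22]])
    x_star = np.linalg.solve(B, -np.array([b1, b2]))
    print("fitted optimum (coded units):", x_star)
    ```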

  16. Received signal strength and local terrain profile data for radio network planning and optimization at GSM frequency bands.

    PubMed

    Popoola, Segun I; Atayero, Aderemi A; Faruk, Nasir

    2018-02-01

    The behaviour of radio wave signals in a wireless channel depends on the local terrain profile of the propagation environment. In view of this, the Received Signal Strength (RSS) of transmitted signals is measured at different points in space for radio network planning and optimization. However, these important data are often not publicly available for wireless channel characterization and propagation model development. In this data article, RSS data of a commercial base station operating at 900 and 1800 MHz were measured along three different routes of the Lagos-Badagry Highway, Nigeria. In addition, local terrain profile data of the study area (terrain elevation, clutter height, altitude, and the distance of the mobile station from the base station) are extracted from a Digital Terrain Map (DTM) to account for the unique environmental features. Statistical analyses and probability distributions of the RSS data are presented in tables and graphs. Furthermore, the degree of correlation (and the corresponding significance) between the RSS and the local terrain parameters was computed and analyzed for proper interpretation. The data provided in this article will help radio network engineers to predict signal path loss, estimate radio coverage, efficiently reuse limited frequencies, avoid interference, optimize handover, and adjust transmitted power levels.
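
    As a rough illustration of how such data are typically used (the actual measurements are not reproduced here), the sketch below fits a hypothetical log-distance path-loss model, RSS = A - 10*n*log10(d), and reports the correlation between RSS and log-distance on synthetic values.

    ```python
    import numpy as np

    # Synthetic RSS/distance values standing in for the measured data.
    rng = np.random.default_rng(3)
    d = rng.uniform(0.1, 5.0, 200)                                # km
    rss = -60 - 10 * 3.2 * np.log10(d) + rng.normal(0, 6, 200)    # dBm

    # Least-squares fit of the log-distance path-loss model
    X = np.column_stack([np.ones_like(d), np.log10(d)])
    coef, *_ = np.linalg.lstsq(X, rss, rcond=None)
    A, slope = coef
    print(f"intercept A = {A:.1f} dBm, path-loss exponent n = {-slope / 10:.2f}")
    print("correlation(RSS, log10 d) =", np.corrcoef(rss, np.log10(d))[0, 1].round(2))
    ```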

  17. From crude glycerol to carotenoids by using a Rhodotorula glutinis mutant.

    PubMed

    Cutzu, Raffaela; Coi, Annalisa; Rosso, Fulvia; Bardi, Laura; Ciani, Maurizio; Budroni, Marilena; Zara, Giacomo; Zara, Severino; Mannazzu, Ilaria

    2013-06-01

    In this work, eighteen red yeasts were screened for carotenoid production on glycerol-containing medium. Strain C2.5t1 of Rhodotorula glutinis, which showed the highest productivity, was UV mutagenized. Mutant 400A15, which exhibited a 280% increase in β-carotene production with respect to the parental strain, was selected. A central composite design was applied to 400A15 to optimize carotenoid and biomass production. Regression analyses of the quadratic polynomial equations obtained (R(2) = 0.87 and 0.94 for carotenoids and biomass, respectively) suggest that the models are reliable and significant (P < 0.0001) in predicting carotenoid and biomass production on the basis of the concentrations of crude glycerol, yeast extract and peptone. Accordingly, the total carotenoid production achieved (14.07 ± 1.45 mg l(-1)) under optimized growth conditions was not statistically different from the maximum predicted (14.64 ± 1.57 mg l(-1)) (P < 0.05), and it was about 100% higher than that obtained under un-optimized conditions. Therefore, mutant 400A15 may represent a biocatalyst of choice for the bioconversion of crude glycerol into value-added metabolites, and a tool for the valorization of this by-product of the biodiesel industry.

  18. Defining hypotension in moderate to severely injured trauma patients: raising the bar for the elderly.

    PubMed

    Edwards, Meghan; Ley, Eric; Mirocha, James; Hadjibashi, Anoushiravan Amini; Margulies, Daniel R; Salim, Ali

    2010-10-01

    Hypotension, defined as systolic blood pressure less than 90 mm Hg, is recognized as a sign of hemorrhagic shock and is a validated prognostic indicator. The definition of hypotension, particularly in the elderly population, deserves attention. We hypothesized that the systolic blood pressure associated with increased mortality resulting from hemorrhagic shock increases with increasing age. The Los Angeles County Trauma Database was queried for all moderate to severely injured patients without major head injuries admitted between 1998 and 2005. Several fit statistic analyses were performed for each systolic blood pressure from 50 to 180 mm Hg to identify the model that most accurately defined hypotension for three age groups. The optimal definition of hypotension for each group was determined from the best fit model. A total of 24,438 patients were analyzed. The optimal definition of hypotension was systolic blood pressure of 100 mm Hg for patients 20 to 49 years, 120 mm Hg for patients 50 to 69 years, and 140 mm Hg for patients 70 years and older. The optimal systolic blood pressure for improved mortality in hemorrhagic shock increases significantly with increasing age. Elderly trauma patients without major head injuries should be considered hypotensive for systolic blood pressure less than 140 mm Hg.
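
    The threshold search can be illustrated schematically on synthetic data (not the trauma registry): sweep candidate systolic blood pressure cut-offs and keep the one that best separates deaths from survivors according to a chosen fit statistic, here the Youden index as a stand-in for the fit statistics used in the study.

    ```python
    import numpy as np

    # Synthetic SBP/mortality data with a made-up risk curve.
    rng = np.random.default_rng(7)
    sbp = rng.normal(130, 25, 5000).clip(50, 200)
    p_death = 1 / (1 + np.exp((sbp - 100) / 8))
    died = rng.random(5000) < p_death

    # Youden index of the rule "SBP < cut-off predicts death"
    best = max(
        range(60, 181, 5),
        key=lambda c: (sbp[died] < c).mean() + (sbp[~died] >= c).mean() - 1,
    )
    print("cut-off with the best fit statistic:", best, "mm Hg")
    ```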

  19. Optimization of fermentation medium for the production of atrazine degrading strain Acinetobacter sp. DNS(32) by statistical analysis system.

    PubMed

    Zhang, Ying; Wang, Yang; Wang, Zhi-Gang; Wang, Xi; Guo, Huo-Sheng; Meng, Dong-Fang; Wong, Po-Keung

    2012-01-01

    Statistical experimental designs provided by the statistical analysis system (SAS) software were applied to optimize the fermentation medium composition for the production of the atrazine-degrading Acinetobacter sp. DNS(32) in shake-flask cultures. A Plackett-Burman design was employed to evaluate the effects of different components in the medium. The concentrations of corn flour, soybean flour, and K(2)HPO(4) were found to significantly influence Acinetobacter sp. DNS(32) production. The steepest ascent method was employed to determine the optimal regions of these three significant factors. These three factors were then optimized using a central composite design of response surface methodology. The optimized fermentation medium was composed as follows (g/L): corn flour 39.49, soybean flour 25.64, CaCO(3) 3, K(2)HPO(4) 3.27, MgSO(4)·7H(2)O 0.2, and NaCl 0.2. The predicted and verified values in the medium with the optimized concentrations of components in shake-flask experiments were 7.079 × 10(8) CFU/mL and 7.194 × 10(8) CFU/mL, respectively. The validated model can precisely predict the growth of the atrazine-degrading bacterium Acinetobacter sp. DNS(32).

  20. Using machine learning to assess covariate balance in matching studies.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2016-12-01

    In order to assess the effectiveness of matching approaches in observational studies, investigators typically present summary statistics for each observed pre-intervention covariate, with the objective of showing that matching reduces the difference in means (or proportions) between groups to as close to zero as possible. In this paper, we introduce a new approach to distinguish between study groups based on their distributions of the covariates using a machine-learning algorithm called optimal discriminant analysis (ODA). Assessing covariate balance using ODA as compared with the conventional method has several key advantages: the ability to ascertain how individuals self-select based on optimal (maximum-accuracy) cut-points on the covariates; the application to any variable metric and number of groups; its insensitivity to skewed data or outliers; and the use of accuracy measures that can be widely applied to all analyses. Moreover, ODA accepts analytic weights, thereby extending the assessment of covariate balance to any study design where weights are used for covariate adjustment. By comparing the two approaches using empirical data, we are able to demonstrate that using measures of classification accuracy as balance diagnostics produces highly consistent results to those obtained via the conventional approach (in our matched-pairs example, ODA revealed a weak statistically significant relationship not detected by the conventional approach). Thus, investigators should consider ODA as a robust complement, or perhaps alternative, to the conventional approach for assessing covariate balance in matching studies. © 2016 John Wiley & Sons, Ltd.
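
    The core idea of a maximum-accuracy cut-point can be sketched in a few lines (this is a simplified illustration, not the ODA software): scan candidate cut-points on a covariate and keep the one, in either orientation, that classifies group membership with the highest (optionally weighted) accuracy.

    ```python
    import numpy as np

    # Simplified maximum-accuracy cut-point search; synthetic data only.
    def best_cutpoint(x, group, weights=None):
        w = np.ones_like(x, dtype=float) if weights is None else weights
        best_c, best_acc = None, -np.inf
        for c in np.unique(x):
            pred = x >= c                     # classify by the cut-point
            for direction in (pred, ~pred):   # try both orientations
                acc = np.average(direction == group, weights=w)
                if acc > best_acc:
                    best_c, best_acc = c, acc
        return best_c, best_acc

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
    group = np.concatenate([np.zeros(200, bool), np.ones(200, bool)])
    print(best_cutpoint(x, group))
    ```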

  1. Optimizing liquid effluent monitoring at a large nuclear complex.

    PubMed

    Chou, Charissa J; Barnett, D Brent; Johnson, Vernon G; Olson, Phil M

    2003-12-01

    Effluent monitoring typically requires a large number of analytes and samples during the initial or startup phase of a facility. Once a baseline is established, the analyte list and sampling frequency may be reduced. Although there is a large body of literature relevant to the initial design, few, if any, published papers exist on updating established effluent monitoring programs. This paper statistically evaluates four years of baseline data to optimize the liquid effluent monitoring efficiency of a centralized waste treatment and disposal facility at a large defense nuclear complex. Specific objectives were to: (1) assess temporal variability in analyte concentrations, (2) determine operational factors contributing to waste stream variability, (3) assess the probability of exceeding permit limits, and (4) streamline the sampling and analysis regime. Results indicated that the probability of exceeding permit limits was one in a million under normal facility operating conditions, sampling frequency could be reduced, and several analytes could be eliminated. Furthermore, indicators such as gross alpha and gross beta measurements could be used in lieu of more expensive specific isotopic analyses (radium, cesium-137, and strontium-90) for routine monitoring. Study results were used by the state regulatory agency to modify monitoring requirements for a new discharge permit, resulting in an annual cost savings of US dollars 223,000. This case study demonstrates that statistical evaluation of effluent contaminant variability coupled with process knowledge can help plant managers and regulators streamline analyte lists and sampling frequencies based on detection history and environmental risk.
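
    The exceedance-probability calculation can be approximated with a simple parametric sketch (synthetic concentrations and a hypothetical permit limit, not the facility's data): fit a lognormal distribution to the baseline measurements and evaluate its upper tail at the limit.

    ```python
    import numpy as np
    from scipy.stats import lognorm

    # Back-of-the-envelope exceedance estimate on synthetic baseline data.
    rng = np.random.default_rng(11)
    baseline = rng.lognormal(mean=-1.0, sigma=0.5, size=200)   # e.g. mg/L
    permit_limit = 3.0                                         # hypothetical

    shape, loc, scale = lognorm.fit(baseline, floc=0)
    p_exceed = lognorm.sf(permit_limit, shape, loc=loc, scale=scale)
    print(f"estimated probability of exceeding the permit limit: {p_exceed:.2e}")
    ```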

  2. Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay

    2016-10-01

    Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
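
    The structure of the statistic can be conveyed with a stripped-down sketch in the background-dominated limit, using our own simplified normalization with unit flux zero points; consult the paper for the exact closed-form expressions.

    ```python
    import numpy as np

    # Schematic proper-subtraction-style difference statistic (simplified
    # normalization, unit flux zero points; not the paper's full expressions).
    def difference_statistic(new, ref, psf_new, psf_ref, sigma_new, sigma_ref):
        N, R = np.fft.fft2(new), np.fft.fft2(ref)
        Pn, Pr = np.fft.fft2(psf_new), np.fft.fft2(psf_ref)
        denom = np.sqrt(sigma_new**2 * np.abs(Pr)**2 + sigma_ref**2 * np.abs(Pn)**2)
        D_hat = (Pr * N - Pn * R) / denom          # proper difference image
        return np.real(np.fft.ifft2(D_hat))

    # Tiny synthetic example: a fake transient added to the 'new' image
    rng = np.random.default_rng(5)
    ref = rng.normal(0, 1, (64, 64))
    new = ref + rng.normal(0, 1, (64, 64)) * 0.1
    new[32, 32] += 25.0
    psf = np.zeros((64, 64)); psf[0, 0] = 1.0      # delta-function PSFs
    D = difference_statistic(new, ref, psf, psf, 1.0, 1.0)
    print("peak of the difference statistic at", np.unravel_index(D.argmax(), D.shape))
    ```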

  3. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    PubMed

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

    The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of the observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used paired comparisons. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, as compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, as compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  4. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  5. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  6. Recent developments in the structural design and optimization of ITER neutral beam manifold

    NASA Astrophysics Data System (ADS)

    Chengzhi, CAO; Yudong, PAN; Zhiwei, XIA; Bo, LI; Tao, JIANG; Wei, LI

    2018-02-01

    This paper describes a new design of the neutral beam manifold based on a more optimized support system. A proposed alternative scheme is presented to replace the former complex manifold supports and internal pipe supports in the final design phase. Both the structural reliability and the feasibility were confirmed with detailed analyses. Comparative analyses between two typical types of manifold support scheme were performed. All relevant results of the mechanical analyses for typical operation scenarios and fault conditions are presented. Future optimization activities are described, which will give useful information for a refined setting of the components in the next phase.

  7. Missing value imputation for microarray data: a comprehensive comparison study and a web tool.

    PubMed

    Chiu, Chia-Chun; Chan, Shih-Yao; Wang, Chung-Ching; Wu, Wei-Sheng

    2013-01-01

    Microarray data are usually peppered with missing values due to various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed for improving the performance of microarray data analyses. Although many algorithms have been developed, there are many debates on the selection of the optimal algorithm. The studies about the performance comparison of different algorithms are still incomprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether the datasets from different species have different impact on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of a dataset but not on the species where the samples come from. In addition to the statistical measure, two other measures with biological meanings are useful to reflect the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices to handle missing values for most of the microarray datasets. In this work, we carried out a comprehensive comparison of the algorithms for microarray missing value imputation. Based on such a comprehensive comparison, researchers could choose the optimal algorithm for their datasets easily. Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses.
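
    As a flavour of what such imputation algorithms do (this is a generic k-nearest-neighbour sketch, not the MissVIA tool or the local-least-squares methods recommended above), each missing entry is filled from the most similar rows, with similarity measured over the co-observed columns.

    ```python
    import numpy as np

    # Minimal KNN-style imputation sketch on a synthetic expression-like matrix.
    def knn_impute(X, k=5):
        X = X.copy()
        missing = np.isnan(X)
        for i in np.where(missing.any(axis=1))[0]:
            obs = ~missing[i]
            # distance to every other row over the columns observed in row i
            d = np.nanmean((X[:, obs] - X[i, obs]) ** 2, axis=1)
            d[i] = np.inf
            neighbours = np.argsort(d)[:k]
            for j in np.where(missing[i])[0]:
                vals = X[neighbours, j]
                vals = vals[~np.isnan(vals)]
                if vals.size:
                    X[i, j] = vals.mean()
        return X

    rng = np.random.default_rng(2)
    data = rng.normal(size=(100, 20))
    data[rng.random(data.shape) < 0.05] = np.nan     # 5% missing at random
    print(np.isnan(knn_impute(data)).sum(), "missing values remain")
    ```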

  8. What Would I Know About Mercy? Faith and Optimistic Expectancies Among African Americans

    PubMed Central

    Mattis, Jacqueline S.; Powell, Wizdom; Grayman, Nyasha A.; Murray, Yohance; Cole-Lewis, Yasmin C.; Goodwill, Janelle R.

    2017-01-01

    A small body of research has begun to explore the association between faith and optimism among African Americans. However, missing from the extant work is an examination of the extent to which traditional indices of religious commitment work together with beliefs about God to shape optimism. The present study examines the utility of indices of social location, religious commitment (i.e., early and current religious service attendance, subjective religiosity), belief about the quality of one’s relationship with God (i.e., a belief that one is connected to a loving God), and beliefs about being the recipient of divine forgiveness for predicting dispositional optimism among a sample of community residing African American adults (N = 241). Age, subjective religiosity, and organizational religiosity were positively related to optimism in bivariate analyses. Hierarchical regression analyses demonstrated a significant association between age, subjective religiosity, and optimism; however, those associations were eliminated once relationship with God and belief in one’s forgiveness by God were entered into the model. Only belief in God’s love predicted optimism in multivariate analyses. Serial mediation analyses revealed that beliefs about the quality of one’s relationship with God and belief in divine forgiveness fully mediated the relationship between subjective religiosity and optimism, but that the relationship is driven largely by relationship with God. Implications of these findings are discussed. PMID:29057016

  9. Optimality, stochasticity, and variability in motor behavior

    PubMed Central

    Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel

    2008-01-01

    Recent theories of motor control have proposed that the nervous system acts as a stochastically optimal controller, i.e. it plans and executes motor behaviors taking into account the nature and statistics of noise. Detrimental effects of noise are converted into a principled way of controlling movements. Attractive aspects of such theories are their ability to explain not only characteristic features of single motor acts, but also statistical properties of repeated actions. Here, we present a critical analysis of stochastic optimality in motor control which reveals several difficulties with this hypothesis. We show that stochastic control may not be necessary to explain the stochastic nature of motor behavior, and we propose an alternative framework, based on the action of a deterministic controller coupled with an optimal state estimator, which relieves drawbacks of stochastic optimality and appropriately explains movement variability. PMID:18202922

  10. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  11. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
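
    The non-dominated filtering at the heart of such a multi-objective inversion can be sketched directly; the two 'misfit' columns below are synthetic stand-ins for the data-set misfits an actual inversion would produce.

    ```python
    import numpy as np

    # Extract the Pareto (non-dominated) front from a population of candidate
    # models scored on two objectives, both to be minimized. Synthetic values.
    def pareto_front(objectives):
        front = []
        for i, p in enumerate(objectives):
            dominated = any(
                np.all(q <= p) and np.any(q < p)
                for j, q in enumerate(objectives) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    rng = np.random.default_rng(4)
    misfits = rng.random((200, 2))      # (misfit_dataset_1, misfit_dataset_2)
    front = pareto_front(misfits)
    print(len(front), "models on the trade-off curve")
    ```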

  12. Optimal predictions in everyday cognition: the wisdom of individuals or crowds?

    PubMed

    Mozer, Michael C; Pashler, Harold; Homaei, Hadjar

    2008-10-01

    Griffiths and Tenenbaum (2006) asked individuals to make predictions about the duration or extent of everyday events (e.g., cake baking times), and reported that predictions were optimal, employing Bayesian inference based on veridical prior distributions. Although the predictions conformed strikingly to statistics of the world, they reflect averages over many individuals. On the conjecture that the accuracy of the group response is chiefly a consequence of aggregating across individuals, we constructed simple, heuristic approximations to the Bayesian model premised on the hypothesis that individuals have access merely to a sample of k instances drawn from the relevant distribution. The accuracy of the group response reported by Griffiths and Tenenbaum could be accounted for by supposing that individuals each utilize only two instances. Moreover, the variability of the group data is more consistent with this small-sample hypothesis than with the hypothesis that people utilize veridical or nearly veridical representations of the underlying prior distributions. Our analyses lead to a qualitatively different view of how individuals reason from past experience than the view espoused by Griffiths and Tenenbaum. 2008 Cognitive Science Society, Inc.

  13. Optimal and fast E/B separation with a dual messenger field

    NASA Astrophysics Data System (ADS)

    Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-05-01

    We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.

  14. Dispositional and Explanatory Style Optimism as Potential Moderators of the Relationship between Hopelessness and Suicidal Ideation

    ERIC Educational Resources Information Center

    Hirsch, Jameson K.; Conner, Kenneth R.

    2006-01-01

    To test the hypothesis that higher levels of optimism reduce the association between hopelessness and suicidal ideation, 284 college students completed self-report measures of optimism and Beck scales for hopelessness, suicidal ideation, and depression. A statistically significant interaction between hopelessness and one measure of optimism was…

  15. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
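
    A minimal example of the mixed-effects approach advocated here, on synthetic repeated-measures data and using a random intercept per host to absorb among-host correlation (variable names are invented):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic repeated measures: parasitaemia measured daily in several hosts.
    rng = np.random.default_rng(6)
    hosts, days = 12, 10
    records = []
    for h in range(hosts):
        host_effect = rng.normal(0, 1.0)          # host-level random intercept
        for d in range(days):
            y = 5 + 0.3 * d + host_effect + rng.normal(0, 0.5)
            records.append({"host": h, "day": d, "parasitaemia": y})
    data = pd.DataFrame(records)

    # Mixed model: fixed effect of day, random intercept for each host
    model = smf.mixedlm("parasitaemia ~ day", data, groups=data["host"])
    print(model.fit().summary())
    ```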

  16. Empirical validation of statistical parametric mapping for group imaging of fast neural activity using electrical impedance tomography.

    PubMed

    Packham, B; Barnes, G; Dos Santos, G Sato; Aristovich, K; Gilad, O; Ghosh, A; Oh, T; Holder, D

    2016-06-01

    Electrical impedance tomography (EIT) allows for the reconstruction of internal conductivity from surface measurements. A change in conductivity occurs as ion channels open during neural activity, making EIT a potential tool for functional brain imaging. EIT images can have >10 000 voxels, which means statistical analysis of such images presents a substantial multiple testing problem. One way to optimally correct for these issues and still maintain the flexibility of complicated experimental designs is to use random field theory. This parametric method estimates the distribution of peaks one would expect by chance in a smooth random field of a given size. Random field theory has been used in several other neuroimaging techniques but has never been validated for EIT images of fast neural activity; such validation can be achieved using non-parametric techniques. Both parametric and non-parametric techniques were used to analyze a set of 22 images collected from 8 rats. Significant group activations were detected using both techniques (corrected p < 0.05). Both parametric and non-parametric analyses yielded similar results, although the latter was less conservative. These results demonstrate the first statistical analysis of such an image set and indicate that such an analysis is an approach for EIT images of neural activity.

  18. Diagnostic accuracy of the clinical and CT findings for differentiating Kikuchi's disease and tuberculous lymphadenitis presenting with cervical lymphadenopathy.

    PubMed

    Baek, Hye Jin; Lee, Jeong Hyun; Lim, Hyun Kyung; Lee, Ha Young; Baek, Jung Hwan

    2014-11-01

    To determine the optimal clinical and CT findings for differentiating Kikuchi's disease (KD) and tuberculous lymphadenitis (TB) in patients presenting with cervical lymphadenopathy. From 2006 to 2010, 87 consecutive patients who were finally diagnosed with KD or TB were enrolled. Two radiologists performed independent analysis of contrast-enhanced neck CT images with regard to the involvement pattern, nodal or perinodal changes, and evidence of previous infection. Significant clinical and CT findings of KD were determined by statistical analyses. Of the 87 patients, 27 (31%) were classified as having KD and 60 (69%) as having TB. Statistically significant findings in KD patients were younger age, presence of fever, involvement of ≥5 nodal levels or the bilateral neck, no or minimal nodal necrosis, marked perinodal infiltration, and no evidence of upper lung lesion or mediastinal lymphadenopathy. The presence of four or more statistically significant clinical and CT findings of KD had the largest area under the receiver-operating characteristic curve (Az = 0.861; 95% confidence interval, 0.801-0.909), with a sensitivity of 89% and specificity of 83%. CT can be a helpful tool for differentiating KD from TB, especially when it is combined with the clinical findings.

  19. Multivariate statistical approach to estimate mixing proportions for unknown end members

    USGS Publications Warehouse

    Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.

    2012-01-01

    A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.

  20. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    PubMed

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games, as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine the level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume with increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume with increasing crowd size for concerts. A comparison of similar crowd sizes for each of the three events showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size, without considering other influences such as event type and duration, may give poor results.
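
    The core analysis described above is a correlation of crowd size against patient counts; the sketch below shows the nonparametric (Spearman) and simple regression versions on made-up event records.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Made-up event records: attendance and number of patients treated per event
    crowd = rng.integers(20_000, 50_500, 60)
    patients = rng.poisson(5 + 0.00005 * crowd)         # weak dependence on crowd size

    rho, p_spearman = stats.spearmanr(crowd, patients)  # nonparametric correlation
    slope, intercept, r, p_linear, stderr = stats.linregress(crowd, patients)
    print(f"Spearman rho = {rho:.2f} (p = {p_spearman:.3f}); "
          f"regression slope = {slope:.2e} patients per attendee")
    ```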

  1. Optimism bias leads to inconclusive results - an empirical study

    PubMed Central

    Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T.; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J.

    2010-01-01

    Objective: Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully, and explored whether poor accrual or optimism bias is responsible for inconclusive results. Study Design: Systematic review. Setting: Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Results: 359 trials (374 comparisons) enrolling 150,232 patients were analyzed. 70% (262/374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273/374) of studies. Investigators’ judgments and statistical inferences were concordant in 75% (279/374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected over observed hazard ratio or odds ratio was 1.34 (range 0.19 – 15.40) in conclusive trials compared to 1.86 (range 1.09 – 12.00) in inconclusive studies (p<0.0001). Only 17% of the trials had treatment effects that matched original researchers’ expectations. Conclusion: Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. PMID:21163620

  2. Optimism bias leads to inconclusive results-an empirical study.

    PubMed

    Djulbegovic, Benjamin; Kumar, Ambuj; Magazin, Anja; Schroen, Anneke T; Soares, Heloisa; Hozo, Iztok; Clarke, Mike; Sargent, Daniel; Schell, Michael J

    2011-06-01

    Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not answer their research question successfully and explored whether poor accrual or optimism bias is responsible for inconclusive results. Systematic review. Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. Three hundred fifty-nine trials (374 comparisons) enrolling 150,232 patients were analyzed. Seventy percent (262 of 374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273 of 374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279 of 374) of trials. Investigators consistently overestimated their expected treatment effects but to a significantly larger extent for inconclusive trials. The median ratio of expected over observed hazard ratio or odds ratio was 1.34 (range: 0.19-15.40) in conclusive trials compared with 1.86 (range: 1.09-12.00) in inconclusive studies (P<0.0001). Only 17% of the trials had treatment effects that matched original researchers' expectations. Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Effect of experimental design on the prediction performance of calibration models based on near-infrared spectroscopy for pharmaceutical applications.

    PubMed

    Bondi, Robert W; Igne, Benoît; Drennen, James K; Anderson, Carl A

    2012-12-01

    Near-infrared spectroscopy (NIRS) is a valuable tool in the pharmaceutical industry, presenting opportunities for online analyses to achieve real-time assessment of intermediates and finished dosage forms. The purpose of this work was to investigate the effect of experimental designs on prediction performance of quantitative models based on NIRS using a five-component formulation as a model system. The following experimental designs were evaluated: five-level, full factorial (5-L FF); three-level, full factorial (3-L FF); central composite; I-optimal; and D-optimal. The factors for all designs were acetaminophen content and the ratio of microcrystalline cellulose to lactose monohydrate. Other constituents included croscarmellose sodium and magnesium stearate (content remained constant). Partial least squares-based models were generated using data from individual experimental designs that related acetaminophen content to spectral data. The effect of each experimental design was evaluated by determining the statistical significance of the difference in bias and standard error of the prediction for that model's prediction performance. The calibration model derived from the I-optimal design had similar prediction performance as did the model derived from the 5-L FF design, despite containing 16 fewer design points. It also outperformed all other models estimated from designs with similar or fewer numbers of samples. This suggested that experimental-design selection for calibration-model development is critical, and optimum performance can be achieved with efficient experimental designs (i.e., optimal designs).
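
    A PLS calibration of the kind evaluated above can be sketched with scikit-learn; the "spectra" below are synthetic and the design-comparison logic of the paper is not reproduced.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(4)
    # Synthetic calibration set: 25 design points, 200-channel "spectra"
    n_samples, n_channels = 25, 200
    apap = rng.uniform(5.0, 35.0, n_samples)            # acetaminophen content (% w/w), hypothetical range
    loadings = rng.normal(0.0, 1.0, n_channels)
    spectra = np.outer(apap, loadings) + rng.normal(0.0, 0.5, (n_samples, n_channels))

    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, spectra, apap, cv=5).ravel()

    bias = np.mean(pred - apap)
    sep = np.std(pred - apap, ddof=1)                   # standard error of prediction
    print(f"cross-validated bias = {bias:.3f}, SEP = {sep:.3f}")
    ```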

  4. Optimal Cost Avoidance Investment and Pricing Strategies for Performance-Based Post-Production Service Contracts

    DTIC Science & Technology

    2011-04-30

    a BS degree in Mathematics and an MS degree in Statistics and Financial and Actuarial Mathematics from Kiev National Taras Shevchenko University...degrees from Rutgers University in Industrial Engineering (PhD and MS) and Statistics (MS) and from Universidad Nacional Autonoma de Mexico in Actuarial ...Science. His research efforts focus on developing mathematical models for the analysis, computation, and optimization of system performance with

  5. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
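
    For context, the classical (unfiltered) version of this criterion is the linear discriminant direction: the weight vector proportional to the inverse pooled covariance times the mean difference. The sketch below computes it on simulated two-group data; the nonlinear filter U and its lasso extension are not implemented here.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    # Simulated two-group data with equal covariance (controls vs cases), p = 10 variables
    p = 10
    controls = rng.multivariate_normal(np.zeros(p), np.eye(p), 100)
    cases = rng.multivariate_normal(np.r_[1.0, np.zeros(p - 1)], np.eye(p), 80)

    def pooled_cov(x0, x1):
        n0, n1 = len(x0), len(x1)
        return ((n0 - 1) * np.cov(x0, rowvar=False) +
                (n1 - 1) * np.cov(x1, rowvar=False)) / (n0 + n1 - 2)

    # Direction maximizing the standardized mean difference (classical linear discriminant)
    w = np.linalg.solve(pooled_cov(controls, cases), cases.mean(axis=0) - controls.mean(axis=0))

    s0, s1 = controls @ w, cases @ w
    t_stat = (s1.mean() - s0.mean()) / np.sqrt(s1.var(ddof=1) / len(s1) + s0.var(ddof=1) / len(s0))
    print(f"t-statistic of the projected scores = {t_stat:.2f}")
    ```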

  6. Use of missing data methods in longitudinal studies: the persistence of bad practices in developmental psychology.

    PubMed

    Jelicić, Helena; Phelps, Erin; Lerner, Richard M

    2009-07-01

    Developmental science rests on describing, explaining, and optimizing intraindividual changes and, hence, empirically requires longitudinal research. Problems of missing data arise in most longitudinal studies, thus creating challenges for interpreting the substance and structure of intraindividual change. Using a sample of reports of longitudinal studies obtained from three flagship developmental journals-Child Development, Developmental Psychology, and Journal of Research on Adolescence-we examined the number of longitudinal studies reporting missing data and the missing data techniques used. Of the 100 longitudinal studies sampled, 57 either reported having missing data or had discrepancies in sample sizes reported for different analyses. The majority of these studies (82%) used missing data techniques that are statistically problematic (either listwise deletion or pairwise deletion) and not among the methods recommended by statisticians (i.e., the direct maximum likelihood method and the multiple imputation method). Implications of these results for developmental theory and application, and the need for understanding the consequences of using statistically inappropriate missing data techniques with actual longitudinal data sets, are discussed.
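
    As a contrast to listwise deletion, the sketch below shows one of the recommended alternatives, multiple imputation, using scikit-learn's IterativeImputer on simulated three-wave longitudinal scores. The pooling step is deliberately simplified (Rubin's rules for standard errors are omitted).

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(6)
    # Simulated longitudinal scores at 3 waves, with ~20% of values missing at random
    n = 200
    latent = rng.normal(50.0, 10.0, n)
    waves = np.column_stack([latent + rng.normal(0.0, 5.0, n) for _ in range(3)])
    waves[rng.random(waves.shape) < 0.2] = np.nan

    # Multiple imputation: impute several times with posterior sampling and pool the estimates
    estimates = []
    for seed in range(5):
        imputed = IterativeImputer(sample_posterior=True, random_state=seed).fit_transform(waves)
        estimates.append(imputed[:, 2].mean() - imputed[:, 0].mean())   # wave-3 minus wave-1 change

    print(f"pooled change estimate = {np.mean(estimates):.2f} "
          f"(between-imputation SD = {np.std(estimates, ddof=1):.2f})")
    ```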

  7. Lamb survival analysis from birth to weaning in Iranian Kermani sheep.

    PubMed

    Barazandeh, Arsalan; Moghbeli, Sadrollah Molaei; Vatankhah, Mahmood; Hossein-Zadeh, Navid Ghavi

    2012-04-01

    Survival records from 1,763 Kermani lambs born between 1996 and 2004 from 294 ewes and 81 rams were used to determine genetic and non-genetic factors affecting lamb survival. Traits included were lamb survival across five periods from birth to 7, 14, 56, 70, and 90 days of age. Traits were analyzed under Weibull proportional hazard sire models. Several binary analyses were also conducted using animal models. Statistical models included the fixed class effects of sex of lamb, month and year of birth, a covariate effect of birth weight, and random genetic effects of both sire (in survival analyses) and animal (in binary analyses). The average survival to 90 days of age was 94.8%. Hazard rates ranged from 1.00 (birth to 90 days of age) to 1.73 (birth to 7 days of age) between the two sexes indicating that male lambs were at higher risk of mortality than females (P < 0.01). This study also revealed a curvilinear relationship between lamb survival and lamb birth weight, suggesting that viability and birth weight could be considered simultaneously in the selection programs to obtain optimal birth weight in Kermani lambs. Estimates of heritabilities from survival analyses were medium and ranged from 0.23 to 0.29. In addition, heritability estimates obtained from binary analyses were low and varied from 0.04 to 0.09. The results of this study suggest that progress in survival traits could be possible through managerial strategies and genetic selection.

  8. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on a convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
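
    The flavor of this formulation, maximizing a concave (Gaussian) log-likelihood, equivalently minimizing a sum of squares, under a convex trend prior, can be sketched with CVXPY. The piecewise-linear trend, noise level, and penalty weight below are toy choices, not the thesis's models.

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(7)
    n = 200
    t = np.arange(n, dtype=float)
    true_trend = np.where(t < 100, 0.02 * t, 2.0 - 0.01 * (t - 100))    # toy piecewise-linear trend
    y = true_trend + rng.normal(0.0, 0.3, n)

    x = cp.Variable(n)
    # For Gaussian noise the log-likelihood is concave in the trend, so maximizing it is the
    # same as minimizing the sum of squares; an l1 penalty on second differences encodes a
    # convex prior favouring piecewise-linear trends.
    objective = cp.Minimize(cp.sum_squares(y - x) + 50 * cp.norm1(cp.diff(x, 2)))
    cp.Problem(objective).solve()
    print(f"estimated trend at t = 100: {float(x.value[100]):.3f} (true value 2.0)")
    ```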

  9. Conceptual Design Oriented Wing Structural Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Lau, May Yuen

    1996-01-01

    Airplane optimization has always been the goal of airplane designers. In the conceptual design phase, a designer's goals may involve tradeoffs among maximum structural integrity, minimum aerodynamic drag, and maximum stability and control, objectives that are often pursued separately. Bringing all of these factors into an iterative preliminary design procedure was time-consuming, tedious, and not always accurate. For example, the final weight estimate would often be based upon statistical data from past airplanes. The new design would be classified based on gross characteristics, such as number of engines, wingspan, etc., to see which airplanes of the past most closely resembled the new design. This procedure works well for conventional airplane designs, but not very well for new innovative designs. With the computing power of today, new methods are emerging for the conceptual design phase of airplanes. Using finite element methods, computational fluid dynamics, and other computer techniques, designers can make very accurate disciplinary analyses of an airplane design. These tools are computationally intensive, and when used repeatedly, they consume a great deal of computing time. In order to reduce the time required to analyze a design and still bring together all of the disciplines (such as structures, aerodynamics, and controls) into the analysis, simplified design computer analyses are linked together into one computer program. These design codes are very efficient for conceptual design. The work in this thesis is focused on a finite-element-based, conceptual-design-oriented structural synthesis capability (CDOSS) tailored to be linked into ACSYNT.

  10. Microcomputed tomographic and histomorphometric analyses of novel titanium mesh membranes for guided bone regeneration: a study in rat calvarial defects.

    PubMed

    Rakhmatia, Yunia Dwi; Ayukawa, Yasunori; Furuhashi, Akihiro; Koyano, Kiyoshi

    2014-01-01

    The objective of this study was to evaluate the optimal thickness and porosity of novel titanium mesh membranes to enhance bone augmentation, prevent soft tissue ingrowth, and prevent membrane exposure. Six types of novel titanium meshes with different thicknesses and pore sizes, along with three commercially available membranes, were used to cover surgically created calvarial defects in 6-week-old Sprague-Dawley rats. The animals were killed after 4 or 8 weeks. Microcomputed tomographic analyses were performed to analyze the three-dimensional bone volume and bone mineral density. Soft tissue ingrowth was also evaluated histologically and histomorphometrically. The novel titanium membranes used in this study were as effective at augmenting bone in the rat calvarial defect model as the commercially available membranes. The greatest bone volume was observed on 100-μm-thick membranes with larger pores, although these membranes promoted growth of bone with lower mineral density. Soft tissue ingrowth when 100-μm membranes were used was increased at 4 weeks but decreased again by 8 weeks to a level not statistically significantly different from other membranes. Membrane thickness affects the total amount of new bone formation, and membrane porosity is an essential factor for guided bone regeneration, especially during the initial healing period, although the final bone volume obtained is essentially the same. Newly developed titanium mesh membranes of 100 μm in thickness and with large pores appear to be optimal for guided bone regeneration.

  11. Debating Curricular Strategies for Teaching Statistics and Research Methods: What Does the Current Evidence Suggest?

    ERIC Educational Resources Information Center

    Barron, Kenneth E.; Apple, Kevin J.

    2014-01-01

    Coursework in statistics and research methods is a core requirement in most undergraduate psychology programs. However, is there an optimal way to structure and sequence methodology courses to facilitate student learning? For example, should statistics be required before research methods, should research methods be required before statistics, or…

  12. Biomechanical Analysis of Military Boots. Phase 1. Materials Testing of Military and Commercial Footwear

    DTIC Science & Technology

    1992-10-01

    N=8) and Results of 44 Statistical Analyses for Impact Test Performed on Forefoot of Unworn Footwear A-2. Summary Statistics (N=8) and Results of...on Forefoot of Worn Footwear B-2. Summary Statistics (N=4) and Results of 76 Statistical Analyses for Impact...used tests to assess heel and forefoot shock absorption, upper and sole durability, and flexibility (Cavanagh, 1978). Later, the number of tests was

  13. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background: Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods: We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results: Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions: Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
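
    The standard quantities mentioned above (Cochran's Q, I2, and the DerSimonian-Laird random-effects estimate built on them) can be computed in a few lines; the per-trial log hazard ratios below are invented for illustration, and the 'generalised' Q of the paper is not implemented.

    ```python
    import numpy as np
    from scipy import stats

    # Invented per-trial log hazard ratios and their standard errors
    log_hr = np.array([-0.25, -0.10, -0.40, 0.05, -0.30, -0.15])
    se = np.array([0.12, 0.15, 0.20, 0.10, 0.18, 0.14])

    w = 1.0 / se**2                                   # inverse-variance (fixed-effect) weights
    pooled_fe = np.sum(w * log_hr) / np.sum(w)

    q = np.sum(w * (log_hr - pooled_fe) ** 2)         # Cochran's Q
    df = len(log_hr) - 1
    p_het = stats.chi2.sf(q, df)
    i2 = max(0.0, (q - df) / q) * 100                 # I^2 as a percentage

    # DerSimonian-Laird between-study variance and the random-effects pooled estimate
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    pooled_re = np.sum(w_re * log_hr) / np.sum(w_re)

    print(f"Q = {q:.2f} (p = {p_het:.3f}), I2 = {i2:.1f}%, "
          f"pooled HR (random effects) = {np.exp(pooled_re):.2f}")
    ```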

  14. Algorithms and analyses for stochastic optimization for turbofan noise reduction using parallel reduced-order modeling

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Gunzburger, Max

    2017-06-01

    Simulation-based optimization of acoustic liner design in a turbofan engine nacelle for noise reduction purposes can dramatically reduce the cost and time needed for experimental designs. Because uncertainties are inevitable in the design process, a stochastic optimization algorithm is posed based on the conditional value-at-risk measure so that an ideal acoustic liner impedance is determined that is robust in the presence of uncertainties. A parallel reduced-order modeling framework is developed that dramatically improves the computational efficiency of the stochastic optimization solver for a realistic nacelle geometry. The reduced stochastic optimization solver takes less than 500 seconds to execute. In addition, well-posedness and finite element error analyses of the state system and optimization problem are provided.
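
    The conditional value-at-risk measure used above has a convenient convex sample formulation (Rockafellar-Uryasev). The sketch below applies it to a toy scalar design variable with sampled uncertain cost coefficients; nothing about the actual acoustic-liner model is reproduced.

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(8)
    # Toy problem: choose a scalar design z to minimize the CVaR (at level 0.9) of an
    # uncertain convex cost whose coefficients are sampled scenarios.
    n_scenarios = 500
    a = rng.uniform(0.5, 1.5, n_scenarios)         # uncertain (positive) quadratic coefficients
    b = rng.normal(2.0, 0.5, n_scenarios)          # uncertain linear coefficients

    z = cp.Variable()
    t = cp.Variable()                              # value-at-risk auxiliary variable
    alpha = 0.9
    cost = a * cp.square(z) - b * z                # per-scenario cost, convex in z
    cvar = t + cp.sum(cp.pos(cost - t)) / (n_scenarios * (1 - alpha))

    cp.Problem(cp.Minimize(cvar)).solve()
    print(f"risk-averse design z = {float(z.value):.3f}, CVaR = {float(cvar.value):.3f}")
    ```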

  15. [Applications of the hospital statistics management system].

    PubMed

    Zhai, Hong; Ren, Yong; Liu, Jing; Li, You-Zhang; Ma, Xiao-Long; Jiao, Tao-Tao

    2008-01-01

    The Hospital Statistics Management System is built on the Office Automation Platform of the Shandong provincial hospital system. Its workflow, role, and permission-management technologies are used to standardize and optimize statistics management procedures within the total quality control of hospital statistics. The system's application combines the office automation platform with statistics management in a hospital, providing a practical example of a modern hospital statistics management model.

  16. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
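
    An a priori power analysis of the kind the authors call for can be done in a few lines with statsmodels; the sketch below uses a two-group t-test with Cohen's conventional small/medium/large effect sizes, and the group size of 30 is arbitrary.

    ```python
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Power of a two-group t-test with 30 participants per group at alpha = .05
    for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
        power = analysis.power(effect_size=d, nobs1=30, alpha=0.05, ratio=1.0)
        print(f"{label} effect (d = {d}): power = {power:.2f}")

    # A priori sample size per group needed to reach 80% power for a medium effect
    n_required = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05, ratio=1.0)
    print(f"n per group for 80% power at d = 0.5: {n_required:.1f}")
    ```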

  17. Horsetail matching: a flexible approach to optimization under uncertainty

    NASA Astrophysics Data System (ADS)

    Cook, L. W.; Jarrett, J. P.

    2018-04-01

    It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
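
    The central quantity of this formulation, a kernel-smoothed (hence differentiable) empirical CDF compared against a target CDF, can be sketched as follows; the quantity of interest, target, and bandwidth are toy assumptions, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def smoothed_cdf(samples, grid, bandwidth=0.05):
        """Kernel-smoothed (differentiable) empirical CDF evaluated on grid."""
        z = (grid[:, None] - samples[None, :]) / bandwidth
        return (1.0 / (1.0 + np.exp(-z))).mean(axis=1)     # average of logistic kernel steps

    def cdf_mismatch(design, n_samples=2000):
        """Mean squared difference between the design's smoothed CDF and a target CDF."""
        u = rng.normal(0.0, 0.1, n_samples)                # uncertain input
        qoi = (design - 1.0) ** 2 + design * u             # toy quantity of interest (smaller is better)
        grid = np.linspace(-1.0, 3.0, 400)
        target = (grid >= 0.0).astype(float)               # target: all probability mass at qoi <= 0
        return np.mean((smoothed_cdf(qoi, grid) - target) ** 2)

    designs = np.linspace(0.0, 2.0, 21)
    best = designs[np.argmin([cdf_mismatch(d) for d in designs])]
    print(f"design value minimizing the CDF mismatch: {best:.2f}")
    ```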

  18. Scenario based optimization of a container vessel with respect to its projected operating conditions

    NASA Astrophysics Data System (ADS)

    Wagner, Jonas; Binkowski, Eva; Bronsart, Robert

    2014-06-01

    In this paper the scenario based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile generated from noon-to-noon reports of a comparable 3600 TEU container vessel and specific development functions representing the growth of global economy during the vessels service time. In order to consider uncertainties, statistical fluctuations are added. An analysis of these data lead to a number of most probable upcoming operating conditions (OC) the vessel will stay in the future. According to their respective likeliness an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametrical optimization workbench FRIENDSHIP Framework. In the following this evaluation is done with respect to vessel's calculated effective power based on the usage of potential flow code. The evaluation shows, that the usage of scenarios within the optimization process has a strong influence on the hull form.

  19. Optimization of Thick, Large Area YBCO Film Growth Through Response Surface Methods

    NASA Astrophysics Data System (ADS)

    Porzio, J.; Mahoney, C. H.; Sullivan, M. C.

    2014-03-01

    We present our work on the optimization of thick, large-area YBa2Cu3O7-δ (YBCO) film growth through response surface methods. Thick, large-area films have commercial uses and have recently been used in dramatic demonstrations of levitation and suspension. Our films are grown via pulsed laser deposition (PLD) and we have optimized growth parameters via response surface methods. Response surface methodology is a statistical tool for optimizing selected quantities with respect to a set of variables. We optimized our YBCO films' critical temperatures, thicknesses, and structures with respect to three PLD growth parameters: deposition temperature, laser energy, and deposition pressure. We will present an overview of YBCO growth via pulsed laser deposition, the statistical theory behind response surface methods, and the application of response surface methods to pulsed laser deposition growth of YBCO. Results from the experiment will be presented in a discussion of the optimized film quality. Supported by NSF grant DMR-1305637
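
    The statistical core of response surface methodology, fitting a second-order polynomial to responses measured over a design and locating its stationary point, can be sketched as below; the coded factor settings and the response surface are synthetic, not the films' actual data.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    # Synthetic coded growth parameters (temperature, laser energy, pressure) from a design
    X = rng.uniform(-1.0, 1.0, (20, 3))
    # Synthetic response (e.g., critical temperature in K) with an optimum at coded (0.3, -0.2, 0.1)
    opt = np.array([0.3, -0.2, 0.1])
    y = 92.0 - np.sum((X - opt) ** 2, axis=1) + rng.normal(0.0, 0.05, 20)

    def second_order_matrix(X):
        """Model matrix with intercept, linear, interaction, and squared terms."""
        cols = [np.ones(len(X))]
        cols += [X[:, i] for i in range(3)]
        cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(second_order_matrix(X), y, rcond=None)

    # Stationary point of the fitted quadratic surface: solve grad(b0 + b.x + x'Bx) = 0
    b = beta[1:4]
    B = np.zeros((3, 3))
    idx = 4
    for i in range(3):
        for j in range(i, 3):
            B[i, j] = beta[idx] / (1.0 if i == j else 2.0)
            B[j, i] = B[i, j]
            idx += 1
    stationary = np.linalg.solve(-2.0 * B, b)
    print("estimated optimal coded settings:", np.round(stationary, 2))
    ```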

  20. Optimal design of structures for earthquake loads by a hybrid RBF-BPSO method

    NASA Astrophysics Data System (ADS)

    Salajegheh, Eysa; Gholizadeh, Saeed; Khatibinia, Mohsen

    2008-03-01

    The optimal seismic design of structures requires that time history analyses (THA) be carried out repeatedly. This makes the optimal design process inefficient, in particular, if an evolutionary algorithm is used. To reduce the overall time required for structural optimization, two artificial intelligence strategies are employed. In the first strategy, radial basis function (RBF) neural networks are used to predict the time history responses of structures in the optimization flow. In the second strategy, a binary particle swarm optimization (BPSO) is used to find the optimum design. Combining the RBF and BPSO, a hybrid RBF-BPSO optimization method is proposed in this paper, which achieves fast optimization with high computational performance. Two examples are presented and compared to determine the optimal weight of structures under earthquake loadings using both exact and approximate analyses. The numerical results demonstrate the computational advantages and effectiveness of the proposed hybrid RBF-BPSO optimization method for the seismic design of structures.

  1. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than that of RMC, with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with the random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using the optimal-removal order were not statistically different from those using the random-removal order when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs, which were less variable in time and space than PNCs. Optimized removal performed better than random removal in preserving high temporal variability and accuracy of the hazard map for PNC, but not for the more spatially homogeneous RMC. These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.

  2. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  3. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3(3)). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m(3)), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  4. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (33). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m3), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
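
    The design and modelling steps described above (a 3^3 full factorial over the three mixture factors, followed by a polynomial regression for compressive strength) can be sketched with pandas and statsmodels; the strength response below is simulated with made-up coefficients, not the paper's data.

    ```python
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    # 3^3 full factorial design over the three mixture factors
    wc = [0.38, 0.43, 0.48]          # water/cementitious materials ratio
    cm = [350, 375, 400]             # cementitious materials content (kg/m3)
    fa = [0.35, 0.40, 0.45]          # fine/total aggregate ratio
    design = pd.DataFrame(list(itertools.product(wc, cm, fa)), columns=["wc", "cm", "fa"])

    # Three replicates per mixture and a simulated (made-up) strength response in MPa
    df = pd.concat([design] * 3, ignore_index=True)
    df["strength"] = (90 - 80 * df.wc + 0.02 * df.cm - 10 * df.fa
                      + rng.normal(0.0, 1.5, len(df)))

    # Second-order polynomial model: linear, quadratic, and two-way interaction terms
    formula = ("strength ~ wc + cm + fa + I(wc**2) + I(cm**2) + I(fa**2) "
               "+ wc:cm + wc:fa + cm:fa")
    model = smf.ols(formula, data=df).fit()
    print(model.summary().tables[1])
    ```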

  5. Unrealistic comparative optimism: An unsuccessful search for evidence of a genuinely motivational bias

    PubMed Central

    Harris, Adam J. L.; de Molière, Laura; Soh, Melinda; Hahn, Ulrike

    2017-01-01

    One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events—they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a ‘severity effect’ whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method. PMID:28278200

  6. Statistical model for speckle pattern optimization.

    PubMed

    Su, Yong; Zhang, Qingchuan; Gao, Zeren

    2017-11-27

    Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of the image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas for the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believe that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.

  7. Unrealistic comparative optimism: An unsuccessful search for evidence of a genuinely motivational bias.

    PubMed

    Harris, Adam J L; de Molière, Laura; Soh, Melinda; Hahn, Ulrike

    2017-01-01

    One of the most accepted findings across psychology is that people are unrealistically optimistic in their judgments of comparative risk concerning future life events-they judge negative events as less likely to happen to themselves than to the average person. Harris and Hahn (2011), however, demonstrated how unbiased (non-optimistic) responses can result in data patterns commonly interpreted as indicative of optimism due to statistical artifacts. In the current paper, we report the results of 5 studies that control for these statistical confounds and observe no evidence for residual unrealistic optimism, even observing a 'severity effect' whereby severe outcomes were overestimated relative to neutral ones (Studies 3 & 4). We conclude that there is no evidence supporting an optimism interpretation of previous results using the prevalent comparison method.

  8. A state-of-the-art review of transportation systems evaluation techniques relevant to air transportation, volume 1. [urban planning and urban transportation using decision theory

    NASA Technical Reports Server (NTRS)

    Haefner, L. E.

    1975-01-01

    Mathematical and philosophical approaches are presented for evaluation and implementation of ground and air transportation systems. Basic decision processes used for cost analyses and planning are examined (i.e., statistical decision theory, linear and dynamic programming, optimization, game theory). The effects on the environment and the community that a transportation system may have are discussed and modelled. Algorithmic structures are examined and selected bibliographic annotations are included. Transportation dynamic models were developed. Citizen participation in transportation projects (i.e., in Maryland and Massachusetts) is discussed. The relevance of the modelling and evaluation approaches to air transportation (i.e., airport planning) is examined in a case study in St. Louis, Missouri.

  9. Assessment of physicochemical and antioxidant characteristics of Quercus pyrenaica honeydew honeys.

    PubMed

    Shantal Rodríguez Flores, M; Escuredo, Olga; Carmen Seijo, M

    2015-01-01

    Consumers are exhibiting increasing interest in honeydew honey, principally due to its functional properties. Some plants can be sources of honeydew honey, but in north-western Spain, this honey type only comes from Quercus pyrenaica. In the present study, the melissopalynological and physicochemical characteristics and the antioxidant properties of 32 honeydew honey samples are described. Q. pyrenaica honeydew honey was defined by its colour, high pH, phenols and flavonoids. Multivariate statistical techniques were used to analyse the influence of the production year on the honey's physicochemical parameters and polyphenol content. Differences among the honey samples were found, showing that weather affected the physicochemical composition of the honey samples. Optimal conditions for oak growth favoured the production of honeydew honey. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  11. 40 CFR 91.512 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis for... will be made available to the public during Agency business hours. ...

  12. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    PubMed

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03, p<0.001), from 59.8% (545/1,335) in 1998 to 52.2% (664/1,578) in 2008. The overall error/defect proportion in study design also decreased (χ² = 21.22, p<0.001), from 50.9% (680/1,335) to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ² = 93.26, p<0.001), from 92.7% (945/1,019) to 78.2% (1023/1,309), and interpretation (χ² = 27.26, p<0.001), from 9.7% (99/1,019) to 4.3% (56/1,309), although some serious defects persisted. Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.

  13. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  14. Optimal planning and design of a renewable energy based supply system for microgrids

    DOE PAGES

    Hafez, Omar; Bhattacharya, Kankar

    2012-03-03

    This paper presents a technique for optimal planning and design of hybrid renewable energy systems for microgrid applications. The Distributed Energy Resources Customer Adoption Model (DER-CAM) is used to determine the optimal size and type of distributed energy resources (DERs) and their operating schedules for a sample utility distribution system. Using the DER-CAM results, the electrical performance of the distribution circuit is evaluated when the DERs selected by the DER-CAM optimization analyses are incorporated. Results of analyses regarding the economic benefits of utilizing the optimal locations identified for the selected DERs within the system are also presented. The actual Brookhaven National Laboratory (BNL) campus electrical network is used as an example to show the effectiveness of this approach. The results show that these technical and economic analyses of hybrid renewable energy systems are essential for the efficient utilization of renewable energy resources for microgrid applications.

  15. Parameters optimization defined by statistical analysis for cysteine-dextran radiolabeling with technetium tricarbonyl core.

    PubMed

    Núñez, Eutimio Gustavo Fernández; Faintuch, Bluma Linkowski; Teodoro, Rodrigo; Wiecek, Danielle Pereira; da Silva, Natanael Gomes; Papadopoulos, Minas; Pelecanou, Maria; Pirmettis, Ioannis; de Oliveira Filho, Renato Santos; Duatti, Adriano; Pasqualini, Roberto

    2011-04-01

    The objective of this study was the development of a statistical approach for radiolabeling optimization of cysteine-dextran conjugates with Tc-99m tricarbonyl core. This strategy has been applied to the labeling of 2-propylene-S-cysteine-dextran in the attempt to prepare a new class of tracers for sentinel lymph node detection, and can be extended to other radiopharmaceuticals for different targets. The statistical routine was based on three-level factorial design. Best labeling conditions were achieved. The specific activity reached was 5 MBq/μg. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  16. Role of sufficient statistics in stochastic thermodynamics and its implication to sensory adaptation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takumi; Sagawa, Takahiro

    2018-04-01

    A sufficient statistic is a significant concept in statistics, which means a probability variable that has sufficient information required for an inference task. We investigate the roles of sufficient statistics and related quantities in stochastic thermodynamics. Specifically, we prove that for general continuous-time bipartite networks, the existence of a sufficient statistic implies that an informational quantity called the sensory capacity takes the maximum. Since the maximal sensory capacity imposes a constraint that the energetic efficiency cannot exceed one-half, our result implies that the existence of a sufficient statistic is inevitably accompanied by energetic dissipation. We also show that, in a particular parameter region of linear Langevin systems, there exists an optimal noise intensity at which the sensory capacity, the information-thermodynamic efficiency, and the total entropy production are optimized at the same time. We apply our general result to a model of sensory adaptation of E. coli and find that the sensory capacity is nearly maximal with experimentally realistic parameters.

  17. Algorithm for Identifying Erroneous Rain-Gauge Readings

    NASA Technical Reports Server (NTRS)

    Rickman, Doug

    2005-01-01

    An algorithm analyzes rain-gauge data to identify statistical outliers that could be deemed to be erroneous readings. Heretofore, analyses of this type have been performed in burdensome manual procedures that have involved subjective judgements. Sometimes, the analyses have included computational assistance for detecting values falling outside of arbitrary limits. The analyses have been performed without statistically valid knowledge of the spatial and temporal variations of precipitation within rain events. In contrast, the present algorithm makes it possible to automate such an analysis, makes the analysis objective, takes account of the spatial distribution of rain gauges in conjunction with the statistical nature of spatial variations in rainfall readings, and minimizes the use of arbitrary criteria. The algorithm implements an iterative process that involves nonparametric statistics.

  18. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.

  19. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674
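
    The Gini coefficient itself is a short computation; the sketch below applies it to two invented distributions of cases across reported clusters, where a larger value indicates a more unequal, and arguably more refined, set of clusters. This is a generic illustration, not the scan-statistic implementation of the paper.

    ```python
    import numpy as np

    def gini(values):
        """Gini coefficient of a non-negative 1-D array (0 = perfect equality)."""
        v = np.sort(np.asarray(values, dtype=float))
        n = v.size
        cum = np.cumsum(v)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n   # Lorenz-curve based formula

    # Invented case counts in the clusters reported at two candidate
    # maximum-reported-cluster-size settings
    print(gini([120, 15, 12, 10, 8]))   # one dominant cluster: high Gini
    print(gini([60, 55, 50]))           # clusters of similar size: low Gini
    ```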

  20. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Fels-Rand: an Xlisp-Stat program for the comparative analysis of data under phylogenetic uncertainty.

    PubMed

    Blomberg, S

    2000-11-01

    Currently available programs for the comparative analysis of phylogenetic data do not perform optimally when the phylogeny is not completely specified (i.e. the phylogeny contains polytomies). Recent literature suggests that a better way to analyse the data would be to create random trees from the known phylogeny that are fully-resolved but consistent with the known tree. A computer program is presented, Fels-Rand, that performs such analyses. A randomisation procedure is used to generate trees that are fully resolved but whose structure is consistent with the original tree. Statistics are then calculated on a large number of these randomly-generated trees. Fels-Rand uses the object-oriented features of Xlisp-Stat to manipulate internal tree representations. Xlisp-Stat's dynamic graphing features are used to provide heuristic tools to aid in analysis, particularly outlier analysis. The usefulness of Xlisp-Stat as a system for phylogenetic computation is discussed. Available from the author or at http://www.uq.edu.au/~ansblomb/Fels-Rand.sit.hqx. Xlisp-Stat is available from http://stat.umn.edu/~luke/xls/xlsinfo/xlsinfo.html. s.blomberg@abdn.ac.uk

  2. A novel health indicator for on-line lithium-ion batteries remaining useful life prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Yapeng; Huang, Miaohua; Chen, Yupu; Tao, Ye

    2016-07-01

    Prediction of lithium-ion battery remaining useful life (RUL) plays an important role in an intelligent battery management system. Capacity and internal resistance are often used as the battery health indicator (HI) for quantifying degradation and predicting RUL. However, on-line measurement of capacity is hardly realizable because batteries are seldom fully charged and discharged in service, and on-line measurement of internal resistance is prohibitively expensive. There is therefore a need for an alternative indicator. In this work, a novel HI is extracted from the operating parameters of lithium-ion batteries for degradation modeling and RUL prediction. A Box-Cox transformation is employed to improve HI performance, and Pearson and Spearman correlation analyses are used to evaluate the similarity between the real capacity and the capacity estimated from the HI. Both a simple statistical regression technique and an optimized relevance vector machine are then employed to predict the RUL from the presented HI. The correlation analyses and prediction results show the efficiency and effectiveness of the proposed HI for battery degradation modeling and RUL prediction.
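
    A minimal sketch of the transformation-and-correlation step described above, using SciPy's Box-Cox and correlation routines; the raw indicator and capacity series below are made up for illustration, not battery data from the study.

```python
import numpy as np
from scipy.stats import boxcox, pearsonr, spearmanr

# Hypothetical series: a raw health indicator extracted from operating data and
# the fading capacity it is meant to track (both purely illustrative).
rng = np.random.default_rng(1)
capacity = np.linspace(1.1, 0.7, 100) + rng.normal(scale=0.01, size=100)
raw_hi = np.exp(2.0 * capacity) + rng.normal(scale=0.05, size=100)

# Box-Cox transformation to linearize/normalize the indicator (requires positive data).
hi_bc, lam = boxcox(raw_hi)

# Pearson (linear) and Spearman (monotonic) agreement with the real capacity.
r_p, _ = pearsonr(hi_bc, capacity)
r_s, _ = spearmanr(hi_bc, capacity)
print(f"lambda = {lam:.2f}, Pearson = {r_p:.3f}, Spearman = {r_s:.3f}")
```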

  3. iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spycher, Nicolas; Finsterle, Stefan

    GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] into a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and geothermometry computations, are all implemented into the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT, and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.

  4. Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?

    PubMed

    Li, Tianjing; Dickersin, Kay

    2013-06-01

    Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  5. Reporting quality of statistical methods in surgical observational studies: protocol for systematic review.

    PubMed

    Wu, Robert; Glen, Peter; Ramsay, Tim; Martel, Guillaume

    2014-06-28

    Observational studies dominate the surgical literature. Statistical adjustment is an important strategy to account for confounders in observational studies. Research has shown that published articles are often poor in statistical quality, which may jeopardize their conclusions. The Statistical Analyses and Methods in the Published Literature (SAMPL) guidelines have been published to help establish standards for statistical reporting. This study will seek to determine whether the quality of statistical adjustment and the reporting of these methods are adequate in surgical observational studies. We hypothesize that incomplete reporting will be found in all surgical observational studies, and that the quality and reporting of these methods will be of lower quality in surgical journals when compared with medical journals. Finally, this work will seek to identify predictors of high-quality reporting. This work will examine the top five general surgical and medical journals, based on a 5-year impact factor (2007-2012). All observational studies investigating an intervention related to an essential component area of general surgery (defined by the American Board of Surgery), with an exposure, outcome, and comparator, will be included in this systematic review. Essential elements related to statistical reporting and quality were extracted from the SAMPL guidelines and include domains such as intent of analysis, primary analysis, multiple comparisons, numbers and descriptive statistics, association and correlation analyses, linear regression, logistic regression, Cox proportional hazard analysis, analysis of variance, survival analysis, propensity analysis, and independent and correlated analyses. Each article will be scored as a proportion based on fulfilling criteria in relevant analyses used in the study. A logistic regression model will be built to identify variables associated with high-quality reporting. A comparison will be made between the scores of surgical observational studies published in medical versus surgical journals. Secondary outcomes will pertain to individual domains of analysis. Sensitivity analyses will be conducted. This study will explore the reporting and quality of statistical analyses in surgical observational studies published in the most referenced surgical and medical journals in 2013 and examine whether variables (including the type of journal) can predict high-quality reporting.

  6. Response surface optimization of substrates for thermophilic anaerobic codigestion of sewage sludge and food waste.

    PubMed

    Kim, Hyun-Woo; Shin, Hang-Sik; Han, Sun-Kee; Oh, Sae-Eun

    2007-03-01

    This study investigated the effects of food waste constituents on thermophilic (55 degrees C) anaerobic codigestion of sewage sludge and food waste, using statistical techniques based on biochemical methane potential tests. Various combinations of grain, vegetable, and meat as cosubstrate were tested, and data on methane potential (MP), methane production rate (MPR), and the first-order hydrolysis kinetic constant (kH) were collected for further analyses. Response surface methodology with a Box-Behnken design efficiently assessed the effects of the three variables, and their interactions, on the responses. MP was mainly affected by grain, whereas MPR and kH were affected by both vegetable and meat. The estimated polynomial regression models explained the variability of the experimental data well, with adjusted R² values of 0.727, 0.836, and 0.915, respectively. By applying a series of optimization techniques, it was possible to find proper criteria for the cosubstrate. The optimal cosubstrate region was suggested based on overlay contours of the overall mean responses. From the desirability contour plots, the optimal cosubstrate conditions for the maximum MPR (56.6 mL of CH₄/g of chemical oxygen demand [COD]/day) were 0.71 g of COD/L of grain, 0.18 g of COD/L of vegetable, and 0.38 g of COD/L of meat when MP, MPR, and kH were considered simultaneously. Within the range of each factor examined, the corresponding optimal ratio of sewage sludge to cosubstrate was 71:29 on a COD basis. These findings can yield practical operational strategies for enhanced thermophilic anaerobic codigestion of sewage sludge and food waste.
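
    The kind of response-surface workflow described above (fit a quadratic model to a Box-Behnken design, then optimize the fitted surface) can be sketched as follows. The coded design, coefficients and response values are illustrative placeholders, not the study's data.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def quad_features(X):
    """Full quadratic model terms: intercept, linear, squares, two-way interactions."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# 15-run Box-Behnken design for three coded factors (e.g. grain, vegetable, meat).
X = np.array([[-1,-1,0],[1,-1,0],[-1,1,0],[1,1,0],[-1,0,-1],[1,0,-1],[-1,0,1],[1,0,1],
              [0,-1,-1],[0,1,-1],[0,-1,1],[0,1,1],[0,0,0],[0,0,0],[0,0,0]], float)
rng = np.random.default_rng(2)
y = 50 - 3*X[:,0]**2 - 5*X[:,1]**2 - 4*X[:,2]**2 + 2*X[:,0] + rng.normal(scale=0.5, size=len(X))

# Least-squares fit of the quadratic response surface.
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# Maximize the fitted surface inside the coded region [-1, 1]^3.
pred = lambda x: quad_features(np.atleast_2d(x)) @ beta
res = minimize(lambda x: -pred(x)[0], x0=np.zeros(3), bounds=[(-1, 1)] * 3)
print("optimal coded levels:", res.x, "predicted response:", pred(res.x)[0])
```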

  7. An Umeclidinium membrane sensor; Two-step optimization strategy for improved responses.

    PubMed

    Yehia, Ali M; Monir, Hany H

    2017-09-01

    In the context of membrane sensors and improved experimentation, we devised an experimentally designed protocol for sensor optimization. A two-step strategy was implemented for the analysis of Umeclidinium bromide (UMEC), a novel quinuclidine-based muscarinic antagonist used for maintenance treatment of symptoms associated with chronic obstructive pulmonary disease. First, membrane components were screened for the ideal ion exchanger, ionophore and plasticizer using three categorical factors at three levels in a Taguchi design. Second, an experimentally designed optimization was followed in order to tune the sensor for the best responses. Twelve experiments were randomly carried out in a continuous-factor design. Nernstian response, detection limit and selectivity were assigned as responses in these designs. The optimized membrane sensor contained tetrakis[3,5-bis(trifluoromethyl)phenyl]borate (0.44 wt%) and calix[6]arene (0.43 wt%) in 50.00% PVC plasticized with 49.13 wt% 2-nitrophenyl octyl ether. This sensor, along with an optimum concentration of inner filling solution (2×10⁻⁴ mol L⁻¹ UMEC) and 2 h of soaking time, attained the design objectives. The Nernstian response approached 59.7 mV/decade and the detection limit decreased by about two orders of magnitude (8×10⁻⁸ mol L⁻¹) through this optimization protocol. The proposed sensor was validated for UMEC determination in its linear range (3.16×10⁻⁷-1×10⁻³ mol L⁻¹) and challenged for selective discrimination of other congeners and inorganic cations. Results of INCRUSE ELLIPTA® inhalation powder analyses obtained from the proposed sensor and the manufacturer's UPLC method were statistically compared. Moreover, the proposed sensor was successfully used for the determination of UMEC in plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
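
    For reference, the reported slope and standard potential of such a sensor are simply the coefficients of a straight-line fit of measured potential against log10(concentration); the calibration points below are invented for illustration only.

```python
import numpy as np

# Hypothetical ion-selective electrode calibration in the linear range:
# potentials in mV at known concentrations in mol/L (illustrative values).
conc = np.array([1e-6, 1e-5, 1e-4, 1e-3])
emf = np.array([150.0, 209.5, 269.0, 329.0])

# Nernstian behaviour: E = E0 + S * log10(C); a slope S near 59 mV/decade
# indicates a monovalent cationic response at room temperature.
slope, e0 = np.polyfit(np.log10(conc), emf, 1)
print(f"slope = {slope:.1f} mV/decade, E0 = {e0:.1f} mV")
```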

  8. Statistical analyses of commercial vehicle accident factors. Volume 1 Part 1

    DOT National Transportation Integrated Search

    1978-02-01

    Procedures for conducting statistical analyses of commercial vehicle accidents have been established and initially applied. A file of some 3,000 California Highway Patrol accident reports from two areas of California during a period of about one year...

  9. 40 CFR 90.712 - Request for public hearing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sampling plans and statistical analyses have been properly applied (specifically, whether sampling procedures and statistical analyses specified in this subpart were followed and whether there exists a basis... Clerk and will be made available to the public during Agency business hours. ...

  10. Exploring efficacy of residential energy efficiency programs in Florida

    NASA Astrophysics Data System (ADS)

    Taylor, Nicholas Wade

    Electric utilities, government agencies, and private interests in the U.S. have committed and continue to invest substantial resources in the pursuit of energy efficiency and conservation through demand-side management (DSM) programs. Program investments, and the demand for impact evaluations that accompany them, are projected to grow in coming years due to increased pressure from state-level energy regulation, costs and challenges of building additional production capacity, fuel costs and potential carbon or renewable energy regulation. This dissertation provides detailed analyses of ex-post energy savings from energy efficiency programs in three key sectors of residential buildings: new, single-family, detached homes; retrofits to existing single-family, detached homes; and retrofits to existing multifamily housing units. Each of the energy efficiency programs analyzed resulted in statistically significant energy savings at the full program group level, yet savings for individual participants and participant subgroups were highly variable. Even though savings estimates were statistically greater than zero, those energy savings did not always meet expectations. Results also show that high variability in energy savings among participant groups or subgroups can negatively impact overall program performance and can undermine marketing efforts for future participation. Design, implementation, and continued support of conservation programs based solely on deemed or projected savings is inherently counter to the pursuit of meaningful energy conservation and reductions in greenhouse gas emissions. To fully understand and optimize program impacts, consistent and robust measurement and verification protocols must be instituted in the design phase and maintained over time. Furthermore, marketing for program participation must target those who have the greatest opportunity for savings. In most utility territories it is not possible to gain access to the type of large scale datasets that would facilitate robust program analysis. Along with measuring and optimizing energy conservation programs, utilities should provide public access to historical consumption data. Open access to data, program optimization, consistent measurement and verification and transparency in reported savings are essential to reducing energy use and its associated environmental impacts.

  11. Perceived stress in first year medical students - associations with personal resources and emotional distress.

    PubMed

    Heinen, Ines; Bullinger, Monika; Kocalevent, Rüya-Daniela

    2017-01-06

    Medical students have been found to report high levels of perceived stress, yet there is a lack of theoretical frameworks examining possible reasons. This cross-sectional study examines correlates of perceived stress in medical students on the basis of a conceptual stress model originally developed for and applied to the general population. The aim was to identify via structural equation modeling the associations between perceived stress and emotional distress (anxiety and depression), taking into account the activation of personal resources (optimism, self-efficacy and resilient coping). Within this cross-sectional study, 321 first year medical students (age 22 ± 4 years, 39.3% men) completed the Perceived Stress Questionnaire (PSQ-20), the Self-Efficacy Optimism Scale (SWOP) and the Brief Resilient Coping Scale (BRCS) as well as the Patient Health Questionnaire (PHQ-4). The statistical analyses used t-tests, ANOVA, Spearman Rho correlation and multiple regression analysis as well as structural equation modeling. Medical students reported higher levels of perceived stress and higher levels of anxiety and depression than reference samples. No statistically significant differences in stress levels were found within the sample according to gender, migration background or employment status. Students reported more self-efficacy, optimism, and resilient coping and higher emotional distress compared to validation samples and results in other studies. Structural equation analysis revealed a satisfactory fit between empirical data and the proposed stress model indicating that personal resources modulated perceived stress, which in turn had an impact on emotional distress. Medical students' perceived stress and emotional distress levels are generally high, with personal resources acting as a buffer, thus supporting the population-based general stress model. Results suggest providing individual interventions for those students, who need support in dealing with the challenges of the medical curriculum as well as addressing structural determinants of student stress such as course load and timing of exams.

  12. Cumulative sum control charts for assessing performance in arterial surgery.

    PubMed

    Beiles, C Barry; Morton, Anthony P

    2004-03-01

    The Melbourne Vascular Surgical Association (Melbourne, Australia) undertakes surveillance of mortality following aortic aneurysm surgery, patency at discharge following infrainguinal bypass and stroke and death following carotid endarterectomy. Quality improvement protocol employing the Deming cycle requires that the system for performing surgery first be analysed and optimized. Then process and outcome data are collected and these data require careful analysis. There must be a mechanism so that the causes of unsatisfactory outcomes can be determined and a good feedback mechanism must exist so that good performance is acknowledged and unsatisfactory performance corrected. A simple method for analysing these data that detects changes in average outcome rates is available using cumulative sum statistical control charts. Data have been analysed both retrospectively from 1999 to 2001, and prospectively during 2002 using cumulative sum control methods. A pathway to deal with control chart signals has been developed. The standard of arterial surgery in Victoria, Australia, is high. In one case a safe and satisfactory outcome was achieved by following the pathway developed by the audit committee. Cumulative sum control charts are a simple and effective tool for the identification of variations in performance standards in arterial surgery. The establishment of a pathway to manage problem performance is a vital part of audit activity.
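
    A minimal Bernoulli CUSUM of the kind used for monitoring surgical outcomes can be sketched as below; the acceptable and unacceptable event rates, the decision limit and the simulated case series are illustrative assumptions, not the audit's actual parameters.

```python
import numpy as np

def bernoulli_cusum(outcomes, p0, p1, h):
    """Upper CUSUM for a rise in adverse-event rate from p0 to p1.
    outcomes: 1 = adverse event (e.g. death), 0 = success.  Signals when the
    cumulative log-likelihood ratio score exceeds the decision limit h."""
    w1 = np.log(p1 / p0)              # weight added for an adverse event
    w0 = np.log((1 - p1) / (1 - p0))  # (negative) weight added for a success
    s, signals = 0.0, []
    for i, y in enumerate(outcomes):
        s = max(0.0, s + (w1 if y else w0))
        if s > h:
            signals.append(i)
            s = 0.0  # restart monitoring after a signal
    return signals

# Illustrative series: 200 cases at a 3% adverse rate, then a deterioration to 8%;
# the chart is tuned to detect a doubling from 3% to 6% with decision limit 4.
rng = np.random.default_rng(3)
series = rng.binomial(1, 0.03, size=200).tolist() + rng.binomial(1, 0.08, size=200).tolist()
print("signals at case numbers:", bernoulli_cusum(series, p0=0.03, p1=0.06, h=4.0))
```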

  13. A crowdsourced nickel-and-dime approach to analog OBM research: A behavioral economic framework for understanding workforce attrition.

    PubMed

    Henley, Amy J; DiGennaro Reed, Florence D; Reed, Derek D; Kaplan, Brent A

    2016-09-01

    Incentives are a popular method to achieve desired employee performance; however, research on optimal incentive magnitude is lacking. Behavioral economic demand curves model persistence of responding in the face of increasing cost and may be suitable to examine the reinforcing value of incentives on work performance. The present use-inspired basic study integrated an experiential human operant task within a crowdsourcing platform to evaluate the applicability of behavioral economics for quantifying changes in workforce attrition. Participants included 88 Amazon Mechanical Turk Workers who earned either a $0.05 or $0.10 incentive for completing a progressively increasing response requirement. Analyses revealed statistically significant differences in breakpoint between the two groups. Additionally, a novel translation of the Kaplan-Meier survival-curve analyses for use within a demand curve framework allowed for examination of elasticity of workforce attrition. Results indicate greater inelastic attrition in the $0.05 group. We discuss the benefits of a behavioral economic approach to modeling employee behavior, how the metrics obtained from the elasticity of workforce attrition analyses (e.g., Pmax) may be used to set goals for employee behavior while balancing organizational costs, and how economy type may have influenced observed outcomes. © 2016 Society for the Experimental Analysis of Behavior.

  14. A statistical-based scheduling algorithm in automated data path synthesis

    NASA Technical Reports Server (NTRS)

    Jeon, Byung Wook; Lursinsap, Chidchanok

    1992-01-01

    In this paper, we propose a new heuristic scheduling algorithm based on the statistical analysis of the cumulative frequency distribution of operations among control steps. It has a tendency of escaping from local minima and therefore reaching a globally optimal solution. The presented algorithm considers the real world constraints such as chained operations, multicycle operations, and pipelined data paths. The result of the experiment shows that it gives optimal solutions, even though it is greedy in nature.

  15. Demonstration optimization analyses of pumping from selected Arapahoe aquifer municipal wells in the west-central Denver Basin, Colorado, 2010–2109

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.

    2012-01-01

    Declining water levels caused by withdrawals of water from wells in the west-central part of the Denver Basin bedrock-aquifer system have raised concerns with respect to the ability of the aquifer system to sustain production. The Arapahoe aquifer in particular is heavily used in this area. Two optimization analyses were conducted to demonstrate approaches that could be used to evaluate possible future pumping scenarios intended to prolong the productivity of the aquifer and to delay excessive loss of saturated thickness. These analyses were designed as demonstrations only, and were not intended as a comprehensive optimization study. Optimization analyses were based on a groundwater-flow model of the Denver Basin developed as part of a recently published U.S. Geological Survey groundwater-availability study. For each analysis an optimization problem was set up to maximize total withdrawal rate, subject to withdrawal-rate and hydraulic-head constraints, for 119 selected municipal water-supply wells located in 96 model cells. The optimization analyses were based on 50- and 100-year simulations of groundwater withdrawals. The optimized total withdrawal rate for all selected wells for a 50-year simulation time was about 58.8 cubic feet per second. For an analysis in which the simulation time and head-constraint time were extended to 100 years, the optimized total withdrawal rate for all selected wells was about 53.0 cubic feet per second, demonstrating that a reduction in withdrawal rate of about 10 percent may extend the time before the hydraulic-head constraints are violated by 50 years, provided that pumping rates are optimally distributed. Analysis of simulation results showed that initially, the pumping produces water primarily by release of water from storage in the Arapahoe aquifer. However, because confining layers between the Denver and Arapahoe aquifers are thin, in less than 5 years, most of the water removed by managed-flows pumping likely would be supplied by depleting overlying hydrogeologic units, substantially increasing the rate of decline of hydraulic heads in parts of the overlying Denver aquifer.

  16. Optimal choice of word length when comparing two Markov sequences using a χ²-statistic.

    PubMed

    Bai, Xin; Tang, Kujin; Ren, Jie; Waterman, Michael; Sun, Fengzhu

    2017-10-03

    Alignment-free sequence comparison using counts of word patterns (grams, k-tuples) has become an active research topic due to the large amount of sequence data from the new sequencing technologies. Genome sequences are frequently modelled by Markov chains and the likelihood ratio test or the corresponding approximate χ²-statistic has been suggested to compare two sequences. However, it is not known how to best choose the word length k in such studies. We develop an optimal strategy to choose k by maximizing the statistical power of detecting differences between two sequences. Let the orders of the Markov chains for the two sequences be r₁ and r₂, respectively. We show through both simulations and theoretical studies that the optimal k = max(r₁, r₂) + 1 for both long sequences and next generation sequencing (NGS) read data. The orders of the Markov chains may be unknown and several methods have been developed to estimate the orders of Markov chains based on both long sequences and NGS reads. We study the power loss of the statistics when the estimated orders are used. It is shown that the power loss is minimal for some of the estimators of the orders of Markov chains. Our studies provide guidelines on choosing the optimal word length for the comparison of Markov sequences.
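
    A toy version of the comparison, counting words of length k = max(r₁, r₂) + 1 in two sequences and applying a generic contingency chi-square (not the paper's exact statistic), might look like this; the simulated sequences are illustrative only.

```python
import random
from collections import Counter
from itertools import product
from scipy.stats import chi2_contingency

def kmer_counts(seq, k):
    """Counts of all overlapping k-words over the alphabet ACGT."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get("".join(w), 0) for w in product("ACGT", repeat=k)]

def compare_sequences(seq1, seq2, r1, r2):
    """Compare two sequences at the suggested optimal word length k = max(r1, r2) + 1."""
    k = max(r1, r2) + 1
    c1, c2 = kmer_counts(seq1, k), kmer_counts(seq2, k)
    kept = [(a, b) for a, b in zip(c1, c2) if a + b > 0]   # drop empty word columns
    chi2, p, dof, _ = chi2_contingency([[a for a, _ in kept], [b for _, b in kept]])
    return k, chi2, p

random.seed(0)
s1 = "".join(random.choices("ACGT", weights=[3, 1, 1, 3], k=5000))
s2 = "".join(random.choices("ACGT", weights=[1, 3, 3, 1], k=5000))
print(compare_sequences(s1, s2, r1=1, r2=1))  # compares 2-word counts
```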

  17. Does unbelted safety requirement affect protection for belted occupants?

    PubMed

    Hu, Jingwen; Klinich, Kathleen D; Manary, Miriam A; Flannagan, Carol A C; Narayanaswamy, Prabha; Reed, Matthew P; Andreen, Margaret; Neal, Mark; Lin, Chin-Hsu

    2017-05-29

    Federal regulations in the United States require vehicles to meet occupant performance requirements with unbelted test dummies. Removing the test requirements with unbelted occupants might encourage the deployment of seat belt interlocks and allow restraint optimization to focus on belted occupants. The objective of this study is to compare the performance of restraint systems optimized for belted-only occupants with those optimized for both belted and unbelted occupants using computer simulations and field crash data analyses. In this study, 2 validated finite element (FE) vehicle/occupant models (a midsize sedan and a midsize SUV) were selected. Restraint design optimizations under standardized crash conditions (U.S.-NCAP and FMVSS 208) with and without unbelted requirements were conducted using Hybrid III (HIII) small female and midsize male anthropomorphic test devices (ATDs) in both vehicles on both driver and right front passenger positions. A total of 10 to 12 design parameters were varied in each optimization using a combination of response surface method (RSM) and genetic algorithm. To evaluate the field performance of restraints optimized with and without unbelted requirements, 55 frontal crash conditions covering a greater variety of crash types than those in the standardized crashes were selected. A total of 1,760 FE simulations were conducted for the field performance evaluation. Frontal crashes in the NASS-CDS database from 2002 to 2012 were used to develop injury risk curves and to provide the baseline performance of current restraint system and estimate the injury risk change by removing the unbelted requirement. Unbelted requirements do not affect the optimal seat belt and airbag design parameters in 3 out of 4 vehicle/occupant position conditions, except for the SUV passenger side. Overall, compared to the optimal designs with unbelted requirements, optimal designs without unbelted requirements generated the same or lower total injury risks for belted occupants depending on statistical methods used for the analysis, but they could also increase the total injury risks for unbelted occupants. This study demonstrated potential for reducing injury risks to belted occupants if the unbelted requirements are eliminated. Further investigations are necessary to confirm these findings.

  18. Statistical analysis and optimization of direct metal laser deposition of 227-F Colmonoy nickel alloy

    NASA Astrophysics Data System (ADS)

    Angelastro, A.; Campanelli, S. L.; Casalino, G.

    2017-09-01

    This paper presents a study on process parameters and building strategy for the deposition of Colmonoy 227-F powder by a CO2 laser with a focal spot diameter of 0.3 mm. Colmonoy 227-F is a nickel alloy especially designed for mold manufacturing. The substrate material is a 10 mm thick plate of AISI 304 steel. A commercial CO2 laser welding machine was equipped with a low-cost powder feeding system. In this work, which follows an earlier one in which laser power, scanning speed and powder flow rate had been studied, the effects of two important process parameters, hatch spacing and step height, on the properties of the built parts were analysed. The explored ranges of hatch spacing and step height were 150-300 μm and 100-200 μm, respectively, dimensions comparable with that of the laser spot. The roughness, adhesion, microstructure, microhardness and density of the manufactured specimens were studied on multi-layer samples made of 30 layers. The statistical significance of the studied process parameters was assessed by analysis of variance. The process parameters used yielded good adhesion both between the first layer and the substrate and between successive layers. The microstructure was fine and almost defect-free. The microhardness of the deposited material was about 100 HV higher than that of the starting powder. The density, as high as 98% of that of the bulk alloy, was more than satisfactory. Finally, simultaneous optimization of density and roughness was performed using the contour plots.

  19. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients ), Presentation (Web-Server), Processing (Application-Server), Data (Database...organization in each period. This data will be collected to analyze. i) Analyses and Validation: We will do a statistics test in this data, Pareto ...notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will do a statistics test in this data, Pareto analyses and confirmation

  20. Resource-constrained Data Collection and Fusion for Identifying Weak Distributed Patterns in Networks

    DTIC Science & Technology

    2013-10-15

    statistic,” in Artifical Intelligence and Statistics (AISTATS), 2013. [6] ——, “Detecting activity in graphs via the Graph Ellipsoid Scan Statistic... Artifical Intelligence and Statistics (AISTATS), 2013. [8] ——, “Near-optimal anomaly detection in graphs using Lovász Extended Scan Statistic,” in Neural...networks,” in Artificial Intelligence and Statistics (AISTATS), 2010. 11 [11] D. Aldous, “The random walk construction of uniform spanning trees and

  1. Research of Extension of the Life Cycle of Helicopter Rotor Blade in Hungary

    DTIC Science & Technology

    2003-02-01

    Radiography (DXR), and (iii) Vibration Diagnostics (VD) with Statistical Energy Analysis (SEA) were semi- simultaneously applied [1]. The used three...2.2. Vibration Diagnostics (VD)) Parallel to the NDT measurements the Statistical Energy Analysis (SEA) as a vibration diagnostical tool were...noises were analysed with a dual-channel real time frequency analyser (BK2035). In addition to the Statistical Energy Analysis measurement a small

  2. Functional Differences between Statistical Learning with and without Explicit Training

    ERIC Educational Resources Information Center

    Batterink, Laura J.; Reber, Paul J.; Paller, Ken A.

    2015-01-01

    Humans are capable of rapidly extracting regularities from environmental input, a process known as statistical learning. This type of learning typically occurs automatically, through passive exposure to environmental input. The presumed function of statistical learning is to optimize processing, allowing the brain to more accurately predict and…

  3. Time Scale Optimization and the Hunt for Astronomical Cycles in Deep Time Strata

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.

    2016-04-01

    A valuable attribute of astrochronology is the direct link between chronometer and climate change, providing a remarkable opportunity to constrain the evolution of the surficial Earth System. Consequently, the hunt for astronomical cycles in strata has spurred the development of a rich conceptual framework for climatic/oceanographic change, and has allowed exploration of the geologic record with unprecedented temporal resolution. Accompanying these successes, however, has been a persistent skepticism about appropriate astrochronologic testing and circular reasoning: how does one reliably test for astronomical cycles in stratigraphic data, especially when time is poorly constrained? From this perspective, it would seem that the merits and promise of astrochronology (e.g., a geologic time scale measured in ≤400 kyr increments) also serves as its Achilles heel, if the confirmation of such short rhythms defies rigorous statistical testing. To address these statistical challenges in astrochronologic testing, a new approach has been developed that (1) explicitly evaluates time scale uncertainty, (2) is resilient to common problems associated with spectrum confidence level assessment and 'multiple testing', and (3) achieves high statistical power under a wide range of conditions (it can identify astronomical cycles when present in data). Designated TimeOpt (for "time scale optimization"; Meyers 2015), the method employs a probabilistic linear regression model framework to investigate amplitude modulation and frequency ratios (bundling) in stratigraphic data, while simultaneously determining the optimal time scale. This presentation will review the TimeOpt method, and demonstrate how the flexible statistical framework can be further extended to evaluate (and optimize upon) complex sedimentation rate models, enhancing the statistical power of the approach, and addressing the challenge of unsteady sedimentation. Meyers, S. R. (2015), The evaluation of eccentricity-related amplitude modulation and bundling in paleoclimate data: An inverse approach for astrochronologic testing and time scale optimization, Paleoceanography, 30, doi:10.1002/ 2015PA002850.

  4. Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions

    DOE PAGES

    Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...

    2016-02-24

    Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.

  5. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
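
    One standard remedy for the multiple-comparison problem noted above is a family-wise adjustment across the questionnaire sub-dimensions. A minimal sketch using a Holm correction is shown below; the p-values are invented and the statsmodels dependency is an assumption, not part of the reviewed trials' methods.

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values from testing each HRQoL sub-dimension separately.
p_values = [0.004, 0.012, 0.030, 0.045, 0.20, 0.51]

# Holm step-down adjustment controls the family-wise type I error at 5%.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for p, pa, r in zip(p_values, p_adj, reject):
    print(f"raw p = {p:.3f}  adjusted p = {pa:.3f}  significant = {r}")
```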

  6. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    PubMed

    Towers, S

    2017-10-01

    Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), which all have purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, in each analysis arbitrary selections or assumptions were also made, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods, rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus in this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls; inattention to analysis reproducibility and robustness assessment are common problems in the sciences, that are unfortunately not noted often enough in review.

  7. Asymptotically Optimal and Private Statistical Estimation

    NASA Astrophysics Data System (ADS)

    Smith, Adam

    Differential privacy is a definition of "privacy" for statistical databases. The definition is simple, yet it implies strong semantics even in the presence of an adversary with arbitrary auxiliary information about the database.

  8. Process optimization using combinatorial design principles: parallel synthesis and design of experiment methods.

    PubMed

    Gooding, Owen W

    2004-06-01

    The use of parallel synthesis techniques with statistical design of experiment (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy to use software for statistical DoE have fueled a growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex at the same time that development timelines are compressed, these enabling technologies promise to become more important in the future.

  9. Statistical Mechanics of Combinatorial Auctions

    NASA Astrophysics Data System (ADS)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-09-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.

  10. Statistical evaluation of a project to estimate fish trajectories through the intakes of Kaplan hydropower turbines

    NASA Astrophysics Data System (ADS)

    Sutton, Virginia Kay

    This paper examines statistical issues associated with estimating paths of juvenile salmon through the intakes of Kaplan turbines. Passive sensors (hydrophones) detecting signals from ultrasonic transmitters implanted in individual fish released into the preturbine region were used to obtain the information needed to estimate fish paths through the intake. The aim and location of the sensors affect the spatial region in which the transmitters can be detected, and formulas relating this region to sensor aiming directions are derived. Cramer-Rao lower bounds for the variance of estimators of fish location are used to optimize the placement of each sensor. Finally, a statistical methodology is developed for analyzing angular data collected from optimally placed sensors.

  11. Increasing the statistical significance of entanglement detection in experiments.

    PubMed

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.

  12. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    PubMed

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
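
    The central claim, that power plateaus as the number of participants grows when stimuli remain a random sample, can be illustrated with a small Monte Carlo sketch. This uses a simplified by-stimulus analysis with invented variance components; it is not the authors' formulas or their Web application.

```python
import numpy as np
from scipy.stats import ttest_1samp

def simulated_power(n_participants, n_stimuli=20, d=0.4, sd_stim_slope=0.8,
                    sd_resid=1.0, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power for a within-participant, within-stimulus condition effect,
    analysed with a by-stimulus t-test on condition difference scores."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        stim_effect = rng.normal(d, sd_stim_slope, n_stimuli)  # stimulus-specific effects
        # Each stimulus difference score averages residual noise over participants.
        diffs = stim_effect + rng.normal(0, sd_resid * np.sqrt(2 / n_participants), n_stimuli)
        hits += ttest_1samp(diffs, 0.0).pvalue < alpha
    return hits / n_sims

# Power rises with participants but levels off: the stimulus sample caps it.
for n in (10, 50, 200, 1000):
    print(n, "participants -> power ~", simulated_power(n))
```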

  13. Optimal Fisher Discriminant Ratio for an Arbitrary Spatial Light Modulator

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1999-01-01

    Optimizing the Fisher ratio is well established in statistical pattern recognition as a means of discriminating between classes. I show how to optimize that ratio for optical correlation intensity by choice of filter on an arbitrary spatial light modulator (SLM). I include the case of additive noise of known power spectral density.
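
    For concreteness, the classical (unconstrained) Fisher discriminant ratio and its maximizing direction look like the sketch below; the constrained, SLM-specific filter optimization of the paper is not reproduced here, and the two-class data are synthetic.

```python
import numpy as np

def fisher_ratio(w, class_a, class_b):
    """Fisher ratio of the projections w.x for two classes:
    (difference of projected means)^2 / (sum of projected variances)."""
    pa, pb = class_a @ w, class_b @ w
    return (pa.mean() - pb.mean()) ** 2 / (pa.var(ddof=1) + pb.var(ddof=1))

def fisher_direction(class_a, class_b):
    """Closed-form direction maximizing the ratio: Sw^{-1} (mu_a - mu_b)."""
    sw = np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False)
    return np.linalg.solve(sw, class_a.mean(axis=0) - class_b.mean(axis=0))

rng = np.random.default_rng(4)
a = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
b = rng.normal([1.5, 0.5], 1.0, size=(100, 2))
w = fisher_direction(a, b)
print("Fisher ratio along the optimal direction:", fisher_ratio(w, a, b))
```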

  14. Recent developments of axial flow compressors under transonic flow conditions

    NASA Astrophysics Data System (ADS)

    Srinivas, G.; Raghunandana, K.; Satish Shenoy, B.

    2017-05-01

    The objective of this paper is to give a holistic view of the most advanced technology and procedures practiced in the field of turbomachinery design. The compressor flow solver is the CFD turbulence model used to solve viscous flow problems. Popular techniques such as Jameson's rotated difference scheme were used to solve the potential flow equation under transonic conditions, first for two-dimensional aerofoils and later for three-dimensional wings. The gradient-based method is also popular, especially for compressor blade shape optimization. Other available optimization techniques include evolutionary algorithms (EAs) and response surface methodology (RSM). It is observed that, to improve the compressor flow solver and obtain agreeable results, careful attention must be paid to the viscous relations, grid resolution, turbulence modeling and artificial viscosity in the CFD. Advanced techniques such as Jameson's rotated difference scheme had the most substantial impact on aerofoil and wing design. For compressor blade shape optimization, evolutionary algorithms are simpler than gradient-based techniques because they can handle the parameters simultaneously by searching from multiple points in the design space. Response surface methodology builds empirical models of the observed response and studies the experimental data systematically; it analyses the relationship between the expected responses (outputs) and the design variables (inputs) through a series of mathematical and statistical procedures. RSM has recently been implemented successfully for turbomachinery blade optimization. Well-designed, high-performance axial flow compressors find application in air-breathing jet engines.

  15. Neoadjuvant Long-Course Chemoradiotherapy for Rectal Cancer: Does Time to Surgery Matter?

    PubMed Central

    Panagiotopoulou, Ioanna G.; Parashar, Deepak; Qasem, Eyas; Mezher-Sikafi, Rasha; Parmar, Jitesh; Wells, Alan D.; Bajwa, Farrukh M.; Menon, Madhav; Jephcott, Catherine R.

    2015-01-01

    The objective of this paper was to evaluate whether delaying surgery following long-course chemoradiotherapy for rectal cancer correlates with pathologic complete response. Pre-operative chemoradiotherapy (CRT) is standard practice in the UK for the management of locally advanced rectal cancer. Optimal timing of surgery following CRT is still not clearly defined. All patients with a diagnosis of rectal cancer who had undergone long-course CRT prior to surgery between January 2008 and December 2011 were included. Statistical analysis was performed using Stata 11. Fifty-nine patients received long-course CRT prior to surgery in the selected period. Twenty-seven percent (16/59) of patients showed a complete histopathologic response and 59.3% (35/59) of patients had tumor down-staging from radiologically-assessed node positive to histologically-proven node negative disease. There was no statistically significant delay to surgery after completion of CRT in the 16 patients with complete response (CR) compared with the rest of the group [IR: incomplete response; CR group median: 74.5 days (IQR: 70–87.5) and IR group median: 72 days (IQR: 57–83), P = 0.470]. Although no statistically significant predictors of either complete response or tumor nodal status down-staging were identified in logistic regression analyses, a trend toward complete response was seen with longer delay to surgery following completion of long-course CRT. PMID:26414816

  16. Evaluating the efficiency of environmental monitoring programs

    USGS Publications Warehouse

    Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina

    2014-01-01

    Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
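
    The bootstrap piece of such an evaluation, estimating how the confidence-interval width for a plot-level mean shrinks with sampling intensity, can be sketched as follows; the plot values are simulated, not Hubbard Brook data.

```python
import numpy as np

def bootstrap_ci_width(values, n_plots, n_boot=5000, seed=0):
    """Width of the bootstrap 95% confidence interval of the mean when only
    n_plots plots are measured; used to trade off effort against precision."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(values, size=n_plots, replace=True).mean() for _ in range(n_boot)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    return hi - lo

# Hypothetical plot-level biomass values (illustrative only).
rng = np.random.default_rng(5)
plots = rng.lognormal(mean=5.0, sigma=0.4, size=400)
for n in (25, 50, 100, 200, 400):
    print(f"{n:4d} plots -> 95% CI width ~ {bootstrap_ci_width(plots, n):.1f}")
```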

  17. Optimization of media components for laccase production by litter dwelling fungal isolate Fusarium incarnatum LD-3.

    PubMed

    Chhaya, Urvish; Gupte, Akshaya

    2010-02-01

    Laccase production by solid state fermentation (SSF) using an indigenously isolated litter-dwelling fungus, Fusarium incarnatum LD-3, was optimized. Fourteen medium components were screened by the Plackett-Burman initial screening method, with significance judged by p-values at above the 95% confidence level. Ortho-dianisidine, thiamine HCl and CuSO₄·5H₂O were identified as significant components for laccase production. Central composite design response surface methodology was then applied to further optimize laccase production. The optimal concentrations of these three medium components for higher laccase production were CuSO₄·5H₂O, 0.01 g/l; thiamine HCl, 0.0136 g/l; and ortho-dianisidine, 0.388 mM, which served as an inducer. Wheat straw (5.0 g) was used as the solid substrate. Using this statistical optimization method, laccase production increased from 40 U/g to 650 U/g of wheat straw, about sixteen times higher than with the non-optimized medium. This is the first report on statistical optimization of laccase production from Fusarium incarnatum LD-3.

  18. Attachment to Life: Psychometric Analyses of the Valuation of Life Scale and Differences Among Older Adults

    PubMed Central

    Gitlin, Laura N.; Parisi, Jeanine; Huang, Jin; Winter, Laraine; Roth, David L.

    2016-01-01

    Purpose of study: Examine psychometric properties of Lawton’s Valuation of Life (VOL) scale, a measure of an older adults’ assessment of the perceived value of their lives; and whether ratings differ by race (White, Black/African American) and sex. Design and Methods: The 13-item VOL scale was administered at baseline in 2 separate randomized trials (Advancing Better Living for Elders, ABLE; Get Busy Get Better, GBGB) for a total of 527 older adults. Principal component analyses were applied to a subset of ABLE data (subsample 1) and confirmatory factor analyses were conducted on remaining data (subsample 2 and GBGB). Once the factor structure was identified and confirmed, 2 subscales were created, corresponding to optimism and engagement. Convergent validity of total and subscale scores were examined using measures of depressive symptoms, social support, control-oriented strategies, mastery, and behavioral activation. For discriminant validity, indices of health status, physical function, financial strain, cognitive status, and number of falls were examined. Results: Trial samples (ABLE vs. GBGB) differed by age, race, marital status, education, and employment. Principal component analysis on ABLE subsample 1 (n = 156) yielded two factors subsequently confirmed in confirmatory factor analyses on ABLE subsample 2 (n = 163) and GBGB sample (N = 208) separately. Adequate fit was found for the 2-factor model. Correlational analyses supported strong convergent and discriminant validity. Some statistically significant race and sex differences in subscale scores were found. Implications: VOL measures subjective appraisals of perceived value of life. Consisting of two interrelated subscales, it offers an efficient approach to ascertain personal attributions. PMID:26874189

  19. Latest Results From the QuakeFinder Statistical Analysis Framework

    NASA Astrophysics Data System (ADS)

    Kappler, K. N.; MacLean, L. S.; Schneider, D.; Bleier, T.

    2017-12-01

    Since 2005 QuakeFinder (QF) has acquired an unique dataset with outstanding spatial and temporal sampling of earth's magnetic field along several active fault systems. This QF network consists of 124 stations in California and 45 stations along fault zones in Greece, Taiwan, Peru, Chile and Indonesia. Each station is equipped with three feedback induction magnetometers, two ion sensors, a 4 Hz geophone, a temperature sensor, and a humidity sensor. Data are continuously recorded at 50 Hz with GPS timing and transmitted daily to the QF data center in California for analysis. QF is attempting to detect and characterize anomalous EM activity occurring ahead of earthquakes. There have been many reports of anomalous variations in the earth's magnetic field preceding earthquakes. Specifically, several authors have drawn attention to apparent anomalous pulsations seen preceding earthquakes. Often studies in long term monitoring of seismic activity are limited by availability of event data. It is particularly difficult to acquire a large dataset for rigorous statistical analyses of the magnetic field near earthquake epicenters because large events are relatively rare. Since QF has acquired hundreds of earthquakes in more than 70 TB of data, we developed an automated approach for finding statistical significance of precursory behavior and developed an algorithm framework. Previously QF reported on the development of an Algorithmic Framework for data processing and hypothesis testing. The particular instance of algorithm we discuss identifies and counts magnetic variations from time series data and ranks each station-day according to the aggregate number of pulses in a time window preceding the day in question. If the hypothesis is true that magnetic field activity increases over some time interval preceding earthquakes, this should reveal itself by the station-days on which earthquakes occur receiving higher ranks than they would if the ranking scheme were random. This can be analysed using the Receiver Operating Characteristic test. In this presentation we give a status report of our latest results, largely focussed on reproducibility of results, robust statistics in the presence of missing data, and exploring optimization landscapes in our parameter space.

  20. A Modeling Framework for Optimal Computational Resource Allocation Estimation: Considering the Trade-offs between Physical Resolutions, Uncertainty and Computational Costs

    NASA Astrophysics Data System (ADS)

    Moslehi, M.; de Barros, F.; Rajagopal, R.

    2014-12-01

    Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally intensive examples. Having this framework at hand helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
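
    A minimal sketch of the trade-off described above, assuming a simple error model in which the overall error is a discretization term plus a Monte Carlo sampling term and the cost scales with the number of grid cells times the number of realizations; all constants and exponents are illustrative assumptions, not the paper's calibrated model.

```python
# Grid-search sketch of the error/budget trade-off; constants are illustrative.
import numpy as np

C_h, p = 1.0, 2.0        # assumed discretization-error constant and convergence order
C_mc = 5.0               # assumed Monte Carlo (statistical) error constant
cost_per_cell = 1e-3     # assumed cost of one grid cell for one realization
budget = 1e7             # total computational budget (arbitrary units)

def total_error(h, n_real):
    return C_h * h**p + C_mc / np.sqrt(n_real)

def cost(h, n_real, domain=1.0, dim=2):
    return n_real * (domain / h)**dim * cost_per_cell

hs = np.logspace(-3, -1, 60)                 # candidate grid spacings
ns = np.logspace(1, 5, 60).astype(int)       # candidate numbers of realizations
H, N = np.meshgrid(hs, ns)
err = total_error(H, N)
err[cost(H, N) > budget] = np.inf            # discard infeasible combinations
i, j = np.unravel_index(np.argmin(err), err.shape)
print(f"optimal h ~ {H[i, j]:.4f}, optimal N ~ {N[i, j]}, total error ~ {err[i, j]:.4f}")
```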

  1. A Retrospective Survey of Research Design and Statistical Analyses in Selected Chinese Medical Journals in 1998 and 2008

    PubMed Central

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-01-01

    Background High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Methodology/Principal Findings Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportion in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008: The error/defect proportion in statistical analyses decreased significantly (χ2 = 12.03, p<0.001), 59.8% (545/1,335) in 1998 compared to 52.2% (664/1,578) in 2008. The overall error/defect proportion of study design also decreased (χ2 = 21.22, p<0.001), 50.9% (680/1,335) compared to 42.4% (669/1,578). In 2008, the proportion of randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature, 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were observed in both results presentation (χ2 = 93.26, p<0.001), 92.7% (945/1,019) compared to 78.2% (1023/1,309), and interpretation (χ2 = 27.26, p<0.001), 9.7% (99/1,019) compared to 4.3% (56/1,309), although some serious defects persisted. Conclusions/Significance Chinese medical research seems to have made significant progress regarding statistical analyses, but there remains ample room for improvement regarding study designs. Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative. PMID:20520824

  2. Use of Statistical Analyses in the Ophthalmic Literature

    PubMed Central

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J.; Marvasti, Amir H.; Sharpsten, Lucie; Medeiros, Felipe A.

    2014-01-01

    Purpose To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to sequentially add knowledge of more advanced techniques to their statistical repertoire. Design Cross-sectional study Methods All articles published from January 2012 to December 2012 in Ophthalmology, American Journal of Ophthalmology and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Main Outcome Measures Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Results Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. In order to understand more than half (51.4%) of the articles published, readers were expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, while knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles in retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared to cornea. Conclusions Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of published studies in the literature. The frequency of use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge in order to critically appraise articles submitted for publication. The results of this study could provide guidance to direct the statistical learning of clinical ophthalmologists, researchers and educators involved in the design of courses for residents and medical students. PMID:24612977

  3. ConvAn: a convergence analyzing tool for optimization of biochemical networks.

    PubMed

    Kostromins, Andrejs; Mozga, Ivars; Stalidzans, Egils

    2012-01-01

    Dynamic models of biochemical networks are usually described as systems of nonlinear differential equations. When models are optimized for parameter estimation or for the design of new properties, mainly numerical methods are used. This creates problems of optimization predictability, as most numerical optimization methods have stochastic properties and the convergence of the objective function to the global optimum is hardly predictable. Determining a suitable optimization method and the necessary duration of optimization becomes critical when evaluating a high number of combinations of adjustable parameters or when working with large dynamic models. This task is complex due to the variety of optimization methods, software tools and nonlinearity features of models in different parameter spaces. The software tool ConvAn was developed to analyze statistical properties of convergence dynamics for optimization runs with a particular optimization method, model, software tool, set of optimization method parameters and number of adjustable parameters of the model. The convergence curves can be normalized automatically to enable comparison of different methods and models on the same scale. With the help of the biochemistry-adapted graphical user interface of ConvAn, it is possible to compare different optimization methods in terms of their ability to find the global optimum, or values close to it, as well as the computational time necessary to reach them. It is possible to estimate the optimization performance for different numbers of adjustable parameters. The functionality of ConvAn enables statistical assessment of the necessary optimization time depending on the required optimization accuracy. Optimization methods that are not suitable for a particular optimization task can be rejected if they have poor repeatability or convergence properties. The software ConvAn is freely available on www.biosystems.lv/convan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
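
    A minimal sketch of the kind of convergence analysis ConvAn supports (not the tool itself): repeat a stochastic optimization many times, record the best-so-far objective value at each iteration, and summarize repeatability with median and percentile bands. The toy objective and random-search optimizer are assumptions.

```python
# Convergence-curve sketch: repeated runs of a toy stochastic optimizer,
# summarized by median and percentile bands of the best-so-far objective.
import numpy as np

def objective(x):
    return np.sum(x**2) + 5 * np.sin(3 * x).sum()   # assumed nonlinear test objective

def random_search(n_iter, dim, rng):
    """Toy stochastic optimizer; returns the best-so-far convergence curve."""
    best = np.inf
    curve = np.empty(n_iter)
    for i in range(n_iter):
        best = min(best, objective(rng.uniform(-5, 5, size=dim)))
        curve[i] = best
    return curve

rng = np.random.default_rng(42)
runs = np.array([random_search(500, dim=4, rng=rng) for _ in range(30)])

median = np.median(runs, axis=0)
q10, q90 = np.percentile(runs, [10, 90], axis=0)
print("median best objective after 500 iterations:", median[-1])
print("10th-90th percentile spread (repeatability):", q90[-1] - q10[-1])
```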

  4. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
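
    The sample entropy statistic at the center of this approach can be computed directly; a compact implementation is sketched below (the DASim schedules, sensitivity analysis, and harmony search are not reproduced, only the regularity measure).

```python
# Compact sample entropy (SampEn) implementation: -ln(A/B), where B counts matching
# template pairs of length m and A counts matching pairs of length m+1.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()          # common default tolerance

    def count_matches(mm):
        # use the same number (n - m) of templates for lengths m and m + 1
        templates = np.array([x[i:i + mm] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r)
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(1)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))          # highly regular signal
irregular = rng.normal(size=1000)                           # irregular signal
print(sample_entropy(regular), sample_entropy(irregular))   # lower SampEn = more regular
```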

  5. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  6. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. A.

    1999-04-01

    In Part 1 of this work ([Akmaev, 1999]), an overview of the theory of optimal interpolation (OI) ([Gandin, 1963]) and related techniques of data assimilation based on linear optimal estimation ([Liebelt, 1967]; [Catlin, 1989]; [Mendel, 1995]) is presented. The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain optimal in some sense estimates of the true state from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information as, for example, the conventional least squares (e.g., [Liebelt, 1967]; [Weisberg, 1985]; [Press et al., 1992]; [Mendel, 1995]).
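
    A minimal sketch of the basic OI (linear minimum-variance) update described above: the analysis equals the background plus a gain times the observation-minus-background misfit. The grid, background covariance, and observation values are illustrative assumptions.

```python
# Basic optimal-interpolation (minimum-variance) update on a 1-D grid; the background
# covariance, observation errors, and values are illustrative assumptions.
import numpy as np

n = 50
xgrid = np.linspace(0.0, 1.0, n)
xb = np.zeros(n)                            # background (first guess)

L, sigma_b = 0.1, 1.0                       # assumed correlation length and background error
B = sigma_b**2 * np.exp(-0.5 * (xgrid[:, None] - xgrid[None, :])**2 / L**2)

obs_idx = np.array([5, 20, 35, 48])         # sparse observation locations
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0   # observation operator (point sampling)
R = 0.1**2 * np.eye(len(obs_idx))           # observation error covariance
y = np.array([0.8, -0.3, 0.5, 0.1])         # hypothetical observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # optimal (minimum-variance) gain
xa = xb + K @ (y - H @ xb)                       # analysis fills data voids between obs
Pa = (np.eye(n) - K @ H) @ B                     # analysis error covariance
print("analysis at obs points:", xa[obs_idx])
print("error variance reduced from", np.diag(B)[obs_idx], "to", np.round(np.diag(Pa)[obs_idx], 3))
```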

  7. Inverse Statistics and Asset Allocation Efficiency

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam

    In this paper, using inverse statistics analysis, the effect of investment horizon on the efficiency of portfolio selection is examined. Inverse statistics analysis is a general tool, also known as the probability distribution of exit times, used to determine the distribution of the time at which a stochastic process exits a given zone. This analysis was used in Refs. 1 and 2 for studying financial returns time series. The distribution provides an optimal investment horizon, which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account leads to greater efficiency for large portfolios in the TSE, whereas for stocks selected from the S&P 500, regardless of portfolio size, this strategy does not produce more efficient portfolios; instead, longer investment horizons provide more efficiency.
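
    A minimal sketch of the inverse-statistics (exit-time) calculation on synthetic log-returns; a real analysis would use TSE or S&P 500 price series, and the drift, volatility, and return level below are assumptions.

```python
# Exit-time (inverse statistics) sketch on synthetic log-returns: for each starting day,
# find the first day the cumulative log-return reaches rho; the mode of the resulting
# distribution is the optimal investment horizon.
import numpy as np

rng = np.random.default_rng(7)
log_returns = rng.normal(loc=0.0002, scale=0.01, size=20_000)   # assumed daily log-returns
rho = 0.05                                                      # target return level

def exit_times(returns, rho, max_horizon=500):
    cum = np.cumsum(returns)
    times = []
    for start in range(len(returns) - max_horizon):
        path = cum[start:start + max_horizon] - (cum[start - 1] if start > 0 else 0.0)
        hit = np.nonzero(path >= rho)[0]
        if hit.size:
            times.append(hit[0] + 1)        # days until the level is first reached
    return np.array(times)

taus = exit_times(log_returns, rho)
hist, edges = np.histogram(taus, bins=np.arange(1, 200))
optimal_horizon = edges[np.argmax(hist)]
print(f"most likely horizon for a {rho:.0%} gain: ~{optimal_horizon} days")
```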

  8. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Summary Dynamic treatment regimes (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
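
    A minimal sketch of one common estimator for this setting, two-stage Q-learning with linear working models, fitted to simulated SMART-like data by backward induction; the data-generating model and covariate names are assumptions, not the paper's examples.

```python
# Two-stage Q-learning sketch with linear working models on simulated SMART-like data.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x1 = rng.normal(size=n)                          # baseline covariate
a1 = rng.choice([-1, 1], size=n)                 # stage-1 randomized treatment
x2 = 0.5 * x1 + 0.3 * a1 + rng.normal(size=n)    # intermediate outcome
a2 = rng.choice([-1, 1], size=n)                 # stage-2 randomized treatment
# Assumed outcome model: stage-2 effect depends on x2, stage-1 effect on x1.
y = x1 + a1 * (0.6 - x1) + a2 * (x2 - 0.4) + rng.normal(size=n)

def fit_ols(features, target):
    beta, *_ = np.linalg.lstsq(features, target, rcond=None)
    return beta

# Stage 2: regress Y on history and stage-2 treatment interactions.
F2 = np.column_stack([np.ones(n), x1, a1, x2, a2, a2 * x2])
b2 = fit_ols(F2, y)
# Pseudo-outcome: predicted value when the optimal stage-2 decision is taken.
pseudo = F2 @ b2 - (b2[4] + b2[5] * x2) * a2 + np.abs(b2[4] + b2[5] * x2)

# Stage 1: regress the pseudo-outcome on baseline history and stage-1 treatment.
F1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
b1 = fit_ols(F1, pseudo)

print(f"estimated stage-2 rule: choose a2=+1 when {b2[4]:.2f} + {b2[5]:.2f}*x2 > 0")
print(f"estimated stage-1 rule: choose a1=+1 when {b1[2]:.2f} + {b1[3]:.2f}*x1 > 0")
```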

  9. Global atmospheric circulation statistics, 1000-1 mb

    NASA Technical Reports Server (NTRS)

    Randel, William J.

    1992-01-01

    The atlas presents atmospheric general circulation statistics derived from twelve years (1979-90) of daily National Meteorological Center (NMC) operational geopotential height analyses; it is an update of a prior atlas using data over 1979-1986. These global analyses are available on pressure levels covering 1000-1 mb (approximately 0-50 km). The geopotential grids are a combined product of the Climate Analysis Center (which produces analyses over 70-1 mb) and operational NMC analyses (over 1000-100 mb). Balance horizontal winds and hydrostatic temperatures are derived from the geopotential fields.

  10. Empirical analyses on the development trend of non-ferrous metal industry under China’s new normal

    NASA Astrophysics Data System (ADS)

    Li, C. X.; Liu, C. X.; Zhang, Q. L.

    2017-08-01

    The CGE model of Yunnan's macro economy was constructed based on the input-output data of Yunnan in 2012, and the development trend of the non-ferrous metals industry (NMI) under China's new normal was simulated. On this basis, the impacts of different expected economic growth rates and of economic structure optimization on the development of Yunnan's NMI were simulated. The results show that the NMI growth rate is expected to decline when economic growth trends downward, but the change in its share of the economy is relatively small. Moreover, when the sectoral proportions are adjusted to optimize the economic structure, the proportion of NMI in GDP declines. Overall, the biggest influence on the NMI is the change in economic structure. The statistics of the last two years show that the NMI is growing while its proportion is declining, which is consistent with the simulation results. However, the adjustment of the economic structure will take a long time. It is necessary to increase the share of deep-processing industry, extend the industrial chain, and enhance the value chain so as to make good use of the resource advantage.

  11. Pareto-optimal phylogenetic tree reconciliation

    PubMed Central

    Libeskind-Hadas, Ran; Wu, Yi-Chieh; Bansal, Mukul S.; Kellis, Manolis

    2014-01-01

    Motivation: Phylogenetic tree reconciliation is a widely used method for reconstructing the evolutionary histories of gene families and species, hosts and parasites and other dependent pairs of entities. Reconciliation is typically performed using maximum parsimony, in which each evolutionary event type is assigned a cost and the objective is to find a reconciliation of minimum total cost. It is generally understood that reconciliations are sensitive to event costs, but little is understood about the relationship between event costs and solutions. Moreover, choosing appropriate event costs is a notoriously difficult problem. Results: We address this problem by giving an efficient algorithm for computing Pareto-optimal sets of reconciliations, thus providing the first systematic method for understanding the relationship between event costs and reconciliations. This, in turn, results in new techniques for computing event support values and, for cophylogenetic analyses, performing robust statistical tests. We provide new software tools and demonstrate their use on a number of datasets from evolutionary genomic and cophylogenetic studies. Availability and implementation: Our Python tools are freely available at www.cs.hmc.edu/∼hadas/xscape. Contact: mukul@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24932009
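
    The notion of Pareto-optimality used above can be illustrated with a small dominance filter over hypothetical reconciliation event counts; this shows the concept only, not the event-cost-space algorithms implemented in the xscape tools.

```python
# Dominance filter: keep reconciliations not dominated in every event-count dimension.
import numpy as np

def pareto_front(costs):
    """Boolean mask of non-dominated rows (lower is better in each column)."""
    costs = np.asarray(costs, dtype=float)
    keep = np.ones(len(costs), dtype=bool)
    for i, c in enumerate(costs):
        if keep[i]:
            dominated = np.all(costs >= c, axis=1) & np.any(costs > c, axis=1)
            keep &= ~dominated
            keep[i] = True
    return keep

# Hypothetical (duplication, transfer, loss) counts for candidate reconciliations.
counts = np.array([[3, 1, 5], [2, 2, 6], [4, 1, 4], [3, 1, 4], [5, 0, 7]])
print(counts[pareto_front(counts)])   # only the non-dominated reconciliations remain
```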

  12. Production of a cellulase-free alkaline xylanase from Bacillus pumilus MTCC 5015 by submerged fermentation and its application in biobleaching.

    PubMed

    Thomas, Leya; Sindhu, Raveendran; Binod, Parameswaran; Pandey, Ashok

    2015-06-01

    Here, we describe the production of a cellulase-free alkaline xylanase from Bacillus pumilus MTCC 5015 by submerged fermentation and its application in biobleaching. Various process parameters affecting xylanase production by B. pumilus were optimized by adopting a Plackett-Burman design (PBD) as well as response surface methodology (RSM). These statistical methods aid in improving the enzyme yield by analysing the individual crucial components of the medium. Maximum production was obtained with 4% yeast extract, 0.08% magnesium sulphate, 30 h of inoculum age, an incubation temperature of 33.5 degrees C and pH 9.0. Under optimized conditions, the xylanase activity was 372 IU/ml. Media engineering yielded a 5-fold increase in enzyme production. Scanning electron microscopy (SEM) showed significant changes on the surface of xylanase-treated pulps as a result of xylan hydrolysis. Increased roughness of paper carton fibres was apparent in the scanning electron micrographs due to opening of the microfibrils present on the surface by xylanase action. The untreated pulp did not show any such change. These results demonstrated that the B. pumilus MTCC 5015 xylanase was effective in bio-bleaching of paper carton.

  13. Carbon flow analysis and Carbon emission reduction of FCC in Chinese oil refineries

    NASA Astrophysics Data System (ADS)

    Jia, Fengrui; Wei, Na; Ma, Danzhu; Liu, Guangxin; Wu, Ming; Yue, Qiang

    2017-08-01

    The major problem of energy production in Chinese oil refineries is the high emission of CO2. The fluid catalytic cracking unit (FCC) is the key source of carbon emissions in oil refineries. According to the statistical data, the FCC unit accounts for more than 31% of the carbon emissions of a typical oil refinery. The carbon flows of FCC units in typical Chinese oil refineries were evaluated and analysed with the aim of reducing CO2 emissions. Substance flow analysis (SFA) and mathematical programming were used to evaluate the carbon metabolism and optimize the carbon emissions. The results indicated that combustion emissions from the reaction-regeneration subsystem (RRS) were the major source within the FCC, with the RRS accounting for more than 90% of CO2 emissions. According to the optimized analysis of carbon emission reduction, the combustion efficiency and the amount of residual oil affected the carbon emissions of the RRS the most. Moreover, the fractionation subsystem (TFS) had the highest environmental efficiency, and the absorption-stabilization subsystem (ASS) had the highest carbon resource efficiency (approximately 1).

  14. Evaluation of the Gini Coefficient in Spatial Scan Statistics for Detecting Irregularly Shaped Clusters

    PubMed Central

    Kim, Jiyu; Jung, Inkyung

    2017-01-01

    Spatial scan statistics with circular or elliptic scanning windows are commonly used for cluster detection in various applications, such as the identification of geographical disease clusters from epidemiological data. It has been pointed out that the method may have difficulty in correctly identifying non-compact, arbitrarily shaped clusters. In this paper, we evaluated the Gini coefficient for detecting irregularly shaped clusters through a simulation study. The Gini coefficient, the use of which in spatial scan statistics was recently proposed, is a criterion measure for optimizing the maximum reported cluster size. Our simulation study results showed that using the Gini coefficient works better than the original spatial scan statistic for identifying irregularly shaped clusters, by reporting an optimized and refined collection of clusters rather than a single larger cluster. We have provided a real data example that seems to support the simulation results. We think that using the Gini coefficient in spatial scan statistics can be helpful for the detection of irregularly shaped clusters. PMID:28129368
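
    The Gini coefficient used as the criterion above is the standard inequality measure; a compact implementation is sketched below, applied to an illustrative vector of cluster values (the spatial scan machinery itself is not reproduced).

```python
# Standard Gini coefficient for a vector of nonnegative values (0 = perfect equality).
import numpy as np

def gini(values):
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    # Rank-based formula: G = 2 * sum(i * v_i) / (n * sum(v)) - (n + 1) / n
    return 2.0 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1.0) / n

print(gini([1.0, 1.1, 0.9, 1.0]))   # near-uniform values -> small Gini
print(gini([0.2, 0.3, 0.2, 5.0]))   # one dominant value -> larger Gini
```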

  15. A Synergy Cropland of China by Fusing Multiple Existing Maps and Statistics.

    PubMed

    Lu, Miao; Wu, Wenbin; You, Liangzhi; Chen, Di; Zhang, Li; Yang, Peng; Tang, Huajun

    2017-07-12

    Accurate information on cropland extent is critical for scientific research and resource management. Several cropland products from remotely sensed datasets are available. Nevertheless, significant inconsistency exists among these products and the cropland areas estimated from these products differ considerably from statistics. In this study, we propose a hierarchical optimization synergy approach (HOSA) to develop a hybrid cropland map of China, circa 2010, by fusing five existing cropland products, i.e., GlobeLand30, Climate Change Initiative Land Cover (CCI-LC), GlobCover 2009, MODIS Collection 5 (MODIS C5), and MODIS Cropland, and sub-national statistics of cropland area. HOSA simplifies the widely used method of score assignment into two steps, including determination of optimal agreement level and identification of the best product combination. The accuracy assessment indicates that the synergy map has higher accuracy of spatial locations and better consistency with statistics than the five existing datasets individually. This suggests that the synergy approach can improve the accuracy of cropland mapping and enhance consistency with statistics.

  16. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
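
    A minimal sketch of the global-plus-local strategy in the spirit of the Particle Swarm/Levenberg-Marquardt coupling described above: a small hand-rolled particle swarm explores the parameter space and a gradient-based local optimizer polishes the best point. This is an illustration under an assumed objective, not the MADS implementation (which couples PSO with Levenberg-Marquardt directly).

```python
# Global particle-swarm exploration followed by a gradient-based local polish
# (illustration only; scipy's L-BFGS-B stands in for the Levenberg-Marquardt step).
import numpy as np
from scipy.optimize import minimize

def objective(x):   # assumed calibration misfit
    return np.sum((x - np.array([1.5, -2.0]))**2) + 0.3 * np.sin(5 * x).sum()**2

def pso(f, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest

bounds = np.array([[-10.0, 10.0], [-10.0, 10.0]])
x_global = pso(objective, bounds)                                   # global exploration
local = minimize(objective, x_global, method="L-BFGS-B",
                 bounds=[tuple(b) for b in bounds])                 # local refinement
print("PSO estimate:", x_global, "-> refined:", local.x)
```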

  17. Missing value imputation for microarray data: a comprehensive comparison study and a web tool

    PubMed Central

    2013-01-01

    Background Microarray data are usually peppered with missing values due to various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed for improving the performance of microarray data analyses. Although many algorithms have been developed, there are many debates on the selection of the optimal algorithm. The studies about the performance comparison of different algorithms are still incomprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. Results In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether the datasets from different species have different impact on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of a dataset but not on the species where the samples come from. In addition to the statistical measure, two other measures with biological meanings are useful to reflect the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices to handle missing values for most of the microarray datasets. Conclusions In this work, we carried out a comprehensive comparison of the algorithms for microarray missing value imputation. Based on such a comprehensive comparison, researchers could choose the optimal algorithm for their datasets easily. Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses. PMID:24565220
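
    A simplified sketch of a local-least-squares (LLS) style imputation, the family of methods the study recommends: each gene with missing entries is regressed on its k most similar complete genes over the observed columns, and the fit predicts the missing columns. MissVIA and the published LLS variants are more elaborate; the data below are synthetic.

```python
# Simplified local-least-squares style imputation on a synthetic expression matrix.
import numpy as np

def lls_impute(X, k=10):
    X = np.array(X, dtype=float)
    complete = X[~np.isnan(X).any(axis=1)]                 # candidate neighbor genes
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        obs = ~np.isnan(X[i])
        # similarity: Euclidean distance to the target gene on its observed columns
        d = np.linalg.norm(complete[:, obs] - X[i, obs], axis=1)
        nbr = complete[np.argsort(d)[:k]]
        # least-squares fit on observed columns, then predict the missing ones
        coef, *_ = np.linalg.lstsq(nbr[:, obs].T, X[i, obs], rcond=None)
        X[i, ~obs] = nbr[:, ~obs].T @ coef
    return X

rng = np.random.default_rng(5)
data = rng.normal(size=(200, 20))
data[rng.random(data.shape) < 0.05] = np.nan               # ~5% missing entries
imputed = lls_impute(data)
print("remaining NaNs:", np.isnan(imputed).sum())
```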

  18. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    PubMed Central

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  19. Optimal cut-off points for waist circumference in the definition of metabolic syndrome in Brazilian adults: baseline analyses of the Longitudinal Study of Adult Health (ELSA-Brasil).

    PubMed

    Cardinal, Thiane Ristow; Vigo, Alvaro; Duncan, Bruce Bartholow; Matos, Sheila Maria Alvim; da Fonseca, Maria de Jesus Mendes; Barreto, Sandhi Maria; Schmidt, Maria Inês

    2018-01-01

    Waist circumference (WC) has been incorporated in the definition of the metabolic syndrome (MetS), but the exact WC cut-off points across populations are not clear. The Joint Interim Statement (JIS) suggested possible cut-offs for different populations and ethnic groups. However, the adequacy of these cut-offs for Brazilian adults has been scarcely investigated. The objective of the study is to evaluate possible WC thresholds to be used in the definition of MetS using data from the Longitudinal Study of Adult Health (ELSA-Brasil), a multicenter cohort study of civil servants (35-74 years old) of six Brazilian cities. We analyzed baseline data from 14,893 participants (6772 men and 8121 women). MetS was defined according to the JIS criteria, but excluding WC and thus requiring 2 of the 4 remaining elements. We used restricted cubic spline regression to graph the relationship between WC and MetS. We identified optimal cut-off points which maximized joint sensitivity and specificity (Youden's index) from Receiver Operator Characteristic curves. We also estimated the C-statistics using logistic regression. We found no apparent threshold for WC in restricted cubic spline plots. The optimal cut-off for men was 92 cm (2 cm lower than that recommended by JIS for Caucasian/Europid or Sub-Saharan African men, but 2 cm higher than that recommended for ethnic Central and South American men). For women, the optimal cut-off was 86 cm, 6 cm higher than that recommended for Caucasian/Europid and ethnic Central and South American women. Optimal cut-offs did not vary across age groups and most common race/color categories (except for Asian men, 87 cm). Sex-specific cut-offs for WC recommended by JIS differ from the optimal cut-offs we found for adult men and women of Brazil's most common ethnic groups.
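
    A minimal sketch of the Youden-index cut-off selection used above, on simulated waist-circumference data: the optimal threshold maximizes sensitivity + specificity - 1 along the ROC curve. The distributions and prevalence are assumptions, not ELSA-Brasil data.

```python
# Youden-index cut-off on simulated waist circumference (cm) for MetS vs. no MetS.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(11)
wc_no_mets = rng.normal(88, 9, size=4000)
wc_mets = rng.normal(98, 9, size=1500)
wc = np.concatenate([wc_no_mets, wc_mets])
mets = np.concatenate([np.zeros(4000, dtype=int), np.ones(1500, dtype=int)])

fpr, tpr, thresholds = roc_curve(mets, wc)
youden = tpr - fpr                                  # Youden's J at each candidate threshold
best = np.argmax(youden)
print(f"optimal cut-off ~ {thresholds[best]:.1f} cm, J = {youden[best]:.2f}, "
      f"C-statistic = {roc_auc_score(mets, wc):.2f}")
```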

  20. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
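
    A minimal sketch of Granger-Ramanathan style averaging on synthetic hydrographs: the combination weights are the least-squares coefficients of the observed flows regressed on the member simulations (roughly variant A, unconstrained and without an intercept; the study compares several variants).

```python
# Granger-Ramanathan style weights: least-squares regression of observed flows on
# the member simulations (synthetic hydrographs stand in for the real ensemble).
import numpy as np

rng = np.random.default_rng(8)
n_days, n_members = 3650, 12
obs = 10 + 5 * np.sin(np.linspace(0, 60, n_days)) + rng.gamma(2.0, 1.0, n_days)
members = np.column_stack([obs * rng.uniform(0.7, 1.3) + rng.normal(0, 2, n_days)
                           for _ in range(n_members)])        # biased/noisy members

weights, *_ = np.linalg.lstsq(members, obs, rcond=None)        # GRA-style weights
averaged = members @ weights

def nse(sim, o):
    """Nash-Sutcliffe efficiency."""
    return 1 - np.sum((o - sim)**2) / np.sum((o - o.mean())**2)

print("best single member NSE:", max(nse(members[:, j], obs) for j in range(n_members)))
print("weighted average NSE:  ", nse(averaged, obs))
```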

  1. Generation and optimization of superpixels as image processing kernels for Jones matrix optical coherence tomography

    PubMed Central

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa; Yasuno, Yoshiaki

    2017-01-01

    Jones matrix-based polarization sensitive optical coherence tomography (JM-OCT) simultaneously measures optical intensity, birefringence, degree of polarization uniformity, and OCT angiography. The statistics of the optical features in a local region, such as the local mean of the OCT intensity, are frequently used for image processing and the quantitative analysis of JM-OCT. Conventionally, local statistics have been computed with fixed-size rectangular kernels. However, this results in a trade-off between image sharpness and statistical accuracy. We introduce a superpixel method to JM-OCT for generating flexible kernels for local statistics. A superpixel is a cluster of image pixels that is formed by the pixels' spatial and signal value proximities. An algorithm for superpixel generation specialized for JM-OCT and its optimization methods are presented in this paper. The spatial proximity is in two-dimensional cross-sectional space and the signal values are the four optical features. Hence, the superpixel method is a six-dimensional clustering technique for JM-OCT pixels. The performance of the JM-OCT superpixels and their optimization methods is evaluated in detail using JM-OCT datasets of posterior eyes. The superpixels were found to preserve tissue structures well, such as layer structures, sclera, vessels, and retinal pigment epithelium. Hence, they are more suitable as local statistics kernels than conventional uniform rectangular kernels. PMID:29082073

  2. Expected p-values in light of an ROC curve analysis applied to optimal multiple testing procedures.

    PubMed

    Vexler, Albert; Yu, Jihnhee; Zhao, Yang; Hutson, Alan D; Gurevich, Gregory

    2017-01-01

    Many statistical studies report p-values for inferential purposes. In several scenarios, the stochastic aspect of p-values is neglected, which may contribute to drawing wrong conclusions in real data experiments. The stochastic nature of p-values makes it difficult to use them to examine the performance of given testing procedures or associations between investigated factors. We turn our focus to the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we prove that the EPV can be considered in the context of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. The ROC-based framework provides a new and efficient methodology for investigating and constructing statistical decision-making procedures, including: (1) evaluation and visualization of properties of the testing mechanisms, considering, e.g. partial EPVs; (2) developing optimal tests via the minimization of EPVs; (3) creation of novel methods for optimally combining multiple test statistics. We demonstrate that the proposed EPV-based approach allows us to maximize the integrated power of testing algorithms with respect to various significance levels. In an application, we use the proposed method to construct the optimal test and analyze a myocardial infarction disease dataset. We outline the usefulness of the "EPV/ROC" technique for evaluating different decision-making procedures, their constructions and properties with an eye towards practical applications.
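
    The expected p-value itself is easy to approximate by Monte Carlo; the sketch below averages two-sample t-test p-values over repeated samples drawn under a fixed alternative, showing that the EPV shrinks as the test becomes more powerful. The effect sizes and sample size are assumptions.

```python
# Monte Carlo estimate of the expected p-value (EPV) of a two-sample t-test under a
# fixed alternative; smaller EPV indicates a better-performing test.
import numpy as np
from scipy.stats import ttest_ind

def expected_p_value(effect=0.5, n=30, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    pvals = np.empty(n_sim)
    for k in range(n_sim):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(effect, 1.0, n)
        pvals[k] = ttest_ind(x, y).pvalue
    return pvals.mean()

for effect in (0.0, 0.3, 0.5, 0.8):
    print(f"effect = {effect:.1f}  EPV ~ {expected_p_value(effect):.3f}")
# Under the null (effect = 0) the EPV is about 0.5; it shrinks as power grows.
```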

  3. Factoring handedness data: II. Geschwind's multidimensional hypothesis.

    PubMed

    Messinger, H B; Messinger, M I

    1996-06-01

    The challenge in this journal by Peters and Murphy to the validity of two published factor analyses of handedness data, on the grounds of bimodality, was dealt with in Part I by identifying measures to normalize the handedness item distributions. A new survey using Oldfield's questionnaire format had 38 bell-shaped (unimodal) handedness-item distributions and 11 that were only marginally bimodal out of the 55 items used in Geschwind's 1986 study. Yet the distributions were still non-normal and the factor analysis was unsatisfactory; bimodality is not the only problem. By choosing a transformation for each item that was optimal as assessed by D'Agostino's K2 statistic, all but two items could be normalized. Seven factors were derived that showed high congruence between maximum likelihood and principal components extractions, before and after varimax rotation. Geschwind's assertion that handedness is not unidimensional is therefore supported.
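
    A minimal sketch of transformation selection guided by the D'Agostino-Pearson K2 statistic (available as scipy.stats.normaltest): for each item, keep the candidate transform with the smallest K2. The candidate transforms and the simulated item are assumptions, not the survey data.

```python
# Pick, for each item, the transform whose D'Agostino-Pearson K2 is smallest.
import numpy as np
from scipy.stats import normaltest

def best_transform(x):
    x = np.asarray(x, dtype=float)
    shifted = x - x.min() + 1.0            # make values positive for log/sqrt/reciprocal
    candidates = {
        "identity": x,
        "log": np.log(shifted),
        "sqrt": np.sqrt(shifted),
        "reciprocal": 1.0 / shifted,
    }
    scores = {name: normaltest(v).statistic for name, v in candidates.items()}
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(2)
skewed_item = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # a skewed item score
name, scores = best_transform(skewed_item)
print("chosen transform:", name)
print({k: round(v, 1) for k, v in scores.items()})
```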

  4. Rapid assessment of tinnitus-related psychological distress using the Mini-TQ.

    PubMed

    Hiller, Wolfgang; Goebel, Gerhard

    2004-01-01

    The aim of this study was to develop an abridged version of the Tinnitus Questionnaire (TQ) to be used as a quick tool for the assessment of tinnitus-related psychological distress. Data from 351 inpatients and 122 outpatients with chronic tinnitus were used to analyse item statistics and psychometric properties. Twelve items with an optimal combination of high item-total correlations, reliability and sensitivity in assessing changes were selected for the Mini-TQ. Correlation with the full TQ was >0.90, and test-retest reliability was 0.89. Validity was confirmed by associations with general psychological symptom patterns. Treatment effects indicated by the Mini-TQ were slightly greater than those indicated by the full TQ. The Mini-TQ is recommended as a psychometrically approved and solid tool for rapid and economical assessment of subjective tinnitus distress.

  5. Surface Sites for Engineering Allosteric Control in Proteins

    PubMed Central

    Lee, Jeeyeon; Natarajan, Madhusudan; Nashine, Vishal C.; Socolich, Michael; Vo, Tina; Russ, William P.; Benkovic, Stephen J.; Ranganathan, Rama

    2010-01-01

    Statistical analyses of protein families reveal networks of coevolving amino acids that functionally link distantly positioned functional surfaces. Such linkages suggest a concept for engineering allosteric control into proteins: The intramolecular networks of two proteins could be joined across their surface sites such that the activity of one protein might control the activity of the other. We tested this idea by creating PAS-DHFR, a designed chimeric protein that connects a light-sensing signaling domain from a plant member of the Per/Arnt/Sim (PAS) family of proteins with Escherichia coli dihydrofolate reductase (DHFR). With no optimization, PAS-DHFR exhibited light-dependent catalytic activity that depended on the site of connection and on known signaling mechanisms in both proteins. PAS-DHFR serves as a proof of concept for engineering regulatory activities into proteins through interface design at conserved allosteric sites. PMID:18927392

  6. Effect of keyswitch design of desktop and notebook keyboards related to key stiffness and typing force.

    PubMed

    Bufton, Marcia J; Marklin, Richard W; Nagurka, Mark L; Simoneau, Guy G

    2006-08-15

    This study aimed to compare and analyse rubber-dome desktop, spring-column desktop and notebook keyboards in terms of key stiffness and fingertip typing force. The spring-column keyboard resulted in the highest mean peak contact force (0.86N), followed by the rubber dome desktop (0.68N) and the notebook (0.59N). All these differences were statistically significant. Likewise, the spring-column keyboard registered the highest fingertip typing force and the notebook keyboard the lowest. A comparison of forces showed the notebook (rubber dome) keyboard had the highest fingertip-to-peak contact force ratio (overstrike force), and the spring-column generated the least excess force (as a ratio of peak contact force). The results of this study could aid in optimizing computer key design that could possibly reduce subject discomfort and fatigue.

  7. Throughput-optimal scheduling for broadcast channels

    NASA Astrophysics Data System (ADS)

    Eryilmaz, Atilla; Srikant, Rayadurgam; Perkins, James R.

    2001-07-01

    In this paper, we consider a degraded Gaussian broadcast channel, where the transmitter maintains separate queues for each receiver. We present throughput optimal policies that stabilize the queues without knowing the statistics of the arrival processes to these queues.

  8. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  9. Secondary Analysis of National Longitudinal Transition Study 2 Data

    ERIC Educational Resources Information Center

    Hicks, Tyler A.; Knollman, Greg A.

    2015-01-01

    This review examines published secondary analyses of National Longitudinal Transition Study 2 (NLTS2) data, with a primary focus upon statistical objectives, paradigms, inferences, and methods. Its primary purpose was to determine which statistical techniques have been common in secondary analyses of NLTS2 data. The review begins with an…

  10. A Nonparametric Geostatistical Method For Estimating Species Importance

    Treesearch

    Andrew J. Lister; Rachel Riemann; Michael Hoppus

    2001-01-01

    Parametric statistical methods are not always appropriate for conducting spatial analyses of forest inventory data. Parametric geostatistical methods such as variography and kriging are essentially averaging procedures, and thus can be affected by extreme values. Furthermore, non normal distributions violate the assumptions of analyses in which test statistics are...

  11. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    ERIC Educational Resources Information Center

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  12. Comparison of Kalman filter and optimal smoother estimates of spacecraft attitude

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1994-01-01

    Given a valid system model and adequate observability, a Kalman filter will converge toward the true system state with error statistics given by the estimated error covariance matrix. The errors generally do not continue to decrease. Rather, a balance is reached between the gain of information from new measurements and the loss of information during propagation. The errors can be further reduced, however, by a second pass through the data with an optimal smoother. This algorithm obtains the optimally weighted average of forward and backward propagating Kalman filters. It roughly halves the error covariance by including future as well as past measurements in each estimate. This paper investigates whether such benefits actually accrue in the application of an optimal smoother to spacecraft attitude determination. Tests are performed both with actual spacecraft data from the Extreme Ultraviolet Explorer (EUVE) and with simulated data for which the true state vector and noise statistics are exactly known.
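
    A scalar illustration of the filter-versus-smoother comparison: a random-walk state observed with noise is estimated by a Kalman filter and then by an RTS (forward-backward) smoother, whose steady-state error variance is roughly half the filter's. The noise levels are arbitrary assumptions, not EUVE values.

```python
# Random-walk state observed with noise: Kalman filter, then RTS smoother.
import numpy as np

rng = np.random.default_rng(4)
n, q, r = 300, 0.01, 1.0                      # steps, process and measurement variances
x_true = np.cumsum(rng.normal(0, np.sqrt(q), n))
z = x_true + rng.normal(0, np.sqrt(r), n)

xf, Pf = np.zeros(n), np.zeros(n)             # filtered estimates and variances
xp, Pp = np.zeros(n), np.zeros(n)             # predicted (prior) estimates and variances
x_est, P_est = 0.0, 10.0
for k in range(n):
    xp[k], Pp[k] = x_est, P_est + q           # predict (state transition = identity)
    K = Pp[k] / (Pp[k] + r)                   # Kalman gain
    x_est = xp[k] + K * (z[k] - xp[k])        # measurement update
    P_est = (1 - K) * Pp[k]
    xf[k], Pf[k] = x_est, P_est

xs, Ps = xf.copy(), Pf.copy()                 # backward RTS smoother pass
for k in range(n - 2, -1, -1):
    C = Pf[k] / Pp[k + 1]                     # smoother gain
    xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    Ps[k] = Pf[k] + C**2 * (Ps[k + 1] - Pp[k + 1])

print("mean filtered error variance:", Pf[50:].mean())
print("mean smoothed error variance:", Ps[50:-50].mean())   # roughly half, away from the ends
```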

  13. Statistical optimization of lovastatin production by Omphalotus olearius (DC.) singer in submerged fermentation.

    PubMed

    Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemhen, Omoanghe S

    2016-01-01

    In this study, culture conditions were optimized to improve lovastatin production by Omphalotus olearius, isolate OBCC 2002, using statistical experimental designs. The Plackett-Burman design was used to select important variables affecting lovastatin production. Accordingly, glucose, peptone, and agitation speed were determined as the variables that have influence on lovastatin production. In a further experiment, these variables were optimized with a Box-Behnken design and applied in a submerged process; this resulted in 12.51 mg/L lovastatin production on a medium containing glucose (10 g/L), peptone (5 g/L), thiamine (1 mg/L), and NaCl (0.4 g/L) under static conditions. This level of lovastatin production is eight times higher than that produced under unoptimized media and growth conditions by Omphalotus olearius. To the best of our knowledge, this is the first attempt to optimize submerged fermentation process for lovastatin production by Omphalotus olearius.

  14. Optimal experimental designs for fMRI when the model matrix is uncertain.

    PubMed

    Kao, Ming-Hung; Zhou, Lin

    2017-07-15

    This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design) and the subject's uncertain feedback (e.g. answer) to each mental stimulus (e.g. question) presented to her/him. While practically important, this design issue is challenging, mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this challenging issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs optimizing this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Statistical optimization of beta-carotene production by Arthrobacter agilis A17 using response surface methodology and Box-Behnken design

    NASA Astrophysics Data System (ADS)

    Özdal, Murat; Özdal, Özlem Gür; Gürkök, Sümeyra

    2017-04-01

    β-carotene is a commercially important natural pigment and has been widely applied in the medicine, pharmaceutical, food, feed and cosmetic industries. The current study aimed to investigate the usability of molasses for β-carotene production by Arthrobacter agilis A17 (KP318146) and to optimize the production process. A Box-Behnken design within response surface methodology was used to determine the optimum levels and the interactions of three independent variables, namely molasses, yeast extract and KH2PO4, at three different levels. The β-carotene yield in the optimized medium containing 70 g/l molasses, 25 g/l yeast extract and 0.96 g/l KH2PO4 reached up to 100 mg/l, which is approximately 2.5-fold higher than the yield obtained from the control cultivation. Remarkable β-carotene production on an inexpensive carbon source was achieved with the use of statistical optimization.
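
    A minimal sketch of the response-surface step: fit a second-order polynomial to coded design runs and locate the predicted optimum on a grid. A small coded factorial and synthetic yields stand in for the actual Box-Behnken runs and measured β-carotene concentrations.

```python
# Fit a second-order response surface to coded runs and locate the predicted optimum.
import numpy as np
from itertools import product

def quad_features(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])

# Coded levels (-1, 0, +1) for molasses, yeast extract, KH2PO4; synthetic yields (mg/L).
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
rng = np.random.default_rng(6)
y = (90 - 8 * X[:, 0]**2 - 5 * X[:, 1]**2 - 3 * X[:, 2]**2
     + 4 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 1.5, len(X)))

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=3)))
pred = quad_features(grid) @ beta
best = grid[np.argmax(pred)]
print("predicted optimum (coded levels):", best, "yield ~", round(float(pred.max()), 1), "mg/L")
```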

  16. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
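
    A minimal sketch of first-order statistical moment propagation as described above: for independent normal inputs, the output mean is approximated by f evaluated at the input means and the variance by the sum of squared sensitivity derivatives times the input variances. A cheap analytic function and finite-difference sensitivities stand in for the CFD code and its sensitivity derivatives.

```python
# First-order moment propagation: mean ~ f(mu), variance ~ sum((df/dx_i)^2 * sigma_i^2).
import numpy as np

def f(x):                                     # placeholder for a CFD output quantity
    mach, alpha = x
    return 0.1 + 0.08 * alpha + 0.3 * (mach - 0.7)**2

mu = np.array([0.75, 2.0])                    # input means (e.g. Mach number, angle of attack)
sigma = np.array([0.01, 0.1])                 # input standard deviations

def gradient(func, x, h=1e-6):
    """Central finite-difference sensitivities (stand-in for analytic derivatives)."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (func(x + e) - func(x - e)) / (2 * h)
    return g

g = gradient(f, mu)
mean_out = f(mu)                              # first-order approximation of the output mean
var_out = np.sum(g**2 * sigma**2)             # first-order approximation of the output variance

rng = np.random.default_rng(9)                # Monte Carlo check of the approximation
samples = rng.normal(mu, sigma, size=(100_000, 2))
mc = f(samples.T)
print(f"approx: mean={mean_out:.4f}, sd={np.sqrt(var_out):.4f}")
print(f"MC:     mean={mc.mean():.4f}, sd={mc.std():.4f}")
```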

  17. Program document for Energy Systems Optimization Program 2 (ESOP2). Volume 1: Engineering manual

    NASA Technical Reports Server (NTRS)

    Hamil, R. G.; Ferden, S. L.

    1977-01-01

    The Energy Systems Optimization Program, which is used to provide analyses of Modular Integrated Utility Systems (MIUS), is discussed. Modifications to the input format to allow modular inputs in specified blocks of data are described. An optimization feature which enables the program to search automatically for the minimum value of one parameter while varying the value of other parameters is reported. New program option flags for prime mover analyses and solar energy for space heating and domestic hot water are also covered.

  18. An Improved LC-ESI-MS/MS Method to Quantify Pregabalin in Human Plasma and Dry Plasma Spot for Therapeutic Monitoring and Pharmacokinetic Applications.

    PubMed

    Dwivedi, Jaya; Namdev, Kuldeep K; Chilkoti, Deepak C; Verma, Surajpal; Sharma, Swapnil

    2018-06-06

    Therapeutic drug monitoring (TDM) of anti-epileptic drugs provides a valid clinical tool for the optimization of overall therapy. However, TDM is challenging due to the high storage/shipment costs of biological samples (plasma/blood) and the limited availability of laboratories providing TDM services. Sampling in the form of dry plasma spots (DPS) or dry blood spots (DBS) is a suitable alternative to overcome these issues. An improved, simple, rapid, and stability-indicating method for quantification of pregabalin in human plasma and DPS has been developed and validated. Analyses were performed on a liquid chromatography tandem mass spectrometer under the positive ionization mode of an electrospray interface. Pregabalin-d4 was used as the internal standard, and the chromatographic separations were performed on a Poroshell 120 EC-C18 column using an isocratic mobile phase flow rate of 1 mL/min. Stability of pregabalin in DPS was evaluated under simulated real-time conditions. Extraction procedures from plasma and DPS samples were compared using statistical tests. The method was validated considering the FDA method validation guideline. The method was linear over the concentration ranges of 20-16000 ng/mL and 100-10000 ng/mL in plasma and DPS, respectively. DPS samples were found stable for only one week upon storage at room temperature and for at least four weeks at freezing temperature (-20 ± 5 °C). The method was applied for quantification of pregabalin in over 600 samples of a clinical study. Statistical analyses revealed that the two extraction procedures in plasma and DPS samples showed a statistically insignificant difference and can be used interchangeably without any bias. The proposed method involves simple and rapid sample processing steps that do not require a pre- or post-column derivatization procedure. The method is suitable for routine pharmacokinetic analysis and therapeutic monitoring of pregabalin.

  19. Interferon alpha adjuvant therapy in patients with high-risk melanoma: a systematic review and meta-analysis.

    PubMed

    Mocellin, Simone; Pasquali, Sandro; Rossi, Carlo R; Nitti, Donato

    2010-04-07

    Based on previous meta-analyses of randomized controlled trials (RCTs), the use of interferon alpha (IFN-alpha) in the adjuvant setting improves disease-free survival (DFS) in patients with high-risk cutaneous melanoma. However, RCTs have yielded conflicting data on the effect of IFN-alpha on overall survival (OS). We conducted a systematic review and meta-analysis to examine the effect of IFN-alpha on DFS and OS in patients with high-risk cutaneous melanoma. The systematic review was performed by searching MEDLINE, EMBASE, Cancerlit, Cochrane, ISI Web of Science, and ASCO databases. The meta-analysis was performed using time-to-event data from which hazard ratios (HRs) and 95% confidence intervals (CIs) of DFS and OS were estimated. Subgroup and meta-regression analyses to investigate the effect of dose and treatment duration were also performed. Statistical tests were two-sided. The meta-analysis included 14 RCTs, published between 1990 and 2008, and involved 8122 patients, of whom 4362 were allocated to the IFN-alpha arm. IFN-alpha alone was compared with observation in 12 of the 14 trials, and 17 comparisons (IFN-alpha vs comparator) were generated in total. IFN-alpha treatment was associated with a statistically significant improvement in DFS in 10 of the 17 comparisons (HR for disease recurrence = 0.82, 95% CI = 0.77 to 0.87; P < .001) and improved OS in four of the 14 comparisons (HR for death = 0.89, 95% CI = 0.83 to 0.96; P = .002). No between-study heterogeneity in either DFS or OS was observed. No optimal IFN-alpha dose and/or treatment duration or a subset of patients more responsive to adjuvant therapy was identified using subgroup analysis and meta-regression. In patients with high-risk cutaneous melanoma, IFN-alpha adjuvant treatment showed statistically significant improvement in both DFS and OS.
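
    For readers unfamiliar with how such time-to-event results are pooled, the sketch below shows a fixed-effect, inverse-variance combination of log hazard ratios. The hazard ratios and confidence intervals in it are invented for illustration and are not the trial data analysed in this meta-analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-comparison hazard ratios with 95% CIs (illustrative only,
# not the trial data analysed in the meta-analysis above).
hr = np.array([0.78, 0.85, 0.90, 0.75])
ci_low = np.array([0.65, 0.72, 0.78, 0.60])
ci_high = np.array([0.94, 1.00, 1.04, 0.94])

log_hr = np.log(hr)
# Standard errors recovered from the width of the 95% CI on the log scale.
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

w = 1.0 / se ** 2                         # inverse-variance weights
pooled = np.sum(w * log_hr) / np.sum(w)   # pooled log hazard ratio
pooled_se = np.sqrt(1.0 / np.sum(w))
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))             # two-sided p-value

print(f"pooled HR = {np.exp(pooled):.2f}, "
      f"95% CI {np.exp(pooled - 1.96 * pooled_se):.2f} to "
      f"{np.exp(pooled + 1.96 * pooled_se):.2f}, p = {p:.3g}")
```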

  20. Aircraft Maneuvers for the Evaluation of Flying Qualities and Agility. Volume 1. Maneuver Development Process and Initial Maneuver Set

    DTIC Science & Technology

    1993-08-01

    subtitled "Simulation Data," consists of detailed infonrnation on the design parmneter variations tested, subsequent statistical analyses conducted...used with confidence during the design process. The data quality can be examined in various forms such as statistical analyses of measure of merit data...merit, such as time to capture or nmaximurn pitch rate, can be calculated from the simulation time history data. Statistical techniques are then used

  1. Comparative study of the craniofacial growth depending on the type of lactation received.

    PubMed

    Sánchez-Molins, M; Grau Carbó, J; Lischeid Gaig, C; Ustrell Torrent, J M

    2010-06-01

    Several organizations consider mother's milk the optimal nutrition source for newborns [AAP, 1998; Gartner et al., 1997; Mohrbacher and Stock, 2002; WHO, 1989]. However, there is little scientific evidence supporting the idea that breastfeeding has a positive influence on the development of the orofacial structures. In this case-control study (observational, analytical and retrospective), lateral teleradiographs of the cranium of 197 patients (106 breast-fed and 91 bottle-fed) were compared. Ricketts, Steiner and McNamara values were used for the cephalometric analysis. Differences between the two groups were analysed by applying the t-test and ANOVA. Statistical significance levels were set at p<0.05. Non-nutritive infantile sucking habits were also compared; differences between the two groups were analysed by applying the Chi-square test. First, the upper incisors were found to be protruded in the bottle-fed group. Second, subjects in the breast-fed group displayed a brachycephalic mandibular arch, while bottle-fed subjects had a dolichocephalic Steiner mandibular plane. Third, both facial depth and the distance from the pogonion to the nasion perpendicular showed a certain tendency toward a retruded mandible in the bottle-fed group. And fourth, the frequencies of dummy use and thumb sucking were greater in the bottle-fed group, although without statistical significance. In addition to the multiple advantages that mother's milk offers to newborns, breastfeeding also promotes correct orofacial development (not only the incisor position, but also the vertical and sagittal relations of the mandible with the upper maxilla and cranial base).

  2. Storage capacity of subcutaneous fat in Japanese adults.

    PubMed

    Sato, S; Demura, S; Nakai, M

    2015-08-01

    On the basis of our previous study, which examined the nonlinear relationship between visceral fat area (VFA) and percent regional fat mass in the trunk, we hypothesise the presence of some storage capacity of subcutaneous fat. This study aimed to examine the storage capacity of subcutaneous fat on the basis of subcutaneous fat area (SFA) and VFA in 791 Japanese adult males and 563 females. Regression analyses by using SFA as a dependent variable and VFA as an independent variable were performed for each group classified by visceral fat obesity (VO): the VO (VFA ⩾ 100 cm(2)) and the no-VO (NVO) groups. To statistically identify an optimal critical point for subcutaneous fat accumulation, we changed the cutoff point for the VO group from 50 to 150 cm(2) in 10-cm(2) increments and confirmed the significance of the correlation between SFA and VFA for each obesity group, the statistical difference in correlations between NVO and VO groups, and the goodness of fit for the two regression lines using the standard error of estimation values. These analyses were conducted for each sex and age (<65 and ⩾ 65 years) group. The critical point for subcutaneous fat accumulation appears at the following cutoff points of VFA: 130 cm(2) in <65-year-old males, 110 cm(2) in ⩾ 65-year-old males and 100 cm(2) in both female groups. These results suggest the presence of some storage capacity of subcutaneous fat. As a further application, these findings may serve to improve the risk assessment of obesity-related diseases.

  3. Attentional and Contextual Priors in Sound Perception.

    PubMed

    Wolmetz, Michael; Elhilali, Mounya

    2016-01-01

    Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.

  4. Attentional and Contextual Priors in Sound Perception

    PubMed Central

    Wolmetz, Michael; Elhilali, Mounya

    2016-01-01

    Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis. PMID:26882228

  5. How precise can atoms of a nanocluster be located in 3D using a tilt series of scanning transmission electron microscopy images?

    PubMed

    Alania, M; De Backer, A; Lobato, I; Krause, F F; Van Dyck, D; Rosenauer, A; Van Aert, S

    2017-10-01

    In this paper, we investigate how precisely atoms of a small nanocluster can ultimately be located in three dimensions (3D) from a tilt series of images acquired using annular dark field (ADF) scanning transmission electron microscopy (STEM). To this end, we derive an expression for the statistical precision with which the 3D atomic position coordinates can be estimated in a quantitative analysis. Evaluating this statistical precision as a function of the microscope settings also allows us to derive the optimal experimental design. In this manner, the optimal angular tilt range, required electron dose, optimal detector angles, and number of projection images can be determined. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  7. [Applications of mathematical statistics methods on compatibility researches of traditional Chinese medicines formulae].

    PubMed

    Mai, Lan-Yin; Li, Yi-Xuan; Chen, Yong; Xie, Zhen; Li, Jie; Zhong, Ming-Yu

    2014-05-01

    The compatibility of traditional Chinese medicine (TCM) formulae, which contains an enormous amount of information, constitutes a complex component system. Applying mathematical statistics methods to research on the compatibility of TCM formulae is of great significance for promoting the modernization of TCM and for improving clinical efficacy and formula optimization. As a tool for quantitative analysis, data inference and exploring the inherent rules of substances, mathematical statistics methods can be used to reveal the working mechanisms of the compatibility of TCM formulae both qualitatively and quantitatively. By reviewing studies that applied mathematical statistics methods, this paper summarizes the field from the perspectives of dosage optimization, efficacy, changes in chemical components, and the rules of incompatibility and contraindication of formulae, and provides references for further studying and revealing the working mechanisms and connotations of TCM.

  8. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    NASA Astrophysics Data System (ADS)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
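
    A minimal sketch of the generic statistical-optimization step underlying this kind of high-altitude initialization is given below: an observed profile and a background profile are combined with weights set by their error covariance matrices. The grid, covariances, and profiles are synthetic placeholders; the operational OPSv5.6 and dynamic algorithms involve considerably more detail.

```python
import numpy as np

def statistically_optimize(y_obs, x_bg, O, B):
    """Combine an observed profile with a background profile, weighting each
    by its error covariance matrix (generic statistical optimization step)."""
    K = B @ np.linalg.inv(B + O)            # weight given to the observation
    x_opt = x_bg + K @ (y_obs - x_bg)       # optimized profile
    post_cov = (np.eye(len(x_bg)) - K) @ B  # posterior error covariance
    return x_opt, post_cov

# Synthetic example: exponentially correlated background errors on a height
# grid and uncorrelated observation errors (values are placeholders).
z = np.linspace(40.0, 80.0, 41)                      # impact height, km
corr = np.exp(-np.abs(z[:, None] - z[None, :]) / 5.0)
B = (0.1 ** 2) * corr                                # background covariance
O = (0.3 ** 2) * np.eye(len(z))                      # observation covariance
x_bg = np.exp(-z / 7.0)                              # smooth background profile
y_obs = x_bg + np.random.default_rng(0).normal(0.0, 0.3, len(z))

x_opt, post_cov = statistically_optimize(y_obs, x_bg, O, B)
```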

  9. Optimism and well-being: a prospective multi-method and multi-dimensional examination of optimism as a resilience factor following the occurrence of stressful life events.

    PubMed

    Kleiman, Evan M; Chiara, Alexandra M; Liu, Richard T; Jager-Hyman, Shari G; Choi, Jimmy Y; Alloy, Lauren B

    2017-02-01

    Optimism has been conceptualised variously as positive expectations (PE) for the future, optimistic attributions, illusion of control, and self-enhancing biases. Relatively little research has examined these multiple dimensions of optimism in relation to psychological and physical health. The current study assessed the multi-dimensional nature of optimism within a prospective vulnerability-stress framework. Initial principal component analyses revealed the following dimensions: PEs, Inferential Style (IS), Sense of Invulnerability (SI), and Overconfidence (O). Prospective follow-up analyses demonstrated that PE was associated with fewer depressive episodes and moderated the effect of stressful life events on depressive symptoms. SI also moderated the effect of life stress on anxiety symptoms. Generally, our findings indicated that optimism is a multifaceted construct and not all forms of optimism have the same effects on well-being. Specifically, our findings indicated that PE may be the most relevant to depression, whereas SI may be the most relevant to anxiety.

  10. Systematic Analysis of Hollow Fiber Model of Tuberculosis Experiments.

    PubMed

    Pasipanodya, Jotam G; Nuermberger, Eric; Romero, Klaus; Hanna, Debra; Gumbo, Tawanda

    2015-08-15

    The in vitro hollow fiber system model of tuberculosis (HFS-TB), in tandem with Monte Carlo experiments, was introduced more than a decade ago. Since then, it has been used to perform a large number of tuberculosis pharmacokinetics/pharmacodynamics (PK/PD) studies that have not been subjected to systematic analysis. We performed a literature search to identify all HFS-TB experiments published between 1 January 2000 and 31 December 2012. There was no exclusion of articles by language. Bias minimization was according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Steps for reporting systematic reviews were followed. There were 22 HFS-TB studies published, of which 12 were combination therapy studies and 10 were monotherapy studies. There were 4 stand-alone Monte Carlo experiments that utilized quantitative output from the HFS-TB. All experiments reported drug pharmacokinetics, which recapitulated those encountered in humans. HFS-TB studies included log-phase growth studies under ambient air, semidormant bacteria at pH 5.8, and nonreplicating persisters at low oxygen tension of ≤ 10 parts per billion. The studies identified antibiotic exposures associated with optimal kill of Mycobacterium tuberculosis and suppression of acquired drug resistance (ADR) and informed predictions about optimal clinical doses, expected performance of standard doses and regimens in patients, and expected rates of ADR, as well as a proposal of new susceptibility breakpoints. The HFS-TB model offers the ability to perform PK/PD studies including humanlike drug exposures, to identify bactericidal and sterilizing effect rates, and to identify exposures associated with suppression of drug resistance. Because of the ability to perform repetitive sampling from the same unit over time, the HFS-TB vastly improves statistical power and facilitates the execution of time-to-event analyses and repeated event analyses, as well as dynamic system pharmacology mathematical models. © The Author 2015. Published by Oxford University Press on behalf of the Infectious Diseases Society of America. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    PubMed

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-section (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for CS and LT analyses, respectively. The renal function decline rates per year were 1.33 and 0.95 ml/min/year for CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that rates are different based on statistical analyses, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rates per year with age in adults. In conclusion, our findings indicated that one should be cautious in interpreting the renal function decline rate with aging information because its estimation was highly dependent on the statistical analyses. From our analyses, a population longitudinal analysis (e.g. random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during a chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
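
    The contrast between the two analyses can be sketched with statsmodels on synthetic data: a simple OLS fit to one observation per subject (cross-sectional) versus a random-coefficient mixed model on all repeated measurements (longitudinal). The data-generating values below are invented and do not reproduce the study's 1.33 and 0.95 mL/min/year estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic creatinine-clearance records: each subject declines at an
# individual rate around a population mean (all values are illustrative).
rows = []
for subj in range(300):
    age0 = rng.uniform(30, 80)
    slope = rng.normal(-1.0, 0.3)            # individual decline, mL/min/year
    base = rng.normal(120, 15)
    for visit in range(4):
        age = age0 + visit * 0.7
        rows.append({"subject": subj, "age": age,
                     "ccr": base + slope * (age - 30) + rng.normal(0, 5)})
df = pd.DataFrame(rows)

# Cross-sectional analysis: one observation per subject, simple linear regression.
cs = df.groupby("subject").first().reset_index()
cs_fit = smf.ols("ccr ~ age", data=cs).fit()

# Longitudinal analysis: random-coefficient model using all repeated measures.
lt_fit = smf.mixedlm("ccr ~ age", data=df, groups=df["subject"],
                     re_formula="~age").fit()

print("cross-sectional slope:", cs_fit.params["age"])
print("longitudinal slope:   ", lt_fit.params["age"])
```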

  12. Hyperparameterization of soil moisture statistical models for North America with Ensemble Learning Models (Elm)

    NASA Astrophysics Data System (ADS)

    Steinberg, P. D.; Brener, G.; Duffy, D.; Nearing, G. S.; Pelissier, C.

    2017-12-01

    Hyperparameterization of statistical models, i.e. automated model scoring and selection using approaches such as evolutionary algorithms, grid searches, and randomized searches, can improve forecast model skill by reducing errors associated with model parameterization, model structure, and the statistical properties of training data. Ensemble Learning Models (Elm), and the related Earthio package, provide a flexible interface for automating the selection of parameters and model structure for machine learning models common in climate science and land cover classification, offering convenient tools for loading NetCDF, HDF, Grib, or GeoTiff files, decomposition methods like PCA and manifold learning, and parallel training and prediction with unsupervised and supervised classification, clustering, and regression estimators. Continuum Analytics is using Elm to experiment with statistical soil moisture forecasting based on meteorological forcing data from NASA's North American Land Data Assimilation System (NLDAS). There, Elm uses the NSGA-2 multiobjective optimization algorithm to optimize the statistical preprocessing of forcing data and improve goodness-of-fit for statistical models (i.e. feature engineering). This presentation will discuss Elm and its components, including dask (distributed task scheduling), xarray (data structures for n-dimensional arrays), and scikit-learn (statistical preprocessing, clustering, classification, regression), and it will show how NSGA-2 is being used to automate the selection of statistical soil moisture forecast models for North America.
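
    As a rough sketch of the automated model scoring and selection idea, the example below tunes a random forest with scikit-learn's randomized search on synthetic stand-in data; Elm's own NSGA-2 machinery and the NLDAS forcing data are not reproduced here.

```python
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for meteorological forcing features and a soil moisture target.
X, y = make_regression(n_samples=500, n_features=8, noise=0.3, random_state=0)

# Randomized search over model structure/parameters, scored by cross-validated R^2.
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(2, 12),
        "min_samples_leaf": randint(1, 10),
    },
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```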

  13. How to make optimal use of maximal multipartite entanglement in clock synchronization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Changliang; Hofmann, Holger F.

    2014-12-04

    We introduce a multi-party quantum clock synchronization protocol that makes optimal use of the maximal multipartite entanglement of GHZ-type states. The measurement statistics of the protocol are analyzed and the efficiency is evaluated.

  14. Word lengths are optimized for efficient communication.

    PubMed

    Piantadosi, Steven T; Tily, Harry; Gibson, Edward

    2011-03-01

    We demonstrate a substantial improvement on one of the most celebrated empirical laws in the study of language, Zipf's 75-y-old theory that word length is primarily determined by frequency of use. In accord with rational theories of communication, we show across 10 languages that average information content is a much better predictor of word length than frequency. This indicates that human lexicons are efficiently structured for communication by taking into account interword statistical dependencies. Lexical systems result from an optimization of communicative pressures, coding meanings efficiently given the complex statistics of natural language use.

  15. Optimization of Statistical Methods Impact on Quantitative Proteomics Data.

    PubMed

    Pursiheimo, Anna; Vehmas, Anni P; Afzal, Saira; Suomi, Tomi; Chand, Thaman; Strauss, Leena; Poutanen, Matti; Rokka, Anne; Corthals, Garry L; Elo, Laura L

    2015-10-02

    As tools for quantitative label-free mass spectrometry (MS) rapidly develop, a consensus about the best practices is not apparent. In the work described here we compared popular statistical methods for detecting differential protein expression from quantitative MS data, using both controlled experiments with known quantitative differences for specific proteins used as standards and "real" experiments where differences in protein abundance are not known a priori. Our results suggest that data-driven reproducibility-optimization can consistently produce reliable differential expression rankings for label-free proteome tools and is straightforward in its application.

  16. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    ERIC Educational Resources Information Center

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  17. Multi-year objective analyses of warm season ground-level ozone and PM2.5 over North America using real-time observations and Canadian operational air quality models

    NASA Astrophysics Data System (ADS)

    Robichaud, A.; Ménard, R.

    2014-02-01

    Multi-year objective analyses (OA) on a high spatiotemporal resolution for the warm season period (1 May to 31 October) for ground-level ozone and for fine particulate matter (diameter less than 2.5 microns (PM2.5)) are presented. The OA used in this study combines model outputs from the Canadian air quality forecast suite with US and Canadian observations from various air quality surface monitoring networks. The analyses are based on an optimal interpolation (OI) with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The estimation of error statistics has been computed using a modified version of the Hollingsworth-Lönnberg (H-L) method. The error statistics are "tuned" using a χ2 (chi-square) diagnostic, a semi-empirical procedure that provides significantly better verification than without tuning. Successful cross-validation experiments were performed with an OA setup using 90% of data observations to build the objective analyses and with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite surface-derived or ground-based measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) and 0.7 μg m-3 (micrograms per cubic meter) for ozone and PM2.5, respectively, and a random error generally less than 9 ppbv for ozone and under 12 μg m-3 for PM2.5. This paper focuses on two applications: (1) presenting long-term averages of OA and analysis increments as a form of summer climatology; and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs. The results show that high percentiles of ozone and PM2.5 were both following a general decreasing trend in North America, with the eastern part of the United States showing the most widespread decrease, likely due to more effective pollution controls. Some locations, however, exhibited an increasing trend in the mean ozone and PM2.5, such as the northwestern part of North America (northwest US and Alberta). Conversely, the low percentiles are generally rising for ozone, which may be linked to the intercontinental transport of increased emissions from emerging countries. After removing the decadal trend, the inter-annual fluctuations of the high percentiles are largely explained by the temperature fluctuations for ozone and to a lesser extent by precipitation fluctuations for PM2.5. More interesting is the economic short-term change (as expressed by the variation of the US gross domestic product growth rate), which explains 37% of the total variance of inter-annual fluctuations of PM2.5 and 15% in the case of ozone.
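
    The core optimal interpolation update and the χ² diagnostic mentioned above can be sketched as follows on a synthetic one-dimensional field. The covariance shapes, station layout, and values are illustrative and not the operational configuration.

```python
import numpy as np

def oi_analysis(y, H, x_b, B, R):
    """Optimal interpolation: analysis field plus a chi-square diagnostic
    (chi2 per observation should be close to 1 if B and R are well tuned)."""
    d = y - H @ x_b                    # innovation (observation minus background)
    S = H @ B @ H.T + R                # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)     # gain matrix
    chi2_per_obs = d @ np.linalg.solve(S, d) / len(y)
    return x_b + K @ d, chi2_per_obs

# Synthetic 1D ozone-like field observed at a few stations (values illustrative).
n = 50
obs_idx = np.array([5, 15, 25, 35, 45])
truth = 40 + 10 * np.sin(np.linspace(0, np.pi, n))
x_b = np.full(n, 40.0)                                   # background field
H = np.zeros((len(obs_idx), n))
H[np.arange(len(obs_idx)), obs_idx] = 1.0                # observation operator
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
B = 9.0 * np.exp(-dist / 10.0)                           # background error covariance
R = 4.0 * np.eye(len(obs_idx))                           # observation error covariance
y = truth[obs_idx] + np.random.default_rng(0).normal(0, 2, len(obs_idx))

x_a, chi2_per_obs = oi_analysis(y, H, x_b, B, R)
# If chi2_per_obs deviates far from 1, the prescribed B and R need retuning.
print("chi-square per observation:", chi2_per_obs)
```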

  18. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. Conclusions IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise. PMID:27729304

  19. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix. IBMWA is a new alternative for data analytics software that automates descriptive, predictive, and visual analytics. This program is very user-friendly but requires data preprocessing, statistical conceptual understanding, and domain expertise.

  20. Optimization strategies based on sequential quadratic programming applied for a fermentation process for butanol production.

    PubMed

    Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens

    2009-11-01

    In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that despite the very similar solutions obtained with both strategies, the problems found with the strategy using the deterministic model, such as lack of convergence and high computational time, make the use of the optimization strategy with the statistical model, which showed to be robust and fast, more suitable for the flash fermentation process, being recommended for real-time applications coupling optimization and control.
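
    To make the problem formulation concrete, the sketch below casts a maximize-productivity-subject-to-conversion problem as a nonlinear program and solves it with SciPy's SQP implementation (SLSQP). The quadratic productivity and linear conversion surrogates are invented placeholders, not the deterministic or statistical models used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical response-surface (statistical) model: productivity and substrate
# conversion as functions of two scaled operating variables x[0], x[1]
# (coefficients are invented for illustration).
def productivity(x):
    return 2.0 + 3.0 * x[0] + 1.5 * x[1] - 2.0 * x[0] ** 2 - 0.8 * x[1] ** 2

def conversion(x):
    return 0.98 - 0.25 * x[0] + 0.05 * x[1]

target_conversion = 0.90

result = minimize(
    lambda x: -productivity(x),            # maximize by minimizing the negative
    x0=np.array([0.3, 0.3]),
    method="SLSQP",                        # sequential quadratic programming
    bounds=[(0.0, 1.0), (0.0, 1.0)],
    constraints=[{"type": "ineq",          # conversion(x) >= target
                  "fun": lambda x: conversion(x) - target_conversion}],
)
print(result.x, -result.fun)
```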

  1. "Real-time" disintegration analysis and D-optimal experimental design for the optimization of diclofenac sodium fast-dissolving films.

    PubMed

    El-Malah, Yasser; Nazzal, Sami

    2013-01-01

    The objective of this work was to study the dissolution and mechanical properties of fast-dissolving films prepared from a tertiary mixture of pullulan, polyvinylpyrrolidone and hypromellose. Disintegration studies were performed in real time by probe spectroscopy to detect the onset of film disintegration. Tensile strength and elastic modulus of the films were measured by texture analysis. Disintegration time of the films ranged from 21 to 105 seconds, whereas their mechanical properties ranged from approximately 2 to 49 MPa for tensile strength and 1 to 21 MPa% for Young's modulus. After generating polynomial models correlating the variables using a D-optimal mixture design, an optimal formulation with the desired responses was proposed by the statistical package. For validation, a new film formulation loaded with diclofenac sodium, based on the optimized composition, was prepared and tested for dissolution and tensile strength. Dissolution of the optimized film was found to commence almost immediately, with 50% of the drug released within one minute. Tensile strength and Young's modulus of the film were 11.21 MPa and 6.78 MPa%, respectively. Real-time spectroscopy in conjunction with statistical design was shown to be very efficient for the optimization and development of non-conventional intraoral delivery systems such as fast-dissolving films.

  2. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.

  3. Item response theory analysis of the life orientation test-revised: age and gender differential item functioning analyses.

    PubMed

    Steca, Patrizia; Monzani, Dario; Greco, Andrea; Chiesi, Francesca; Primi, Caterina

    2015-06-01

    This study is aimed at testing the measurement properties of the Life Orientation Test-Revised (LOT-R) for the assessment of dispositional optimism by employing item response theory (IRT) analyses. The LOT-R was administered to a large sample of 2,862 Italian adults. First, confirmatory factor analyses demonstrated the theoretical conceptualization of the construct measured by the LOT-R as a single bipolar dimension. Subsequently, IRT analyses for polytomous, ordered response category data were applied to investigate the items' properties. The equivalence of the items across gender and age was assessed by analyzing differential item functioning. Discrimination and severity parameters indicated that all items were able to distinguish people with different levels of optimism and adequately covered the spectrum of the latent trait. Additionally, the LOT-R appears to be gender invariant and, with minor exceptions, age invariant. Results provided evidence that the LOT-R is a reliable and valid measure of dispositional optimism. © The Author(s) 2014.

  4. Dispositional and explanatory style optimism as potential moderators of the relationship between hopelessness and suicidal ideation.

    PubMed

    Hirsch, Jameson K; Conner, Kenneth R

    2006-12-01

    To test the hypothesis that higher levels of optimism reduce the association between hopelessness and suicidal ideation, 284 college students completed self-report measures of optimism and Beck scales for hopelessness, suicidal ideation, and depression. A statistically significant interaction between hopelessness and one measure of optimism was obtained, consistent with the hypothesis that optimism moderates the relationship between hopelessness and suicidal ideation. Hopelessness is not inevitably associated with suicidal ideation. Optimism may be an important moderator of the association. The development of treatments to enhance optimism may complement standard treatments to reduce suicidality that target depression and hopelessness.

  5. Initial blood storage experiment

    NASA Technical Reports Server (NTRS)

    Surgenor, Douglas MACN.

    1988-01-01

    The design of the Initial Blood Storage Experiment (IBSE) was based upon a carefully controlled comparison between identical sets of human blood cell suspensions - red cells, white cell, and platelets - one set of which was transported aboard the Columbia on a 6 day 11 hour mission, and the other held on the ground. Both sets were carried inside stainless steel dewars within specially fabricated flight hardware. Individual bags of cell suspensions were randomly assigned with respect to ground vs orbit status, dewar chamber, and specific location within the dewar. To foster optimal preservation, each cell type was held under specific optimal conditions of pH, ionic strength, solute concentration, gas tension, and temperature. An added variable in this initial experiment was provided by the use of three different polymer/plasticizer formulations for the sealed bags which held the blood cells. At termination of the experiment, aliquots of the suspensions, identified only by code, were distributed to be assayed. Assays were selected to constitute a broad survey of cellular properties and thereby maximize the chances of detection of gravitational effects. A total of 74 different outcome measurements were reported for statistical analysis. When the measurements were completed, the results were entered into the IBSE data base, at which time the data were matched with the original blood bag numbers to determine their status with respect to polymer/plasticizer type, orbit status (orbit or ground), and storage position within the experimental hardware. The data were studied by analysis of variance. Initially, type of bag and orbital status were main factors; later more detailed analyses were made on specific issues such as position in the hardware and specific plastic. If the analysis of variance indicated a statistical significance at the 5 percent level the corresponding p-value was reported.

  6. Statistical detection of EEG synchrony using empirical bayesian inference.

    PubMed

    Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven

    2015-01-01

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, high dimensionality in PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit the complex dependence structure between hypotheses that vary in spectral, temporal and spatial dimensions. Previously, we showed that a hierarchical FDR and optimal discovery procedures could be effectively applied for PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) for PLV synchrony analysis, which computes FDR as a posterior probability that an observed statistic belongs to the null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach for PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
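
    A crude illustration of the local FDR idea (the posterior probability that a statistic comes from the null) is sketched below on synthetic z-scores. Real analyses, including Efron's locfdr, use more careful density and null estimation than this histogram-based stand-in.

```python
import numpy as np
from scipy import stats

def local_fdr(z, bins=60):
    """Crude local FDR: locfdr(z) = pi0 * f0(z) / f(z), with f estimated from a
    histogram of the observed statistics and f0 the theoretical N(0,1) null."""
    z = np.asarray(z, dtype=float)
    hist, edges = np.histogram(z, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    f = np.interp(z, centers, hist)               # mixture density estimate
    f0 = stats.norm.pdf(z)                        # theoretical null density
    # Conservative null-proportion estimate from the centre of the distribution.
    pi0 = min(1.0, np.mean(np.abs(z) < 1.0) /
              (stats.norm.cdf(1.0) - stats.norm.cdf(-1.0)))
    return np.clip(pi0 * f0 / np.maximum(f, 1e-12), 0.0, 1.0)

# Synthetic z-scores: mostly null plus a small block of true synchrony effects.
rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(0, 1, 9000), rng.normal(3, 1, 1000)])
lfdr = local_fdr(z)
print("discoveries at locfdr < 0.2:", int(np.sum(lfdr < 0.2)))
```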

  7. Granulocyte-colony stimulating factor in the prevention of postoperative infectious complications and sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). Protocol for a controlled clinical trial developed by consensus of an international study group. Part two: design of the study.

    PubMed

    Bauhofer, A; Lorenz, W; Stinner, B; Rothmund, M; Koller, M; Sitter, H; Celik, I; Farndon, J R; Fingerhut, A; Hay, J M; Lefering, R; Lorijn, R; Nyström, P O; Schäfer, H; Schein, M; Solomkin, J; Troidl, H; Volk, H D; Wittmann, D H; Wyatt, J

    2001-04-01

    Presentation of a new type of study protocol for evaluation of the effectiveness of an immune modifier (rhG-CSF, filgrastim): prevention of postoperative infectious complications and of sub-optimal recovery from operation in patients with colorectal cancer and increased preoperative risk (ASA 3 and 4). This part describes the design of the randomised, placebo controlled, double-blinded, single-centre study performed at a university hospital (n = 40 patients for each group). The trial design includes the following elements for a prototype protocol: * The study population is restricted to patients with colorectal cancer, including a left sided resection and an increased perioperative risk (ASA 3 and 4). * Patients are allocated by random to the control or treatment group. * The double blinding strategy of the trial is assessed by psychometric indices. * An endpoint construct with quality of life (EORTC QLQ-C30) and a recovery index (modified Mc Peek index) are used as primary endpoints. Qualitative analysis of clinical relevance of the endpoints is performed by both patients and doctors. * Statistical analysis uses an area under the curve (AUC) model for improvement of quality of life on leaving hospital and two and six months after operation. A confirmatory statistical model with quality of life as the first primary endpoint in the hierarchic test procedure is used. Expectations of patients and surgeons and the negative affect are analysed by social psychological scales. This study design differs from other trials on preoperative prophylaxis and postoperative recovery, and has been developed to try a new concept and avoid previous failures.

  8. Reward maximization justifies the transition from sensory selection at childhood to sensory integration at adulthood.

    PubMed

    Daee, Pedram; Mirian, Maryam S; Ahmadabadi, Majid Nili

    2014-01-01

    In a multisensory task, human adults integrate information from different sensory modalities (behaviorally, in an optimal Bayesian fashion), while children mostly rely on a single sensory modality for decision making. The reason behind this change of behavior over age and the process behind learning the required statistics for optimal integration are still unclear and have not been justified by conventional Bayesian modeling. We propose an interactive multisensory learning framework without making any prior assumptions about the sensory models. In this framework, learning in every modality and in their joint space is done in parallel using a single-step reinforcement learning method. A simple statistical test on confidence intervals on the mean of reward distributions is used to select the most informative source of information among the individual modalities and the joint space. Analyses of the method and the simulation results on a multimodal localization task show that the learning system autonomously starts with sensory selection and gradually switches to sensory integration. This is because relying more on individual modalities (i.e. selection) at early learning steps (childhood) is more rewarding than favoring decisions learned in the joint space, since the smaller state space of each modality results in faster learning. In contrast, after gaining sufficient experience (adulthood), the quality of learning in the joint space matures, while learning in individual modalities suffers from insufficient accuracy due to perceptual aliasing. This results in a tighter confidence interval for the joint space and consequently causes a smooth shift from selection to integration. The findings suggest that sensory selection and integration are emergent behaviors and both are outputs of a single reward-maximization process; i.e. the transition is not a preprogrammed phenomenon.
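
    A toy sketch of the selection rule implied above is given below: estimate a confidence interval on mean reward for each individual modality and for the joint space, then commit to whichever source currently looks most reliably rewarding. The source names, reward values, and the lower-confidence-bound criterion are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def pick_source(reward_history, z=1.96, min_samples=2):
    """Select the information source (an individual modality or the joint space)
    whose mean-reward confidence interval currently looks best, here judged by
    the highest lower confidence bound."""
    best, best_lcb = None, -np.inf
    for name, rewards in reward_history.items():
        r = np.asarray(rewards, dtype=float)
        if r.size < min_samples:
            return name                          # explore under-sampled sources first
        lcb = r.mean() - z * r.std(ddof=1) / np.sqrt(r.size)
        if lcb > best_lcb:
            best, best_lcb = name, lcb
    return best

# Toy usage: with little joint-space experience, the tighter single-modality
# estimates win (selection); as joint-space rewards accumulate and its interval
# tightens, it can take over (integration).
history = {"audio": [0.6, 0.7, 0.65], "visual": [0.55, 0.6], "joint": [0.2, 0.9]}
print(pick_source(history))
```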

  9. Facilitation of school re-entry and peer acceptance of children with cancer: a review and meta-analysis of intervention studies.

    PubMed

    Helms, A S; Schmiegelow, K; Brok, J; Johansen, C; Thorsteinsson, T; Simovska, V; Larsen, H B

    2016-01-01

    Increased survival rates from childhood cancer call for efforts to reintegrate children with cancer back into their academic and social environments. The aims of this study were to: (1) review and analyse the existing literature on school re-entry interventions for children with cancer; and (2) discuss the importance of peer involvement in the treatment. Relevant databases were searched using equivalent search algorithms and six studies were selected that target children with cancer and/or their classmates. Two authors independently reviewed the literature for data extraction. The articles were reviewed using the PRISMA model for reporting reviews. Statistical calculations for the meta-analyses were done using Review Manager 5.2. The meta-analyses showed significant effects of school re-entry programmes in terms of enhancing academic achievement in children with cancer (P = 0.008) and lowering their levels of depression (P = 0.05). Increased knowledge among classmates was associated with less fear and a more positive attitude towards the child with cancer. Due to limited numbers of patients, lack of control groups, and the diversity of intervention strategies used in previous studies, there is a need for intervention programmes exploring the optimal path for the reintegration of children with cancer into the education system and into their peer groups. © 2014 John Wiley & Sons Ltd.

  10. Influence of stromal refractive index and hydration on corneal laser refractive surgery.

    PubMed

    de Ortueta, Diego; von Rüden, Dennis; Magnago, Thomas; Arba Mosquera, Samuel

    2014-06-01

    To evaluate the influence of the stromal refractive index and hydration on postoperative outcomes in eyes that had corneal laser refractive surgery using the Amaris laser system. Augenzentrum Recklinghausen, Recklinghausen, Germany. Comparative case series. At the 6-month follow-up, right eyes were retrospectively analyzed. The effect of the stromal refractive index and hydration on refractive outcomes was assessed using univariate linear and multilinear correlations. Sixty eyes were analyzed. Univariate linear analyses showed that the stromal refractive index and hydration were correlated with the thickness of the preoperative exposed stroma and were statistically different for laser in situ keratomileusis and laser-assisted subepithelial keratectomy treatments. Univariate multilinear analyses showed that the spherical equivalent (SE) was correlated with the attempted SE and the stromal refractive index (or hydration). Analyses suggest overcorrections for higher stromal refractive index values and for lower hydration values. The stromal refractive index and hydration affected postoperative outcomes in a subtle yet significant manner. An adjustment toward greater attempted correction in highly hydrated corneas and less intended correction in poorly hydrated corneas might help optimize refractive outcomes. Mr. Magnago and Dr. Arba-Mosquera are employees of and Dr. Diego de Ortueta is a consultant to Schwind eye-tech-solutions GmbH & Co. KG. Mr. Rüden has no financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions and allows handling issues related to correlations among force field terms. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are under way.
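
    One of the ingredients named above, iterative Boltzmann inversion, amounts to a simple update of a tabulated potential. Below is a sketch under illustrative units and placeholder distributions; AsParaGS itself is not reproduced.

```python
import numpy as np

kT = 2.494  # kJ/mol at roughly 300 K (illustrative units)

def ibi_update(V, p_model, p_target, eps=1e-12):
    """One iterative Boltzmann inversion step for a tabulated potential V(r):
    V_new(r) = V(r) + kT * ln(p_model(r) / p_target(r))."""
    return V + kT * np.log((p_model + eps) / (p_target + eps))

# Placeholder distributions of one internal variable (e.g., a pseudo-bond length).
r = np.linspace(0.30, 0.60, 121)
p_target = np.exp(-((r - 0.45) / 0.02) ** 2)
p_target /= p_target.sum()
p_model = np.exp(-((r - 0.47) / 0.03) ** 2)
p_model /= p_model.sum()

V0 = -kT * np.log(p_target + 1e-12)     # Boltzmann-inverted initial guess
V1 = ibi_update(V0, p_model, p_target)  # potential after one correction step
```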

  12. Energy Efficiency Optimization in Relay-Assisted MIMO Systems With Perfect and Statistical CSI

    NASA Astrophysics Data System (ADS)

    Zappone, Alessio; Cao, Pan; Jorswieck, Eduard A.

    2014-01-01

    A framework for energy-efficient resource allocation in a single-user, amplify-and-forward relay-assisted MIMO system is devised in this paper. Previous results in this area have focused on rate maximization or sum power minimization problems, whereas fewer results are available when bits/Joule energy efficiency (EE) optimization is the goal. The performance metric to optimize is the ratio between the system's achievable rate and the total consumed power. The optimization is carried out with respect to the source and relay precoding matrices, subject to QoS and power constraints. Such a challenging non-convex problem is tackled by means of fractional programming and alternating maximization algorithms, for various CSI assumptions at the source and relay. In particular, the scenarios of perfect CSI and of statistical CSI for either the source-relay or the relay-destination channel are addressed. Moreover, sufficient conditions for beamforming optimality are derived, which is useful in simplifying the system design. Numerical results are provided to corroborate the validity of the theoretical findings.
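
    As a minimal illustration of the fractional-programming machinery, the sketch below runs a Dinkelbach-style loop on a toy scalar power-allocation stand-in (a single log-rate link with a linear power model). The precoding-matrix optimization, QoS constraints, and CSI cases from the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy single-link stand-in: achievable rate and total consumed power as
# functions of transmit power p (the paper optimizes precoding matrices instead).
g, mu, P_c = 4.0, 1.25, 1.0          # channel gain, amplifier inefficiency, static power
rate = lambda p: np.log2(1.0 + g * p)
power = lambda p: mu * p + P_c

def dinkelbach(p_max=10.0, tol=1e-6, max_iter=50):
    lam = 0.0                         # current energy-efficiency estimate
    p = 0.0
    for _ in range(max_iter):
        # Inner problem: maximize rate(p) - lam * power(p) over feasible p.
        res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                              bounds=(0.0, p_max), method="bounded")
        p = res.x
        if abs(rate(p) - lam * power(p)) < tol:
            break
        lam = rate(p) / power(p)      # Dinkelbach update of the EE parameter
    return p, rate(p) / power(p)

p_opt, ee = dinkelbach()
print("optimal power:", p_opt, "energy efficiency (bits/Joule):", ee)
```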

  13. Optimization of hole generation in Ti/CFRP stacks

    NASA Astrophysics Data System (ADS)

    Ivanov, Y. N.; Pashkov, A. E.; Chashhin, N. S.

    2018-03-01

    The article aims to describe methods for improving surface quality and hole accuracy in Ti/CFRP stacks by optimizing cutting methods and drill geometry. The research is based on the fundamentals of machine building, probability theory, mathematical statistics, and experiment planning and manufacturing process optimization theories. Statistical processing of the experimental data was carried out by means of Statistica 6 and Microsoft Excel 2010. Surface geometry in Ti stacks was analyzed using a Taylor Hobson Form Talysurf i200 Series Profilometer, and in CFRP stacks using a Bruker ContourGT-Kl Optical Microscope. Hole shapes and sizes were analyzed using a Carl Zeiss CONTURA G2 Measuring machine, and temperatures in cutting zones were recorded with a FLIR SC7000 Series Infrared Camera. Models of multivariate analysis of variance were developed; they show the effects of drilling modes on surface quality and on the accuracy of holes in Ti/CFRP stacks. The task of multicriteria drilling process optimization was solved, and optimal cutting technologies that improve performance were developed. Methods for assessing the effects of thermal tool and material expansion on the accuracy of holes in Ti/CFRP/Ti stacks were also developed.

  14. Bayesian Spatial Design of Optimal Deep Tubewell Locations in Matlab, Bangladesh.

    PubMed

    Warren, Joshua L; Perez-Heydrich, Carolina; Yunus, Mohammad

    2013-09-01

    We introduce a method for statistically identifying the optimal locations of deep tubewells (dtws) to be installed in Matlab, Bangladesh. Dtw installations serve to mitigate exposure to naturally occurring arsenic found at groundwater depths less than 200 meters, a serious environmental health threat for the population of Bangladesh. We introduce an objective function, which incorporates both arsenic level and nearest town population size, to identify optimal locations for dtw placement. Assuming complete knowledge of the arsenic surface, we then demonstrate how minimizing the objective function over a domain favors dtws placed in areas with high arsenic values and close to largely populated regions. Given only a partial realization of the arsenic surface over a domain, we use a Bayesian spatial statistical model to predict the full arsenic surface and estimate the optimal dtw locations. The uncertainty associated with these estimated locations is correctly characterized as well. The new method is applied to a dataset from a village in Matlab and the estimated optimal locations are analyzed along with their respective 95% credible regions.

  15. Structural Tailoring of Advanced Turboprops (STAT)

    NASA Technical Reports Server (NTRS)

    Brown, Kenneth W.

    1988-01-01

    This interim report describes the progress achieved in the Structural Tailoring of Advanced Turboprops (STAT) program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. This report provides a detailed description of the input, optimization procedures, approximate analyses and refined analyses, as well as validation test cases for the STAT program. In addition, conclusions and recommendations are summarized.

  16. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
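
    The core of the statistic is a between-phase standardized mean difference; the naive sketch below shows only that contrast for a single A-B series and omits the autocorrelation, between-case variance and small-sample corrections of the published estimator. The toy data are hypothetical.

      import numpy as np

      def naive_scd_d(baseline, treatment):
          """Naive standardized mean difference for one single-case A-B series.

          This is only the core contrast; the published d-statistic additionally
          corrects for autocorrelation, between-case variance and small-sample bias.
          """
          b = np.asarray(baseline, float)
          t = np.asarray(treatment, float)
          s_pooled = np.sqrt(((len(b) - 1) * b.var(ddof=1) + (len(t) - 1) * t.var(ddof=1))
                             / (len(b) + len(t) - 2))
          return (t.mean() - b.mean()) / s_pooled

      d = naive_scd_d([3, 4, 5, 4, 3], [7, 8, 9, 8, 7])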

  17. Piglet colibacillosis diagnosis based on multiplex polymerase chain reaction and immunohistochemistry of paraffin-embedded tissues

    PubMed Central

    de Andrade, Caroline P.; Machado, Verônica S. L.; Bianchi, Matheus V.; Rolim, Veronica M.; Cruz, Raquel A. S.; Driemeier, David

    2018-01-01

    Enterotoxigenic Escherichia coli (ETEC) causes diarrhea in pigs, referred to as colibacillosis. The aim of this study was to optimize multiplex polymerase chain reaction (PCR) and immunohistochemistry (IHC) analyses of paraffin-embedded material to detect pathogenic E. coli strains causing colibacillosis in pigs. Multiplex PCR was optimized for fimbriae (F18, F4, F6, F5, and F41) and toxins (types A and B heat-stable toxins [STaP and STb], heat-labile toxin [LT], and type 2 Shiga toxin [STx2e]), and IHC was optimized for an anti-E. coli polyclonal antibody. Samples (132) from pigs received between 2006 and 2014 with clinical and histopathological diagnoses of colibacillosis were analyzed. E. coli was detected by IHC in 78.7%, and at least one virulence factor gene was detected in 71.2%. Pathogenic strains of ETEC with at least one fimbria and one toxin were detected in 40% of the samples in multiplex PCR. The most frequent virulence types were F18-STaP (7.5%), F18-STaP-STb (5.7%), and F4-STaP (3.8%). A statistically significant association was noted between virulence factors F4, F18, STaP, and STb and positive immunostaining results. Colibacillosis diagnosis through multiplex PCR and IHC of paraffin-embedded tissues is a practical approach, as samples can be fixed and stored for long periods before analysis. PMID:28693311

  18. Incremental Validity and Informant Effect from a Multi-Method Perspective: Assessing Relations between Parental Acceptance and Children’s Behavioral Problems

    PubMed Central

    Izquierdo-Sotorrío, Eva; Holgado-Tello, Francisco P.; Carrasco, Miguel Á.

    2016-01-01

    This study examines the relationships between perceived parental acceptance and children’s behavioral problems (externalizing and internalizing) from a multi-informant perspective. Using mothers, fathers, and children as sources of information, we explore the informant effect and incremental validity. The sample was composed of 681 participants (227 children, 227 fathers, and 227 mothers). Children’s (40% boys) ages ranged from 9 to 17 years (M = 12.52, SD = 1.81). Parents and children completed both the Parental Acceptance Rejection/Control Questionnaire (PARQ/Control) and the check list of the Achenbach System of Empirically Based Assessment (ASEBA). Statistical analyses were based on the correlated uniqueness multitrait-multimethod matrix (model MTMM) by structural equations and different hierarchical regression analyses. Results showed a significant informant effect and a different incremental validity related to which combination of sources was considered. A multi-informant perspective rather than a single one increased the predictive value. Our results suggest that mother–father or child–father combinations seem to be the best way to optimize the multi-informant method in order to predict children’s behavioral problems based on perceived parental acceptance. PMID:27242582

  19. Incremental Validity and Informant Effect from a Multi-Method Perspective: Assessing Relations between Parental Acceptance and Children's Behavioral Problems.

    PubMed

    Izquierdo-Sotorrío, Eva; Holgado-Tello, Francisco P; Carrasco, Miguel Á

    2016-01-01

    This study examines the relationships between perceived parental acceptance and children's behavioral problems (externalizing and internalizing) from a multi-informant perspective. Using mothers, fathers, and children as sources of information, we explore the informant effect and incremental validity. The sample was composed of 681 participants (227 children, 227 fathers, and 227 mothers). Children's (40% boys) ages ranged from 9 to 17 years (M = 12.52, SD = 1.81). Parents and children completed both the Parental Acceptance Rejection/Control Questionnaire (PARQ/Control) and the check list of the Achenbach System of Empirically Based Assessment (ASEBA). Statistical analyses were based on the correlated uniqueness multitrait-multimethod matrix (model MTMM) by structural equations and different hierarchical regression analyses. Results showed a significant informant effect and a different incremental validity related to which combination of sources was considered. A multi-informant perspective rather than a single one increased the predictive value. Our results suggest that mother-father or child-father combinations seem to be the best way to optimize the multi-informant method in order to predict children's behavioral problems based on perceived parental acceptance.

  20. Quality evaluation of official accident reports conducted by Labour Authorities in Andalusia (Spain).

    PubMed

    Salguero-Caparros, Francisco; Suarez-Cebador, Manuel; Carrillo-Castrillo, Jesús A; Rubio-Romero, Juan Carlos

    2018-01-01

    A public accident investigation is carried out when the consequences of the incident are significant or the accident has occurred in unusual circumstances. We evaluated the quality of the official accident investigations being conducted by Safety Specialists of the Labour Authorities in Andalusia. To achieve this objective, we analysed 98 occupational accident investigations conducted by the Labour Authorities in Andalusia in the last quarter of 2014. Various phases in the accident investigation process were examined, such as the use of the Eurostat variables within European Statistics on Accidents at Work (ESAW), detection of causes, determination of preventive measures, cost analysis of the accidents, identification of noncompliance with legal requirements or the investigation method used. The results of this study show that 77% of the official occupational accident investigation reports analysed were conducted in accordance with all the quality criteria recommended in the literature. To enhance global learning and optimize the allocation of resources, we propose the development of a harmonized European model for the public investigation of occupational accidents. Further, it would be advisable to create a common classification and coding system for the causes of accidents for all European Union Member States.

  1. Computerized morphometry as an aid in distinguishing recurrent versus nonrecurrent meningiomas.

    PubMed

    Noy, Shawna; Vlodavsky, Euvgeni; Klorin, Geula; Drumea, Karen; Ben Izhak, Ofer; Shor, Eli; Sabo, Edmond

    2011-06-01

    To use novel digital and morphometric methods to identify variables able to better predict the recurrence of intracranial meningiomas. Histologic images from 30 previously diagnosed meningioma tumors that recurred over 10 years of follow-up were consecutively selected from the Rambam Pathology Archives. Images were captured and morphometrically analyzed. Novel algorithms of digital pattern recognition using Fourier transformation and fractal and nuclear texture analyses were applied to evaluate the overall growth pattern complexity of the tumors, as well as the chromatin texture of individual tumor nuclei. The extracted parameters were then correlated with patient prognosis. Kaplan-Meier analyses revealed statistically significant associations between tumor morphometric parameters and recurrence times. Tumors with less nuclear orientation, more nuclear density, higher fractal dimension, and less regular chromatin textures tended to recur faster than those with a higher degree of nuclear order, less pattern complexity, lower density, and more homogeneous chromatin nuclear textures (p < 0.01). To our knowledge, these digital morphometric methods were used for the first time to accurately predict tumor recurrence in patients with intracranial meningiomas. The use of these methods may bring additional valuable information to the clinician regarding the optimal management of these patients.

  2. Revealing biological information using data structuring and automated learning.

    PubMed

    Mohorianu, Irina; Moulton, Vincent

    2010-11-01

    The intermediary steps between a biological hypothesis, concretized in the input data, and meaningful results, validated using biological experiments, commonly employ bioinformatics tools. Starting with storage of the data and ending with a statistical analysis of the significance of the results, every step in a bioinformatics analysis has been intensively studied and the resulting methods and models patented. This review summarizes the bioinformatics patents that have been developed mainly for the study of genes, and points out the universal applicability of bioinformatics methods to other related studies such as RNA interference. More specifically, we overview the steps undertaken in the majority of bioinformatics analyses, highlighting, for each, various approaches that have been developed to reveal details from different perspectives. First we consider data warehousing, the first task that has to be performed efficiently, optimizing the structure of the database, in order to facilitate both the subsequent steps and the retrieval of information. Next, we review data mining, which occupies the central part of most bioinformatics analyses, presenting patents concerning differential expression, unsupervised and supervised learning. Last, we discuss how networks of interactions of genes or other players in the cell may be created, which help draw biological conclusions and have been described in several patents.

  3. Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    NASA Astrophysics Data System (ADS)

    Hoell, Simon; Omenzetter, Piotr

    2017-07-01

    Considering jointly damage sensitive features (DSFs) of signals recorded by multiple sensors, applying advanced transformations to these DSFs and assessing systematically their contribution to damage detectability and localisation can significantly enhance the performance of structural health monitoring systems. This philosophy is explored here for partial autocorrelation coefficients (PACCs) of acceleration responses. They are interrogated with the help of the linear discriminant analysis based on the Fukunaga-Koontz transformation using datasets of the healthy and selected reference damage states. Then, a simple but efficient fast forward selection procedure is applied to rank the DSF components with respect to statistical distance measures specialised for either damage detection or localisation. For the damage detection task, the optimal feature subsets are identified based on the statistical hypothesis testing. For damage localisation, a hierarchical neuro-fuzzy tool is developed that uses the DSF ranking to establish its own optimal architecture. The proposed approaches are evaluated experimentally on data from non-destructively simulated damage in a laboratory scale wind turbine blade. The results support our claim of being able to enhance damage detectability and localisation performance by transforming and optimally selecting DSFs. It is demonstrated that the optimally selected PACCs from multiple sensors or their Fukunaga-Koontz transformed versions can not only improve the detectability of damage via statistical hypothesis testing but also increase the accuracy of damage localisation when used as inputs into a hierarchical neuro-fuzzy network. Furthermore, the computational effort of employing these advanced soft computing models for damage localisation can be significantly reduced by using transformed DSFs.

  4. Simulation study to determine the impact of different design features on design efficiency in discrete choice experiments

    PubMed Central

    Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana

    2016-01-01

    Objectives Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. Design and methods A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2–20), alternatives (2–5), attributes (2–20) and attribute levels (2–5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Outcome Relative d-efficiency was used to measure the optimality of each DCE design. Results DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, where convergence may not be at 100% statistical optimality. Conclusions Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes is needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. PMID:27436671
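
    Relative d-efficiency for a coded design matrix is commonly computed as 100 * |X'X|^(1/p) / N, where p is the number of parameters and N the number of rows; the effects-coded toy design below is an assumption of this sketch rather than one of the 3204 DCE designs from the study.

      import numpy as np

      def relative_d_efficiency(X):
          """Relative D-efficiency (in %) of a coded design matrix X (rows = profiles,
          columns = effects-coded attribute parameters). 100% corresponds to an
          orthogonal, balanced design; the coding convention is a choice of this sketch."""
          n, p = X.shape
          info = X.T @ X
          return 100.0 * np.linalg.det(info) ** (1.0 / p) / n

      # toy example: a 2^3 full factorial in effects coding (orthogonal, hence 100%)
      levels = np.array([-1.0, 1.0])
      X = np.array([[a, b, c] for a in levels for b in levels for c in levels])
      print(relative_d_efficiency(X))  # -> 100.0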

  5. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy with respect to the CLFs is performed to select the CLFs that most strongly affect subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
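
    The subsystem energies referred to above follow from the SEA power balance P = omega * L * E, where L collects internal and coupling loss factors. The two-subsystem example below is a hedged sketch with made-up loss factors, not the helicopter-cabin model of the paper.

      import numpy as np

      def sea_energies(omega, eta_int, eta_coup, P_in):
          """Solve the SEA power balance  P = omega * L * E  for subsystem energies.

          eta_int  : internal loss factors eta_i (length n)
          eta_coup : coupling loss factors eta_ij (n x n, zero diagonal)
          P_in     : injected powers (length n)
          """
          n = len(eta_int)
          L = np.zeros((n, n))
          for i in range(n):
              L[i, i] = eta_int[i] + eta_coup[i].sum()
              for j in range(n):
                  if j != i:
                      L[i, j] = -eta_coup[j, i]
          return np.linalg.solve(omega * L, P_in)

      # two coupled subsystems at 1 kHz (toy numbers)
      omega = 2 * np.pi * 1000.0
      E = sea_energies(omega,
                       eta_int=np.array([0.01, 0.02]),
                       eta_coup=np.array([[0.0, 0.003], [0.001, 0.0]]),
                       P_in=np.array([1.0, 0.0]))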

  6. Secondary mediation and regression analyses of the PTClinResNet database: determining causal relationships among the International Classification of Functioning, Disability and Health levels for four physical therapy intervention trials.

    PubMed

    Mulroy, Sara J; Winstein, Carolee J; Kulig, Kornelia; Beneck, George J; Fowler, Eileen G; DeMuth, Sharon K; Sullivan, Katherine J; Brown, David A; Lane, Christianne J

    2011-12-01

    Each of the 4 randomized clinical trials (RCTs) hosted by the Physical Therapy Clinical Research Network (PTClinResNet) targeted a different disability group (low back disorder in the Muscle-Specific Strength Training Effectiveness After Lumbar Microdiskectomy [MUSSEL] trial, chronic spinal cord injury in the Strengthening and Optimal Movements for Painful Shoulders in Chronic Spinal Cord Injury [STOMPS] trial, adult stroke in the Strength Training Effectiveness Post-Stroke [STEPS] trial, and pediatric cerebral palsy in the Pediatric Endurance and Limb Strengthening [PEDALS] trial for children with spastic diplegic cerebral palsy) and tested the effectiveness of a muscle-specific or functional activity-based intervention on primary outcomes that captured pain (STOMPS, MUSSEL) or locomotor function (STEPS, PEDALS). The focus of these secondary analyses was to determine causal relationships among outcomes across levels of the International Classification of Functioning, Disability and Health (ICF) framework for the 4 RCTs. With the database from PTClinResNet, we used 2 separate secondary statistical approaches-mediation analysis for the MUSSEL and STOMPS trials and regression analysis for the STEPS and PEDALS trials-to test relationships among muscle performance, primary outcomes (pain related and locomotor related), activity and participation measures, and overall quality of life. Predictive models were stronger for the 2 studies with pain-related primary outcomes. Change in muscle performance mediated or predicted reductions in pain for the MUSSEL and STOMPS trials and, to some extent, walking speed for the STEPS trial. Changes in primary outcome variables were significantly related to changes in activity and participation variables for all 4 trials. Improvement in activity and participation outcomes mediated or predicted increases in overall quality of life for the 3 trials with adult populations. Variables included in the statistical models were limited to those measured in the 4 RCTs. It is possible that other variables also mediated or predicted the changes in outcomes. The relatively small sample size in the PEDALS trial limited statistical power for those analyses. Evaluating the mediators or predictors of change between each ICF level and for 2 fundamentally different outcome variables (pain versus walking) provided insights into the complexities inherent across 4 prevalent disability groups.
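
    For readers unfamiliar with the mediation step, the sketch below shows a bare product-of-coefficients mediation analysis with a Sobel-type standard error. The simulated data, variable names and two-path OLS set-up are assumptions of this illustration and omit the covariates and trial-specific modeling of the actual analyses.

      import numpy as np

      def simple_mediation(x, m, y):
          """Indirect effect a*b of x on y through mediator m, with Sobel SE.

          a : effect of x on m;  b : effect of m on y controlling for x.
          """
          x, m, y = (np.asarray(v, float) for v in (x, m, y))
          n = len(x)
          # a-path: m ~ x
          Xa = np.column_stack([np.ones(n), x])
          coef_a, res_a = np.linalg.lstsq(Xa, m, rcond=None)[:2]
          a = coef_a[1]
          se_a = np.sqrt(res_a[0] / (n - 2) * np.linalg.inv(Xa.T @ Xa)[1, 1])
          # b-path: y ~ x + m
          Xb = np.column_stack([np.ones(n), x, m])
          coef_b, res_b = np.linalg.lstsq(Xb, y, rcond=None)[:2]
          b = coef_b[2]
          se_b = np.sqrt(res_b[0] / (n - 3) * np.linalg.inv(Xb.T @ Xb)[2, 2])
          ab = a * b
          se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)   # Sobel approximation
          return ab, se_ab

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)
      m = 0.5 * x + rng.normal(size=200)
      y = 0.4 * m + rng.normal(size=200)
      print(simple_mediation(x, m, y))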

  7. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  8. Comments on `A Cautionary Note on the Interpretation of EOFs'.

    NASA Astrophysics Data System (ADS)

    Behera, Swadhin K.; Rao, Suryachandra A.; Saji, Hameed N.; Yamagata, Toshio

    2003-04-01

    The misleading aspects of the statistical analyses used by Dommenget and Latif, which raise concerns about some of the reported climate modes, are demonstrated. Using simple statistical techniques, the physical existence of the Indian Ocean dipole mode is shown, and the limitations of varimax and regression analyses in capturing this climate mode are then discussed.

  9. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    PubMed

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

    Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in searching for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than in others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  10. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    NASA Astrophysics Data System (ADS)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  11. Current Use of Underage Alcohol Compliance Checks by Enforcement Agencies in the U.S.

    PubMed Central

    Erickson, Darin J.; Lenk, Kathleen M.; Sanem, Julia R.; Nelson, Toben F.; Jones-Webb, Rhonda; Toomey, Traci L.

    2014-01-01

    Background Compliance checks conducted by law enforcement agents can significantly reduce the likelihood of illegal alcohol sales to underage individuals, but these checks need to be conducted using optimal methods to maintain effectiveness. Materials and Methods We conducted a national survey of local and state enforcement agencies in 2010–2011 to assess: (1) how many agencies are currently conducting underage alcohol compliance checks, (2) how many agencies that conduct compliance checks use optimal methods—including checking all establishments in the jurisdiction, conducting checks at least 3–4 times per year, conducting follow-up checks within 3 months, and penalizing the licensee (not only the server/clerk) for failing a compliance check, and (3) characteristics of the agencies that conduct compliance checks. Results Just over one third of local law enforcement agencies and over two thirds of state agencies reported conducting compliance checks. However, only a small percentage of the agencies (4–6%) reported using all of the optimal methods to maximize effectiveness of these compliance checks. Local law enforcement agencies with an alcohol-related division, those with at least one full-time officer assigned to work on alcohol, and those in larger communities were significantly more likely to conduct compliance checks. State agencies with more full-time agents and those located in states where the state agency or both state and local enforcement agencies have primary responsibility (vs. only the local law agency) for enforcing alcohol retail laws were also more likely to conduct compliance checks; however, these agency characteristics did not remain statistically significant in the multivariate analyses. Conclusions Continued effort is needed to increase the number of local and state agencies conducting compliance checks using optimal methods to reduce youth access to alcohol. PMID:24716443

  12. Factorial experimental design for the culture of human embryonic stem cells as aggregates in stirred suspension bioreactors reveals the potential for interaction effects between bioprocess parameters.

    PubMed

    Hunt, Megan M; Meng, Guoliang; Rancourt, Derrick E; Gates, Ian D; Kallos, Michael S

    2014-01-01

    Traditional optimization of culture parameters for the large-scale culture of human embryonic stem cells (ESCs) as aggregates is carried out in a stepwise manner whereby the effect of varying each culture parameter is investigated individually. However, as evidenced by the wide range of published protocols and culture performance indicators (growth rates, pluripotency marker expression, etc.), there is a lack of systematic investigation into the true effect of varying culture parameters especially with respect to potential interactions between culture variables. Here we describe the design and execution of a two-parameter, three-level (3²) factorial experiment resulting in nine conditions that were run in duplicate 125-mL stirred suspension bioreactors. The two parameters investigated here were inoculation density and agitation rate, which are easily controlled, but currently, poorly characterized. Cell readouts analyzed included fold expansion, maximum density, and exponential growth rate. Our results reveal that the choice of best case culture parameters was dependent on which cell property was chosen as the primary output variable. Subsequent statistical analyses via two-way analysis of variance indicated significant interaction effects between inoculation density and agitation rate specifically in the case of exponential growth rates. Results indicate that stepwise optimization has the potential to miss out on the true optimal case. In addition, choosing an optimum condition for a culture output of interest from the factorial design yielded similar results when repeated with the same cell line indicating reproducibility. We finally validated that human ESCs remain pluripotent in suspension culture as aggregates under our optimal conditions and maintain their differentiation capabilities as well as a stable karyotype and strong expression levels of specific human ESC markers over several passages in suspension bioreactors.
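
    A two-way ANOVA with an interaction term, of the kind used in these analyses, can be set up as sketched below. The growth-rate numbers and factor labels are invented for illustration and do not come from the study.

      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      # hypothetical duplicate-bioreactor readouts for a 3 x 3 factorial layout
      data = pd.DataFrame({
          "inoculation": ["low", "low", "mid", "mid", "high", "high"] * 3,
          "agitation":   ["80"] * 6 + ["100"] * 6 + ["120"] * 6,
          "growth_rate": [0.61, 0.63, 0.70, 0.72, 0.66, 0.65,
                          0.64, 0.66, 0.78, 0.80, 0.70, 0.69,
                          0.60, 0.62, 0.71, 0.73, 0.75, 0.77],
      })

      # two-way ANOVA with an inoculation x agitation interaction term
      model = smf.ols("growth_rate ~ C(inoculation) * C(agitation)", data=data).fit()
      print(anova_lm(model, typ=2))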

  13. Influence of micellar calcium and phosphorus on rennet coagulation properties of cows milk.

    PubMed

    Malacarne, Massimo; Franceschi, Piero; Formaggioni, Paolo; Sandri, Sandro; Mariani, Primo; Summer, Andrea

    2014-05-01

    The main requirement for milk processed in most cheese typologies is its rennet coagulation ability. Despite the increasing number of studies, the causes of abnormal coagulation of milk are not fully understood. The aim of this study was to ascertain relationships between milk characteristics and its rennet coagulation ability, focusing on the influence of calcium (Ca) and phosphorus (P). Ca and P are essential constituents of the micelles. Micellar P can be present as part of colloidal calcium phosphate (inorganic-P) or covalently bound to caseins as phosphate groups (casein-P). Eighty-one herd milk samples (SCC < 400,000 cells/ml) were classified as Optimal (8), Suboptimal (39), Poor (29) and Non-coagulating (5), according to their rennet coagulation parameters as assessed by a lactodynamographic test. Samples were analysed for their chemical composition (basic composition, protein fractions, minerals and salt equilibria), physicochemical parameters (pH and titratable acidity) and rheological properties. Optimal milk was characterised by the highest contents of major constituents, protein fractions and minerals, the lowest content of chloride and the highest values of titratable acidity. Non-coagulating milk was characterised by the highest values of pH and the lowest of titratable acidity. At the micellar level, Optimal milk showed the highest values of colloidal Ca, casein-P and colloidal Mg (g/100 g casein), while Non-coagulating milk showed the lowest values. Interestingly, there was no statistical difference regarding the content of colloidal inorganic-P (g/100 g casein) between Optimal and Non-coagulating milks. Overall, high mineralisation of the micelle (expressed as g inorganic-P/100 g casein) positively affects its rennetability. However, excessive mineralisation could lead to a reduction of the phosphate groups (g casein-P/100 g casein) available for curd formation.

  14. Optimizing donor scheduling before recruitment: An effective approach to increasing apheresis platelet collections.

    PubMed

    Lokhandwala, Parvez M; Shike, Hiroko; Wang, Ming; Domen, Ronald E; George, Melissa R

    2018-01-01

    The typical approach for increasing apheresis platelet collections is to recruit new donors. Here, we investigated the effectiveness of an alternative strategy: optimizing donor scheduling, prior to recruitment, at a hospital-based blood donor center. An analysis of collections during the 89 consecutive months since the opening of the donor center was performed. Linear regression and segmented time-series analyses were performed to calculate growth rates of collections and to test for statistical differences, respectively. Pre-intervention donor scheduling capacity was 39/month. In the absence of active donor recruitment, during the first 29 months, the number of collections rose gradually to 24/month (growth rate of 0.70/month). However, between months 30 and 55, collections exhibited a plateau at 25.6 ± 3.0 (growth rate of -0.09/month) (p<0.0001). This plateau phase coincided with the donor schedule approaching saturation (65.6 ± 7.6% of the schedule booked). Scheduling capacity was increased by the following two interventions: adding an apheresis instrument (month 56) and adding two more collection days per week (month 72). Consequently, the scheduling capacity increased to 130/month. Post-intervention, apheresis platelet collections between months 56 and 81 exhibited spontaneous renewed growth at a rate of 0.62/month (p<0.0001), in the absence of active donor recruitment. Active donor recruitment in months 82 and 86, when the donor schedule had been optimized to accommodate further growth, resulted in a dramatic but transient surge in collections. Apheresis platelet collections plateau at nearly two-thirds of the scheduling capacity. Optimizing scheduling capacity prior to active donor recruitment is an effective strategy to increase platelet collections at a hospital-based donor center.
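
    A minimal interrupted/segmented time-series fit of monthly collections, assuming a single known break month, is sketched below; the simulated series and the parameterization (level shift plus slope change) are assumptions of this example, not the study's exact model.

      import numpy as np
      import statsmodels.api as sm

      def segmented_trend(y, break_month):
          """Fit collections ~ time with a change in level and slope at `break_month`."""
          t = np.arange(1, len(y) + 1)
          post = (t >= break_month).astype(float)
          X = sm.add_constant(np.column_stack([t, post, post * (t - break_month)]))
          return sm.OLS(y, X).fit()

      # toy series: flat collections that start growing again after month 56
      rng = np.random.default_rng(1)
      y = np.concatenate([25 + rng.normal(0, 2, 55),
                          25 + 0.6 * np.arange(1, 35) + rng.normal(0, 2, 34)])
      fit = segmented_trend(y, break_month=56)
      print(fit.params)  # intercept, pre-trend, level change, slope change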

  15. Current use of underage alcohol compliance checks by enforcement agencies in the United States.

    PubMed

    Erickson, Darin J; Lenk, Kathleen M; Sanem, Julia R; Nelson, Toben F; Jones-Webb, Rhonda; Toomey, Traci L

    2014-06-01

    Compliance checks conducted by law enforcement agents can significantly reduce the likelihood of illegal alcohol sales to underage individuals, but these checks need to be conducted using optimal methods to maintain effectiveness. We conducted a national survey of local and state enforcement agencies from 2010 to 2011 to assess: (i) how many agencies are currently conducting underage alcohol compliance checks, (ii) how many agencies that conduct compliance checks use optimal methods-including checking all establishments in the jurisdiction, conducting checks at least 3 to 4 times per year, conducting follow-up checks within 3 months, and penalizing the licensee (not only the server/clerk) for failing a compliance check, and (iii) characteristics of the agencies that conduct compliance checks. Just over one-third of local law enforcement agencies and over two-thirds of state agencies reported conducting compliance checks. However, only a small percentage of the agencies (4 to 6%) reported using all of the optimal methods to maximize effectiveness of these compliance checks. Local law enforcement agencies with an alcohol-related division, those with at least 1 full-time officer assigned to work on alcohol, and those in larger communities were significantly more likely to conduct compliance checks. State agencies with more full-time agents and those located in states where the state agency or both state and local enforcement agencies have primary responsibility (vs. only the local law agency) for enforcing alcohol retail laws were also more likely to conduct compliance checks; however, these agency characteristics did not remain statistically significant in the multivariate analyses. Continued effort is needed to increase the number of local and state agencies conducting compliance checks using optimal methods to reduce youth access to alcohol. Copyright © 2014 by the Research Society on Alcoholism.

  16. Dynamic Optimization

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.

  17. Trends in selected streamflow statistics at 19 long-term streamflow-gaging stations indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico, 1922-2009

    USGS Publications Warehouse

    Barbie, Dana L.; Wehmeyer, Loren L.

    2012-01-01

    Trends in selected streamflow statistics during 1922-2009 were evaluated at 19 long-term streamflow-gaging stations considered indicative of outflows from Texas to Arkansas, Louisiana, Galveston Bay, and the Gulf of Mexico. The U.S. Geological Survey, in cooperation with the Texas Water Development Board, evaluated streamflow data from streamflow-gaging stations with more than 50 years of record that were active as of 2009. The outflows into Arkansas and Louisiana were represented by 3 streamflow-gaging stations, and outflows into the Gulf of Mexico, including Galveston Bay, were represented by 16 streamflow-gaging stations. Monotonic trend analyses were done using the following three streamflow statistics generated from daily mean values of streamflow: (1) annual mean daily discharge, (2) annual maximum daily discharge, and (3) annual minimum daily discharge. The trend analyses were based on the nonparametric Kendall's Tau test, which is useful for the detection of monotonic upward or downward trends with time. A total of 69 trend analyses by Kendall's Tau were computed - 19 periods of streamflow multiplied by the 3 streamflow statistics plus 12 additional trend analyses because the periods of record for 2 streamflow-gaging stations were divided into periods representing pre- and post-reservoir impoundment. Unless otherwise described, each trend analysis used the entire period of record for each streamflow-gaging station. The monotonic trend analysis detected 11 statistically significant downward trends, 37 instances of no trend, and 21 statistically significant upward trends. One general region studied, which seemingly has relatively more upward trends for many of the streamflow statistics analyzed, includes the rivers and associated creeks and bayous to Galveston Bay in the Houston metropolitan area. Lastly, the most western river basins considered (the Nueces and Rio Grande) had statistically significant downward trends for many of the streamflow statistics analyzed.
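
    A Kendall's Tau monotonic trend test on one annual streamflow statistic can be run as sketched below; the simulated discharge record is hypothetical and merely stands in for a gaging-station series.

      import numpy as np
      from scipy.stats import kendalltau

      def monotonic_trend(years, annual_stat, alpha=0.05):
          """Kendall's Tau test for a monotonic trend in an annual streamflow statistic
          (e.g. annual mean daily discharge). Returns tau, p-value and a verdict."""
          tau, p = kendalltau(years, annual_stat)
          direction = "upward" if tau > 0 else "downward"
          verdict = f"statistically significant {direction} trend" if p < alpha else "no trend"
          return tau, p, verdict

      # toy record: a gently declining annual mean discharge with noise
      rng = np.random.default_rng(7)
      years = np.arange(1922, 2010)
      flow = 500 - 1.2 * (years - 1922) + rng.normal(0, 40, len(years))
      print(monotonic_trend(years, flow))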

  18. Comparison of optimization strategy and similarity metric in atlas-to-subject registration using statistical deformation model

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.

    2015-03-01

    A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross-correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross-validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated 4.13 mm error with respect to the true displacement field, and 26,102 function evaluations in 180 seconds, on average.
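
    The mutual-information similarity that serves as the objective function can be sketched with a joint intensity histogram, as below. The bin count, toy volumes and normalization are assumptions of this example, and the atlas deformation and CMA-ES loop themselves are not shown.

      import numpy as np

      def mutual_information(vol_a, vol_b, bins=32):
          """Histogram-based mutual information between two intensity volumes, the kind
          of similarity objective that can be handed to an optimizer such as CMA-ES."""
          joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      # toy check: MI with a contrast-rescaled copy is higher than MI with pure noise
      rng = np.random.default_rng(3)
      vol = rng.normal(size=(16, 16, 16))
      print(mutual_information(vol, 2 * vol + 1), mutual_information(vol, rng.normal(size=vol.shape)))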

  19. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for extracting statistically uncorrelated optimal discriminant vectors and a new algorithm for extracting orthogonal optimal discriminant vectors were proposed for feature extraction. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new statistically uncorrelated maximum margin criterion (SUMMC) feature extraction method is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
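
    The basic MMC projection maximizes w'(S_b - S_w)w, so the leading eigenvectors of S_b - S_w give the discriminant vectors. The sketch below shows only this core step on simulated data and does not reproduce the statistically uncorrelated or orthogonal constraints proposed in the paper.

      import numpy as np

      def mmc_discriminant_vectors(X, y, n_components=2):
          """Top projection vectors of the maximum margin criterion S_b - S_w.

          X : samples x features, y : class labels.
          """
          classes = np.unique(y)
          mean_all = X.mean(axis=0)
          d = X.shape[1]
          S_b = np.zeros((d, d))
          S_w = np.zeros((d, d))
          for c in classes:
              Xc = X[y == c]
              diff = (Xc.mean(axis=0) - mean_all)[:, None]
              S_b += len(Xc) * (diff @ diff.T)
              S_w += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
          vals, vecs = np.linalg.eigh(S_b - S_w)       # symmetric matrix, eigh is safe
          return vecs[:, np.argsort(vals)[::-1][:n_components]]

      rng = np.random.default_rng(5)
      X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(1.5, 1, (30, 10))])
      y = np.array([0] * 30 + [1] * 30)
      W = mmc_discriminant_vectors(X, y)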

  20. Bayesian Tracking of Emerging Epidemics Using Ensemble Optimal Statistical Interpolation

    PubMed Central

    Cobb, Loren; Krishnamurthy, Ashok; Mandel, Jan; Beezley, Jonathan D.

    2014-01-01

    We present a preliminary test of the Ensemble Optimal Statistical Interpolation (EnOSI) method for the statistical tracking of an emerging epidemic, with a comparison to its popular relative for Bayesian data assimilation, the Ensemble Kalman Filter (EnKF). The spatial data for this test was generated by a spatial susceptible-infectious-removed (S-I-R) epidemic model of an airborne infectious disease. Both tracking methods in this test employed Poisson rather than Gaussian noise, so as to handle epidemic data more accurately. The EnOSI and EnKF tracking methods worked well on the main body of the simulated spatial epidemic, but the EnOSI was able to detect and track a distant secondary focus of infection that the EnKF missed entirely. PMID:25113590
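
    For orientation, a single stochastic EnKF analysis step with Gaussian perturbed observations is sketched below. The paper's variant uses Poisson observation noise and a spatial S-I-R model, neither of which is reproduced here, and the ensemble, observation operator and covariances are toy assumptions.

      import numpy as np

      def enkf_analysis(ensemble, H, obs, obs_cov, rng):
          """One stochastic EnKF analysis step (Gaussian perturbed observations).

          ensemble : n_state x n_members forecast ensemble
          H        : n_obs x n_state observation operator
          """
          n_state, n_mem = ensemble.shape
          X = ensemble - ensemble.mean(axis=1, keepdims=True)
          P = X @ X.T / (n_mem - 1)                              # sample forecast covariance
          S = H @ P @ H.T + obs_cov
          K = P @ H.T @ np.linalg.solve(S, np.eye(len(obs)))     # Kalman gain
          perturbed = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n_mem).T
          return ensemble + K @ (perturbed - H @ ensemble)

      rng = np.random.default_rng(11)
      ens = rng.normal(10.0, 2.0, size=(3, 50))         # e.g. S, I, R counts in one grid cell
      H = np.array([[0.0, 1.0, 0.0]])                   # only infectious counts observed
      analysis = enkf_analysis(ens, H, obs=np.array([14.0]), obs_cov=np.array([[4.0]]), rng=rng)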

  1. Implementing optimal thinning strategies

    Treesearch

    Kurt H. Riitters; J. Douglas Brodie

    1984-01-01

    Optimal thinning regimes for achieving several management objectives were derived from two stand-growth simulators by dynamic programming. Residual mean tree volumes were then plotted against stand density management diagrams. The results supported the use of density management diagrams for comparing, checking, and implementing the results of optimization analyses....

  2. Optimism and Physical Health: A Meta-analytic Review

    PubMed Central

    Rasmussen, Heather N.; Greenhouse, Joel B.

    2010-01-01

    Background Prior research links optimism to physical health, but the strength of the association has not been systematically evaluated. Purpose The purpose of this study is to conduct a meta-analytic review to determine the strength of the association between optimism and physical health. Methods The findings from 83 studies, with 108 effect sizes (ESs), were included in the analyses, using random-effects models. Results Overall, the mean ES characterizing the relationship between optimism and physical health outcomes was 0.17, p<.001. ESs were larger for studies using subjective (versus objective) measures of physical health. Subsidiary analyses were also conducted grouping studies into those that focused solely on mortality, survival, cardiovascular outcomes, physiological markers (including immune function), immune function only, cancer outcomes, outcomes related to pregnancy, physical symptoms, or pain. In each case, optimism was a significant predictor of health outcomes or markers, all p<.001. Conclusions Optimism is a significant predictor of positive physical health outcomes. PMID:19711142
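
    A random-effects pooled effect size can be obtained with the DerSimonian-Laird estimator as sketched below; the five effect sizes and variances are invented for illustration, and the review itself may use a different random-effects estimator.

      import numpy as np

      def dersimonian_laird(effects, variances):
          """Random-effects pooled effect size with the DerSimonian-Laird tau^2 estimator."""
          es = np.asarray(effects, float)
          v = np.asarray(variances, float)
          w = 1.0 / v
          fixed = np.sum(w * es) / np.sum(w)
          Q = np.sum(w * (es - fixed) ** 2)
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (Q - (len(es) - 1)) / c)
          w_star = 1.0 / (v + tau2)
          pooled = np.sum(w_star * es) / np.sum(w_star)
          se = np.sqrt(1.0 / np.sum(w_star))
          return pooled, se, tau2

      # toy data: five hypothetical optimism-health effect sizes and their variances
      print(dersimonian_laird([0.12, 0.20, 0.15, 0.25, 0.10],
                              [0.004, 0.006, 0.003, 0.010, 0.005]))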

  3. Generalized networking engineering: optimal pricing and routing in multiservice networks

    NASA Astrophysics Data System (ADS)

    Mitra, Debasis; Wang, Qiong

    2002-07-01

    One of the functions of network engineering is to allocate resources optimally to forecasted demand. We generalize this mechanism by incorporating price-demand relationships into the problem formulation, and optimizing pricing and routing jointly to maximize total revenue. We consider a network, with fixed topology and link bandwidths, that offers multiple services, such as voice and data, each having a characteristic price elasticity of demand, and quality of service and policy requirements on routing. Prices, which depend on service type and origin-destination, determine demands, which are routed, subject to their constraints, so as to maximize revenue. We study the basic properties of the optimal solution and prove that link shadow costs provide the basis for both optimal prices and optimal routing policies. We investigate the impact of input parameters, such as link capacities and price elasticities, on prices, demand growth, and routing policies. Asymptotic analyses, in which network bandwidth is scaled to grow, give results that are noteworthy for their qualitative insights. Several numerical examples illustrate the analyses.

  4. Cryogenic Tank Structure Sizing With Structural Optimization Method

    NASA Technical Reports Server (NTRS)

    Wang, J. T.; Johnson, T. F.; Sleight, D. W.; Saether, E.

    2001-01-01

    Structural optimization methods in MSC/NASTRAN are used to size substructures and to reduce the weight of a composite sandwich cryogenic tank for future launch vehicles. Because the feasible design space of this problem is non-convex, many local minima are found. This non-convex problem is investigated in detail by conducting a series of analyses along a design line connecting two feasible designs. Strain constraint violations occur for some design points along the design line. Since MSC/NASTRAN uses gradient-based optimization procedures, it does not guarantee that the lowest-weight design can be found. In this study, a simple procedure is introduced to create a new starting point based on design variable values from previous optimization analyses. Optimization analysis using this new starting point can produce a lower-weight design. Detailed inputs for setting up the MSC/NASTRAN optimization analysis and final tank design results are presented in this paper. Approaches for obtaining further weight reductions are also discussed.

  5. Investigations of quantum heuristics for optimization

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui

    We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numerical investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.

  6. Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications

    DTIC Science & Technology

    2015-06-24

    Arizona State University, School of Mathematical & Statistical Sciences. The major goals of this project were completed: the exact solution of previously unsolved, challenging combinatorial optimization problems. One such combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion, and second, exactly.

  7. Performance optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1991-01-01

    As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist that minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general-purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
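
    The moment-based statistics contrasted with contingency tables above can be merged across processors with a constant amount of communication. The sketch below shows the standard pairwise update of count, mean and sum of squared deviations on two simulated data chunks; it is an illustration only, not the cited open-source implementation.

      import numpy as np

      def merge_moments(n_a, mean_a, m2_a, n_b, mean_b, m2_b):
          """Combine per-processor count, mean and sum of squared deviations (M2)
          in a numerically stable way, as used for moment-based descriptive statistics."""
          n = n_a + n_b
          delta = mean_b - mean_a
          mean = mean_a + delta * n_b / n
          m2 = m2_a + m2_b + delta ** 2 * n_a * n_b / n
          return n, mean, m2

      # simulate two "processors", each summarizing its own data chunk
      rng = np.random.default_rng(2)
      chunks = [rng.normal(5.0, 2.0, 10_000) for _ in range(2)]
      parts = [(len(c), c.mean(), ((c - c.mean()) ** 2).sum()) for c in chunks]
      n, mean, m2 = merge_moments(*parts[0], *parts[1])
      print(mean, m2 / (n - 1))   # matches the mean and ddof=1 variance of the pooled data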

  9. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified that geographical area and involvement of a statistician were predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Optimism and well-being: A prospective multi-method and multi-dimensional examination of optimism as a resilience factor following the occurrence of stressful life events

    PubMed Central

    Kleiman, Evan M.; Chiara, Alexandra M.; Liu, Richard T.; Jager-Hyman, Shari G.; Choi, Jimmy Y.; Alloy, Lauren B.

    2016-01-01

    Optimism has been conceptualized variously as positive expectations for the future (Scheier & Carver, 1985), optimistic attributions (Peterson & Seligman, 1984), illusion of control (Alloy & Abramson, 1979), and self-enhancing biases (Weinstein, 1980). Relatively little research has examined these multiple dimensions of optimism in relation to psychological and physical health. The current study assessed the multidimensional nature of optimism within a prospective vulnerability-stress framework. Initial principal component analyses revealed the following dimensions: Positive Expectations (PE), Inferential Style (IS), Sense of Invulnerability (SI), and Overconfidence (O). Prospective follow-up analyses demonstrated that PE was associated with fewer depressive episodes and moderated the effect of stressful life events on depressive symptoms. SI also moderated the effect of life stress on anxiety symptoms. Generally, our findings indicated that optimism is a multifaceted construct and not all forms of optimism have the same effects on well-being. Specifically, our findings indicated that PE may be the most relevant to depression, whereas SI may be the most relevant to anxiety. PMID:26558316

  11. MaxReport: An Enhanced Proteomic Result Reporting Tool for MaxQuant.

    PubMed

    Zhou, Tao; Li, Chuyu; Zhao, Wene; Wang, Xinru; Wang, Fuqiang; Sha, Jiahao

    2016-01-01

    MaxQuant is a proteomics software package widely used for large-scale tandem mass spectrometry data. We have designed and developed an enhanced result reporting tool for MaxQuant, named MaxReport. This tool can optimize the results of MaxQuant and provide additional functions for result interpretation. MaxReport can generate report tables for protein N-terminal modifications. It also supports isobaric-labelling-based relative quantification at the protein, peptide or site level. To obtain an overview of the results, MaxReport performs general descriptive statistical analyses for both identification and quantification results. The output results of MaxReport are well organized and therefore helpful for proteomics users to better understand and share their data. The script of MaxReport, which is freely available at http://websdoor.net/bioinfo/maxreport/, is developed using Python code and is compatible across multiple systems including Windows and Linux.

  12. Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.

    PubMed

    Fang, Cheng; Xiao, Zhiyan

    2016-01-01

    The receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods, and two alternative approaches have been exploited. One involves computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other applies various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR and, in particular, covers its application in kinase studies.

  13. A method to evaluate process performance by integrating time and resources

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

    The purpose of process mining is to improve an enterprise's existing processes, so measuring process performance is particularly important. However, current research on performance evaluation methods is still insufficient: evaluations typically rely on time or resource statistics alone, and these basic statistics cannot characterize process performance well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed. This method can be used to measure the utilization and redundancy of resources in a process. The paper introduces the design principle and formula of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, the method is applied to the event log of a telephone maintenance process and an optimization plan is proposed.
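
    The abstract does not give the paper's formulas, so the following is only a generic illustration of time-and-resource performance metrics computed from an event log with pandas; the column names (case_id, activity, resource, start_time, end_time) are assumptions.

```python
# Illustrative only: the paper's exact formulas are not reproduced here.
# Assumes a hypothetical event log with columns: case_id, activity,
# resource, start_time, end_time.
import pandas as pd

def resource_utilization(log: pd.DataFrame) -> pd.DataFrame:
    """Per-resource busy time, utilization and a simple redundancy proxy."""
    log = log.copy()
    log["duration"] = (log["end_time"] - log["start_time"]).dt.total_seconds()
    horizon = (log["end_time"].max() - log["start_time"].min()).total_seconds()
    per_res = log.groupby("resource").agg(
        busy_seconds=("duration", "sum"),
        events=("activity", "count"),
    )
    per_res["utilization"] = per_res["busy_seconds"] / horizon
    per_res["redundancy"] = 1.0 - per_res["utilization"]   # idle share of the horizon
    return per_res

if __name__ == "__main__":
    log = pd.DataFrame({
        "case_id": [1, 1, 2, 2],
        "activity": ["register", "repair", "register", "repair"],
        "resource": ["alice", "bob", "alice", "bob"],
        "start_time": pd.to_datetime(["2021-01-01 09:00", "2021-01-01 09:30",
                                      "2021-01-01 10:00", "2021-01-01 10:15"]),
        "end_time": pd.to_datetime(["2021-01-01 09:20", "2021-01-01 10:10",
                                    "2021-01-01 10:10", "2021-01-01 11:00"]),
    })
    print(resource_utilization(log))
```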

  14. Improving the effectiveness of psychotherapy in two public hospitals in Nairobi

    PubMed Central

    Falkenström, Fredrik; Gee, Matthew David; Kuria, Mary Wangari; Othieno, Caleb Joseph; Kumar, Manasi

    2017-01-01

    This paper is the first in a planned series of papers studying the effectiveness of psychotherapy and counselling in Nairobi. It describes a method for checking the effectiveness of psychotherapy and improving service quality in a Kenyan context. Rather than prematurely imposing psychotherapy protocols developed in Western countries in another cultural context, we believe that first studying psychological interventions as they are practised may generate understanding of which psychological problems are common, what interventions therapists use, and what seems to be effective in reducing psychiatric problems. The initial step is to assess outcome of psychological treatments as they are conducted. This is followed by statistical analyses aimed at identifying patient groups who are not improving at acceptable rates. Therapists will then be trained in a ‘best practice’ approach, and controlled trials are used in a final step, testing new interventions specifically targeted at patient groups with sub-optimal outcomes. PMID:29093948

  15. Improving the effectiveness of psychotherapy in two public hospitals in Nairobi.

    PubMed

    Falkenström, Fredrik; Gee, Matthew David; Kuria, Mary Wangari; Othieno, Caleb Joseph; Kumar, Manasi

    2017-08-01

    This paper is the first in a planned series of papers studying the effectiveness of psychotherapy and counselling in Nairobi. It describes a method for checking the effectiveness of psychotherapy and improving service quality in a Kenyan context. Rather than prematurely imposing psychotherapy protocols developed in Western countries in another cultural context, we believe that first studying psychological interventions as they are practised may generate understanding of which psychological problems are common, what interventions therapists use, and what seems to be effective in reducing psychiatric problems. The initial step is to assess outcome of psychological treatments as they are conducted. This is followed by statistical analyses aimed at identifying patient groups who are not improving at acceptable rates. Therapists will then be trained in a 'best practice' approach, and controlled trials are used in a final step, testing new interventions specifically targeted at patient groups with sub-optimal outcomes.

  16. MALDI versus ESI: The Impact of the Ion Source on Peptide Identification.

    PubMed

    Nadler, Wiebke Maria; Waidelich, Dietmar; Kerner, Alexander; Hanke, Sabrina; Berg, Regina; Trumpp, Andreas; Rösli, Christoph

    2017-03-03

    For mass spectrometry-based proteomic analyses, electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI) are the commonly used ionization techniques. To investigate the influence of the ion source on peptide detection in large-scale proteomics, an optimized GeLC/MS workflow was developed and applied either with ESI/MS or with MALDI/MS for the proteomic analysis of different human cell lines of pancreatic origin. Statistical analysis of the resulting data set with more than 72 000 peptides emphasized the complementary character of the two methods, as the percentage of peptides identified with both approaches was as low as 39%. Significant differences between the resulting peptide sets were observed with respect to amino acid composition, charge-related parameters, hydrophobicity, and modifications of the detected peptides and could be linked to factors governing the respective ion yields in ESI and MALDI.

  17. Degradation of creatinine using boron-doped diamond electrode: Statistical modeling and degradation mechanism.

    PubMed

    Zhang, Zhefeng; Xian, Jiahui; Zhang, Chunyong; Fu, Degang

    2017-09-01

    This study investigated the degradation performance and mechanism of creatinine (a urine metabolite) with boron-doped diamond (BDD) anodes. Experiments were performed using a synthetic creatinine solution containing two supporting electrolytes (NaCl and Na2SO4). A three-level central composite design was adopted to optimize the degradation process; a mathematical model was thus constructed and used to explore the optimum operating conditions. A maximum mineralization percentage of 80%, together with full creatinine removal, was achieved within 120 min of electrolysis, confirming the strong oxidation capability of BDD anodes. Moreover, the results suggested that supporting electrolyte concentration should be listed as one of the most important parameters in BDD technology. Lastly, based on quantum chemistry calculations and LC/MS analyses, two different reaction pathways governing the electrocatalytic oxidation of creatinine, irrespective of the supporting electrolyte, were identified. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Probabilistic Structural Analysis of SSME Turbopump Blades: Probabilistic Geometry Effects

    NASA Technical Reports Server (NTRS)

    Nagpal, V. K.

    1985-01-01

    A probabilistic study was initiated to evaluate the effects of tolerances in geometric and material properties on the structural response of turbopump blades. To complete this study, a number of important probabilistic variables were identified that are believed to affect the structural response of the blade. In addition, a methodology was developed to statistically quantify the influence of these probabilistic variables in an optimized way. The identified variables include random perturbations in geometric and material properties, different loadings, and a probabilistic combination of these loadings. The influence of these probabilistic variables is to be quantified by evaluating the blade structural response. Studies of the geometric perturbations were conducted for a flat-plate geometry as well as for a Space Shuttle Main Engine blade geometry using a special-purpose code based on the finite element approach. Analyses indicate that the variances of the perturbations about given mean values have significant influence on the response.

  19. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

    Response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach of constructing polynomial models for approximation, kriging is presented as an alternative statistical approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed, along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both types of approximation model. While neither approximation technique consistently outperforms the other in this example, kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.
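
    As a rough illustration of the comparison described, the sketch below fits a second-order polynomial response surface and a kriging-style Gaussian process (constant trend, Gaussian correlation) to a toy function with scikit-learn; it uses invented data, not the aerospike nozzle analyses.

```python
# Illustrative comparison on a toy function (not the aerospike nozzle data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2   # stand-in for an expensive analysis

X_train = rng.uniform(-1, 1, size=(30, 2))
y_train = f(X_train)
X_test = rng.uniform(-1, 1, size=(200, 2))
y_test = f(X_test)

# Second-order polynomial response surface.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X_train, y_train)

# Kriging-style surrogate: constant trend with a Gaussian (RBF) correlation.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True).fit(X_train, y_train)

for name, model in [("response surface", rsm), ("kriging", gp)]:
    rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
    print(f"{name}: RMSE = {rmse:.4f}")
```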

  20. Hope, optimism and survival in a randomised trial of chemotherapy for metastatic colorectal cancer.

    PubMed

    Schofield, Penelope E; Stockler, M R; Zannino, D; Tebbutt, N C; Price, T J; Simes, R J; Wong, N; Pavlakis, N; Ransom, D; Moylan, E; Underhill, C; Wyld, D; Burns, I; Ward, R; Wilcken, N; Jefford, M

    2016-01-01

    Psychological responses to cancer are widely believed to affect survival. We investigated associations between hope, optimism, anxiety, depression, health utility and survival in patients starting first-line chemotherapy for metastatic colorectal cancer. Four hundred twenty-nine subjects with metastatic colorectal cancer in a randomised controlled trial of chemotherapy completed baseline questionnaires assessing the following: hopefulness, optimism, anxiety and depression and health utility. Hazard ratios (HRs) and P values were calculated with Cox models for overall survival (OS) and progression-free survival (PFS) in univariable and multivariable analyses. Median follow-up was 31 months. Univariable analyses showed that OS was associated negatively with depression (HR 2.04, P < 0.001) and positively with health utility (HR 0.56, P < 0.001) and hopefulness (HR 0.75, P = 0.013). In multivariable analysis, OS was also associated negatively with depression (HR 1.72, P < 0.001) and positively with health utility (HR 0.73, P = 0.014), but not with optimism, anxiety or hopefulness. PFS was not associated with hope, optimism, anxiety or depression in any analyses. Depression and health utility, but not optimism, hope or anxiety, were associated with survival after controlling for known prognostic factors in patients with advanced colorectal cancer. Further research is required to understand the nature of the relationship between depression and survival. If a causal mechanism is identified, this may lead to interventional possibilities.

  1. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    NASA Astrophysics Data System (ADS)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation in today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
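
    The learn-and-adapt procedure itself is not reproduced here; the sketch below is only a generic stochastic dual-gradient update for a single-resource toy problem, showing how a Lagrange multiplier acting as a virtual queue steers the online allocation.

```python
# Generic stochastic dual-gradient sketch (not the paper's learn-and-adapt
# algorithm): allocate resource x_t to serve random demand d_t on average,
# trading allocation cost 0.5*x**2 against constraint satisfaction.
import numpy as np

rng = np.random.default_rng(1)
T, step, x_max = 20000, 0.05, 10.0
lam = 0.0                                 # Lagrange multiplier (virtual queue)
alloc, demands = [], []

for _ in range(T):
    d = rng.uniform(0.5, 1.5)             # random demand arrival
    x = np.clip(lam, 0.0, x_max)          # primal step: argmin_x 0.5*x**2 - lam*x
    lam = max(0.0, lam + step * (d - x))  # dual step: track constraint violation
    alloc.append(x)
    demands.append(d)

print("mean demand     :", np.mean(demands))
print("mean allocation :", np.mean(alloc))   # converges near the mean demand
print("final multiplier:", lam)
```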

  2. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demands of the rapidly growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the tasks of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus makes process behavior easy to monitor, predict and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages: G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  3. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for performing discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
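
    The patent's dynamic-programming search over the full quantization table is not reproduced here; the sketch below illustrates only the underlying rate-distortion trade-off, picking a quantization step for each hypothetical DCT band by minimizing the Lagrangian cost D(q) + lambda*R(q) on synthetic coefficient statistics.

```python
# Simplified illustration of the rate-distortion trade-off behind quantization
# table design. The patent uses dynamic programming against a rate budget; here
# each DCT band independently minimizes D(q) + lambda * R(q), the standard
# Lagrangian relaxation of that problem.
import numpy as np

def band_statistics(coeffs, q):
    """Distortion (MSE) and a rate proxy (entropy, bits/coefficient) for step q."""
    symbols = np.round(coeffs / q).astype(int)
    recon = symbols * q
    distortion = np.mean((coeffs - recon) ** 2)
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))
    return distortion, rate

def pick_quantizer(coeffs, lam, q_values=range(1, 64)):
    costs = [(d + lam * r, q) for q in q_values
             for d, r in [band_statistics(coeffs, q)]]
    return min(costs)[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical per-band DCT coefficient samples: DC band has larger variance.
    bands = {"DC": rng.laplace(scale=40.0, size=5000),
             "mid AC": rng.laplace(scale=10.0, size=5000),
             "high AC": rng.laplace(scale=2.0, size=5000)}
    for name, coeffs in bands.items():
        print(name, "-> q =", pick_quantizer(coeffs, lam=20.0))
```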

  4. Comparison of apical transportation between ProFile and ProTaper NiTi rotary instruments.

    PubMed

    Iqbal, M K; Firic, S; Tulcan, J; Karabucak, B; Kim, S

    2004-06-01

    To use a newly developed radiographic technique to compare apical transportation and loss of working length (WL) between .06 taper ProFile Series 29 and ProTaper nickel-titanium (NiTi) rotary instruments in vitro. Mesio-buccal canals of 40 extracted mandibular molars were randomly divided into two groups. Group 1 was instrumented with ProFile and group 2 with ProTaper instruments according to the manufacturers' directions. A specially constructed radiographic jig with a Schick digital radiographic system (Schick Technologies Inc., Long Island City, NY, USA) was used to take pre- and postoperative radiographs of the samples at predetermined angulations. Using AutoCAD 2000 (Autodesk Inc., San Rafael, CA, USA), the central axes of the initial and final instruments were radiographically superimposed to determine the loss of WL and the degree of transportation at D(0), D(1), D(2) and D(4) from the WL. Data were analysed using repeated-measures ANOVA. A statistically significant difference in apical transportation was found at the D(4) level between the two groups (P = 0.05). There was no statistically significant difference between groups in postinstrumentation change in WL. Spearman's bivariate correlation analysis indicated no statistically significant relationship between the radius of curvature and transportation. The results indicate that ProTaper and ProFile instruments are comparable in their ability to optimally enlarge the root canal with minimal transportation and loss of WL in vitro.

  5. Factors associated with frailty in chronically ill older adults.

    PubMed

    Hackstaff, Lynn

    2009-01-01

    An ex post facto analysis of a secondary dataset examined relationships between physical frailty, depression, and the self-perceived domains of health status and quality of life in older adults. The randomized sample included 992 community-dwelling, chronically ill, and functionally impaired adults age 65 and older who received care from a Southern California Kaiser Permanente medical center between 1998 and 2002. Physical frailty represents a level of physiologic vulnerability and functional loss that results in dependence on others for basic daily living needs (Fried et al., 2001). The purpose of the study was to identify possible intervention junctures related to the self-efficacy of older adults in order to help optimize their functionality. Multivariate correlation analyses showed statistically significant positive correlations between frailty level and depression (r = .18; p < .05), number of medical conditions (r = .09; p < .05), and self-rated quality of life (r = .24; p < .05). Frailty level showed a statistically significant negative correlation with self-perceived health status (r = -.25; p < .05). Notably, no statistically significant correlation was found between age and frailty level (r = -.03; p > .05). In linear regression, self-perceived health status contributed unique variance to frailty level (part r = -.18). The significant correlations found support further research to identify interventions that help vulnerable older adults challenge self-perceived capabilities so that they may achieve optimum functionality through increased physical activity earlier on, and increased self-efficacy to support successful adaptation to aging-related losses.

  6. An investigation into the effects of temporal resolution on hepatic dynamic contrast-enhanced MRI in volunteers and in patients with hepatocellular carcinoma

    NASA Astrophysics Data System (ADS)

    Gill, Andrew B.; Black, Richard T.; Bowden, David J.; Priest, Andrew N.; Graves, Martin J.; Lomas, David J.

    2014-06-01

    This study investigated the effect of temporal resolution on the dual-input pharmacokinetic (PK) modelling of dynamic contrast-enhanced MRI (DCE-MRI) data from normal volunteer livers and from patients with hepatocellular carcinoma. Eleven volunteers and five patients were examined at 3 T. Two sections, one optimized for the vascular input functions (VIF) and one for the tissue, were imaged within a single heart-beat (HB) using a saturation-recovery fast gradient echo sequence. The data were analysed using a dual-input single-compartment PK model. The VIFs and/or uptake curves were then temporally sub-sampled (at intervals Δt = 2-20 s) before being subject to the same PK analysis. Statistical comparisons of tumour and normal tissue PK parameter values at a 5% significance level gave the same study results when the VIFs were temporally sub-sampled to HB < Δt < 4 s. However, sub-sampling to Δt > 4 s adversely affected the statistical comparisons. Temporal sub-sampling of just the liver/tumour tissue uptake curves at Δt ≤ 20 s, whilst using high-temporal-resolution VIFs, did not substantially affect the PK parameter statistical comparisons. In conclusion, there is no practical advantage to be gained from acquiring very high temporal resolution hepatic DCE-MRI data. Instead, the high temporal resolution could usefully be traded for increased spatial resolution or SNR.

  7. Assessment of quality outcomes for robotic pancreaticoduodenectomy: identification of the learning curve.

    PubMed

    Boone, Brian A; Zenati, Mazen; Hogg, Melissa E; Steve, Jennifer; Moser, Arthur James; Bartlett, David L; Zeh, Herbert J; Zureikat, Amer H

    2015-05-01

    Quality assessment is an important instrument to ensure optimal surgical outcomes, particularly during the adoption of new surgical technology. The use of the robotic platform for complex pancreatic resections, such as the pancreaticoduodenectomy, requires close monitoring of outcomes during its implementation phase to ensure patient safety is maintained and the learning curve identified. To report the results of a quality analysis and learning curve during the implementation of robotic pancreaticoduodenectomy (RPD). A retrospective review of a prospectively maintained database of 200 consecutive patients who underwent RPD in a large academic center from October 3, 2008, through March 1, 2014, was evaluated for important metrics of quality. Patients were analyzed in groups of 20 to minimize demographic differences and optimize the ability to detect statistically meaningful changes in performance. Robotic pancreaticoduodenectomy. Optimization of perioperative outcome parameters. No statistical differences in mortality rates or major morbidity were noted during the study. Statistical improvements in estimated blood loss and conversions to open surgery occurred after 20 cases (600 mL vs 250 mL [P = .002] and 35.0% vs 3.3% [P < .001], respectively), incidence of pancreatic fistula after 40 cases (27.5% vs 14.4%; P = .04), and operative time after 80 cases (581 minutes vs 417 minutes [P < .001]). Complication rates, lengths of stay, and readmission rates showed continuous improvement that did not reach statistical significance. Outcomes for the last 120 cases (representing optimized metrics beyond the learning curve) included a mean operative time of 417 minutes, median estimated blood loss of 250 mL, a conversion rate of 3.3%, 90-day mortality of 3.3%, a clinically significant (grade B/C) pancreatic fistula rate of 6.9%, and a median length of stay of 9 days. Continuous assessment of quality metrics allows for safe implementation of RPD. We identified several inflexion points corresponding to optimization of performance metrics for RPD that can be used as benchmarks for surgeons who are adopting this technology.

  8. Identifying an optimal cutpoint value for the diagnosis of hypertriglyceridemia in the nonfasting state

    PubMed Central

    White, Khendi T.; Moorthy, M.V.; Akinkuolie, Akintunde O.; Demler, Olga; Ridker, Paul M; Cook, Nancy R.; Mora, Samia

    2015-01-01

    Background Nonfasting triglycerides are similar or superior to fasting triglycerides at predicting cardiovascular events. However, diagnostic cutpoints are based on fasting triglycerides. We examined the optimal cutpoint for increased nonfasting triglycerides. Methods Baseline nonfasting (<8 hours since last meal) samples were obtained from 6,391 participants in the Women’s Health Study, followed prospectively for up to 17 years. The optimal diagnostic threshold for nonfasting triglycerides, determined by logistic regression models using c-statistics and the Youden index (sum of sensitivity and specificity minus one), was used to calculate hazard ratios for incident cardiovascular events. Performance was compared with thresholds recommended by the American Heart Association (AHA) and European guidelines. Results The optimal threshold was 175 mg/dL (1.98 mmol/L), corresponding to a c-statistic of 0.656 that was statistically better than the AHA cutpoint of 200 mg/dL (c-statistic of 0.628). For nonfasting triglycerides above and below 175 mg/dL, adjusting for age, hypertension, smoking, hormone use, and menopausal status, the hazard ratio for cardiovascular events was 1.88 (95% CI, 1.52–2.33, P<0.001), and for triglycerides measured at 0–4 and 4–8 hours since the last meal, hazard ratios (95% CIs) were 2.05 (1.54–2.74) and 1.68 (1.21–2.32), respectively. Performance of this optimal cutpoint was validated using ten-fold cross-validation and bootstrapping of multivariable models that included standard risk factors plus total and HDL cholesterol, diabetes, body-mass index, and C-reactive protein. Conclusions In this study of middle-aged and older apparently healthy women, we identified a diagnostic threshold for nonfasting hypertriglyceridemia of 175 mg/dL (1.98 mmol/L), with the potential to identify cases more accurately than the currently recommended AHA cutpoint. PMID:26071491
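
    A minimal sketch of the Youden-index cutpoint selection described above, using scikit-learn's ROC utilities on synthetic data rather than the Women's Health Study cohort:

```python
# Minimal sketch: choose a diagnostic cutpoint by the Youden index
# (sensitivity + specificity - 1) on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 5000
event = rng.binomial(1, 0.1, size=n)                        # incident-event indicator
trig = np.where(event == 1,
                rng.lognormal(mean=5.2, sigma=0.4, size=n),  # cases: higher levels
                rng.lognormal(mean=4.9, sigma=0.4, size=n))  # non-cases

fpr, tpr, thresholds = roc_curve(event, trig)
youden = tpr - fpr                                           # = sensitivity + specificity - 1
best = np.argmax(youden)

print("c-statistic              :", round(roc_auc_score(event, trig), 3))
print("optimal cutpoint (synth.):", round(float(thresholds[best]), 1), "mg/dL")
print("sensitivity / specificity:", round(tpr[best], 2), "/", round(1 - fpr[best], 2))
```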

  9. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    EPA Science Inventory

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  10. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    PubMed

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.

  11. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    USDA-ARS?s Scientific Manuscript database

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  12. HomeStyles, A Web-Based Childhood Obesity Prevention Program for Families With Preschool Children: Protocol for a Randomized Controlled Trial

    PubMed Central

    2017-01-01

    Background The home environment is where young children spend most of their time, and is critically important to supporting behaviors that promote health and prevent obesity. However, the home environment and lifestyle patterns remain understudied, and few interventions have investigated parent-led makeovers designed to create home environments that are supportive of optimal child health and healthy child weights. Objective The aim of the HomeStyles randomized controlled trial (RCT) is to determine whether the Web-based HomeStyles intervention enables and motivates parents to shape the weight-related aspects of their home environments and lifestyle behavioral practices (diet, exercise, and sleep) to be more supportive of their preschool children’s optimal health and weight. Methods A rigorous RCT utilizing an experimental group and an attention control group, receiving a bona fide contemporaneous treatment equal in nonspecific treatment effects and differing only in subject matter content, will test the effect of HomeStyles on a diverse sample of families with preschool children. This intervention is based on social cognitive theory and uses a social ecological framework, and will assess: intrapersonal characteristics (dietary intake, physical activity level, and sleep) of parents and children; family interpersonal or social characteristics related to diet, physical activity, media use, and parental values and self-efficacy for obesity-preventive practices; and home environment food availability, physical activity space and supports in and near the home, and media availability and controls in the home. Results Enrollment for this study has been completed and statistical data analyses are currently underway. Conclusions This paper describes the HomeStyles intervention with regard to rationale, the intervention’s logic model, sample eligibility criteria and recruitment, experimental group and attention control intervention content, study design, instruments, data management, and planned analyses. PMID:28442452

  13. Extracellular enzyme kinetics scale with resource availability

    USGS Publications Warehouse

    Sinsabaugh, Robert L.; Belnap, Jayne; Findlay, Stuart G.; Follstad Shah, Jennifer J.; Hill, Brian H.; Kuehn, Kevin A.; Kuske, Cheryl; Litvak, Marcy E.; Martinez, Noelle G.; Moorhead, Daryl L.; Warnock, Daniel D.

    2014-01-01

    Microbial community metabolism relies on external digestion, mediated by extracellular enzymes that break down complex organic matter into molecules small enough for cells to assimilate. We analyzed the kinetics of 40 extracellular enzymes that mediate the degradation and assimilation of carbon, nitrogen and phosphorus by diverse aquatic and terrestrial microbial communities (1160 cases). Regression analyses were conducted by habitat (aquatic and terrestrial), enzyme class (hydrolases and oxidoreductases) and assay methodology (low affinity and high affinity substrates) to relate potential reaction rates to substrate availability. Across enzyme classes and habitats, the scaling relationships between apparent Vmax and apparent Km followed similar power laws with exponents of 0.44 to 0.67. These exponents, called elasticities, were not statistically distinct from a central value of 0.50, which occurs when the Km of an enzyme equals substrate concentration, a condition optimal for maintenance of steady state. We also conducted an ecosystem scale analysis of ten extracellular hydrolase activities in relation to soil and sediment organic carbon (2,000–5,000 cases/enzyme) that yielded elasticities near 1.0 (0.9 ± 0.2, n = 36). At the metabolomic scale, the elasticity of extracellular enzymatic reactions is the proportionality constant that connects the C:N:P stoichiometries of organic matter and ecoenzymatic activities. At the ecosystem scale, the elasticity of extracellular enzymatic reactions shows that organic matter ultimately limits effective enzyme binding sites. Our findings suggest that one mechanism by which microbial communities maintain homeostasis is regulating extracellular enzyme expression to optimize the short-term responsiveness of substrate acquisition. The analyses also show that, like elemental stoichiometry, the fundamental attributes of enzymatic reactions can be extrapolated from biochemical to community and ecosystem scales.
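
    As a simple illustration of how such an elasticity can be estimated, the sketch below regresses log(Vmax) on log(Km) for synthetic kinetic data; the values are invented and the true exponent is set to 0.5.

```python
# Sketch: estimate the elasticity (power-law exponent) relating apparent Vmax
# to apparent Km by least squares on log-transformed values. Synthetic data,
# not the 1160-case compilation in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
km = rng.lognormal(mean=2.0, sigma=1.0, size=200)            # apparent Km
vmax = 3.0 * km ** 0.5 * rng.lognormal(sigma=0.3, size=200)  # true exponent 0.5

fit = stats.linregress(np.log(km), np.log(vmax))
print(f"elasticity (slope) = {fit.slope:.2f} +/- {fit.stderr:.2f}")
# A slope near 0.5 corresponds to Km approximately equal to substrate
# concentration, the steady-state condition discussed in the abstract.
```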

  14. Consumer consciousness on meat and the environment - Exploring differences.

    PubMed

    Pohjolainen, Pasi; Tapio, Petri; Vinnari, Markus; Jokinen, Pekka; Räsänen, Pekka

    2016-06-01

    Global environmental challenges require changes in both the production and the consumption of goods. In this paper we analyse how consumers perceive the high environmental burden of meat. We analysed consumer environmental consciousness, including a problem-awareness dimension and a support-for-action dimension, the latter comprising perceived self-efficacy as well as preferred solutions to the problems. The solutions were positioned on a continuum from increasing the efficiency of production to discussing sufficiency levels in consumption practices (techno-optimism, local meat, organic meat and meat reduction, respectively). We used a statistically representative survey sample (n = 1890) from the population of Finland and cluster analysis to explore differences among consumers. The analysis revealed that most Finns seem to be rather unsure about the study topic. At the same time they tend to have a comparatively high level of self-efficacy (55 per cent of respondents) and to endorse particularly the local meat solution type (55%), followed by organic meat (35%), meat reduction (25%) and techno-optimism (15%), although the neutral stand was the most common one across the data. We also identified six consumer groups, revealing not only a high number of Highly unsure consumers (40%) but also some Rather conscious (20%) and a relatively small number of Highly conscious (8%). In addition, there were easily observable groups of Careless conscious (14%), Rather unsure (9%) and Resistant (8%). The results highlight the need for a multitude of political actions to guide meat consumption, as there are groups that may benefit from practical tools for making dietary changes as well as groups in need of a more comprehensive selection of measures, including environmental information. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  16. [Optimism of 13-year olds in Poland in the context of self-assessment of health and selected family characteristics].

    PubMed

    Stalmach, Małgorzata; Jodkowska, Maria; Tabak, Izabela; Oblacińska, Anna

    2013-01-01

    To examine the level of optimism in 13-year-olds and the relationship between optimism, self-reported health, and family psychosocial and economic factors. Data from adolescents aged 13 years (n=605) and their parents, identified in the third stage of a prospective cohort study in 2008, were analysed. The level of optimism was examined with the short Polish version of the Wagnild and Young Resilience Scale. The relationships between optimism, family socio-economic factors and family functioning (parenting practices, satisfaction with family contacts) were examined. The probability of a high level of optimism among 13-year-old girls and boys was evaluated with multivariate logistic regression models. Girls had a significantly higher level of optimism. Girls and boys with a positive attitude to life rated their health significantly better than their peers with a negative attitude. Univariate analyses showed that the level of optimism was significantly associated with the father's education level among girls and with the parents' professional status among boys. Family affluence as reported by the children, positive parenting and satisfaction with family contacts were significantly associated with attitude to life in both girls and boys. The level of optimism among boys was also related to the level of discipline exercised by the mother and the level of control exercised by both parents. In multiple regression analyses, the father's positive parenting was a predictor of a high level of optimism in girls [OR=0.45; CI(OR): 0.23-0.85; p=0.014]. In boys, the mother's positive parenting [OR=0.39; CI(OR): 0.19-0.82; p=0.013] and appropriate paternal control [OR=0.33; CI(OR): 0.13-0.84; p=0.020] were significant predictors of optimism. The very high self-reported health of the majority of young people with a positive attitude to life shows that optimism is a strong predictor of subjective health. Positive parenting practices and a good level of parental control have a significant impact on optimism in teenagers.

  17. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

    A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested on trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis of the hybrid technique was performed to compare it with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. The genetic algorithm showed better execution time, whereas particle swarm optimization showed better convergence. The hybrid technique, benefiting from both, showed superior and robust performance, balancing convergence behaviour against execution time.

  18. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  19. Bidirectional relationship between sleep and optimism with depressive mood as a mediator: A longitudinal study of Chinese working adults.

    PubMed

    Lau, Esther Yuet Ying; Harry Hui, C; Cheung, Shu-Fai; Lam, Jasmine

    2015-11-01

    Sleep and optimism are important psycho-biological and personality constructs, respectively. However, very little work has examined the causal relationship between them, and none has examined the potential mechanisms operating in the relationship. This study aimed to understand whether sleep quality was a cause or an effect of optimism, and whether depressive mood could explain the relationship. Internet survey data were collected from 987 Chinese working adults (63.4% female, 92.4% full-time workers, 27.0% married, 90.2% Hong Kong residents, mean age=32.59 at three time-points, spanning about 19 months). Measures included a Chinese attributional style questionnaire, the Pittsburgh Sleep Quality Index, and the Depression Anxiety Stress Scale. Cross-sectional analyses revealed moderate correlations among sleep quality, depressive mood, and optimism. Cross-lagged analyses showed a bidirectional causality between optimism and sleep. Path analysis demonstrated that depressive mood fully mediated the influence of optimism on sleep quality, and it partially mediated the influence of sleep quality on optimism. Optimism improves sleep. Poor sleep makes a pessimist. The effects of sleep quality on optimism could not be fully explained by depressive mood, highlighting the unique role of sleep on optimism. Understanding the mechanisms of the feedback loop of sleep quality, mood, and optimism may provide insights for clinical interventions for individuals presented with mood-related problems. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Change-point analysis of geophysical time-series: application to landslide displacement rate (Séchilienne rock avalanche, France)

    NASA Astrophysics Data System (ADS)

    Amorese, D.; Grasso, J.-R.; Garambois, S.; Font, M.

    2018-05-01

    The rank-sum multiple change-point method is a robust statistical procedure designed to search for the optimal number and locations of change points in an arbitrary continuous or discrete sequence of values. As such, this procedure can be used to analyse time-series data. Twelve years of robust data sets for the Séchilienne (French Alps) rockslide show a continuous increase in average displacement rate from 50 to 280 mm per month over the 2004-2014 period, followed by a strong decrease back to 50 mm per month in the 2014-2015 period. Where possible kinematic phases have been tentatively suggested in previous studies, they rely solely on empirical threshold values. In this paper, we analyse how the use of a statistical algorithm for change-point detection helps to better understand time phases in landslide kinematics. First, we test the efficiency of the statistical algorithm on geophysical benchmark data (stream flows and Northern Hemisphere temperatures) that have already been analysed with independent statistical tools. Second, we apply the method to 12-yr daily time-series of the Séchilienne landslide, for rainfall and displacement data from 2003 December to 2015 December, in order to quantitatively extract changes in landslide kinematics. We find two strong significant discontinuities in the weekly cumulated rainfall values: an average rainfall rate increase is resolved in 2012 April and a decrease in 2014 August. Four robust changes are highlighted in the displacement time-series (2008 May, 2009 November-December to 2010 January, 2012 September and 2014 March), the 2010 one being preceded by a significant but weak rainfall rate increase (in 2009 November). Accordingly, we are able to quantitatively define five kinematic stages for the Séchilienne rock avalanche during this period. The synchronization between rainfall and displacement rate, only resolved at the end of 2009 and beginning of 2010, corresponds to a remarkable change (a fourfold increase in mean displacement rate) in the landslide kinematics. This suggests that an increase in rainfall can drive an increase in the landslide displacement rate, but that most of the kinematics of the landslide is not directly attributable to rainfall amount. Detailed exploration of the characteristics of the five kinematic stages suggests that the weekly averaged displacement rates are tied more to the frequency of rainy days than to the rainfall rate values. These results suggest that the pattern of the Séchilienne rock avalanche is consistent with previous findings that landslide kinematics depend not only on rainfall but also on soil moisture conditions (which are known to be more strongly related to precipitation frequency than to precipitation amount). Finally, our analysis of the displacement rate time-series pinpoints a change in the susceptibility of the slope response to rainfall, which was slower before the end of 2009 than after. The kinematic history depicted by statistical tools opens new routes to understanding the apparent complexity of the Séchilienne landslide kinematics.
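
    The authors' rank-sum multiple change-point procedure is not reproduced here; as a generic illustration, the sketch below detects change points in a synthetic displacement-rate series with the ruptures package (kernel-based cost), assuming the number of breakpoints is known.

```python
# Generic change-point illustration with the `ruptures` package (not the
# authors' rank-sum procedure) on a synthetic displacement-rate series with
# known regime shifts.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(0)
# Synthetic weekly displacement rates (mm/month) with four regime changes.
segments = [(50, 100), (90, 80), (160, 120), (280, 60), (50, 60)]   # (mean, length)
signal = np.concatenate([rng.normal(mean, 10.0, size=n) for mean, n in segments])

algo = rpt.Binseg(model="rbf").fit(signal)
breakpoints = algo.predict(n_bkps=4)      # indices ending each detected segment
print("detected breakpoints:", breakpoints)
print("true breakpoints    :", np.cumsum([n for _, n in segments]).tolist())
```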

  1. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copulas have been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copulas. However, in previous POME-based studies, the determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME, leading to a coupled optimal-moment-POME-copula framework for modelling hydrometeorological multivariate events. In this framework, margins (marginal distributions) are derived using POME subject to optimal moment constraints. Various candidate copulas are then constructed from the derived margins, and the most probable one is selected on the basis of goodness-of-fit statistics. This optimal-moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and that the corresponding copulas show good statistical performance in correlation simulation. The derived copulas, which capture patterns that traditional correlation coefficients cannot reflect, also provide an efficient approach for other applications involving hydrometeorological multivariate modelling.
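
    As a rough sketch of the framework's two ingredients, the code below fits a maximum-entropy margin subject to the first two sample moments (solved numerically on a grid) and derives a Gaussian-copula correlation from Kendall's tau; the data are synthetic and the paper's optimal-moment selection step is not implemented.

```python
# Sketch of two building blocks of a POME-copula analysis on synthetic data:
# (i) a maximum-entropy margin constrained by its first two sample moments,
# (ii) a Gaussian-copula correlation obtained from Kendall's tau.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def pome_margin(samples, grid, n_moments=2):
    """Max-entropy density on `grid` matching the first `n_moments` sample moments."""
    T = np.vstack([grid ** k for k in range(1, n_moments + 1)])   # sufficient statistics
    m = np.array([np.mean(samples ** k) for k in range(1, n_moments + 1)])
    dx = grid[1] - grid[0]

    def dual(lam):  # convex dual: log-partition minus lam . m
        return np.log(np.sum(np.exp(lam @ T)) * dx) - lam @ m

    lam = minimize(dual, np.zeros(n_moments), method="BFGS").x
    p = np.exp(lam @ T)
    return p / (p.sum() * dx)

rng = np.random.default_rng(0)
# Synthetic, correlated "streamflow" and "water level" observations.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=2000)
flow, level = np.exp(3.0 + 0.4 * z[:, 0]), 2.0 + 0.5 * z[:, 1]

flow_std = (flow - flow.mean()) / flow.std()
grid = np.linspace(-4.0, 6.0, 400)
margin = pome_margin(flow_std, grid)       # POME margin under two moment constraints

tau, _ = stats.kendalltau(flow, level)
rho = np.sin(np.pi * tau / 2.0)            # Gaussian-copula parameter from Kendall's tau
print("margin integrates to", round(np.sum(margin) * (grid[1] - grid[0]), 3))
print("Kendall tau =", round(tau, 3), "-> copula correlation =", round(rho, 3))
```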

  2. An optimization model to agroindustrial sector in antioquia (Colombia, South America)

    NASA Astrophysics Data System (ADS)

    Fernandez, J.

    2015-06-01

    This paper develops a proposal for a general optimization model for the flower industry, defined by means of discrete simulation and nonlinear optimization; the mathematical models are solved using ProModel simulation tools and GAMS optimization. The paper defines the operations that constitute the production and marketing of the sector, using statistically validated data taken directly from each operation through field work, and then sets out the discrete simulation model of the operations and the linear optimization model of the entire industry chain. The model is solved with the tools described above, and the results are validated in a case study.

  3. Free-form Airfoil Shape Optimization Under Uncertainty Using Maximum Expected Value and Second-order Second-moment Strategies

    NASA Technical Reports Server (NTRS)

    Huyse, Luc; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Free-form shape optimization of airfoils poses unexpected difficulties. Practical experience has indicated that a deterministic optimization for discrete operating conditions can result in dramatically inferior performance when the actual operating conditions differ from the somewhat arbitrary design values used for the optimization. Extensions to multi-point optimization have proven unable to adequately remedy this problem of "localized optimization" near the sampled operating conditions. This paper presents an intrinsically statistical approach and demonstrates how the shortcomings of multi-point optimization with respect to "localized optimization" can be overcome. The practical examples also reveal how the relative likelihood of each of the operating conditions is automatically taken into consideration during the optimization process. This is a key advantage over the use of multi-point methods.
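
    A minimal sketch of the statistical idea, assuming a normally distributed operating condition and a made-up performance function: the expected objective is evaluated by Gauss-Hermite quadrature and compared with a single-point deterministic design.

```python
# Toy illustration of optimizing the expected objective over an uncertain
# operating condition m (normally distributed), rather than a single design
# point. The "drag" function is a made-up stand-in, not an aerodynamic model.
import numpy as np
from scipy.optimize import minimize_scalar

def drag(x, m):
    """Hypothetical performance metric of design variable x at condition m."""
    return (x - m) ** 2 + 0.1 * x ** 2

mu, sigma = 0.8, 0.05                                  # operating-condition distribution
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
m_samples = mu + sigma * nodes                         # probabilists' Hermite nodes
w = weights / weights.sum()                            # normalized quadrature weights

def expected_drag(x):
    return np.sum(w * drag(x, m_samples))

det = minimize_scalar(lambda x: drag(x, mu))           # single-point design
rob = minimize_scalar(expected_drag)                   # expected-value design
print("deterministic optimum x* =", round(det.x, 4))
print("robust (expected-value) optimum x* =", round(rob.x, 4))
```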

  4. Inactivation disinfection property of Moringa Oleifera seed extract: optimization and kinetic studies

    NASA Astrophysics Data System (ADS)

    Idris, M. A.; Jami, M. S.; Hammed, A. M.

    2017-05-01

    This paper presents a statistical optimization study of the disinfection inactivation parameters of defatted Moringa oleifera seed extract against Pseudomonas aeruginosa cells. A three-level factorial design was used to estimate the optimum operating range, and the kinetics of the inactivation process were also studied. The kinetic analysis compared the Chick-Watson, Collins-Selleck and Hom disinfection models. Analysis of variance (ANOVA) of the statistical optimization revealed that only contact time was significant. The optimum disinfection conditions for the seed extract were 125 mg/L, 30 minutes and 120 rpm agitation. At the optimum dose, the inactivation kinetics followed the Collins-Selleck model with a coefficient of determination (R2) of 0.6320. This study is the first of its kind to determine the inactivation kinetics of Pseudomonas aeruginosa using the defatted seed extract.

  5. Statistical considerations in monitoring birds over large areas

    USGS Publications Warehouse

    Johnson, D.H.

    2000-01-01

    The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.

  6. A statistical experiment design approach for optimizing biodegradation of weathered crude oil in coastal sediments.

    PubMed

    Mohajeri, Leila; Aziz, Hamidi Abdul; Isa, Mohamed Hasnain; Zahed, Mohammad Ali

    2010-02-01

    This work studied the bioremediation of weathered crude oil (WCO) in coastal sediment samples using a central composite face-centred design (CCFD) under response surface methodology (RSM). Initial oil concentration, biomass, nitrogen and phosphorus concentrations were used as independent variables (factors) and oil removal as the dependent variable (response) in a 60-day trial. A statistically significant model for WCO removal was obtained. The coefficient of determination (R(2)=0.9732) and probability value (P<0.0001) demonstrated the significance of the regression model. Numerical optimization based on a desirability function was carried out for initial oil concentrations of 2, 16 and 30 g per kg sediment; removals of 83.13, 78.06 and 69.92 per cent were observed, respectively, compared with 77.13, 74.17 and 69.87 per cent for the un-optimized conditions.
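
    For illustration only, the sketch below fits a full quadratic response-surface model to a synthetic two-factor data set and locates its stationary point; the factors, levels and removal values are invented, not the published CCFD experiment.

```python
# Minimal response-surface sketch on synthetic data: fit a quadratic model in
# two coded factors and locate the stationary (optimum) point.
import numpy as np

rng = np.random.default_rng(0)
# Coded levels (-1, 0, +1) for two hypothetical factors (e.g. nitrogen, phosphorus).
x1, x2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
x1, x2 = x1.ravel(), x2.ravel()
y = (75 + 4 * x1 + 2 * x2 - 3 * x1**2 - 2 * x2**2 - 1 * x1 * x2
     + rng.normal(0, 0.5, x1.size))                   # % oil removal (synthetic)

# Design matrix for the full quadratic model.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]
b0, b1, b2, b11, b22, b12 = b

# Stationary point: solve grad = 0 for the fitted quadratic surface.
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_opt = np.linalg.solve(A, -np.array([b1, b2]))
print("fitted coefficients:", np.round(b, 2))
print("stationary point (coded units):", np.round(x_opt, 2))
```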

  7. Tunneling Statistics for Analysis of Spin-Readout Fidelity

    NASA Astrophysics Data System (ADS)

    Gorman, S. K.; He, Y.; House, M. G.; Keizer, J. G.; Keith, D.; Fricke, L.; Hile, S. J.; Broome, M. A.; Simmons, M. Y.

    2017-09-01

    We investigate spin and charge dynamics of a quantum dot of phosphorus atoms coupled to a radio-frequency single-electron transistor (SET) using full counting statistics. We show how the magnetic field plays a role in determining the bunching or antibunching tunneling statistics of the donor dot and SET system. Using the counting statistics, we show how to determine the lowest magnetic field where spin readout is possible. We then show how such a measurement can be used to investigate and optimize single-electron spin-readout fidelity.

  8. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula; Pai, Shantaram, S.; Murthy, Pappu

    2013-01-01

    This report describes statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties that are characterized are the ultimate strength, modulus, and Poisson's ratio by using a commercially available statistical package. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.
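
    The report's specific distributions are not reproduced here; the sketch below shows the general kind of characterization described, fitting candidate distributions to a synthetic strength data set with SciPy and comparing them by AIC and the Kolmogorov-Smirnov statistic.

```python
# Sketch of distribution characterization for a strength data set (synthetic,
# not the MTM45-1/CF7442A data): fit candidate distributions and compare them
# by AIC and the Kolmogorov-Smirnov statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strength = rng.weibull(12.0, size=60) * 800.0        # synthetic ultimate strength, MPa

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "weibull_min": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(strength)
    loglik = np.sum(dist.logpdf(strength, *params))
    aic = 2 * len(params) - 2 * loglik
    ks = stats.kstest(strength, dist.cdf, args=params).statistic
    print(f"{name:12s}  AIC = {aic:8.1f}   KS statistic = {ks:.3f}")
```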

  9. Optimization studies on compression coated floating-pulsatile drug delivery of bisoprolol.

    PubMed

    Jagdale, Swati C; Bari, Nilesh A; Kuchekar, Bhanudas S; Chabukswar, Aniruddha R

    2013-01-01

    The purpose of the present work was to design and optimize compression-coated floating pulsatile drug delivery systems of bisoprolol. The floating pulsatile concept was applied to increase the gastric residence of the dosage form, giving a lag phase followed by a burst release. The prepared system consisted of two parts: a core tablet containing the active ingredient and an erodible outer shell with a gas-generating agent. The rapid release core tablet (RRCT) was prepared using superdisintegrants with the active ingredient. Press coating of the optimized RRCT was done with polymer. A 3² full factorial design was used for optimization. The amounts of Polyox WSR205 and Polyox WSR N12K were selected as independent variables. Lag period, drug release, and swelling index were selected as dependent variables. The floating pulsatile release formulation (FPRT) F13, at level 0 (55 mg) for Polyox WSR205 and level +1 (65 mg) for Polyox WSR N12K, showed a lag time of 4 h with >90% drug release. The data were statistically analyzed using ANOVA, with P < 0.05 considered statistically significant. Release kinetics of the optimized formulation best fitted the zero-order model. An in vivo study confirmed the burst effect at 4 h, indicating successful optimization of the dosage form.

  10. Optimization Studies on Compression Coated Floating-Pulsatile Drug Delivery of Bisoprolol

    PubMed Central

    Jagdale, Swati C.; Bari, Nilesh A.; Kuchekar, Bhanudas S.; Chabukswar, Aniruddha R.

    2013-01-01

    The purpose of the present work was to design and optimize compression-coated floating pulsatile drug delivery systems of bisoprolol. The floating pulsatile concept was applied to increase the gastric residence of the dosage form, giving a lag phase followed by a burst release. The prepared system consisted of two parts: a core tablet containing the active ingredient and an erodible outer shell with a gas-generating agent. The rapid release core tablet (RRCT) was prepared using superdisintegrants with the active ingredient. Press coating of the optimized RRCT was done with polymer. A 3² full factorial design was used for optimization. The amounts of Polyox WSR205 and Polyox WSR N12K were selected as independent variables. Lag period, drug release, and swelling index were selected as dependent variables. The floating pulsatile release formulation (FPRT) F13, at level 0 (55 mg) for Polyox WSR205 and level +1 (65 mg) for Polyox WSR N12K, showed a lag time of 4 h with >90% drug release. The data were statistically analyzed using ANOVA, with P < 0.05 considered statistically significant. Release kinetics of the optimized formulation best fitted the zero-order model. An in vivo study confirmed the burst effect at 4 h, indicating successful optimization of the dosage form. PMID:24367788

  11. Computer Optimization of Biodegradable Nanoparticles Fabricated by Dispersion Polymerization.

    PubMed

    Akala, Emmanuel O; Adesina, Simeon; Ogunwuyi, Oluwaseun

    2015-12-22

    Quality by design (QbD) in the pharmaceutical industry involves designing and developing drug formulations and manufacturing processes which ensure predefined drug product specifications. QbD helps to understand how process and formulation variables affect product characteristics and supports subsequent optimization of these variables vis-à-vis final specifications. Statistical design of experiments (DoE) identifies important parameters in a pharmaceutical dosage form design, followed by optimization of those parameters with respect to certain specifications. DoE establishes in mathematical form the relationships between critical process parameters together with critical material attributes and critical quality attributes. We focused on the fabrication of biodegradable nanoparticles by dispersion polymerization. Aided by statistical software, a d-optimal mixture design was used to vary the components (crosslinker, initiator, stabilizer, and macromonomers) to obtain twenty nanoparticle formulations (PLLA-based nanoparticles) and thirty formulations (poly-ɛ-caprolactone-based nanoparticles). Scheffe polynomial models were generated to predict particle size (nm), zeta potential, and yield (%) as functions of the composition of the formulations. Simultaneous optimizations were carried out on the response variables. Solutions were returned from simultaneous optimization of the response variables for component combinations to (1) minimize nanoparticle size; (2) maximize the surface negative zeta potential; and (3) maximize percent yield to make the nanoparticle fabrication an economic proposition.

  12. Did Groundwater Processes Shape the Saharan Landscape during the Previous Wet Periods? a Remote Sensing and Geostatistical Approach

    NASA Astrophysics Data System (ADS)

    Farag, A. Z. A.; Sultan, M.; Elkadiri, R.; Abdelhalim, A.

    2014-12-01

    An integrated approach using remote sensing, landscape analysis and statistical methods was conducted to assess the role of groundwater sapping in shaping the Saharan landscape. A GIS-based logistic regression model was constructed to automatically delineate the spatial distribution of the sapping features over areas occupied by the Nubian Sandstone Aquifer System (NSAS): (1) an inventory was compiled of known locations of sapping features identified either in the field or from satellite datasets (e.g. Orbview-3 and Google Earth Digital Globe imagery); (2) spatial analyses were conducted in a GIS environment and seven geomorphological and geological predisposing factors (i.e. slope, stream density, cross-sectional and profile curvature, minimum and maximum curvature, and lithology) were identified; (3) a binary logistic regression model was constructed, optimized and validated to describe the relationship between the sapping locations and the set of controlling factors; and (4) the generated model (prediction accuracy: 90.1%) was used to produce a regional sapping map over the NSAS. Model outputs indicate that: (1) groundwater discharge and structural control played an important role in excavating the Saharan natural depressions, as evidenced by the wide distribution of sapping features (areal extent: 1180 km²) along the fault-controlled escarpments of the Libyan Plateau; and (2) the proximity of mapped sapping features to reported paleolake and tufa deposits suggests a causal relationship. Our preliminary observations (from satellite imagery) and statistical analyses, together with previous studies in the North Western Sahara Aquifer System (North Africa), Sinai Peninsula, Negev Desert, and the Plateau of Najd (Saudi Arabia), indicate extensive occurrence of sapping features along the escarpments bordering the northern margins of the Saharan-Arabian Desert; these areas share similar hydrologic settings with the NSAS domains and they too witnessed wet climatic periods in the Mid-Late Quaternary.

  13. Phenotyping asthma, rhinitis and eczema in MeDALL population-based birth cohorts: an allergic comorbidity cluster.

    PubMed

    Garcia-Aymerich, J; Benet, M; Saeys, Y; Pinart, M; Basagaña, X; Smit, H A; Siroux, V; Just, J; Momas, I; Rancière, F; Keil, T; Hohmann, C; Lau, S; Wahn, U; Heinrich, J; Tischer, C G; Fantini, M P; Lenzi, J; Porta, D; Koppelman, G H; Postma, D S; Berdel, D; Koletzko, S; Kerkhof, M; Gehring, U; Wickman, M; Melén, E; Hallberg, J; Bindslev-Jensen, C; Eller, E; Kull, I; Lødrup Carlsen, K C; Carlsen, K-H; Lambrecht, B N; Kogevinas, M; Sunyer, J; Kauffmann, F; Bousquet, J; Antó, J M

    2015-08-01

    Asthma, rhinitis and eczema often co-occur in children, but their interrelationships at the population level have been poorly addressed. We assessed co-occurrence of childhood asthma, rhinitis and eczema using unsupervised statistical techniques. We included 17 209 children at 4 years and 14 585 at 8 years from seven European population-based birth cohorts (MeDALL project). At each age period, children were grouped, using partitioning cluster analysis, according to the distribution of 23 variables covering symptoms 'ever' and 'in the last 12 months', doctor diagnosis, age of onset and treatments of asthma, rhinitis and eczema; immunoglobulin E sensitization; weight; and height. We tested the sensitivity of our estimates to subject and variable selections, and to different statistical approaches, including latent class analysis and self-organizing maps. Two groups were identified as the optimal way to cluster the data at both age periods and in all sensitivity analyses. The first (reference) group at 4 and 8 years (including 70% and 79% of children, respectively) was characterized by a low prevalence of symptoms and sensitization, whereas the second (symptomatic) group exhibited more frequent symptoms and sensitization. Ninety-nine percent of children with comorbidities (co-occurrence of asthma, rhinitis and/or eczema) were included in the symptomatic group at both ages. The children's characteristics in both groups were consistent in all sensitivity analyses. At 4 and 8 years, at the population level, asthma, rhinitis and eczema can be classified together as an allergic comorbidity cluster. Future research including time-repeated assessments and biological data will help in understanding the interrelationships between these diseases. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Does lymphopenia or macrocytosis reflect 6-thioguanine levels in patients with inflammatory bowel disease treated with azathioprine or 6-mercaptopurine?

    PubMed

    Heerasing, N M; Ng, J F; Dowling, D

    2016-04-01

    The thiopurine drugs, 6-mercaptopurine and azathioprine, remain the mainstay of immunomodulator therapy for inflammatory bowel disease (IBD). Optimal management depends on achieving therapeutic levels of 6-thioguanine (6-TGN), but measuring thiopurine metabolites is associated with significant cost. Thiopurines cause lymphopenia and an increase in mean corpuscular volume (MCV). It is unclear whether any clinically useful correlation exists between 6-TGN levels and lymphocyte count or MCV. The aim of this study was to investigate the correlation between 6-TGN levels and lymphocyte count and MCV in thiopurine-treated patients with IBD. We analysed a prospectively acquired database of 67 patients with IBD treated with thiopurine therapy. The relationships between 6-TGN levels and both lymphocyte count and MCV were assessed using Spearman's rank correlation coefficient. Twenty-seven (40%) patients had therapeutic 6-TGN levels. Thirty-three (49%) patients had sub-therapeutic 6-TGN levels. A weak positive correlation between 6-TGN levels and lymphocyte count was demonstrated, but this was not statistically significant (Spearman's R = 0.14, P = 0.23). The Spearman's rank correlation coefficient between 6-TGN levels and MCV was statistically significant (R = 0.42, P = 0.0005). MCV >101 fL excluded a sub-therapeutic 6-TGN level with a positive predictive value of 92%. There is no specific lymphopenia that can be assumed to indicate a therapeutic 6-TGN level. The relationship between 6-TGN levels and MCV is likely to be clinically relevant. If MCV is elevated, 6-TGN is unlikely to be sub-therapeutic. MCV is a potential surrogate marker which can rule out sub-therapeutic thiopurine metabolite levels in patients with IBD treated with azathioprine or 6-mercaptopurine. © 2016 Royal Australasian College of Physicians.

  15. Which is the optimal risk stratification system for surgically treated localized primary GIST? Comparison of three contemporary prognostic criteria in 171 tumors and a proposal for a modified Armed Forces Institute of Pathology risk criteria.

    PubMed

    Goh, Brian K P; Chow, Pierce K H; Yap, Wai-Ming; Kesavan, Sittampalam M; Song, In-Chin; Paul, Pradeep G; Ooi, Boon-Swee; Chung, Yaw-Fui A; Wong, Wai-Keong

    2008-08-01

    This study aims to validate and compare the performance of the National Institute of Health (NIH) criteria, Huang modified NIH criteria, and Armed Forces Institute of Pathology (AFIP) risk criteria for gastrointestinal stromal tumors (GISTs) in a large series of localized primary GISTs surgically treated at a single institution, to determine the ideal risk stratification system for GIST. The clinicopathological features of 171 consecutive patients who underwent surgical resection for GISTs were retrospectively reviewed. Statistical analyses were performed to compare the prognostic value of the three risk criteria by analyzing the discriminatory ability, linear trend, homogeneity, monotonicity of gradients, and Akaike information criterion. The median actuarial recurrence-free survival (RFS) for all 171 patients was 70%. On multivariate analyses, size >10 cm, mitotic count >5/50 high-power field, tumor necrosis, and serosal involvement were independent prognostic factors of RFS. All three risk criteria demonstrated a statistically significant difference in the recurrence rate, median actuarial RFS, actuarial 5-year RFS, and tumor-specific death across the different stages. Comparison of the various risk-stratification systems demonstrated that our proposed modified AFIP criteria had the best independent predictive value of RFS when compared with the other systems. The NIH, modified NIH, and AFIP criteria are useful in the prognostication of GIST, and the AFIP risk criteria provided the best prognostication among the three systems for primary localized GIST. However, remarkable prognostic heterogeneity exists in the AFIP high-risk category, and with our proposed modification, this system provides the most accurate prognostic information.

  16. Grading the neuroendocrine tumors of the lung: an evidence-based proposal.

    PubMed

    Rindi, G; Klersy, C; Inzani, F; Fellegara, G; Ampollini, L; Ardizzoni, A; Campanini, N; Carbognani, P; De Pas, T M; Galetta, D; Granone, P L; Righi, L; Rusca, M; Spaggiari, L; Tiseo, M; Viale, G; Volante, M; Papotti, M; Pelosi, G

    2014-02-01

    Lung neuroendocrine tumors are catalogued in four categories by the World Health Organization (WHO 2004) classification. Its reproducibility and prognostic efficacy have been disputed. The WHO 2010 classification of digestive neuroendocrine neoplasms is based on Ki67 proliferation assessment and proved prognostically effective. This study aims at comparing these two classifications and at defining a prognostic grading system for lung neuroendocrine tumors. The study included 399 patients who underwent surgery between 1989 and 2011 and had at least 1 year of follow-up. Data on 21 variables were collected, and the performance of the grading systems and their components was compared by Cox regression and multivariable analyses. All statistical tests were two-sided. At Cox analysis, WHO 2004 stratified patients into three major groups with statistically significant survival differences (typical carcinoid vs atypical carcinoid (AC), P=0.021; AC vs large-cell/small-cell lung neuroendocrine carcinomas, P<0.001). Optimal discrimination in three groups was observed by Ki67% (Ki67% cutoffs: G1 <4, G2 4-<25, G3 ≥25; G1 vs G2, P=0.021; and G2 vs G3, P≤0.001), mitotic count (G1 ≤2, G2 >2-47, G3 >47; G1 vs G2, P≤0.001; and G2 vs G3, P≤0.001), and presence of necrosis (G1 absent, G2 <10% of sample, G3 >10% of sample; G1 vs G2, P≤0.001; and G2 vs G3, P≤0.001) at uni- and multivariable analyses. The combination of these three variables resulted in a simple and effective grading system. A three-tier grading system based on Ki67 index, mitotic count, and necrosis, with cutoffs specifically generated for lung neuroendocrine tumors, is prognostically effective and accurate.

  17. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
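
    The sample entropy statistic at the heart of this tuning procedure can be computed directly from a univariate activity series. The sketch below is a minimal Python illustration, assuming the common convention of a template length m and a tolerance r expressed as a fraction of the series' standard deviation; the parameter values and the toy signals are illustrative and not taken from the DASim study.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy SampEn(m, r) of a 1-D series x.
        r is given as a fraction of the series' standard deviation (an assumption,
        not necessarily the convention used in the study)."""
        x = np.asarray(x, dtype=float)
        r = r * x.std()
        n = len(x)

        def count_matches(m):
            # All overlapping templates of length m.
            templates = np.array([x[i:i + m] for i in range(n - m)])
            count = 0
            for i in range(len(templates)):
                # Chebyshev distance to all later templates (self-matches excluded).
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist <= r)
            return count

        b = count_matches(m)       # template matches of length m
        a = count_matches(m + 1)   # template matches of length m + 1
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    # A regular (roughly daily) schedule signal scores lower SampEn than pure noise.
    rng = np.random.default_rng(0)
    t = np.arange(500)
    regular = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(500)
    irregular = rng.standard_normal(500)
    print(sample_entropy(regular), sample_entropy(irregular))
    ```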

  18. Post Hoc Analyses of ApoE Genotype-Defined Subgroups in Clinical Trials.

    PubMed

    Kennedy, Richard E; Cutter, Gary R; Wang, Guoqiao; Schneider, Lon S

    2016-01-01

    Many post hoc analyses of clinical trials in Alzheimer's disease (AD) and mild cognitive impairment (MCI) are based on small Phase 2 trials. Subject heterogeneity may lead to statistically significant post hoc results that cannot be replicated in larger follow-up studies. We investigated the extent of this problem using simulation studies mimicking current trial methods with post hoc analyses based on ApoE4 carrier status. We used a meta-database of 24 studies, including 3,574 subjects with mild AD and 1,171 subjects with MCI/prodromal AD, to simulate clinical trial scenarios. Post hoc analyses examined if rates of progression on the Alzheimer's Disease Assessment Scale-cognitive (ADAS-cog) differed between ApoE4 carriers and non-carriers. Across studies, ApoE4 carriers were younger and had lower baseline scores, greater rates of progression, and greater variability on the ADAS-cog. Up to 18% of post hoc analyses for 18-month trials in AD showed greater rates of progression for ApoE4 non-carriers that were statistically significant but unlikely to be confirmed in follow-up studies. The frequency of erroneous conclusions dropped below 3% with trials of 100 subjects per arm. In MCI, rates of statistically significant differences with greater progression in ApoE4 non-carriers remained below 3% unless sample sizes were below 25 subjects per arm. Statistically significant differences for ApoE4 in post hoc analyses often reflect heterogeneity among small samples rather than true differential effect among ApoE4 subtypes. Such analyses must be viewed cautiously. ApoE genotype should be incorporated into the design stage to minimize erroneous conclusions.
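
    The mechanism described here, heterogeneous subgroups producing reversed, "significant" post hoc findings in small trials, can be illustrated with a short simulation. The sketch below is hypothetical: the means, standard deviations and sample sizes are illustrative choices, not estimates from the meta-database used in the paper.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def reversed_significant_rate(n_total, carrier_frac=0.5, n_sims=5000, alpha=0.05):
        """Rate of post hoc splits that show *statistically significant* faster
        progression in non-carriers, even though carriers truly progress faster
        (but with more variability). All parameters are illustrative."""
        n_car = int(n_total * carrier_frac)
        n_non = n_total - n_car
        errors = 0
        for _ in range(n_sims):
            carriers = rng.normal(loc=6.0, scale=10.0, size=n_car)    # faster decline, noisier
            noncarriers = rng.normal(loc=4.0, scale=7.0, size=n_non)  # slower decline
            _, p = stats.ttest_ind(noncarriers, carriers, equal_var=False)
            errors += (p < alpha) and (noncarriers.mean() > carriers.mean())
        return errors / n_sims

    for n in (30, 60, 200):   # total subjects contributing to the post hoc split
        print(n, reversed_significant_rate(n))
    ```

    The rate of reversed-direction "significant" findings shrinks as the subgroups grow, which is the qualitative pattern the authors report.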

  19. Methodological Standards for Meta-Analyses and Qualitative Systematic Reviews of Cardiac Prevention and Treatment Studies: A Scientific Statement From the American Heart Association.

    PubMed

    Rao, Goutham; Lopez-Jimenez, Francisco; Boyd, Jack; D'Amico, Frank; Durant, Nefertiti H; Hlatky, Mark A; Howard, George; Kirley, Katherine; Masi, Christopher; Powell-Wiley, Tiffany M; Solomonides, Anthony E; West, Colin P; Wessel, Jennifer

    2017-09-05

    Meta-analyses are becoming increasingly popular, especially in the fields of cardiovascular disease prevention and treatment. They are often considered to be a reliable source of evidence for making healthcare decisions. Unfortunately, problems among meta-analyses such as the misapplication and misinterpretation of statistical methods and tests are long-standing and widespread. The purposes of this statement are to review key steps in the development of a meta-analysis and to provide recommendations that will be useful for carrying out meta-analyses and for readers and journal editors, who must interpret the findings and gauge methodological quality. To make the statement practical and accessible, detailed descriptions of statistical methods have been omitted. Based on a survey of cardiovascular meta-analyses, published literature on methodology, expert consultation, and consensus among the writing group, key recommendations are provided. Recommendations reinforce several current practices, including protocol registration; comprehensive search strategies; methods for data extraction and abstraction; methods for identifying, measuring, and dealing with heterogeneity; and statistical methods for pooling results. Other practices should be discontinued, including the use of levels of evidence and evidence hierarchies to gauge the value and impact of different study designs (including meta-analyses) and the use of structured tools to assess the quality of studies to be included in a meta-analysis. We also recommend choosing a pooling model for conventional meta-analyses (fixed effect or random effects) on the basis of clinical and methodological similarities among studies to be included, rather than the results of a test for statistical heterogeneity. © 2017 American Heart Association, Inc.
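
    The statement's recommendation to choose the pooling model on clinical and methodological grounds can be made concrete with the standard inverse-variance machinery. The following sketch implements DerSimonian-Laird random-effects pooling in Python; the four log odds ratios and variances are invented for illustration and are not drawn from any meta-analysis surveyed by the writing group.

    ```python
    import numpy as np

    def random_effects_pool(effects, variances):
        """DerSimonian-Laird random-effects pooling of study-level effects
        (e.g., log odds ratios) with their within-study variances."""
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances                                  # fixed-effect weights
        pooled_fixed = np.sum(w * effects) / np.sum(w)
        q = np.sum(w * (effects - pooled_fixed) ** 2)        # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(effects) - 1)) / c)        # between-study variance
        w_re = 1.0 / (variances + tau2)                      # random-effects weights
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, se, tau2

    # Hypothetical log odds ratios and variances from four trials.
    log_or = [-0.35, -0.10, -0.52, 0.05]
    var = [0.04, 0.09, 0.06, 0.12]
    est, se, tau2 = random_effects_pool(log_or, var)
    print(f"pooled log OR = {est:.3f} "
          f"(95% CI {est - 1.96 * se:.3f} to {est + 1.96 * se:.3f}), tau^2 = {tau2:.3f}")
    ```

    Setting tau2 to zero recovers the fixed-effect estimate, which is why the choice between the two models matters when the included studies are clinically heterogeneous.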

  20. Simulation study to determine the impact of different design features on design efficiency in discrete choice experiments.

    PubMed

    Vanniyasingam, Thuva; Cunningham, Charles E; Foster, Gary; Thabane, Lehana

    2016-07-19

    Discrete choice experiments (DCEs) are routinely used to elicit patient preferences to improve health outcomes and healthcare services. While many fractional factorial designs can be created, some are more statistically optimal than others. The objective of this simulation study was to investigate how varying the number of (1) attributes, (2) levels within attributes, (3) alternatives and (4) choice tasks per survey will improve or compromise the statistical efficiency of an experimental design. A total of 3204 DCE designs were created to assess how relative design efficiency (d-efficiency) is influenced by varying the number of choice tasks (2-20), alternatives (2-5), attributes (2-20) and attribute levels (2-5) of a design. Choice tasks were created by randomly allocating attribute and attribute level combinations into alternatives. Relative d-efficiency was used to measure the optimality of each DCE design. DCE design complexity influenced statistical efficiency. Across all designs, relative d-efficiency decreased as the number of attributes and attribute levels increased. It increased for designs with more alternatives. Lastly, relative d-efficiency converges as the number of choice tasks increases, although convergence may not be at 100% statistical optimality. Achieving 100% d-efficiency is heavily dependent on the number of attributes, attribute levels, choice tasks and alternatives. Further exploration of overlaps and block sizes is needed. This study's results are widely applicable for researchers interested in creating optimal DCE designs to elicit individual preferences on health services, programmes, policies and products. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
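
    Relative d-efficiency has a simple closed form for a linear coding of the design matrix, which is enough to see why design choices erode or preserve efficiency. The sketch below uses the linear-model formula |X'X|^(1/p)/N; note this is only an illustration, since DCE practice usually evaluates the information matrix of a multinomial logit model instead.

    ```python
    import numpy as np
    from itertools import product

    def d_efficiency(X):
        """D-efficiency (in %) of an effects-coded model matrix X
        with N runs and p parameters: 100 * |X'X|^(1/p) / N."""
        n, p = X.shape
        det = max(np.linalg.det(X.T @ X), 0.0)   # guard against tiny negative round-off
        return 100.0 * det ** (1.0 / p) / n

    # Effects-coded 8-run full factorial for three two-level attributes.
    orthogonal = np.array(list(product([-1.0, 1.0], repeat=3)))

    # A haphazard 8-run design with the same attributes, for comparison.
    rng = np.random.default_rng(1)
    haphazard = rng.choice([-1.0, 1.0], size=(8, 3))

    print("orthogonal design:", round(d_efficiency(orthogonal), 1))  # 100.0
    print("haphazard design :", round(d_efficiency(haphazard), 1))   # typically lower than 100
    ```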

  1. Effects of depression, diabetes distress, diabetes self-efficacy, and diabetes self-management on glycemic control among Chinese population with type 2 diabetes mellitus.

    PubMed

    Lin, Keke; Park, Chang; Li, Mingzi; Wang, Xiudong; Li, Xiushu; Li, Wei; Quinn, Laurie

    2017-09-01

    To examine the direct and indirect effects of depression, diabetes distress, diabetes self-efficacy and diabetes self-management on glycemic control among a group of T2DM patients in China. A convenience sample of 254 participants was selected from three outpatient departments in Beijing, China. They were surveyed using a self-administered questionnaire. Diabetes-related information was retrieved from their medical records. Descriptive statistics, independent Student's t tests, Chi-square tests, correlation analyses and Generalized Structural Equation Modeling were used. Only 91 (35.82%) participants achieved optimal glycemic control of HbA1c<7.0% (53mmol/mol). Only diabetes self-management had a direct effect on glycemic control (OR=0.95, P<0.001). Depression and diabetes distress had only indirect effects on glycemic control through both diabetes self-efficacy and diabetes self-management. Diabetes self-efficacy only had an indirect effect on glycemic control through diabetes self-management. Glycemic control among the Chinese population with T2DM was suboptimal. Future interventions should focus on decreasing depressive symptoms and diabetes distress levels in order to improve diabetes self-efficacy and self-management practices and, ultimately, achieve optimal glycemic control. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Scaling laws of marine predator search behaviour.

    PubMed

    Sims, David W; Southall, Emily J; Humphries, Nicolas E; Hays, Graeme C; Bradshaw, Corey J A; Pitchford, Jonathan W; James, Alex; Ahmed, Mohammed Z; Brierley, Andrew S; Hindell, Mark A; Morritt, David; Musyl, Michael K; Righton, David; Shepard, Emily L C; Wearmouth, Victoria J; Wilson, Rory P; Witt, Matthew J; Metcalfe, Julian D

    2008-02-28

    Many free-ranging predators have to make foraging decisions with little, if any, knowledge of present resource distribution and availability. The optimal search strategy they should use to maximize encounter rates with prey in heterogeneous natural environments remains a largely unresolved issue in ecology. Lévy walks are specialized random walks giving rise to fractal movement trajectories that may represent an optimal solution for searching complex landscapes. However, the adaptive significance of this putative strategy in response to natural prey distributions remains untested. Here we analyse over a million movement displacements recorded from animal-attached electronic tags to show that diverse marine predators (sharks, bony fishes, sea turtles and penguins) exhibit Lévy-walk-like behaviour close to a theoretical optimum. Prey density distributions also display Lévy-like fractal patterns, suggesting response movements by predators to prey distributions. Simulations show that predators have higher encounter rates when adopting Lévy-type foraging in natural-like prey fields compared with purely random landscapes. This is consistent with the hypothesis that observed search patterns are adapted to observed statistical patterns of the landscape. This may explain why Lévy-like behaviour seems to be widespread among diverse organisms, from microbes to humans, as a 'rule' that evolved in response to patchy resource distributions.
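
    The contrast between Lévy and simple random (Brownian) searching comes down to the step-length distribution: power-law tails with an exponent near 2 versus exponentially bounded steps. A minimal simulation sketch (illustrative parameters, not fitted to the tag data) is shown below.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def levy_steps(n, mu=2.0, l_min=1.0):
        """Pareto-distributed step lengths P(l) ~ l^(-mu), l >= l_min;
        mu close to 2 is the theoretical optimum for sparse, patchy prey."""
        u = rng.random(n)
        return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

    def brownian_steps(n, mean_len=1.5):
        """Exponentially distributed step lengths (simple random walk)."""
        return rng.exponential(mean_len, n)

    def walk(steps):
        """Convert step lengths plus uniform random headings into a 2-D track."""
        angles = rng.uniform(0.0, 2.0 * np.pi, len(steps))
        return np.cumsum(steps * np.cos(angles)), np.cumsum(steps * np.sin(angles))

    lx, ly = walk(levy_steps(10_000))
    bx, by = walk(brownian_steps(10_000))
    # The heavy-tailed walk covers far more ground for the same number of moves,
    # which is the intuition behind higher encounter rates with sparse prey.
    print("Levy net displacement    :", np.hypot(lx[-1], ly[-1]))
    print("Brownian net displacement:", np.hypot(bx[-1], by[-1]))
    ```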

  3. Approaches towards the enhanced production of Rapamycin by Streptomyces hygroscopicus MTCC 4003 through mutagenesis and optimization of process parameters by Taguchi orthogonal array methodology.

    PubMed

    Dutta, Subhasish; Basak, Bikram; Bhunia, Biswanath; Sinha, Ankan; Dey, Apurba

    2017-05-01

    The present research was conducted to define the approaches for enhanced production of rapamycin (Rap) by Streptomyces hygroscopicus microbial type culture collection (MTCC) 4003. Both physical mutagenesis by ultraviolet ray (UV) and chemical mutagenesis by N-methyl-N-nitro-N-nitrosoguanidine (NTG) have been applied successfully for the improvement of Rap production. A novel sequential UV mutagenesis technique followed by fermentation markedly enhanced Rap yield, giving an economically scalable amount of this industrially important macrolide compound. The mutant obtained through NTG mutagenesis (NTG-30-27) was found to be superior to others as it initially produced 67% more Rap than the wild type. Statistical optimization of nutritional and physicochemical parameters was carried out using the Taguchi orthogonal array approach to identify the most influential factors responsible for enhanced Rap yield by NTG-30-27. Around 72% enhanced production was achieved with nutritional factors at their assigned levels at 23 °C, 120 rpm and pH 7.6. Results were analysed on a triplicate basis, and validation and purification were carried out using high-performance liquid chromatography. The stability and potency of the extracted Rap were supported by a turbidimetric assay using Candida albicans MTCC 227 as the test organism.

  4. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    PubMed Central

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298
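
    The diagnostic likelihood ratios quoted here are simply the ratio of the probability of a score band among children with the disorder to the probability of the same band among children without it. The sketch below reproduces that arithmetic on simulated scores; the normal distributions are hypothetical stand-ins, not the CBCL data, and only the band cut-offs (<8 and >30) are taken from the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical checklist raw scores, higher on average in the disorder group.
    scores_dx = rng.normal(22, 9, 150)     # children with a mood disorder
    scores_nodx = rng.normal(12, 7, 439)   # children without

    def auc(pos, neg):
        """Area under the ROC curve via the Mann-Whitney U statistic."""
        pos, neg = np.asarray(pos), np.asarray(neg)
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    def band_dlr(pos, neg, lo, hi):
        """Diagnostic likelihood ratio for scores in [lo, hi):
        P(band | disorder) / P(band | no disorder)."""
        p_pos = np.mean((pos >= lo) & (pos < hi))
        p_neg = np.mean((neg >= lo) & (neg < hi))
        return p_pos / p_neg if p_neg > 0 else np.inf

    print("AUC:", round(auc(scores_dx, scores_nodx), 3))
    for lo, hi in [(-np.inf, 8.0), (8.0, 30.0), (30.0, np.inf)]:
        print(f"DLR for [{lo}, {hi}):", round(band_dlr(scores_dx, scores_nodx, lo, hi), 2))
    ```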

  5. A primer on receiver operating characteristic analysis and diagnostic efficiency statistics for pediatric psychology: we are ready to ROC.

    PubMed

    Youngstrom, Eric A

    2014-03-01

    To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores <8 had a diagnostic likelihood ratio <0.3, and scores >30 had a diagnostic likelihood ratio of 7.4. This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses.

  6. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    PubMed

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.

  7. Confidence bands for measured economically optimal nitrogen rates

    USDA-ARS?s Scientific Manuscript database

    While numerous researchers have computed economically optimal N rate (EONR) values from measured yield – N rate data, nearly all have neglected to compute or estimate the statistical reliability of these EONR values. In this study, a simple method for computing EONR and its confidence bands is descr...

  8. Optimal Predictions in Everyday Cognition: The Wisdom of Individuals or Crowds?

    ERIC Educational Resources Information Center

    Mozer, Michael C.; Pashler, Harold; Homaei, Hadjar

    2008-01-01

    Griffiths and Tenenbaum (2006) asked individuals to make predictions about the duration or extent of everyday events (e.g., cake baking times), and reported that predictions were optimal, employing Bayesian inference based on veridical prior distributions. Although the predictions conformed strikingly to statistics of the world, they reflect…

  9. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    USDA-ARS?s Scientific Manuscript database

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  10. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    DTIC Science & Technology

    2013-08-01

    Report documentation fragment only; recoverable details: authors Dorin Drignei (Mathematics and Statistics Department), Zissimos Mourelatos, and Vijitashwa Pandey, on accounting for test variability through sizing local domains in sequential design optimization with concurrent calibration-based model validation.

  11. Optimization of Xylanase Production from Penicillium sp.WX-Z1 by a Two-Step Statistical Strategy: Plackett-Burman and Box-Behnken Experimental Design

    PubMed Central

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp.WX-Z1 using Plackett-Burman design and Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp.WX-Z1, and the response surface methodology (RSM) with a Box-Behnken design was subsequently used to further optimize xylanase production. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, the xylanase production under optimal conditions reached 46.50 U/mL and an increase in xylanase activity of 1.34-fold was obtained compared with the original medium for fermentation carried out in a 30-L bioreactor. PMID:22949884

  12. Optimization of Xylanase production from Penicillium sp.WX-Z1 by a two-step statistical strategy: Plackett-Burman and Box-Behnken experimental design.

    PubMed

    Cui, Fengjie; Zhao, Liming

    2012-01-01

    The objective of the study was to optimize the nutrition sources in a culture medium for the production of xylanase from Penicillium sp.WX-Z1 using Plackett-Burman design and Box-Behnken design. The Plackett-Burman multifactorial design was first employed to screen the important nutrient sources in the medium for xylanase production by Penicillium sp.WX-Z1, and the response surface methodology (RSM) with a Box-Behnken design was subsequently used to further optimize xylanase production. The important nutrient sources in the culture medium, identified by the initial Plackett-Burman screening, were wheat bran, yeast extract, NaNO3, MgSO4, and CaCl2. The optimal amounts (in g/L) for maximum production of xylanase were: wheat bran, 32.8; yeast extract, 1.02; NaNO3, 12.71; MgSO4, 0.96; and CaCl2, 1.04. Using this statistical experimental design, the xylanase production under optimal conditions reached 46.50 U/mL and an increase in xylanase activity of 1.34-fold was obtained compared with the original medium for fermentation carried out in a 30-L bioreactor.
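
    The screening step described in both versions of this record relies on the orthogonality of the Plackett-Burman design: each column is assigned to a factor (or left as a dummy for error estimation) and the columns are mutually uncorrelated. A minimal sketch of the classical 8-run construction is given below; the factor names follow the abstract, but the run layout is the textbook cyclic generator and is not claimed to be the exact design the authors used.

    ```python
    import numpy as np

    def plackett_burman_8():
        """8-run Plackett-Burman screening design for up to 7 two-level factors:
        7 cyclic shifts of the classical generator plus a row of all -1."""
        gen = np.array([1, 1, 1, -1, 1, -1, -1])
        rows = [np.roll(gen, i) for i in range(7)] + [-np.ones(7, dtype=int)]
        return np.array(rows)

    design = plackett_burman_8()
    factors = ["wheat bran", "yeast extract", "NaNO3", "MgSO4", "CaCl2",
               "dummy 1", "dummy 2"]   # five real factors plus two dummy columns
    for run in design:
        print(dict(zip(factors, run.tolist())))

    # Orthogonality check: every pair of columns is uncorrelated.
    assert np.array_equal(design.T @ design, 8 * np.eye(7, dtype=int))
    ```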

  13. Cancer Statistics Animator

    Cancer.gov

    This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. Provides access to incidence, mortality, and survival. Select the type of statistic, variables, format, and then extract the statistics in a delimited format for further analyses.

  14. Interpretation of the results of statistical measurements. [search for basic probability model

    NASA Technical Reports Server (NTRS)

    Olshevskiy, V. V.

    1973-01-01

    For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.

  15. Optimal Regulation of Structural Systems with Uncertain Parameters.

    DTIC Science & Technology

    1981-02-02

    Fragmentary OCR text; the recoverable content indicates the problem has been addressed, in part, by Statistical Energy Analysis, motivated by a concern with high-frequency vibration and acoustical-structural coupling. Citation fragments include "...Parameter Systems," AFOSR-TR-79-0753 (May 1979); R. H. Lyon, Statistical Energy Analysis of Dynamical Systems: Theory and Applications (M.I.T. Press, Cambridge, Mass., 1975); and E. E. Ungar, "Statistical Energy Analysis of Vibrating Systems," Trans. ASME, J. Eng. Ind. 89, 626 (1967).

  16. Probabilistic models in human sensorimotor control

    PubMed Central

    Wolpert, Daniel M.

    2009-01-01

    Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
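
    The maximum likelihood estimates formed by combining multiple sources of information have a particularly simple form when each cue is modelled as Gaussian: estimates are combined in proportion to their precisions, and adding a Gaussian prior to the mix turns the same formula into the MAP estimate. The sketch below is a toy illustration with invented numbers, not data from any of the reviewed studies.

    ```python
    import numpy as np

    def combine_gaussian_cues(means, variances):
        """Precision-weighted fusion of independent Gaussian estimates.
        Including a Gaussian prior among the inputs yields the MAP estimate."""
        means = np.asarray(means, dtype=float)
        precisions = 1.0 / np.asarray(variances, dtype=float)
        post_var = 1.0 / precisions.sum()
        post_mean = post_var * np.sum(precisions * means)
        return post_mean, post_var

    # Hypothetical hand-position estimates (cm): vision, proprioception, prior.
    mean, var = combine_gaussian_cues(means=[10.0, 14.0, 12.0],
                                      variances=[1.0, 4.0, 9.0])
    print(f"combined estimate = {mean:.2f} cm, variance = {var:.2f}")
    ```

    The most reliable cue (here vision, with the smallest variance) dominates the combined estimate, which is the signature behaviour reported in the cue-combination experiments the review describes.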

  17. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994

  18. Comparison of statistical models to estimate parasite growth rate in the induced blood stage malaria model.

    PubMed

    Wockner, Leesa F; Hoffmann, Isabell; O'Rourke, Peter; McCarthy, James S; Marquart, Louise

    2017-08-25

    The efficacy of vaccines aimed at inhibiting the growth of malaria parasites in the blood can be assessed by comparing the growth rate of parasitaemia in the blood of subjects treated with a test vaccine with that of controls. In studies using induced blood stage malaria (IBSM), a type of controlled human malaria infection, parasite growth rate has been measured using models with the intercept on the y-axis fixed to the inoculum size. A set of statistical models was evaluated to determine an optimal methodology to estimate parasite growth rate in IBSM studies. Parasite growth rates were estimated using data from 40 subjects published in three IBSM studies. Data were fitted using 12 statistical models: log-linear, and sine-wave with the period either fixed to 48 h or not fixed; these models were fitted with the intercept either fixed to the inoculum size or not fixed. All models were fitted by individual, and overall by study using a mixed effects model with a random effect for the individual. Log-linear models and sine-wave models, with the period fixed or not fixed, resulted in similar parasite growth rate estimates (within 0.05 log10 parasites per mL/day). Average parasite growth rate estimates for models fitted by individual with the intercept fixed to the inoculum size were substantially lower, by an average of 0.17 log10 parasites per mL/day (range 0.06-0.24), compared with non-fixed intercept models. Variability of parasite growth rate estimates across the three studies analysed was substantially higher (3.5 times) for fixed-intercept models compared with non-fixed intercept models. The same tendency was observed in models fitted overall by study. Modelling data by individual or overall by study had minimal effect on parasite growth estimates. The analyses presented in this report confirm that fixing the intercept to the inoculum size influences parasite growth estimates. The most appropriate statistical model to estimate the growth rate of blood-stage parasites in IBSM studies appears to be a log-linear model fitted by individual and with the intercept estimated in the log-linear regression. Future studies should use this model to estimate parasite growth rates.
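
    The recommended model, a log-linear fit by individual with the intercept estimated rather than fixed to the inoculum size, amounts to an ordinary least-squares regression of log10 parasitaemia on time. The sketch below shows that fit for one hypothetical subject; the sampling times and counts are invented for illustration and are not trial data.

    ```python
    import numpy as np

    # Hypothetical parasitaemia (parasites/mL) at post-inoculation times (days).
    t = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
    parasites_per_ml = np.array([120.0, 450.0, 1600.0, 5800.0, 21000.0])

    # Log-linear model: log10(parasitaemia) = intercept + slope * t,
    # with the intercept estimated in the regression (not fixed to the inoculum).
    y = np.log10(parasites_per_ml)
    slope, intercept = np.polyfit(t, y, 1)

    print(f"growth rate = {slope:.2f} log10 parasites per mL per day")
    print(f"intercept   = {intercept:.2f} log10 parasites per mL (estimated)")
    ```

    Fixing the intercept instead would force the line through the assumed inoculum value, which the study found lowered growth rate estimates by roughly 0.17 log10 parasites per mL/day on average.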

  19. The analgesic effects of exogenous melatonin in humans.

    PubMed

    Andersen, Lars Peter Holst

    2016-10-01

    The hormone melatonin is produced with circadian rhythm by the pineal gland in humans. The melatonin rhythm provides an endogenous synchronizer, modulating e.g. blood pressure, body temperature, cortisol rhythm, sleep-awake-cycle, immune function and anti-oxidative defence. Interestingly, a number of experimental animal studies demonstrate significant dose-dependent anti-nociceptive effects of exogenous melatonin. Similarly, recent experimental and clinical studies in humans indicate significant analgesic effects. In Study I, we systematically reviewed all randomized studies investigating clinical effects of perioperative melatonin. Meta-analyses demonstrated significant analgesic and anxiolytic effects of melatonin in surgical patients, equating to reductions of 20 mm and 19 mm, respectively, on a VAS, compared with placebo. Profound heterogeneity between the included studies was, however, present. In Study II, we aimed to investigate the analgesic, anti-hyperalgesic and anti-inflammatory effects of exogenous melatonin in a validated human inflammatory pain model, the human burn model. The study was performed as a randomized, double blind placebo-controlled crossover study. Primary outcomes were pain during the burn injury and areas of secondary hyperalgesia. No significant effects of exogenous melatonin were observed with respect to primary or secondary outcomes, compared to placebo. Studies III and IV estimated the pharmacokinetic variables of exogenous melatonin. Oral melatonin demonstrated a tmax value of 41 minutes. Bioavailability of oral melatonin was only 3%. Elimination t1/2 was approximately 45 minutes following both oral and intravenous administration. High-dose intravenous melatonin was not associated with increased sedation, in terms of simple reaction times, compared to placebo. Similarly, no other adverse effects were reported. In Study V, we aimed to re-analyse data obtained from a randomized analgesic drug trial using a selection of standard statistical tests. Furthermore, we presented an integrated assessment method of longitudinally measured pain intensity and opioid consumption. Our analyses documented that the employed statistical method impacted the statistical significance of post-operative analgesic outcomes. Furthermore, the novel integrated assessment method combines two interdependent outcomes, lowers the risk of type 2 errors, increases the statistical power, and provides a more accurate description of post-operative analgesic efficacy. Exogenous melatonin may offer an effective and safe analgesic drug. At this moment, however, the results of human studies have been contradictory. High-quality randomized experimental and clinical studies are still needed to establish a "genuine" analgesic effect of the drug in humans. Other perioperative effects of exogenous melatonin should also be investigated, before melatonin can be introduced for clinical routine use in surgical patients. Despite promising experimental and clinical findings, several unanswered questions also relate to optimal dosage, timing of administration and administration route of exogenous melatonin.

  20. Dissecting the genetics of complex traits using summary association statistics.

    PubMed

    Pasaniuc, Bogdan; Price, Alkes L

    2017-02-01

    During the past decade, genome-wide association studies (GWAS) have been used to successfully identify tens of thousands of genetic variants associated with complex traits and diseases. These studies have produced extensive repositories of genetic variation and trait measurements across large numbers of individuals, providing tremendous opportunities for further analyses. However, privacy concerns and other logistical considerations often limit access to individual-level genetic data, motivating the development of methods that analyse summary association statistics. Here, we review recent progress on statistical methods that leverage summary association data to gain insights into the genetic basis of complex traits and diseases.

  1. Statistical innovations in diagnostic device evaluation.

    PubMed

    Yu, Tinghui; Li, Qin; Gray, Gerry; Yue, Lilly Q

    2016-01-01

    Due to rapid technological development, innovations in diagnostic devices are proceeding at an extremely fast pace. Accordingly, the needs for adopting innovative statistical methods have emerged in the evaluation of diagnostic devices. Statisticians in the Center for Devices and Radiological Health at the Food and Drug Administration have provided leadership in implementing statistical innovations. The innovations discussed in this article include: the adoption of bootstrap and Jackknife methods, the implementation of appropriate multiple reader multiple case study design, the application of robustness analyses for missing data, and the development of study designs and data analyses for companion diagnostics.
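
    One of the innovations mentioned, the bootstrap, is easy to apply to a basic diagnostic performance measure such as sensitivity. The sketch below computes a percentile bootstrap confidence interval for a hypothetical device; the counts are invented for illustration and do not correspond to any submission discussed in the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def bootstrap_ci(values, stat=np.mean, n_boot=5000, alpha=0.05):
        """Percentile bootstrap confidence interval for a statistic of a sample."""
        values = np.asarray(values)
        boot = np.array([stat(rng.choice(values, size=len(values), replace=True))
                         for _ in range(n_boot)])
        return np.quantile(boot, [alpha / 2.0, 1.0 - alpha / 2.0])

    # Hypothetical results on 80 diseased subjects: 1 = correctly detected.
    detections = np.array([1] * 68 + [0] * 12)
    lo, hi = bootstrap_ci(detections)
    print(f"sensitivity = {detections.mean():.2f}, "
          f"95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
    ```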

  2. Identification of differentially expressed genes and false discovery rate in microarray studies.

    PubMed

    Gusnanto, Arief; Calza, Stefano; Pawitan, Yudi

    2007-04-01

    To highlight the development in microarray data analysis for the identification of differentially expressed genes, particularly via control of false discovery rate. The emergence of high-throughput technology such as microarrays raises two fundamental statistical issues: multiplicity and sensitivity. We focus on the biological problem of identifying differentially expressed genes. First, multiplicity arises due to testing tens of thousands of hypotheses, rendering the standard P value meaningless. Second, known optimal single-test procedures such as the t-test perform poorly in the context of highly multiple tests. The standard approach of dealing with multiplicity is too conservative in the microarray context. The false discovery rate concept is fast becoming the key statistical assessment tool replacing the P value. We review the false discovery rate approach and argue that it is more sensible for microarray data. We also discuss some methods to take into account additional information from the microarrays to improve the false discovery rate. There is growing consensus on how to analyse microarray data using the false discovery rate framework in place of the classical P value. Further research is needed on the preprocessing of the raw data, such as the normalization step and filtering, and on finding the most sensitive test procedure.
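
    The false discovery rate framework the review advocates is most often implemented with the Benjamini-Hochberg step-up procedure, which operates directly on the vector of per-gene P values. A minimal sketch on simulated P values (an illustrative mixture of null and truly differentially expressed genes, not microarray data) follows.

    ```python
    import numpy as np

    def benjamini_hochberg(pvalues, q=0.05):
        """Benjamini-Hochberg step-up procedure: boolean mask of hypotheses
        rejected while controlling the false discovery rate at level q."""
        p = np.asarray(pvalues)
        m = len(p)
        order = np.argsort(p)
        thresholds = q * np.arange(1, m + 1) / m
        below = p[order] <= thresholds
        rejected = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()       # largest i with p_(i) <= i*q/m
            rejected[order[:k + 1]] = True
        return rejected

    # Simulated study: 9,500 null genes and 500 genes with real signal.
    rng = np.random.default_rng(5)
    pvals = np.concatenate([rng.uniform(size=9500),
                            rng.beta(0.5, 30.0, size=500)])  # small p for true signals
    print("genes declared significant:", int(benjamini_hochberg(pvals, q=0.05).sum()))
    ```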

  3. The impact of occupational therapy and lifestyle interventions on older persons' health, well-being, and occupational adaptation.

    PubMed

    Johansson, Ann; Björklund, Anita

    2016-01-01

    The aim of this study was to investigate whether a four-month occupational based health-promoting programme for older persons living in community dwellings could maintain/improve their general health and well-being. Further, the aim was to explore whether the programme facilitated the older persons' occupational adaptation. The study had a quasi-experimental design, with a non-equivalent control group combined with semi-structured interviews. The intervention group comprised 22 participants, and the control group 18. Outcomes were measured using the Short Form 36, Life Satisfaction Index-Z and Meaningful Activity Participation Assessment. Content analysis, based on concepts from the Model of Occupational Adaptation, was used to analyse the interviews. The intervention group showed statistically significant improvements in general health variables such as vitality and mental health, and positive trends for psychological well-being. There were no statistically significant differences between the intervention group and the control group, but the groups were not fully matched. The qualitative analysis based on Occupational Adaptation pointed out social aspects as a complement to the overall results. Participating in meaningful, challenging activities in different environments stimulates the occupational adaptation process; this is something occupational therapists could use to empower older persons to find their optimal occupational lives.

  4. Determination Of The Activity Space By The Stereometric Method

    NASA Astrophysics Data System (ADS)

    Deloison, Y.; Crete, N.; Mollard, R.

    1980-07-01

    To determine the activity space of a sitting subject, it is necessary to go beyond the mere statistical description of morphology and the knowledge of the displacement volume. An analysis of the positions, or variations of the positions, of the diverse segmental elements (arms, hands, lower limbs, etc.) in the course of a given activity is required. Of the various methods used to locate quickly and accurately the spatial positions of anatomical points, stereometry makes it possible to plot the three-dimensional coordinates of any point in space in relation to a fixed trirectangular frame of reference determined by the stereometric measuring device. Thus, regardless of the orientation and posture of the subject, his segmental elements can be easily pinpointed, throughout the experiment, within the space they occupy. Using this method, for a sample of operators seated at an operation station, applying either manual controls or pedals, and belonging to a population statistically defined from the data collected and the analyses produced by the anthropometric study, it is possible to determine a contour line of reach capability marking out the usable working space, and to identify within this working space a contour line of preferential activity bounded by the range of optimal reach capability of all the subjects.

  5. VAGINAL PROGESTERONE VERSUS CERVICAL CERCLAGE FOR THE PREVENTION OF PRETERM BIRTH IN WOMEN WITH A SONOGRAPHIC SHORT CERVIX, SINGLETON GESTATION, AND PREVIOUS PRETERM BIRTH: A SYSTEMATIC REVIEW AND INDIRECT COMPARISON META-ANALYSIS

    PubMed Central

    CONDE-AGUDELO, Agustin; ROMERO, Roberto; NICOLAIDES, Kypros; CHAIWORAPONGSA, Tinnakorn; O'BRIEN, John M.; CETINGOZ, Elcin; DA FONSECA, Eduardo; CREASY, George; SOMA-PILLAY, Priya; FUSEY, Shalini; CAM, Cetin; ALFIREVIC, Zarko; HASSAN, Sonia S.

    2012-01-01

    OBJECTIVE No randomized controlled trial has directly compared vaginal progesterone and cervical cerclage for the prevention of preterm birth in women with a sonographic short cervix in the midtrimester, singleton gestation, and previous spontaneous preterm birth. We performed an indirect comparison of vaginal progesterone versus cerclage, using placebo/no cerclage as the common comparator. STUDY DESIGN Adjusted indirect meta-analysis of randomized controlled trials. RESULTS Four studies evaluating vaginal progesterone versus placebo (158 patients) and five evaluating cerclage versus no cerclage (504 patients) were included. Both interventions were associated with a statistically significant reduction in the risk of preterm birth <32 weeks of gestation and composite perinatal morbidity and mortality compared with placebo/no cerclage. Adjusted indirect meta-analyses did not show statistically significant differences between vaginal progesterone and cerclage in reducing preterm birth or adverse perinatal outcomes. CONCLUSION Based on state-of-the-art methodology for indirect comparisons, vaginal progesterone and cerclage appear equally efficacious in the prevention of preterm birth in women with a sonographic short cervix in the midtrimester, singleton gestation, and previous preterm birth. The selection of the optimal treatment may depend upon adverse events, cost and patient/clinician preferences. PMID:23157855
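
    The adjusted indirect meta-analysis used here can be illustrated with the Bucher method, in which the two interventions are compared only through their common comparator. The sketch below is purely illustrative: the relative risks and confidence intervals are invented placeholders, not the pooled estimates reported in this meta-analysis.

    ```python
    import numpy as np
    from scipy import stats

    def bucher_indirect(log_rr_a, se_a, log_rr_b, se_b):
        """Bucher adjusted indirect comparison of A vs B, each estimated against
        the same comparator: log RR(A vs B) = log RR(A vs C) - log RR(B vs C)."""
        diff = log_rr_a - log_rr_b
        se = np.sqrt(se_a ** 2 + se_b ** 2)
        p = 2.0 * (1.0 - stats.norm.cdf(abs(diff / se)))
        ci = (np.exp(diff - 1.96 * se), np.exp(diff + 1.96 * se))
        return np.exp(diff), ci, p

    def se_from_ci(lo, hi):
        """Standard error of a log relative risk recovered from its 95% CI."""
        return (np.log(hi) - np.log(lo)) / (2.0 * 1.96)

    # Invented pooled relative risks of preterm birth versus the common comparator.
    rr_prog, ci_prog = 0.66, (0.45, 0.96)   # vaginal progesterone vs placebo
    rr_cerc, ci_cerc = 0.74, (0.52, 1.05)   # cerclage vs no cerclage

    rr, ci, p = bucher_indirect(np.log(rr_prog), se_from_ci(*ci_prog),
                                np.log(rr_cerc), se_from_ci(*ci_cerc))
    print(f"indirect RR (progesterone vs cerclage) = {rr:.2f}, "
          f"95% CI {ci[0]:.2f} to {ci[1]:.2f}, p = {p:.2f}")
    ```

    A wide indirect confidence interval spanning 1.0, as with these placeholder numbers, mirrors the abstract's conclusion that the indirect comparison did not show a statistically significant difference between the two interventions.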

  6. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group

    PubMed Central

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T.; Pereira, Carol; Rosenkranz, Susan L.; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu (Jeanne); Wang, Rui; Lok, Judith

    2017-01-01

    Abstract The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. PMID:28350899

  7. Statistical mechanics of multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.

    2009-02-01

    We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over all balanced bipartitions. We search for those (maximally multipartite entangled) states whose purity is minimum for all bipartitions and recast this optimization problem into a problem of statistical mechanics.

  8. Modeling Statistical Insensitivity: Sources of Suboptimal Behavior

    ERIC Educational Resources Information Center

    Gagliardi, Annie; Feldman, Naomi H.; Lidz, Jeffrey

    2017-01-01

    Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the…

  9. Active Structural Acoustic Control as an Approach to Acoustic Optimization of Lightweight Structures

    DTIC Science & Technology

    2001-06-01

    appropriate approach based on Statistical Energy Analysis (SEA) would facilitate investigations of the structural behavior at a high modal density. On the way...higher frequency investigations an approach based on the Statistical Energy Analysis (SEA) is recommended to describe the structural dynamic behavior

  10. Results of Li-Tho trial: a prospective randomized study on effectiveness of LigaSure® in lung resections.

    PubMed

    Bertolaccini, Luca; Viti, Andrea; Cavallo, Antonio; Terzi, Alberto

    2014-04-01

    The role of the electro-thermal bipolar tissue sealing system (LigaSure®, LS; Covidien, Inc., CO, USA) in thoracic surgery is still undefined, and reports of its use remain limited. The objective of the trial was to evaluate the costs and benefits of LS in major lung resection surgery. A randomized blinded study of a consecutive series of 100 patients undergoing lobectomy was undertaken. After muscle-sparing thoracotomy and classification of lung fissures according to Craig-Walker, patients with fissure Grade 2-4 were randomized to fissure completion with either staplers (Stapler group) or LS (LS group). Recorded parameters were analysed for differences in selected intraoperative and postoperative outcomes. Statistical analysis was performed with the bootstrap method; Pearson's χ² test and Fisher's exact test were used to compare dichotomous variables. Cost-benefit evaluation was performed using Pareto optimal analysis. There were no significant differences between groups in demographic and baseline characteristics. No patient was withdrawn from the study and no adverse effect was recorded. There was no mortality and there were no major complications in either group. There were no statistically significant differences in operative time or morbidity between the LS and Stapler groups. The LS group showed a non-significant increase in air leaks during the first 24 postoperative hours and a statistically significant increase in drainage volume. No statistically significant difference in hospital length of stay was observed. Overall, the multi-criteria cost/benefit analysis favoured the LS group, with a good 'Pareto optimum'. LS is a safe device for thoracic surgery and can be a valid alternative to staplers; in this setting, LS allows preservation of functional lung tissue. In terms of cost, LS appears equivalent to staplers.
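    The bootstrap comparison of group outcomes used above can be illustrated with a percentile bootstrap of the difference in group means. The data below are simulated placeholders, not the trial's recorded parameters.

      import numpy as np

      rng = np.random.default_rng(42)

      # Simulated placeholder outcomes (e.g. chest-drain output in mL) for the two arms
      ls_group = rng.normal(loc=420, scale=110, size=50)
      stapler_group = rng.normal(loc=380, scale=100, size=50)

      def bootstrap_mean_diff(a, b, n_boot=10_000, alpha=0.05):
          """Percentile bootstrap CI for the difference in means between groups a and b."""
          diffs = np.empty(n_boot)
          for i in range(n_boot):
              diffs[i] = (rng.choice(a, size=a.size, replace=True).mean()
                          - rng.choice(b, size=b.size, replace=True).mean())
          lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
          return diffs.mean(), (lo, hi)

      est, ci = bootstrap_mean_diff(ls_group, stapler_group)
      print(f"mean difference {est:.1f} mL, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")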

  11. How Near is a Near-Optimal Solution: Confidence Limits for the Global Optimum.

    DTIC Science & Technology

    1980-05-01

    Approximate or near-optimal solutions are the only practical solutions available. This paper identifies and compares some procedures which use independent near-optimal solutions. The objective of this paper is to indicate some relatively new statistical procedures for obtaining an upper confidence limit on G. Each of these...

  12. A Comprehensive Review of Swarm Optimization Algorithms

    PubMed Central

    2015-01-01

    Many swarm optimization algorithms have been introduced since the early 1960s, from Evolutionary Programming to the most recent, Grey Wolf Optimization. All of these algorithms have demonstrated their potential to solve many optimization problems. This paper provides an in-depth survey of well-known optimization algorithms. Selected algorithms are briefly explained and compared with each other comprehensively through experiments conducted using thirty well-known benchmark functions. Their advantages and disadvantages are also discussed. A number of statistical tests are then carried out to determine which performance differences are significant. The results indicate an overall advantage for Differential Evolution (DE), closely followed by Particle Swarm Optimization (PSO), compared with the other approaches considered. PMID:25992655
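    The benchmark-plus-significance-test protocol described above can be sketched as follows: run each optimizer repeatedly on a benchmark function and compare the final objective values with a nonparametric test. The PSO below is a bare-bones stand-in rather than any specific published variant, and SciPy's differential_evolution plays the role of DE.

      import numpy as np
      from scipy.optimize import differential_evolution
      from scipy.stats import wilcoxon

      def rastrigin(x):
          x = np.asarray(x)
          return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

      def simple_pso(f, dim, bounds=(-5.12, 5.12), n_particles=30, iters=200, seed=0):
          """A bare-bones global-best PSO, used only as an illustrative baseline."""
          rng = np.random.default_rng(seed)
          x = rng.uniform(*bounds, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
              x = np.clip(x + v, *bounds)
              fx = np.array([f(p) for p in x])
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              gbest = pbest[pbest_f.argmin()].copy()
          return pbest_f.min()

      dim, runs = 5, 11
      de = [differential_evolution(rastrigin, [(-5.12, 5.12)] * dim,
                                   seed=s, maxiter=200, tol=1e-8).fun
            for s in range(runs)]
      pso = [simple_pso(rastrigin, dim, seed=s) for s in range(runs)]
      print("median DE:", np.median(de), "median PSO:", np.median(pso))
      print("Wilcoxon signed-rank p-value:", wilcoxon(de, pso).pvalue)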

  13. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
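    Toolkits of this kind are typically coupled to a simulation code through a small analysis driver that reads a parameters file and writes back a results file for each model evaluation. The sketch below illustrates only that loose-coupling pattern; the file format shown is a deliberately simplified assumption, not DAKOTA's actual parameters/results syntax.

      import sys

      def read_parameters(path):
          """Read a simplified 'value  descriptor' parameters file (hypothetical format)."""
          params = {}
          with open(path) as fh:
              for line in fh:
                  value, name = line.split()
                  params[name] = float(value)
          return params

      def simulation(params):
          """Stand-in for the expensive computational model being driven."""
          x, y = params["x1"], params["x2"]
          return (x - 1.0) ** 2 + 10.0 * (y - x**2) ** 2   # Rosenbrock-like response

      def write_results(path, value):
          with open(path, "w") as fh:
              fh.write(f"{value:.12e}  response_fn_1\n")

      if __name__ == "__main__":
          params_file, results_file = sys.argv[1], sys.argv[2]
          write_results(results_file, simulation(read_parameters(params_file)))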

  14. ISSUES IN THE STATISTICAL ANALYSIS OF SMALL-AREA HEALTH DATA. (R825173)

    EPA Science Inventory

    The availability of geographically indexed health and population data, together with advances in computing, geographical information systems, and statistical methodology, has opened the way for serious exploration of small area health statistics based on routine data. Such analyses may be...

  15. Optimal Power Allocation for CC-HARQ-based Cognitive Radio with Statistical CSI in Nakagami Slow Fading Channels

    NASA Astrophysics Data System (ADS)

    Xu, Ding; Li, Qun

    2017-01-01

    This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid-automatic-repeat-request (HARQ) with chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of the perfect instantaneous channel state information (CSI), only the statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.
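    The paper's algorithm itself is not reproduced here, but the underlying Lagrange-multiplier machinery can be illustrated with the classic water-filling problem: the dual variable is found by bisection until the power constraint is tight, the same kind of iterative search the abstract describes.

      import numpy as np

      def waterfilling(gains, total_power, tol=1e-9):
          """Power allocation via bisection on the Lagrange multiplier.

          Maximizes sum_i log(1 + p_i * g_i) subject to sum_i p_i <= total_power.
          The optimal allocation is p_i = max(0, 1/lam - 1/g_i); the dual variable
          lam is found by bisection until the power constraint is tight.
          """
          gains = np.asarray(gains, dtype=float)
          lo, hi = 1e-12, gains.max()          # larger lam => less power allocated
          while hi - lo > tol:
              lam = 0.5 * (lo + hi)
              power = np.maximum(0.0, 1.0 / lam - 1.0 / gains)
              if power.sum() > total_power:
                  lo = lam                     # constraint violated: raise lam
              else:
                  hi = lam
          return np.maximum(0.0, 1.0 / hi - 1.0 / gains)

      print(waterfilling([0.3, 1.0, 2.5, 4.0], total_power=2.0))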

  16. Variability aware compact model characterization for statistical circuit design optimization

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Qian, Kun; Spanos, Costas J.

    2012-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
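    Linear propagation of variance uses the first-order Taylor expansion of the model output around the nominal parameters, so Var(f) ≈ J Σ Jᵀ with J the parameter Jacobian and Σ the parameter covariance. The sketch below uses a placeholder square-law drain-current model and a made-up covariance, not the EKV model.

      import numpy as np

      def drain_current(params):
          """Placeholder compact model: I = K * (Vgs - Vth)^2 (not the EKV model)."""
          k, vth = params
          vgs = 1.0
          return k * (vgs - vth) ** 2

      def propagate_variance(model, mean_params, cov_params, eps=1e-6):
          """First-order (linear) propagation of variance: Var(f) ~ J @ Sigma @ J.T."""
          mean_params = np.asarray(mean_params, dtype=float)
          f0 = model(mean_params)
          jac = np.empty(mean_params.size)
          for i in range(mean_params.size):
              bumped = mean_params.copy()
              bumped[i] += eps
              jac[i] = (model(bumped) - f0) / eps       # forward-difference sensitivity
          return f0, float(jac @ cov_params @ jac)

      mean = [2e-4, 0.45]                                # nominal K, Vth (illustrative)
      cov = np.array([[1e-10, 0.0], [0.0, 4e-4]])        # placeholder parameter covariance
      print(propagate_variance(drain_current, mean, cov))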

  17. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show, using statistically rigorous arguments, that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.
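    The central point, that postselection buys amplification only at the cost of discarded data, can be illustrated with a toy simulation (not the actual quantum model): an estimator built from all N meter readings and one built from an amplified but postselected fraction p = 1/A² of them end up with essentially the same standard error.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy model: the "standard" strategy averages all N readings with mean shift d;
      # the "weak-value-like" strategy keeps a fraction p of events with shift A*d.
      d_true, sigma, N, trials = 0.01, 1.0, 100_000, 400
      A = 5.0
      p = 1.0 / A**2          # amplification is paid for by discarding events

      standard, amplified = [], []
      for _ in range(trials):
          all_data = rng.normal(d_true, sigma, size=N)
          post = rng.normal(A * d_true, sigma, size=int(p * N))
          standard.append(all_data.mean())          # estimate of d from all data
          amplified.append(post.mean() / A)         # estimate of d from postselected data

      print("std. error, all data       :", np.std(standard))
      print("std. error, weak-value-like:", np.std(amplified))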

  18. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.

    PubMed

    Kamiura, Moto; Sano, Kohei

    2017-10-01

    The principle of optimism in the face of uncertainty is a well-known heuristic in sequential decision-making problems, and the Overtaking method based on this principle is an effective algorithm for solving multi-armed bandit problems. In previous work it was defined through a set of heuristic formulation patterns. The objective of the present paper is to redefine the value functions of the Overtaking method and to unify their formulation. The unified Overtaking method is associated with statistical upper bounds on the confidence intervals of the expected rewards. The unification of the formulation enhances the universality of the Overtaking method. Consequently, we obtain a new Overtaking method for exponentially distributed rewards, analyze it numerically, and show that it outperforms the UCB algorithm on average. The present study suggests that, in the context of multi-armed bandit problems, the principle of optimism in the face of uncertainty should be regarded not as a heuristic but as a statistics-based consequence of the law of large numbers for the sample mean of rewards and of the estimation of upper bounds on the expected rewards. Copyright © 2017 Elsevier B.V. All rights reserved.
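    The UCB baseline referred to above is easily sketched: after pulling each arm once, always pull the arm whose sample mean plus confidence bonus sqrt(2 ln t / n_i) is largest. The Bernoulli test bed below is illustrative; the Overtaking method itself is not reproduced.

      import math
      import random

      def ucb1(arm_means, horizon=10_000, seed=0):
          """UCB1 on Bernoulli arms: play the arm maximizing mean + sqrt(2 ln t / n)."""
          random.seed(seed)
          k = len(arm_means)
          counts, sums = [0] * k, [0.0] * k
          regret = 0.0
          best = max(arm_means)
          for t in range(1, horizon + 1):
              if t <= k:
                  arm = t - 1                          # initialization: pull each arm once
              else:
                  arm = max(range(k), key=lambda i: sums[i] / counts[i]
                            + math.sqrt(2.0 * math.log(t) / counts[i]))
              reward = 1.0 if random.random() < arm_means[arm] else 0.0
              counts[arm] += 1
              sums[arm] += reward
              regret += best - arm_means[arm]
          return regret

      print("cumulative regret:", ucb1([0.2, 0.5, 0.55]))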

  19. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
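    One of the simplest sensitivity analyses advocated above is leave-one-out re-estimation of the pooled effect. The sketch below applies a DerSimonian-Laird random-effects pool to made-up effect sizes and variances and reports how the pooled estimate shifts as each study is dropped.

      import numpy as np

      def dersimonian_laird(yi, vi):
          """Random-effects pooled estimate with the DerSimonian-Laird tau^2 estimator."""
          yi, vi = np.asarray(yi, float), np.asarray(vi, float)
          w = 1.0 / vi
          y_fe = np.sum(w * yi) / np.sum(w)                    # fixed-effect mean
          q = np.sum(w * (yi - y_fe) ** 2)                     # heterogeneity statistic
          c = np.sum(w) - np.sum(w**2) / np.sum(w)
          tau2 = max(0.0, (q - (len(yi) - 1)) / c)             # between-study variance
          w_re = 1.0 / (vi + tau2)
          pooled = np.sum(w_re * yi) / np.sum(w_re)
          se = np.sqrt(1.0 / np.sum(w_re))
          return pooled, se

      # Made-up effect sizes (e.g. log response ratios) and sampling variances
      yi = [0.30, 0.12, 0.45, -0.05, 0.25, 0.60]
      vi = [0.02, 0.04, 0.03, 0.05, 0.02, 0.06]

      pooled, se = dersimonian_laird(yi, vi)
      print(f"all studies: {pooled:.3f} (SE {se:.3f})")
      for i in range(len(yi)):                      # leave-one-out sensitivity analysis
          p, s = dersimonian_laird(yi[:i] + yi[i + 1:], vi[:i] + vi[i + 1:])
          print(f"without study {i + 1}: {p:.3f} (SE {s:.3f})")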

  20. Functional Laterality of Task-Evoked Activation in Sensorimotor Cortex of Preterm Infants: An Optimized 3 T fMRI Study Employing a Customized Neonatal Head Coil.

    PubMed

    Scheef, Lukas; Nordmeyer-Massner, Jurek A; Smith-Collins, Adam Pr; Müller, Nicole; Stegmann-Woessner, Gaby; Jankowski, Jacob; Gieseke, Jürgen; Born, Mark; Seitz, Hermann; Bartmann, Peter; Schild, Hans H; Pruessmann, Klaas P; Heep, Axel; Boecker, Henning

    2017-01-01

    Functional magnetic resonance imaging (fMRI) in neonates has been introduced as a non-invasive method for studying sensorimotor processing in the developing brain. However, previous neonatal studies have delivered conflicting results regarding localization, lateralization, and directionality of blood oxygenation level dependent (BOLD) responses in sensorimotor cortex (SMC). Confounding factors in interpreting neonatal fMRI studies include the use of standard adult MR coils, which provide insufficient signal to noise, and liberal statistical thresholds, both of which compromise clinical interpretation at the single-subject level. Here, we employed a custom-designed neonatal MR coil adapted and optimized to the head size of a newborn in order to improve the robustness, reliability, and validity of neonatal sensorimotor fMRI. Thirteen preterm infants with a median gestational age of 26 weeks were scanned at term-corrected age using a prototype 8-channel neonatal head coil at 3T (Achieva, Philips, Best, NL). Sensorimotor stimulation was elicited by passive extension/flexion of the elbow at 1 Hz in a block design. Analysis of temporal signal to noise ratio (tSNR) was performed on the whole brain and the SMC, and was compared to data acquired with an 'adult' 8-channel head coil published previously. Task-evoked activation was determined by single-subject SPM8 analyses, thresholded at p < 0.05, whole-brain FWE-corrected. Using the custom-designed neonatal MR coil, we found significant positive BOLD responses in contralateral SMC after unilateral passive sensorimotor stimulation in all neonates (analyses restricted to artifact-free data sets = 8/13). Improved imaging characteristics of the neonatal MR coil were evidenced by additional phantom and in vivo tSNR measurements: phantom studies revealed a 240% global increase in tSNR; in vivo studies revealed a 73% global and a 55% local (SMC) increase in tSNR, compared to the 'adult' MR coil. Our findings underline the importance of using optimized coil settings for neonatal fMRI, yielding robust and reproducible SMC activation at the single-subject level. We conclude that functional lateralization of SMC activation, as found in children and adults, is already present in the newborn period.
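    Temporal SNR as used above is simply the voxelwise temporal mean divided by the temporal standard deviation, averaged globally or within a region of interest. The sketch below computes it on a simulated 4-D volume with a placeholder ROI, not on the study's data.

      import numpy as np

      def temporal_snr(bold, mask=None):
          """Voxelwise tSNR = temporal mean / temporal std of a 4-D (x, y, z, t) series."""
          mean_img = bold.mean(axis=-1)
          std_img = bold.std(axis=-1)
          tsnr = mean_img / np.maximum(std_img, 1e-12)   # guard against flat voxels
          if mask is not None:
              return float(tsnr[mask].mean())
          return float(tsnr.mean())

      # Simulated data: baseline signal plus temporal noise (placeholder for real fMRI)
      rng = np.random.default_rng(3)
      bold = 1000.0 + rng.normal(0.0, 20.0, size=(32, 32, 20, 150))
      roi = np.zeros((32, 32, 20), dtype=bool)
      roi[12:20, 12:20, 8:12] = True              # placeholder sensorimotor ROI

      print("global tSNR:", temporal_snr(bold))
      print("ROI tSNR   :", temporal_snr(bold, roi))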
