Sample records for continuous models methodology

  1. An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing

    DTIC Science & Technology

    2002-08-01

    simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control an external environment. This environment is connected to the digital

  2. Policy Implications for Continuous Employment Decisions of High School Principals: An Alternative Methodological Approach for Using High-Stakes Testing Outcomes

    ERIC Educational Resources Information Center

    Young, I. Phillip; Fawcett, Paul

    2013-01-01

    Several teacher models exist for using high-stakes testing outcomes to make continuous employment decisions for principals. These models are reviewed, and specific flaws are noted if these models are retrofitted for principals. To address these flaws, a different methodology is proposed on the basis of actual field data. Specially addressed are…

  3. Discrete and continuous dynamics modeling of a mass moving on a flexible structure

    NASA Technical Reports Server (NTRS)

    Herman, Deborah Ann

    1992-01-01

    A general discrete methodology for modeling the dynamics of a mass that moves on the surface of a flexible structure is developed. This problem was motivated by the Space Station/Mobile Transporter system. A model reduction approach is developed to make the methodology applicable to large structural systems. To validate the discrete methodology, continuous formulations are also developed. Three different systems are examined: (1) simply-supported beam, (2) free-free beam, and (3) free-free beam with two points of contact between the mass and the flexible beam. In addition to validating the methodology, parametric studies were performed to examine how the system's physical properties affect its dynamics.

  4. VIII. THE PAST, PRESENT, AND FUTURE OF DEVELOPMENTAL METHODOLOGY.

    PubMed

    Little, Todd D; Wang, Eugene W; Gorrall, Britt K

    2017-06-01

    This chapter selectively reviews the evolution of quantitative practices in the field of developmental methodology. The chapter begins with an overview of the past in developmental methodology, discussing the implementation and dissemination of latent variable modeling and, in particular, longitudinal structural equation modeling. It then turns to the present state of developmental methodology, highlighting current methodological advances in the field. Additionally, this section summarizes ample quantitative resources, ranging from key quantitative methods journal articles to the various quantitative methods training programs and institutes. The chapter concludes with the future of developmental methodology and puts forth seven future innovations in the field. The innovations discussed span the topics of measurement, modeling, temporal design, and planned missing data designs. Lastly, the chapter closes with a brief overview of advanced modeling techniques such as continuous time models, state space models, and the application of Bayesian estimation in the field of developmental methodology. © 2017 The Society for Research in Child Development, Inc.

  5. Evidence for the Continuous Latent Structure of Mania in the Epidemiologic Catchment Area from Multiple Latent Structure and Construct Validation Methodologies

    PubMed Central

    Prisciandaro, James J.; Roberts, John E.

    2011-01-01

    Background: Although psychiatric diagnostic systems have conceptualized mania as a discrete phenomenon, appropriate latent structure investigations testing this conceptualization are lacking. In contrast to these diagnostic systems, several influential theories of mania have suggested a continuous conceptualization. The present study examined whether mania has a continuous or discrete latent structure using a comprehensive approach including taxometric, information-theoretic latent distribution modeling (ITLDM), and predictive validity methodologies in the Epidemiologic Catchment Area (ECA) study. Methods: Eight dichotomous manic symptom items were submitted to a variety of latent structural analyses, including factor analyses, taxometric procedures, and ITLDM, in 10,105 ECA community participants. Additionally, a variety of continuous and discrete models of mania were compared in terms of their relative abilities to predict outcomes (i.e., health service utilization, internalizing and externalizing disorders, and suicidal behavior). Results: Taxometric and ITLDM analyses consistently supported a continuous conceptualization of mania. In ITLDM analyses, a continuous model of mania demonstrated 6.52:1 odds over the best-fitting latent class model of mania. Factor analyses suggested that the continuous structure of mania was best represented by a single latent factor. Predictive validity analyses demonstrated a consistently superior ability of continuous models of mania relative to discrete models. Conclusions: The present study provided three independent lines of support for a continuous conceptualization of mania. The implications of a continuous model of mania are discussed. PMID:20507671

  6. BSM2 Plant-Wide Model construction and comparative analysis with other methodologies for integrated modelling.

    PubMed

    Grau, P; Vanrolleghem, P; Ayesa, E

    2007-01-01

    In this paper, a new methodology for integrated modelling of the WWTP has been used for the construction of the Benchmark Simulation Model No. 2 (BSM2). The transformations approach proposed in this methodology does not require the development of specific transformers to interface unit process models and allows the construction of tailored models for a particular WWTP, guaranteeing mass and charge continuity for the whole model. The BSM2 PWM, constructed as a case study, is evaluated by means of simulations under different scenarios, and its validity in reproducing the water and sludge lines in a WWTP is demonstrated. Furthermore, the advantages that this methodology presents compared to other approaches for integrated modelling are verified in terms of flexibility and coherence.

  7. SU-F-T-350: Continuous Leaf Optimization (CLO) for IMRT Leaf Sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, T; Chen, M; Jiang, S

    Purpose: To study a new step-and-shoot IMRT leaf sequencing model that avoids the two main pitfalls of conventional leaf sequencing: (1) target fluence being stratified into a fixed number of discrete levels and/or (2) aperture leaf positions being restricted to a discrete set of locations. These assumptions induce error into the sequence or reduce the feasible region of potential plans, respectively. Methods: We develop a one-dimensional (single leaf pair) methodology that does not make assumptions (1) or (2) and that can be easily extended to a multi-row model. The proposed continuous leaf optimization (CLO) methodology takes in an existing set of apertures and associated intensities, or solution “seed,” and improves the plan without the restrictiveness of (1) or (2). It then uses a first-order descent algorithm to converge onto a locally optimal solution. A seed solution can come from models that assume (1) and (2), thus allowing the CLO model to improve upon existing leaf sequencing methodologies. Results: The CLO model was applied to 208 generated target fluence maps in one dimension. In all cases, for all tested sequencing strategies, the CLO model made improvements on the starting seed objective function. The CLO model also was able to keep MUs low. Conclusion: The CLO model can improve upon existing leaf sequencing methods by avoiding the restrictions of (1) and (2). By allowing for more flexible leaf positioning, error can be reduced when matching some target fluence. This study lays the foundation for future models and solution methodologies that can incorporate continuous leaf positions explicitly into the IMRT treatment planning model. Supported by Cancer Prevention & Research Institute of Texas (CPRIT) - ID RP150485.

  8. On the continuity of mean total normal stress in geometrical multiscale cardiovascular problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanco, Pablo J., E-mail: pjblanco@lncc.br; INCT-MACC, Instituto Nacional de Ciência e Tecnologia em Medicina Assistida por Computação Científica, Petrópolis; Deparis, Simone, E-mail: simone.deparis@epfl.ch

    2013-10-15

    In this work an iterative strategy to implicitly couple dimensionally-heterogeneous blood flow models accounting for the continuity of mean total normal stress at interface boundaries is developed. Conservation of mean total normal stress in the coupling of heterogeneous models is mandatory to satisfy energetic consistency between them. Nevertheless, existing methodologies are based on modifications of the Navier–Stokes variational formulation, which are undesired when dealing with fluid–structure interaction or black box codes. The proposed methodology makes it possible to couple one-dimensional and three-dimensional fluid–structure interaction models, enforcing the continuity of mean total normal stress while imposing just flow rate data or even the classical Neumann boundary data to the models. This is accomplished by modifying an existing iterative algorithm, which is also able to account for the continuity of the vessel area, when required. Comparisons are performed to assess differences in the convergence properties of the algorithms when considering the continuity of mean normal stress and the continuity of mean total normal stress for a wide range of flow regimes. Finally, examples in the physiological regime are shown to evaluate the importance, or not, of considering the continuity of mean total normal stress in hemodynamics simulations.

  9. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY

    PubMed Central

    Somogyi, Endre; Hagar, Amit; Glazier, James A.

    2017-01-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models. PMID:29282379

  10. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.

    PubMed

    Somogyi, Endre; Hagar, Amit; Glazier, James A

    2016-12-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.

  11. IMPROVEMENT OF EXPOSURE-DOSE MODELS: APPLICATION OF CONTINUOUS BREATH SAMPLING TO DETERMINE VOC DOSE AND BODY BURDEN

    EPA Science Inventory

    This is a continuation of an Internal Grant research project with the focus on completing the research due to initial funding delays and then analyzing and reporting the research results. This project will employ a new continuous breath sampling methodology to investigate dose a...

  12. Creation and implementation of an effective physician compensation methodology for a nonprofit medical foundation.

    PubMed

    Ferch, A W

    2000-01-01

    The foundation has determined that the adjusted gross billing methodology is a viable method to be considered for a nonprofit medical foundation in compensating physicians. The foundation continues to experiment with the margin formula and is exploring other potential formulas, but believes with certain modifications the percentage of adjusted gross billing methodology can be effective and useful because of its simplicity, ease of administration, and motivational effect on the physicians. The primary improvement to the model needed would be the ability to adjust the formula on a frequent basis for individual practice variations. Modifications will continue to be made as circumstances change, but the basic principles will remain constant.

  13. Using landscape topology to compare continuous metaheuristics: a framework and case study on EDAs and ridge structure.

    PubMed

    Morgan, R; Gallagher, M

    2012-01-01

    In this paper we extend a previously proposed randomized landscape generator in combination with a comparative experimental methodology to study the behavior of continuous metaheuristic optimization algorithms. In particular, we generate two-dimensional landscapes with parameterized, linear ridge structure, and perform pairwise comparisons of algorithms to gain insight into what kind of problems are easy and difficult for one algorithm instance relative to another. We apply this methodology to investigate the specific issue of explicit dependency modeling in simple continuous estimation of distribution algorithms. Experimental results reveal specific examples of landscapes (with certain identifiable features) where dependency modeling is useful, harmful, or has little impact on mean algorithm performance. Heat maps are used to compare algorithm performance over a large number of landscape instances and algorithm trials. Finally, we perform a meta-search in the landscape parameter space to find landscapes which maximize the performance between algorithms. The results are related to some previous intuition about the behavior of these algorithms, but at the same time lead to new insights into the relationship between dependency modeling in EDAs and the structure of the problem landscape. The landscape generator and overall methodology are quite general and extendable and can be used to examine specific features of other algorithms.
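
    For readers unfamiliar with parameterized ridge landscapes, the sketch below gives the general flavour of such a test function; the functional form and parameter values are invented for illustration and are not the authors' actual generator.

```python
import numpy as np

def ridge_landscape(x, y, angle=0.3, width=0.1, height=1.0):
    """Toy 2-D test function with a linear ridge of tunable
    orientation, width and height (illustrative only)."""
    # Signed distance of (x, y) from a ridge line through the origin.
    d = x * np.sin(angle) - y * np.cos(angle)
    # Gaussian ridge superimposed on a gentle quadratic bowl.
    return height * np.exp(-(d / width) ** 2) - 0.1 * (x ** 2 + y ** 2)

# Evaluate on a grid, e.g. to visualise or to benchmark two optimizers.
xx, yy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
zz = ridge_landscape(xx, yy)
print(zz.shape, float(zz.max()))
```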

  14. When Can Categorical Variables Be Treated as Continuous? A Comparison of Robust Continuous and Categorical SEM Estimation Methods under Suboptimal Conditions

    ERIC Educational Resources Information Center

    Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria

    2012-01-01

    A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…

  15. CFD-RANS prediction of individual exposure from continuous release of hazardous airborne materials in complex urban environments

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.; Berbekar, E.; Harms, F.; Leitl, B.

    2017-02-01

    One of the key issues of recent research on the dispersion inside complex urban environments is the ability to predict individual exposure (maximum dosages) of an airborne material which is released continuously from a point source. The present work addresses the question whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict individual exposure for various exposure times. This is feasible by providing the two RANS concentration moments (mean and variance) and a turbulent time scale to a deterministic model. The whole effort is focused on the prediction of individual exposure inside a complex real urban area. The capabilities of the proposed methodology are validated against wind-tunnel data (CUTE experiment). The present simulations were performed 'blindly', i.e. the modeller had limited information for the inlet boundary conditions and the results were kept unknown until the end of the COST Action ES1006. Thus, a high uncertainty of the results was expected. The general performance of the methodology due to this 'blind' strategy is good. The validation metrics fulfil the acceptance criteria. The effect of the grid and the turbulence model on the model performance is examined.

  16. Calibration methodology for proportional counters applied to yield measurements of a neutron burst.

    PubMed

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
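
    As an illustration of the charge-to-counts idea behind such a calibration, here is a simplified sketch with synthetic calibration data and a hypothetical burst charge; it is not the full statistical model developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pulse-mode calibration: measured charges of single neutron events
# (synthetic stand-in for real calibration data, arbitrary units).
single_event_charge = rng.gamma(shape=4.0, scale=0.25, size=5000)
q_mean = single_event_charge.mean()
q_std = single_event_charge.std(ddof=1)

# Burst measurement: only the total accumulated charge is available.
Q_burst = 1.37e4  # hypothetical integrated charge from the burst

# Point estimate of the number of detected events and a first-order
# uncertainty (compound-Poisson approximation).
n_hat = Q_burst / q_mean
n_err = np.sqrt(n_hat) * np.sqrt(1.0 + (q_std / q_mean) ** 2)
print(f"detected events ~ {n_hat:.0f} +/- {n_err:.0f}")
```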

  17. Probabilistic assessment methodology for continuous-type petroleum accumulations

    USGS Publications Warehouse

    Crovelli, R.A.

    2003-01-01

    The analytic resource assessment method, called ACCESS (Analytic Cell-based Continuous Energy Spreadsheet System), was developed to calculate estimates of petroleum resources for the geologic assessment model, called FORSPAN, in continuous-type petroleum accumulations. The ACCESS method is based upon mathematical equations derived from probability theory in the form of a computer spreadsheet system. © 2003 Elsevier B.V. All rights reserved.
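
    ACCESS itself is analytic, but a Monte Carlo version of the same cell-based aggregation conveys the idea; the sketch below uses entirely hypothetical inputs (cell count, sweet-spot probability, lognormal EUR parameters), not FORSPAN values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative cell-based Monte Carlo aggregation (hypothetical inputs).
n_cells = 5000                       # untested cells in the assessment unit
p_sweet = 0.3                        # chance a cell is productive
eur_median, eur_sigma = 0.08, 0.9    # assumed lognormal cell EUR (BCFG)

n_trials = 10_000
totals = np.empty(n_trials)
for t in range(n_trials):
    productive = rng.random(n_cells) < p_sweet
    eur = rng.lognormal(mean=np.log(eur_median), sigma=eur_sigma, size=n_cells)
    totals[t] = eur[productive].sum()

# Report the usual fractiles (F95, F50, F5) and the mean of the total resource.
f95, f50, f5 = np.percentile(totals, [5, 50, 95])
print(f"F95={f95:.0f}  F50={f50:.0f}  F5={f5:.0f}  mean={totals.mean():.0f} BCFG")
```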

  18. On the fit of models to covariances and methodology to the Bulletin.

    PubMed

    Bentler, P M

    1992-11-01

    It is noted that 7 of the 10 top-cited articles in the Psychological Bulletin deal with methodological topics. One of these is the Bentler-Bonett (1980) article on the assessment of fit in covariance structure models. Some context is provided on the popularity of this article. In addition, a citation study of methodology articles appearing in the Bulletin since 1978 was carried out. It verified that publications in design, evaluation, measurement, and statistics continue to be important to psychological research. Some thoughts are offered on the role of the journal in making developments in these areas more accessible to psychologists.

  19. Identifying Continuous Quality Improvement Priorities in Maternal, Infant, and Early Childhood Home Visiting.

    PubMed

    Preskitt, Julie; Fifolt, Matthew; Ginter, Peter M; Rucks, Andrew; Wingate, Martha S

    2016-01-01

    The purpose of this article was to describe a methodology to identify continuous quality improvement (CQI) priorities for one state's Maternal, Infant, and Early Childhood Home Visiting program from among the 40 required constructs associated with 6 program benchmarks. The authors discuss how the methodology provided consensus on system CQI quality measure priorities and describe variation among the 3 service delivery models used within the state. Q-sort methodology was used by home visiting (HV) service delivery providers (home visitors) to prioritize HV quality measures for the overall state HV system as well as their service delivery model. There was general consensus overall and among the service delivery models on CQI quality measure priorities, although some variation was observed. Measures associated with Maternal, Infant, and Early Childhood Home Visiting benchmark 1, Improved Maternal and Newborn Health, and benchmark 3, Improvement in School Readiness and Achievement, were the highest ranked. The Q-sort exercise allowed home visitors an opportunity to examine priorities within their service delivery model as well as for the overall First Teacher HV system. Participants engaged in meaningful discussions regarding how and why they selected specific quality measures and developed a greater awareness and understanding of a systems approach to HV within the state. The Q-sort methodology presented in this article can easily be replicated by other states to identify CQI priorities at the local and state levels and can be used effectively in states that use a single HV service delivery model or those that implement multiple evidence-based models for HV service delivery.

  20. Systematic iteration between model and methodology: A proposed approach to evaluating unintended consequences.

    PubMed

    Morell, Jonathan A

    2018-06-01

    This article argues that evaluators could better deal with unintended consequences if they improved their methods of systematically and methodically combining empirical data collection and model building over the life cycle of an evaluation. This process would be helpful because it can increase the timespan from when the need for a change in methodology is first suspected to the time when the new element of the methodology is operational. The article begins with an explanation of why logic models are so important in evaluation, and why the utility of models is limited if they are not continually revised based on empirical evaluation data. It sets the argument within the larger context of the value and limitations of models in the scientific enterprise. Following will be a discussion of various issues that are relevant to model development and revision. What is the relevance of complex system behavior for understanding predictable and unpredictable unintended consequences, and the methods needed to deal with them? How might understanding of unintended consequences be improved with an appreciation of generic patterns of change that are independent of any particular program or change effort? What are the social and organizational dynamics that make it rational and adaptive to design programs around single-outcome solutions to multi-dimensional problems? How does cognitive bias affect our ability to identify likely program outcomes? Why is it hard to discern change as a result of programs being embedded in multi-component, continually fluctuating, settings? The last part of the paper outlines a process for actualizing systematic iteration between model and methodology, and concludes with a set of research questions that speak to how the model/data process can be made efficient and effective. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A methodological approach for using high-level Petri Nets to model the immune system response.

    PubMed

    Pennisi, Marzio; Cavalieri, Salvatore; Motta, Santo; Pappalardo, Francesco

    2016-12-22

    Mathematical and computational models have proven to be very important support tools for understanding the immune system response against pathogens. Models and simulations have made it possible to study immune system behavior, to test biological hypotheses about diseases and infection dynamics, and to improve and optimize novel and existing drugs and vaccines. Continuous models, mainly based on differential equations, usually allow a qualitative study of the system but lack descriptive detail; conversely, discrete models, such as agent-based models and cellular automata, permit a detailed description of entity properties at the cost of losing most qualitative analyses. Petri Nets (PN) are a graphical modeling tool developed to model concurrency and synchronization in distributed systems. Their use has grown steadily, thanks also to the introduction over the years of many features and extensions that led to the birth of "high level" PN. We propose a novel methodological approach that is based on high-level PN, and in particular on Colored Petri Nets (CPN), that can be used to model the immune system response at the cellular scale. To demonstrate the potential of the approach we provide a simple model of the humoral immune system response that is able to reproduce some of the most complex well-known features of the adaptive response, such as memory and specificity. The methodology we present has the advantages of both classical approaches, continuous and discrete, since it allows a good level of granularity in the description of cell behavior without losing the possibility of a qualitative analysis. Furthermore, the presented methodology based on CPN allows the adoption of the same graphical modeling technique well known to life scientists who use PN for the modeling of signaling pathways. Finally, such an approach may open the floodgates to the realization of multi-scale models that integrate both signaling pathway (intracellular) models and cellular (population) models built upon the same technique and software.
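
    To make the Petri-net mechanics concrete, a minimal uncoloured token-firing sketch of one humoral-response step is given below; it is purely illustrative and omits the colours, time and hierarchy that make CPN suitable for the full methodology.

```python
import random

# Minimal (uncoloured) Petri-net sketch of one humoral-response step:
# B cell + antigen -> activated B cell -> plasma cell secreting antibody.
places = {"B": 50, "Ag": 200, "B_act": 0, "Plasma": 0, "Ab": 0}
transitions = [
    {"in": {"B": 1, "Ag": 1}, "out": {"B_act": 1}},           # activation
    {"in": {"B_act": 1},      "out": {"Plasma": 1}},           # differentiation
    {"in": {"Plasma": 1},     "out": {"Plasma": 1, "Ab": 5}},  # antibody secretion
]

def enabled(t):
    return all(places[p] >= n for p, n in t["in"].items())

random.seed(1)
for _ in range(500):
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    t = random.choice(ready)           # non-deterministic interleaving
    for p, n in t["in"].items():
        places[p] -= n
    for p, n in t["out"].items():
        places[p] += n

print(places)   # final marking (token counts per place)
```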

  2. Research and development activities in unified control-structure modeling and design

    NASA Technical Reports Server (NTRS)

    Nayak, A. P.

    1985-01-01

    Results of work sponsored by JPL and other organizations to develop a unified control/structures modeling and design capability for large space structures are presented. Recent analytical results are presented to demonstrate the significant interdependence between structural and control properties. A new design methodology is suggested in which the structure, material properties, dynamic model and control design are all optimized simultaneously. The development of a methodology for global design optimization is recommended as a long-term goal. It is suggested that this methodology should be incorporated into computer-aided engineering programs, which eventually will be supplemented by an expert system to aid design optimization. Recommendations are also presented for near-term research activities at JPL. The key recommendation is to continue the development of integrated dynamic modeling/control design techniques, with special attention given to the development of structural models specially tailored to support design.

  3. [Modeling continuous scaling of NDVI based on fractal theory].

    PubMed

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals across an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters among different sensors, such as geometric correction, spectral correction, etc. Using a single-sensor image, a fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists and can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. These results show that fractal analysis is an effective methodology for studying the scaling behavior of quantitative remote sensing.
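
    A minimal sketch of the up-scaling idea follows: block-aggregating a synthetic NDVI image and fitting a power law across scales. It illustrates the log-log fit behind a fractal scaling model, not the authors' actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fine-resolution NDVI scene (stand-in for an ETM+ subset).
ndvi = np.clip(0.5 + 0.2 * rng.standard_normal((512, 512)), -1, 1)

def upscale(a, k):
    """Aggregate an array by k x k block averaging (simple up-scaling)."""
    n = (a.shape[0] // k) * k
    b = a[:n, :n].reshape(n // k, k, n // k, k)
    return b.mean(axis=(1, 3))

# Track a scale-dependent statistic across coarser grids and fit a power
# law (a straight line in log-log space), the basic idea behind a
# fractal model of continuous scaling.
scales = [1, 2, 4, 8, 16, 32]
stat = [upscale(ndvi, k).std() for k in scales]
slope, intercept = np.polyfit(np.log(scales), np.log(stat), 1)
print("log-log slope (scaling exponent):", round(slope, 3))
```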

  4. Guiding principles of USGS methodology for assessment of undiscovered conventional oil and gas resources

    USGS Publications Warehouse

    Charpentier, R.R.; Klett, T.R.

    2005-01-01

    During the last 30 years, the methodology for assessment of undiscovered conventional oil and gas resources used by the U.S. Geological Survey has undergone considerable change. This evolution has been based on five major principles. First, the U.S. Geological Survey has responsibility for a wide range of U.S. and world assessments and requires a robust methodology suitable for immaturely explored as well as maturely explored areas. Second, the assessments should be based on as comprehensive a set of geological and exploration history data as possible. Third, the perils of methods that rely solely on statistical analysis without geological analysis are recognized. Fourth, the methodology and course of the assessment should be documented as transparently as possible, within the limits imposed by the inevitable use of subjective judgement. Fifth, the multiple uses of the assessments require a continuing effort to provide the documentation in such ways as to increase utility to the many types of users. Undiscovered conventional oil and gas resources are those recoverable volumes in undiscovered, discrete, conventional structural or stratigraphic traps. The USGS 2000 methodology for these resources is based on a framework of assessing numbers and sizes of undiscovered oil and gas accumulations and the associated risks. The input is standardized on a form termed the Seventh Approximation Data Form for Conventional Assessment Units. Volumes of resource are then calculated using a Monte Carlo program named Emc2, but an alternative analytic (non-Monte Carlo) program named ASSESS also can be used. The resource assessment methodology continues to change. Accumulation-size distributions are being examined to determine how sensitive the results are to size-distribution assumptions. The resource assessment output is changing to provide better applicability for economic analysis. The separate methodology for assessing continuous (unconventional) resources also has been evolving. Further studies of the relationship between geologic models of conventional and continuous resources will likely impact the respective resource assessment methodologies. © 2005 International Association for Mathematical Geology.

  5. Stochastic model for fatigue crack size and cost effective design decisions. [for aerospace structures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1975-01-01

    This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
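
    The continuous-time, discrete-crack-length assumption can be illustrated with a simple pure-birth (Gillespie-style) simulation; the transition rates below are invented for illustration and are not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Continuous-time Markov (pure birth) sketch of crack growth: the crack
# length lives on a discrete grid and jumps to the next state at a rate
# that grows with the current length (hypothetical rates).
a = np.arange(1.0, 20.0, 0.5)        # discrete crack lengths, mm
rate = 0.05 * a**1.5                 # transition rate out of each state, 1/flight-hour

def one_history():
    t, state = 0.0, 0
    times = [0.0]
    while state < len(a) - 1:
        t += rng.exponential(1.0 / rate[state])   # sojourn time in this state
        state += 1
        times.append(t)
    return np.array(times)

# Distribution of time to reach the critical length (last grid point).
t_crit = np.array([one_history()[-1] for _ in range(2000)])
print(f"mean time to critical length: {t_crit.mean():.0f} h "
      f"(5th pct {np.percentile(t_crit, 5):.0f} h)")
```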

  6. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes

    PubMed Central

    Zhang, Hong; Pei, Yun

    2016-01-01

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
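
    For context, the equivalent continuous sound level over an assessment period T, which the simulation framework above predicts, is conventionally defined as follows (standard acoustics definition, not quoted from the paper):

```latex
L_{eq,T} \;=\; 10\,\log_{10}\!\left(\frac{1}{T}\int_{0}^{T} 10^{\,L(t)/10}\,\mathrm{d}t\right)
```

    where L(t) is the instantaneous sound level in dB; in a discrete-event setting the integral becomes a sum over simulated activity intervals.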

  7. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    PubMed

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.

  8. Process-oriented Observational Metrics for CMIP6 Climate Model Assessments

    NASA Astrophysics Data System (ADS)

    Jiang, J. H.; Su, H.

    2016-12-01

    Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations and support CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and thus reduce uncertainties in climate models.

  9. Infrared Algorithm Development for Ocean Observations with EOS/MODIS

    NASA Technical Reports Server (NTRS)

    Brown, Otis B.

    1997-01-01

    Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.

  10. Thermal modeling of head disk interface system in heat assisted magnetic recording

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vemuri, Sesha Hari; Seung Chung, Pil; Jhon, Myung S., E-mail: mj3a@andrew.cmu.edu

    2014-05-07

    A thorough understanding of the temperature profiles introduced by heat assisted magnetic recording is required to maintain the hotspot at the desired location on the disk with minimal heat damage to other components. Here, we implement a transient mesoscale modeling methodology termed the lattice Boltzmann method (LBM) for phonons (which are primary carriers of energy) in the thermal modeling of the head disk interface (HDI) components, namely, the carbon overcoat (COC). The LBM can provide more accurate results than the conventional Fourier methodology by capturing nanoscale phenomena due to ballistic heat transfer. We examine the in-plane and out-of-plane heat transfer in the COC by analyzing the temperature profiles with a continuously focused and a pulsed laser beam on a moving disk. Larger in-plane hotspot widening is observed with a continuously focused laser beam than with a pulsed laser. A pulsed laser develops steeper surface temperature gradients compared to a continuous hotspot. Furthermore, out-of-plane heat transfer from the COC to the media is enhanced with a continuous laser beam relative to a pulsed laser, while the temperature takes around 140 fs to reach the bottom surface of the COC. Our study can lead to a realistic thermal model describing novel HDI material design criteria for the next generation of hard disk drives with ultra-high recording densities.

  11. Models for Effective Service Delivery in Special Education Programs

    ERIC Educational Resources Information Center

    Epler, Pam; Ross, Rorie

    2015-01-01

    Educators today are challenged with the task of designing curricula and standards for students of varying abilities. While technology and innovation steadily improve classroom learning, teachers and administrators continue to struggle in developing the best methodologies and practices for students with disabilities. "Models for Effective…

  12. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; it automatically updates/calibrates system models using the latest streaming sensor data; it creates device-specific models that capture the exact behavior of devices of the same type; it adapts to evolving systems; and it can reduce computational complexity (faster simulations).

  13. Methodologies, Models and Algorithms for Patients Rehabilitation.

    PubMed

    Fardoun, H M; Mashat, A S

    2016-01-01

    This editorial is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective of this focus theme is to present current solutions by means of technologies and human factors related to the use of Information and Communication Technologies (ICT) for improving patient rehabilitation. The focus theme examines distinctive measurements of strengthening methodologies, models and algorithms for disabled people in terms of rehabilitation and health care, and to explore the extent to which ICT is a useful tool in this process. The focus theme records a set of solutions for ICT systems developed to improve the rehabilitation process of disabled people and to help them in carrying out their daily life. The development and subsequent setting up of computers for the patients' rehabilitation process is of continuous interest and growth.

  14. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias

    With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves the forecasting accuracy by up to 30%.
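
    A compact sketch of the two-layer idea using scikit-learn's stacking interface follows; the base learners, blender and synthetic features are illustrative choices and do not reproduce the paper's deep feature selection step.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                 # lagged wind speed, pressure, etc. (synthetic)
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + 0.3 * rng.normal(size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# First layer: several base learners; second layer: a linear blender.
ensemble = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("svr", SVR(C=10.0)),
    ],
    final_estimator=Ridge(alpha=1.0),
)
ensemble.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(ensemble.score(X_te, y_te), 3))
```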

  15. Formulation of a correlated variables methodology for assessment of continuous gas resources with an application to the Woodford play, Arkoma Basin, eastern Oklahoma

    USGS Publications Warehouse

    Olea, R.A.; Houseknecht, D.W.; Garrity, C.P.; Cook, T.A.

    2011-01-01

    Shale gas is a form of continuous unconventional hydrocarbon accumulation whose resource estimation is unfeasible through the inference of pore volume. Under these circumstances, the usual approach is to base the assessment on well productivity through estimated ultimate recovery (EUR). Unconventional resource assessments that consider uncertainty are typically done by applying analytical procedures based on classical statistics theory that ignores geographical location, does not take into account spatial correlation, and assumes independence of EUR from other variables that may enter into the modeling. We formulate a new, more comprehensive approach based on sequential simulation to test methodologies known to be capable of more fully utilizing the data and overcoming unrealistic simplifications. Theoretical requirements demand modeling of EUR as areal density instead of well EUR. The new experimental methodology is illustrated by evaluating a gas play in the Woodford Shale in the Arkoma Basin of Oklahoma. Differently from previous assessments, we used net thickness and vitrinite reflectance as secondary variables correlated to cell EUR. In addition to the traditional probability distribution for undiscovered resources, the new methodology provides maps of EUR density and maps with probabilities to reach any given cell EUR, which are useful to visualize geographical variations in prospectivity.

  16. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background: In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods: We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into a derivation (60%) and a validation (40%) sample. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results: The three-category proposal for the respiratory rate was ≤20; (20,24]; >24, for which the following values were obtained: AIC = 314.5 and AUC = 0.638. The respective values for the continuous predictor were AIC = 317.1 and AUC = 0.634, with no statistically significant differences being found between the two AUCs (p = 0.079). The four-category proposal for PCO2 was ≤43; (43,52]; (52,65]; >65, for which the following values were obtained: AIC = 258.1 and AUC = 0.81. No statistically significant differences were found between the AUC of the four-category option and that of the continuous predictor, which yielded an AIC of 250.3 and an AUC of 0.825 (p = 0.115). Conclusions: Our proposed method provides clinicians with the number and location of cut points for categorising variables, and performs as successfully as the original continuous predictor when it comes to developing clinical prediction rules. PMID:23802742
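
    Below is a rough sketch of the categorisation idea. It assumes the third-party pygam package, uses synthetic data, and the ±20% band around the average risk is an arbitrary stand-in for the rule derived in the paper.

```python
import numpy as np
from pygam import LogisticGAM, s   # assumes pygam is installed

rng = np.random.default_rng(1)
resp_rate = rng.uniform(10, 40, 800)                      # e.g. respiratory rate (synthetic)
p_true = 1 / (1 + np.exp(-0.25 * (resp_rate - 24)))       # risk rises with the predictor
y = rng.binomial(1, p_true)

# Fit a logistic GAM with a smooth term on the single predictor.
gam = LogisticGAM(s(0)).fit(resp_rate.reshape(-1, 1), y)

grid = gam.generate_X_grid(term=0)
risk = gam.predict_proba(grid)
mean_risk = y.mean()
low, high = 0.8 * mean_risk, 1.2 * mean_risk              # arbitrary "average-risk" band

# First grid values where the smooth risk enters / leaves the average band
# define low / average / high categories.
cut_lo = grid[np.argmax(risk > low), 0]
cut_hi = grid[np.argmax(risk > high), 0]
print(f"suggested categories: <= {cut_lo:.1f}, ({cut_lo:.1f}, {cut_hi:.1f}], > {cut_hi:.1f}")
```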

  17. A Note on the Specification of Error Structures in Latent Interaction Models

    ERIC Educational Resources Information Center

    Mao, Xiulin; Harring, Jeffrey R.; Hancock, Gregory R.

    2015-01-01

    Latent interaction models have motivated a great deal of methodological research, mainly in the area of estimating such models. Product-indicator methods have been shown to be competitive with other methods of estimation in terms of parameter bias and standard error accuracy, and their continued popularity in empirical studies is due, in part, to…

  18. Maximum Likelihood Item Easiness Models for Test Theory without an Answer Key

    ERIC Educational Resources Information Center

    France, Stephen L.; Batchelder, William H.

    2015-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce…

  19. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where on-average 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics on chromatin-like environment.

  20. Adaptive Capacity in the Pacific Region: A Study of Continuous Professional Development for In-Service Teachers in Kiribati

    ERIC Educational Resources Information Center

    Martin, Tess; Thomson, Ian

    2018-01-01

    This study of I-Kiribati secondary school teachers used a project-based approach to investigate the notions of school-based and collaborative learning as a suitable model for in-service teacher continuous professional development (CPD). The design and methodology adopted by the study framed the argument that since collaborative behavior is…

  1. Continuing Chapter 1's Leadership in Modeling Best Practices in Evaluation. A Symposium Presentation.

    ERIC Educational Resources Information Center

    Ligon, Glynn

    This paper examines whether the Title I/Chapter 1 tradition of leading the way in educational evaluation will continue or whether Chapter 1 will change its role by delegating decision-making authority over evaluation methodology to state and local school systems. Whatever direction Chapter 1 takes, states, school systems, and schools must be held…

  2. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.
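
    One common way to estimate a continuous distribution of time constants is to fit a non-negative mixture of exponentials on a fixed grid; the sketch below uses synthetic data and plain non-negative least squares, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)                      # s, sampling times after the pressure step

# "True" lung: two compartments with time constants 0.5 s and 3 s (synthetic).
signal = 1.0 - (0.6 * np.exp(-t / 0.5) + 0.4 * np.exp(-t / 3.0))
signal += 0.01 * rng.standard_normal(t.size)     # measurement noise

# Candidate time constants (log-spaced grid) and the design matrix of
# saturating exponentials 1 - exp(-t/tau).
taus = np.logspace(-1, 1, 40)
A = 1.0 - np.exp(-t[:, None] / taus[None, :])

weights, residual = nnls(A, signal)              # non-negative distribution of TCs
top = np.argsort(weights)[::-1][:3]
print("dominant time constants (s):", np.round(taus[top], 2), "residual:", round(residual, 3))
```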

  3. Identification of nonlinear normal modes of engineering structures under broadband forcing

    NASA Astrophysics Data System (ADS)

    Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.

    2016-06-01

    The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.

  4. A PDE-based methodology for modeling, parameter estimation and feedback control in structural and structural acoustic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.; Smith, Ralph C.; Wang, Yun

    1994-01-01

    A problem of continued interest concerns the control of vibrations in a flexible structure and the related problem of reducing structure-borne noise in structural acoustic systems. In both cases, piezoceramic patches bonded to the structures have been successfully used as control actuators. Through the application of a controlling voltage, the patches can be used to reduce structural vibrations which in turn lead to methods for reducing structure-borne noise. A PDE-based methodology for modeling, estimating physical parameters, and implementing a feedback control scheme for problems of this type is discussed. While the illustrating example is a circular plate, the methodology is sufficiently general so as to be applicable in a variety of structural and structural acoustic systems.

  5. Ecological monitoring in a discrete-time prey-predator model.

    PubMed

    Gámez, M; López, I; Rodríguez, C; Varga, Z; Garay, J

    2017-09-21

    The paper is aimed at the methodological development of ecological monitoring in discrete-time dynamic models. In earlier papers, in the framework of continuous-time models, we have shown how a systems-theoretical methodology can be applied to the monitoring of the state process of a system of interacting populations, also estimating certain abiotic environmental changes such as pollution, climatic or seasonal changes. In practice, however, there may be good reasons to use discrete-time models. (For instance, there may be discrete cycles in the development of the populations, or observations can be made only at discrete time steps.) Therefore the present paper is devoted to the development of the monitoring methodology in the framework of discrete-time models of population ecology. By monitoring we mean that, observing only certain component(s) of the system, we reconstruct the whole state process. This may be necessary, e.g., when in a complex ecosystem the observation of the densities of certain species is impossible, or too expensive. For the first presentation of the offered methodology, we have chosen a discrete-time version of the classical Lotka-Volterra prey-predator model. This is a minimal but not trivial system where the methodology can still be presented. We also show how this methodology can be applied to estimate the effect of an abiotic environmental change, using a component of the population system as an environmental indicator. Although this approach is illustrated in a simplest possible case, it can be easily extended to larger ecosystems with several interacting populations and different types of abiotic environmental effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
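
    For concreteness, here is a minimal discrete-time prey-predator map of the kind the monitoring methodology targets; the parameter values are hypothetical, and the observer/state-reconstruction step itself is not shown.

```python
import numpy as np

# Hypothetical parameters: prey growth, predation, conversion, predator mortality.
r, a, b, d = 1.8, 0.02, 0.01, 0.9

def step(x, y):
    x_next = x * np.exp(r * (1 - x / 100.0) - a * y)   # prey (Ricker-type growth + predation)
    y_next = y * np.exp(b * x - d)                     # predator
    return x_next, y_next

x, y = 40.0, 9.0
trajectory = [(x, y)]
for _ in range(200):
    x, y = step(x, y)
    trajectory.append((x, y))

# In the monitoring problem, only one component (say the prey density)
# would be observed and the full trajectory reconstructed from it.
print("final state:", np.round(trajectory[-1], 2))
```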

  6. Continuous state-space representation of a bucket-type rainfall-runoff model: a case study with the GR4 model using state-space GR4 (version 1.0)

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2018-04-01

    In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
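    A minimal sketch of the central ingredient is given below: a Nash cascade written as ordinary differential equations and advanced with an implicit (backward Euler) step, replacing a unit hydrograph by a chain of linear reservoirs. The number of reservoirs, rate constant and time step are assumptions, not the calibrated GR4J values.

```python
# Nash cascade dS_i/dt = q_{i-1} - k*S_i integrated with backward Euler.
import numpy as np

def nash_cascade_backward_euler(inflow, k=0.3, n_res=3, dt=1.0):
    """Route `inflow` (mm per step) through n_res linear reservoirs."""
    S = np.zeros(n_res)
    outflow = np.empty(len(inflow))
    for t, u in enumerate(inflow):
        q_in = u                                 # q_0 = forcing at t+dt
        for i in range(n_res):
            # implicit update: S_new = S + dt*(q_in - k*S_new)
            S[i] = (S[i] + dt * q_in) / (1.0 + k * dt)
            q_in = k * S[i]                      # outflow feeds the next reservoir
        outflow[t] = q_in
    return outflow

print(nash_cascade_backward_euler(np.r_[10.0, np.zeros(9)]).round(3))
```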

  7. Life cycle cost modeling of conceptual space vehicles

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles

    1993-01-01

    This paper documents progress to date by the University of Dayton on the development of a life cycle cost model for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years in which a reliability and maintainability model was developed to the initial development of a life cycle cost model. Cost categories are initially patterned after NASA's three axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. The focus will be on operations and maintenance costs and other recurring costs. Secondary tasks performed concurrent with the development of the life cycle costing model include continual support and upgrade of the R&M model. The primary result of the completed research will be a methodology and a computer implementation of the methodology to provide for timely cost analysis in support of the conceptual design activities. The major objectives of this research are: to obtain and to develop improved methods for estimating manpower, spares, software and hardware costs, facilities costs, and other cost categories as identified by NASA personnel; to construct a life cycle cost model of a space transportation system for budget exercises and performance-cost trade-off analysis during the conceptual and development stages; to continue to support modifications and enhancements to the R&M model; and to continue to assist in the development of a simulation model to provide an integrated view of the operations and support of the proposed system.

  8. Expanding the Parameters for Excellence in Patient Assignments: Is Leveraging an Evidence-Data-Based Acuity Methodology Realistic?

    PubMed

    Gray, Joel; Kerfoot, Karlene

    2016-01-01

    Finding the balance of equitable assignments continues to be a challenge for health care organizations seeking to leverage evidence-based leadership practices. Ratios and subjective acuity strategies for nurse-patient staffing continue to be the dominant approach in health care organizations. In addition to ratio-based assignments and acuity-based assignment models driven by financial targets, more emphasis on using evidence-based leadership strategies to manage and create science for effective staffing is needed. In particular, nurse leaders are challenged to increase the sophistication of management of patient turnover (admissions, discharges, and transfers) and integrate tools from Lean methodologies and quality management strategies to determine the effectiveness of nurse-patient staffing.

  9. USGS Methodology for Assessing Continuous Petroleum Resources

    USGS Publications Warehouse

    Charpentier, Ronald R.; Cook, Troy A.

    2011-01-01

    The U.S. Geological Survey (USGS) has developed a new quantitative methodology for assessing resources in continuous (unconventional) petroleum deposits. Continuous petroleum resources include shale gas, coalbed gas, and other oil and gas deposits in low-permeability ("tight") reservoirs. The methodology is based on an approach combining geologic understanding with well productivities. The methodology is probabilistic, with both input and output variables as probability distributions, and uses Monte Carlo simulation to calculate the estimates. The new methodology is an improvement of previous USGS methodologies in that it better accommodates the uncertainties in undrilled or minimally drilled deposits that must be assessed using analogs. The publication is a collection of PowerPoint slides with accompanying comments.
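    The snippet below conveys the probabilistic flavour of such an assessment: an uncertain number of untested cells, an uncertain success fraction and an uncertain mean per-cell recovery are combined by Monte Carlo into a distribution of total resource. The distributions and values are illustrative assumptions, not USGS inputs or the published methodology itself.

```python
# Hedged Monte Carlo sketch of a continuous-resource aggregation.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000
n_cells = rng.triangular(2_000, 5_000, 9_000, n_sims)               # untested cells
mean_eur_bcf = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n_sims)  # mean EUR per cell (BCF)
success = rng.uniform(0.6, 0.9, n_sims)                              # fraction of productive cells
total = n_cells * success * mean_eur_bcf
print("F95 / F50 / F5 (BCF):", np.percentile(total, [5, 50, 95]).round(0))
```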

  10. Reporting and Methodology of Multivariable Analyses in Prognostic Observational Studies Published in 4 Anesthesiology Journals: A Methodological Descriptive Review.

    PubMed

    Guglielminotti, Jean; Dechartres, Agnès; Mentré, France; Montravers, Philippe; Longrois, Dan; Laouénan, Cedric

    2015-10-01

    Prognostic research studies in anesthesiology aim to identify risk factors for an outcome (explanatory studies) or calculate the risk of this outcome on the basis of patients' risk factors (predictive studies). Multivariable models express the relationship between predictors and an outcome and are used in both explanatory and predictive studies. Model development demands a strict methodology and clear reporting to assess its reliability. In this methodological descriptive review, we critically assessed the reporting and methodology of multivariable analysis used in observational prognostic studies published in anesthesiology journals. A systematic search was conducted on Medline through Web of Knowledge, PubMed, and journal websites to identify observational prognostic studies with multivariable analysis published in Anesthesiology, Anesthesia & Analgesia, British Journal of Anaesthesia, and Anaesthesia in 2010 and 2011. Data were extracted by 2 independent readers. First, studies were analyzed with respect to reporting of outcomes, design, size, methods of analysis, model performance (discrimination and calibration), model validation, clinical usefulness, and the STROBE (i.e., Strengthening the Reporting of Observational Studies in Epidemiology) checklist. A reporting rate was calculated on the basis of 21 items covering the aforementioned points. Second, they were analyzed with respect to some predefined methodological points. Eighty-six studies were included: 87.2% were explanatory and 80.2% investigated a postoperative event. The reporting was fairly good, with a median reporting rate of 79% (75% in explanatory studies and 100% in predictive studies). Six items had a reporting rate <36% (i.e., the 25th percentile), with some of them not identified in the STROBE checklist: blinded evaluation of the outcome (11.9%), reason for sample size (15.1%), handling of missing data (36.0%), assessment of collinearity (17.4%), assessment of interactions (13.9%), and calibration (34.9%). When reported, a few methodological shortcomings were observed, both in explanatory and predictive studies, such as an insufficient number of events of the outcome (44.6%), exclusion of cases with missing data (93.6%), or categorization of continuous variables (65.1%). The reporting of multivariable analysis was fairly good and could be further improved by consulting reporting guidelines and the EQUATOR Network website. Limiting the number of candidate variables, including cases with missing data, and not arbitrarily categorizing continuous variables should be encouraged.

  11. Grand Rounds: A Method for Improving Student Learning and Client Care Continuity in a Student-Run Physical Therapy Pro Bono Clinic

    ERIC Educational Resources Information Center

    Black, Jill D.; Bauer, Kyle N.; Spano, Georgia E.; Voelkel, Sarah A.; Palombaro, Kerstin M.

    2017-01-01

    Background and Purpose: Grand Rounds is a teaching methodology that has existed in various forms in medical education for centuries. When a student-run pro bono clinic identified a growing challenge of providing continuity of care for clients and a lack of preparedness in students, they implemented a Grand Rounds model of case presentation within…

  12. Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling

    PubMed Central

    Eckert, Paulo Roberto; Flores Filho, Aly Ferreira; Perondi, Eduardo; Ferri, Jeferson; Goltz, Evandro

    2016-01-01

    This paper proposes a design methodology for linear actuators, considering thermal and electromagnetic coupling with geometrical and temperature constraints, that maximizes force density and minimizes force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is maintained under or equal to its maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving-coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the thermal-electromagnetic coupled model. This comparison shows that the final designs for both cases differ significantly, especially regarding its active volume and its electrical and magnetic loading. Although in this paper the methodology was employed to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator. PMID:26978370

  13. Design Methodology of a Dual-Halbach Array Linear Actuator with Thermal-Electromagnetic Coupling.

    PubMed

    Eckert, Paulo Roberto; Flores Filho, Aly Ferreira; Perondi, Eduardo; Ferri, Jeferson; Goltz, Evandro

    2016-03-11

    This paper proposes a design methodology for linear actuators, considering thermal and electromagnetic coupling with geometrical and temperature constraints, that maximizes force density and minimizes force ripple. The method allows defining an actuator for given specifications in a step-by-step way so that requirements are met and the temperature within the device is maintained under or equal to its maximum allowed for continuous operation. According to the proposed method, the electromagnetic and thermal models are built with quasi-static parametric finite element models. The methodology was successfully applied to the design of a linear cylindrical actuator with a dual quasi-Halbach array of permanent magnets and a moving-coil. The actuator can produce an axial force of 120 N and a stroke of 80 mm. The paper also presents a comparative analysis between results obtained considering only an electromagnetic model and the thermal-electromagnetic coupled model. This comparison shows that the final designs for both cases differ significantly, especially regarding its active volume and its electrical and magnetic loading. Although in this paper the methodology was employed to design a specific actuator, its structure can be used to design a wide range of linear devices if the parametric models are adjusted for each particular actuator.

  14. Motion and Stability of Saturated Soil Systems under Dynamic Loading.

    DTIC Science & Technology

    1985-04-04

    ...theoretical/computational models. The continuing research effort will extend and refine the theoretical models, allow for compressibility of soil as...motion of soil and water and, therefore, a correct theory of liquefaction should not include this assumption. Finite element methodologies have been

  15. Building Coherent Validation Arguments for the Measurement of Latent Constructs with Unified Statistical Frameworks

    ERIC Educational Resources Information Center

    Rupp, Andre A.

    2012-01-01

    In the focus article of this issue, von Davier, Naemi, and Roberts essentially coupled: (1) a short methodological review of structural similarities of latent variable models with discrete and continuous latent variables; and (2) 2 short empirical case studies that show how these models can be applied to real, rather than simulated, large-scale…

  16. Continuous electrocoagulation of cheese whey wastewater: an application of Response Surface Methodology.

    PubMed

    Tezcan Un, Umran; Kandemir, Ayse; Erginel, Nihal; Ocal, S Eren

    2014-12-15

    In this study, treatment of cheese whey wastewater was performed using a uniquely designed continuous electrocoagulation reactor, not previously encountered in the literature. An iron horizontal rotating screw-type anode was used in continuous mode. An empirical model in terms of the effective operational factors, namely current density (40, 50, 60 mA/cm²), pH (3, 5, 7) and retention time (20, 40, 60 min), was developed through Response Surface Methodology. An optimal region characterized by low values of Chemical Oxygen Demand (COD) was determined. The experiments showed that current density and retention time had a linear effect on COD removal efficiency, while the initial pH of the wastewater had a quadratic effect. The best-fit nonlinear mathematical model, with a coefficient of determination (R²) of 85%, was defined. An initial COD concentration of 15,500 mg/L was reduced to 2112 mg/L, a removal efficiency of 86.4%. In conclusion, electrocoagulation was successfully applied for the treatment of cheese whey wastewater. Copyright © 2014 Elsevier Ltd. All rights reserved.
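    A minimal sketch of the response-surface step is shown below: a second-order polynomial in current density, pH and retention time fitted by least squares. The design points mirror the levels quoted above, but the "measured" removals, coefficients and noise are synthetic assumptions for illustration only.

```python
# Fit a quadratic response surface y = f(current density, pH, retention time).
import numpy as np
import itertools

cd, ph, rt = np.meshgrid([40, 50, 60], [3, 5, 7], [20, 40, 60], indexing="ij")
X = np.column_stack([v.ravel() for v in (cd, ph, rt)])
# synthetic "measured" COD removal (%) with a quadratic pH effect plus noise
y = 40 + 0.5*X[:, 0] + 0.4*X[:, 2] - 1.5*(X[:, 1] - 5)**2 \
    + np.random.default_rng(0).normal(0, 2, len(X))

def design_matrix(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]                                         # linear
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(3), 2)]  # interactions
    cols += [X[:, i]**2 for i in range(3)]                                      # quadratic
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted coefficients:", beta.round(3))
```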

  17. RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.

    PubMed

    Stránský, V; Thinová, L

    2017-11-01

    In 2010, continuous radon measurement was established at Mladeč Caves in the Czech Republic using the continuous radon monitor RADIM3A. In order to model the radon time series for the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal, integrated, autoregressive moving-average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the time-series seasonality, previously acquired values and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurement period. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
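    The sketch below shows the shape of such a SARIMAX fit with atmospheric covariates in statsmodels. The synthetic series, exogenous columns and seasonal period of 24 (hourly data with a daily cycle) are assumptions; they are not the Mladeč data or the published regARIMA(5,1,3) parameters.

```python
# Hedged SARIMAX sketch with exogenous atmospheric covariates.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n = 500
exog = pd.DataFrame({"temperature": rng.normal(10, 5, n),
                     "pressure": rng.normal(1013, 8, n)})
radon = 800 + 2.0*exog["temperature"] - 0.5*(exog["pressure"] - 1013) + rng.normal(0, 30, n)

model = SARIMAX(radon, exog=exog, order=(5, 1, 3),
                seasonal_order=(1, 0, 1, 24))          # assumed daily cycle, hourly data
fit = model.fit(disp=False)
# exog must be supplied over the forecast horizon; reuse the last day as a stand-in
forecast = fit.forecast(steps=24, exog=exog.tail(24))
print(round(fit.aic, 1), forecast.iloc[:3].round(1).tolist())
```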

  18. New geometric design consistency model based on operating speed profiles for road safety evaluation.

    PubMed

    Camacho-Torregrosa, Francisco J; Pérez-Zuriaga, Ana M; Campoy-Ungría, J Manuel; García-García, Alfredo

    2013-12-01

    To assist in the on-going effort to reduce road fatalities as much as possible, this paper presents a new methodology to evaluate road safety in both the design and redesign stages of two-lane rural highways. This methodology is based on the analysis of road geometric design consistency, a value which will be a surrogate measure of the safety level of the two-lane rural road segment. The consistency model presented in this paper is based on the consideration of continuous operating speed profiles. The models used for their construction were obtained by using an innovative GPS-data collection method that is based on continuous operating speed profiles recorded from individual drivers. This new methodology allowed the researchers to observe the actual behavior of drivers and to develop more accurate operating speed models than was previously possible with spot-speed data collection, thereby enabling a more accurate approximation to the real phenomenon and thus a better consistency measurement. Operating speed profiles were built for 33 Spanish two-lane rural road segments, and several consistency measurements based on the global and local operating speed were checked. The final consistency model takes into account not only the global dispersion of the operating speed, but also some indexes that consider both local speed decelerations and speeds over posted speeds as well. For the development of the consistency model, the crash frequency for each study site was considered, which allowed estimating the number of crashes on a road segment by means of the calculation of its geometric design consistency. Consequently, the presented consistency evaluation method is a promising innovative tool that can be used as a surrogate measure to estimate the safety of a road segment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper, a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise applied to the system. It initially identifies critical degradation parameters and their associated value range. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then synergistically used with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data is not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify the effectiveness of the methodology, it was benchmarked against a data-based simple linear regression model used for prediction, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
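    A minimal sketch of the two ingredients named above follows: triangular fuzzy sets encoding an expert's linguistic view of a degradation parameter, and a linear-regression extrapolation to a failure level to obtain an RUL. The membership functions, failure threshold and data are hypothetical; the full method combines these pieces more tightly than this sketch does.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

t = np.arange(0.0, 10.0)                       # inspection times (arbitrary units)
wear = 0.1 + 0.08 * t + np.random.default_rng(3).normal(0, 0.01, t.size)

# expert's linguistic description of the wear parameter
low    = triangular(wear, 0.0, 0.1, 0.4)
medium = triangular(wear, 0.2, 0.5, 0.8)
high   = triangular(wear, 0.6, 0.9, 1.2)
print("membership of last reading (low/med/high):",
      low[-1].round(2), medium[-1].round(2), high[-1].round(2))

slope, intercept = np.polyfit(t, wear, 1)      # linear degradation trend
failure_level = 0.9                            # wear at which "high" peaks (assumed)
t_fail = (failure_level - intercept) / slope
print(f"estimated RUL from last inspection: {t_fail - t[-1]:.1f} time units")
```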

  20. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
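    One common way to perform the pairing analysis mentioned above is the relative gain array (RGA), sketched below for an assumed 2x2 steady-state gain matrix; the numbers are placeholders, not the converter model identified in the paper.

```python
# Relative gain array: Lambda = G .* (G^-1)^T (element-wise product).
import numpy as np

G = np.array([[1.8, 0.4],        # duty-cycle inputs -> two output voltages
              [0.3, 1.5]])       # assumed steady-state gains
rga = G * np.linalg.inv(G).T
print(rga.round(3))              # diagonal entries near 1 favour diagonal pairing
```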

  1. Modelling Continuing Load at Disaggregated Levels

    ERIC Educational Resources Information Center

    Seidel, Ewa

    2014-01-01

    The current methodology of estimating load in the following year at Flinders University has achieved reasonable accuracy in the previous capped funding environment, particularly at the university level, due largely to our university having stable intakes and student profiles. While historically within reasonable limits, variation in estimates at…

  2. An integrated quality function deployment and capital budgeting methodology for occupational safety and health as a systems thinking approach: the case of the construction industry.

    PubMed

    Bas, Esra

    2014-07-01

    In this paper, an integrated methodology for Quality Function Deployment (QFD) and a 0-1 knapsack model is proposed for occupational safety and health as a systems thinking approach. The House of Quality (HoQ) in QFD methodology is a systematic tool to consider the inter-relationships between two factors. In this paper, three HoQs are used to consider the interrelationships between tasks and hazards, hazards and events, and events and preventive/protective measures. The final priority weights of events are defined by considering their project-specific preliminary weights, probability of occurrence, and effects on the victim and the company. The priority weights of the preventive/protective measures obtained in the last HoQ are fed into a 0-1 knapsack model for the investment decision. Then, the selected preventive/protective measures can be adapted to the task design. The proposed step-by-step methodology can be applied to any stage of a project to design the workplace for occupational safety and health, and continuous improvement for safety is endorsed by the closed loop characteristic of the integrated methodology. Copyright © 2013 Elsevier Ltd. All rights reserved.
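    The final selection step can be pictured as the classic 0-1 knapsack dynamic program below: choose the preventive/protective measures that maximize total HoQ priority weight within a budget. The weights, costs and budget are hypothetical values, not those of a real project.

```python
def knapsack(values, costs, budget):
    """Classic 0-1 knapsack DP; returns (best total value, chosen indices)."""
    n = len(values)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if costs[i - 1] <= b:
                cand = best[i - 1][b - costs[i - 1]] + values[i - 1]
                if cand > best[i][b]:
                    best[i][b] = cand
    chosen, b = [], budget
    for i in range(n, 0, -1):                  # backtrack to recover the selection
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

priorities = [0.32, 0.25, 0.18, 0.15, 0.10]    # hypothetical weights from the last HoQ
costs      = [40, 25, 20, 30, 10]              # hypothetical measure costs (k$)
print(knapsack(priorities, costs, budget=70))
```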

  3. Making Sense of the Meaning Literature: An Integrative Review of Meaning Making and Its Effects on Adjustment to Stressful Life Events

    ERIC Educational Resources Information Center

    Park, Crystal L.

    2010-01-01

    Interest in meaning and meaning making in the context of stressful life events continues to grow, but research is hampered by conceptual and methodological limitations. Drawing on current theories, the author first presents an integrated model of meaning making. This model distinguishes between the constructs of global and situational meaning and…

  4. Estimation of radionuclide (137Cs) emission rates from a nuclear power plant accident using the Lagrangian Particle Dispersion Model (LPDM).

    PubMed

    Park, Soon-Ung; Lee, In-Hye; Ju, Jae-Won; Joo, Seung Jin

    2016-10-01

    A methodology for estimating the 137Cs emission rate with the Lagrangian Particle Dispersion Model (LPDM), using monitored 137Cs concentrations around a nuclear power plant, has been developed. The method was employed with the MM5 meteorological model in a 600 km × 600 km model domain with a horizontal grid scale of 3 km × 3 km centered at the Fukushima nuclear power plant to estimate the 137Cs emission rate for the accident period from 00 UTC 12 March to 00 UTC 6 April 2011. Lagrangian particles are released continuously at a rate of one particle per minute at the first model level, about 15 m above the power plant site. The method reproduced the estimated 137Cs emission rate quite reasonably compared with other studies, suggesting its potential usefulness for estimating the emission rate from the damaged power plant without detailed inventories of reactors, fuel assemblies and spent fuels. Its advantage is that it is not complicated and can be applied on the basis of a single forward LPDM simulation with monitored concentrations around the power plant, in contrast to other inverse models. It was also found that continuously monitored radionuclide concentrations from as many sites as possible, located in all directions around the power plant, are required to obtain accurate continuous emission rates. The current methodology can also be used to verify the radionuclide emission estimates previously used by other modeling groups for cases of intermittent or discontinuous sampling. Copyright © 2016. Published by Elsevier Ltd.

  5. The inverse problem in electroencephalography using the bidomain model of electrical activity.

    PubMed

    Lopez Rincon, Alejandro; Shimoda, Shingo

    2016-12-01

    Acquiring information about the distribution of electrical sources in the brain from electroencephalography (EEG) data remains a significant challenge. An accurate solution would provide an understanding of the inner mechanisms of the electrical activity in the brain and information about damaged tissue. In this paper, we present a methodology for reconstructing brain electrical activity from EEG data by using the bidomain formulation. The bidomain model considers continuous active neural tissue coupled with a nonlinear cell model. Using this technique, we aim to find the brain sources that give rise to the scalp potential recorded by EEG measurements taking into account a non-static reconstruction. We simulate electrical sources in the brain volume and compare the reconstruction to the minimum norm estimates (MNEs) and low resolution electrical tomography (LORETA) results. Then, with the EEG dataset from the EEG Motor Movement/Imagery Database of the Physiobank, we identify the reaction to visual stimuli by calculating the time between stimulus presentation and the spike in electrical activity. Finally, we compare the activation in the brain with the registered activation using the LinkRbrain platform. Our methodology shows an improved reconstruction of the electrical activity and source localization in comparison with MNE and LORETA. For the Motor Movement/Imagery Database, the reconstruction is consistent with the expected position and time delay generated by the stimuli. Thus, this methodology is a suitable option for continuously reconstructing brain potentials. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.

  6. A Learning Framework for Control-Oriented Modeling of Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.

    Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control oriented, have high predictive capability, impose minimal data pre-processing requirements, and can be adapted continuously to account for changing conditions as new data become available. Data-driven modeling techniques investigated so far, while promising in the context of buildings, have been unable to satisfy all of these requirements simultaneously. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and big data opportunities. In this paper, we propose a deep learning based methodology for the development of control oriented models for building energy management and test it on data from a real building. Results show that the proposed methodology significantly outperforms other data-driven modeling techniques. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep framework that can drive several use cases related to building energy management.
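    The numpy sketch below shows the kind of recurrent mapping such a model learns: a hidden state carrying the building's "memory", weather/occupancy features as inputs, and predicted energy use as output. The weights are random placeholders and the feature names are assumptions; a real model would be trained with a deep learning framework, and this is not the architecture reported in the paper.

```python
# Elman-style recurrent forward pass for building energy prediction (untrained).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 16                     # assumed features: temp, humidity, occupancy, hour
W_xh = rng.normal(0, 0.3, (n_hidden, n_in))
W_hh = rng.normal(0, 0.3, (n_hidden, n_hidden))
W_hy = rng.normal(0, 0.3, (1, n_hidden))

def predict_energy(x_seq):
    """Run the recurrence over a sequence of feature vectors."""
    h = np.zeros(n_hidden)
    y = []
    for x in x_seq:
        h = np.tanh(W_xh @ x + W_hh @ h)   # hidden-state update
        y.append((W_hy @ h).item())        # energy prediction at this step
    return np.array(y)

hourly_features = rng.normal(size=(24, n_in))   # one synthetic day
print(predict_energy(hourly_features).round(2))
```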

  7. Intergenerational Stories and the "Othering" of Samoan Youth in Schools

    ERIC Educational Resources Information Center

    Yeh, Christine J.; Borrero, Noah E.; Tito, Patsy; Petaia, Lealaisalanoa Setu

    2014-01-01

    Through a critical cultural assets model, the authors use the methodological practices of collaboration, community site visits, document analysis, and interviews with cultural insiders to explore schools' continued rejection of academic belonging for people from "othered" communities. They explore the case of Samoan youth--a marginalized…

  8. Estimation of dose-response models for discrete and continuous data in weed science

    USDA-ARS?s Scientific Manuscript database

    Dose-response analysis is widely used in biological sciences and has application to a variety of risk assessment, bioassay, and calibration problems. In weed science, dose-response methodologies have typically relied on least squares estimation under an assumption of normality. Advances in computati...

  9. The Dynamics of Information Search Services.

    ERIC Educational Resources Information Center

    Lindquist, Mats G.

    Computer-based information search services (ISSs) of the type that provide online literature searches are analyzed from a systems viewpoint using a continuous simulation model. The methodology applied is "system dynamics," and the system language is DYNAMO. The analysis reveals that the observed growth and stagnation of a typical ISS can…

  10. An Educational Approach to Computationally Modeling Dynamical Systems

    ERIC Educational Resources Information Center

    Chodroff, Leah; O'Neal, Tim M.; Long, David A.; Hemkin, Sheryl

    2009-01-01

    Chemists have used computational science methodologies for a number of decades and their utility continues to be unabated. For this reason we developed an advanced lab in computational chemistry in which students gain understanding of general strengths and weaknesses of computation-based chemistry by working through a specific research problem.…

  11. Emergy Synthesis 8 ~ Emergy and environmental accounting: Theories, applications, and methodologies

    EPA Science Inventory

    With the assembly and review of the 12 research papers for this Special Issue of Ecological Modelling, our goal was to continue working with the journal to bring a strong group of papers, indicative of the forefront of emergy research, to the global energy research community and ...

  12. Nursing leadership. Serving those who serve others.

    PubMed

    Swearingen, Sandra; Liberman, Aaron

    2004-01-01

    Because of the current and projected continuance of an acute nursing shortage, increased attention is being focused on the workplace environment. This article encourages nursing leadership to examine the feasibility of implementing a servant-leadership model as a possible methodology for securing and retaining current and future nursing staff.

  13. Regression Tree-Based Methodology for Customizing Building Energy Benchmarks to Individual Commercial Buildings

    NASA Astrophysics Data System (ADS)

    Kaskhedikar, Apoorva Prakash

    According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Significant correlations between EUIs and CBECS variables were then identified. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which can only handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than the Portfolio Manager. The broader impact of the new benchmarking methodology is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in practical implementation of benchmarking tools, which rely on query-based building and HVAC variable filters specified by the user.
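    The sketch below conveys the idea in miniature: a tree-based model of EUI from mixed continuous/categorical building attributes, with Random Forest feature importances used to rank the drivers. The data are synthetic stand-ins, and the column names only echo CBECS-style fields for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "floor_area_kft2": rng.uniform(5, 500, n),
    "n_workers": rng.integers(5, 2000, n),
    "n_pcs": rng.integers(5, 3000, n),
    "climate_zone": rng.integers(1, 6, n),        # categorical, label-encoded
})
# synthetic EUI driven mostly by workers, PCs and climate zone
eui = 40 + 0.02*df["n_workers"] + 0.01*df["n_pcs"] + 3*df["climate_zone"] + rng.normal(0, 5, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(df, eui)
print(dict(zip(df.columns, rf.feature_importances_.round(3))))
```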

  14. Assessing the changes of groundwater recharge / irrigation water use between SRI and traditional irrigation schemes in Central Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Tsai, Cheng-Bin

    2015-04-01

    To respond to agricultural water shortages driven by climate change without affecting future rice yields, the adoption of water-saving irrigation, such as the SRI methodology, is being considered for rice cultivation in Taiwan. However, flooded paddy fields are an important source of groundwater recharge in Central Taiwan, so the water-saving benefit of this methodology and its impact on reducing groundwater recharge should be assessed jointly in this area. The objective of this study was to evaluate the changes in groundwater recharge and irrigation water use between the SRI methodology and traditional irrigation schemes (continuous irrigation and rotational irrigation). An experimental paddy field located in the proximal area of the Choushui River alluvial fan (the largest groundwater pumping region in Taiwan) was chosen as the study area. The 3-D finite element groundwater model FEMWATER, with its variable boundary condition functions, was applied to simulate the groundwater recharge process and amount under the traditional irrigation schemes and the SRI methodology. Simulation scenarios for each irrigation scheme either took effective rainfall into account or did not. The simulation results showed no significant variation in infiltration rate whether or not effective rainfall was used, but a low soil moisture setting in the deep soil layers resulted in a higher infiltration rate. Taking effective rainfall into account, the average infiltration rates for continuous irrigation, rotational irrigation, and the SRI methodology in the first crop season of 2013 were 4.04 mm/day, 4.00 mm/day and 3.92 mm/day, respectively. The groundwater recharge amount under the SRI methodology was slightly lower than under the traditional schemes, a reduction of 4% and 2% compared with continuous and rotational irrigation, respectively. The field irrigation requirement of the SRI methodology was significantly lower than that of the traditional schemes, a saving of 35% and 9% compared with continuous and rotational irrigation, respectively. The water-saving benefit of the SRI methodology therefore clearly outweighed the disadvantage of reduced groundwater recharge. The results could be used as a basis for the relevant government agencies to formulate integrated water resource management strategies in this area. Keywords: SRI, Paddy field, Infiltration, Groundwater recharge

  15. Classification and regression tree analysis vs. multivariable linear and logistic regression methods as statistical tools for studying haemophilia.

    PubMed

    Henrard, S; Speybroeck, N; Hermans, C

    2015-11-01

    Haemophilia is a rare genetic haemorrhagic disease characterized by partial or complete deficiency of coagulation factor VIII, for haemophilia A, or IX, for haemophilia B. As in any other medical research domain, the field of haemophilia research is increasingly concerned with finding factors associated with binary or continuous outcomes through multivariable models. Traditional models include multiple logistic regressions, for binary outcomes, and multiple linear regressions for continuous outcomes. Yet these regression models are at times difficult to implement, especially for non-statisticians, and can be difficult to interpret. The present paper sought to didactically explain how, why, and when to use classification and regression tree (CART) analysis for haemophilia research. The CART method, developed by Breiman in 1984, is non-parametric and non-linear, and is based on repeatedly partitioning a sample into subgroups according to a splitting criterion. Classification trees (CTs) are used to analyse categorical outcomes and regression trees (RTs) to analyse continuous ones. The CART methodology has become increasingly popular in the medical field, yet only a few examples of studies using this methodology specifically in haemophilia have to date been published. Two previously published examples of CART analysis in this field are explained didactically in detail. There is increasing interest in using CART analysis in the health domain, primarily due to its ease of implementation, use, and interpretation, thus facilitating medical decision-making. This method should be promoted for analysing continuous or categorical outcomes in haemophilia, when applicable. © 2015 John Wiley & Sons Ltd.
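    A minimal CART illustration in this spirit follows: a shallow classification tree for a binary outcome from mixed predictors, printed as human-readable rules. The data are synthetic haemophilia-flavoured placeholders and the variable names are assumptions, not the studies discussed in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(11)
n = 300
age = rng.uniform(5, 70, n)
factor_level = rng.uniform(0, 40, n)          # residual FVIII/FIX activity (%), assumed
bleed = (factor_level < 5) & (age > 30)       # toy rule generating the binary outcome

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(np.column_stack([age, factor_level]), bleed)
print(export_text(tree, feature_names=["age", "factor_level"]))
```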

  16. Provider connectedness and communication patterns: extending continuity of care in the context of the circle of care

    PubMed Central

    2013-01-01

    Background Continuity is an important aspect of quality of care, especially for complex patients in the community. We explored provider perceptions of continuity through a system’s lens. The circle of care was used as the system. Methods Soft systems methodology was used to understand and improve continuity for end of life patients in two communities. Participants: Physicians, nurses, pharmacists in two communities in British Columbia, involved in end of life care. Two debates/discussion groups were completed after the interviews and initial analysis to confirm findings. Interview recordings were qualitatively analyzed to extract components and enablers of continuity. Results 32 provider interviews were completed. Findings from this study support the three types of continuity described by Haggerty and Reid (information, management, and relationship continuity). This work extends their model by adding features of the circle of care that influence and enable continuity: Provider Connectedness the sense of knowing and trust between providers who share care of a patient; a set of ten communication patterns that are used to support continuity across the circle of care; and environmental factors outside the circle that can indirectly influence continuity. Conclusions We present an extended model of continuity of care. The components in the model can support health planners consider how health care is organized to promote continuity and by researchers when considering future continuity research. PMID:23941179

  17. Stochastic simulations on a model of circadian rhythm generation.

    PubMed

    Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin

    2008-01-01

    Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are apparently discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, which is known to involve small numbers of molecules. Thus it is appropriate for the system to be modeled by stochastic equations and analyzed by stochastic simulation methodologies. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method, also due to Gillespie, to the interlocked feedback model. To this end, we first reformulated the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, compare them with the dynamics obtained from the original deterministic model, and characterize how the dynamics depend on the simulation methodology.
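    As a hedged illustration of Gillespie's direct method, the sketch below simulates a minimal birth-death process for a single mRNA species, the kind of low-copy-number setting the paragraph argues requires stochastic treatment. The rates are illustrative and the reaction set is far simpler than the interlocked feedback model.

```python
import numpy as np

def gillespie_birth_death(k_prod=5.0, k_deg=0.1, x0=0, t_end=100.0, seed=0):
    """Gillespie direct method for: 0 -> mRNA (k_prod), mRNA -> 0 (k_deg * x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        rates = np.array([k_prod, k_deg * x])      # propensities of the two reactions
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # exponential waiting time
        x += 1 if rng.random() < rates[0] / total else -1   # pick which reaction fires
        times.append(t); counts.append(x)
    return np.array(times), np.array(counts)

t, x = gillespie_birth_death()
print(f"mean copy number over the run: {x.mean():.1f} (ODE steady state: {5.0/0.1:.0f})")
```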

  18. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes.

    PubMed

    García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel

    2015-01-01

    A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating individually fruit and vegetable wastes (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating simultaneously 5 fruit and vegetable wastes. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rate ranging between 2.0 and 4.7 gVS/Ld. The model (built in Matlab/Simulink) fit to a large extent the experimental results in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes. Copyright © 2014 Elsevier Ltd. All rights reserved.
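    The sketch below shows the shape of the decoupled kinetics described above: separate first-order disintegration of readily and slowly biodegradable fractions feeding a hydrolysable pool, integrated as ODEs. The rate constants, fractions and initial loads are hypothetical, not the calibrated values of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_dis_r, k_dis_s, k_hyd = 0.8, 0.15, 1.2        # 1/d: fast/slow disintegration, hydrolysis (assumed)

def rhs(t, y):
    Xr, Xs, Xp = y                               # readily / slowly biodegradable, particulate pool
    return [-k_dis_r * Xr,
            -k_dis_s * Xs,
             k_dis_r * Xr + k_dis_s * Xs - k_hyd * Xp]

y0 = [12.0, 8.0, 0.0]                            # assumed gVS/L split of the fed substrate
sol = solve_ivp(rhs, (0, 20), y0, t_eval=np.linspace(0, 20, 5))
print(sol.y.round(2))
```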

  19. Making Waves: New Developments in Toxicology With the Zebrafish.

    PubMed

    Horzmann, Katharine A; Freeman, Jennifer L

    2018-05-01

    The laboratory zebrafish (Danio rerio) is now an accepted model in toxicologic research. The zebrafish model fills a niche between in vitro models and mammalian biomedical models. The developmental characteristics of the small fish are strategically being used by scientists to study topics ranging from high-throughput toxicity screens to toxicity in multi- and transgenerational studies. High-throughput technology has increased the utility of zebrafish embryonic toxicity assays in screening of chemicals and drugs for toxicity or effect. Additionally, advances in behavioral characterization and experimental methodology allow for observation of recognizable phenotypic changes after xenobiotic exposure. Future directions in zebrafish research are predicted to take advantage of CRISPR-Cas9 genome editing methods in creating models of disease and interrogating mechanisms of action with fluorescent reporters or tagged proteins. Zebrafish can also model developmental origins of health and disease and multi- and transgenerational toxicity. The zebrafish has many advantages as a toxicologic model and new methodologies and areas of study continue to expand the usefulness and application of the zebrafish.

  20. 40 CFR Appendix B to Part 72 - Methodology for Conversion of Emissions Limits

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Methodology for Conversion of Emissions Limits B Appendix B to Part 72 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Pt. 72, App. B Appendix B to Part 72—Methodology for...

  1. 40 CFR Appendix A to Part 72 - Methodology for Annualization of Emissions Limits

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Methodology for Annualization of Emissions Limits A Appendix A to Part 72 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Pt. 72, App. A Appendix A to Part 72—Methodology for...

  2. 40 CFR Appendix A to Part 72 - Methodology for Annualization of Emissions Limits

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Methodology for Annualization of Emissions Limits A Appendix A to Part 72 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Pt. 72, App. A Appendix A to Part 72—Methodology for...

  3. 40 CFR Appendix B to Part 72 - Methodology for Conversion of Emissions Limits

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Methodology for Conversion of Emissions Limits B Appendix B to Part 72 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) PERMITS REGULATION Pt. 72, App. B Appendix B to Part 72—Methodology for...

  4. An Indirect System Identification Technique for Stable Estimation of Continuous-Time Parameters of the Vestibulo-Ocular Reflex (VOR)

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Wallin, Ragnar; Boyle, Richard D.

    2013-01-01

    The vestibulo-ocular reflex (VOR) is a well-known dual-mode bifurcating system that consists of slow and fast modes associated with nystagmus and saccade, respectively. Estimation of the continuous-time parameters of nystagmus and saccade models is known to be sensitive to estimation methodology, noise and sampling rate. Stable and accurate estimation of these parameters is critical for accurate disease modelling, clinical diagnosis, robotic control strategies, mission planning for space exploration, pilot safety, etc. This paper presents a novel indirect system identification method for the estimation of continuous-time parameters of the VOR, employing standardised least squares with dual sampling rates in a sparse structure. This approach permits the stable and simultaneous estimation of both nystagmus and saccade data. The efficacy of this approach is demonstrated via simulation of a continuous-time model of the VOR with typical parameters found in clinical studies and in the presence of output additive noise.
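    The indirect route can be pictured with the toy sketch below: a first-order discrete-time model is fitted by least squares and its pole is mapped back to a continuous-time time constant. The model structure, sampling rate and parameter values are assumptions for illustration, not the dual-rate sparse formulation of the paper.

```python
import numpy as np

Ts = 0.01                                          # assumed sampling interval (s)
tau_true, gain_true = 0.25, 0.9
a_true = np.exp(-Ts / tau_true)
b_true = gain_true * (1.0 - a_true)

rng = np.random.default_rng(5)
u = rng.standard_normal(2000)                      # head-velocity-like input
y = np.zeros_like(u)
for k in range(1, len(u)):                         # y[k] = a*y[k-1] + b*u[k-1] + noise
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.001 * rng.standard_normal()

Phi = np.column_stack([y[:-1], u[:-1]])            # regressors for least squares
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
tau_hat = -Ts / np.log(a_hat)                      # discrete pole -> continuous time constant
print(f"estimated time constant: {tau_hat*1e3:.1f} ms (true {tau_true*1e3:.0f} ms)")
```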

  5. Application of enhanced modern structured analysis techniques to Space Station Freedom electric power system requirements

    NASA Technical Reports Server (NTRS)

    Biernacki, John; Juhasz, John; Sadler, Gerald

    1991-01-01

    A team of Space Station Freedom (SSF) system engineers are in the process of extensive analysis of the SSF requirements, particularly those pertaining to the electrical power system (EPS). The objective of this analysis is the development of a comprehensive, computer-based requirements model, using an enhanced modern structured analysis methodology (EMSA). Such a model provides a detailed and consistent representation of the system's requirements. The process outlined in the EMSA methodology is unique in that it allows the graphical modeling of real-time system state transitions, as well as functional requirements and data relationships, to be implemented using modern computer-based tools. These tools permit flexible updating and continuous maintenance of the models. Initial findings resulting from the application of EMSA to the EPS have benefited the space station program by linking requirements to design, providing traceability of requirements, identifying discrepancies, and fostering an understanding of the EPS.

  6. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh

    2015-01-15

    Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficients estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a co-digestion continuous experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating individually fruit and vegetable wastes (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating simultaneously 5 fruit and vegetable wastes. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rate ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fit to a large extent the experimental results in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.

  7. An innovative statistical approach for analysing non-continuous variables in environmental monitoring: assessing temporal trends of TBT pollution.

    PubMed

    Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos

    2011-03-01

    The current work presents an innovative statistical approach to model ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater", and it is not possible to have information about the size of the difference between two particular values. The ordinal variable examined in this study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values correspond to a process of imposex development that can be considered continuous in both the biological and statistical senses and can be described by a latent, non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted along the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever variables can only be assessed on an ordinal scale.
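    A hedged sketch of an ordered logit fit for a VDS-like ordinal response is shown below, using statsmodels' OrderedModel (available in recent releases) with shell size and survey year as covariates and an OSPAR-style P(VDS ≥ 2) read-off. The data are synthetic placeholders, not the Nucella lapillus surveys, and the binning into stages is purely for generating example data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(13)
n = 400
year = rng.integers(2003, 2009, n).astype(float)
shell = rng.normal(25, 4, n)
latent = 0.1*shell - 0.5*(year - 2003) + rng.logistic(size=n)   # declining TBT effect
vds = pd.Series(pd.qcut(latent, 5, labels=[0, 1, 2, 3, 4]))     # ordered stages 0..4

model = OrderedModel(vds, np.column_stack([shell, year]), distr="logit")
fit = model.fit(method="bfgs", disp=False)
# predicted category probabilities for one female (shell 25 mm, year 2008)
probs = fit.model.predict(fit.params, exog=np.array([[25.0, 2008.0]]))
print("P(VDS >= 2):", probs[0, 2:].sum().round(3))
```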

  8. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image database layers.
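    The first idea can be sketched as Bayes-rule classification with Gaussian class-conditional densities and externally supplied class priors, which is the role the collateral data play above. The snippet below uses scikit-learn's quadratic discriminant analysis with a priors argument; the two spectral bands, three cover classes and prior values are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(21)
# two synthetic spectral bands for three cover classes
X = np.vstack([rng.normal([30, 60], 5, (100, 2)),
               rng.normal([45, 50], 5, (100, 2)),
               rng.normal([60, 70], 5, (100, 2))])
y = np.repeat([0, 1, 2], 100)

# priors taken, e.g., from class areas in a collateral map layer (assumed values)
clf = QuadraticDiscriminantAnalysis(priors=[0.6, 0.3, 0.1]).fit(X, y)
print(clf.predict([[44, 55]]), clf.predict_proba([[44, 55]]).round(3))
```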

  9. Budget Reduction in the Navy

    DTIC Science & Technology

    1990-12-01

    process; (4) the degree of budgetary responsiveness in DOD/DON cutback budgeting to criteria developed from two theoretical models of fiscal reduction methodology. ...accompanying reshaping of U.S. forces include a continuation of the positive developments in Eastern Europe and the Soviet Union, completion of

  10. Helicopter-V/STOL dynamic wind and turbulence design methodology

    NASA Technical Reports Server (NTRS)

    Bailey, J. Earl

    1987-01-01

    Aircraft and helicopter accidents due to severe dynamic wind and turbulence continue to present challenging design problems. The development of the current set of design analysis tools for aircraft wind and turbulence design began in the 1940s and 1950s. The areas of helicopter dynamic wind and turbulence modeling and vehicle response to severe dynamic wind inputs (microburst-type phenomena) during takeoff and landing remain major unsolved design problems owing to a lack of both environmental data and computational methodology. The development of helicopter and V/STOL dynamic wind and turbulence response computational methodology is reviewed, the current state of the design art in industry is outlined, and comments on design methodology are made which may serve to improve future flight vehicle design.

  11. A New Methodology for Modeling National Command Level Decisionmaking in War Games and Simulations.

    DTIC Science & Technology

    1986-07-01

    Conclusions about Utility and Development Options, The Rand Corporation, R-2945-DNA, March 1983. Drucker, Peter F., Management: Tasks, Responsibilities...looks to the worst case can readily find himself paralyzed. Of course, it is also true 'The effort to affect an opponent's image of oneself is part of...how to manage forces on a continuing basis. So long as the broad features of the NCL-specified plan continue to appear valid, it is the military com

  12. Effectiveness of the Comprehensive Approach to Rehabilitation (CARe) methodology: design of a cluster randomized controlled trial.

    PubMed

    Bitter, Neis A; Roeg, Diana P K; van Nieuwenhuizen, Chijs; van Weeghel, Jaap

    2015-07-22

    There is an increasing amount of evidence for the effectiveness of rehabilitation interventions for people with severe mental illness (SMI). In the Netherlands, a rehabilitation methodology that is well known and often applied is the Comprehensive Approach to Rehabilitation (CARe) methodology. The overall goal of the CARe methodology is to improve the client's quality of life by supporting the client in realizing his/her goals and wishes, handling his/her vulnerability and improving the quality of his/her social environment. The methodology is strongly influenced by the concept of 'personal recovery' and the 'strengths case management model'. No controlled effect studies have been conducted hitherto regarding the CARe methodology. This study is a two-armed cluster randomized controlled trial (RCT) that will be executed in teams from three organizations for sheltered and supported housing, which provide services to people with long-term severe mental illness. Teams in the intervention group will receive the multiple-day CARe methodology training from a specialized institute and start working according to the CARe methodology guideline. Teams in the control group will continue working in their usual way. Standardized questionnaires will be completed at baseline (T0) and at 10 (T1) and 20 months (T2) post baseline. Primary outcomes are recovery, social functioning and quality of life. The model fidelity of the CARe methodology will be assessed at T1 and T2. This study is the first controlled effect study on the CARe methodology and one of the few RCTs on a broad rehabilitation method or strength-based approach. This study is relevant because mental health care organizations have become increasingly interested in recovery and rehabilitation-oriented care. The trial registration number is ISRCTN77355880.

  13. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model ( Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.

  14. Continuous quality improvement: a shared governance model that maximizes agent-specific knowledge.

    PubMed

    Burkoski, Vanessa; Yoon, Jennifer

    2013-01-01

    Motivate, Innovate, Celebrate: an innovative shared governance model through the establishment of continuous quality improvement (CQI) councils was implemented across the London Health Sciences Centre (LHSC). The model leverages agent-specific knowledge at the point of care and provides a structure aimed at building human resources capacity and sustaining enhancements to quality and safe care delivery. Interprofessional and cross-functional teams work through the CQI councils to identify, formulate, execute and evaluate CQI initiatives. In addition to a structure that facilitates collaboration, accountability and ownership, a corporate CQI Steering Committee provides the forum for scaling up and spreading this model. Point-of-care staff, clinical management and educators were trained in LEAN methodology and patient experience-based design to ensure sufficient knowledge and resources to support the implementation.

  15. Understanding Activity Engagement Across Weekdays and Weekend Days: A Multivariate Multiple Discrete-Continuous Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garikapati, Venu; Astroza, Sebastian; Bhat, Prerna C.

    This paper is motivated by the increasing recognition that modeling activity-travel demand for a single day of the week, as is done in virtually all travel forecasting models, may be inadequate in capturing underlying processes that govern activity-travel scheduling behavior. The considerable variability in daily travel suggests that there are important complementary relationships and competing tradeoffs involved in scheduling and allocating time to various activities across days of the week. Both limited survey data availability and methodological challenges in modeling week-long activity-travel schedules have precluded the development of multi-day activity-travel demand models. With passive and technology-based data collection methods increasingly in vogue, the collection of multi-day travel data may become increasingly commonplace in the years ahead. This paper addresses the methodological challenge associated with modeling multi-day activity-travel demand by formulating a multivariate multiple discrete-continuous probit (MDCP) model system. The comprehensive framework ties together two MDCP model components, one corresponding to weekday time allocation and the other to weekend activity-time allocation. By tying the two MDCP components together, the model system also captures relationships in activity-time allocation between weekdays on the one hand and weekend days on the other. Model estimation on a week-long travel diary data set from the United Kingdom shows that there are significant inter-relationships between weekdays and weekend days in activity-travel scheduling behavior. The model system presented in this paper may serve as a higher-level multi-day activity scheduler in conjunction with existing daily activity-based travel models.

  16. Leveraging the Zachman framework implementation using action-research methodology - a case study: aligning the enterprise architecture and the business goals

    NASA Astrophysics Data System (ADS)

    Nogueira, Juan Manuel; Romero, David; Espadas, Javier; Molina, Arturo

    2013-02-01

    With the emergence of new enterprise models, such as technology-based enterprises, and the large quantity of information generated through technological advances, the Zachman framework continues to represent a modelling tool of great utility and value to construct an enterprise architecture (EA) that can integrate and align the IT infrastructure and business goals. Nevertheless, implementing an EA requires a considerable effort within an enterprise. Small technology-based enterprises and start-ups can take advantage of EAs and frameworks but, because these enterprises have limited resources to allocate for this task, an enterprise framework implementation is not feasible in most cases. This article proposes a new methodology based on action-research for the implementation of the business, system and technology models of the Zachman framework to assist and facilitate its implementation. Following the explanation of the cycles of the proposed methodology, a case study is presented to illustrate the results of implementing the Zachman framework in a technology-based enterprise, PyME CREATIVA, using an action-research approach.

  17. Cloud Computing in the Curricula of Schools of Computer Science and Information Systems

    ERIC Educational Resources Information Center

    Lawler, James P.

    2011-01-01

    The cloud continues to be a developing area of information systems. Evangelistic literature in the practitioner field indicates benefit for business firms but disruption for technology departments of the firms. Though the cloud currently is immature in methodology, this study defines a model program by which computer science and information…

  18. A heuristic neural network initialization scheme for modeling nonlinear functions in engineering mechanics: continuous development

    NASA Astrophysics Data System (ADS)

    Pei, Jin-Song; Mai, Eric C.

    2007-04-01

    This paper introduces a continuous effort towards the development of a heuristic initialization methodology for constructing multilayer feedforward neural networks to model nonlinear functions. In this and previous studies that this work is built upon, including the one presented at SPIE 2006, the authors do not presume to provide a universal method to approximate arbitrary functions; rather, the focus is given to the development of a rational and unambiguous initialization procedure that applies to the approximation of nonlinear functions in the specific domain of engineering mechanics. The applications of this exploratory work can be numerous, including those associated with potential correlation and interpretation of the inner workings of neural networks, such as damage detection. The goal of this study is fulfilled by utilizing the governing physics and mathematics of nonlinear functions and the strength of the sigmoidal basis function. A step-by-step graphical procedure utilizing a few neural network prototypes as "templates" to approximate commonly seen memoryless nonlinear functions of one or two variables is further developed in this study. Decomposition of complex nonlinear functions into a summation of some simpler nonlinear functions is utilized to exploit this prototype-based initialization methodology. Training examples are presented to demonstrate the rationality and efficiency of the proposed methodology when compared with the popular Nguyen-Widrow initialization algorithm. Future work is also identified.
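
    As a toy illustration of the underlying idea, the sketch below approximates a memoryless one-variable nonlinearity with a fixed layer of sigmoidal basis terms whose centres and slopes are placed deliberately over the input range rather than drawn at random. It is not the authors' initialization procedure; the target function and all parameter values are invented for the example.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Target: a simple softening-type nonlinearity of one variable (illustrative only).
    x = np.linspace(-2.0, 2.0, 401)
    target = np.tanh(1.5 * x)

    # Heuristically placed sigmoid "templates": centres spread over the input range,
    # slopes set from the target's characteristic scale instead of random draws.
    centers = np.array([-1.0, 0.0, 1.0])
    slopes = np.array([3.0, 3.0, 3.0])
    hidden = sigmoid(slopes * (x[:, None] - centers))       # shape (401, 3)

    # Output-layer weights via linear least squares on the fixed hidden layer.
    A = np.hstack([hidden, np.ones((x.size, 1))])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    approx = A @ w
    print("max abs error:", np.abs(approx - target).max())
    ```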

  19. Process-oriented integration and coordination of healthcare services across organizational boundaries.

    PubMed

    Tello-Leal, Edgar; Chiotti, Omar; Villarreal, Pablo David

    2012-12-01

    The paper presents a methodology that follows a top-down approach based on a Model-Driven Architecture for integrating and coordinating healthcare services through cross-organizational processes, to enable organizations to provide high-quality healthcare services and continuous process improvements. The methodology provides a modeling language that enables organizations to conceptualize an integration agreement and to identify and design cross-organizational process models. These models are used for the automatic generation of the private view of processes each organization should perform to fulfill its role in cross-organizational processes, and of Colored Petri Net specifications to implement these processes. A multi-agent system platform provides agents able to interpret Colored Petri Nets to enable communication between the Healthcare Information Systems for executing the cross-organizational processes. Clinical documents are defined using the HL7 Clinical Document Architecture. This methodology guarantees that important requirements for healthcare services integration and coordination are fulfilled: interoperability between heterogeneous Healthcare Information Systems; ability to cope with changes in cross-organizational processes; guarantee of alignment between the integrated healthcare service solution defined at the organizational level and the solution defined at the technological level; and the distributed execution of cross-organizational processes while preserving the organizations' autonomy.

  20. Methodological Issues in Economic Evaluations Submitted to the Pan-Canadian Oncology Drug Review (pCODR).

    PubMed

    Masucci, Lisa; Beca, Jaclyn; Sabharwal, Mona; Hoch, Jeffrey S

    2017-12-01

    Public drug plans are faced with increasingly difficult funding decisions. In Canada, the pan-Canadian Oncology Drug Review (pCODR) makes funding recommendations to the provincial and territorial drug plans responsible for cancer drugs. Assessments of the economic models submitted by pharmaceutical manufacturers are publicly reported. The main objective of this research was to identify recurring methodological issues in economic models submitted to pCODR for funding reviews. The secondary objective was to explore whether any observed relationships exist between reported methodological issues and the funding recommendations made by pCODR's expert review committee. Publicly available Economic Guidance Reports from July 2011 (inception) until June 2014 for drug reviews with a final funding recommendation (N = 34) were independently examined by two authors. Major methodological issues from each review were abstracted and grouped into nine main categories. Each issue was also categorized based on the perception of the reviewer's actions to manage it. The most commonly reported issues involved costing (59% of reviews), time horizon (56%), and model structure (36%). Several types of issues were identified that usually could not be resolved, such as the quality of clinical data or uncertainty with indirect comparisons. Issues with costing or choice of utility estimates could usually be addressed or explored by reviewers. No statistically significant relationship was found between any methodological issue and funding recommendations from the expert review committee. The findings provide insights that can be used by parties who submit or review economic evidence for continuous improvement and consistency in economic modeling, reporting, and decision making.

  1. NPAC-Nozzle Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Barnhart, Paul J.

    1997-01-01

    A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, and state relations, along with other relations that permit fast and accurate calculations of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under-expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.

  2. Predictive Modeling of Cardiac Ischemia

    NASA Technical Reports Server (NTRS)

    Anderson, Gary T.

    1996-01-01

    The goal of the Contextual Alarms Management System (CALMS) project is to develop sophisticated models to predict the onset of clinical cardiac ischemia before it occurs. The system will continuously monitor cardiac patients and set off an alarm when they appear about to suffer an ischemic episode. The models take as inputs information from patient history and combine it with continuously updated information extracted from blood pressure, oxygen saturation and ECG lines. Expert system, statistical, neural network and rough set methodologies are then used to forecast the onset of clinical ischemia before it transpires, thus allowing early intervention aimed at preventing morbid complications from occurring. The models will differ from previous attempts by including combinations of continuous and discrete inputs. A commercial medical instrumentation and software company has invested funds in the project with a goal of commercialization of the technology. The end product will be a system that analyzes physiologic parameters and produces an alarm when myocardial ischemia is present. If proven feasible, a CALMS-based system will be added to existing heart monitoring hardware.

  3. Abdominal surgery process modeling framework for simulation using spreadsheets.

    PubMed

    Boshkoska, Biljana Mileva; Damij, Talib; Jelenc, Franc; Damij, Nadja

    2015-08-01

    We provide a continuation of the existing Activity Table Modeling methodology with a modular spreadsheet simulation. The simulation model developed comprises 28 modeling elements for the abdominal surgery cycle process. The simulation of a two-week patient flow in an abdominal clinic with 75 beds demonstrates the applicability of the methodology. The simulation does not include macros; thus, programming experience is not essential for replicating or upgrading the model. Unlike the existing methods, the proposed solution employs a modular approach for modeling the activities that ensures better readability, the possibility of easily upgrading the model with other activities, and easy extension and connection with other similar models. We propose a first-in-first-served approach for simulation of servicing multiple patients. The uncertain time duration of the activities is modeled using the function "rand()". The patients' movements from one activity to the next are tracked with nested "if()" functions, thus allowing easy re-creation of the process without the need for complex programming. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
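
    A minimal Python analogue of the spreadsheet logic described above (random activity durations in place of "rand()", first-in-first-served servicing in place of nested "if()" tracking) is sketched below. The activity chain, duration bounds and arrival spacing are made-up illustrative values, not the clinic's data.

    ```python
    import random

    # Illustrative activity chain with uniform duration bounds in minutes (not clinic data).
    activities = [("admission", 10, 30), ("surgery", 60, 180), ("recovery", 120, 360)]

    def simulate(n_patients, arrival_gap=45, seed=1):
        """First-in-first-served flow: each activity must free up before the next patient uses it."""
        random.seed(seed)
        free_at = {name: 0.0 for name, *_ in activities}   # when each activity becomes available
        finish_times = []
        for i in range(n_patients):
            t = i * arrival_gap                            # arrival time of patient i
            for name, lo, hi in activities:
                start = max(t, free_at[name])              # wait if the activity is busy
                t = start + random.uniform(lo, hi)         # rand()-style stochastic duration
                free_at[name] = t
            finish_times.append(t)
        return finish_times

    print([round(t, 1) for t in simulate(5)])
    ```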

  4. Advances in computer-aided well-test interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horne, R.N.

    1994-07-01

    Despite the feeling expressed several times over the past 40 years that well-test analysis had reached its peak development, an examination of recent advances shows continuous expansion in capability, with future improvement likely. The expansion in interpretation capability over the past decade arose mainly from the development of computer-aided techniques, which, although introduced 20 years ago, have come into use only recently. The broad application of computer-aided interpretation originated with the improvement of the methodologies and continued with the expansion in computer access and capability that accompanied the explosive development of the microcomputer industry. This paper focuses on the different pieces of the methodology that combine to constitute a computer-aided interpretation and attempts to compare some of the approaches currently used. Future directions of the approach are also discussed. The separate areas discussed are deconvolution, pressure derivatives, model recognition, nonlinear regression, and confidence intervals.

  5. Probabilistic self-organizing maps for continuous data.

    PubMed

    Lopez-Rubio, Ezequiel

    2010-10-01

    The original self-organizing feature map did not define any probability distribution on the input space. However, the advantages of introducing probabilistic methodologies into self-organizing map models were soon evident. This has led to a wide range of proposals which reflect the current emergence of probabilistic approaches to computational intelligence. The underlying estimation theories behind them derive from two main lines of thought: the expectation maximization methodology and stochastic approximation methods. Here, we present a comprehensive view of the state of the art, with a unifying perspective of the involved theoretical frameworks. In particular, we examine the most commonly used continuous probability distributions, self-organization mechanisms, and learning schemes. Special emphasis is given to the connections among them and their relative advantages depending on the characteristics of the problem at hand. Furthermore, we evaluate their performance in two typical applications of self-organizing maps: classification and visualization.

  6. Improving automation standards via semantic modelling: Application to ISA88.

    PubMed

    Dombayci, Canan; Farreres, Javier; Rodríguez, Horacio; Espuña, Antonio; Graells, Moisès

    2017-03-01

    Standardization is essential for automation. Extensibility, scalability, and reusability are important features for automation software that rely on the efficient modelling of the addressed systems. The work presented here is part of the ongoing development of a methodology for semi-automatic ontology construction from technical documents. The main aim of this work is to systematically check the consistency of technical documents and to support its improvement. The formalization of conceptual models and the subsequent writing of technical standards are simultaneously analyzed, and guidelines proposed for application to future technical standards. Three paradigms are discussed for the development of domain ontologies from technical documents, starting from the current state of the art, continuing with the intermediate method presented and used in this paper, and ending with the suggested paradigm for the future. The ISA88 Standard is taken as a representative case study. Linguistic techniques from the semi-automatic ontology construction methodology are applied to the ISA88 Standard, and different modelling and standardization aspects that are worth sharing with the automation community are addressed. This study discusses different paradigms for developing and sharing conceptual models for the subsequent development of automation software, along with presenting the systematic consistency checking method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  7. Comparative study of irrigation water use and groundwater recharge under various irrigation schemes in an agricultural region, central Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Tsai, Cheng-Bin

    2016-04-01

    The risk to rice production in Taiwan has increased notably due to climate change. To respond to the growing agricultural water shortage without affecting normal food production in the future, the application of water-saving irrigation will be a substantial part of the resolution. However, the adoption of water-saving irrigation may reduce groundwater recharge, because continuous flooding in the paddy fields can be regarded as an important source of groundwater recharge. The aim of this study was to evaluate the irrigation water-saving benefit and the groundwater recharge deficit when adopting the System of Rice Intensification (SRI) methodology in the Choushui River alluvial fan (the largest groundwater-pumping and most important rice-cropping region in central Taiwan). The three-dimensional finite element groundwater model FEMWATER was applied to simulate the infiltration process and groundwater recharge under the SRI methodology and under traditional irrigation schemes, including continuous irrigation and rotational irrigation, in two rice-crop periods with hydro-climatic data of 2013. The irrigation water use was then calculated by water balance. The results showed that the groundwater recharge under the SRI methodology was slightly lower than under the traditional irrigation schemes: reductions of 3.6% and 1.6% in the first crop period and of 3.2% and 1.6% in the second crop period, compared with continuous irrigation and rotational irrigation, respectively. However, the SRI methodology achieved a notable water-saving benefit that outweighed the disadvantage of reduced groundwater recharge. The field irrigation requirement of the SRI methodology was significantly lower than that of the traditional irrigation schemes, saving 37% and 20% of irrigation water in the first crop period and 53% and 35% in the second crop period, compared with continuous irrigation and rotational irrigation, respectively. Therefore, the amount of groundwater pumping for irrigation can be reduced when adopting the SRI methodology in the future. The reduction in groundwater recharge could be compensated by using 1,500 hectares of fallow paddy fields, located in the proximal-fan region, as recharge pools in the wet season. The adoption of water-saving irrigation would help the relevant government agency formulate integrated water resource management strategies in this region. Keywords: groundwater recharge, SRI, FEMWATER, field irrigation requirement

  8. Dealing with Time in Health Economic Evaluation: Methodological Issues and Recommendations for Practice.

    PubMed

    O'Mahony, James F; Newall, Anthony T; van Rosmalen, Joost

    2015-12-01

    Time is an important aspect of health economic evaluation, as the timing and duration of clinical events, healthcare interventions and their consequences all affect estimated costs and effects. These issues should be reflected in the design of health economic models. This article considers three important aspects of time in modelling: (1) which cohorts to simulate and how far into the future to extend the analysis; (2) the simulation of time, including the difference between discrete-time and continuous-time models, cycle lengths, and converting rates and probabilities; and (3) discounting future costs and effects to their present values. We provide a methodological overview of these issues and make recommendations to help inform both the conduct of cost-effectiveness analyses and the interpretation of their results. For choosing which cohorts to simulate and how many, we suggest analysts carefully assess potential reasons for variation in cost effectiveness between cohorts and the feasibility of subgroup-specific recommendations. For the simulation of time, we recommend using short cycles or continuous-time models to avoid biases and the need for half-cycle corrections, and provide advice on the correct conversion of transition probabilities in state transition models. Finally, for discounting, analysts should not only follow current guidance and report how discounting was conducted, especially in the case of differential discounting, but also seek to develop an understanding of its rationale. Our overall recommendations are that analysts explicitly state and justify their modelling choices regarding time and consider how alternative choices may impact on results.
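
    Two of the conversions discussed above can be made concrete with a short sketch, assuming a constant hazard within a cycle and a 3% annual discount rate; the numbers are illustrative only and not recommendations from the article.

    ```python
    import math

    def rate_to_prob(rate_per_year, cycle_years):
        """Constant-hazard conversion of a rate to a per-cycle transition probability:
        p = 1 - exp(-r * t). Note that probabilities do not scale linearly with cycle length."""
        return 1.0 - math.exp(-rate_per_year * cycle_years)

    def discount_factor(years, annual_rate=0.03):
        """Present-value weight for a cost or effect occurring 'years' in the future."""
        return 1.0 / (1.0 + annual_rate) ** years

    # Example: an annual event rate of 0.20 converted to monthly and annual cycle lengths.
    print(rate_to_prob(0.20, 1 / 12))   # ~0.0165 per monthly cycle
    print(rate_to_prob(0.20, 1.0))      # ~0.181 per annual cycle (not 0.20)
    print(discount_factor(10))          # ~0.744 weight at a 3% discount rate
    ```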

  9. Payload/orbiter contamination control requirement study: Spacelab configuration contamination study

    NASA Technical Reports Server (NTRS)

    Bareiss, L. E.; Hetrick, M. A.; Ress, E. B.; Strange, D. A.

    1976-01-01

    The assessment of the Spacelab carrier-induced contaminant environment was continued, and the ability of Spacelab to meet established contamination control criteria for the space transportation system program was determined. The primary areas considered included: (1) updating, refining, and improving the Spacelab contamination computer model and contamination analysis methodology, (2) establishing the resulting adjusted induced environment predictions for comparison with the applicable criteria, (3) determining the Spacelab design and operational requirements necessary to meet the criteria, (4) conducting mission feasibility analyses of the combined Spacelab/Orbiter contaminant environment for specific proposed mission and payload mixes, and (5) establishing a preliminary Spacelab mission support plan as well as model interface requirements. A summary of those activities conducted to date with respect to the modelling, analysis, and predictions of the induced environment, including any modifications in approach or methodology utilized in the contamination assessment of the Spacelab carrier, was presented.

  10. The Identification and Validation Process of Proportional Reasoning Attributes: An Application of a Proportional Reasoning Modeling Framework

    ERIC Educational Resources Information Center

    Tjoe, Hartono; de la Torre, Jimmy

    2014-01-01

    In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the…

  11. Leading Schools as Living Systems: A Model of Organizational Survival--A Delphi Study

    ERIC Educational Resources Information Center

    Romero, Ricardo

    2012-01-01

    Purpose: The purpose of this study was to determine the most necessary and the most feasibly practicable future leadership behaviors of the educational leader of a California Schools to Watch-Taking Center Stage middle school necessary to lead a school organization toward continued survival. Methodology: The participants in the present study were…

  12. Towards Accomplished Practice in Learning Skills for Science (LSS): The Synergy between Design and Evaluation Methodology in a Reflective CPD Programme

    ERIC Educational Resources Information Center

    Scherz, Zahava; Bialer, Liora; Eylon, Bat-Sheva

    2011-01-01

    This study was carried out in the framework of continuous professional development (CPD) programmes following a CPD model aimed at promoting "accomplished practice" involving: pedagogical knowledge, content knowledge, pedagogical content knowledge and scholarship of teaching. Teachers were asked to bring evidence about their practice.…

  13. A New Predictive Model of Centerline Segregation in Continuous Cast Steel Slabs by Using Multivariate Adaptive Regression Splines Approach

    PubMed Central

    García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María

    2015-01-01

    The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, central segregation was studied successfully using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters are considered. The results of the present study are two-fold. First, the significance of each physical-chemical variable on the segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed, and coefficients of determination of 0.93 for continuity factor estimation and 0.95 for average width were obtained when the MARS technique was applied to the experimental dataset. The agreement between experimental data and the model confirmed the good performance of the latter.
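
    MARS models are built from piecewise-linear hinge basis functions. The sketch below fits a linear model on a hinge basis for one candidate knot to show the building block; the data and knot location are synthetic, and a real analysis would use a dedicated MARS implementation that searches over knots and variables and prunes terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, 200)
    y = 2.0 + 0.5 * np.maximum(0, x - 4.0) + rng.normal(0, 0.2, x.size)  # kink at x = 4

    def hinge_basis(x, knot):
        """The paired hinge functions max(0, x - t) and max(0, t - x) used by MARS."""
        return np.column_stack([np.maximum(0, x - knot), np.maximum(0, knot - x)])

    # Least-squares fit on the hinge basis for a fixed candidate knot at 4.0.
    A = np.column_stack([np.ones_like(x), hinge_basis(x, 4.0)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ coef
    r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
    print("coefficients:", coef.round(2), "R^2:", round(r2, 3))
    ```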

  14. Semantic Interaction for Visual Analytics: Toward Coupling Cognition and Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander

    2014-07-01

    The dissertation discussed in this article [1] was written in the midst of an era of digitization. The world is becoming increasingly instrumented with sensors, monitoring, and other methods for generating data describing social, physical, and natural phenomena. Thus, data exist with the potential of being analyzed to uncover, or discover, the phenomena from which they were created. However, as the analytic models leveraged to analyze these data continue to increase in complexity and computational capability, how can visualizations and user interaction methodologies adapt and evolve to continue to foster discovery and sensemaking?

  15. Keeping data continuous when analyzing the prognostic impact of a tumor marker: an example with cathepsin D in breast cancer.

    PubMed

    Bossard, N; Descotes, F; Bremond, A G; Bobin, Y; De Saint Hilaire, P; Golfier, F; Awada, A; Mathevet, P M; Berrerd, L; Barbier, Y; Estève, J

    2003-11-01

    The prognostic value of cathepsin D has been recently recognized, but, as with many quantitative tumor markers, its clinical use remains unclear, partly because of methodological issues in defining cut-off values. Guidelines have been proposed for analyzing quantitative prognostic factors, underlining the need for keeping data continuous instead of categorizing them. Flexible approaches, parametric and non-parametric, have been proposed in order to improve the knowledge of the functional form relating a continuous factor to the risk. We studied the prognostic value of cathepsin D in a retrospective hospital cohort of 771 patients with breast cancer, and focused our overall survival analysis, based on the Cox regression, on two flexible approaches: smoothing splines and fractional polynomials. We also determined a cut-off value from the maximum likelihood estimate of a threshold model. These different approaches complemented each other for (1) identifying the functional form relating cathepsin D to the risk, and obtaining a cut-off value, and (2) optimizing the adjustment for complex covariates like age at diagnosis in the final multivariate Cox model. We found a significant increase in the death rate, reaching 70% with a doubling of the level of cathepsin D, after the threshold of 37.5 pmol/mg. The proper prognostic impact of this marker could be confirmed, and a methodology providing appropriate ways to use markers in clinical practice was proposed.
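
    A minimal sketch of the "keep the marker continuous" idea is given below, assuming the lifelines package and entirely synthetic survival data: fractional-polynomial terms of a marker enter a Cox model directly instead of a dichotomized version. It is not the authors' cathepsin D analysis, and all variable names and numbers are invented.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter  # assumed available in the environment

    rng = np.random.default_rng(42)
    n = 300
    marker = rng.lognormal(mean=3.0, sigma=0.6, size=n)     # synthetic marker levels
    hazard = 0.02 * np.exp(0.4 * np.log(marker))            # true effect on the log scale
    time = rng.exponential(1.0 / hazard)
    event = (time < 10).astype(int)
    time = np.minimum(time, 10.0)                           # administrative censoring at t = 10

    df = pd.DataFrame({
        "time": time,
        "event": event,
        "fp_log": np.log(marker),    # fractional-polynomial term, power 0 (log)
        "fp_sqrt": np.sqrt(marker),  # fractional-polynomial term, power 0.5
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.summary[["coef", "p"]])

    # A threshold analysis would instead dichotomise the marker (e.g. marker > some cut-off)
    # and compare fits; the flexible terms above keep the marker continuous.
    ```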

  16. Lean management systems: creating a culture of continuous quality improvement.

    PubMed

    Clark, David M; Silvester, Kate; Knowles, Simon

    2013-08-01

    This is the first in a series of articles describing the application of Lean management systems to Laboratory Medicine. Lean is the term used to describe a principle-based continuous quality improvement (CQI) management system based on the Toyota production system (TPS) that has been evolving for over 70 years. Its origins go back much further and are heavily influenced by the work of W Edwards Deming and the scientific method that forms the basis of most quality management systems. Lean has two fundamental elements--a systematic approach to process improvement by removing waste in order to maximise value for the end-user of the service and a commitment to respect, challenge and develop the people who work within the service to create a culture of continuous improvement. Lean principles have been applied to a growing number of Healthcare systems throughout the world to improve the quality and cost-effectiveness of services for patients and a number of laboratories from all the pathology disciplines have used Lean to shorten turnaround times, improve quality (reduce errors) and improve productivity. Increasingly, models used to plan and implement large scale change in healthcare systems, including the National Health Service (NHS) change model, have evidence-based improvement methodologies (such as Lean CQI) as a core component. Consequently, a working knowledge of improvement methodology will be a core skill for Pathologists involved in leadership and management.

  17. Direct use of linear time-domain aerodynamics in aeroservoelastic analysis: Aerodynamic model

    NASA Technical Reports Server (NTRS)

    Woods, J. A.; Gilbert, Michael G.

    1990-01-01

    The work presented here is the first part of a continuing effort to expand existing capabilities in aeroelasticity by developing the methodology which is necessary to utilize unsteady time-domain aerodynamics directly in aeroservoelastic design and analysis. The ultimate objective is to define a fully integrated state-space model of an aeroelastic vehicle's aerodynamics, structure and controls which may be used to efficiently determine the vehicle's aeroservoelastic stability. Here, the current status of developing a state-space model for linear or near-linear time-domain indicial aerodynamic forces is presented.

  18. A scaleable methodology for assessing the impacts of urban shade on the summer electricity use of residential homes

    NASA Astrophysics Data System (ADS)

    Taylor, Robert Vanderlei

    Our cities are experiencing unprecedented growth while net global temperatures continue to trend warmer, making sustainable urban development and energy conservation pressing public issues. This research explores how urban landscaping -- in particular trees and buildings -- affects summer electricity use in residential homes. I studied the interactions of urban shade and temperature to explore how vegetation distribution and intensity could play a meaningful role in heat mitigation in urban environments. Only a few studies have reconciled modeled electricity savings from tree shade with actual electricity consumption data. This research proposes a methodology for modeling the isolated effects of urban shade (tree shade vs. building shade) on buildings' summertime electricity consumption from micro to mesoscales, empirically validating the modeled shade with actual electricity billing data, and comparing the electric energetic impact of tree shade effects with building shade effects. This proposed methodology seeks to resolve three primary research questions: 1) What are the modeled quantities of urban shade associated with the area of interest (AOI)? 2) To what extent do the effects of shading from trees and buildings mitigate summertime heat in the AOI? 3) To what extent do the shade effects from trees and buildings reduce summertime electricity consumption in the AOI?

  19. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    PubMed Central

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820
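
    As a minimal illustration of the logic-based, parameter-free approach contrasted above with differential-equation models, the sketch below runs a synchronous Boolean update on a made-up three-node circuit. The wiring is purely hypothetical and not a published network.

    ```python
    # Toy three-node logic model (hypothetical wiring): a growth signal activates a kinase,
    # the kinase activates an effector, and the effector feeds back to inhibit the kinase.
    rules = {
        "signal":   lambda s: s["signal"],                      # held fixed as an input
        "kinase":   lambda s: s["signal"] and not s["effector"],
        "effector": lambda s: s["kinase"],
    }

    def step(state):
        """Synchronous update: every node evaluates its rule on the previous state."""
        return {node: bool(rule(state)) for node, rule in rules.items()}

    state = {"signal": True, "kinase": False, "effector": False}
    for t in range(6):
        print(t, state)
        state = step(state)   # the trajectory settles into a small oscillatory attractor
    ```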

  20. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work systematically investigates traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity, and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the latter the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions between the traction- and stress-based approaches are classified. Closed-form results under the plane stress condition are also given. A generic failure criterion of either elliptic, parabolic or hyperbolic type is analyzed in a unified manner, with the classical von Mises (J2), Drucker-Prager, Mohr-Coulomb and many other frequently employed criteria recovered as its particular cases.
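
    As a small numerical illustration of two of the classical criteria named at the end of the abstract, the sketch below evaluates the von Mises equivalent stress and a Drucker-Prager criterion value for one plane-stress state. The stress values and material parameters are invented and the snippet is not taken from the paper.

    ```python
    import numpy as np

    def von_mises_plane_stress(sx, sy, txy):
        """Equivalent stress sqrt(sx^2 - sx*sy + sy^2 + 3*txy^2) under plane stress."""
        return np.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)

    def drucker_prager(sx, sy, txy, alpha, k):
        """Criterion value sqrt(J2) + alpha*I1 - k (failure indicated when >= 0)."""
        i1 = sx + sy                                            # first invariant (sz = 0)
        j2 = von_mises_plane_stress(sx, sy, txy) ** 2 / 3.0     # second deviatoric invariant
        return np.sqrt(j2) + alpha * i1 - k

    # Illustrative stress state and material parameters (MPa).
    sx, sy, txy = 120.0, 40.0, 30.0
    print("von Mises:", round(von_mises_plane_stress(sx, sy, txy), 1), "MPa")
    print("Drucker-Prager margin:", round(drucker_prager(sx, sy, txy, alpha=0.2, k=100.0), 1))
    ```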

  1. An evidential reasoning extension to quantitative model-based failure diagnosis

    NASA Technical Reports Server (NTRS)

    Gertler, Janos J.; Anderson, Kenneth C.

    1992-01-01

    The detection and diagnosis of failures in physical systems characterized by continuous-time operation are studied. A quantitative diagnostic methodology has been developed that utilizes the mathematical model of the physical system. On the basis of the latter, diagnostic models are derived, each of which comprises a set of orthogonal parity equations. To improve the robustness of the algorithm, several models may be used in parallel, providing potentially incomplete and/or conflicting inferences. Dempster's rule of combination is used to integrate evidence from the different models. The basic probability measures are assigned utilizing quantitative information extracted from the mathematical model and from online computation performed therewith.
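
    The sketch below shows Dempster's rule of combination fusing basic probability assignments from two diagnostic models. The frame of discernment (candidate failed components) and the mass values are hypothetical; only the combination rule itself is the standard one named in the abstract.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments whose focal elements are frozensets."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb                     # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: evidence cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}   # renormalize

    # Hypothetical frame of discernment: possible failed components.
    F = frozenset
    m_model1 = {F({"valve"}): 0.6, F({"valve", "sensor"}): 0.3, F({"valve", "sensor", "pump"}): 0.1}
    m_model2 = {F({"sensor"}): 0.5, F({"valve", "sensor"}): 0.4, F({"valve", "sensor", "pump"}): 0.1}
    for focal, mass in dempster_combine(m_model1, m_model2).items():
        print(set(focal), round(mass, 3))
    ```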

  2. Operator function modeling: An approach to cognitive task analysis in supervisory control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.

    1987-01-01

    In a study of models of operators in complex, automated space systems, an operator function model (OFM) methodology was extended to represent cognitive as well as manual operator activities. Development continued on a software tool called OFMdraw, which facilitates construction of an OFM by permitting construction of a heterarchic network of nodes and arcs. Emphasis was placed on development of OFMspert, an expert system designed both to model human operation and to assist real human operators. The system uses a blackboard method of problem solving to make an on-line representation of operator intentions, called ACTIN (actions interpreter).

  3. A Novel Multiscale Physics Based Progressive Failure Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Waas, Anthony M.; Bednarcyk, Brett A.; Collier, Craig S.; Yarrington, Phillip W.

    2008-01-01

    A variable fidelity, multiscale, physics based finite element procedure for predicting progressive damage and failure of laminated continuous fiber reinforced composites is introduced. At every integration point in a finite element model, progressive damage is accounted for at the lamina-level using thermodynamically based Schapery Theory. Separate failure criteria are applied at either the global-scale or the microscale in two different FEM models. A micromechanics model, the Generalized Method of Cells, is used to evaluate failure criteria at the micro-level. The stress-strain behavior and observed failure mechanisms are compared with experimental results for both models.

  4. Spatial, spectral and temporal patterns of tropical forest cover change as observed with multiple scales of optical satellite data.

    Treesearch

    D.J. Hayes; W.B. Cohen

    2006-01-01

    This article describes the development of a methodology for scaling observations of changes in tropical forest cover to large areas at high temporal frequency from coarse-resolution satellite imagery. The approach for estimating proportional forest cover change as a continuous variable is based on a regression model that relates multispectral, multitemporal Moderate...

  5. Partners with Clinical Practice: Evaluating the Student and Staff Experiences of On-Line Continuing Professional Development for Qualified Nephrology Practitioners

    ERIC Educational Resources Information Center

    Hurst, Judith; Quinsee, Susannah

    2005-01-01

    The inclusion of online learning technologies into the higher education (HE) curriculum is frequently associated with the design and development of new models of learning. One could argue that e-learning even demands a reconfiguration of traditional methods of learning and teaching. However, this transformation in pedagogic methodology does not…

  6. Urban Quality Development and Management: Capacity Development and Continued Education for the Sustainable City

    ERIC Educational Resources Information Center

    Lehmann, Martin; Fryd, Ole

    2008-01-01

    Purpose: The purpose of this paper is to describe and discuss the development and the structure of a new international master on the subject of urban quality development and management (UQDM), and explore the potential of the process and the outcome in serving as models adoptable by faculty at other universities. Design/methodology/approach: The…

  7. Legislative and Policy Developments and Imperatives for Advancing the Primary Care Behavioral Health (PCBH) Model.

    PubMed

    Freeman, Dennis S; Hudgins, Cathy; Hornberger, Joel

    2018-06-01

    The Primary Care Behavioral Health (PCBH) practice model continues to gain converts among primary care and behavioral health professionals as the evidence supporting its effectiveness continues to accumulate. Despite a growing number of practices and organizations using the model effectively, widespread implementation has been hampered by outmoded policies and regulatory barriers. As policymakers and legislators begin to recognize the contributions that PCBH model services make to the care of complex patients and the expansion of access to those in need of behavioral health interventions, some encouraging policy initiatives are emerging and the policy environment is becoming more favorable to implementation of the PCBH model. This article outlines the necessity for policy change, exposing the policy issues and barriers that serve to limit the practice of the PCBH model; highlights innovative approaches some states are taking to foster integrated practice; and discusses the compatibility of the PCBH model with the nation's health care reform agenda. Psychologists have emerged as leaders in the design and implementation of PCBH model integration and are encouraged to continue to advance the model through the demonstration of efficient and effective clinical practice, participation in the expansion of an appropriately trained workforce, and advocacy for the inclusion of this practice model in emerging healthcare systems and value-based payment methodologies.

  8. Model and Analytic Processes for Export License Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An approach to developing testable hypotheses for the macro-level assessment methodologies is provided. The outcome of this work suggests that we should develop a Bayes Net for micro-level analysis and continue to focus on Bayes Net, System Dynamics and Economic Input/Output models for assessing macro-level problems. Simultaneously, we need to develop metrics for assessing intent in export control, including the risks and consequences associated with all aspects of export control.

  9. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
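
    For reference, the relative (fractional) sensitivity coefficient described in the opening sentence can be written out explicitly; this is the standard definition rather than a formula quoted from the report:

    ```latex
    % Relative sensitivity of the eigenvalue k to a parameter or cross section \sigma:
    S_{k,\sigma} = \frac{\partial k / k}{\partial \sigma / \sigma}
                 = \frac{\sigma}{k}\,\frac{\partial k}{\partial \sigma}
    ```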

  10. Optimization of lipase-catalyzed biodiesel by isopropanolysis in a continuous packed-bed reactor using response surface methodology.

    PubMed

    Chang, Cheng; Chen, Jiann-Hwa; Chang, Chieh-Ming J; Wu, Tsung-Ta; Shieh, Chwen-Jen

    2009-10-31

    Isopropanolysis reactions were performed using triglycerides with immobilized lipase in a solvent-free environment. This study modeled the degree of isopropanolysis of soybean oil in a continuous packed-bed reactor when Novozym 435 was used as the biocatalyst. Response surface methodology (RSM) and three-level-three-factor Box-Behnken design were employed to evaluate the effects of synthesis parameters, reaction temperature ( degrees C), flow rate (mL/min) and substrate molar ratio of isopropanol to soybean oil, on the percentage molar conversion of biodiesel by transesterification. The results show that flow rate and temperature have a significant effect on the percentage of molar conversion. On the basis of ridge max analysis, the optimum conditions for synthesis were as follows: flow rate 0.1 mL/min, temperature 51.5 degrees C and substrate molar ratio 1:4.14. The predicted value was 76.62+/-1.52% and actual experimental value was 75.62+/-0.81% molar conversion. Moreover, continuous enzymatic process for seven days did not show any appreciable decrease in the percent of molar conversion (75%). This work demonstrates the applicability of lipase catalysis to prepare isopropyl esters by transesterification in solvent-free system with a continuous packed-bed reactor for industrial production.
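
    The second-order response-surface fit underlying a Box-Behnken analysis can be sketched as below. The coded design is the standard three-factor Box-Behnken layout, but the "molar conversion" response values are synthetic and the stationary-point step is only a stand-in for the ridge (ridge max) analysis used in the study.

    ```python
    import numpy as np
    from itertools import combinations, product

    # Box-Behnken design for three coded factors: all (+/-1, +/-1) pairs with the third
    # factor at 0, plus three centre points (15 runs total).
    runs = []
    for i, j in combinations(range(3), 2):
        for a, b in product((-1.0, 1.0), repeat=2):
            row = [0.0, 0.0, 0.0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0.0, 0.0, 0.0]] * 3
    X = np.array(runs)

    rng = np.random.default_rng(7)
    # Synthetic "molar conversion" response with curvature (illustrative numbers only).
    y = 75 - 3*X[:, 0]**2 - 5*X[:, 1]**2 - 2*X[:, 2]**2 + 2*X[:, 0] + rng.normal(0, 0.5, len(X))

    def design_matrix(X):
        """Second-order model: intercept, linear, squared and pairwise interaction terms."""
        x1, x2, x3 = X.T
        return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                x1**2, x2**2, x3**2, x1*x2, x1*x3, x2*x3])

    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

    # Stationary point of the fitted quadratic surface (gradient = 0); ridge analysis would
    # instead trace the constrained optimum as a function of distance from the centre.
    b = beta[1:4]
    B = np.array([[beta[4],   beta[7]/2, beta[8]/2],
                  [beta[7]/2, beta[5],   beta[9]/2],
                  [beta[8]/2, beta[9]/2, beta[6]]])
    print("stationary point (coded units):", np.linalg.solve(-2*B, b).round(2))
    ```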

  11. [The history and phenomenology of the concept of psychosis. A perspective of the Heidelberg school (1913-2008)].

    PubMed

    Bürgy, M

    2009-05-01

    The accomplishments of Heidelberg psychopathology and their continued development are illustrated using the example of the concept of psychosis. Jaspers founded the Heidelberg school by methodologically collating the psychiatric knowledge of his time in a structured fashion and in doing so laid the foundation for modern nosology. While, however, ICD and DSM classifications tend to be modelled on symptoms of expression and behaviour, the phenomenological models which Jaspers introduced into the field of psychiatry rather focused on symptoms of subjective experience. The phenomenological developments of psychopathology which originated in this context are, in the case of the schizophrenic psychoses, presented in a kaleidoscope-like manner. It becomes evident that a legacy-oriented, phenomenological search for specific symptoms is of continued relevance. This historical wealth of knowledge and the clinical exploration of phenomena continue to represent sources of impetus and momentum for the field of psychopathology.

  12. Multistep modeling (MSM) of biomolecular structure application to the A-G mispair in the B-DNA environment

    NASA Technical Reports Server (NTRS)

    Srinivasan, S.; Raghunathan, G.; Shibata, M.; Rein, R.

    1986-01-01

    A multistep modeling procedure has been evolved to study the structural changes introduced by lesions in DNA. We report here the change in the structure of regular B-DNA geometry due to the incorporation of a G(anti)-A(anti) mispair in place of a regular G-C pair, preserving the helix continuity. The energetics of the structure so obtained is compared with the G(anti)-A(syn) configuration under similar constrained conditions. We present the methodology adopted and discuss the results.

  13. Finite-Length Line Source Superposition Model (FLLSSM)

    NASA Astrophysics Data System (ADS)

    1980-03-01

    A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies were represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
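    The superposition idea can be sketched as follows, assuming a constant-strength finite line source in an infinite homogeneous medium (the classical continuous line-source solution obtained by integrating the point-source kernel); the thermal properties, canister layout and source strength are illustrative assumptions, and this is not the FLLSSM code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

k, alpha = 2.5, 1.1e-6        # assumed rock conductivity (W/m/K) and diffusivity (m^2/s)
q_line, L = 100.0, 3.0        # assumed line strength (W/m) and canister length (m)

def dT_line(r, z, t):
    """Temperature rise of one constant-strength finite line source (0 <= z' <= L) at time t."""
    def integrand(zp):
        R = np.hypot(r, z - zp)
        return erfc(R / (2.0 * np.sqrt(alpha * t))) / R
    val, _ = quad(integrand, 0.0, L)
    return q_line / (4.0 * np.pi * k) * val

# Superpose a 3 x 3 pattern of canisters on a 10 m pitch, observed at mid-height.
t = 10.0 * 365.25 * 86400.0                      # 10 years, in seconds
sources = [(ix * 10.0, iy * 10.0) for ix in range(3) for iy in range(3)]
obs = (15.0, 5.0, 1.5)                           # x, y, z of the point of interest
dT = sum(dT_line(np.hypot(obs[0] - sx, obs[1] - sy), obs[2], t) for sx, sy in sources)
print(f"superposed temperature rise: {dT:.2f} K")
```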

  14. General implementation of arbitrary nonlinear quadrature phase gates

    NASA Astrophysics Data System (ADS)

    Marek, Petr; Filip, Radim; Ogawa, Hisashi; Sakaguchi, Atsushi; Takeda, Shuntaro; Yoshikawa, Jun-ichi; Furusawa, Akira

    2018-02-01

    We propose a general methodology for deterministic single-mode quantum interactions that nonlinearly modify a single quadrature variable of a continuous-variable system. The methodology is based on linear coupling of the system to ancillary systems that are subsequently measured by quadrature detectors. The nonlinear interaction is obtained by using the data from the quadrature detection for dynamical manipulation of the coupling parameters. This measurement-induced methodology enables direct realization of arbitrary nonlinear quadrature interactions without the need to construct them from the lowest-order gates. Such nonlinear interactions are crucial for more practical and efficient manipulation of continuous quadrature variables as well as qubits encoded in continuous-variable systems.

  15. Thinking Clearly About Schizotypy: Hewing to the Schizophrenia Liability Core, Considering Interesting Tangents, and Avoiding Conceptual Quicksand

    PubMed Central

    Lenzenweger, Mark F.

    2015-01-01

    The concept of schizotypy represents a rich and complex psychopathology construct. Furthermore, the construct implies a theoretical model that has considerable utility as an organizing framework for the study of schizophrenia, schizophrenia-related psychopathology (eg, delusional disorder, psychosis-NOS (not otherwise specified), schizotypal, and paranoid personality disorder), and putative schizophrenia endophenotypes as suggested by Rado, Meehl, Gottesman, Lenzenweger, and others. The understanding (and misunderstanding) of the schizophrenia-related schizotypy model, particularly as regards clinical illness, as well as an alternative approach to the construct require vigilance in order to ensure the methodological approach continues to yield the fruit that it can in illuminating the pathogenesis of schizophrenia-related psychopathology. The articles in the Special Section in this issue of Schizophrenia Bulletin highlight methodological and theoretical issues that should be examined carefully. PMID:25810061

  16. An automated laboratory-scale methodology for the generation of sheared mammalian cell culture samples.

    PubMed

    Joseph, Adrian; Goldrick, Stephen; Mollet, Michael; Turner, Richard; Bender, Jean; Gruber, David; Farid, Suzanne S; Titchener-Hooker, Nigel

    2017-05-01

    Continuous disk-stack centrifugation is typically used for the removal of cells and cellular debris from mammalian cell culture broths at manufacturing scale. The use of scale-down methods to characterise disk-stack centrifugation performance enables substantial reductions in material requirements and allows a much wider design space to be tested than is currently possible at pilot scale. The process of scaling down centrifugation has historically been challenging due to the difficulties in mimicking the Energy Dissipation Rates (EDRs) in typical machines. This paper describes an alternative and easy-to-assemble automated capillary-based methodology to generate levels of EDRs consistent with those found in a continuous disk-stack centrifuge. Variations in EDR were achieved through changes in capillary internal diameter and the flow rate of operation through the capillary. The EDRs found to match the levels of shear in the feed zone of a pilot-scale centrifuge using the experimental method developed in this paper (2.4×10⁵ W/kg) are consistent with those obtained through previously published computational fluid dynamic (CFD) studies (2.0×10⁵ W/kg). Furthermore, this methodology can be incorporated into existing scale-down methods to model the process performance of continuous disk-stack centrifuges. This was demonstrated through the characterisation of the effects of culture hold time, culture temperature and EDRs on centrate quality. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Transforming Boolean models to continuous models: methodology and application to T-cell receptor signaling

    PubMed Central

    Wittmann, Dominik M; Krumsiek, Jan; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Klamt, Steffen; Theis, Fabian J

    2009-01-01

    Background The understanding of regulatory and signaling networks has long been a core objective in Systems Biology. Knowledge about these networks is mainly of qualitative nature, which allows the construction of Boolean models, where the state of a component is either 'off' or 'on'. While often able to capture the essential behavior of a network, these models can never reproduce detailed time courses of concentration levels. Nowadays however, experiments yield more and more quantitative data. An obvious question therefore is how qualitative models can be used to explain and predict the outcome of these experiments. Results In this contribution we present a canonical way of transforming Boolean into continuous models, where the use of multivariate polynomial interpolation allows transformation of logic operations into a system of ordinary differential equations (ODE). The method is standardized and can readily be applied to large networks. Other, more limited approaches to this task are briefly reviewed and compared. Moreover, we discuss and generalize existing theoretical results on the relation between Boolean and continuous models. As a test case a logical model is transformed into an extensive continuous ODE model describing the activation of T-cells. We discuss how parameters for this model can be determined such that quantitative experimental results are explained and predicted, including time-courses for multiple ligand concentrations and binding affinities of different ligands. This shows that from the continuous model we may obtain biological insights not evident from the discrete one. Conclusion The presented approach will facilitate the interaction between modeling and experiments. Moreover, it provides a straightforward way to apply quantitative analysis methods to qualitatively described systems. PMID:19785753
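    A toy sketch of the transformation idea, assuming a two-node network that is not the T-cell model of the paper: each Boolean update rule is interpolated multilinearly over [0,1]^n, its inputs are passed through a Hill-type normalisation, and the result is embedded in an ODE with a relaxation time constant.

```python
import numpy as np
from itertools import product
from scipy.integrate import solve_ivp

def boolecube(boolean_func, n_inputs):
    """Multilinear interpolation of a Boolean update rule over [0,1]^n
    (the 'BooleCube' idea used when transforming logic models to ODEs)."""
    corners = list(product([0, 1], repeat=n_inputs))
    values = {c: float(boolean_func(*c)) for c in corners}
    def f(x):
        total = 0.0
        for c in corners:
            w = np.prod([xi if ci else 1.0 - xi for xi, ci in zip(x, c)])
            total += values[c] * w
        return total
    return f

def hill(x, k=0.5, n=4):
    """Sigmoidal normalisation that sharpens the continuous inputs."""
    return x**n / (x**n + k**n)

# Toy 2-node example (assumed, not the T-cell model): B inhibits A, A activates B.
f_A = boolecube(lambda b: int(not b), 1)
f_B = boolecube(lambda a: int(a), 1)
tau = 1.0

def rhs(t, y):
    a, b = y
    return [(f_A([hill(b)]) - a) / tau, (f_B([hill(a)]) - b) / tau]

sol = solve_ivp(rhs, (0.0, 30.0), [0.9, 0.1], dense_output=True)
print("final state:", sol.y[:, -1])   # the continuous analogue of the Boolean attractor
```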

  18. Simultaneous solution of the geoid and the surface density anomalies

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.; Karimi, R.; AllahTavakoli, Y.

    2012-04-01

    The main application of land gravity data in geodesy is "local geoid" or "local gravity field" modeling, whereas the same data can play a vital role in anomalous mass-density modeling for geophysical exploration. In local geoid computations based on Geodetic Boundary Value Problems (GBVP), the effect of the topographic (or residual terrain) masses must be removed via application of the Newton integral in order to perform the downward continuation in a harmonic space. However, harmonization of the downward continuation domain may not be perfectly possible unless accurate information about the mass-density of the topographic masses is available. On the other hand, from the exploration point of view, the unwanted topographic masses within the aforementioned procedure can be regarded as the signal. In order to overcome the effect of the remaining masses within the remove step of the GBVP, which cause uncertainties in the mathematical modeling of the problem, we propose a methodology for the simultaneous solution of the geoid and residual surface mass-density modeling. In other words, a new mathematical model is offered which both provides the needed harmonic space for downward continuation and at the same time accounts for the non-harmonic terms of the gravitational field, using them for residual mass-density modeling within the topographic region. The presented model enjoys uniqueness of the solution, in contrast to the inverse application of the Newton integral for mass-density modeling, which is non-unique, and it only needs regularization to remove its instability. In this way, the solution of the model provides both the incremental harmonic gravitational potential on the surface of the reference ellipsoid, as the gravity field model, and the lateral surface mass-density variations, via the second derivatives of the non-harmonic terms of the gravitational field. As a case study and accuracy verification, the proposed methodology is applied to the identification of salt geological structures as well as geoid computations along the northern coasts of the Persian Gulf.

  19. A Proposed Performance-Based System for Teacher Interactive Electronic Continuous Professional Development (TIE-CPD)

    ERIC Educational Resources Information Center

    Razak, Rafiza Abdul; Yusop, Farrah Dina; Idris, Aizal Yusrina; Al-Sinaiyah, Yanbu; Halili, Siti Hajar

    2016-01-01

    The paper introduces Teacher Interactive Electronic Continuous Professional Development (TIE-CPD), an online interactive training system. The framework and methodology of TIE-CPD are designed with functionalities comparable with existing e-training systems. The system design and development literature offers several methodology and framework…

  20. Modeling of the effect of freezer conditions on the principal constituent parameters of ice cream by using response surface methodology.

    PubMed

    Inoue, K; Ochi, H; Taketsuka, M; Saito, H; Sakurai, K; Ichihashi, N; Iwatsuki, K; Kokubo, S

    2008-05-01

    A systematic analysis was carried out by using response surface methodology to create a quantitative model of the synergistic effects of conditions in a continuous freezer [mix flow rate (L/h), overrun (%), cylinder pressure (kPa), drawing temperature (°C), and dasher speed (rpm)] on the principal constituent parameters of ice cream [rate of fat destabilization (%), mean air cell diameter (μm), and mean ice crystal diameter (μm)]. A central composite face-centered design was used for this study. Thirty-one combinations of the 5 above-mentioned freezer conditions were designed (including replicates at the center point), and ice cream samples were manufactured and examined in a continuous freezer under the selected conditions. The responses were the 3 variables given above. A quadratic model was constructed, with the freezer conditions as the independent variables and the ice cream characteristics as the dependent variables. The coefficients of determination (R²) were greater than 0.9 for all 3 responses, but Q², the index used here for the capability of the model for predicting future observed values of the responses, was negative for both the mean ice crystal diameter and the mean air cell diameter. Therefore, pruned models were constructed by removing terms that had contributed little to the prediction in the original model and by refitting the regression model. It was demonstrated that these pruned models provided good fits to the data in terms of R², Q², and ANOVA. The effects of freezer conditions were expressed quantitatively in terms of the 3 responses. The drawing temperature (°C) was found to have a greater effect on ice cream characteristics than any of the other factors.
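    The pruning step can be sketched as below with placeholder data in coded units (not the published ice cream measurements): a full quadratic model is fitted, terms with little support are dropped on a simple p-value criterion, and the reduced model is refitted. The column names and the 0.05 cut-off are assumptions for the example.

```python
import numpy as np, pandas as pd, statsmodels.api as sm

rng = np.random.default_rng(0)
# Placeholder central-composite-style data in coded units (not the published ice cream data).
X = pd.DataFrame(rng.uniform(-1, 1, size=(31, 3)), columns=["flow", "overrun", "draw_T"])
y = 60 - 8*X["draw_T"] + 3*X["overrun"] + 2*X["draw_T"]**2 + rng.normal(0, 1, 31)

def quadratic_terms(X):
    """Intercept, linear, square and two-way interaction columns."""
    Z = X.copy()
    cols = list(X.columns)
    for i, a in enumerate(cols):
        Z[a + "^2"] = X[a]**2
        for b in cols[i+1:]:
            Z[a + "*" + b] = X[a]*X[b]
    return sm.add_constant(Z)

full = sm.OLS(y, quadratic_terms(X)).fit()

# 'Pruning': drop non-significant terms (keep the intercept) and refit.
keep = [c for c in full.params.index if c == "const" or full.pvalues[c] < 0.05]
pruned = sm.OLS(y, quadratic_terms(X)[keep]).fit()
print("full R2 =", round(full.rsquared, 3), "| pruned R2 =", round(pruned.rsquared, 3))
print("retained terms:", [c for c in keep if c != "const"])
```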

  1. Alcohol and drug treatment outcome studies: new methodological review (2005-2010) and comparison with past reviews.

    PubMed

    Robinson, Sean M; Sobell, Linda Carter; Sobell, Mark B; Arcidiacono, Steven; Tzall, David

    2014-01-01

    Several methodological reviews of alcohol treatment outcome studies and one review of drug studies have been published over the past 40 years. Although past reviews demonstrated methodological improvements in alcohol studies, they also found continued deficiencies. The current review allows for an updated evaluation of the methodological rigor of alcohol and drug studies and, by utilizing inclusion criteria similar to previous reviews, it allows for a comparative review over time. In addition, this is the first review that compares the methodology of alcohol and drug treatment outcome studies published during the same time period. The methodology of 25 alcohol and 11 drug treatment outcome studies published from 2005 through 2010 that met the review's inclusion criteria was evaluated. The majority of the variables evaluated were used in prior reviews. The current review found that more alcohol and drug treatment outcome studies are now using continuous substance use measures and assessing problem severity. Although there have been methodological improvements over time, the current reviews differed little from their most recent past counterparts. Despite this finding, some areas, particularly the continued low reporting of demographic data, need strengthening. Improvement in the methodological rigor of alcohol and drug treatment outcome studies has occurred over time. The current review found few differences between alcohol and drug study methodologies, as well as few differences between the current review and the most recent past alcohol and drug reviews. © 2013 Elsevier Ltd. All rights reserved.

  2. A methodology to design heuristics for model selection based on the characteristics of data: Application to investigate when the Negative Binomial Lindley (NB-L) is preferred over the Negative Binomial (NB).

    PubMed

    Shirazi, Mohammadali; Dhavala, Soma Sekhar; Lord, Dominique; Geedipally, Srinivas Reddy

    2017-10-01

    Safety analysts usually use post-modeling methods, such as Goodness-of-Fit statistics or the Likelihood Ratio Test, to decide between two or more competing distributions or models. Such metrics require all competing distributions to be fitted to the data before any comparisons can be made. Given the continuous growth in newly introduced statistical distributions, choosing the best one using such post-modeling methods is not a trivial task, in addition to all the theoretical or numerical issues the analyst may face during the analysis. Furthermore, and most importantly, these measures or tests do not provide any intuition about why a specific distribution (or model) is preferred over another (Goodness-of-Logic). This paper addresses these issues by proposing a methodology to design heuristics for model selection based on the characteristics of the data, in terms of descriptive summary statistics, before fitting the models. The proposed methodology employs two analytic tools, (1) Monte Carlo simulations and (2) machine learning classifiers, to design easy heuristics that predict the label of the 'most-likely-true' distribution for analyzing data. The proposed methodology was applied to investigate when the recently introduced Negative Binomial Lindley (NB-L) distribution is preferred over the Negative Binomial (NB) distribution. Heuristics were designed to select the 'most-likely-true' distribution between these two distributions, given a set of prescribed summary statistics of the data. The proposed heuristics were successfully compared against classical tests for several real or observed datasets. Not only are they easy to use and free of any post-modeling inputs, but they also give the analyst useful information about why the NB-L is preferred over the NB, or vice versa, when modeling data. Copyright © 2017 Elsevier Ltd. All rights reserved.
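    The workflow (simulate labelled datasets, summarise them, train an interpretable classifier) can be sketched as follows; for brevity a heavier-tailed NB mixture stands in for the NB-L, so the simulator below is an assumption and only the structure of the procedure mirrors the paper.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

def summary(y):
    """Descriptive statistics used as classifier features."""
    return [y.mean(), y.var(), stats.skew(y), (y == 0).mean(), y.max()]

def simulate(label, n=200):
    """Draw one dataset from either a plain NB or a heavier-tailed NB mixture
    (the latter is only a stand-in for NB-L to keep the sketch short)."""
    mu, disp = rng.uniform(1, 10), rng.uniform(0.3, 2.0)
    if label == 1:                                   # heavy-tailed alternative
        mu = mu * rng.gamma(shape=1.0, scale=1.0, size=n)
    p = disp / (disp + mu)
    return rng.negative_binomial(disp, p, size=n)

X, labels = [], []
for _ in range(2000):
    lab = int(rng.integers(0, 2))
    X.append(summary(simulate(lab)))
    labels.append(lab)

clf = DecisionTreeClassifier(max_depth=3).fit(X, labels)   # shallow tree = readable heuristic
print(export_text(clf, feature_names=["mean", "var", "skew", "zero_frac", "max"]))
```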

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, G.-H.; Pesaran, A.; Smith, K.

    The objectives of this paper are: (1) continue to explore thermal abuse behaviors of Li-ion cells and modules that are affected by local conditions of heat and materials; (2) use the 3D Li-ion battery thermal abuse 'reaction' model developed for cells to explore the impact of the location of internal short, its heating rate, and thermal properties of the cell; (3) continue to understand the mechanisms and interactions between heat transfer and chemical reactions during thermal runaway for Li-ion cells and modules; and (4) explore the use of the developed methodology to support the design of abuse-tolerant Li-ion battery systems.

  4. Sediment source fingerprinting as an aid to catchment management: A review of the current state of knowledge and a methodological decision-tree for end-users

    USGS Publications Warehouse

    Collins, A.L; Pulley, S.; Foster, I.D.L; Gellis, Allen; Porto, P.; Horowitz, A.J.

    2017-01-01

    The growing awareness of the environmental significance of fine-grained sediment fluxes through catchment systems continues to underscore the need for reliable information on the principal sources of this material. Source estimates are difficult to obtain using traditional monitoring techniques, but sediment source fingerprinting or tracing procedures, have emerged as a potentially valuable alternative. Despite the rapidly increasing numbers of studies reporting the use of sediment source fingerprinting, several key challenges and uncertainties continue to hamper consensus among the international scientific community on key components of the existing methodological procedures. Accordingly, this contribution reviews and presents recent developments for several key aspects of fingerprinting, namely: sediment source classification, catchment source and target sediment sampling, tracer selection, grain size issues, tracer conservatism, source apportionment modelling, and assessment of source predictions using artificial mixtures. Finally, a decision-tree representing the current state of knowledge is presented, to guide end-users in applying the fingerprinting approach.

  5. Advances in the continuous monitoring of erosion and deposition dynamics: Developments and applications of the new PEEP-3T system

    NASA Astrophysics Data System (ADS)

    Lawler, D. M.

    2008-01-01

    In most episodic erosion and deposition systems, knowledge of the timing of geomorphological change, in relation to fluctuations in the driving forces, is crucial to strong erosion process inference, and model building, validation and development. A challenge for geomorphology, however, is that few studies have focused on geomorphological event structure (timing, magnitude, frequency and duration of individual erosion and deposition events), in relation to applied stresses, because of the absence of key monitoring methodologies. This paper therefore (a) presents full details of a new erosion and deposition measurement system — PEEP-3T — developed from the Photo-Electronic Erosion Pin sensor in five key areas, including the addition of nocturnal monitoring through the integration of the Thermal Consonance Timing (TCT) concept, to produce a continuous sensing system; (b) presents novel high-resolution datasets from the redesigned PEEP-3T system for river bank system of the Rivers Nidd and Wharfe, northern England, UK; and (c) comments on their potential for wider application throughout geomorphology to address these key measurement challenges. Relative to manual methods of erosion and deposition quantification, continuous PEEP-3T methodologies increase the temporal resolution of erosion/deposition event detection by more than three orders of magnitude (better than 1-second resolution if required), and this facility can significantly enhance process inference. Results show that river banks are highly dynamic thermally and respond quickly to radiation inputs. Data on bank retreat timing, fixed with PEEP-3T TCT evidence, confirmed that they were significantly delayed up to 55 h after flood peaks. One event occurred 13 h after emergence from the flow. This suggests that mass failure processes rather than fluid entrainment dominated the system. It is also shown how, by integrating turbidity instrumentation with TCT ideas, linkages between sediment supply and sediment flux can be forged at event timescales, and a lack of sediment exhaustion was evident here. Five challenges for wider geomorphological process investigation are discussed. This event-based dynamics approach, based on continuous monitoring methodologies, appears to have considerable wider potential for stronger process inference and model testing and validation in many areas of geomorphology.

  6. Return on Investment Analysis for the Almond Board of California

    DTIC Science & Technology

    2004-06-01

    general approach for the analysis is first to identify relevant factors concerning consumer behavior using exploratory factor analysis (EFA) and...That completed the intermediate stage of the conceptual model below, referring to the latent drivers of consumer behavior that affect the almond... consumer behavior remains a challenge that will have to be continuously addressed by the ABC management. Finally, to improve the methodology for

  7. A Center for Excellence in Mathematical Sciences Final Progress Report

    DTIC Science & Technology

    1997-02-18

    together with a sampling rule of the form of (5): i(t) = G(x(t), t) + B(t) (4), where G(·, t) is the continualized version of g(·, k) for t ∈ [kΔ, (k+1)Δ), k ∈ N (5) …

  8. Downward continuation of airborne gravity data by means of the change of boundary approach

    NASA Astrophysics Data System (ADS)

    Mansi, A. H.; Capponi, M.; Sampietro, D.

    2018-03-01

    Within the modelling of gravity data, a common practice is the upward/downward continuation of the signal, i.e. the process of continuing the gravitational signal in the vertical direction away from or closer to the sources, respectively. The gravity field, being a potential field, satisfies Laplace's equation outside the masses, which means that this analytical continuation can be performed unambiguously only in a source-free domain. The analytical continuation problem has been solved both in the space and spectral domains by exploiting different algorithms. As is well known, the downward continuation operator, differently from the upward one, is unstable, due to its spectral characteristics similar to those of a high-pass filter, and several regularization methods have been proposed to stabilize it. In this work, an iterative procedure to downward/upward continue gravity field observations acquired at different altitudes is proposed. This methodology is based on the change of boundary principle and has been expressly devised for aerogravimetric observations for geophysical exploration purposes. Within this field of application, several simplifications can usually be made, basically due to the specific characteristics of airborne surveys, which are usually flown at almost constant altitude as close as possible to the terrain. For instance, these characteristics, as shown in the present work, allow the downward continuation to be performed without the need for any regularization. The goodness of the proposed methodology has been evaluated by means of a numerical test on real data acquired in the South of Australia. The test shows that it is possible to move the aerogravimetric data, acquired along tracks with a maximum height difference of about 250 m, with accuracies of the order of 10^{-3} mGal.
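    A simple planar stand-in for the scheme, assuming FFT-based upward continuation on a regular grid and a fixed-point iteration that adjusts the lower-boundary field until its upward continuation matches the flight-level data; the grid spacing, heights and synthetic field are assumptions, and the real change-of-boundary formulation is not reproduced.

```python
import numpy as np

def upward_continue(g, dz, dx):
    """Planar FFT upward continuation by a height dz (classical exp(-|k| dz) operator)."""
    ny, nx = g.shape
    ky = np.fft.fftfreq(ny, dx) * 2*np.pi
    kx = np.fft.fftfreq(nx, dx) * 2*np.pi
    K = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    return np.real(np.fft.ifft2(np.fft.fft2(g) * np.exp(-K * dz)))

def downward_continue_iterative(g_obs, dz, dx, n_iter=50):
    """Fixed-point iteration: adjust the ground-level field until its upward
    continuation reproduces the flight-level observations (a simple stand-in
    for the change-of-boundary scheme described in the abstract)."""
    g = g_obs.copy()
    for _ in range(n_iter):
        g = g + (g_obs - upward_continue(g, dz, dx))
    return g

# Synthetic check: make a 'ground' field, observe it 250 m higher, continue it back down.
x = np.arange(128) * 500.0                       # 500 m grid spacing (assumed)
ground = np.sin(x[None, :] / 4000.0) + 0.3*np.cos(x[:, None] / 2500.0)
flight = upward_continue(ground, 250.0, 500.0)
recovered = downward_continue_iterative(flight, 250.0, 500.0)
print("max recovery error:", np.abs(recovered - ground).max())
```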

  9. Using continuous process improvement methodology to standardize nursing handoff communication.

    PubMed

    Klee, Kristi; Latta, Linda; Davis-Kirsch, Sallie; Pecchia, Maria

    2012-04-01

    The purpose of this article was to describe the use of continuous performance improvement (CPI) methodology to standardize nurse shift-to-shift handoff communication. The goals of the process were to standardize the content and process of shift handoff, improve patient safety, increase patient and family involvement in the handoff process, and decrease end-of-shift overtime. This article describes process changes made over a 4-year period as a result of applying the plan-do-check-act procedure, which is an integral part of the CPI methodology, and discusses further work needed to continue to refine this critical nursing care process. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Myocardial Infarct Segmentation from Magnetic Resonance Images for Personalized Modeling of Cardiac Electrophysiology

    PubMed Central

    Ukwatta, Eranga; Arevalo, Hermenegild; Li, Kristina; Yuan, Jing; Qiu, Wu; Malamas, Peter; Wu, Katherine C.

    2016-01-01

    Accurate representation of myocardial infarct geometry is crucial to patient-specific computational modeling of the heart in ischemic cardiomyopathy. We have developed a methodology for segmentation of left ventricular (LV) infarct from clinically acquired, two-dimensional (2D), late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) images, for personalized modeling of ventricular electrophysiology. The infarct segmentation was expressed as a continuous min-cut optimization problem, which was solved using its dual formulation, the continuous max-flow (CMF). The optimization objective comprised a smoothness term and a data term that quantified the similarity between the image intensity histograms of segmented regions and those of a set of training images. A manual segmentation of the LV myocardium was used to initialize and constrain the developed method. The three-dimensional geometry of the infarct was reconstructed from its segmentation using an implicit, shape-based interpolation method. The proposed methodology was extensively evaluated using metrics based on geometry and on outcomes of individualized electrophysiological simulations of cardiac (dys)function. Several existing LV infarct segmentation approaches were implemented and compared with the proposed method. Our results demonstrated that the CMF method was more accurate than the existing approaches in reproducing expert manual LV infarct segmentations and in electrophysiological simulations. The infarct segmentation method we have developed and comprehensively evaluated in this study constitutes an important step in advancing clinical applications of personalized simulations of cardiac electrophysiology. PMID:26731693

  11. Structural damage continuous monitoring by using a data driven approach based on principal component analysis and cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid

    2017-05-01

    Continuous monitoring for damage detection in structural assessment requires low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology that is highly feasible for continuous damage assessment. Specifically, an algorithm based on a data-driven approach using principal component analysis, with the acquired signals pre-processed by means of cross-correlation functions, is discussed. A carbon steel pipe section and a laboratory tower were used as test structures in order to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two types of damage cases are studied: a crack and a leak, for each structure respectively. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
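    A compact sketch of the data-driven chain described above: cross-correlation functions as features, a PCA baseline model built from healthy acquisitions, and a Q (squared prediction error) statistic with an empirical threshold to flag abnormal responses. The signals, model size and threshold percentile are assumptions for illustration.

```python
import numpy as np

def features(signals, ref):
    """Cross-correlate each acquired signal with a reference excitation and
    keep the correlation sequence as the feature vector."""
    return np.array([np.correlate(s, ref, mode="full") for s in signals])

def fit_pca(X, n_comp=3):
    """Baseline PCA model (mean, scaling, principal directions) from healthy data."""
    mu, sd = X.mean(0), X.std(0) + 1e-12
    Z = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return mu, sd, Vt[:n_comp]

def q_statistic(X, model):
    """Q (squared prediction error) index: large values flag abnormal structure."""
    mu, sd, P = model
    Z = (X - mu) / sd
    resid = Z - Z @ P.T @ P
    return (resid**2).sum(axis=1)

rng = np.random.default_rng(2)
ref = rng.standard_normal(64)                                   # assumed excitation record
healthy = ref + 0.05*rng.standard_normal((40, 64))              # baseline acquisitions
damaged = np.roll(ref, 3) + 0.05*rng.standard_normal((10, 64))  # phase shift mimics damage

model = fit_pca(features(healthy, ref))
threshold = np.percentile(q_statistic(features(healthy, ref), model), 99)
alarms = (q_statistic(features(damaged, ref), model) > threshold).sum()
print("alarms on damaged set:", alarms, "of 10")
```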

  12. Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory

    NASA Astrophysics Data System (ADS)

    Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei

    2015-08-01

    For a general two-cluster energy transfer network, a new methodology of the generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.

  13. Methodologies for Effective Writing Instruction in EFL and ESL Classrooms

    ERIC Educational Resources Information Center

    Al-Mahrooqi, Rahma, Ed.; Thakur, Vijay Singh; Roscoe, Adrian

    2015-01-01

    Educators continue to strive for advanced teaching methods to bridge the gap between native and non-native English speaking students. Lessons on written forms of communication continue to be a challenge recognized by educators who wish to improve student comprehension and overall ability to write clearly and expressively. "Methodologies for…

  14. Adapting to Uncertainty: Comparing Methodological Approaches to Climate Adaptation and Mitigation Policy

    NASA Astrophysics Data System (ADS)

    Huda, J.; Kauneckis, D. L.

    2013-12-01

    Climate change adaptation represents a number of unique policy-making challenges. Foremost among these is dealing with the range of future climate impacts to a wide scope of inter-related natural systems, their interaction with social and economic systems, and uncertainty resulting from the variety of downscaled climate model scenarios and climate science projections. These cascades of uncertainty have led to a number of new approaches as well as a reexamination of traditional methods for evaluating risk and uncertainty in policy-making. Policy makers are required to make decisions and formulate policy irrespective of the level of uncertainty involved and while a debate continues regarding the level of scientific certainty required in order to make a decision, incremental change in the climate policy continues at multiple governance levels. This project conducts a comparative analysis of the range of methodological approaches that are evolving to address uncertainty in climate change policy. It defines 'methodologies' to include a variety of quantitative and qualitative approaches involving both top-down and bottom-up policy processes that attempt to enable policymakers to synthesize climate information into the policy process. The analysis examines methodological approaches to decision-making in climate policy based on criteria such as sources of policy choice information, sectors to which the methodology has been applied, sources from which climate projections were derived, quantitative and qualitative methods used to deal with uncertainty, and the benefits and limitations of each. A typology is developed to better categorize the variety of approaches and methods, examine the scope of policy activities they are best suited for, and highlight areas for future research and development.

  15. Preliminary Results from a Model-Driven Architecture Methodology for Development of an Event-Driven Space Communications Service Concept

    NASA Technical Reports Server (NTRS)

    Roberts, Christopher J.; Morgenstern, Robert M.; Israel, David J.; Borky, John M.; Bradley, Thomas H.

    2017-01-01

    NASA's next generation space communications network will involve dynamic and autonomous services analogous to services provided by current terrestrial wireless networks. This architecture concept, known as the Space Mobile Network (SMN), is enabled by several technologies now in development. A pillar of the SMN architecture is the establishment and utilization of a continuous bidirectional control plane space link channel and a new User Initiated Service (UIS) protocol to enable more dynamic and autonomous mission operations concepts, reduced user space communications planning burden, and more efficient and effective provider network resource utilization. This paper provides preliminary results from the application of model driven architecture methodology to develop UIS. Such an approach is necessary to ensure systematic investigation of several open questions concerning the efficiency, robustness, interoperability, scalability and security of the control plane space link and UIS protocol.

  16. Modeling of the effect of freezer conditions on the hardness of ice cream using response surface methodology.

    PubMed

    Inoue, K; Ochi, H; Habara, K; Taketsuka, M; Saito, H; Ichihashi, N; Iwatsuki, K

    2009-12-01

    The effect of conventional continuous freezer parameters [mix flow (L/h), overrun (%), drawing temperature (°C), cylinder pressure (kPa), and dasher speed (rpm)] on the hardness of ice cream at varying measurement temperatures (-5, -10, and -15 °C) was investigated systematically using response surface methodology (central composite face-centered design), and the relationships were expressed as statistical models. The range (maximum and minimum values) of each freezer parameter was set according to the actual capability of the conventional freezer and applicability to the manufacturing process. Hardness was measured using a penetrometer. These models showed that overrun and drawing temperature had significant effects on hardness. The models can be used to optimize freezer conditions to make ice cream of the least possible hardness under the highest overrun (120%) and a drawing temperature of approximately -5.5 °C (slightly warmer than the lowest drawing temperature of -6.5 °C) within the range of this study. With reference to the structural elements of the ice cream, we suggest that the volume of overrun and the ice crystal content, ice crystal size, and fat globule destabilization affect the hardness of ice cream. In addition, the combination of a simple instrumental parameter and response surface methodology allows us to show the relation between freezer conditions and one of the most important properties, hardness, visually and quantitatively at a practical level.

  17. Chemometric strategy for modeling metabolic biological space along the gastrointestinal tract and assessing microbial influences.

    PubMed

    Martin, François-Pierre J; Montoliu, Ivan; Kochhar, Sunil; Rezzi, Serge

    2010-12-01

    Over the past decade, the analysis of metabolic data with advanced chemometric techniques has offered the potential to explore functional relationships among biological compartments in relation to the structure and function of the intestine. However, the employed methodologies, generally based on regression modeling techniques, have given emphasis to region-specific metabolic patterns, while providing only limited insights into the spatiotemporal metabolic features of the complex gastrointestinal system. Hence, novel approaches are needed to analyze metabolic data to reconstruct the metabolic biological space associated with the evolving structures and functions of an organ such as the gastrointestinal tract. Here, we report the application of multivariate curve resolution (MCR) methodology to model metabolic relationships along the gastrointestinal compartments in relation to its structure and function using data from our previous metabonomic analysis. The method simultaneously summarizes metabolite occurrence and contribution to continuous metabolic signatures of the different biological compartments of the gut tract. This methodology sheds new light onto the complex web of metabolic interactions with gut symbionts that modulate host cell metabolism in surrounding gut tissues. In the future, such an approach will be key to provide new insights into the dynamic onset of metabolic deregulations involved in region-specific gastrointestinal disorders, such as Crohn's disease or ulcerative colitis.
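    The core of an MCR decomposition can be sketched as an alternating least squares loop with non-negativity, applied here to a synthetic 'gut section x metabolite' table; the data, the number of components and the absence of further constraints are assumptions, so this is only the skeleton of the approach.

```python
import numpy as np

def mcr_als(D, n_comp, n_iter=200, seed=0):
    """Bare-bones multivariate curve resolution by alternating least squares:
    D (samples x variables) ~ C @ S.T with non-negative concentration profiles C
    and spectral-like profiles S. No constraints beyond non-negativity and no
    convergence diagnostics -- a sketch of the idea only."""
    rng = np.random.default_rng(seed)
    n_samples, n_vars = D.shape
    C = rng.random((n_samples, n_comp))
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)    # profiles
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)  # concentrations
    return C, S

# Synthetic 'gut sections x metabolites' table built from two overlapping patterns.
rng = np.random.default_rng(1)
sections = np.linspace(0, 1, 30)                          # proximal -> distal position
c_true = np.column_stack([np.exp(-3*sections), sections**2])
s_true = np.abs(rng.standard_normal((40, 2)))
D = c_true @ s_true.T + 0.01*rng.standard_normal((30, 40))

C, S = mcr_als(D, n_comp=2)
print("relative reconstruction error:", np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))
```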

  18. Optimizing low impact development (LID) for stormwater runoff treatment in urban area, Korea: Experimental and modeling approach.

    PubMed

    Baek, Sang-Soo; Choi, Dong-Ho; Jung, Jae-Woon; Lee, Hyung-Jin; Lee, Hyuk; Yoon, Kwang-Sik; Cho, Kyung Hwa

    2015-12-01

    Continued urbanization and development result in an increase in impervious areas and in surface runoff carrying pollutants. One of the greatest issues in pollutant emissions is the first flush effect (FFE), i.e. a greater discharge rate of pollutant mass in the early part of a storm. Low impact development (LID) practices have been mentioned as a promising strategy to control urban stormwater runoff and pollution in the urban ecosystem. However, this requires many experimental and modeling efforts to test LID characteristics and propose an adequate guideline for optimizing LID management. In this study, we propose a novel methodology to optimize the sizes of different types of LID by conducting intensive stormwater monitoring and numerical modeling at a commercial site in Korea. The proposed methodology optimizes LID size in an attempt to moderate the FFE on a receiving waterbody; accordingly, the main objective of the optimization is to minimize the mass first flush (MFF), an indicator for quantifying the FFE. The optimal sizes of 6 different LIDs ranged from 1.2 mm to 3.0 mm in terms of runoff depths, which significantly moderated the FFE. We hope that the newly proposed methodology can be instructive for establishing LID strategies to mitigate the FFE. Copyright © 2015 Elsevier Ltd. All rights reserved.
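    One common way to quantify the first flush, and the indicator minimized above, is a mass first flush ratio: the fraction of total pollutant mass carried by a given fraction of total runoff volume, divided by that volume fraction. A hedged sketch with made-up hydrograph and pollutograph values follows.

```python
import numpy as np

def mass_first_flush(flow, conc, volume_fraction=0.3):
    """MFF ratio at a given fraction of runoff volume: the fraction of the total
    pollutant mass delivered by that fraction of the total volume, divided by the
    volume fraction itself (values > 1 indicate a first-flush effect)."""
    flow, conc = np.asarray(flow, float), np.asarray(conc, float)
    vol = np.cumsum(flow)
    mass = np.cumsum(flow * conc)
    v_frac = vol / vol[-1]
    m_frac = mass / mass[-1]
    m_at_v = np.interp(volume_fraction, v_frac, m_frac)
    return m_at_v / volume_fraction

# Hypothetical 5-minute hydrograph and pollutograph for one storm (made-up numbers).
flow = [2, 8, 15, 12, 9, 6, 4, 3, 2, 1]             # L/s
conc = [220, 180, 120, 70, 50, 40, 35, 30, 28, 25]  # mg/L, high early = first flush
print("MFF30 =", round(mass_first_flush(flow, conc, 0.3), 2))
```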

  19. Methodology of Comparative Analysis of Public School Teachers' Continuing Professional Development in Great Britain, Canada and the USA

    ERIC Educational Resources Information Center

    Mukan, Nataliya; Kravets, Svitlana

    2015-01-01

    In the article the methodology of comparative analysis of public school teachers' continuing professional development (CPD) in Great Britain, Canada and the USA has been presented. The main objectives are defined as theoretical analysis of scientific and pedagogical literature, which highlights different aspects of the problem under research;…

  20. Semiparametric regression during 2003–2007*

    PubMed Central

    Ruppert, David; Wand, M.P.; Carroll, Raymond J.

    2010-01-01

    Semiparametric regression is a fusion between parametric regression and nonparametric regression that integrates low-rank penalized splines, mixed model and hierarchical Bayesian methodology – thus allowing more streamlined handling of longitudinal and spatial correlation. We review progress in the field over the five-year period between 2003 and 2007. We find semiparametric regression to be a vibrant field with substantial involvement and activity, continual enhancement and widespread application. PMID:20305800
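    A minimal sketch of the low-rank penalized spline idea: a truncated-power basis whose knot coefficients are shrunk by a ridge penalty, which is the fixed-ridge analogue of treating those coefficients as random effects in the mixed-model formulation. The basis size, penalty and test function are assumptions.

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0, degree=3):
    """Low-rank penalized spline: truncated-power basis plus a ridge penalty on the
    spline coefficients (the mixed-model view treats those coefficients as random)."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([x**d for d in range(degree + 1)])           # polynomial part
    Z = np.clip(x[:, None] - knots[None, :], 0, None)**degree        # truncated powers
    C = np.column_stack([X, Z])
    # Penalize only the truncated-power ('random effect') block, not the polynomial part.
    P = np.diag([0.0]*(degree + 1) + [lam]*n_knots)
    beta = np.linalg.solve(C.T @ C + P, C.T @ y)
    return lambda x0: np.column_stack(
        [x0**d for d in range(degree + 1)]
        + [np.clip(x0 - k, 0, None)**degree for k in knots]) @ beta

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2*np.pi*x) + 0.2*rng.standard_normal(200)
fhat = pspline_fit(x, y, lam=0.1)
print("fit RMSE against the true curve:", np.sqrt(np.mean((fhat(x) - np.sin(2*np.pi*x))**2)))
```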

  1. Stress and PTSD Mechanisms as Targets for Pharmacotherapy of Alcohol Abuse, Addiction and Relapse

    DTIC Science & Technology

    2017-10-01

    and due to the need for some methodology refinements. 15. SUBJECT TERMS PTSD, alcohol, ethanol, prazosin, noradrenergic, startle, anxiety, stress...behaviors in the differing experimental models used in these studies; we continue to evaluate whether prazosin treatment disproportionately decreases...intermittent alcohol access (IAA, 24 h/day free choice between 20% alcohol vs water on 3 non-consecutive days/week) to establish stable elevated

  2. Finding a balance between "value added" and feeling valued: revising models of care. The human factor of implementing a quality improvement initiative using Lean methodology within the healthcare sector.

    PubMed

    Deans, Rachel; Wade, Shawna

    2011-01-01

    Growing demand from clients waiting to access vital services in a healthcare sector under economic constraint, coupled with the pressure for ongoing improvement within a multi-faceted organization, can have a significant impact on the front-line staff, who are essential to the successful implementation of any quality improvement initiative. The Lean methodology is a management system for continuous improvement based on the Toyota Production System; it focuses on two main themes: respect for people and the elimination of waste or non-value-added activities. Within the Lean process, value-added is used to describe any activity that contributes directly to satisfying the needs of the client, and non-value-added refers to any activity that takes time, space or resources but does not contribute directly to satisfying client needs. Through the revision of existing models of service delivery, the authors' organization has made an impact on increasing access to care and has supported successful engagement of staff in the process, while ensuring that the focus remains on the central needs of clients and families accessing services. While the performance metrics continue to exhibit respectable results for this strategic priority, further gains are expected over the next 18-24 months.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobromir Panayotov; Andrew Grief; Brad J. Merrill

    'Fusion for Energy' (F4E) develops, designs and implements the European Test Blanket Systems (TBS) in ITER - the Helium-Cooled Lithium-Lead (HCLL) and the Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of the TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was acquired under the F4E contract on TBS safety analyses. The F4E technical requirements and the AMEC and INL efforts resulted in the development of a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specificity of the breeding blanket designs, materials and phenomena and at the same time is consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and the TBS models. The limitations of the codes are thus identified and possible solutions to be built into the models are proposed. These include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. The code selection and the issue of the accident analysis specifications conclude this second step. Next, the breeding blanket and ancillary system models are built. In this work, the challenges met and the solutions used in the development of both MELCOR and RELAP5 code models of the HCLL and HCPB TBSs are shared. The developed models are then qualified by comparison with finite element analyses, by code-to-code comparison and by sensitivity studies. Finally, the qualified models are used for the execution of the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. A detailed description of each phase and its results, as well as the application of the methodology to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.

  4. Report on FY17 testing in support of integrated EPP-SMT design methods development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yanli .; Jetter, Robert I.; Sham, T. -L.

    The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate an SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology, to avoid the separate evaluation of creep and fatigue damage and to eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated-temperature cyclic service. The purpose of this methodology is to minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, thermomechanical tests continued in FY17. This report presents the recent test results for Type 1 SMT specimens on Alloy 617 with long hold times, pressurization SMT on Alloy 617, and two-bar thermal ratcheting test results on SS316H in the temperature range of 405 °C to 705 °C. Preliminary EPP strain range analyses of the two-bar tests are critically evaluated and compared with the experimental results.

  5. Model documentation report: Transportation sector model of the National Energy Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  6. Modelling of slaughterhouse solid waste anaerobic digestion: determination of parameters and continuous reactor simulation.

    PubMed

    López, Iván; Borzacconi, Liliana

    2010-10-01

    A model based on the work of Angelidaki et al. (1993) was applied to simulate the anaerobic biodegradation of ruminal contents. In this study, two fractions of solids with different biodegradation rates were considered. A first-order kinetic was used for the easily biodegradable fraction and a kinetic expression that is a function of the extracellular enzyme concentration was used for the slowly biodegradable fraction. Batch experiments were performed to obtain an accumulated methane curve that was then used to obtain the model parameters. For this determination, a methodology derived from the "multiple-shooting" method was successfully used. Monte Carlo simulations allowed a confidence range to be obtained for each parameter. Simulations of a continuous reactor were performed using the optimal set of model parameters. The final steady states were determined as functions of the operational conditions (solids load and residence time). The simulations showed that methane flow peaked at 0.5-0.8 Nm³ per day per m³ of reactor at residence times of 10-20 days. The simulations allow an adequate selection of the operating conditions of a continuous reactor. (c) 2010 Elsevier Ltd. All rights reserved.
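    A simplified sketch of the two-fraction idea fitted to a batch methane curve; for brevity both fractions are given first-order kinetics (the paper's enzyme-dependent expression for the slow fraction is not reproduced), the data are made up, and the Monte Carlo loop only illustrates how confidence ranges can be attached to the parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def methane_two_fractions(t, B_fast, k_fast, B_slow, k_slow):
    """Cumulative methane from an easily biodegradable fraction (rate k_fast)
    plus a slowly biodegradable fraction (rate k_slow), both first order here."""
    return B_fast * (1 - np.exp(-k_fast * t)) + B_slow * (1 - np.exp(-k_slow * t))

# Made-up batch data: time (d) and accumulated methane (NL CH4 / kg VS).
t = np.array([0, 2, 4, 7, 10, 15, 20, 30, 40, 60], float)
y = np.array([0, 55, 95, 140, 168, 200, 220, 248, 262, 275], float)

p0 = [150, 0.3, 150, 0.03]                      # initial guesses
popt, pcov = curve_fit(methane_two_fractions, t, y, p0=p0, maxfev=10000)
print("B_fast, k_fast, B_slow, k_slow =", np.round(popt, 3))

# Crude Monte Carlo confidence ranges: refit on datasets perturbed by residual-level noise.
rng = np.random.default_rng(4)
resid_sd = np.std(y - methane_two_fractions(t, *popt))
samples = [curve_fit(methane_two_fractions, t,
                     methane_two_fractions(t, *popt) + rng.normal(0, resid_sd, len(t)),
                     p0=popt, maxfev=10000)[0] for _ in range(200)]
print("95% ranges:", np.round(np.percentile(samples, [2.5, 97.5], axis=0), 3))
```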

  7. Autonomous interplanetary constellation design

    NASA Astrophysics Data System (ADS)

    Chow, Cornelius Channing, II

    According to NASA's integrated space technology roadmaps, space-based infrastructures are envisioned as necessary ingredients to a sustained effort in continuing space exploration. Whether it be for extra-terrestrial habitats, roving/cargo vehicles, or space tourism, autonomous space networks will provide a vital communications lifeline for both future robotic and human missions alike. Projecting that the Moon will be a bustling hub of activity within a few decades, a near-term opportunity for in-situ infrastructure development is within reach. This dissertation addresses the anticipated need for in-space infrastructure by investigating a general design methodology for autonomous interplanetary constellations; to illustrate the theory, this manuscript presents results from an application to the Earth-Moon neighborhood. The constellation design methodology is formulated as an optimization problem, involving a trajectory design step followed by a spacecraft placement sequence. Modeling the dynamics as a restricted 3-body problem, the investigated design space consists of families of periodic orbits which play host to the constellations, punctuated by arrangements of spacecraft autonomously guided by a navigation strategy called LiAISON (Linked Autonomous Interplanetary Satellite Orbit Navigation). Instead of more traditional exhaustive search methods, a numerical continuation approach is implemented to map the admissible configuration space. In particular, Keller's pseudo-arclength technique is used to follow folding/bifurcating solution manifolds, which are otherwise inaccessible with other parameter continuation schemes. A succinct characterization of the underlying structure of the local, as well as global, extrema is thus achievable with little a priori intuition of the solution space. Furthermore, the proposed design methodology offers benefits in computation speed plus the ability to handle mildly stochastic systems. An application of the constellation design methodology to the restricted Earth-Moon system reveals optimal pairwise configurations for various L1, L2, and L5 (halo, axial, and vertical) periodic orbit families. Navigation accuracies, ranging from O(10±1) meters in position space, are obtained for the optimal Earth-Moon constellations, given measurement noise on the order of 1 meter.
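    Keller's pseudo-arclength idea can be illustrated on a scalar toy problem with a fold, which is far simpler than the periodic-orbit families of the dissertation but shows the predictor-corrector structure: the equation is augmented with an arclength constraint so Newton's method can follow the branch around turning points where natural continuation in the parameter fails.

```python
import numpy as np

def f(x, lam):            # toy equilibrium equation with a fold at lam = 0
    return lam - x**2

def fx(x, lam):           # partial derivative with respect to x
    return -2.0*x

def flam(x, lam):         # partial derivative with respect to lam
    return 1.0

def pseudo_arclength(x0, lam0, ds=0.05, steps=120):
    """Keller's pseudo-arclength continuation on a scalar problem f(x, lam) = 0."""
    # Initial unit tangent (dx, dlam) satisfying fx*dx + flam*dlam = 0.
    tx, tl = -flam(x0, lam0), fx(x0, lam0)
    nrm = np.hypot(tx, tl); tx, tl = tx/nrm, tl/nrm
    branch = [(x0, lam0)]
    x, lam = x0, lam0
    for _ in range(steps):
        xp, lp = x + ds*tx, lam + ds*tl            # predictor step along the tangent
        for _ in range(20):                        # Newton corrector on the 2x2 system
            F = np.array([f(xp, lp), tx*(xp - x) + tl*(lp - lam) - ds])
            J = np.array([[fx(xp, lp), flam(xp, lp)], [tx, tl]])
            d = np.linalg.solve(J, -F)
            xp, lp = xp + d[0], lp + d[1]
            if np.abs(F).max() < 1e-12:
                break
        # New tangent, oriented consistently with the previous one.
        ntx, ntl = -flam(xp, lp), fx(xp, lp)
        nrm = np.hypot(ntx, ntl); ntx, ntl = ntx/nrm, ntl/nrm
        if ntx*tx + ntl*tl < 0:
            ntx, ntl = -ntx, -ntl
        x, lam, tx, tl = xp, lp, ntx, ntl
        branch.append((x, lam))
    return np.array(branch)

b = pseudo_arclength(x0=2.0, lam0=4.0)
print("lam range traversed:", b[:, 1].min(), "->", b[:, 1].max())  # passes through the fold
```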

  8. Understanding the neural basis of cognitive bias modification as a clinical treatment for depression.

    PubMed

    Eguchi, Akihiro; Walters, Daniel; Peerenboom, Nele; Dury, Hannah; Fox, Elaine; Stringer, Simon

    2017-03-01

    [Correction Notice: An Erratum for this article was reported in Vol 85(3) of Journal of Consulting and Clinical Psychology (see record 2017-07144-002). In the article, there was an error in the Discussion section's first paragraph for Implications and Future Work. The in-text reference citation for Penton-Voak et al. (2013) was incorrectly listed as "Blumenfeld, Preminger, Sagi, and Tsodyks (2006)". All versions of this article have been corrected.] Objective: Cognitive bias modification (CBM) eliminates cognitive biases toward negative information and is efficacious in reducing depression recurrence, but the mechanisms behind the bias elimination are not fully understood. The present study investigated, through computer simulation of neural network models, the neural dynamics underlying the use of CBM in eliminating the negative biases in the way that depressed patients evaluate facial expressions. We investigated 2 new CBM methodologies using biologically plausible synaptic learning mechanisms-continuous transformation learning and trace learning-which guide learning by exploiting either the spatial or temporal continuity between visual stimuli presented during training. We first describe simulations with a simplified 1-layer neural network, and then we describe simulations in a biologically detailed multilayer neural network model of the ventral visual pathway. After training with either the continuous transformation learning rule or the trace learning rule, the 1-layer neural network eliminated biases in interpreting neutral stimuli as sad. The multilayer neural network trained with realistic face stimuli was also shown to be able to use continuous transformation learning or trace learning to reduce biases in the interpretation of neutral stimuli. The simulation results suggest 2 biologically plausible synaptic learning mechanisms, continuous transformation learning and trace learning, that may subserve CBM. The results are highly informative for the development of experimental protocols to produce optimal CBM training methodologies with human participants. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
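    A hedged sketch of a trace learning rule of the kind invoked above: a Hebbian weight update modulated by a leaky temporal trace of postsynaptic activity, so stimuli presented close together in time become associated with the same output units. The network size, stimuli and parameters are assumptions, not the paper's model of the ventral visual pathway.

```python
import numpy as np

rng = np.random.default_rng(5)

def trace_learning(sequences, n_out=10, eta=0.05, decay=0.8, epochs=20):
    """Hebbian learning with a temporal trace: the postsynaptic term is a leaky
    average of recent activity, binding temporally adjacent stimuli together."""
    n_in = sequences.shape[-1]
    W = rng.random((n_out, n_in)) * 0.1
    for _ in range(epochs):
        for seq in sequences:                      # one temporal sequence of stimuli
            trace = np.zeros(n_out)
            for x in seq:
                y = W @ x
                trace = decay*trace + (1 - decay)*y
                W += eta * np.outer(trace, x)      # trace-modulated Hebbian update
                W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W

# Two hypothetical stimulus sequences (e.g. two identities morphing across expressions),
# represented here by random but temporally correlated feature vectors.
base = rng.random((2, 50))
sequences = np.stack([[np.clip(b + 0.1*rng.standard_normal(50), 0, None)
                       for _ in range(6)] for b in base])
W = trace_learning(sequences)
resp = np.array([[W @ x for x in seq] for seq in sequences]).max(axis=1)
print("correlation between responses to the two sequences:",
      round(np.corrcoef(resp[0], resp[1])[0, 1], 2))
```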

  9. Robust model predictive control for optimal continuous drug administration.

    PubMed

    Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos

    2014-10-01

    In this paper the model predictive control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma measurements are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in presence of modelling errors and inaccurate measurements. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
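    The control structure can be sketched with a toy two-compartment model in place of the whole-body PBPK model; the dynamics, toxicity limits and targets below are invented numbers, and cvxpy is used only as a convenient solver for the constrained finite-horizon quadratic program.

```python
import numpy as np
import cvxpy as cp

# Toy 2-compartment discrete-time model (plasma, tissue) -- illustrative numbers only.
A = np.array([[0.90, 0.05],
              [0.08, 0.92]])
B = np.array([[0.10],
              [0.00]])
mtc = np.array([8.0, 6.0])          # assumed maximum tolerated concentrations
target = np.array([4.0, 3.0])       # desired therapeutic levels
N = 15                              # prediction horizon

def mpc_dose(x0):
    """Solve one finite-horizon constrained MPC problem and return the first dose."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N), nonneg=True)
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k + 1] - target) + 0.01*cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # model prediction
                   x[:, k + 1] <= mtc]                         # toxicity constraint
    cp.Problem(cp.Minimize(cost), constr).solve()
    return float(u.value[0, 0])

# Closed loop: apply the first dose, advance the 'true' model, repeat (receding horizon).
x = np.zeros(2)
for step in range(30):
    dose = mpc_dose(x)
    x = A @ x + B.flatten()*dose
print("final concentrations:", np.round(x, 2), "(targets", target, ", MTC", mtc, ")")
```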

  10. Minimizing effects of methodological decisions on interpretation and prediction in species distribution studies: An example with background selection

    USGS Publications Warehouse

    Jarnevich, Catherine S.; Talbert, Marian; Morisette, Jeffrey T.; Aldridge, Cameron L.; Brown, Cynthia; Kumar, Sunil; Manier, Daniel; Talbert, Colin; Holcombe, Tracy R.

    2017-01-01

    Evaluating the conditions where a species can persist is an important question in ecology both to understand tolerances of organisms and to predict distributions across landscapes. Presence data combined with background or pseudo-absence locations are commonly used with species distribution modeling to develop these relationships. However, there is not a standard method to generate background or pseudo-absence locations, and method choice affects model outcomes. We evaluated combinations of both model algorithms (simple and complex generalized linear models, multivariate adaptive regression splines, Maxent, boosted regression trees, and random forest) and background methods (random, minimum convex polygon, and continuous and binary kernel density estimator (KDE)) to assess the sensitivity of model outcomes to choices made. We evaluated six questions related to model results, including five beyond the common comparison of model accuracy assessment metrics (biological interpretability of response curves, cross-validation robustness, independent data accuracy and robustness, and prediction consistency). For our case study with cheatgrass in the western US, random forest was least sensitive to background choice and the binary KDE method was least sensitive to model algorithm choice. While this outcome may not hold for other locations or species, the methods we used can be implemented to help determine appropriate methodologies for particular research questions.

  11. Gas-liquid countercurrent integration process for continuous biodiesel production using a microporous solid base KF/CaO as catalyst.

    PubMed

    Hu, Shengyang; Wen, Libai; Wang, Yun; Zheng, Xinsheng; Han, Heyou

    2012-11-01

    A continuous-flow integration process was developed for biodiesel production using rapeseed oil as feedstock, based on the countercurrent contact reaction between gas and liquid, separation of glycerol on-line and cyclic utilization of methanol. Orthogonal experimental design and response surface methodology were adopted to optimize technological parameters. A second-order polynomial model for the biodiesel yield was established and validated experimentally. The high determination coefficient (R² = 98.98%) and the low probability value (Pr < 0.0001) indicated that the model matched the experimental data and had high predictive ability. The optimal technological parameters were: 81.5°C reaction temperature, 51.7 cm fill height of catalyst KF/CaO and 105.98 kPa system pressure. Under these conditions, the average yield of triplicate experiments was 93.7%, indicating the continuous-flow process has good potential in the manufacture of biodiesel. Copyright © 2012 Elsevier Ltd. All rights reserved.
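    A minimal sketch of the response-surface step, assuming coded factor levels and synthetic data: a full second-order polynomial in three factors is fitted by ordinary least squares and its determination coefficient computed. It is not the authors' fitted model.

```python
# Sketch: fit a second-order (quadratic) response-surface model
#   y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj)
# to three coded factors (temperature, fill height, pressure). Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(30, 3))                 # coded factor levels
y = 90 + 3*X[:, 0] + 2*X[:, 1] - 4*X[:, 0]**2 + rng.normal(0, 0.5, 30)

def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

D = design_matrix(X)
beta, *_ = np.linalg.lstsq(D, y, rcond=None)
resid = y - D @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r2:.4f}")
```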

  12. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data are processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation guided by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  13. Method for Controlling Space Transportation System Life Cycle Costs

    NASA Technical Reports Server (NTRS)

    McCleskey, Carey M.; Bartine, David E.

    2006-01-01

    A structured, disciplined methodology is required to control major cost-influencing metrics of space transportation systems during design and continuing through the test and operations phases. This paper proposes controlling key space system design metrics that specifically influence life cycle costs. These are inclusive of flight and ground operations, test, and manufacturing and infrastructure. The proposed technique builds on today's configuration and mass properties control techniques and takes on all the characteristics of a classical control system. While the paper does not lay out a complete math model, key elements of the proposed methodology are explored and explained with both historical and contemporary examples. Finally, the paper encourages modular design approaches and technology investments compatible with the proposed method.

  14. Glucose Prediction Algorithms from Continuous Monitoring Data: Assessment of Accuracy via Continuous Glucose Error-Grid Analysis.

    PubMed

    Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio

    2007-09-01

    The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
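    A minimal sketch, in the spirit of the AR-based predictor evaluated above: an AR(1) coefficient is fitted by weighted least squares with exponential forgetting and extrapolated over the prediction horizon. The CGM series, forgetting factor and horizon below are illustrative, and the published algorithms include preprocessing not shown here.

```python
# Sketch: weighted least-squares AR(1) glucose predictor with exponential
# forgetting, in the spirit of the algorithms compared above. The CGM series,
# sampling interval and forgetting factor below are illustrative.
import numpy as np

def ar1_predict(glucose, ff=0.8, horizon_steps=10):
    """Fit g[t] = a*g[t-1] by weighted LS (weights ff**age) and extrapolate."""
    g = np.asarray(glucose, dtype=float)
    y, x = g[1:], g[:-1]
    age = np.arange(len(y))[::-1]          # 0 = most recent sample
    w = ff ** age
    a = np.sum(w * x * y) / np.sum(w * x * x)
    pred = g[-1]
    for _ in range(horizon_steps):         # e.g. 10 x 3 min = 30 min ahead
        pred = a * pred
    return pred

cgm = 120 + 15*np.sin(np.linspace(0, 3, 60))   # synthetic 3-min CGM samples
print(ar1_predict(cgm, ff=0.8, horizon_steps=10))
```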

  15. The Deskilling Controversy.

    ERIC Educational Resources Information Center

    Attewell, Paul

    1987-01-01

    Braverman and others argue that capitalism continues to degrade and deskill work. The author presents theoretical, empirical, and methodological criticisms that highlight methodological weaknesses in the deskilling approach. (SK)

  16. Evaluation Model for Pavement Surface Distress on 3d Point Clouds from Mobile Mapping System

    NASA Astrophysics Data System (ADS)

    Aoki, K.; Yamamoto, K.; Shimamura, H.

    2012-07-01

    This paper proposes a methodology to evaluate pavement surface distress for road pavement maintenance planning using 3D point clouds from a Mobile Mapping System (MMS). Maintenance planning of road pavement requires scheduled rehabilitation activities for damaged pavement sections to keep service levels high. The importance of this performance-based infrastructure asset management built on actual inspection data is globally recognized. For inspection of the road pavement surface, semi-automatic measurement systems that use inspection vehicles to measure surface deterioration indexes, such as cracking, rutting and IRI, have already been introduced and are capable of continuously archiving pavement performance data. However, scheduled inspection using automatic measurement vehicles is costly, depending on the instruments' specifications and the inspection interval, so implementing road maintenance work cost-effectively is difficult, especially for local governments. Against this background, this research proposes methodologies for a simplified evaluation of the pavement surface and an assessment of damaged pavement sections using 3D point cloud data acquired for urban 3D modelling. The simplified evaluation results for the road surface provide useful information for road administrators to identify pavement sections that require detailed examination or immediate repair work. In particular, the regularity of the 3D point cloud sequence was evaluated using Chow-test and F-test models, extracting sections where a structural change in the coordinate values was pronounced. Finally, the validity of the current methodology was investigated by conducting a case study using actual inspection data from local roads.
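    A hedged sketch of the structural-change test mentioned above: a Chow F statistic comparing a pooled linear fit against separate fits before and after a candidate break point, on synthetic data rather than MMS point clouds.

```python
# Sketch: Chow test for a structural break in a simple linear fit, the kind of
# test used above to flag pavement sections with abrupt changes. Data synthetic.
import numpy as np
from scipy import stats

def rss(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def chow_test(x, y, split, k=2):
    """F statistic for H0: same regression before/after the split index."""
    rss_pooled = rss(x, y)
    rss_1, rss_2 = rss(x[:split], y[:split]), rss(x[split:], y[split:])
    n = len(x)
    F = ((rss_pooled - rss_1 - rss_2) / k) / ((rss_1 + rss_2) / (n - 2*k))
    p = 1 - stats.f.cdf(F, k, n - 2*k)
    return F, p

x = np.linspace(0, 10, 100)
y = 0.1*x + np.where(x > 6, 0.8*(x - 6), 0.0) \
    + np.random.default_rng(3).normal(0, 0.05, 100)
print(chow_test(x, y, split=60))
```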

  17. Nano-metrology and terrain modelling - convergent practice in surface characterisation

    USGS Publications Warehouse

    Pike, R.J.

    2000-01-01

    The quantification of magnetic-tape and disk topography has a macro-scale counterpart in the Earth sciences - terrain modelling, the numerical representation of relief and pattern of the ground surface. The two practices arose independently and continue to function separately. This methodological paper introduces terrain modelling, discusses its similarities to and differences from industrial surface metrology, and raises the possibility of a unified discipline of quantitative surface characterisation. A brief discussion of an Earth-science problem, subdividing a heterogeneous terrain surface from a set of sample measurements, exemplifies a multivariate statistical procedure that may transfer to tribological applications of 3-D metrological height data.

  18. Guidelines for Calculating and Routing a Dam-Break Flood.

    DTIC Science & Technology

    1977-01-01

    This report describes the procedures necessary to calculate ... and route a dam-break flood using an existing generalized unsteady open channel flow model. The recent Teton Dam event was reconstituted to test the ... methodology may be obtained from The Hydrologic Engineering Center. The computer program was applied to the Teton Dam data set to demonstrate the level of...

  19. A continuous time delay-difference type model (CTDDM) applied to stock assessment of the southern Atlantic albacore Thunnus alalunga

    NASA Astrophysics Data System (ADS)

    Liao, Baochao; Liu, Qun; Zhang, Kui; Baset, Abdul; Memon, Aamir Mahmood; Memon, Khadim Hussain; Han, Yanan

    2016-09-01

    A continuous time delay-difference model (CTDDM) has been established that considers continuous time delays of biological processes. The southern Atlantic albacore (Thunnus alalunga) stock is one of the commercially important tuna populations in the world's oceans. The age-structured production model (ASPM) and the surplus production model (SPM) have already been used to assess the albacore stock. However, the ASPM requires detailed biological information and the SPM lacks biological realism. In this study, we focus on applying a CTDDM to the southern Atlantic albacore (T. alalunga) stock, which provides an alternative method to assess this fishery. It is the first time that a CTDDM has been applied to the Atlantic albacore (T. alalunga) fishery. The CTDDM gave an 80% confidence interval for MSY (maximum sustainable yield) of (21 510 t, 23 118 t). The catch in 2011 (24 100 t) is higher than the MSY values and the relative fishing mortality ratio (F2011/FMSY) is higher than 1.0. The results of the CTDDM were analyzed to verify the proposed methodology and provide reference information for the sustainable management of the southern Atlantic albacore stock. The CTDDM treats the recruitment, growth, and mortality rates as all varying continuously over time and fills the gap between the ASPM and the SPM in this stock assessment.
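    The CTDDM itself is too involved for a short sketch; as a point of comparison, the snippet below projects biomass under the simpler Schaefer surplus-production model (the SPM contrasted with the CTDDM above), for which MSY = rK/4. All parameter values are made up.

```python
# Sketch: a minimal Schaefer surplus-production model (the SPM contrasted with
# the CTDDM above), with made-up parameters; MSY = r*K/4 for this model.
import numpy as np

def schaefer(B0, r, K, catches):
    """Project biomass under B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]."""
    B = [B0]
    for C in catches:
        B.append(max(B[-1] + r*B[-1]*(1 - B[-1]/K) - C, 1e-6))
    return np.array(B)

r, K = 0.3, 300_000.0          # intrinsic growth rate (1/yr), carrying capacity (t)
msy = r * K / 4
traj = schaefer(B0=0.8*K, r=r, K=K, catches=[24_100.0]*20)
print(f"MSY = {msy:.0f} t, final biomass = {traj[-1]:.0f} t")
```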

  20. Emergent Pedagogy and Affect in Collaborative Research: A Metho-Pedagogical Paradigm

    ERIC Educational Resources Information Center

    Gallagher, Kathleen; Wessels, Anne

    2011-01-01

    The widespread turn towards "collaboration" in qualitative research methodologies warrants careful and continuous critique. This paper addresses the possibilities and the challenges of collaborative methodology, and in particular what happens when the line between pedagogy and methodology is blurred in classroom-based ethnographic…

  1. 42 CFR 433.206 - Threshold methodology.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Threshold methodology. 433.206 Section 433.206 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Methodologies for Determining Federal Share of Medicaid Expenditures for Adult Eligibilit...

  2. 42 CFR 433.206 - Threshold methodology.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Threshold methodology. 433.206 Section 433.206 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS STATE FISCAL ADMINISTRATION Methodologies for Determining Federal Share of Medicaid Expenditures for Adult Eligibilit...

  3. Sediment source fingerprinting as an aid to catchment management: A review of the current state of knowledge and a methodological decision-tree for end-users.

    PubMed

    Collins, A L; Pulley, S; Foster, I D L; Gellis, A; Porto, P; Horowitz, A J

    2017-06-01

    The growing awareness of the environmental significance of fine-grained sediment fluxes through catchment systems continues to underscore the need for reliable information on the principal sources of this material. Source estimates are difficult to obtain using traditional monitoring techniques, but sediment source fingerprinting or tracing procedures have emerged as a potentially valuable alternative. Despite the rapidly increasing numbers of studies reporting the use of sediment source fingerprinting, several key challenges and uncertainties continue to hamper consensus among the international scientific community on key components of the existing methodological procedures. Accordingly, this contribution reviews and presents recent developments for several key aspects of fingerprinting, namely: sediment source classification, catchment source and target sediment sampling, tracer selection, grain size issues, tracer conservatism, source apportionment modelling, and assessment of source predictions using artificial mixtures. Finally, a decision-tree representing the current state of knowledge is presented, to guide end-users in applying the fingerprinting approach. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. A review of failure models for unidirectional ceramic matrix composites under monotonic loads

    NASA Technical Reports Server (NTRS)

    Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.

    1989-01-01

    Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past most components using structural ceramic matrix composites were designed by trial and error since the emphasis of feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.

  5. A geological model for the management of subsurface data in the urban environment of Barcelona and surrounding area

    NASA Astrophysics Data System (ADS)

    Vázquez-Suñé, Enric; Ángel Marazuela, Miguel; Velasco, Violeta; Diviu, Marc; Pérez-Estaún, Andrés; Álvarez-Marrón, Joaquina

    2016-09-01

    The overdevelopment of cities since the industrial revolution has shown the need to incorporate a sound geological knowledge in the management of required subsurface infrastructures and in the assessment of increasingly needed groundwater resources. Additionally, the scarcity of outcrops and the technical difficulty to conduct underground exploration in urban areas highlights the importance of implementing efficient management plans that deal with the legacy of heterogeneous subsurface information. To deal with these difficulties, a methodology has been proposed to integrate all the available spatio-temporal data into a comprehensive spatial database and a set of tools that facilitates the analysis and processing of the existing and newly added data for the city of Barcelona (NE Spain). Here we present the resulting actual subsurface 3-D geological model that incorporates and articulates all the information stored in the database. The methodology applied to Barcelona benefited from a good collaboration between administrative bodies and researchers that enabled the realization of a comprehensive geological database despite logistic difficulties. Currently, the public administration and also private sectors both benefit from the geological understanding acquired in the city of Barcelona, for example, when preparing the hydrogeological models used in groundwater assessment plans. The methodology further facilitates the continuous incorporation of new data in the implementation and sustainable management of urban groundwater, and also contributes to significantly reducing the costs of new infrastructures.

  6. Analysis of health economics assessment reports for pharmaceuticals in France – understanding the underlying philosophy of CEESP assessment

    PubMed Central

    Toumi, Mondher; Motrunich, Anastasiia; Millier, Aurélie; Rémuzat, Cécile; Chouaid, Christos; Falissard, Bruno; Aballéa, Samuel

    2017-01-01

    ABSTRACT Background: Despite the guidelines for Economic and Public Health Assessment Committee (CEESP) submission having been available for nearly six years, the dossiers submitted continue to deviate from them, potentially impacting product prices. Objective: to review the reports published by CEESP, analyse deviations from the guidelines, and discuss their implications for the pricing and reimbursement process. Study design: CEESP reports published until January 2017 were reviewed, and deviations from the guidelines were extracted. The frequency of deviations was described by type of methodological concern (minor, important or major). Results: In 19 reports, we identified 243 methodological concerns, most often concerning modelling, measurement and valuation of health states and results presentation and sensitivity analyses; nearly 63% were minor, 33% were important and 4.5% were major. All reports included minor methodological concerns, and 17 (89%) included at least one important and/or major methodological concern. Global major methodological concerns completely invalidated the analysis in seven dossiers (37%). Conclusion: The CEESP submission dossiers fail to adhere to the guidelines, potentially invalidating the health economics analysis and resulting in pricing negotiations. As these negotiations tend to be unfavourable for the manufacturer, the industry should strive to improve the quality of the analyses submitted to CEESP. PMID:28804600

  7. 21 CFR 114.90 - Methodology.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Methodology. 114.90 Section 114.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION ACIDIFIED FOODS Production and Process Controls § 114.90 Methodology. Methods that may be used to...

  8. 21 CFR 114.90 - Methodology.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Methodology. 114.90 Section 114.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION ACIDIFIED FOODS Production and Process Controls § 114.90 Methodology. Methods that may be used to...

  9. 21 CFR 114.90 - Methodology.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Methodology. 114.90 Section 114.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION ACIDIFIED FOODS Production and Process Controls § 114.90 Methodology. Methods that may be used to...

  10. 77 FR 30411 - Connect America Fund; High-Cost Universal Service Support

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-23

    ... ``benchmarks'' for high cost loop support (HCLS). The methodology the Bureau adopts builds on the analysis... to support continued broadband investment. The methodology the Bureau adopts today is described in... methodology, HCLS will be recalculated to account for the additional support available under the overall cap...

  11. Critical Communicative Methodology: Informing Real Social Transformation through Research

    ERIC Educational Resources Information Center

    Gomez, Aitor; Puigvert, Lidia; Flecha, Ramon

    2011-01-01

    The critical communicative methodology (CCM) is a methodological response to the dialogic turn of societies and sciences that has already had an important impact in transforming situations of inequality and exclusion. Research conducted with the CCM implies continuous and egalitarian dialogue among researchers and the people involved in the…

  12. [Adjusted Clinical Groups Method (ACG) to allocate resources according to the disease burden of each health center].

    PubMed

    Santelices C, Emilio; Muñoz P, Fernando; Muñiz, Patricio; Rojas, José

    2016-03-01

    Health care must be provided with strong primary health care models, emphasizing prevention and continued, integrated and interdisciplinary care. Tools should be used to allow better planning and a more efficient use of resources. To assess risk-adjustment methodologies, such as the Adjusted Clinical Groups (ACG) developed by The Johns Hopkins University, to allow the identification of chronic condition patterns and the allocation of resources accordingly. We report the results obtained applying the ACG methodology in the primary care systems of 22 counties for three chronic diseases, namely Diabetes Mellitus, Hypertension and Heart Failure. The outcomes show a great variability in the prevalence of these conditions in the different health centers. There is also a great diversity in the use of resources for a given condition in the different health care centers. This methodology should contribute to a better distribution of health care resources, which should be based on the disease burden of each health care center.

  13. Comparison of 3D representations depicting micro folds: overlapping imagery vs. time-of-flight laser scanner

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, Aristidis D.; Georgopoulos, Andreas; Lozios, Stylianos G.

    2012-10-01

    A relatively new field of interest, which continues to gain ground, is digital 3D modeling. However, the methodologies, the accuracy, and the time and effort required to produce a high-quality 3D model have changed drastically in the last few years. Whereas in the early days of digital 3D modeling, 3D models were only accessible to computer experts in animation working many hours in expensive, sophisticated software, today 3D modeling has become reasonably fast and convenient. On top of that, with online 3D modeling software such as 123D Catch, nearly everyone can produce 3D models with minimum effort and at no cost. The only requirement is panoramic overlapping images of the (still) objects the user wishes to model. This approach, however, has limitations in the accuracy of the model. An objective of the study is to examine these limitations by assessing the accuracy of this 3D modeling methodology against a Terrestrial Laser Scanner (TLS). Therefore, the scope of this study is to present and compare 3D models produced with two different methods: 1) the traditional TLS method with the Leica ScanStation 2 instrument and 2) panoramic overlapping images obtained with a DSLR camera and processed with the free 123D Catch software. The main objective of the study is to evaluate the advantages and disadvantages of the two 3D model-producing methodologies. The area represented with the 3D models features multi-scale folding in a cipollino marble formation. The most interesting part, and the most challenging to capture accurately, is an outcrop which includes vertically orientated micro folds. These micro folds have dimensions of a few centimeters, while a relatively strong relief is evident between them (perhaps due to different material composition). The area of interest is located on Mt. Hymittos, Greece.

  14. Manpower Substitution and Productivity in Medical Practice

    PubMed Central

    Reinhardt, Uwe E.

    1973-01-01

    Probably in response to the often alleged physician shortage in this country, concerted research efforts are under way to identify technically feasible opportunities for manpower substitution in the production of ambulatory health care. The approaches range from descriptive studies of the effect of task delegation on output of medical services to rigorous mathematical modeling of health care production by means of linear or continuous production functions. In this article the distinct methodological approaches underlying mathematical models are presented in synopsis, and their inherent strengths and weaknesses are contrasted. The discussion includes suggestions for future research directions. PMID:4586735

  15. A Sequential Shifting Algorithm for Variable Rotor Speed Control

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Edwards, Jason M.; DeCastro, Jonathan A.

    2007-01-01

    A proof of concept of a continuously variable rotor speed control methodology for rotorcraft is described. Variable rotor speed is desirable for several reasons including improved maneuverability, agility, and noise reduction. However, it has been difficult to implement because turboshaft engines are designed to operate within a narrow speed band, and a reliable drive train that can provide continuous power over a wide speed range does not exist. The new methodology proposed here is a sequential shifting control for twin-engine rotorcraft that coordinates the disengagement and engagement of the two turboshaft engines in such a way that the rotor speed may vary over a wide range, but the engines remain within their prescribed speed bands and provide continuous torque to the rotor; two multi-speed gearboxes facilitate the wide rotor speed variation. The shifting process begins when one engine slows down and disengages from the transmission by way of a standard freewheeling clutch mechanism; the other engine continues to apply torque to the rotor. Once one engine disengages, its gear shifts, the multi-speed gearbox output shaft speed resynchronizes and it re-engages. This process is then repeated with the other engine. By tailoring the sequential shifting, the rotor may perform large, rapid speed changes smoothly, as demonstrated in several examples. The emphasis of this effort is on the coordination and control aspects for proof of concept. The engines, rotor, and transmission are all simplified linear models, integrated to capture the basic dynamics of the problem.

  16. Application of Lean Healthcare methodology in a urology department of a tertiary hospital as a tool for improving efficiency.

    PubMed

    Boronat, F; Budia, A; Broseta, E; Ruiz-Cerdá, J L; Vivas-Consuelo, D

    To describe the application of the Lean methodology as a method for continuously improving the efficiency of a urology department in a tertiary hospital. The implementation of the Lean Healthcare methodology in a urology department was conducted in 3 phases: 1) team training and improvement of feedback among the practitioners, 2) management by process and superspecialisation and 3) improvement of indicators (continuous improvement). The indicators were obtained from the Hospital's information systems. The main source of information was the Balanced Scorecard for health systems management (CUIDISS). The comparison with other autonomous and national urology departments was performed through the same platform with the help of the Hospital's records department (IASIST). A baseline was established with the indicators obtained in 2011 for the comparative analysis of the results after implementing the Lean Healthcare methodology. The implementation of this methodology translated into high practitioner satisfaction and improved quality indicators, reaching a risk-adjusted complication index (RACI) of 0.59 and a risk-adjusted mortality rate (RAMR) of 0.24 in 4 years. A value of 0.61 was reached with the efficiency indicator (risk-adjusted length of stay [RALOS] index), with a savings of 2869 stays compared with national benchmarking (IASIST). The risk-adjusted readmissions index (RARI) was the only indicator above the standard, with a value of 1.36, but with progressive annual improvement. The Lean methodology can be effectively applied to a urology department of a tertiary hospital to improve efficiency, obtaining significant and continuous improvements in all its indicators, as well as practitioner satisfaction. Team training, management by process, continuous improvement and delegation of responsibilities have been shown to be the fundamental pillars of this methodology. Copyright © 2017 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  17. General methodology for nonlinear modeling of neural systems with Poisson point-process inputs.

    PubMed

    Marmarelis, V Z; Berger, T W

    2005-07-01

    This paper presents a general methodological framework for the practical modeling of neural systems with point-process inputs (sequences of action potentials or, more broadly, identical events) based on the Volterra and Wiener theories of functional expansions and system identification. The paper clarifies the distinctions between Volterra and Wiener kernels obtained from Poisson point-process inputs. It shows that only the Wiener kernels can be estimated via cross-correlation, but must be defined as zero along the diagonals. The Volterra kernels can be estimated far more accurately (and from shorter data-records) by use of the Laguerre expansion technique adapted to point-process inputs, and they are independent of the mean rate of stimulation (unlike their P-W counterparts that depend on it). The Volterra kernels can also be estimated for broadband point-process inputs that are not Poisson. Useful applications of this modeling approach include cases where we seek to determine (model) the transfer characteristics between one neuronal axon (a point-process 'input') and another axon (a point-process 'output') or some other measure of neuronal activity (a continuous 'output', such as population activity) with which a causal link exists.
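    A hedged sketch of the basis-expansion idea for first-order kernel estimation from a point-process input: the kernel is expanded on a small set of basis functions and the coefficients are obtained by least squares. For brevity a simple exponential basis stands in for the discrete Laguerre functions used in the paper, and the input, kernel and noise are synthetic.

```python
# Sketch: estimate a first-order kernel between a Poisson spike-train input and a
# continuous output by expanding the kernel on a small set of basis functions and
# solving a least-squares problem. A simple exponential basis stands in for the
# discrete Laguerre functions; the "true" kernel and rates are made up.
import numpy as np

rng = np.random.default_rng(4)
T, M, L = 5000, 60, 6                       # samples, kernel memory, basis size
x = (rng.random(T) < 0.05).astype(float)    # Poisson-like point-process input

taus = np.linspace(3, 40, L)
basis = np.array([np.exp(-np.arange(M)/tau) for tau in taus])   # (L, M)

true_k = 0.8*np.exp(-np.arange(M)/10.0)*np.sin(np.arange(M)/6.0)
y = np.convolve(x, true_k)[:T] + rng.normal(0, 0.05, T)

# Regressors: the input convolved with each basis function
V = np.array([np.convolve(x, b)[:T] for b in basis]).T          # (T, L)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)
k_hat = basis.T @ coef                                          # estimated kernel
print(np.round(np.corrcoef(k_hat, true_k)[0, 1], 3))
```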

  18. Identification and Modelling of the In-Plane Reinforcement Orientation Variations in a CFRP Laminate Produced by Manual Lay-Up

    NASA Astrophysics Data System (ADS)

    Davila, Yves; Crouzeix, Laurent; Douchin, Bernard; Collombet, Francis; Grunevald, Yves-Henri

    2017-08-01

    Reinforcement angle orientation has a significant effect on the mechanical properties of composite materials. This work presents a methodology to introduce variable reinforcement angles into finite element (FE) models of composite structures. The study of reinforcement orientation variations uses meta-models to identify and control a continuous variation across the composite ply. First, the reinforcement angle is measured through image analysis techniques of the composite plies during the lay-up phase. Image analysis results show that variations in the mean ply orientations are between -0.5 and 0.5° with standard deviations ranging between 0.34 and 0.41°. An automatic post-treatment of the images determines the global and local angle variations yielding good agreements visually and numerically between the analysed images and the identified parameters. A composite plate analysed at the end of the cooling phase is presented as a case of study. Here, the variation in residual strains induced by the variability in the reinforcement orientation are up to 28% of the strain field of the homogeneous FE model. The proposed methodology has shown its capabilities to introduce material and geometrical variability into FE analysis of layered composite structures.

  20. Precision reconstruction of manufactured free-form components

    NASA Astrophysics Data System (ADS)

    Ristic, Mihailo; Brujic, Djordje; Ainsworth, Iain

    2000-03-01

    Manufacturing needs in many industries, especially the aerospace and the automotive, involve CAD remodeling of manufactured free-form parts using NURBS. This is typically performed as part of 'first article inspection' or 'closing the design loop.' The reconstructed model must satisfy requirements such as accuracy, compatibility with the original CAD model and adherence to various constraints. The paper outlines a methodology for realizing this task. Efficiency and quality of the results are achieved by utilizing the nominal CAD model. It is argued that measurement and remodeling steps are equally important. We explain how the measurement was optimized in terms of accuracy, point distribution and measuring speed using a CMM. Remodeling steps include registration, data segmentation, parameterization and surface fitting. Enforcement of constraints such as continuity was performed as part of the surface fitting process. It was found necessary that the relevant algorithms are able to perform in the presence of measurement noise, while making no special assumptions about regularity of data distribution. In order to deal with real life situations, a number of supporting functions for geometric modeling were required and these are described. The presented methodology was applied using real aeroengine parts and the experimental results are presented.
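    A minimal sketch of the registration step, assuming point correspondences are known: measured points are rigidly aligned to nominal CAD coordinates with the Kabsch (SVD) method. The point sets and perturbation below are synthetic, and the paper's full workflow (segmentation, parameterization, constrained NURBS fitting) is not reproduced.

```python
# Sketch: rigid (Kabsch/SVD) alignment of measured points to nominal CAD
# coordinates, with correspondences assumed known. Points are synthetic.
import numpy as np

def kabsch(P, Q):
    """Return rotation R and translation t minimising ||R p + t - q|| over rows."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(5)
nominal = rng.random((100, 3))
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
measured = nominal @ Rz.T + np.array([0.1, -0.2, 0.05]) + rng.normal(0, 1e-3, (100, 3))

R, t = kabsch(measured, nominal)
aligned = measured @ R.T + t
print(np.max(np.linalg.norm(aligned - nominal, axis=1)))   # residual misalignment
```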

  1. Automatic programming of arc welding robots

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Srikanth

    Automatic programming of arc welding robots requires automatically combining the geometric description of a part from a solid modeling system, expert weld process knowledge, and the kinematic arrangement of the robot and positioner. Current commercial solid modelers are incapable of explicitly storing product and process definitions of weld features. This work presents a paradigm to develop a computer-aided engineering environment that supports complete weld feature information in a solid model and to create an automatic programming system for robotic arc welding. In the first part, welding features are treated as properties or attributes of an object, features which are portions of the object surface--the topological boundary. The structure for representing the features and attributes is a graph called the Welding Attribute Graph (WAGRAPH). The method associates appropriate weld features with geometric primitives, adds welding attributes, and checks the validity of welding specifications. A systematic structure is provided to incorporate welding attributes and coordinate system information in a CSG tree. The specific implementation of this structure using a hybrid solid modeler (IDEAS) and an object-oriented programming paradigm is described. The second part provides a comprehensive methodology to acquire and represent the weld process knowledge required for the proper selection of welding schedules. A methodology of knowledge acquisition using statistical methods is proposed. It is shown that these procedures did little to capture the private knowledge of experts (heuristics), but helped in determining general dependencies and trends. A need was established for building the knowledge-based system from handbook knowledge and for allowing the experts to extend the system further. A methodology to check the consistency and validity of such knowledge additions is proposed. A mapping shell designed to transform the design features into application-specific weld process schedules is described. A new approach using fixed-path modified continuation methods is proposed in the final section to continuously plan the trajectory of weld seams in an integrated welding robot and positioner environment. The joint displacement, velocity, and acceleration histories along the path, as a function of the path parameter for the best possible welding condition, are provided for the robot and the positioner to track the various paths normally encountered in arc welding.

  2. Forward progress of scientific inquiry into the early father-child relationship: introduction to the special issue on very young children and their fathers.

    PubMed

    Bocknek, Erika L; Hossain, Ziarat; Roggman, Lori

    2014-01-01

    Research on fathering and the father-child relationship has made substantial progress in the most recent 15 years since the last special issue of the Infant Mental Health Journal on fathers and young children. This special issue on fathers and young children contains a series of papers exemplifying this progress, including advances in methodology-more direct assessment and more observational measures-in addition to the increasing dynamic complexity of the conceptual models used to study fathers, the diversity of fathers studied, and the growth of programs to support early father involvement. In assessing the current state of the field, special attention is given to contributions made by the papers contained in this special issue, and two critical areas for continued progress are addressed: (1) methodological and measurement development that specifically address fathers and fathering relationships and (2) cross-cultural and ecologically valid research examining the diversity of models of fathering. © 2014 Michigan Association for Infant Mental Health.

  3. Basis function models for animal movement

    USGS Publications Warehouse

    Hooten, Mevin B.; Johnson, Devin S.

    2017-01-01

    Advances in satellite-based data collection techniques have served as a catalyst for new statistical methodology to analyze these data. In wildlife ecological studies, satellite-based data and methodology have provided a wealth of information about animal space use and the investigation of individual-based animal–environment relationships. With the technology for data collection improving dramatically over time, we are left with massive archives of historical animal telemetry data of varying quality. While many contemporary statistical approaches for inferring movement behavior are specified in discrete time, we develop a flexible continuous-time stochastic integral equation framework that is amenable to reduced-rank second-order covariance parameterizations. We demonstrate how the associated first-order basis functions can be constructed to mimic behavioral characteristics in realistic trajectory processes using telemetry data from mule deer and mountain lion individuals in western North America. Our approach is parallelizable and provides inference for heterogeneous trajectories using nonstationary spatial modeling techniques that are feasible for large telemetry datasets. Supplementary materials for this article are available online.
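    A much-reduced sketch of the basis-function idea, assuming synthetic telemetry: a noisy track is represented as a weighted sum of Gaussian temporal basis functions fitted by ridge-penalised least squares. This is not the authors' stochastic integral equation model or its covariance parameterization.

```python
# Sketch: represent a telemetry track mu(t) as a weighted sum of smooth temporal
# basis functions fitted by ridge-penalised least squares. Times, positions and
# knots are synthetic.
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0, 100, 300))                    # irregular fix times
truth = np.column_stack([np.sin(t/15), np.cos(t/22)])
obs = truth + rng.normal(0, 0.05, truth.shape)           # noisy GPS fixes

knots = np.linspace(0, 100, 25)
Phi = np.exp(-0.5*((t[:, None] - knots[None, :])/5.0)**2)   # Gaussian bases (300, 25)

lam = 1.0                                                    # ridge penalty
W = np.linalg.solve(Phi.T @ Phi + lam*np.eye(len(knots)), Phi.T @ obs)  # (25, 2)
smooth_track = Phi @ W
print(np.round(np.mean((smooth_track - truth)**2), 5))
```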

  4. Simultaneous, noninvasive, in vivo, continuous monitoring of hematocrit, vascular volume, hemoglobin oxygen saturation, pulse rate and breathing rate in humans and other animal models using a single light source

    NASA Astrophysics Data System (ADS)

    Dent, Paul; Tun, Sai Han; Fillioe, Seth; Deng, Bin; Satalin, Josh; Nieman, Gary; Wilcox, Kailyn; Searles, Quinn; Narsipur, Sri; Peterson, Charles M.; Goodisman, Jerry; Mostrom, James; Steinmann, Richard; Chaiken, J.

    2018-02-01

    We previously reported a new algorithm "PV[O]H" for continuous, noninvasive, in vivo monitoring of hematocrit changes in blood and have since shown its utility for monitoring in humans during 1) hemodialysis, 2) orthostatic perturbations and 3) during blood loss and fluid replacement in a rat model. We now show that the algorithm is sensitive to changes in hemoglobin oxygen saturation. We document the phenomenology of the effect and explain the effect using new results obtained from humans and rat models. The oxygen sensitivity derives from the differential absorption of autofluorescence originating in the static tissues by oxy and deoxy hemoglobin. Using this approach we show how to perform simultaneous, noninvasive, in vivo, continuous monitoring of hematocrit, vascular volume, hemoglobin oxygen saturation, pulse rate and breathing rate in mammals using a single light source. We suspect that monitoring of changes in this suite of vital signs can be provided with improved time response, sensitivity and precision compared to existing methodologies. Initial results also offer a more detailed glimpse into the systemic oxygen transport in the circulatory system of humans.

  5. LMI design method for networked-based PID control

    NASA Astrophysics Data System (ADS)

    Souza, Fernando de Oliveira; Mozelli, Leonardo Amaral; de Oliveira, Maurício Carvalho; Palhares, Reinaldo Martinez

    2016-10-01

    In this paper, we propose a methodology for the design of networked PID controllers for second-order delayed processes using linear matrix inequalities. The proposed procedure takes into account time-varying delay on the plant, time-varying delays induced by the network, and packet dropouts. The design is carried out entirely using a continuous-time model of the closed-loop system, where time-varying delays are used to represent the sampling and holding performed by a discrete-time digital PID controller.
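    As a hedged illustration of working with LMIs, the snippet below checks closed-loop stability of a fixed PI gain on a delay-free second-order plant via the basic Lyapunov LMI A_cl^T P + P A_cl < 0 with P > 0, solved with cvxpy. The delay-dependent, networked conditions of the paper are considerably more involved, and the plant and gains here are made up.

```python
# Sketch: a basic LMI feasibility check, A_cl' P + P A_cl < 0 with P > 0, for a
# fixed PI gain on a delay-free second-order plant. Far simpler than the
# delay-dependent networked conditions in the paper; matrices/gains are made up.
import numpy as np
import cvxpy as cp

# Plant xddot + 2*xdot + x = u with PI control u = -2*y - integral(y),
# states [x, xdot, integral(x)] give the closed-loop state matrix:
A_cl = np.array([[0.0,  1.0,  0.0],
                 [-3.0, -2.0, -1.0],
                 [1.0,  0.0,  0.0]])

P = cp.Variable((3, 3), symmetric=True)
eps = 1e-6
cons = [P >> eps*np.eye(3),
        A_cl.T @ P + P @ A_cl << -eps*np.eye(3)]
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status)   # "optimal" => the LMI is feasible, so the loop is stable
```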

  6. The study of insect blood-feeding behaviour. 2. Recording techniques and the use of flow charts.

    PubMed

    Smith, J J; Friend, W G

    1987-01-01

    This paper continues a discussion of approaches and methodologies we have used in our studies of feeding in haematophagous insects. Described are techniques for directly monitoring behaviour: electrical recording of feeding behaviour via resistance changes in the food canal, optical methods for monitoring mouthpart activity, and a computer technique for behavioural event recording. Also described is the use of "flow charts" or "decision diagrams" to model interrelated sequences of behaviours.

  7. Retrieval of Aerosol Parameters from Continuous H24 Lidar-Ceilometer Measurements

    NASA Astrophysics Data System (ADS)

    Dionisi, D.; Barnaba, F.; Costabile, F.; Di Liberto, L.; Gobbi, G. P.; Wille, H.

    2016-06-01

    Ceilometer technology is increasingly applied to the monitoring and characterization of tropospheric aerosols. In this work, a method to estimate some key aerosol parameters (extinction coefficient, surface area concentration and volume concentration) from ceilometer measurements is presented. A numerical model has been set up to derive mean functional relationships between backscatter and the above-mentioned parameters based on a large set of simulated aerosol optical properties. A good agreement was found between the modeled backscatter and extinction coefficients and the ones measured by the EARLINET Raman lidars. The developed methodology has then been applied to the measurements acquired by a prototype Polarization Lidar-Ceilometer (PLC). This PLC instrument was developed within the EC-LIFE+ project "DIAPASON" as an upgrade of the commercial, single-channel Jenoptik CHM15k system. The PLC ran continuously (h24) close to Rome (Italy) for a whole year (2013-2014). Retrievals of the aerosol backscatter coefficient at 1064 nm and of the relevant aerosol properties were performed using the proposed methodology. This information, coupled with some key aerosol-type identification made possible by the depolarization channel, allowed a year-round characterization of the aerosol field at this site. Examples are given to show how this technology, coupled with appropriate data inversion methods, is potentially useful in the operational monitoring of parameters of air quality and meteorological interest.

  8. Optimization of controlled release nanoparticle formulation of verapamil hydrochloride using artificial neural networks with genetic algorithm and response surface methodology.

    PubMed

    Li, Yongqiang; Abbaspour, Mohammadreza R; Grootendorst, Paul V; Rauth, Andrew M; Wu, Xiao Yu

    2015-08-01

    This study was performed to optimize the formulation of polymer-lipid hybrid nanoparticles (PLN) for the delivery of an ionic water-soluble drug, verapamil hydrochloride (VRP), and to investigate the roles of formulation factors. Modeling and optimization were conducted based on a spherical central composite design. Three formulation factors, i.e., the weight ratio of drug to lipid (X1) and the concentrations of Tween 80 (X2) and Pluronic F68 (X3), were chosen as independent variables. Drug loading efficiency (Y1) and mean particle size (Y2) of the PLN were selected as dependent variables. The predictive performance of artificial neural networks (ANN) and response surface methodology (RSM) were compared. As the ANN was found to exhibit better recognition and generalization capability than RSM, multi-objective optimization of the PLN was then conducted based upon the validated ANN models and continuous genetic algorithms (GA). The optimal PLN possess a high drug loading efficiency (92.4%, w/w) and a small mean particle size (∼100 nm). The predicted response variables matched well with the observed results. The three formulation factors exhibited different effects on the properties of the PLN. ANN in coordination with continuous GA represents an effective and efficient approach to optimizing the PLN formulation of VRP with desired properties. Copyright © 2015 Elsevier B.V. All rights reserved.
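    A hedged sketch of the continuous genetic algorithm layer: truncation selection, blend crossover and Gaussian mutation search over the three coded factors to maximise a surrogate response. The surrogate below is an analytic stand-in for the trained ANN, and all GA settings are illustrative.

```python
# Sketch: a small continuous genetic algorithm maximising a surrogate of drug
# loading efficiency over three coded factors (X1, X2, X3). The surrogate is an
# analytic stand-in for the trained ANN; all GA settings are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def surrogate(X):                       # stand-in for ANN prediction of loading %
    x1, x2, x3 = X.T
    return 92 - 30*(x1 - 0.2)**2 - 20*(x2 - 0.5)**2 - 10*(x3 + 0.1)**2

def ga(pop_size=40, gens=60, bounds=(-1, 1)):
    pop = rng.uniform(*bounds, size=(pop_size, 3))
    for _ in range(gens):
        fit = surrogate(pop)
        parents = pop[np.argsort(fit)[-pop_size//2:]]          # truncation selection
        a, b = parents[rng.integers(0, len(parents), (2, pop_size))]
        w = rng.random((pop_size, 1))
        children = w*a + (1 - w)*b                              # blend crossover
        children += rng.normal(0, 0.05, children.shape)         # Gaussian mutation
        pop = np.clip(children, *bounds)
    best = pop[np.argmax(surrogate(pop))]
    return best, surrogate(best[None, :])[0]

print(ga())
```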

  9. Modified teaching approach for an enhanced medical physics graduate education experience

    PubMed Central

    Rutel, IB

    2011-01-01

    Lecture-based teaching promotes a passive interaction with students. Opportunities to modify this format are available to enhance the overall learning experience for both students and instructors. The description for a discussion-based learning format is presented as it applies to a graduate curriculum with technical (formal mathematical derivation) topics. The presented hybrid method involves several techniques, including problem-based learning, modeling, and online lectures, eliminating didactic lectures. The results from an end-of-course evaluation show that the students appear to prefer the modified format over the more traditional methodology of “lecture only” contact time. These results are motivation for further refinement and continued implementation of the described methodology in the current course and potentially other courses within the department graduate curriculum. PMID:22279505

  10. The relationship between grief adjustment and continuing bonds for parents who have lost a child.

    PubMed

    Ronen, Rama; Packman, Wendy; Field, Nigel P; Davies, Betty; Kramer, Robin; Long, Janet K

    This article presents findings from a study on the impact of a child's death on parents. We explored the prominence and adaptiveness of parents' continuing bonds expressions, psychological adjustment, and grief reactions. A qualitative case study methodology was used to describe six cases. Participants were classified into two groups based on scores on the Inventory of Complicated Grief. Commonalities in themes on the Continuing Bonds Interview and projective drawings were assessed. Those in the Non-Complicated Grief Group reported internalization of positive qualities and identification with the deceased child as a role model, whereas participants in the Complicated Grief Group did not report these experiences. In addition, the drawings of those in the Non-Complicated Grief Group were evaluated as more adaptive than those in the Complicated Grief Group.

  11. 75 FR 62403 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-08

    ... Project: 2011-2014 National Survey on Drug Use and Health: Methodological Field Tests (OMB No. 0930-0290..., SAMHSA received a three-year renewal of its generic clearance for methodological field tests. This will be a request for another renewal of the generic approval to continue methodological tests over the...

  12. Eye-Tracking as a Tool in Process-Oriented Reading Test Validation

    ERIC Educational Resources Information Center

    Solheim, Oddny Judith; Uppstad, Per Henning

    2011-01-01

    The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…

  13. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
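    A hedged sketch of the variance-based screening step: first-order Sobol' indices estimated with a pick-freeze (Saltelli-style) estimator on a toy four-variable function standing in for the GTM response; the sample size and the function are illustrative.

```python
# Sketch: first-order Sobol' indices via a pick-freeze estimator for a toy model,
# illustrating the variance-based screening step described above (the GTM itself
# is not reproduced; the toy function and sample size are illustrative).
import numpy as np

rng = np.random.default_rng(8)
d, N = 4, 20000

def model(X):                         # toy stand-in for the system response
    return X[:, 0] + 0.5*X[:, 1]**2 + 0.1*X[:, 2]*X[:, 3]

A, B = rng.uniform(-1, 1, (N, d)), rng.uniform(-1, 1, (N, d))
yA, yB = model(A), model(B)
var_y = yA.var()

S1 = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]               # "freeze" variable i at the A sample
    yABi = model(ABi)
    S1.append(np.mean(yA * (yABi - yB)) / var_y)   # Saltelli first-order estimator
print(np.round(S1, 3))
```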

  14. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators because the peak discharges are no longer solely represented by single values. Thus new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.
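    A minimal sketch of the core weighted-least-squares regression inside the B-WLS step, assuming synthetic at-site skews and known sampling variances; the Bayesian layers, GLS error model and EMA censoring machinery described above are not reproduced.

```python
# Sketch: weighted least squares of at-site skew on a basin characteristic with
# weights equal to inverse sampling variances, the core regression inside the
# B-WLS step. Data are synthetic, not the Iowa analysis.
import numpy as np

rng = np.random.default_rng(9)
n = 50
drain_area = rng.uniform(1, 3, n)                 # log10 drainage area (made up)
true_skew = -0.4 + 0.15*drain_area
var_i = rng.uniform(0.05, 0.4, n)                 # at-site sampling variances
obs_skew = true_skew + rng.normal(0, np.sqrt(var_i))

X = np.column_stack([np.ones(n), drain_area])
W = np.diag(1.0/var_i)                            # inverse-variance weights
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ obs_skew)
cov_beta = np.linalg.inv(X.T @ W @ X)
print(beta, np.sqrt(np.diag(cov_beta)))           # coefficients and std. errors
```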

  15. Talking about a (business continuity) revolution: Why best practices are wrong and possible solutions for getting them right.

    PubMed

    Armour, Mark

    The business continuity profession has been following a methodology that has barely evolved since its inception. Unfortunately, the stodgy, labour-intensive practices of the past are poorly suited to today's fast-paced and ever-changing work environments. Proposed herein is a new approach to the discipline. Just as agile methodology revolutionised project management, new tactics in preparedness can drastically change how this profession is practised. That is the hope. If there is to be any significant change in business continuity ahead, it may just take a revolution.

  16. Global tectonic reconstructions with continuously deforming and evolving rigid plates

    NASA Astrophysics Data System (ADS)

    Gurnis, Michael; Yang, Ting; Cannon, John; Turner, Mark; Williams, Simon; Flament, Nicolas; Müller, R. Dietmar

    2018-07-01

    Traditional plate reconstruction methodologies do not allow for plate deformation to be considered. Here we present software to construct and visualize global tectonic reconstructions with deforming plates within the context of rigid plates. Both deforming and rigid plates are defined by continuously evolving polygons. The deforming regions are tessellated with triangular meshes such that either strain rate or cumulative strain can be followed. The finite strain history, crustal thickness and stretching factor of points within the deformation zones are tracked as Lagrangian points. Integrating these tools within the interactive platform GPlates enables specialized users to build and refine deforming plate models and integrate them with other models in time and space. We demonstrate the integrated platform with regional reconstructions of Cenozoic western North America, the Mesozoic South American Atlantic margin, and Cenozoic southeast Asia, embedded within global reconstructions, using different data and reconstruction strategies.

  17. Continuous Trailing-Edge Flaps for Primary Flight Control of a Helicopter Main Rotor

    NASA Technical Reports Server (NTRS)

    Thornburgh, Robert P.; Kreshock, Andrew R.; Wilbur, Matthew L.; Sekula, Martin K.; Shen, Jinwei

    2014-01-01

    The use of continuous trailing-edge flaps (CTEFs) for primary flight control of a helicopter main rotor is studied. A practical, optimized bimorph design with Macro-Fiber Composite actuators is developed for CTEF control, and a coupled structures and computational fluid dynamics methodology is used to study the fundamental behavior of an airfoil with CTEFs. These results are used within a comprehensive rotorcraft analysis model to study the control authority requirements of the CTEFs when utilized for primary flight control of a utility class helicopter. A study of the effect of blade root pitch index (RPI) on CTEF control authority is conducted, and the impact of structural and aerodynamic model complexity on the comprehensive analysis results is presented. The results show that primary flight control using CTEFs is promising; however, a more viable option may include the control of blade RPI, as well.

  18. Integrating geological archives and climate models for the mid-Pliocene warm period.

    PubMed

    Haywood, Alan M; Dowsett, Harry J; Dolan, Aisling M

    2016-02-16

    The mid-Pliocene Warm Period (mPWP) offers an opportunity to understand a warmer-than-present world and assess the predictive ability of numerical climate models. Environmental reconstruction and climate modelling are crucial for understanding the mPWP, and the synergy of these two, often disparate, fields has proven essential in confirming features of the past and in turn building confidence in projections of the future. The continual development of methodologies to better facilitate environmental synthesis and data/model comparison is essential, with recent work demonstrating that time-specific (time-slice) syntheses represent the next logical step in exploring climate change during the mPWP and realizing its potential as a test bed for understanding future climate change.

  19. Didactical suggestion for a Dynamic Hybrid Intelligent e-Learning Environment (DHILE) applying the PENTHA ID Model

    NASA Astrophysics Data System (ADS)

    dall'Acqua, Luisa

    2011-08-01

    The aim of our research is to respond to the call for "innovative, creative teaching" by proposing a methodology to educate creative students in a society characterized by multiple reference points and hyper-dynamic knowledge that is continuously subject to review and discussion. We apply a multi-perspective Instructional Design Model (the PENTHA ID Model), defined and developed by our research group, which adopts a hybrid pedagogical approach consisting of elements of didactical connectivism intertwined with aspects of social constructivism and enactivism. The contribution proposes an e-course structure and approach that applies the theoretical design principles of the above-mentioned ID Model, describing the methods, techniques, technologies and assessment criteria used to define lesson modes in an e-course.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riensche, Roderick M.; Paulson, Patrick R.; Danielson, Gary R.

    We describe a methodology and architecture to support the development of games in a predictive analytics context. These games serve as part of an overall family of systems designed to gather input knowledge, calculate results of complex predictive technical and social models, and explore those results in an engaging fashion. The games provide an environment shaped and driven in part by the outputs of the models, allowing users to exert influence over a limited set of parameters, and displaying the results when those actions cause changes in the underlying model. We have crafted a prototype system in which we are implementing test versions of games driven by models in such a fashion, using a flexible architecture to allow for future continuation and expansion of this work.

  1. Integrating geological archives and climate models for the mid-Pliocene warm period

    PubMed Central

    Haywood, Alan M.; Dowsett, Harry J.; Dolan, Aisling M.

    2016-01-01

    The mid-Pliocene Warm Period (mPWP) offers an opportunity to understand a warmer-than-present world and assess the predictive ability of numerical climate models. Environmental reconstruction and climate modelling are crucial for understanding the mPWP, and the synergy of these two, often disparate, fields has proven essential in confirming features of the past and in turn building confidence in projections of the future. The continual development of methodologies to better facilitate environmental synthesis and data/model comparison is essential, with recent work demonstrating that time-specific (time-slice) syntheses represent the next logical step in exploring climate change during the mPWP and realizing its potential as a test bed for understanding future climate change. PMID:26879640

  2. A systematic petri net approach for multiple-scale modeling and simulation of biochemical processes.

    PubMed

    Chen, Ming; Hu, Minjie; Hofestädt, Ralf

    2011-06-01

    A method to exploit hybrid Petri nets for modeling and simulating biochemical processes in a systematic way is introduced. Both molecular biology and biochemical engineering aspects are addressed. With discrete and continuous elements, hybrid Petri nets can easily handle biochemical factors such as metabolite concentrations and kinetic behaviors. Both molecular biological behavior and biochemical process workflows can be translated into hybrid Petri nets in a natural manner. As an example, a penicillin production bioprocess is modeled to illustrate the concepts of the methodology. The dynamics of the production parameters in the bioprocess were simulated and visualized diagrammatically. Current problems and post-genomic perspectives are also discussed.
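
    As a toy illustration of the discrete-plus-continuous bookkeeping that hybrid Petri nets provide (a simplification, not the formalism or the penicillin model used in the paper), the sketch below couples an integer-marked discrete switch with continuously updated concentrations; the place names, rates, and thresholds are hypothetical.

```python
# Toy sketch of a hybrid Petri net step: a discrete transition fires when enabled,
# continuous places are integrated with simple Euler updates. Place and transition
# names (substrate, biomass, product) and all rate constants are invented.
import dataclasses

@dataclasses.dataclass
class HybridPetriNet:
    discrete: dict      # discrete place -> integer token count
    continuous: dict    # continuous place -> real-valued marking

    def step(self, dt):
        # Discrete part: "induction" fires once when biomass passes a threshold.
        if self.discrete["induction_pending"] > 0 and self.continuous["biomass"] > 5.0:
            self.discrete["induction_pending"] -= 1
            self.discrete["production_on"] += 1
        # Continuous part: growth consumes substrate; production runs only once switched on.
        growth = 0.4 * self.continuous["substrate"] * dt
        self.continuous["substrate"] = max(0.0, self.continuous["substrate"] - growth)
        self.continuous["biomass"] += growth
        if self.discrete["production_on"] > 0:
            self.continuous["product"] += 0.1 * self.continuous["biomass"] * dt

net = HybridPetriNet(
    discrete={"induction_pending": 1, "production_on": 0},
    continuous={"substrate": 20.0, "biomass": 1.0, "product": 0.0},
)
for _ in range(100):
    net.step(dt=0.1)
print(net.continuous)
```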

  3. RRegrs: an R package for computer-aided model selection with multiple regression models.

    PubMed

    Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L

    2015-01-01

    Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models, therefore raising model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models and, with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
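
    For readers who want a feel for the workflow, the sketch below reproduces the general pattern, several regression methods compared under repeated 10-fold cross-validation, in Python with scikit-learn. It is an analogue for illustration only, not the RRegrs package or its API; the dataset is synthetic and the method list is a subset chosen for brevity.

```python
# Illustrative Python analogue of the RRegrs workflow (RRegrs itself is an R package):
# compare several regression methods under repeated 10-fold cross-validation and
# report mean R^2. Dataset and method choices here are placeholders, not from the paper.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=1)
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=1)

models = {
    "Multiple linear regression": LinearRegression(),
    "Lasso": Lasso(alpha=0.1),
    "Partial least squares": PLSRegression(n_components=5),
    "Support vector regression": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name:28s} mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```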

  4. Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key

    PubMed Central

    Batchelder, William H.

    2014-01-01

    Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812

  5. A system-of-systems modeling methodology for strategic general aviation design decision-making

    NASA Astrophysics Data System (ADS)

    Won, Henry Thome

    General aviation has long been studied as a means of providing an on-demand "personal air vehicle" that bypasses the traffic at major commercial hubs. This thesis continues this research through development of a system of systems modeling methodology applicable to the selection of synergistic product concepts, market segments, and business models. From the perspective of the conceptual design engineer, the design and selection of future general aviation aircraft is complicated by the definition of constraints and requirements, and the tradeoffs among performance and cost aspects. Qualitative problem definition methods have been utilized, although their accuracy in determining specific requirement and metric values is uncertain. In industry, customers are surveyed, and business plans are created through a lengthy, iterative process. In recent years, techniques have been developed for predicting the characteristics of US travel demand based on travel mode attributes, such as door-to-door time and ticket price. As yet, these models treat the contributing systems---aircraft manufacturers and service providers---as independently variable assumptions. In this research, a methodology is developed which seeks to build a strategic design decision-making environment through the construction of a system of systems model. The demonstrated implementation brings together models of the aircraft and manufacturer, the service provider, and most importantly the travel demand. Thus represented is the behavior of the consumers and the reactive behavior of the suppliers---the manufacturers and transportation service providers---in a common modeling framework. The results indicate an ability to guide the design process---specifically the selection of design requirements---through the optimization of "capability" metrics. Additionally, results indicate the ability to find synergetic solutions, that is, solutions in which two systems might collaborate to achieve a better result than acting independently. Implementation of this methodology can afford engineers a more autonomous perspective in the concept exploration process, providing dynamic feedback about a design's potential success in specific market segments. The method also has potential to strengthen the connection between design and business departments, as well as between manufacturers, service providers, and infrastructure planners---bringing information about how the respective systems interact, and what might be done to improve synergism of systems.

  6. Fast maximum likelihood estimation using continuous-time neural point process models.

    PubMed

    Lepage, Kyle Q; MacDonald, Christopher J

    2015-06-01

    A recent report estimates that the number of simultaneously recorded neurons is growing exponentially. A commonly employed statistical paradigm using discrete-time point process models of neural activity involves the computation of a maximum-likelihood estimate. The time to compute this estimate, per neuron, is proportional to the number of bins in a finely spaced discretization of time. By using continuous-time models of neural activity and the optimally efficient Gaussian quadrature, memory requirements and computation times are dramatically decreased in the commonly encountered situation where the number of parameters p is much less than the number of time-bins n. In this regime, with q equal to the quadrature order, memory requirements are decreased from O(np) to O(qp), and the number of floating-point operations are decreased from O(np(2)) to O(qp(2)). Accuracy of the proposed estimates is assessed based upon physiological considerations, error bounds, and mathematical results describing the relation between numerical integration error and numerical error affecting both parameter estimates and the observed Fisher information. A check is provided which is used to adapt the order of numerical integration. The procedure is verified in simulation and for hippocampal recordings. It is found that in 95% of hippocampal recordings a q of 60 yields numerical error negligible with respect to parameter estimate standard error. Statistical inference using the proposed methodology is a fast and convenient alternative to statistical inference performed using a discrete-time point process model of neural activity. It enables the employment of the statistical methodology available with discrete-time inference, but is faster, uses less memory, and avoids any error due to discretization.
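
    A minimal sketch of the underlying idea, evaluating a continuous-time point-process log-likelihood with Gauss-Legendre quadrature of order q rather than a fine time discretization, is given below. The intensity model and spike times are invented for illustration; this is not the authors' implementation.

```python
# Minimal sketch: log L = sum_i log lambda(t_i) - int_0^T lambda(t) dt, with the
# integral approximated by Gauss-Legendre quadrature of order q instead of binning time.
# The intensity model (exponential of a linear ramp) is purely illustrative.
import numpy as np

def log_likelihood(spike_times, theta, T, q=60):
    """theta parameterizes lambda(t) = exp(theta[0] + theta[1] * t)."""
    lam = lambda t: np.exp(theta[0] + theta[1] * t)
    # Gauss-Legendre nodes/weights on [-1, 1], rescaled to [0, T].
    nodes, weights = np.polynomial.legendre.leggauss(q)
    t_nodes = 0.5 * T * (nodes + 1.0)
    integral = 0.5 * T * np.sum(weights * lam(t_nodes))
    return np.sum(np.log(lam(np.asarray(spike_times)))) - integral

# Example: 100 hypothetical spike times on a 10-second recording.
rng = np.random.default_rng(2)
spikes = np.sort(rng.uniform(0.0, 10.0, 100))
print(log_likelihood(spikes, theta=np.array([1.0, 0.15]), T=10.0))
```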

  7. Environment, genes, and experience: lessons from behavior genetics.

    PubMed

    Barsky, Philipp I

    2010-11-01

    The article reviews the theoretical analysis of the problems inherent in studying the environment within behavior genetics across several periods in the development of environmental studies in behavior genetics and proposes some possible alternatives to traditional approaches to studying the environment in behavior genetics. The first period (from the end of the 1920s to the end of the 1970s), when the environment was not actually studied, is called pre-environmental; during this time, the basic principles and theoretical models of understanding environmental effects in behavior genetics were developed. The second period is characterized by the development of studies on environmental influences within the traditional behavior genetics paradigm; several approaches to studying the environment emerged in behavior genetics during this period, from the beginning of the 1980s until today. At the present time, the field is undergoing paradigmatic changes, concerned with methodology, theory, and mathematical models of genotype-environment interplay; this might be the beginning of a third period of development of environmental studies in behavior genetics. In another part, the methodological problems related to environmental studies in behavior genetics are discussed. Although the methodology used in differential psychology is applicable for assessment of differences between individuals, it is insufficient to explain the sources of these differences. In addition, we stress that psychoanalytic studies of twins and their experiences, initiated in the 1930s and continued episodically until the 1980s, could bring an interesting methodology and contribute to the explanation of puzzling findings from environmental studies of behavior genetics. Finally, we will conclude with implications from the results of environmental studies in behavior genetics, including methodological issues. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Modeling, Analyzing, and Mitigating Dissonance Between Alerting Systems

    NASA Technical Reports Server (NTRS)

    Song, Lixia; Kuchar, James K.

    2003-01-01

    Alerting systems are becoming pervasive in process operations, which may result in the potential for dissonance or conflict in information from different alerting systems that suggests different threat levels and/or actions to resolve hazards. Little is currently available to help in predicting or solving the dissonance problem. This thesis presents a methodology to model and analyze dissonance between alerting systems, providing both a theoretical foundation for understanding dissonance and a practical basis from which specific problems can be addressed. A state-space representation of multiple alerting system operation is generalized that can be tailored across a variety of applications. Based on the representation, two major causes of dissonance are identified: logic differences and sensor error. Additionally, several possible types of dissonance are identified. A mathematical analysis method is developed to identify the conditions for dissonance originating from logic differences. A probabilistic analysis methodology is developed to estimate the probability of dissonance originating from sensor error, and to compare the relative contribution to dissonance of sensor error against the contribution from logic differences. A hybrid model, which describes the dynamic behavior of the process with multiple alerting systems, is developed to identify dangerous dissonance space, from which the process can lead to disaster. Methodologies to avoid or mitigate dissonance are outlined. Two examples are used to demonstrate the application of the methodology. First, a conceptual In-Trail Spacing example is presented. The methodology is applied to identify the conditions for possible dissonance, to identify relative contribution of logic difference and sensor error, and to identify dangerous dissonance space. Several proposed mitigation methods are demonstrated in this example. In the second example, the methodology is applied to address the dissonance problem between two air traffic alert and avoidance systems: the existing Traffic Alert and Collision Avoidance System (TCAS) vs. the proposed Airborne Conflict Management system (ACM). Conditions on ACM resolution maneuvers are identified to avoid dynamic dissonance between TCAS and ACM. Also included in this report is an Appendix written by Lee Winder about recent and continuing work on alerting systems design. The application of Markov Decision Process (MDP) theory to complex alerting problems is discussed and illustrated with an abstract example system.

  9. A Methodology for Phased Array Radar Threshold Modeling Using the Advanced Propagation Model (APM)

    DTIC Science & Technology

    2017-10-01

    TECHNICAL REPORT 3079, October 2017. This report summarizes the methodology developed to improve radar threshold modeling with the Advanced Propagation Model (APM) for a phased array radar configuration.

  10. Methodology for assessing quantities of water and proppant injection, and water production associated with development of continuous petroleum accumulations

    USGS Publications Warehouse

    Haines, Seth S.

    2015-07-13

    The quantities of water and hydraulic fracturing proppant required for producing petroleum (oil, gas, and natural gas liquids) from continuous accumulations, and the quantities of water extracted during petroleum production, can be quantitatively assessed using a probabilistic approach. The water and proppant assessment methodology builds on the U.S. Geological Survey methodology for quantitative assessment of undiscovered technically recoverable petroleum resources in continuous accumulations. The U.S. Geological Survey assessment methodology for continuous petroleum accumulations includes fundamental concepts such as geologically defined assessment units, and probabilistic input values including well-drainage area, sweet- and non-sweet-spot areas, and success ratio within the untested area of each assessment unit. In addition to petroleum-related information, required inputs for the water and proppant assessment methodology include probabilistic estimates of per-well water usage for drilling, cementing, and hydraulic-fracture stimulation; the ratio of proppant to water for hydraulic fracturing; the percentage of hydraulic fracturing water that returns to the surface as flowback; and the ratio of produced water to petroleum over the productive life of each well. Water and proppant assessments combine information from recent or current petroleum assessments with water- and proppant-related input values for the assessment unit being studied, using Monte Carlo simulation, to yield probabilistic estimates of the volume of water for drilling, cementing, and hydraulic fracture stimulation; the quantity of proppant for hydraulic fracture stimulation; and the volumes of water produced as flowback shortly after well completion, and produced over the life of the well.
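
    The sketch below illustrates, under invented input distributions, the Monte Carlo combination step described above: a well count and per-well water, proppant, and produced-water factors are drawn jointly and summed to assessment-unit totals reported as percentiles. It is a schematic of the general approach, not the USGS assessment code, and every distribution and value is a placeholder.

```python
# Hedged sketch of the Monte Carlo logic described above: combine a probabilistic
# well count with per-well water and proppant distributions to get assessment-unit
# totals. All distributions and parameter values are invented placeholders.
import numpy as np

rng = np.random.default_rng(7)
n_iter = 10_000

# Hypothetical inputs (each iteration draws one realization).
n_wells = rng.triangular(left=500, mode=1200, right=2500, size=n_iter)
water_per_well_m3 = rng.lognormal(mean=np.log(15_000), sigma=0.35, size=n_iter)
proppant_ratio_kg_per_m3 = rng.uniform(80, 140, size=n_iter)
produced_water_ratio = rng.uniform(0.2, 3.0, size=n_iter)      # produced water : petroleum
petroleum_per_well_m3 = rng.lognormal(mean=np.log(30_000), sigma=0.5, size=n_iter)

total_water = n_wells * water_per_well_m3
total_proppant = n_wells * water_per_well_m3 * proppant_ratio_kg_per_m3
total_produced_water = n_wells * petroleum_per_well_m3 * produced_water_ratio

for name, sample in [("injection water (m^3)", total_water),
                     ("proppant (kg)", total_proppant),
                     ("produced water (m^3)", total_produced_water)]:
    p5, p50, p95 = np.percentile(sample, [5, 50, 95])
    print(f"{name:22s} P5={p5:.3g}  P50={p50:.3g}  P95={p95:.3g}")
```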

  11. A modeling approach for aerosol optical depth analysis during forest fire events

    NASA Astrophysics Data System (ADS)

    Aube, Martin P.; O'Neill, Normand T.; Royer, Alain; Lavoue, David

    2004-10-01

    Measurements of aerosol optical depth (AOD) are important indicators of aerosol particle behavior. Up to now the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency and sparse spatial frequency, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) based inversion algorithms, which yield AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new assimilation methodology that links AOD measurements and the predictions of a particulate matter Transport Model. This modelling package (AODSEM V2.0, for Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution may be tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important and robust parameter. We applied this methodology to a significant smoke event that occurred over the eastern part of North America in July 2002.

  12. Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy

    NASA Astrophysics Data System (ADS)

    Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.

    2010-03-01

    Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.

  13. Applications of response surface methodology and artificial neural network for decolorization of distillery spent wash by using activated Piper nigrum.

    PubMed

    Arulmathi, P; Elangovan, G

    2016-11-01

    Ethanol production from sugarcane molasses yields a large volume of highly colored spent wash as effluent. This color is imparted by the recalcitrant melanoidin pigment produced by the Maillard reaction. In the present work, decolourization of melanoidin was carried out using activated carbon prepared from pepper stem (Piper nigrum). The interaction effects between parameters were studied by response surface methodology using a central composite design, and a maximum decolourization of 75% was obtained at pH 7.5 and a melanoidin concentration of 32.5 mg l-1 with 1.63 g 100 ml-1 of adsorbent for 2 hr 75 min. An artificial neural network was also used to optimize the process parameters, giving 74% decolourization for the same parameters. The Langmuir and Freundlich isotherms were applied to describe the biosorption equilibrium. The process was represented by the Langmuir isotherm with a correlation coefficient of 0.94. First-order and second-order models were implemented to describe the biosorption mechanism; the pseudo-second-order kinetic model fitted the experimental data best. The estimated enthalpy change (ΔH) and entropy change (ΔS) of adsorption were 32.195 kJ mol-1 and 115.44 J mol-1 K-1, which indicates that the adsorption of melanoidin was an endothermic process. Continuous adsorption studies were conducted under the optimized conditions. The breakthrough curve analysis was carried out using the experimental data obtained from continuous adsorption. Continuous column studies gave a breakthrough at 182 min and 176 ml. It was concluded that a column packed with Piper nigrum-based activated carbon can be used to remove color from distillery spent wash.
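
    As a generic illustration of how the isotherm and kinetic models named above are commonly fitted (not the authors' calculation, and with made-up data points), the following snippet uses scipy.optimize.curve_fit for the Langmuir isotherm and the pseudo-second-order kinetic model.

```python
# Illustrative fit of a Langmuir isotherm and a pseudo-second-order kinetic model
# with scipy.optimize.curve_fit; the data points below are hypothetical, not the
# measurements reported in the study.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Equilibrium uptake qe as a function of equilibrium concentration Ce."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe, k2):
    """Uptake q(t) for pseudo-second-order kinetics."""
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

Ce = np.array([5.0, 10.0, 20.0, 30.0, 40.0])            # mg/L (hypothetical)
qe_obs = np.array([3.1, 5.2, 7.6, 8.9, 9.6])            # mg/g (hypothetical)
(qmax, KL), _ = curve_fit(langmuir, Ce, qe_obs, p0=[10.0, 0.1])

t = np.array([10, 30, 60, 90, 120, 180], dtype=float)   # min (hypothetical)
q_t = np.array([2.0, 4.5, 6.8, 7.8, 8.3, 8.8])          # mg/g (hypothetical)
(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, q_t, p0=[9.0, 0.01])

print(f"Langmuir: qmax={qmax:.2f} mg/g, KL={KL:.3f} L/mg")
print(f"Pseudo-second-order: qe={qe_fit:.2f} mg/g, k2={k2:.4f} g/(mg*min)")
```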

  14. Outside users payload model

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The outside users payload model, which continues an earlier series of documents and replaces and supersedes the July 1984 edition, is presented. The time period covered by this model is 1985 through 2000. The following sections are included: (1) definition of the scope of the model; (2) discussion of the methodology used; (3) overview of total demand; (4) summary of the estimated market segmentation by launch vehicle; (5) summary of the estimated market segmentation by user type; (6) details of the STS market forecast; (7) summary of transponder trends; (8) model overview by mission category; and (9) detailed mission models. All known non-NASA, non-DOD reimbursable payloads forecast to be flown by non-Soviet-bloc countries are included in this model, with the exception of Spacelab payloads and small self-contained payloads. Certain DOD-sponsored or cosponsored payloads are included if they are reimbursable launches.

  15. General calibration methodology for a combined Horton-SCS infiltration scheme in flash flood modeling

    NASA Astrophysics Data System (ADS)

    Gabellani, S.; Silvestro, F.; Rudari, R.; Boni, G.

    2008-12-01

    Flood forecasting undergoes a constant evolution, placing ever greater demands on the models used for hydrologic simulations. The advantages of developing distributed or semi-distributed models are by now clear, and the importance of using continuous distributed modeling is emerging. A proper schematization of the infiltration process is vital to these types of models. Many popular infiltration schemes, reliable and easy to implement, are too simplistic for the development of continuous hydrologic models. On the other hand, the unavailability of detailed and descriptive information on soil properties often limits the implementation of complete infiltration schemes. In this work, a combination of the Soil Conservation Service Curve Number (SCS-CN) method and a method derived from the Horton equation is proposed in order to overcome the inherent limits of the two schemes. The SCS-CN method is easily applicable over large areas, but has structural limitations. Horton-like methods present parameters that, though measurable to a point, are difficult to estimate reliably at the catchment scale. The objective of this work is to overcome these limits by proposing a calibration procedure which maintains the large applicability of the SCS-CN method as well as the continuous description of the infiltration process given by a suitably modified Horton equation. The estimation of the parameters of the modified Horton method is carried out using a formal analogy with the SCS-CN method under specific conditions. Some applications, at catchment scale within a distributed model, are presented.
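
    For orientation, the sketch below implements the two textbook ingredients that the paper combines, SCS-CN event runoff and a Horton-type infiltration-capacity curve. The calibration linking the curve number to the modified Horton parameters is not reproduced here, and all numbers are illustrative.

```python
# Minimal sketch of the two ingredients combined in the paper: event runoff from the
# SCS Curve Number method and a Horton-type infiltration capacity curve. The actual
# calibration between CN and Horton parameters is not reproduced; values are examples.
import numpy as np

def scs_cn_runoff(P_mm, CN, lambda_ia=0.2):
    """Direct runoff Q (mm) for event rainfall P (mm) with curve number CN."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = lambda_ia * S                # initial abstraction
    if P_mm <= Ia:
        return 0.0
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

def horton_infiltration(t_h, f0, fc, k):
    """Infiltration capacity (mm/h) at time t (hours) after ponding begins."""
    return fc + (f0 - fc) * np.exp(-k * t_h)

print("SCS-CN runoff for a 60 mm storm, CN=75:", round(scs_cn_runoff(60.0, 75), 1), "mm")
t = np.linspace(0, 6, 7)
print("Horton capacity (mm/h):", np.round(horton_infiltration(t, f0=60.0, fc=10.0, k=1.2), 1))
```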

  16. An Introduction to Flight Software Development: FSW Today, FSW 2010

    NASA Technical Reports Server (NTRS)

    Gouvela, John

    2004-01-01

    Experience and knowledge gained from ongoing maintenance of Space Shuttle Flight Software and new development projects including Cockpit Avionics Upgrade are applied to projected needs of the National Space Exploration Vision through Spiral 2. Lessons learned from these current activities are applied to create a sustainable, reliable model for development of critical software to support Project Constellation. This presentation introduces the technologies, methodologies, and infrastructure needed to produce and sustain high-quality software. It will propose what is needed to support a Vision for Space Exploration that places demands on the innovation and productivity needed to support future space exploration. The technologies in use today within FSW development include tools that provide requirements tracking, integrated change management, and modeling and simulation software. Specific challenges that have been met include the introduction and integration of a Commercial Off-the-Shelf (COTS) Real-Time Operating System for critical functions. Though technology prediction has proved to be imprecise, Project Constellation requirements will need continued integration of new technology with evolving methodologies and changing project infrastructure. Targets for continued technology investment are integrated health monitoring and management, self-healing software, standard payload interfaces, autonomous operation, and improvements in training. Emulation of the target hardware will also allow significant streamlining of development and testing. The methodologies in use today for FSW development are object-oriented UML design, iterative development using independent components, and rapid prototyping. In addition, Lean Six Sigma and CMMI play a critical role in the quality and efficiency of the workforce processes. Over the next six years, we expect these methodologies to merge with other improvements into a consolidated office culture with all processes being guided by automated office assistants. The infrastructure in use today includes strict software development and configuration management procedures, including strong control of resource management and critical skills coverage. This will evolve to a fully integrated staff organization with efficient and effective communication throughout all levels, guided by a Mission-Systems Architecture framework with a focus on risk management and attention toward inevitable product obsolescence. This infrastructure of computing equipment, software and processes will itself be subject to technological change and to the need for management of change and improvement.

  17. 75 FR 78720 - Agency Information Collection Activities: Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-16

    Proposed Project: 2011-2014 National Survey on Drug Use and Health: Methodological Field Tests (OMB No. ...). SAMHSA received a 3-year renewal of its generic clearance for methodological field tests. This will be a request for another renewal of the generic approval to continue methodological tests over the next 3 years...

  18. Review of Research on School Principal Leadership in Mainland China, 1998-2013: Continuity and Change

    ERIC Educational Resources Information Center

    Walker, Allan; Qian, Haiyan

    2015-01-01

    Purpose: The purpose of this paper is to review English-language publications about school principalship in China published between 1998 and 2013 and to present an overview of the authorship, topics, methodologies and key findings of these publications. Design/methodology/approach: The methodology includes an exhaustive review of journal articles…

  19. Methodology to develop crash modification functions for road safety treatments with fully specified and hierarchical models.

    PubMed

    Chen, Yongsheng; Persaud, Bhagwant

    2014-09-01

    Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or the traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, as important, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components - fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. CFD analysis on gas distribution for different scrubber redirection configurations in sump cut.

    PubMed

    Zheng, Y; Organiscak, J A; Zhou, L; Beck, T W; Rider, J P

    2015-01-01

    The National Institute for Occupational Safety and Health's Office of Mine Safety and Health Research recently developed a series of models using computational fluid dynamics (CFD) to study the gas distribution around a continuous mining machine with various fan-powered flooded bed scrubber discharge configurations. CFD models using Species Transport Model without reactions in FLUENT were constructed to evaluate the redirection of scrubber discharge toward the mining face rather than behind the return curtain. The following scenarios are considered in this study: 100 percent of the discharge redirected back toward the face on the off-curtain side of the continuous miner; 100 percent of the discharge redirected back toward the face, but divided equally to both sides of the machine; and 15 percent of the discharge redirected toward the face on the off-curtain side of the machine, with 85 percent directed into the return. These models were compared against a model with a conventional scrubber discharge, where air is directed away from the face into the return. The CFD models were calibrated and validated based on experimental data and accurately predicted sulfur hexafluoride (SF6) gas levels at four gas monitoring locations. One additional prediction model was simulated to consider a different scrubber discharge angle for the 100 percent redirected, equally divided case. These models identified relatively high gassy areas around the continuous miner, which may not warrant their use in coal mines with medium to high methane liberation rates. This paper describes the methodology used to develop the CFD models, and the validation of the models based on experimental data.

  1. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  2. Ecological risk assessment of ecosystem services in the Taihu Lake Basin of China from 1985 to 2020.

    PubMed

    Xu, Xibao; Yang, Guishan; Tan, Yan; Zhuang, Qianlai; Li, Hengpeng; Wan, Rongrong; Su, Weizhong; Zhang, Jian

    2016-06-01

    There are tremendous theoretical, methodological and policy challenges in evaluating the impact of land-use change on the degradation of ecosystem services (ES) at the regional scale. This study addresses these challenges by developing an interdisciplinary methodology based on the Procedure for Ecological Tiered Assessment of Risk (PETAR). This novel methodology integrates ecological models with a land-use change model. This study quantifies the multi-dimensional degradation risks of ES in the Taihu Lake Basin (TLB) of China from 1985 to 2020. Four key ES related to water purification, water quantity adjustment, carbon sequestration and grain production are selected. The study employs models of Denitrification-Decomposition (DNDC), Soil-Water-Atmosphere-Plant (SWAP), Biome-BGC and Agro-ecological Zoning (AEZ) for assimilations. Land-use changes by 2020 were projected using a geographically weighted multinomial logit-cellular automata (GWML-CA) model. The results show that rapid land-use change has posed a great degradation risk of ES in the region in 1985-2020. Slightly less than two-thirds of the basin experienced degradation of ES over the 1985-2010 period, and about 12% of the basin will continue to experience degradation until 2020. Hot spots with severe deterioration in 2010-2020 are projected to be centered around some small and less developed cities in the region. Regulating accelerated urban sprawl and population growth, reinforcing current environmental programs, and establishing monitoring systems for observing dynamics of regional ES are suggested as practical counter-measures. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. An Empirical Study of Re-sampling Techniques as a Method for Improving Error Estimates in Split-plot Designs

    DTIC Science & Technology

    2010-03-01

    sufficient replications often lead to models that lack precision in error estimation and thus imprecision in corresponding conclusions. This work develops, examines, and tests methodologies for analyzing test results from split-plot designs; in particular, it determines the applicability of re-sampling techniques as a method for improving error estimates.

  4. Single service point: it's all in the design.

    PubMed

    Bradigan, Pamela S; Rodman, Ruey L

    2008-01-01

    "Design thinking" principles from a leading design firm, IDEO, were key elements in the planning process for a one-desk service model, the ASK Desk, at the John A. Prior Health Sciences Library. The library administration and staff employed the methodology to enhance customer experiences, meet technology challenges, and compete in a changing education environment. The most recent renovations demonstrate how the principles were applied. The concept of "continuous design thinking" is important in the library's daily operations to serve customers most effectively.

  5. A life prediction model for laminated composite structural components

    NASA Technical Reports Server (NTRS)

    Allen, David H.

    1990-01-01

    A life prediction methodology for laminated continuous-fiber composites subjected to fatigue loading conditions was developed, and a summary of the completed research is presented. A phenomenological damage evolution law that is independent of stacking sequence was formulated for matrix cracking. Mechanistic and physical support was developed for this phenomenological evolution law. The damage evolution law was then implemented in a finite element computer program, and preliminary predictions were obtained for a structural component undergoing fatigue-loading-induced damage.

  6. Research methodology workshops evaluation using the Kirkpatrick's model: translating theory into practice.

    PubMed

    Abdulghani, Hamza Mohammad; Shaik, Shaffi Ahamed; Khamis, Nehal; Al-Drees, Abdulmajeed Abdulrahman; Irshad, Mohammad; Khalil, Mahmoud Salah; Alhaqwi, Ali Ibrahim; Isnani, Arthur

    2014-04-01

    Qualitative and quantitative evaluation of academic programs can enhance the development, effectiveness, and dissemination of comparative quality reports as well as quality improvement efforts. The objective was to evaluate five research methodology workshops by assessing participants' satisfaction, knowledge and skills gain, and impact on practice using Kirkpatrick's evaluation model. The four-level Kirkpatrick model was applied for the evaluation. Training feedback questionnaires, pre- and post-tests, learner development plan reports and behavioral surveys were used to evaluate the effectiveness of the workshop programs. Of the 116 participants, 28 (24.1%) liked the programs with appreciation, 62 (53.4%) liked them with suggestions and 26 (22.4%) disliked them. Pre- and post-test MCQ mean scores showed a significant improvement in relevant basic knowledge and cognitive skills of 17.67% (p ≤ 0.005). Pre- and post-test scores on workshop sub-topics also improved for manuscript writing (p ≤ 0.031) and proposal writing (p ≤ 0.834). As for impact, 56.9% of participants started research, and 6.9% published their studies. The results of participants' performance revealed overall positive feedback, and 79% of participants reported transfer of training skills at their workplace. The achievement of course outcomes and the suggestions given for improvements offer insight into the program and were encouraging and very useful. Encouraging a "research culture" and work-based learning are probably the most powerful determinants of research promotion. These findings therefore encourage the faculty development unit to continue its training and development in research methodology.

  7. Three-dimensional printing of continuous-fiber composites by in-nozzle impregnation

    PubMed Central

    Matsuzaki, Ryosuke; Ueda, Masahito; Namiki, Masaki; Jeong, Tae-Kun; Asahara, Hirosuke; Horiguchi, Keisuke; Nakamura, Taishi; Todoroki, Akira; Hirano, Yoshiyasu

    2016-01-01

    We have developed a method for the three-dimensional (3D) printing of continuous fiber-reinforced thermoplastics based on fused-deposition modeling. The technique enables direct 3D fabrication without the use of molds and may become the standard next-generation composite fabrication methodology. A thermoplastic filament and continuous fibers were separately supplied to the 3D printer and the fibers were impregnated with the filament within the heated nozzle of the printer immediately before printing. Polylactic acid was used as the matrix while carbon fibers, or twisted yarns of natural jute fibers, were used as the reinforcements. The thermoplastics reinforced with unidirectional jute fibers were examples of plant-sourced composites; those reinforced with unidirectional carbon fiber showed mechanical properties superior to those of both the jute-reinforced and unreinforced thermoplastics. Continuous fiber reinforcement improved the tensile strength of the printed composites relative to the values shown by conventional 3D-printed polymer-based composites. PMID:26965201

  8. Effects of space environment on composites: An analytical study of critical experimental parameters

    NASA Technical Reports Server (NTRS)

    Gupta, A.; Carroll, W. F.; Moacanin, J.

    1979-01-01

    A generalized methodology currently employed at JPL was used to develop an analytical model for the effects of high-energy electrons and the interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters; the need for determining short-lived transient species and their concentrations was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show applicable qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify these relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration level; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.

  9. Marine Viral Pathogens.

    DTIC Science & Technology

    1998-05-13

    coccolithophorid Emiliania huxleyi. Experiments are continuing to determine whether the pathogens are viral. We have continued the development of PCR primers... Emiliania huxleyi; further work will be required to determine if the pathogen is viral. We have also continued methodological work to improve our ability

  10. A Summary of Revisions Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.; Perry, Boyd, III; Silva, Walter A.; Newman, Brett

    2014-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a class of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and additional output data so as to provide a more useful and precise tool for gust load analysis. In order to improve the original software program and enhance its usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
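
    The discrete one-minus-cosine gust mentioned above has a simple closed form; the sketch below evaluates it for an arbitrary (hypothetical) gust amplitude and gradient distance, purely as an illustration of the input shape rather than of the revised analysis software.

```python
# Short sketch of the discrete "one-minus-cosine" gust profile referred to above;
# the gust length and amplitude values are arbitrary examples, not from the report.
import numpy as np

def one_minus_cosine_gust(x, U_ds, H):
    """Gust velocity at penetration distance x, for design gust velocity U_ds and
    gust gradient distance H; the profile is zero outside 0 <= x <= 2H."""
    x = np.asarray(x, dtype=float)
    u = 0.5 * U_ds * (1.0 - np.cos(np.pi * x / H))
    return np.where((x >= 0.0) & (x <= 2.0 * H), u, 0.0)

x = np.linspace(0.0, 300.0, 7)                        # ft (hypothetical)
print(one_minus_cosine_gust(x, U_ds=30.0, H=150.0))   # ft/s (hypothetical)
```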

  11. A Predictive Safety Management System Software Package Based on the Continuous Hazard Tracking and Failure Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Quintana, Rolando

    2003-01-01

    The goal of this research was to integrate a previously validated and reliable safety model, the Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS); that is, the theory and principles of the CHTFPM were incorporated into a software package, so the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies and to facilitate the handling of the enormous quantities of information involved in this type of study. The CHTFPM theory embodies the philosophy of looking at safety engineering from a new perspective: a proactive, rather than reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. In this sense the CHTFPM is a predictive safety methodology: it foresees or anticipates accidents, system failures and unacceptable risks, so corrective action can be taken to prevent these unwanted outcomes. Consequently, the safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.

  12. The art and science of cancer education and evaluation: toward facilitating improved patient outcomes.

    PubMed

    Johnson, Lenora; Ousley, Anita; Swarz, Jeffrey; Bingham, Raymond J; Erickson, J Bianca; Ellis, Steven; Moody, Terra

    2011-03-01

    Cancer education is a constantly evolving field, as science continues to advance both our understanding of cancer and its effects on patients, families, and communities. Moving discoveries to practice expeditiously is paramount to impacting cancer outcomes. The continuing education of cancer care professionals throughout their practice life is vital to facilitating the adoption of therapeutic innovations. Meanwhile, more general educational programs serve to keep cancer patients, their families, and the public informed of the latest findings in cancer research. The National Cancer Institute conducted an assessment of the current knowledge base for cancer education which involved two literature reviews, one of the general literature of the evaluation of medical and health education efforts, and the other of the preceding 5 years of the Journal of Cancer Education (JCE). These reviews explored a wide range of educational models and methodologies. In general, those that were most effective used multiple methodologies, interactive techniques, and multiple exposures over time. Less than one third of the articles in the JCE reported on a cancer education or communication product, and of these, only 70% had been evaluated for effectiveness. Recommendations to improve the evaluation of cancer education and the educational focus of the JCE are provided.

  13. Modular continuous wavelet processing of biosignals: extracting heart rate and oxygen saturation from a video signal

    PubMed Central

    2016-01-01

    A novel method of extracting heart rate and oxygen saturation from a video-based biosignal is described. The method comprises a novel modular continuous wavelet transform approach which includes: performing the transform, undertaking running wavelet archetyping to enhance the pulse information, extraction of the pulse ridge time–frequency information [and thus a heart rate (HRvid) signal], creation of a wavelet ratio surface, projection of the pulse ridge onto the ratio surface to determine the ratio of ratios from which a saturation trending signal is derived, and calibrating this signal to provide an absolute saturation signal (SvidO2). The method is illustrated through its application to a video photoplethysmogram acquired during a porcine model of acute desaturation. The modular continuous wavelet transform-based approach is advocated by the author as a powerful methodology to deal with noisy, non-stationary biosignals in general. PMID:27382479
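
    A highly simplified stand-in for the core step, a continuous wavelet transform followed by ridge extraction within a plausible heart-rate band, is sketched below using PyWavelets on a synthetic pulse-like signal. It is not a reimplementation of the paper's modular method, its running wavelet archetyping, or its calibration to absolute saturation.

```python
# Hedged sketch: continuous wavelet transform of a synthetic photoplethysmogram-like
# signal, then follow the ridge of maximum power in a heart-rate band to estimate
# heart rate over time. Sampling rate, scales, and band limits are assumptions.
import numpy as np
import pywt

fs = 30.0                                        # assumed video frame rate, Hz
t = np.arange(0, 60, 1 / fs)
hr_hz = 1.2 + 0.2 * np.sin(2 * np.pi * t / 60)   # slowly drifting pulse rate (~60-84 bpm)
signal = np.sin(2 * np.pi * np.cumsum(hr_hz) / fs) + 0.3 * np.random.randn(t.size)

scales = np.arange(4, 64)
coefs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Pulse ridge: at each time step, pick the frequency of maximum wavelet power
# within a plausible heart-rate band (0.7-3 Hz, i.e. 42-180 bpm).
band = (freqs > 0.7) & (freqs < 3.0)
power = np.abs(coefs[band]) ** 2
hr_estimate_bpm = 60.0 * freqs[band][np.argmax(power, axis=0)]
print("median estimated heart rate:", round(np.median(hr_estimate_bpm), 1), "bpm")
```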

  14. Speed Accuracy Tradeoffs in Human Speech Production

    DTIC Science & Technology

    2017-05-01

    for considering Fitts’ law in the domain of speech production is elucidated. Methodological challenges in applying Fitts-style analysis are addressed...order to assess whether articulatory kinematics conform to Fitts’ law. A second, associated goal is to address the methodological challenges inherent in...performing Fitts-style analysis on rtMRI data of speech production. Methodological challenges include segmenting continuous speech into specific motor

  15. Speed-Accuracy Tradeoffs in Speech Production

    DTIC Science & Technology

    2017-06-01

    imaging data of speech production. A theoretical framework for considering Fitts’ law in the domain of speech production is elucidated. Methodological ...articulatory kinematics conform to Fitts’ law. A second, associated goal is to address the methodological challenges inherent in performing Fitts-style...analysis on rtMRI data of speech production. Methodological challenges include segmenting continuous speech into specific motor tasks, defining key

  16. Modeling Common-Sense Decisions in Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    A methodology has been conceived for efficient synthesis of dynamical models that simulate common-sense decision-making processes. This methodology is intended to contribute to the design of artificial-intelligence systems that could imitate human common-sense decision making or assist humans in making correct decisions in unanticipated circumstances. This methodology is a product of continuing research on mathematical models of the behaviors of single- and multi-agent systems known in biology, economics, and sociology, ranging from a single-cell organism at one extreme to the whole of human society at the other extreme. Earlier results of this research were reported in several prior NASA Tech Briefs articles, the three most recent and relevant being Characteristics of Dynamics of Intelligent Systems (NPO-21037), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48; Self-Supervised Dynamical Systems (NPO-30634), NASA Tech Briefs, Vol. 27, No. 3 (March 2003), page 72; and Complexity for Survival of Living Systems (NPO-43302), NASA Tech Briefs, Vol. 33, No. 7 (July 2009), page 62. The methodology involves the concepts reported previously, albeit viewed from a different perspective. One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Models of motor dynamics are used to simulate the observable behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. Autonomy is imparted to the decision-making process by feedback from mental to motor dynamics. This feedback replaces unavailable external information by information stored in the internal knowledge base. Representation of the dynamical models in a parameterized form reduces the task of common-sense-based decision making to a solution of the following hetero-associative-memory problem: store a set of m predetermined stochastic processes given by their probability distributions in such a way that when presented with an unexpected change in the form of an input out of the set of M inputs, the coupled motor-mental dynamics converges to the corresponding one of the m pre-assigned stochastic processes, and a sample of this process represents the decision.

  17. Control Oriented Modeling and Validation of Aeroservoelastic Systems

    NASA Technical Reports Server (NTRS)

    Crowder, Marianne; deCallafon, Raymond (Principal Investigator)

    2002-01-01

Lightweight aircraft design emphasizes the reduction of structural weight to maximize aircraft efficiency and agility, at the cost of increasing the likelihood of structural dynamic instabilities. To ensure flight safety, extensive flight testing and active structural servo control strategies are required to explore and expand the boundary of the flight envelope. Aeroservoelastic (ASE) models can provide online flight monitoring of dynamic instabilities to reduce flight test time and increase flight safety. The success of ASE models is determined by their ability to take into account varying flight conditions and to support flight monitoring in the presence of active structural servo control strategies. In this continued study, these aspects are addressed by developing specific methodologies and algorithms for control-relevant robust identification and model validation of aeroservoelastic structures. The closed-loop robust identification and model validation are based on a fractional model approach in which the model uncertainties are characterized in a closed-loop-relevant way.

  18. ARMA models for earthquake ground motions. Seismic safety margins research program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.

    1981-02-01

Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as inputs for nonlinear discrete time models of structural motions. 60 references, 19 figures, 9 tables.
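    For illustration, a minimal sketch of this kind of time-domain ARMA fit, assuming Python with statsmodels and a synthetic stand-in for a digitized accelerogram (the model order and record length are arbitrary choices, not those of the report):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
accel = rng.standard_normal(2000)        # stand-in for a digitized acceleration record

model = ARIMA(accel, order=(2, 0, 1))    # ARMA(2,1): AR order 2, no differencing, MA order 1
result = model.fit()                     # maximum-likelihood parameter estimates
print(result.summary())
residuals = result.resid                 # residuals available for diagnostic testing
```

    Fitted models of this form can then be driven by white noise to generate simulated ground-motion time histories.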

  19. Modeling acclimatization by hybrid systems: condition changes alter biological system behavior models.

    PubMed

    Assar, Rodrigo; Montecino, Martín A; Maass, Alejandro; Sherman, David J

    2014-07-01

In order to describe the dynamic behavior of a complex biological system, it is useful to combine models integrating processes at different levels and with temporal dependencies. Such combinations are necessary for modeling acclimatization, a phenomenon where changes in environmental conditions can induce drastic changes in the behavior of a biological system. In this article we formalize the use of hybrid systems as a tool to model this kind of biological behavior. A modeling scheme called strong switches is proposed. It allows one to take into account both minor adjustments to the coefficients of a continuous model and, more interestingly, large-scale changes to the structure of the model. We illustrate the proposed methodology with two applications: acclimatization in wine fermentation kinetics, and acclimatization of the osteo-adipo differentiation system linking stimulus signals to bone mass. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
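    As a loose illustration of the "strong switch" idea (a condition change that alters the structure of the continuous model, not just a coefficient), here is a minimal Python sketch; the growth laws, rates, and switching threshold are hypothetical placeholders, not the article's models:

```python
import numpy as np
from scipy.integrate import solve_ivp

def temperature(t):
    return 15.0 + 10.0 * (t > 5.0)            # environmental condition steps at t = 5

def rhs(t, y):
    x = y[0]
    if temperature(t) < 20.0:                 # condition A: logistic-type model
        return [0.5 * x * (1.0 - x / 10.0)]
    else:                                     # condition B: structurally different model
        return [0.2 * x - 0.05 * x**2 - 0.3]

sol = solve_ivp(rhs, (0.0, 15.0), [1.0], max_step=0.05)   # small steps to resolve the switch
print(sol.y[0, -1])
```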

  20. K-Means Subject Matter Expert Refined Topic Model Methodology

    DTIC Science & Technology

    2017-01-01

K-means Subject Matter Expert Refined Topic Model Methodology: Topic Model Estimation via K-Means. U.S. Army TRADOC Analysis Center-Monterey, 700 Dyer Road... January 2017. Theodore T. Allen, Ph.D.; Zhenhuan... Contract number W9124N-15-P-0022. (Report documentation page excerpt; no abstract text available in this snippet.)

  1. Long-term forecasting of internet backbone traffic.

    PubMed

    Papagiannaki, Konstantina; Taft, Nina; Zhang, Zhi-Li; Diot, Christophe

    2005-09-01

    We introduce a methodology to predict when and where link additions/upgrades have to take place in an Internet protocol (IP) backbone network. Using simple network management protocol (SNMP) statistics, collected continuously since 1999, we compute aggregate demand between any two adjacent points of presence (PoPs) and look at its evolution at time scales larger than 1 h. We show that IP backbone traffic exhibits visible long term trends, strong periodicities, and variability at multiple time scales. Our methodology relies on the wavelet multiresolution analysis (MRA) and linear time series models. Using wavelet MRA, we smooth the collected measurements until we identify the overall long-term trend. The fluctuations around the obtained trend are further analyzed at multiple time scales. We show that the largest amount of variability in the original signal is due to its fluctuations at the 12-h time scale. We model inter-PoP aggregate demand as a multiple linear regression model, consisting of the two identified components. We show that this model accounts for 98% of the total energy in the original signal, while explaining 90% of its variance. Weekly approximations of those components can be accurately modeled with low-order autoregressive integrated moving average (ARIMA) models. We show that forecasting the long term trend and the fluctuations of the traffic at the 12-h time scale yields accurate estimates for at least 6 months in the future.
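    A hedged sketch of the two main ingredients described above, assuming Python with PyWavelets and statsmodels: a wavelet multiresolution decomposition to isolate the long-term trend, and a low-order ARIMA model fitted to a weekly series derived from it. The wavelet family, decomposition level, ARIMA order, and synthetic traffic signal are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
hours = np.arange(24 * 7 * 26)                         # about 6 months of hourly samples
demand = (100 + 0.01 * hours                           # slow long-term trend
          + 20 * np.sin(2 * np.pi * hours / 12)        # 12-hour periodicity
          + rng.normal(0, 5, hours.size))              # crude stand-in for multi-scale variability

# Multiresolution analysis: keep only the coarsest approximation as the long-term trend.
coeffs = pywt.wavedec(demand, 'db4', level=6)
trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
trend = pywt.waverec(trend_coeffs, 'db4')[: demand.size]

# Weekly averages of the trend, modelled with a low-order ARIMA and forecast ahead.
weekly = trend[: (trend.size // 168) * 168].reshape(-1, 168).mean(axis=1)
fit = ARIMA(weekly, order=(1, 1, 0)).fit()
print(fit.forecast(steps=26))                          # roughly 6 months of weekly forecasts
```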

  2. Analytical Methodology Used To Assess/Refine Observatory Thermal Vacuum Test Conditions For the Landsat 8 Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Fantano, Louis

    2015-01-01

Thermal and Fluids Analysis Workshop, Silver Spring, MD (NCTS 21070-15). The Landsat 8 Data Continuity Mission, which is part of the United States Geological Survey (USGS), launched February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine the proposed thermal vacuum test conditions, and the issues encountered in attempting to satisfy this requirement.

  3. Anthropogenic ``Global Warming'' Alarmism: Illuminating some Scientific and Methodological Flaws

    NASA Astrophysics Data System (ADS)

    Gould, Larry

    2009-10-01

There continues to be an increasing number of scientists and public figures around the world who are challenging the dominant political- and media-driven claims, bolstered by so-called ``consensus'' scientific views, that dangerous ``global warming/climate change'' is caused primarily by human-produced carbon dioxide. This general talk will show that the weight of scientific evidence strongly contradicts the alarmist claims. It will also explain some of the methodological flaws that continue to threaten the scientific method.

  4. 14 CFR 121.909 - Approval of Advanced Qualification Program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... methodology must incorporate a thorough analysis of the certificate holder's operations, aircraft, line environment and job functions. All AQP qualification and continuing qualification curriculums must integrate.... (ii) Initial job task listing. (iii) Instructional systems development methodology. (iv) Qualification...

  5. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  6. Modelling the anaerobic digestion of solid organic waste - Substrate characterisation method for ADM1 using a combined biochemical and kinetic parameter estimation approach.

    PubMed

    Poggio, D; Walker, M; Nimmo, W; Ma, L; Pourkashanian, M

    2016-07-01

This work proposes a novel and rigorous substrate characterisation methodology to be used with ADM1 to simulate the anaerobic digestion of solid organic waste. The proposed method uses data from both direct substrate analysis and the methane production from laboratory-scale anaerobic digestion experiments, and involves the assessment of four substrate fractionation models. The models partition the organic matter into a mixture of particulate and soluble fractions, with the decision on the most suitable model being based on the quality of fit between experimental and simulated data and on the uncertainty of the calibrated parameters. The method was tested using samples of domestic green and food waste and using experimental data from both short batch tests and longer semi-continuous trials. The results showed that, in general, increased fractionation model complexity led to a better fit but with increased uncertainty. When using batch test data, the most suitable model for green waste included one particulate and one soluble fraction, whereas for food waste two particulate fractions were needed. With richer semi-continuous datasets, the parameter estimation resulted in less uncertainty, therefore allowing the description of the substrate with a more complex model. The resulting substrate characterisations and fractionation models obtained from batch test data, for both waste samples, were used to validate the method against semi-continuous experimental data and showed good prediction of methane production, biogas composition, total and volatile solids, ammonia and alkalinity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. A validated methodology for determination of laboratory instrument computer interface efficacy

    NASA Astrophysics Data System (ADS)

    1984-12-01

This report is intended to provide a methodology for determining when, and for which instruments, direct interfacing of laboratory instruments and laboratory computers is beneficial. This methodology has been developed to assist the Tri-Service Medical Information Systems Program Office in making future decisions regarding laboratory instrument interfaces. We have calculated the time savings required to reach a break-even point for a range of instrument interface prices and corresponding average annual costs. The break-even analyses used empirical data to estimate the number of data points run per day that are required to meet the break-even point. The results indicate, for example, that at a purchase price of $3,000, an instrument interface will be cost-effective if the instrument is utilized for at least 154 data points per day when operated in the continuous mode, or 216 points per day when operated in the discrete mode. Although this model can help to ensure that instrument interfaces are cost-effective, additional information should be considered in making interface decisions. A reduction in results-transcription errors may be a major benefit of instrument interfacing.
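    The break-even logic lends itself to a short worked example. In the sketch below only the $3,000 purchase price and the continuous/discrete distinction come from the report; the amortization period, labor rate, per-sample time savings, and working days are hypothetical values chosen for illustration:

```python
ANNUAL_COST = 3000 / 5 + 150          # assumed: 5-year straight-line amortization plus upkeep ($)
LABOR_RATE = 15.0                     # assumed technician cost, $/hour
WORK_DAYS = 250                       # assumed operating days per year

def breakeven_points_per_day(seconds_saved_per_point: float) -> float:
    """Data points per day at which labor savings equal the annual interface cost."""
    saving_per_point = LABOR_RATE * seconds_saved_per_point / 3600.0   # $ saved per data point
    return ANNUAL_COST / (saving_per_point * WORK_DAYS)

print(breakeven_points_per_day(20.0))   # assumed time saved per point in continuous mode
print(breakeven_points_per_day(14.0))   # assumed time saved per point in discrete mode
```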

  8. An integrated dispersion preparation, characterization and in vitro dosimetry methodology for engineered nanomaterials

    PubMed Central

    DeLoid, Glen M.; Cohen, Joel M.; Pyrgiotakis, Georgios; Demokritou, Philip

    2018-01-01

Summary: Evidence continues to grow of the importance of in vitro and in vivo dosimetry in the hazard assessment and ranking of engineered nanomaterials (ENMs). Accurate dose metrics are particularly important for in vitro cellular screening to assess the potential health risks or bioactivity of ENMs. In order to ensure meaningful and reproducible quantification of in vitro dose, with consistent measurement and reporting between laboratories, it is necessary to adopt standardized and integrated methodologies for 1) generation of stable ENM suspensions in cell culture media, 2) colloidal characterization of suspended ENMs, particularly properties that determine particle kinetics in an in vitro system (size distribution and formed agglomerate effective density), and 3) robust numerical fate and transport modeling for accurate determination of ENM dose delivered to cells over the course of the in vitro exposure. Here we present an integrated, comprehensive protocol based on such a methodology for in vitro dosimetry, including detailed standardized procedures for each of these three critical steps. The entire protocol requires approximately 6-12 hours to complete. PMID:28102836

  9. Identifying environmental factors harmful to reproduction.

    PubMed Central

    Palmer, A K

    1993-01-01

    Reproduction is essential for the continuation of the species and for life itself. In biological terms, living and reproducing are essentially one and the same. There is, therefore, no sharp division between identifying factors harmful to reproduction and identifying factors harmful to life or vice versa. Detection of harmful factors requires balanced use of a variety of methodologies from databases on structure-activity relationships through in vitro and in vivo test systems of varying complexity to surveys of wildlife and human populations. Human surveys provide the only assured means of discriminating between real and imagined harmful factors, but they are time consuming and provide information after the harm has been done. Test systems with whole animals provide the best prospects for identifying harmful factors quickly, but currently available methods used for testing agrochemicals and drugs need a thorough overhaul before they can provide a role model. Whether there is a need for new methodology is doubtful. More certain is the need to use existing methodology more wisely. We need a better understanding of the environment--whatever it is--and a more thoughtful approach to investigation of multifactorial situations. PMID:8243390

  10. Development of Testing Methodologies for the Mechanical Properties of MEMS

    NASA Technical Reports Server (NTRS)

    Ekwaro-Osire, Stephen

    2003-01-01

    This effort is to investigate and design testing strategies to determine the mechanical properties of MicroElectroMechanical Systems (MEMS) as well as investigate the development of a MEMS Probabilistic Design Methodology (PDM). One item of potential interest is the design of a test for the Weibull size effect in pressure membranes. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS - similar to the GRC developed CARES/Life program for bulk ceramics. However, the primary area of investigation will most likely be analysis and modeling of material interfaces for strength as well as developing a strategy to handle stress singularities at sharp corners, filets, and material interfaces. This will be a continuation of the previous years work. The ultimate objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads.
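    For reference, the Weibull size effect mentioned above can be stated compactly in its standard weakest-link form (generic Weibull parameters, not values from this effort):

```latex
P_f(\sigma, V) \;=\; 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],
\qquad
\frac{\sigma_1}{\sigma_2} \;=\; \left(\frac{V_2}{V_1}\right)^{1/m},
```

    so that, at equal failure probability, the larger stressed volume exhibits the lower characteristic strength; confirming this scaling in pressure membranes is what would support a Weibull-based probabilistic design methodology for MEMS.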

  11. 7 CFR 1940.560 - Guarantee Rural Rental Housing Program.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 13 2010-01-01 2009-01-01 true Guarantee Rural Rental Housing Program. 1940.560 Section 1940.560 Agriculture Regulations of the Department of Agriculture (Continued) RURAL HOUSING... OF AGRICULTURE (CONTINUED) PROGRAM REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for...

  12. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study of a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data using a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting a ZICMP marginal regression model. Finite-sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling, incorporating zero inflation, clustering, and overdispersion, sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. © 2015, The International Biometric Society.
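    For context, one common parameterization of the zero-inflated Conway-Maxwell-Poisson distribution used in such models is sketched below (a standard form, not necessarily the paper's exact parameterization):

```latex
P(Y = 0) \;=\; \pi + (1-\pi)\,\frac{1}{Z(\lambda,\nu)},
\qquad
P(Y = y) \;=\; (1-\pi)\,\frac{\lambda^{y}}{(y!)^{\nu}\,Z(\lambda,\nu)}, \quad y = 1, 2, \ldots,
\qquad
Z(\lambda,\nu) \;=\; \sum_{s=0}^{\infty}\frac{\lambda^{s}}{(s!)^{\nu}},
```

    where π is the zero-inflation probability and the dispersion parameter ν gives underdispersion for ν > 1 and overdispersion for ν < 1, relative to the Poisson case ν = 1.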

  13. Spatio-Temporal Dimensions of Child Poverty in America, 1990-2010.

    PubMed

    Call, Maia A; Voss, Paul R

    2016-01-01

    The persistence of childhood poverty in the United States, a wealthy and developed country, continues to pose both an analytical dilemma and public policy challenge, despite many decades of research and remedial policy implementation. In this paper, our goals are twofold, though our primary focus is methodological. We attempt both to examine the relationship between space, time, and previously established factors correlated with childhood poverty at the county level in the continental United States as well as to provide an empirical case study to demonstrate an underutilized methodological approach. We analyze a spatially consistent dataset built from the 1990 and 2000 U.S. Censuses, and the 2006-2010 American Community Survey. Our analytic approach includes cross-sectional spatial models to estimate the reproduction of poverty for each of the reference years as well as a fixed effects panel data model, to analyze change in child poverty over time. In addition, we estimate a full space-time interaction model, which adjusts for spatial and temporal variation in these data. These models reinforce our understanding of the strong regional persistence of childhood poverty in the U.S. over time and suggest that the factors impacting childhood poverty remain much the same today as they have in past decades.

  14. Thermal photogrammetric imaging: A new technique for monitoring dome eruptions

    NASA Astrophysics Data System (ADS)

    Thiele, Samuel T.; Varley, Nick; James, Mike R.

    2017-05-01

    Structure-from-motion (SfM) algorithms greatly facilitate the generation of 3-D topographic models from photographs and can form a valuable component of hazard monitoring at active volcanic domes. However, model generation from visible imagery can be prevented due to poor lighting conditions or surface obscuration by degassing. Here, we show that thermal images can be used in a SfM workflow to mitigate these issues and provide more continuous time-series data than visible-light equivalents. We demonstrate our methodology by producing georeferenced photogrammetric models from 30 near-monthly overflights of the lava dome that formed at Volcán de Colima (Mexico) between 2013 and 2015. Comparison of thermal models with equivalents generated from visible-light photographs from a consumer digital single lens reflex (DSLR) camera suggests that, despite being less detailed than their DSLR counterparts, the thermal models are more than adequate reconstructions of dome geometry, giving volume estimates within 10% of those derived using the DSLR. Significantly, we were able to construct thermal models in situations where degassing and poor lighting prevented the construction of models from DSLR imagery, providing substantially better data continuity than would have otherwise been possible. We conclude that thermal photogrammetry provides a useful new tool for monitoring effusive volcanic activity and assessing associated volcanic risks.

  15. QSAR modeling of GPCR ligands: methodologies and examples of applications.

    PubMed

    Tropsha, A; Wang, S X

    2006-01-01

GPCR ligands represent not only one of the major classes of current drugs but also the major continuing source of novel potent pharmaceutical agents. Because 3D structures of GPCRs as determined by experimental techniques are still unavailable, ligand-based drug discovery methods remain the major computational molecular modeling approaches to the analysis of growing data sets of tested GPCR ligands. This paper presents an overview of modern Quantitative Structure Activity Relationship (QSAR) modeling. We discuss the critical issue of model validation and the strategy for applying successfully validated QSAR models to virtual screening of available chemical databases. We present several examples of applications of validated QSAR modeling approaches to GPCR ligands. We conclude with comments on exciting developments in the QSAR modeling of GPCR ligands that focus on the study of emerging data sets of compounds with dual or even multiple activities against two or more GPCRs.

  16. Linear parameter varying identification of ankle joint intrinsic stiffness during imposed walking movements.

    PubMed

    Sobhani Tehrani, Ehsan; Jalaleddini, Kian; Kearney, Robert E

    2013-01-01

This paper describes a novel model structure and identification method for the time-varying intrinsic stiffness of the human ankle joint during imposed walking (IW) movements. The model structure is based on the superposition of a large-signal, linear, time-invariant (LTI) model and a small-signal linear parameter-varying (LPV) model. The methodology is based on a two-step algorithm: the LTI model is first estimated using data from an unperturbed IW trial. Then, the LPV model is identified using data from a perturbed IW trial, with the output predictions of the LTI model removed from the measured torque. Experimental results demonstrate that the method accurately tracks the continuous-time variation of normal ankle intrinsic stiffness when the joint position changes during the IW movement. Intrinsic stiffness gain decreases from full plantarflexion to near the mid-point of plantarflexion and then increases substantially as the ankle is dorsiflexed.
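    A rough numerical sketch of the two-step idea (not the authors' algorithm): first fit a large-signal LTI stiffness model by least squares, then fit a position-dependent (LPV) correction to the residual torque. The signals, model orders, and scheduling polynomial below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 10.0, 2000)
theta = 0.3 * np.sin(2 * np.pi * 0.5 * t)              # imposed-walking-like ankle position
dtheta = np.gradient(theta, t)
true_gain = 40.0 + 60.0 * theta**2                     # position-dependent stiffness (synthetic)
torque = true_gain * theta + 2.0 * dtheta + rng.normal(0, 0.2, t.size)

# Step 1: large-signal LTI model, torque ~ K*theta + B*dtheta.
A_lti = np.column_stack([theta, dtheta])
K, B = np.linalg.lstsq(A_lti, torque, rcond=None)[0]

# Step 2: LPV correction fitted to the residual, with gain dK(theta) = c1*theta + c2*theta**2,
# so the residual torque is modelled as c1*theta**2 + c2*theta**3.
residual = torque - A_lti @ np.array([K, B])
A_lpv = np.column_stack([theta**2, theta**3])
c1, c2 = np.linalg.lstsq(A_lpv, residual, rcond=None)[0]
print(f"LTI stiffness K = {K:.1f}, LPV terms c1 = {c1:.1f}, c2 = {c2:.1f}")
```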

  17. The Iterative Research Cycle: Process-Based Model Evaluation

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2014-12-01

The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.

  18. Iterative refinement of implicit boundary models for improved geological feature reproduction

    NASA Astrophysics Data System (ADS)

    Martin, Ryan; Boisvert, Jeff B.

    2017-12-01

    Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.
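    A minimal sketch of the implicit-modelling step (without the locally varying anisotropy or domain decomposition of the paper), assuming Python with SciPy: sample points carry signed values, an RBF interpolant is fit, and the zero level set is read off as the boundary. The synthetic samples are placeholders:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(-1.0, 1.0, size=(300, 2))                # synthetic sample locations
signed = np.linalg.norm(pts, axis=1) - 0.6                 # negative inside a circular domain

rbf = RBFInterpolator(pts, signed, kernel='thin_plate_spline', smoothing=0.0)

# Evaluate on a grid; the implicit boundary is where the interpolant changes sign.
gx, gy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = rbf(grid).reshape(gx.shape)
inside = field < 0.0                                       # boolean domain indicator
print(inside.mean())                                       # fraction of the grid inside the boundary
```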

  19. Analyzing data from open enrollment groups: current considerations and future directions.

    PubMed

    Morgan-Lopez, Antonio A; Fals-Stewart, William

    2008-07-01

    Difficulties in modeling turnover in treatment-group membership have been cited as one of the major impediments to ecological validity of substance abuse and alcoholism treatment research. In this review, our primary foci are on (a) the discussion of approaches that draw on state-of-the-science analytic methods for modeling open-enrollment group data and (b) highlighting emerging issues that are critical to this relatively new area of methodological research (e.g., quantifying membership change, modeling "holiday" effects, and modeling membership change among group members and leaders). Continuing refinement of new modeling tools to address these analytic complexities may ultimately lead to the development of more federally funded open-enrollment trials. These developments may also facilitate the building of a "community-friendly" treatment research portfolio for funding agencies that support substance abuse and alcoholism treatment research.

  20. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.
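    For reference, the relative eigenvalue sensitivity coefficient that such tools estimate is conventionally defined as

```latex
S_{k,\Sigma_x} \;=\; \frac{\partial k / k}{\partial \Sigma_x / \Sigma_x}
\;=\; \frac{\Sigma_x}{k}\,\frac{\partial k}{\partial \Sigma_x},
```

    the fractional change in the system eigenvalue k per fractional change in a cross section Σx (resolved by nuclide, reaction, and energy group in multigroup analyses, or pointwise in energy in the continuous-energy case).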

  1. Data Quality Assurance for Supersonic Jet Noise Measurements

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.; Henderson, Brenda S.; Bridges, James E.

    2010-01-01

The noise created by a supersonic aircraft is a primary concern in the design of future high-speed planes. The jet noise reduction technologies required on these aircraft will be developed using scale models mounted to experimental jet rigs designed to simulate the exhaust gases from a full-scale jet engine. The jet noise data collected in these experiments must accurately predict the noise levels produced by the full-scale hardware in order to be a useful development tool. A methodology has been adopted at the NASA Glenn Research Center's Aero-Acoustic Propulsion Laboratory to ensure the quality of the supersonic jet noise data acquired from the facility's High Flow Jet Exit Rig so that it can be used to develop future nozzle technologies that reduce supersonic jet noise. The methodology relies on mitigating extraneous noise sources, examining the impact of measurement location on the acoustic results, and investigating the facility independence of the measurements. The methodology is documented here as a basis for validating future improvements, and its limitations are noted so that they do not affect the data analysis. Maintaining a high-quality jet noise laboratory is an ongoing process. By carefully examining the data produced and continually following this methodology, data quality can be maintained and improved over time.

  2. Effects of Assuming Independent Component Failure Times, If They Are Actually Dependent, in a Series System.

    DTIC Science & Technology

    1985-11-26

etc.). Major decisions involving reliability studies, based on competing risk methodology, have been made in the past and will continue to be made... censoring mechanism. In such instances, the methodology for estimating relevant reliability probabilities has received considerable attention (cf. David... proposal for a discussion of the general methodology.

  3. Application of Response Surface Methodology on Leaching of Iron from Partially Laterised Khondalite Rocks: A Bauxite Mining Waste

    NASA Astrophysics Data System (ADS)

    Swain, Ranjita; Bhima Rao, R.

    2018-04-01

In the present investigation, response surface methodology (RSM) is used to fit a quadratic model in the process parameters. This model is used to optimize the removal of iron oxide from Partially Laterised Khondalite (PLK) rocks, which is influenced by several independent variables, namely acid concentration, time and temperature. Second-order response functions are produced for the leaching of iron oxide from PLK rocks, a bauxite mining waste. In RSM, a Box-Behnken design is used for process optimization to achieve maximum removal of iron oxide. The influence of the process variables on the leaching of iron oxide is presented in the form of 3-D response graphs. The results of this investigation reveal that a 3 M hydrochloric acid concentration, 240 min leaching time and 373 K temperature are the best conditions for removal of 99% of the Fe2O3. The product obtained under these conditions has 80% brightness, which is suitable for ceramic and filler industry applications. The novelty of the work is that the waste can become a value-added product after suitable physical beneficiation and chemical treatment.
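    As a generic illustration of the second-order response-surface step (not the paper's Box-Behnken data or coefficients), the sketch below fits a full quadratic model in acid concentration, time, and temperature and locates the best predicted setting; all numbers are synthetic placeholders:

```python
import numpy as np
from itertools import product
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# Factor ranges: acid concentration [M], time [min], temperature [K] (illustrative).
X = rng.uniform([1.0, 60.0, 323.0], [3.0, 240.0, 373.0], size=(15, 3))
y = 60 + 10 * X[:, 0] + 0.05 * X[:, 1] + 0.1 * (X[:, 2] - 323) + rng.normal(0, 2, 15)

quad = PolynomialFeatures(degree=2, include_bias=False)    # full second-order model
model = LinearRegression().fit(quad.fit_transform(X), y)

# Predict over a coarse grid of factor settings and report the maximizer.
grid = np.array(list(product(np.linspace(1, 3, 5),
                             np.linspace(60, 240, 5),
                             np.linspace(323, 373, 5))))
pred = model.predict(quad.transform(grid))
print(grid[np.argmax(pred)], pred.max())
```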

  4. A frontier analysis approach for benchmarking hospital performance in the treatment of acute myocardial infarction.

    PubMed

    Stanford, Robert E

    2004-05-01

This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that have no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs that are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.

  5. Assessing Electronic Cigarette-Related Tweets for Sentiment and Content Using Supervised Machine Learning

    PubMed Central

    Cole-Lewis, Heather; Varghese, Arun; Sanders, Amy; Schwarz, Mary; Pugatch, Jillian

    2015-01-01

Background: Electronic cigarettes (e-cigarettes) continue to be a growing topic among social media users, especially on Twitter. The ability to analyze conversations about e-cigarettes in real-time can provide important insight into trends in the public’s knowledge, attitudes, and beliefs surrounding e-cigarettes, and subsequently guide public health interventions. Objective: Our aim was to establish a supervised machine learning algorithm to build predictive classification models that assess Twitter data for a range of factors related to e-cigarettes. Methods: Manual content analysis was conducted for 17,098 tweets. These tweets were coded for five categories: e-cigarette relevance, sentiment, user description, genre, and theme. Machine learning classification models were then built for each of these five categories, and word groupings (n-grams) were used to define the feature space for each classifier. Results: Predictive performance scores for classification models indicated that the models correctly labeled the tweets with the appropriate variables between 68.40% and 99.34% of the time, and the percentage of maximum possible improvement over a random baseline that was achieved by the classification models ranged from 41.59% to 80.62%. Classifiers with the highest performance scores that also achieved the highest percentage of the maximum possible improvement over a random baseline were Policy/Government (performance: 0.94; % improvement: 80.62%), Relevance (performance: 0.94; % improvement: 75.26%), Ad or Promotion (performance: 0.89; % improvement: 72.69%), and Marketing (performance: 0.91; % improvement: 72.56%). The most appropriate word-grouping unit (n-gram) was 1 for the majority of classifiers. Performance continued to marginally increase with the size of the training dataset of manually annotated data, but eventually leveled off. Even at low dataset sizes of 4000 observations, performance characteristics were fairly sound. Conclusions: Social media outlets like Twitter can uncover real-time snapshots of personal sentiment, knowledge, attitudes, and behavior that are not as accessible, at this scale, through any other offline platform. Using the vast data available through social media presents an opportunity for social science and public health methodologies to utilize computational methodologies to enhance and extend research and practice. This study was successful in automating a complex five-category manual content analysis of e-cigarette-related content on Twitter using machine learning techniques. The study details machine learning model specifications that provided the best accuracy for data related to e-cigarettes, as well as a replicable methodology to allow extension of these methods to additional topics. PMID:26307512

  6. Assessing Electronic Cigarette-Related Tweets for Sentiment and Content Using Supervised Machine Learning.

    PubMed

    Cole-Lewis, Heather; Varghese, Arun; Sanders, Amy; Schwarz, Mary; Pugatch, Jillian; Augustson, Erik

    2015-08-25

    Electronic cigarettes (e-cigarettes) continue to be a growing topic among social media users, especially on Twitter. The ability to analyze conversations about e-cigarettes in real-time can provide important insight into trends in the public's knowledge, attitudes, and beliefs surrounding e-cigarettes, and subsequently guide public health interventions. Our aim was to establish a supervised machine learning algorithm to build predictive classification models that assess Twitter data for a range of factors related to e-cigarettes. Manual content analysis was conducted for 17,098 tweets. These tweets were coded for five categories: e-cigarette relevance, sentiment, user description, genre, and theme. Machine learning classification models were then built for each of these five categories, and word groupings (n-grams) were used to define the feature space for each classifier. Predictive performance scores for classification models indicated that the models correctly labeled the tweets with the appropriate variables between 68.40% and 99.34% of the time, and the percentage of maximum possible improvement over a random baseline that was achieved by the classification models ranged from 41.59% to 80.62%. Classifiers with the highest performance scores that also achieved the highest percentage of the maximum possible improvement over a random baseline were Policy/Government (performance: 0.94; % improvement: 80.62%), Relevance (performance: 0.94; % improvement: 75.26%), Ad or Promotion (performance: 0.89; % improvement: 72.69%), and Marketing (performance: 0.91; % improvement: 72.56%). The most appropriate word-grouping unit (n-gram) was 1 for the majority of classifiers. Performance continued to marginally increase with the size of the training dataset of manually annotated data, but eventually leveled off. Even at low dataset sizes of 4000 observations, performance characteristics were fairly sound. Social media outlets like Twitter can uncover real-time snapshots of personal sentiment, knowledge, attitudes, and behavior that are not as accessible, at this scale, through any other offline platform. Using the vast data available through social media presents an opportunity for social science and public health methodologies to utilize computational methodologies to enhance and extend research and practice. This study was successful in automating a complex five-category manual content analysis of e-cigarette-related content on Twitter using machine learning techniques. The study details machine learning model specifications that provided the best accuracy for data related to e-cigarettes, as well as a replicable methodology to allow extension of these methods to additional topics.
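    As an illustration of the modelling pattern described in this study (one supervised classifier per coding category, with unigram features), here is a minimal hedged sketch using scikit-learn; the tiny corpus, labels, and category are invented placeholders, not the study's data or model specifications:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["new vape shop opening downtown",
          "quit smoking with e-cigs, worked for me",
          "city council debates e-cigarette ban",
          "selling my old mod, great deal"]
policy_labels = [0, 0, 1, 0]            # placeholder codes from a manual content analysis

# One classifier per category; unigram (n-gram = 1) bag-of-words features.
policy_clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)),
                           LogisticRegression(max_iter=1000))
policy_clf.fit(tweets, policy_labels)
print(policy_clf.predict(["new regulation on vaping proposed"]))
```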

  7. Validation of gravity data from the geopotential field model for subsurface investigation of the Cameroon Volcanic Line (Western Africa)

    NASA Astrophysics Data System (ADS)

    Marcel, Jean; Abate Essi, Jean Marcel; Nouck, Philippe Njandjock; Sanda, Oumarou; Manguelle-Dicoum, Eliézer

    2018-03-01

Belonging to the Cameroon Volcanic Line (CVL), the western part of Cameroon is an active volcanic zone with volcanic eruptions and deadly gas emissions. The volcanic flows generally cover large areas and bury structural features such as faults. Terrestrial gravity surveys can hardly cover this mountainous area entirely, owing to difficult accessibility. The present work aims to evaluate gravity data derived from the geopotential field model EGM2008 for investigating the subsurface of the CVL. The methodology involves upward continuation, horizontal gradient, maxima of horizontal gradient-upward continuation combination, and Euler deconvolution techniques. The lineament map inferred from this geopotential field model confirms several known lineaments and reveals new ones covered by lava flows. The known lineaments are interpreted as faults or geological contacts, such as the Foumban fault and the Pan-African Belt-Congo craton contact. The lineaments highlighted, coupled with the numerous maar lakes identified in this volcanic sector, attest to the vulnerability of the CVL, where special attention should be given to geohazard prevention.
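    One of the enhancement steps named above, upward continuation, has a simple wavenumber-domain form; the sketch below (grid spacing, continuation height, and the random stand-in grid are arbitrary) attenuates short-wavelength anomalies by the factor exp(-kh):

```python
import numpy as np

def upward_continue(grid, dx, dy, height):
    """Continue a gridded anomaly upward by `height` (same length units as dx, dy)."""
    ny, nx = grid.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)          # radial wavenumber
    spectrum = np.fft.fft2(grid)
    return np.real(np.fft.ifft2(spectrum * np.exp(-k * height)))

anomaly = np.random.default_rng(4).normal(size=(128, 128))    # stand-in for a gridded anomaly
smooth = upward_continue(anomaly, dx=1000.0, dy=1000.0, height=5000.0)   # metres
print(smooth.std() < anomaly.std())                           # continuation suppresses short wavelengths
```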

  8. REE radiation fault model: a tool for organizing and communicating radiation test data and constructing COTS-based spaceborne computing systems

    NASA Technical Reports Server (NTRS)

    Ferraro, R.; Some, R.

    2002-01-01

The growth in data rates of instruments on future NASA spacecraft continues to outstrip the improvement in communications bandwidth and processing capabilities of radiation-hardened computers. Sophisticated autonomous operations strategies will further increase the processing workload. Given the reductions in spacecraft size and available power, standard radiation-hardened computing systems alone will not be able to address the requirements of future missions. The REE project was intended to overcome this obstacle by developing a COTS-based supercomputer suitable for use as a science and autonomy data processor in most space environments. This development required a detailed knowledge of system behavior in the presence of Single Event Effect (SEE) induced faults so that mitigation strategies could be designed to recover system-level reliability while maintaining the COTS throughput advantage. The REE project has developed a suite of tools and a methodology for predicting SEU-induced transient fault rates in a range of natural space environments from ground-based radiation testing of component parts. In this paper we provide an overview of this methodology and tool set, with a concentration on the radiation fault model and its use in the REE system development methodology. Using test data reported elsewhere in this and other conferences, we predict upset rates for a particular COTS single-board computer configuration in several space environments.

  9. The impact of domestic rainwater harvesting systems in storm water runoff mitigation at the urban block scale.

    PubMed

    Palla, A; Gnecco, I; La Barbera, P

    2017-04-15

In the framework of storm water management, Domestic Rainwater Harvesting (DRWH) systems have recently been recognized as source control solutions according to LID principles. In order to assess the impact of these systems on storm water runoff control, a simple methodological approach is proposed. The hydrologic-hydraulic modelling is undertaken using EPA SWMM; the DRWH is implemented in the model by using a storage unit linked to the building water supply system and to the drainage network. The proposed methodology has been implemented for a residential urban block located in Genoa (Italy). Continuous simulations are performed by using the high-resolution rainfall data series for the "do nothing" and DRWH scenarios. The latter includes the installation of a DRWH system for each building of the urban block. Referring to the test site, the peak and volume reduction rates evaluated for the 2125 rainfall events are respectively equal to 33 and 26 percent, on average (with maximum values of 65 percent for peak and 51 percent for volume). In general, the adopted methodology indicates that the hydrologic performance of the storm water drainage network equipped with DRWH systems is noticeable even for the design storm event (T = 10 years), and the rainfall depth seems to affect the hydrologic performance at least when the total depth exceeds 20 mm. Copyright © 2017 Elsevier Ltd. All rights reserved.
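    The storage-unit logic can be illustrated with a very small mass-balance sketch (tank size, roof area, and demand are invented, and the real study routes flows through EPA SWMM rather than a hand-rolled loop):

```python
def simulate_tank(rainfall_mm, roof_area_m2=100.0, tank_m3=3.0, demand_m3=0.15):
    """Route roof runoff through a DRWH tank; return the overflow series (m3 per step)."""
    storage = 0.0
    overflow = []
    for rain in rainfall_mm:                      # one value per time step (e.g. daily)
        inflow = rain / 1000.0 * roof_area_m2     # mm of rain over the roof area -> m3
        total = storage + inflow
        spill = max(total - tank_m3, 0.0)         # only the excess reaches the drainage network
        storage = min(total, tank_m3)
        storage = max(storage - demand_m3, 0.0)   # non-potable household demand draws the tank down
        overflow.append(spill)
    return overflow

print(sum(simulate_tank([0.0, 12.0, 30.0, 5.0, 0.0, 18.0])))   # m3 routed to the storm drain
```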

  10. Evaluation of Two Soil Water Redistribution Models (Finite Difference and Hourly Cascade Approach) Through The Comparison of Continuous field Sensor-Based Measurements

    NASA Astrophysics Data System (ADS)

    Ferreyra, R.; Stockle, C. O.; Huggins, D. R.

    2014-12-01

Soil water storage and dynamics are of critical importance for a variety of processes in terrestrial ecosystems, including agriculture. Many of those systems are under significant pressure in terms of water availability and use. Therefore, assessing alternative scenarios through hydrological models is an increasingly valuable exercise. Soil water holding capacity is defined by the concepts of soil field capacity and plant available water, which are directly related to soil physical properties. Both concepts define the energy status of water in the root system and closely interact with plant physiological processes. Furthermore, these concepts play a key role in the environmental transport of nutrients and pollutants. Soil physical parameters (e.g. saturated hydraulic conductivity, total porosity and water release curve) are required as input for field-scale soil water redistribution models. These parameters are normally not easy to measure or monitor, and estimation through pedotransfer functions is often inadequate. Our objectives are to improve field-scale hydrological modeling by: (1) assessing new undisturbed methodologies for determining important soil physical parameters necessary for model inputs; and (2) evaluating model outputs under a detailed specification of the soil parameters and of the particular boundary conditions that drive water movement in two contrasting environments. Soil physical properties (saturated hydraulic conductivity and water release curves) were quantified using undisturbed laboratory methodologies for two different soil textural classes (silt loam and sandy loam) and used to evaluate two soil water redistribution models (finite difference solution and hourly cascade approach). We will report on model corroboration results performed using in situ, continuous field measurements with soil water content capacitance probes and digital tensiometers. Here, natural drainage and water redistribution were monitored following a controlled water application where the study areas were isolated from other water inputs and outputs. We will also report on the assessment of two soil water sensors (Decagon Devices 5TM capacitance probe and UMS T4 tensiometers) for the two soil textural classes in terms of consistency and replicability.
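    To make the contrast concrete, the hourly cascade ("tipping bucket") approach can be sketched in a few lines: each layer passes water in excess of its field capacity to the layer below within the time step. Layer parameters below are illustrative, not measured values from the study:

```python
def cascade_step(theta, field_capacity, infiltration):
    """One cascade time step. theta and field_capacity are volumetric contents per layer."""
    theta = list(theta)
    carry = infiltration                                     # water entering the top layer
    for i in range(len(theta)):
        theta[i] += carry
        carry = max(theta[i] - field_capacity[i], 0.0)       # excess drains to the next layer
        theta[i] -= carry
    return theta, carry                                      # carry leaving the profile = deep drainage

theta, drainage = cascade_step([0.20, 0.25, 0.30], [0.28, 0.30, 0.32], infiltration=0.10)
print(theta, drainage)
```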

  11. [The path of continuity of care between hospital and territory in patients with severe brain injury. The expectations of caregivers and professionals].

    PubMed

    Feiroli, Raffaele; Bolzani, Monica; Cornelli, Maria Cristina; Ghirardi, Lida; Guizzardi, Loris; Onesti, Roberta; Dovani, Antonella; Davolo, Andrea; Artioli, Giovanna; Mancini, Tiziana

    2013-01-01

The present study analyses how continuity of care is perceived by health professionals and by the caregivers of GRACER (Gravi Cerebrolesioni Acquisite Emilia Romagna) patients, in order to investigate where the gap between expectations and reality is most heavily felt and which dimension of continuity of care is the most important for both health professionals and caregivers. The study was developed following the Gap Analysis theoretical model. A questionnaire, based on the ServQual model, was used to collect data on the three dimensions of the construct of continuity of care (informational, management and relational), each explored in terms of expectations and of the perception of reality. The questionnaire was administered to health professionals and to caregivers of GRACER patients (12-36 months after the event) at 4 healthcare institutes in Emilia Romagna. The PAI (Piano Assistenziale Individuale) approach was the methodology applied at these 4 sites. For both groups, the relational dimension was the most important, followed at some distance by the informational and management dimensions. It was also noted that, for professionals, reality is always worse than expectations, with the exception of only two items in the dimension of management continuity. For caregivers, reality is worse than expectations for some items in the informational and management dimensions. The study has shown that the relational dimension of continuity of care should be investigated further, as confirmed by the literature. More research is needed on the professionals' dissatisfaction generated by the negative balance between expectations and the perception of reality.

  12. CFD Code Development for Combustor Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

During the lifetime of this grant, work has been performed in the areas of model development, code development, code validation and code application. For model development, this has included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics and assumed-PDF work. Many of these models were then implemented in the code, and in addition many improvements were made to the code, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models and turbulence-chemistry interaction methodology. Validation of all new models and code improvements was also performed, and the code was applied to the ZCET program as well as the NPSS GEW combustor program. Several important items remain under development, including the NOx post-processing, assumed-PDF model development and chemical kinetics development. It is expected that this work will continue under the new grant.

  13. Methods for network meta-analysis of continuous outcomes using individual patient data: a case study in acupuncture for chronic pain.

    PubMed

    Saramago, Pedro; Woods, Beth; Weatherly, Helen; Manca, Andrea; Sculpher, Mark; Khan, Kamran; Vickers, Andrew J; MacPherson, Hugh

    2016-10-06

    Network meta-analysis methods, which are an extension of the standard pair-wise synthesis framework, allow for the simultaneous comparison of multiple interventions and consideration of the entire body of evidence in a single statistical model. There are well-established advantages to using individual patient data to perform network meta-analysis and methods for network meta-analysis of individual patient data have already been developed for dichotomous and time-to-event data. This paper describes appropriate methods for the network meta-analysis of individual patient data on continuous outcomes. This paper introduces and describes network meta-analysis of individual patient data models for continuous outcomes using the analysis of covariance framework. Comparisons are made between this approach and change score and final score only approaches, which are frequently used and have been proposed in the methodological literature. A motivating example on the effectiveness of acupuncture for chronic pain is used to demonstrate the methods. Individual patient data on 28 randomised controlled trials were synthesised. Consistency of endpoints across the evidence base was obtained through standardisation and mapping exercises. Individual patient data availability avoided the use of non-baseline-adjusted models, allowing instead for analysis of covariance models to be applied and thus improving the precision of treatment effect estimates while adjusting for baseline imbalance. The network meta-analysis of individual patient data using the analysis of covariance approach is advocated to be the most appropriate modelling approach for network meta-analysis of continuous outcomes, particularly in the presence of baseline imbalance. Further methods developments are required to address the challenge of analysing aggregate level data in the presence of baseline imbalance.
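    A schematic of the covariate-adjusted (ANCOVA-type) arm-level model underlying this approach, written in generic notation rather than the paper's exact specification:

```latex
y_{ijk} \;=\; \alpha_{i} \;+\; \beta\, b_{ijk} \;+\; \delta_{i,k} \;+\; \varepsilon_{ijk},
\qquad \varepsilon_{ijk} \sim N(0, \sigma^2),
```

    where y_{ijk} is the final outcome of patient j in arm k of trial i, b_{ijk} the baseline value, α_i a trial-specific intercept, β the baseline-adjustment coefficient, and δ_{i,k} the (fixed or random) relative effect of the arm-k treatment versus the trial's reference arm, with δ_{i,1} = 0; consistency relations link the δ terms across the network.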

  14. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

An improved methodology for quantitatively evaluating the failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
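    The general flavour of such an assessment can be conveyed by a toy Monte Carlo sketch: parameter uncertainties are sampled, propagated through a simple limit-state surrogate, and summarized as a failure probability. The crack-growth-style surrogate and every distribution below are illustrative inventions, not the PFA engineering models documented in the report:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
initial_crack = rng.lognormal(mean=np.log(0.5e-3), sigma=0.3, size=n)   # m, assumed flaw size
growth_coeff = rng.lognormal(mean=np.log(1e-11), sigma=0.4, size=n)     # assumed model uncertainty
stress_range = rng.normal(300e6, 20e6, size=n)                          # Pa, assumed load variability
cycles = 1_000_000
critical_crack = 5e-3                                                   # m, assumed critical size

# Crude linearized surrogate for crack growth per cycle (illustration only).
growth_per_cycle = growth_coeff * (stress_range * np.sqrt(initial_crack) / 1e6) ** 3
final_crack = initial_crack + cycles * growth_per_cycle
print("estimated failure probability:", np.mean(final_crack > critical_crack))
```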

  15. A prototype computerized synthesis methodology for generic space access vehicle (SAV) conceptual design

    NASA Astrophysics Data System (ADS)

    Huang, Xiao

    2006-04-01

    Today's and especially tomorrow's competitive launch vehicle design environment requires the development of a dedicated generic Space Access Vehicle (SAV) design methodology. A total of 115 industrial, research, and academic aircraft, helicopter, missile, and launch vehicle design synthesis methodologies have been evaluated. As the survey indicates, each synthesis methodology tends to focus on a specific flight vehicle configuration, thus precluding the key capability to systematically compare flight vehicle design alternatives. The aim of the research investigation is to provide decision-making bodies and the practicing engineer a design process and tool box for robust modeling and simulation of flight vehicles where the ultimate performance characteristics may hinge on numerical subtleties. This will enable the designer of a SAV for the first time to consistently compare different classes of SAV configurations on an impartial basis. This dissertation presents the development steps required towards a generic (configuration independent) hands-on flight vehicle conceptual design synthesis methodology. This process is developed such that it can be applied to any flight vehicle class if desired. In the present context, the methodology has been put into operation for the conceptual design of a tourist Space Access Vehicle. The case study illustrates elements of the design methodology & algorithm for the class of Horizontal Takeoff and Horizontal Landing (HTHL) SAVs. The HTHL SAV design application clearly outlines how the conceptual design process can be centrally organized, executed and documented with focus on design transparency, physical understanding and the capability to reproduce results. This approach offers the project lead and creative design team a management process and tool which iteratively refines the individual design logic chosen, leading to mature design methods and algorithms. As illustrated, the HTHL SAV hands-on design methodology offers growth potential in that the same methodology can be continually updated and extended to other SAV configuration concepts, such as the Vertical Takeoff and Vertical Landing (VTVL) SAV class. Having developed, validated and calibrated the methodology for HTHL designs in the 'hands-on' mode, the report provides an outlook how the methodology will be integrated into a prototype computerized design synthesis software AVDS-PrADOSAV in a follow-on step.

  16. Space-Time Data fusion for Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, H.; Cressie, N.

    2011-01-01

    NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values or some function of them, and provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point-level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.

  17. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
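
    A much-simplified ratio estimator (not the minimum-variance quadratic estimator of the paper) conveys the basic idea: the amplitude of the angular cross-correlation between the photometric sample and each spectroscopic redshift bin, divided by an assumed bias term, is proportional to the fraction of photometric galaxies in that bin. All arrays below are hypothetical inputs.

    ```python
    import numpy as np

    # Hypothetical measured quantities per spectroscopic redshift bin.
    z_bins = np.array([0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75])            # bin centres
    w_cross = np.array([0.010, 0.022, 0.030, 0.026, 0.018, 0.009, 0.004])    # cross-correlation amplitudes
    bias_product = np.ones_like(z_bins)   # assumed (degenerate) galaxy-bias term, here set to unity

    # Simplified clustering-redshift estimate: dN/dz proportional to w_cross / bias,
    # normalised so the binned distribution integrates to one.
    dz = 0.1
    dNdz = w_cross / bias_product
    dNdz /= np.sum(dNdz * dz)

    for z, p in zip(z_bins, dNdz):
        print(f"z = {z:.2f}: dN/dz ~ {p:.3f}")
    ```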

  18. [Definition of low threshold volumes for quality assurance: conceptual and methodological issues involved in the definition and evaluation of thresholds for volume outcome relations in clinical care].

    PubMed

    Wetzel, Hermann

    2006-01-01

    In a large number of mostly retrospective association studies, a statistical relationship between volume and quality of health care has been reported. However, the relevance of these results is frequently limited by methodological shortcomings. In this article, criteria for the evidence and definition of thresholds for volume-outcome relations are proposed, e.g. the specification of relevant outcomes for quality indicators, analysis of volume as a continuous variable with an adequate case-mix and risk adjustment, accounting for cluster effects and considering mathematical models for the derivation of cut-off values. Moreover, volume thresholds are regarded as surrogate parameters for the indirect classification of the quality of care, whose diagnostic validity and effectiveness in improving health care quality need to be evaluated in prospective studies.

  19. Toward a Bio-Medical Thesaurus: Building the Foundation of the UMLS

    PubMed Central

    Tuttle, Mark S.; Blois, Marsden S.; Erlbaum, Mark S.; Nelson, Stuart J.; Sherertz, David D.

    1988-01-01

    The Unified Medical Language System (UMLS) is being designed to provide a uniform user interface to heterogeneous machine-readable bio-medical information resources, such as bibliographic databases, genetic databases, expert systems and patient records.1 Such an interface will have to recognize different ways of saying the same thing, and provide links to ways of saying related things. One way to represent the necessary associations is via a domain thesaurus. As no such thesaurus exists, and because, once built, it will be both sizable and in need of continuous maintenance, its design should include a methodology for building and maintaining it. We propose a methodology, utilizing lexically expanded schema inversion, and a design, called T. Lex, which together form one approach to the problem of defining and building a bio-medical thesaurus. We argue that the semantic locality implicit in such a thesaurus will support model-based reasoning in bio-medicine.2

  20. Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets.

    PubMed

    Neville, R S; Stonham, T J; Glover, R J

    2000-01-01

    In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8-bits and the activations to 9-bits. A novel methodology is introduced to enable the accuracy of sigma-pi units to be increased by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital "Higher Order" sigma-pi nodes and studies continuous input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0, 1]; this is equivalent to an average Mean Square Error (MSE) over all training vectors of 0.0001 or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation that was derived from a bit-stream. In this article we present a new methodology for storing sigma-pi nodes' activations as single values which are averages. In the course of the article we state what we define as a real number, how we represent real numbers, and how we input continuous values into our neural system. We show how to utilise the bounded quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using 'ring memories'. Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).
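
    The notion of pre-calculating parts of the backpropagation update for quantised weights can be sketched as a look-up table indexed by quantised error and activation values. The resolution choices and the simple delta-rule form below are illustrative assumptions, not the paper's actual sigma-pi training scheme.

    ```python
    import numpy as np

    # Illustrative quantisation levels (8-bit weights, 9-bit activations, as in the abstract;
    # the delta-rule look-up table below is an assumption for illustration).
    N_W, N_A, N_E = 256, 512, 64      # weight, activation and error quantisation levels
    eta = 0.1                         # learning rate

    act_levels = np.linspace(0.0, 1.0, N_A)      # quantised activation values
    err_levels = np.linspace(-1.0, 1.0, N_E)     # quantised error values
    w_grid = np.linspace(-1.0, 1.0, N_W)         # 8-bit weight grid

    # Pre-calculated table of weight increments: delta_w = eta * error * activation.
    update_lut = eta * np.outer(err_levels, act_levels)   # shape (N_E, N_A)

    def quantise(x, levels):
        """Index of the nearest quantisation level."""
        return int(np.argmin(np.abs(levels - x)))

    # A training step then reduces to two table look-ups, a bounded add and re-quantisation.
    w, err, act = 0.25, 0.37, 0.81
    w_new = np.clip(w + update_lut[quantise(err, err_levels), quantise(act, act_levels)], -1.0, 1.0)
    w_new = w_grid[quantise(w_new, w_grid)]
    print(w_new)
    ```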

  1. Assessing the groundwater recharge under various irrigation schemes in Central Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Lin, Zih-Ciao; Tsai, Cheng-Bin

    2014-05-01

    The flooded paddy fields can be considered a major source of groundwater recharge in Central Taiwan. The risk of rice production has increased notably due to climate change in this area. To respond to agricultural water shortage caused by climate change without affecting rice yield in the future, the application of water-saving irrigation is a practical solution. The System of Rice Intensification (SRI) was developed as a set of insights and practices used in growing irrigated rice. Based on the water-saving irrigation practice of SRI, the impacts of the new methodology on the reduction of groundwater recharge were assessed in Central Taiwan. The three-dimensional finite element groundwater model (FEMWATER), with variable boundary condition analog functions, was applied in simulating groundwater recharge under different irrigation schemes. According to local climatic and environmental characteristics associated with the SRI methodology, the change in infiltration rate was evaluated and compared with the traditional irrigation schemes, including the continuous irrigation and rotational irrigation schemes. The simulation results showed that the average infiltration rate in the rice growing season decreased when applying the SRI methodology, and the total groundwater recharge amount of SRI with a 5-day irrigation interval was reduced by 12% and 9% compared with continuous irrigation (6 cm constant ponding water depth) and the rotational scheme (5-day irrigation interval with 6 cm initial ponding water depth), respectively. The results could be used as a basis for planning long-term adaptive water resource management strategies in response to climate change in Central Taiwan. Keywords: SRI, Irrigation schemes, Groundwater recharge, Infiltration

  2. 12 CFR 252.155 - Methodologies and practices.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... SYSTEM (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for....155 Methodologies and practices. (a) Potential impact on capital. In conducting a stress test under...) Losses, pre-provision net revenue, provision for loan and lease losses, and net income; and (2) The...

  3. 12 CFR 252.155 - Methodologies and practices.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... SYSTEM (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for....155 Methodologies and practices. (a) Potential impact on capital. In conducting a stress test under...) Losses, pre-provision net revenue, provision for loan and lease losses, and net income; and (2) The...

  4. Social Media Mourning.

    PubMed

    Moore, Jensen; Magee, Sara; Gamreklidze, Ellada; Kowalewski, Jennifer

    2017-01-01

    This article uses grounded theory methodology to analyze in-depth interviews conducted with mourners who used social networking sites during bereavement. The social media mourning (SMM) model outlines how social networking sites are used to grieve using one or more of the following: (a) one-way communication, (b) two-way communication, and (c) immortality communication. The model indicates causal conditions of SMM: (a) sharing information with family or friends and (sometimes) beginning a dialog, (b) discussing death with others mourning, (c) discussing death with a broader mourning community, and (d) commemorating and continuing connection to the deceased. The article includes actions and consequences associated with SMM and suggests several ways in which SMM changes or influences the bereavement process.

  5. Miniature high temperature plug-type heat flux gauges

    NASA Technical Reports Server (NTRS)

    Liebert, Curt H.

    1992-01-01

    The objective is to describe continuing efforts to develop methods for measuring surface heat flux, gauge active surface temperature, and heat transfer coefficient quantities. The methodology involves inventing a procedure for fabricating improved plug-type heat flux gauges and also for formulating inverse heat conduction models and calculation procedures. These models and procedures are required for making indirect measurements of these quantities from direct temperature measurements at gauge interior locations. Measurements of these quantities were made in a turbine blade thermal cycling tester (TBT) located at MSFC. The TBT partially simulates the turbopump turbine environment in the Space Shuttle Main Engine. After the TBT test, experiments were performed in an arc lamp to analyze gauge quality.

  6. Big–deep–smart data in imaging for guiding materials design

    DOE PAGES

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-09-23

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  7. Big-deep-smart data in imaging for guiding materials design.

    PubMed

    Kalinin, Sergei V; Sumpter, Bobby G; Archibald, Richard K

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  8. Big-deep-smart data in imaging for guiding materials design

    NASA Astrophysics Data System (ADS)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    2015-10-01

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  9. Big–deep–smart data in imaging for guiding materials design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalinin, Sergei V.; Sumpter, Bobby G.; Archibald, Richard K.

    Harnessing big data, deep data, and smart data from state-of-the-art imaging might accelerate the design and realization of advanced functional materials. Here we discuss new opportunities in materials design enabled by the availability of big data in imaging and data analytics approaches, including their limitations, in material systems of practical interest. We specifically focus on how these tools might help realize new discoveries in a timely manner. Such methodologies are particularly appropriate to explore in light of continued improvements in atomistic imaging, modelling and data analytics methods.

  10. Lagrangian condensation microphysics with Twomey CCN activation

    NASA Astrophysics Data System (ADS)

    Grabowski, Wojciech W.; Dziekan, Piotr; Pawlowska, Hanna

    2018-01-01

    We report the development of a novel Lagrangian microphysics methodology for simulations of warm ice-free clouds. The approach applies the traditional Eulerian method for the momentum and continuous thermodynamic fields such as the temperature and water vapor mixing ratio, and uses Lagrangian super-droplets to represent the condensed phase, such as cloud droplets and drizzle or rain drops. In other applications of the Lagrangian warm-rain microphysics, the super-droplets outside clouds represent unactivated cloud condensation nuclei (CCN) that become activated upon entering a cloud and can further grow through diffusional and collisional processes. The original methodology allows for the detailed study of not only effects of CCN on cloud microphysics and dynamics, but also CCN processing by a cloud. However, when cloud processing is not of interest, a simpler and computationally more efficient approach can be used with super-droplets forming only when CCN is activated and no super-droplet existing outside a cloud. This is possible by applying the Twomey activation scheme where the local supersaturation dictates the concentration of cloud droplets that need to be present inside a cloudy volume, as typically used in Eulerian bin microphysics schemes. Since a cloud volume is a small fraction of the computational domain volume, the Twomey super-droplets provide a significant computational advantage when compared to the original super-droplet methodology. An additional advantage comes from the significantly longer time steps that can be used when modeling of CCN deliquescence is avoided. Moreover, other formulations of the droplet activation can be applied in the case of low vertical resolution of the host model, for instance linking the concentration of activated cloud droplets to the local updraft speed. This paper discusses the development and testing of the Twomey super-droplet methodology, focusing on activation and diffusional growth. The activation implementation, the transport of super-droplets in physical space, and the coupling between super-droplets and the Eulerian temperature and water vapor fields are discussed in detail. Some of these aspects are also relevant to the original super-droplet methodology and to ice-phase modeling using the Lagrangian approach. As a computational example, the scheme is applied to an idealized moist thermal rising in a stratified environment, with the original super-droplet methodology providing a benchmark to which the new scheme is compared.
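
    The Twomey-type activation referred to above relates the number of activated CCN to the local supersaturation, commonly written as N = c S^k. The sketch below uses that relation with assumed (c, k) values to decide how many super-droplets a cloudy grid volume should hold; it is an illustration of the activation logic, not the scheme's actual implementation.

    ```python
    import numpy as np

    def twomey_activated_number(supersat_pct, c=100.0e6, k=0.5):
        """Activated droplet number concentration [m^-3] from supersaturation [%],
        using the Twomey relation N = c * S**k (c and k are assumed illustrative values)."""
        return c * np.maximum(supersat_pct, 0.0) ** k

    def superdroplets_needed(supersat_pct, cell_volume, multiplicity, n_existing):
        """New super-droplets to inject so that the represented droplet concentration
        matches the Twomey target (multiplicity = real droplets per super-droplet)."""
        target = twomey_activated_number(supersat_pct) * cell_volume
        deficit = target - n_existing * multiplicity
        return max(int(np.ceil(deficit / multiplicity)), 0)

    # Example: a 1 m^3 cloudy volume at 0.4 % supersaturation, currently holding no super-droplets.
    print(superdroplets_needed(0.4, cell_volume=1.0, multiplicity=1.0e6, n_existing=0))
    ```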

  11. The Tools of Quality in Electronic Engineering Education.

    ERIC Educational Resources Information Center

    Medrano, C. T.; Ube, M.; Plaza, I.; Blesa, A.

    2002-01-01

    Explores the integration of three factors in the lecture room that influence the function between students and society. Describes the innovation of quality in education that provides students with a work methodology in agreement with the professional world and allows continuous improvement in educational methodology. (Author/KHR)

  12. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  13. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  14. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  15. 7 CFR 1940.552 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... REGULATIONS (CONTINUED) GENERAL Methodology and Formulas for Allocation of Loan and Grant Program Funds § 1940..., funds will be controlled by the National Office. (b) Basic formula criteria, data source and weight. Basic formulas are used to calculate a basic state factor as a part of the methodology for allocating...

  16. Continuous EEG source imaging enhances analysis of EEG-fMRI in focal epilepsy.

    PubMed

    Vulliemoz, S; Rodionov, R; Carmichael, D W; Thornton, R; Guye, M; Lhatoo, S D; Michel, C M; Duncan, J S; Lemieux, L

    2010-02-15

    EEG-correlated fMRI (EEG-fMRI) studies can reveal haemodynamic changes associated with Interictal Epileptic Discharges (IED). Methodological improvements are needed to increase sensitivity and specificity for localising the epileptogenic zone. We investigated whether the estimated EEG source activity improved models of the BOLD changes in EEG-fMRI data, compared to conventional 'event-related' designs based solely on the visual identification of IED. Ten patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI. EEG Source Imaging (ESI) was performed on intra-fMRI averaged IED to identify the irritative zone. The continuous activity of this estimated IED source (cESI) over the entire recording was used for fMRI analysis (cESI model). The maps of BOLD signal changes explained by cESI were compared to results of the conventional IED-related model. ESI was concordant with non-invasive data in 13/15 different types of IED. The cESI model explained significant additional BOLD variance in regions concordant with video-EEG, structural MRI or, when available, intracranial EEG in 10/15 IED. The cESI model allowed better detection of the BOLD cluster, concordant with intracranial EEG in 4/7 IED, compared to the IED model. In 4 IED types, cESI-related BOLD signal changes were diffuse with a pattern suggestive of contamination of the source signal by artefacts, notably incompletely corrected motion and pulse artefact. In one IED type, there was no significant BOLD change with either model. Continuous EEG source imaging can improve the modelling of BOLD changes related to interictal epileptic activity and this may enhance the localisation of the irritative zone. Copyright 2009 Elsevier Inc. All rights reserved.
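
    The cESI model amounts to using a continuous source time course, rather than a train of discrete IED events, as the regressor of interest in the fMRI general linear model. The sketch below shows the generic step of convolving a source activity trace with a canonical double-gamma haemodynamic response function and resampling it at the scan times; the HRF parameters and sampling rates are standard assumptions, not values taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import gamma

    def canonical_hrf(dt, duration=32.0):
        """Double-gamma HRF sampled every dt seconds (commonly used default parameters)."""
        t = np.arange(0.0, duration, dt)
        h = gamma.pdf(t, a=6.0, scale=1.0) - gamma.pdf(t, a=16.0, scale=1.0) / 6.0
        return h / h.sum()

    def build_cesi_regressor(source_activity, fs_eeg, tr, n_scans):
        """Convolve a continuous EEG source time course with the HRF and sample at scan times."""
        dt = 1.0 / fs_eeg
        bold_pred = np.convolve(source_activity, canonical_hrf(dt))[: len(source_activity)]
        scan_samples = (np.arange(n_scans) * tr * fs_eeg).astype(int)
        regressor = bold_pred[scan_samples]
        return (regressor - regressor.mean()) / regressor.std()

    # Example with synthetic source activity: 2 min of EEG at 250 Hz, TR = 3 s, 40 scans.
    fs, tr, n_scans = 250, 3.0, 40
    activity = np.abs(np.random.randn(fs * 120))
    print(build_cesi_regressor(activity, fs, tr, n_scans).shape)
    ```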

  17. Segmentation and Recognition of Continuous Human Activity

    DTIC Science & Technology

    2001-01-01

    This paper presents a methodology for automatic segmentation and recognition of continuous human activity. We segment a continuous human activity into... commencement or termination. We use single action sequences for the training data set. The test sequences, on the other hand, are continuous sequences of human activity that consist of three or more actions in succession. The system has been tested on continuous activity sequences containing actions such as...

  18. A holistic approach for large-scale derived flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of the reported methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequency at large scales. A large-scale two-component model has been established for simulating very long-term multisite synthetic meteorological fields and flood flow at many gauged and ungauged locations, thereby reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km², including Germany and parts of nearby countries. The model performance has been examined multi-objectively with a focus on extremes. By this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.

  19. Modeling tree crown dynamics with 3D partial differential equations.

    PubMed

    Beyer, Robert; Letort, Véronique; Cournède, Paul-Henry

    2014-01-01

    We characterize a tree's spatial foliage distribution by the local leaf area density. Considering this spatially continuous variable allows to describe the spatiotemporal evolution of the tree crown by means of 3D partial differential equations. These offer a framework to rigorously take locally and adaptively acting effects into account, notably the growth toward light. Biomass production through photosynthesis and the allocation to foliage and wood are readily included in this model framework. The system of equations stands out due to its inherent dynamic property of self-organization and spontaneous adaptation, generating complex behavior from even only a few parameters. The density-based approach yields spatially structured tree crowns without relying on detailed geometry. We present the methodological fundamentals of such a modeling approach and discuss further prospects and applications.

  20. Archetype modeling methodology.

    PubMed

    Moner, David; Maldonado, José Alberto; Robles, Montserrat

    2018-03-01

    Clinical Information Models (CIMs) expressed as archetypes play an essential role in the design and development of current Electronic Health Record (EHR) information structures. Although there exist many experiences about using archetypes in the literature, a comprehensive and formal methodology for archetype modeling does not exist. Having a modeling methodology is essential to develop quality archetypes, in order to guide the development of EHR systems and to allow the semantic interoperability of health data. In this work, an archetype modeling methodology is proposed. This paper describes its phases, the inputs and outputs of each phase, and the involved participants and tools. It also includes the description of the possible strategies to organize the modeling process. The proposed methodology is inspired by existing best practices of CIMs, software and ontology development. The methodology has been applied and evaluated in regional and national EHR projects. The application of the methodology provided useful feedback and improvements, and confirmed its advantages. The conclusion of this work is that having a formal methodology for archetype development facilitates the definition and adoption of interoperable archetypes, improves their quality, and facilitates their reuse among different information systems and EHR projects. Moreover, the proposed methodology can be also a reference for CIMs development using any other formalism. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Proportional exponentiated link transformed hazards (ELTH) models for discrete time survival data with application

    PubMed Central

    Joeng, Hee-Koung; Chen, Ming-Hui; Kang, Sangwook

    2015-01-01

    Discrete survival data are routinely encountered in many fields of study including behavior science, economics, epidemiology, medicine, and social science. In this paper, we develop a class of proportional exponentiated link transformed hazards (ELTH) models. We carry out a detailed examination of the role of links in fitting discrete survival data and estimating regression coefficients. Several interesting results are established regarding the choice of links and baseline hazards. We also characterize the conditions for improper survival functions and the conditions for existence of the maximum likelihood estimates under the proposed ELTH models. An extensive simulation study is conducted to examine the empirical performance of the parameter estimates under the Cox proportional hazards model by treating discrete survival times as continuous survival times, and the model comparison criteria, AIC and BIC, in determining links and baseline hazards. A SEER breast cancer dataset is analyzed in detail to further demonstrate the proposed methodology. PMID:25772374

  2. Model Package Report: Hanford Soil Inventory Model SIM v.2 Build 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nichols, Will E.; Zaher, U.; Mehta, S.

    The Hanford Soil Inventory Model (SIM) is a tool for the estimation of inventory of contaminants that were released to soil from liquid discharges during the U.S. Department of Energy’s Hanford Site operations. This model package report documents the construction and development of a second version of SIM (SIM-v2) to support the needs of Hanford Site Composite Analysis. The SIM-v2 is implemented using GoldSim Pro® software with a new model architecture that preserves the uncertainty in inventory estimates while reducing the computational burden (compared to the previous version) and allowing more traceability and transparency in calculation methodology. The calculation architecture is designed in such a manner that future updates to the waste stream composition along with addition or deletion of waste sites can be performed with relative ease. In addition, the new computational platform allows for continued hardware upgrade.

  3. Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems

    DTIC Science & Technology

    2015-12-01

    IMPROVED CONCEPTUAL MODELS METHODOLOGY (ICoMM) FOR VALIDATION OF NON-OBSERVABLE SYSTEMS, by Sang M. Sok, December 2015 (dissertation). ...importance of the CoM. The improved conceptual model methodology (ICoMM) is developed in support of improving the structure of the CoM for both face and...

  4. Damage identification of beam structures using free response shapes obtained by use of a continuously scanning laser Doppler vibrometer system

    NASA Astrophysics Data System (ADS)

    Xu, Y. F.; Chen, Da-Ming; Zhu, W. D.

    2017-08-01

    Spatially dense operating deflection shapes and mode shapes can be rapidly obtained by use of a continuously scanning laser Doppler vibrometer (CSLDV) system, which sweeps its laser spot over a vibrating structure surface. This paper introduces a new type of vibration shapes called a free response shape (FRS) that can be obtained by use of a CSLDV system, and a new damage identification methodology using FRSs is developed for beam structures. An analytical expression of FRSs of a damped beam structure is derived, and FRSs from the analytical expression compare well with those from a finite element model. In the damage identification methodology, a free-response damage index (FRDI) is proposed, and damage regions can be identified near neighborhoods with consistently high values of FRDIs associated with different modes; an auxiliary FRDI is defined to assist identification of the neighborhoods. A FRDI associated with a mode consists of differences between curvatures of FRSs associated with the mode in a number of half-scan periods of a CSLDV system and those from polynomials that fit the FRSs with properly determined orders. A convergence index is proposed to determine the proper order of a polynomial fit. One advantage of the methodology is that the FRDI does not require any baseline information of an undamaged beam structure, if it is geometrically smooth and made of materials that have no stiffness and mass discontinuities. Another advantage is that FRDIs associated with multiple modes can be obtained using free response of a beam structure measured by a CSLDV system in one scan. The number of half-scan periods for calculation of the FRDI associated with a mode can be determined by use of the short-time Fourier transform. The proposed methodology was numerically and experimentally applied to identify damage in beam structures; effects of the scan frequency of a CSLDV system on qualities of obtained FRSs were experimentally investigated.
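
    The damage-index construction described above can be sketched generically: approximate the curvature of a measured free response shape by central differences, fit a low-order polynomial to the shape, and take the squared difference between the measured curvature and the curvature of the fit as the index. The code below is a simplified single-shape illustration with assumed data; the paper's FRDI additionally averages over half-scan periods and modes and uses a convergence index to select the polynomial order.

    ```python
    import numpy as np

    def curvature(y, dx):
        """Second spatial derivative by central differences (interior points only)."""
        return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2

    def damage_index(x, shape, poly_order=6):
        """Squared difference between measured curvature and the curvature of a polynomial fit."""
        coeffs = np.polyfit(x, shape, poly_order)
        fitted = np.polyval(coeffs, x)
        dx = x[1] - x[0]
        return (curvature(shape, dx) - curvature(fitted, dx)) ** 2

    # Synthetic example: a smooth beam shape with a small local "kink" standing in for damage.
    x = np.linspace(0.0, 1.0, 201)
    shape = np.sin(np.pi * x) + 0.002 * np.maximum(0.0, x - 0.6)   # assumed damaged-shape stand-in
    di = damage_index(x, shape)
    print("suspected damage near x =", x[1:-1][np.argmax(di)])
    ```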

  5. Uncertainty Evaluation of Computational Model Used to Support the Integrated Powerhead Demonstration Project

    NASA Technical Reports Server (NTRS)

    Steele, W. G.; Molder, K. J.; Hudson, S. T.; Vadasy, K. V.; Rieder, P. T.; Giel, T.

    2005-01-01

    NASA and the U.S. Air Force are working on a joint project to develop a new hydrogen-fueled, full-flow, staged combustion rocket engine. The initial testing and modeling work for the Integrated Powerhead Demonstrator (IPD) project is being performed by NASA Marshall and Stennis Space Centers. A key factor in the testing of this engine is the ability to predict and measure the transient fluid flow during engine start and shutdown phases of operation. A model built by NASA Marshall in the ROCket Engine Transient Simulation (ROCETS) program is used to predict transient engine fluid flows. The model is initially calibrated to data from previous tests on the Stennis E1 test stand. The model is then used to predict the next run. Data from this run can then be used to recalibrate the model, providing a tool to guide the test program in incremental steps to reduce the risk to the prototype engine. In this paper, this type of model is defined as a calibrated model. This paper proposes a method to estimate the uncertainty of a model calibrated to a set of experimental test data. The method is similar to that used in the calibration of experimental instrumentation. For the IPD example used in this paper, the model uncertainty is determined for both LOX and LH flow rates using previous data. The successful use of this model to predict another similar test run within the uncertainty bounds is then demonstrated. The paper summarizes the uncertainty methodology when a model is continually recalibrated with new test data. The methodology is general and can be applied to other calibrated models.
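
    A minimal sketch of treating a recalibrated model like a calibrated instrument: after each calibration, the uncertainty of subsequent predictions is estimated from the spread of the residuals between model and calibration data. The numbers and the simple two-sigma band below are illustrative assumptions, not the paper's formal uncertainty propagation.

    ```python
    import numpy as np

    def calibration_uncertainty(measured, predicted, coverage_factor=2.0):
        """Prediction uncertainty band estimated from calibration residuals
        (sample standard deviation of residuals times a coverage factor)."""
        residuals = np.asarray(measured) - np.asarray(predicted)
        return coverage_factor * residuals.std(ddof=1)

    # Hypothetical flow-rate calibration data (test vs. model), arbitrary units.
    measured = np.array([101.2, 98.7, 103.5, 99.9, 100.8])
    predicted = np.array([100.5, 99.5, 102.0, 100.4, 101.5])

    u = calibration_uncertainty(measured, predicted)
    next_prediction = 102.3
    print(f"next run predicted flow: {next_prediction:.1f} +/- {u:.1f}")
    ```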

  6. SUDOQU, a new dose-assessment methodology for radiological surface contamination.

    PubMed

    van Dillen, Teun; van Dijk, Arjan

    2018-06-12

    A new methodology has been developed for the assessment of the annual effective dose resulting from removable and fixed radiological surface contamination. It is entitled SUDOQU (SUrface DOse QUantification) and it can for instance be used to derive criteria for surface contamination related to the import of non-food consumer goods, containers and conveyances, e.g., limiting values and operational screening levels. SUDOQU imposes mass (activity)-balance equations based on radioactive decay, removal and deposition processes in indoor and outdoor environments. This leads to time-dependent contamination levels that may be of particular importance in exposure scenarios dealing with one or a few contaminated items only (usually public exposure scenarios, therefore referred to as the 'consumer' model). Exposure scenarios with a continuous flow of freshly contaminated goods also fall within the scope of the methodology (typically occupational exposure scenarios, thus referred to as the 'worker' model). In this paper we describe SUDOQU, its applications, and its current limitations. First, we delineate the contamination issue, present the assumptions and explain the concepts. We describe the relevant removal, transfer, and deposition processes, and derive equations for the time evolution of the radiological surface-, air- and skin-contamination levels. These are then input for the subsequent evaluation of the annual effective dose with possible contributions from external gamma radiation, inhalation, secondary ingestion (indirect, from hand to mouth), skin contamination, direct ingestion and skin-contact exposure. The limiting effective surface dose is introduced for issues involving the conservatism of dose calculations. SUDOQU can be used by radiation-protection scientists/experts and policy makers in the field of e.g. emergency preparedness, trade and transport, exemption and clearance, waste management, and nuclear facilities. Several practical examples are worked out demonstrating the potential applications of the methodology. Creative Commons Attribution license.
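
    The mass (activity) balance at the heart of such surface-contamination models can be illustrated with a single first-order equation for the removable surface activity, dA/dt = deposition − (decay + removal)·A, integrated over a year. The rate constants below are placeholder assumptions chosen only to show the time-dependent behaviour the abstract refers to, not SUDOQU's actual parameter set.

    ```python
    import numpy as np

    def surface_activity(a0, decay_rate, removal_rate, deposition_rate, days):
        """Removable surface activity A(t) [Bq/m^2] under first-order decay/removal and a
        constant (re)deposition source: dA/dt = deposition - (decay + removal) * A."""
        k = decay_rate + removal_rate
        t = np.arange(days + 1, dtype=float)
        a_eq = deposition_rate / k if k > 0 else np.inf
        return a_eq + (a0 - a_eq) * np.exp(-k * t)

    # Placeholder rates (per day) for a single contaminated item in an indoor environment.
    activity = surface_activity(a0=1000.0, decay_rate=0.002, removal_rate=0.01,
                                deposition_rate=0.5, days=365)
    print(f"activity after one year: {activity[-1]:.1f} Bq/m^2 (start: {activity[0]:.1f})")
    ```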

  7. A general methodology for population analysis

    NASA Astrophysics Data System (ADS)

    Lazov, Petar; Lazov, Igor

    2014-12-01

    For a given population with N the current and M the maximum number of entities, modeled by a Birth-Death Process (BDP) with size M+1, we introduce the utilization parameter ρ, the ratio of the primary birth and death rates in that BDP, which, physically, determines the (equilibrium) macrostates of the population, and the information parameter ν, which has an interpretation as population information stiffness. The BDP, modeling the population, is in the state n, n=0,1,…,M, if N=n. In the presence of these two key metrics, applying the continuity law, equilibrium balance equations concerning the probability distribution pn, n=0,1,…,M, of the quantity N, pn=Prob{N=n}, in equilibrium, and the conservation law, and relying on the fundamental concepts of population information and population entropy, we develop a general methodology for population analysis; thereto, by definition, population entropy is the uncertainty related to the population. In this approach, and this is its essential contribution, the population information consists of three basic parts: an elastic (Hooke's) or absorption/emission part, a synchronization or inelastic part, and a null part; the first two parts, which uniquely determine the null part (the null part connects them), are the two basic components of the Information Spectrum of the population. Population entropy, as the mean value of population information, follows this division of the information. A given population can function in an information elastic, antielastic or inelastic regime. In an information linear population, the synchronization part of the information and entropy is absent. The population size, M+1, is the third key metric in this methodology. Namely, if a population of infinite size is supposed, most of the key quantities and results for populations of finite size that emerge in this methodology vanish.
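
    For a finite birth-death process, the equilibrium distribution follows from detailed balance, p_n = p_0 · Π_{i=0}^{n-1} (λ_i / μ_{i+1}); with constant primary rates this reduces to a function of the utilization parameter ρ = λ/μ. The sketch below computes the distribution and its entropy for assumed rates; the information-spectrum decomposition of the paper is not reproduced here.

    ```python
    import numpy as np

    def bdp_equilibrium(birth_rates, death_rates):
        """Equilibrium distribution of a finite birth-death process via detailed balance.
        birth_rates[i] is the rate from state i to i+1; death_rates[i] from i+1 to i."""
        ratios = np.concatenate(([1.0],
                                 np.cumprod(np.asarray(birth_rates) / np.asarray(death_rates))))
        return ratios / ratios.sum()

    # Example: population of maximum size M = 10 with constant primary rates (rho = 0.8).
    M, lam, mu = 10, 0.8, 1.0
    p = bdp_equilibrium([lam] * M, [mu] * M)
    entropy = -np.sum(p * np.log(p))     # population entropy as the mean information
    print(p.round(4), f"entropy = {entropy:.3f} nats")
    ```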

  8. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    PubMed

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model components stoichiometry and formation enthalpies, the proposed modelling methodology has integrated successfully the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in the biochemical reactors. The methodology has been implemented in a plant-wide modelling methodology in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, as illustrative examples of the capability of the methodology, two case studies have been described. In the first one, a predenitrification-nitrification dynamic process has been analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD has shown the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
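
    The enthalpy bookkeeping the methodology builds on is Hess's law: the enthalpy change of a transformation equals the stoichiometry-weighted sum of the formation enthalpies of its components. A minimal sketch with a hypothetical stoichiometric vector is shown below; a plant-wide model would track this alongside the mass balances in the multi-phase matrix the abstract describes.

    ```python
    # Minimal Hess's-law sketch: enthalpy change of one transformation from component
    # formation enthalpies and a (hypothetical) stoichiometric vector.
    # Convention: negative stoichiometric coefficients for consumed components.

    formation_enthalpy = {        # kJ/mol, illustrative textbook-style values
        "glucose": -1273.3,
        "O2": 0.0,
        "CO2": -393.5,
        "H2O": -285.8,
    }

    stoichiometry = {             # hypothetical aerobic-oxidation stoichiometry
        "glucose": -1.0,
        "O2": -6.0,
        "CO2": 6.0,
        "H2O": 6.0,
    }

    delta_h = sum(nu * formation_enthalpy[c] for c, nu in stoichiometry.items())
    print(f"enthalpy change of reaction: {delta_h:.1f} kJ/mol of glucose")
    ```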

  9. An Overview of Modifications Applied to a Turbulence Response Analysis Method for Flexible Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Funk, Christie J.

    2013-01-01

    A software program and associated methodology to study gust loading on aircraft exists for a classification of geometrically simplified flexible configurations. This program consists of a simple aircraft response model with two rigid and three flexible symmetric degrees of freedom and allows for the calculation of various airplane responses due to a discrete one-minus-cosine gust as well as continuous turbulence. Simplifications, assumptions, and opportunities for potential improvements pertaining to the existing software program are first identified; then a revised version of the original software tool is developed with improved methodology to include more complex geometries, additional excitation cases, and output data so as to provide a more useful and accurate tool for gust load analysis. Revisions are made in the categories of aircraft geometry, computation of aerodynamic forces and moments, and implementation of horizontal tail mode shapes. In order to improve the original software program and enhance its usefulness, a wing control surface and a horizontal tail control surface are added, an extended application of the discrete one-minus-cosine gust input is employed, a supplemental continuous turbulence spectrum is implemented, and a capability to animate the total vehicle deformation response to gust inputs is included. These revisions and enhancements are implemented and an analysis of the results is used to validate the modifications.
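
    The discrete one-minus-cosine gust referred to above is commonly written as w(s) = (w_max / 2)·(1 − cos(2πs/L)) over one gust length L; conventions for the gradient length differ between standards, so treat the exact form below as an assumption rather than the program's definition.

    ```python
    import numpy as np

    def one_minus_cosine_gust(s, gust_length, w_max):
        """Discrete '1-cosine' gust velocity profile over one gust length;
        zero outside the interval [0, gust_length]."""
        s = np.asarray(s, dtype=float)
        w = 0.5 * w_max * (1.0 - np.cos(2.0 * np.pi * s / gust_length))
        return np.where((s >= 0.0) & (s <= gust_length), w, 0.0)

    # Example: 15 m/s peak gust over a 100 m gust length, sampled along the flight path.
    s = np.linspace(-20.0, 150.0, 9)
    print(one_minus_cosine_gust(s, gust_length=100.0, w_max=15.0).round(2))
    ```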

  10. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  11. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
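
    The mixed discrete-continuous structure described above can be illustrated by simulating a latent continuous equilibrium score that is only observed when no fall occurs, with the fall probability depending on the latent value. The distributional choices and coefficients below are assumptions for illustration only, not the fitted model of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_trials(n, mean_latent=75.0, sd_latent=12.0, a=6.0, b=0.12):
        """Simulate CDP trials: a latent ES always exists, but a fall (observed score 0)
        occurs with a probability that increases as the latent ES decreases (logistic model)."""
        latent = rng.normal(mean_latent, sd_latent, size=n)
        p_fall = 1.0 / (1.0 + np.exp(-(a - b * latent)))   # assumed fall model
        fall = rng.random(n) < p_fall
        observed = np.where(fall, 0.0, np.clip(latent, 0.0, 100.0))
        return latent, observed, fall

    latent, observed, fall = simulate_trials(10_000)
    print(f"fall rate: {fall.mean():.3f}")
    print(f"mean observed ES: {observed.mean():.1f} vs mean latent ES: {latent.mean():.1f}")
    ```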

  12. Cardiac surgery report cards: comprehensive review and statistical critique.

    PubMed

    Shahian, D M; Normand, S L; Torchiana, D F; Lewis, S M; Pastore, J O; Kuntz, R E; Dreyer, P I

    2001-12-01

    Public report cards and confidential, collaborative peer education represent distinctly different approaches to cardiac surgery quality assessment and improvement. This review discusses the controversies regarding their methodology and relative effectiveness. Report cards have been the more commonly used approach, typically as a result of state legislation. They are based on the presumption that publication of outcomes effectively motivates providers, and that market forces will reward higher quality. Numerous studies have challenged the validity of these hypotheses. Furthermore, although states with report cards have reported significant decreases in risk-adjusted mortality, it is unclear whether this improvement resulted from public disclosure or, rather, from the development of internal quality programs by hospitals. An additional confounding factor is the nationwide decline in heart surgery mortality, including states without quality monitoring. Finally, report cards may engender negative behaviors such as high-risk case avoidance and "gaming" of the reporting system, especially if individual surgeon results are published. The alternative approach, continuous quality improvement, may provide an opportunity to enhance performance and reduce interprovider variability while avoiding the unintended negative consequences of report cards. This collaborative method, which uses exchange visits between programs and determination of best practice, has been highly effective in northern New England and in the Veterans Affairs Administration. However, despite their potential advantages, quality programs based solely on confidential continuous quality improvement do not address the issue of public accountability. For this reason, some states may continue to mandate report cards. In such instances, it is imperative that appropriate statistical techniques and report formats are used, and that professional organizations simultaneously implement continuous quality improvement programs. The statistical methodology underlying current report cards is flawed, and does not justify the degree of accuracy presented to the public. All existing risk-adjustment methods have substantial inherent imprecision, and this is compounded when the results of such patient-level models are aggregated and used inappropriately to assess provider performance. Specific problems include sample size differences, clustering of observations, multiple comparisons, and failure to account for the random component of interprovider variability. We advocate the use of hierarchical or multilevel statistical models to address these concerns, as well as report formats that emphasize the statistical uncertainty of the results.

  13. Language Education and ELT Materials in Turkey from the Path Dependence Perspective

    ERIC Educational Resources Information Center

    Isik, Ali

    2011-01-01

    This paper examines the role of traditional language teaching methodology on the current language teaching methodology in Turkey from the Path Dependence Theory perspective. Path Dependence claims that the past continues shaping the present. Similarly, traditional approaches still shape foreign/second language education. Turkey has inherited a…

  14. 39 CFR 501.16 - PC postage payment methodology.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DISTRIBUTE POSTAGE EVIDENCING SYSTEMS § 501.16 PC postage payment methodology. (a) The PC Postage customer is... issues a refund to a customer for any unused postage in a Postage Evidencing System. After verification... Service approval to continue to operate PC Postage systems, the provider must submit to a periodic audit...

  15. Exploratory Practice and Soft Systems Methodology

    ERIC Educational Resources Information Center

    Tajino, Akira; Smith, Craig

    2005-01-01

    This paper aims to demonstrate that Soft Systems Methodology (SSM), a soft systems approach developed in management studies (see Checkland, 1981), can be usefully linked with Exploratory Practice (EP), a form of practitioner research for language classrooms. Some compatible SSM and EP characteristics, in tandem, could enhance continual efforts to…

  16. Beyond Composite Scores and Cronbach's Alpha: Advancing Methodological Rigor in Recreation Research

    ERIC Educational Resources Information Center

    Gagnon, Ryan J.; Stone, Garrett A.; Garst, Barry A.

    2017-01-01

    Critically examining common statistical approaches and their strengths and weaknesses is an important step in advancing recreation and leisure sciences. To continue this critical examination and to inform methodological decision making, this study compared three approaches to determine how alternative approaches may result in contradictory…

  17. Research in Secondary English, 1912-2011: Historical Continuities and Discontinuities in the NCTE Imprint

    ERIC Educational Resources Information Center

    Brass, Jory; Burns, Leslie David

    2011-01-01

    This study identified historical continuities and discontinuities across a century of secondary research published in "English Journal" (1912-1966) and "Research in the Teaching of English" (1967-2011). It highlights considerable methodological continuity across six decades of "English Journal" and some shifts in research emphases that tended to…

  18. First-Time-Users' Impressions of Continuing Education Using the Internet

    ERIC Educational Resources Information Center

    Conte, Nelly

    2012-01-01

    Purpose: The paper's aim is to describe the first experiences of, opinions and attitudes toward, continuing education using the internet of a group of Puerto Rican pharmacists after an online course. Design/methodology/approach: This is a descriptive study using a focus group of practicing pharmacists who participated in continuing education using…

  19. A multi-scale modelling procedure to quantify hydrological impacts of upland land management

    NASA Astrophysics Data System (ADS)

    Wheater, H. S.; Jackson, B.; Bulygina, N.; Ballard, C.; McIntyre, N.; Marshall, M.; Frogbrook, Z.; Solloway, I.; Reynolds, B.

    2008-12-01

    Recent UK floods have focused attention on the effects of agricultural intensification on flood risk. However, quantification of these effects raises important methodological issues. Catchment-scale data have proved inadequate to support analysis of impacts of land management change, due to climate variability, uncertainty in input and output data, spatial heterogeneity in land use and lack of data to quantify historical changes in management practices. Manipulation experiments to quantify the impacts of land management change have necessarily been limited and small scale, and in the UK mainly focused on the lowlands and arable agriculture. There is a need to develop methods to extrapolate from small scale observations to predict catchment-scale response, and to quantify impacts for upland areas. With assistance from a cooperative of Welsh farmers, a multi-scale experimental programme has been established at Pontbren, in mid-Wales, an area of intensive sheep production. The data have been used to support development of a multi-scale modelling methodology to assess impacts of agricultural intensification and the potential for mitigation of flood risk through land use management. Data are available from replicated experimental plots under different land management treatments, from instrumented field and hillslope sites, including tree shelter belts, and from first and second order catchments. Measurements include climate variables, soil water states and hydraulic properties at multiple depths and locations, tree interception, overland flow and drainflow, groundwater levels, and streamflow from multiple locations. Fine resolution physics-based models have been developed to represent soil and runoff processes, conditioned using experimental data. The detailed models are used to calibrate simpler 'meta-models' to represent individual hydrological elements, which are then combined in a semi-distributed catchment-scale model. The methodology is illustrated using field and catchment-scale simulations to demonstrate the response of improved and unimproved grassland, and the potential effects of land management interventions, including farm ponds, tree shelter belts and buffer strips. It is concluded that the methodology developed has the potential to represent and quantify catchment-scale effects of upland management; continuing research is extending the work to a wider range of upland environments and land use types, with the aim of providing generic simulation tools that can be used to provide strategic policy guidance.

  20. Short-rotation forestry for energy production in Hawaii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, V.C.; Liu, W.; Merriam, R.A.

    1993-12-31

    In Hawaii, imports of fossil fuels continue to accelerate and now provide over 90% of the total energy supply at a cost exceeding $1 × 10^9 annually exported from the local economy. Concurrently, sugarcane and pineapple crops, the traditional mainstays of the state's economy, have declined such that as much as 80,000 hectares of agricultural land are now available for alternative land uses. The feasibility of short-rotation forestry for sustainable energy production on these former sugarcane and pineapple plantation lands is being evaluated using species- and site-specific empirical models to predict yields of Eucalyptus grandis, E. saligna, and Leucaena leucocephala, a system model to estimate delivered costs, and a geographic information system to extend the analysis to areas where no field trials exist and to present results in map form. The island of Hawaii is showcased as an application of the methodology. Modeling results of methanol, ethanol, and electricity production from tropical hardwoods are presented. Short-rotation forestry appears to hold promise for the greening of Hawaii's energy system and agricultural lands for the benefit of the state's citizens and visitors. The methodology is readily transferable to other regions of the United States and the rest of the world.

  1. Built To Last: Using Iterative Development Models for Sustainable Scientific Software Development

    NASA Astrophysics Data System (ADS)

    Jasiak, M. E.; Truslove, I.; Savoie, M.

    2013-12-01

    In scientific research, software development exists fundamentally for the results it creates. The core research must remain the focus. It seems natural to researchers, driven by grant deadlines, that every dollar invested in software development should be used to push the boundaries of problem solving. This system of values is frequently misaligned with creating software in a sustainable fashion; short-term optimizations create longer-term sustainability issues. The National Snow and Ice Data Center (NSIDC) has taken bold cultural steps in using agile and lean development and management methodologies to help its researchers meet critical deadlines, while building in the necessary support structure for the code to live far beyond its original milestones. Agile and lean software development methodologies, including Scrum, Kanban, Continuous Delivery and Test-Driven Development, have seen widespread adoption within NSIDC. This focus on development methods is combined with an emphasis on explaining to researchers why these methods produce more desirable results for everyone, as well as on promoting interaction between developers and researchers. This presentation will describe NSIDC's current scientific software development model, how this addresses the short-term versus sustainability dichotomy, the lessons learned and successes realized by transitioning to this agile and lean-influenced model, and the current challenges faced by the organization.

  2. Developing Risk Prediction Models for Postoperative Pancreatic Fistula: a Systematic Review of Methodology and Reporting Quality.

    PubMed

    Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao

    2016-04-01

    Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements in surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of the PubMed and EMBASE databases to identify articles published before January 1, 2015, which described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on the development of each prediction model, including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies, describing the development of seven risk prediction models, were included. In three studies (42 %), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71 %) reported using univariate screening, which is not recommended when building a multivariable model, to reduce the number of risk predictors. Six risk prediction models (86 %) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could compromise model development, including univariate pre-screening of variables, categorization of continuous risk predictors, and inadequate model validation. The use of inappropriate methods affects the reliability and the accuracy of the probability estimates of predicting postoperative pancreatic fistula.
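
    As a contrast to the practices criticized above, the hedged sketch below (entirely synthetic data; BMI, duct diameter and age are invented placeholder predictors) fits a penalized multivariable logistic model that keeps continuous predictors continuous, enters all candidates together instead of univariate pre-screening, and reports cross-validated discrimination.

      # Sketch only: synthetic cohort, not real POPF data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n = 800
      X = np.column_stack([
          rng.normal(25, 4, n),     # BMI, kept continuous rather than categorized
          rng.normal(3.0, 1.0, n),  # pancreatic duct diameter, mm
          rng.normal(60, 12, n),    # age, years
      ])
      logit = -4 + 0.08 * X[:, 0] - 0.6 * X[:, 1] + 0.01 * X[:, 2]
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      # Ridge penalization guards against overfitting when events per variable are few;
      # all candidate predictors enter the model together.
      model = make_pipeline(StandardScaler(), LogisticRegression(penalty="l2", C=1.0))
      auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
      print("cross-validated AUC:", round(auc, 3))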

  3. Project ECHO: A Telementoring Network Model for Continuing Professional Development.

    PubMed

    Arora, Sanjeev; Kalishman, Summers G; Thornton, Karla A; Komaromy, Miriam S; Katzman, Joanna G; Struminger, Bruce B; Rayburn, William F

    2017-01-01

    A major challenge with current systems of CME is the inability to translate the explosive growth in health care knowledge into daily practice. Project ECHO (Extension for Community Healthcare Outcomes) is a telementoring network designed for continuing professional development (CPD) and improving patient outcomes. The purpose of this article was to describe how the model has complied with recommendations from several authoritative reports about redesigning and enhancing CPD. This model links primary care clinicians through a knowledge network with an interprofessional team of specialists from an academic medical center who provide telementoring and ongoing education enabling community clinicians to treat patients with a variety of complex conditions. Knowledge and skills are shared during weekly condition-specific videoconferences. The model exemplifies learning as described in the seven levels of CPD by Moore (participation, satisfaction, learning, competence, performance, patient, and community health). The model is also aligned with recommendations from four national reports intended to redesign knowledge transfer in improving health care. Efforts in learning sessions focus on information that is relevant to practice, focus on evidence, education methodology, tailoring of recommendations to individual needs and community resources, and interprofessionalism. Project ECHO serves as a telementoring network model of CPD that aligns with current best practice recommendations for CME. This transformative initiative has the potential to serve as a leading model for larger scale CPD, nationally and globally, to enhance access to care, improve quality, and reduce cost.

  4. Helical structure of the cardiac ventricular anatomy assessed by diffusion tensor magnetic resonance imaging with multiresolution tractography.

    PubMed

    Poveda, Ferran; Gil, Debora; Martí, Enric; Andaluz, Albert; Ballester, Manel; Carreras, Francesc

    2013-10-01

    Deeper understanding of the myocardial structure linking the morphology and function of the heart would unravel crucial knowledge for medical and surgical clinical procedures and studies. Several conceptual models of myocardial fiber organization have been proposed, but the lack of an automatic and objective methodology has prevented agreement. We sought to deepen this knowledge through advanced computer graphical representations of the myocardial fiber architecture by diffusion tensor magnetic resonance imaging. We performed automatic tractography reconstruction of unsegmented diffusion tensor magnetic resonance imaging datasets of a canine heart from the public database of the Johns Hopkins University. Full-scale tractographies have been built with 200 seeds and are composed of streamlines computed on the vector field of primary eigenvectors of the diffusion tensor volumes. We also introduced a novel multiscale visualization technique in order to obtain a simplified tractography. This methodology retains the main geometric features of the fiber tracts, making it easier to decipher the main properties of the architectural organization of the heart. Output analysis of our tractographic representations showed exact correlation with low-level details of myocardial architecture, but also with the more abstract conceptualization of a continuous helical ventricular myocardial fiber array. Objective analysis of myocardial architecture by an automated method, including the entire myocardium and using several 3-dimensional levels of complexity, reveals a continuous helical myocardial fiber arrangement of both right and left ventricles, supporting the anatomical model of the helical ventricular myocardial band described by F. Torrent-Guasp. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier Espana. All rights reserved.

  5. A methodology for risk analysis based on hybrid Bayesian networks: application to the regasification system of liquefied natural gas onboard a floating storage and regasification unit.

    PubMed

    Martins, Marcelo Ramos; Schleder, Adriana Miralles; Droguett, Enrique López

    2014-12-01

    This article presents an iterative six-step risk analysis methodology based on hybrid Bayesian networks (BNs). In typical risk analysis, systems are usually modeled as discrete and Boolean variables with constant failure rates via fault trees. Nevertheless, in many cases, it is not possible to perform an efficient analysis using only discrete and Boolean variables. The approach put forward by the proposed methodology makes use of BNs and incorporates recent developments that facilitate the use of continuous variables whose values may have any probability distributions. Thus, this approach makes the methodology particularly useful in cases where the available data for quantification of hazardous events probabilities are scarce or nonexistent, there is dependence among events, or when nonbinary events are involved. The methodology is applied to the risk analysis of a regasification system of liquefied natural gas (LNG) on board an FSRU (floating, storage, and regasification unit). LNG is becoming an important energy source option and the world's capacity to produce LNG is surging. Large reserves of natural gas exist worldwide, particularly in areas where the resources exceed the demand. Thus, this natural gas is liquefied for shipping and the storage and regasification process usually occurs at onshore plants. However, a new option for LNG storage and regasification has been proposed: the FSRU. As very few FSRUs have been put into operation, relevant failure data on FSRU systems are scarce. The results show the usefulness of the proposed methodology for cases where the risk analysis must be performed under considerable uncertainty. © 2014 Society for Risk Analysis.
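
    The following toy sketch (invented distributions and probabilities, not the FSRU case study) illustrates the kind of hybrid structure the methodology supports: a continuous leak-size node feeding discrete ignition and escalation nodes, quantified here by simple Monte Carlo sampling rather than exact Bayesian-network inference.

      # Toy hybrid risk network; all numbers are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(1)
      N = 200_000

      leak_size = rng.lognormal(mean=-2.0, sigma=1.0, size=N)   # kg/s, continuous parent node
      p_ignition = np.clip(0.01 + 0.05 * leak_size, 0.0, 1.0)   # child probability depends on parent value
      ignition = rng.random(N) < p_ignition
      escalation = ignition & (rng.random(N) < np.where(leak_size > 0.5, 0.30, 0.05))

      print("P(ignition)              ~", ignition.mean())
      print("P(escalation)            ~", escalation.mean())
      print("P(escalation | ignition) ~", escalation[ignition].mean())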

  6. On the relationship between wave based control, absolute vibration suppression and input shaping

    NASA Astrophysics Data System (ADS)

    Peled, I.; O'Connor, W. J.; Halevi, Y.

    2013-08-01

    The modeling and control of continuous flexible structures is one of the most challenging problems in control theory. This topic has gained more interest with the development of slender space structures, lightweight aeronautical components, or even traditional gears and drive shafts with flexible properties. Several control schemes are based on the traveling wave approach, rather than the more common modal methods. In this work we investigate the relationships between two of these methods. The Absolute Vibration Suppression (AVS) controller, which was developed for infinite-dimensional systems, is compared to Wave Based Control (WBC), which was designed primarily for lumped systems. The WBC was first adjusted to continuous systems and then the two controllers, whose algorithms seem different, were compared. The investigation shows that for the flexible shaft these two control laws are actually the same. Furthermore, when converted into an equivalent open loop controller they appear as an extension to continuous systems of the Input Shaping (IS) methodology.
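
    For readers unfamiliar with the IS side of the comparison, here is a minimal zero-vibration (ZV) input-shaper sketch for a single lightly damped mode; the frequency and damping values are arbitrary, and this is the generic textbook recipe, not the AVS/WBC formulation of the paper.

      # ZV input shaper: two impulses spaced half a damped period apart.
      import numpy as np

      wn, zeta = 10.0, 0.02                        # assumed natural frequency (rad/s) and damping
      wd = wn * np.sqrt(1 - zeta**2)
      K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
      amps = np.array([1.0, K]) / (1 + K)          # impulse amplitudes
      times = np.array([0.0, np.pi / wd])          # second impulse half a damped period later

      dt = 1e-3
      t = np.arange(0.0, 2.0, dt)
      step = np.ones_like(t)                       # unshaped reference command
      shaper = np.zeros_like(t)
      shaper[(times / dt).astype(int)] = amps
      shaped = np.convolve(step, shaper)[: t.size] # shaped command suppresses residual vibration
      print(shaped[0], shaped[-1])                 # staircase from ~0.52 up to 1.0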

  7. How evidence-based workforce planning in Australia is informing policy development in the retention and distribution of the health workforce

    PubMed Central

    2014-01-01

    Background: Australia’s health workforce is facing significant challenges now and into the future. Health Workforce Australia (HWA) was established by the Council of Australian Governments as the national agency to progress health workforce reform to address the challenges of providing a skilled, innovative and flexible health workforce in Australia. HWA developed Australia’s first major, long-term national workforce projections for doctors, nurses and midwives over a planning horizon to 2025 (called Health Workforce 2025; HW 2025), which provided a national platform for developing policies to help ensure Australia’s health workforce meets the community’s needs. Methods: A review of existing workforce planning methodologies, in concert with the project brief and an examination of data availability, identified that the best fit-for-purpose workforce planning methodology was the stock and flow model for estimating workforce supply and the utilisation method for estimating workforce demand. Scenario modelling was conducted to explore the implications of possible alternative futures, and to demonstrate the sensitivity of the model to various input parameters. Extensive consultation was conducted to test the methodology, data and assumptions used, and also influenced the scenarios selected for modelling. Additionally, a number of other key principles were adopted in developing HW 2025 to ensure the workforce projections were robust and able to be applied nationally. Results: The findings from HW 2025 highlighted that a ‘business as usual’ approach to Australia’s health workforce is not sustainable over the next 10 years, with a need for co-ordinated, long-term reforms by government, professions and the higher education and training sector for a sustainable and affordable health workforce. The main policy levers identified to achieve change were innovation and reform, immigration, training capacity and efficiency and workforce distribution. Conclusion: While HW 2025 has provided a national platform for health workforce policy development, it is not a one-off project. It is an ongoing process where HWA will continue to develop and improve health workforce projections incorporating data and methodology improvements to support incremental health workforce changes. PMID:24490586

  8. How evidence-based workforce planning in Australia is informing policy development in the retention and distribution of the health workforce.

    PubMed

    Crettenden, Ian F; McCarty, Maureen V; Fenech, Bethany J; Heywood, Troy; Taitz, Michelle C; Tudman, Sam

    2014-02-03

    Australia's health workforce is facing significant challenges now and into the future. Health Workforce Australia (HWA) was established by the Council of Australian Governments as the national agency to progress health workforce reform to address the challenges of providing a skilled, innovative and flexible health workforce in Australia. HWA developed Australia's first major, long-term national workforce projections for doctors, nurses and midwives over a planning horizon to 2025 (called Health Workforce 2025; HW 2025), which provided a national platform for developing policies to help ensure Australia's health workforce meets the community's needs. A review of existing workforce planning methodologies, in concert with the project brief and an examination of data availability, identified that the best fit-for-purpose workforce planning methodology was the stock and flow model for estimating workforce supply and the utilisation method for estimating workforce demand. Scenario modelling was conducted to explore the implications of possible alternative futures, and to demonstrate the sensitivity of the model to various input parameters. Extensive consultation was conducted to test the methodology, data and assumptions used, and also influenced the scenarios selected for modelling. Additionally, a number of other key principles were adopted in developing HW 2025 to ensure the workforce projections were robust and able to be applied nationally. The findings from HW 2025 highlighted that a 'business as usual' approach to Australia's health workforce is not sustainable over the next 10 years, with a need for co-ordinated, long-term reforms by government, professions and the higher education and training sector for a sustainable and affordable health workforce. The main policy levers identified to achieve change were innovation and reform, immigration, training capacity and efficiency and workforce distribution. While HW 2025 has provided a national platform for health workforce policy development, it is not a one-off project. It is an ongoing process where HWA will continue to develop and improve health workforce projections incorporating data and methodology improvements to support incremental health workforce changes.
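
    A stock-and-flow supply projection of the kind described reduces to simple yearly bookkeeping. The sketch below uses invented round numbers (starting stock, graduates, migration, exit rate, demand growth) purely to show the mechanics; it is not HW 2025's calibration.

      # Stock-and-flow sketch: supply = stock + inflows - outflows, compared with
      # utilisation-driven demand. All inputs are hypothetical.
      def project(stock, graduates, migration, exit_rate, demand0, demand_growth, years=10):
          demand, rows = demand0, []
          for year in range(1, years + 1):
              stock = stock + graduates + migration - exit_rate * stock
              demand *= (1 + demand_growth)
              rows.append((year, round(stock), round(demand), round(stock - demand)))
          return rows

      for year, supply, demand, gap in project(
              stock=300_000, graduates=9_000, migration=4_000,
              exit_rate=0.035, demand0=300_000, demand_growth=0.025):
          print(f"year {year:2d}  supply {supply:7d}  demand {demand:7d}  gap {gap:+7d}")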

  9. Implementation of an active instructional design for teaching the concepts of current, voltage and resistance

    NASA Astrophysics Data System (ADS)

    Orlaineta-Agüero, S.; Del Sol-Fernández, S.; Sánchez-Guzmán, D.; García-Salcedo, R.

    2017-01-01

    In the present work we show the implementation of a learning sequence based on an active learning methodology for teaching Physics. The proposal aims to promote better learning in high school students through the use of a comic book, combined with different low-cost experimental activities for teaching the electrical concepts of current, resistance and voltage. We consider that this kind of strategy can be easily extrapolated to higher-education levels, such as engineering college/university courses, and to other disciplines of science. To evaluate the proposal, we used conceptual questions from the Electric Circuits Concept Evaluation survey developed by Sokoloff; the results were analysed with the normalized conceptual gain proposed by Hake and the concentration factor proposed by Bao and Redish, to identify the effectiveness of the methodology and the models that the students presented before and after the instruction, respectively. We found that this methodology was more effective than traditional lectures alone. These results cannot be generalized, but they gave us the opportunity to examine many important approaches in Physics Education; we will continue to apply the same experiment with more students, at the same and higher levels of education, to confirm and validate the effectiveness of this methodological proposal.
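
    The two statistics named above are short formulas, sketched below with made-up class data: Hake's normalized gain g = (post - pre) / (100 - pre) and the Bao-Redish concentration factor for a multiple-choice item; the scores and response counts are illustrative only.

      import math

      def normalized_gain(pre_pct, post_pct):
          """Hake's normalized gain, with class-average scores given in percent."""
          return (post_pct - pre_pct) / (100.0 - pre_pct)

      def concentration_factor(counts):
          """Bao & Redish concentration factor for one item.
          counts: number of students selecting each of the m choices."""
          m, N = len(counts), sum(counts)
          return (math.sqrt(m) / (math.sqrt(m) - 1)) * (
              math.sqrt(sum(c * c for c in counts)) / N - 1 / math.sqrt(m))

      print(normalized_gain(35.0, 62.0))            # e.g. class average rising from 35% to 62%
      print(concentration_factor([4, 25, 3, 2, 1])) # responses concentrated on one model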

  10. Algebra for Enterprise Ontology: towards analysis and synthesis of enterprise models

    NASA Astrophysics Data System (ADS)

    Suga, Tetsuya; Iijima, Junichi

    2018-03-01

    Enterprise modeling methodologies have made enterprises more likely to be the object of systems engineering rather than craftsmanship. However, the current state of research in enterprise modeling methodologies lacks investigations of the mathematical background embedded in these methodologies. Abstract algebra, a broad subfield of mathematics, and the study of algebraic structures may provide interesting implications in both theory and practice. Therefore, this research gives an empirical challenge to establish an algebraic structure for one aspect model proposed in Design & Engineering Methodology for Organizations (DEMO), which is a major enterprise modeling methodology in the spotlight as a modeling principle to capture the skeleton of enterprises for developing enterprise information systems. The results show that the aspect model behaves well in the sense of algebraic operations and indeed constructs a Boolean algebra. This article also discusses comparisons with other modeling languages and suggests future work.

  11. Methods for heat transfer and temperature field analysis of the insulated diesel phase 2 progress report

    NASA Technical Reports Server (NTRS)

    Morel, T.; Kerlbar, R.; Fort, E. F.; Blumberg, P. N.

    1985-01-01

    This report describes work done during Phase 2 of a 3-year program aimed at developing a comprehensive heat transfer and thermal analysis methodology for design analysis of insulated diesel engines. The overall program addresses all the key heat transfer issues: (1) spatially and time-resolved convective and radiative in-cylinder heat transfer, (2) steady-state conduction in the overall structure, and (3) cyclical and load/speed temperature transients in the engine structure. During Phase 2, a radiation heat transfer model was developed that accounts for soot formation and burn-up. A methodology was developed for carrying out the multi-dimensional finite-element heat conduction calculations within the framework of thermodynamic cycle codes. Studies were carried out using the integrated methodology to address key issues in low heat rejection engines. A wide-ranging design analysis matrix was covered, including a variety of insulation strategies, recovery devices and base engine configurations. A single cylinder Cummins engine was installed at Purdue University and brought to full operational status. The development of instrumentation was continued, concentrating on a radiation heat flux detector, a total heat flux probe, and accurate pressure-crank angle data acquisition.

  12. The influence of capture-recapture methodology on the evolution of the North American Bird Banding Program

    USGS Publications Warehouse

    Tautin, J.; Lebreton, J.-D.; North, P.M.

    1993-01-01

    Capture-recapture methodology has advanced greatly in the last twenty years and is now a major factor driving the continuing evolution of the North American bird banding program. Bird banding studies are becoming more scientific with improved study designs and analytical procedures. Researchers and managers are gaining more reliable knowledge which in turn betters the conservation of migratory birds. The advances in capture-recapture methodology have benefited gamebird studies primarily, but nongame bird studies will benefit similarly as they expand greatly in the next decade. Further theoretical development of capture-recapture methodology should be encouraged, and, to maximize benefits of the methodology, work on practical applications should be increased.
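
    As a reminder of the core idea behind the much richer modern models, the sketch below computes Chapman's version of the Lincoln-Petersen abundance estimate from a single banding and recapture occasion; the counts are hypothetical.

      def chapman_estimate(n1, n2, m2):
          """n1 birds banded, n2 captured later, m2 of those already banded."""
          return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

      print(chapman_estimate(n1=200, n2=150, m2=30))   # ~978 individuals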

  13. The US Navy’s Helicopter Integrated Diagnostics System (HIDS) Program: Power Drive Train Crack Detection Diagnostics and Prognostics Life Usage Monitoring and Damage Tolerance; Techniques, Methodologies, and Experiences

    DTIC Science & Technology

    2000-02-01

    The US Navy's Helicopter Integrated Diagnostics System (HIDS) Program: Power Drive Train Crack Detection Diagnostics and Prognostics, Life Usage Monitoring, and Damage Tolerance; Techniques, Methodologies, and Experiences. Andrew Hess, Harrison Chin, William Hardman. ...continuing program to evaluate helicopter diagnostic, prognostic, and ... deployed engine monitoring systems in fixed wing aircraft, notably on the A...

  14. Studies of regional-scale climate variability and change. Hidden Markov models and coupled ocean-atmosphere modes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghil, M.; Kravtsov, S.; Robertson, A. W.

    2008-10-14

    This project was a continuation of previous work under DOE CCPP funding, in which we had developed a twin approach of probabilistic network (PN) models (sometimes called dynamic Bayesian networks) and intermediate-complexity coupled ocean-atmosphere models (ICMs) to identify the predictable modes of climate variability and to investigate their impacts on the regional scale. We had developed a family of PNs (similar to Hidden Markov Models) to simulate historical records of daily rainfall, and used them to downscale GCM seasonal predictions. Using an idealized atmospheric model, we had established a novel mechanism through which ocean-induced sea-surface temperature (SST) anomalies might influence large-scale atmospheric circulation patterns on interannual and longer time scales; we had found similar patterns in a hybrid coupled ocean-atmosphere-sea-ice model. The goal of this continuation project was to build on these ICM results and PN model development to address prediction of rainfall and temperature statistics at the local scale, associated with global climate variability and change, and to investigate the impact of the latter on coupled ocean-atmosphere modes. Our main results from the grant consist of extensive further development of the hidden Markov models for rainfall simulation and downscaling together with the development of associated software; new intermediate coupled models; a new methodology of inverse modeling for linking ICMs with observations and GCM results; and observational studies of decadal and multi-decadal natural climate variability, informed by ICM results.

  15. Satellite-based terrestrial production efficiency modeling

    PubMed Central

    McCallum, Ian; Wagner, Wolfgang; Schmullius, Christiane; Shvidenko, Anatoly; Obersteiner, Michael; Fritz, Steffen; Nilsson, Sten

    2009-01-01

    Production efficiency models (PEMs) are based on the theory of light use efficiency (LUE) which states that a relatively constant relationship exists between photosynthetic carbon uptake and radiation receipt at the canopy level. Challenges remain however in the application of the PEM methodology to global net primary productivity (NPP) monitoring. The objectives of this review are as follows: 1) to describe the general functioning of six PEMs (CASA; GLO-PEM; TURC; C-Fix; MOD17; and BEAMS) identified in the literature; 2) to review each model to determine potential improvements to the general PEM methodology; 3) to review the related literature on satellite-based gross primary productivity (GPP) and NPP modeling for additional possibilities for improvement; and 4) based on this review, propose items for coordinated research. This review noted a number of possibilities for improvement to the general PEM architecture - ranging from LUE to meteorological and satellite-based inputs. Current PEMs tend to treat the globe similarly in terms of physiological and meteorological factors, often ignoring unique regional aspects. Each of the existing PEMs has developed unique methods to estimate NPP and the combination of the most successful of these could lead to improvements. It may be beneficial to develop regional PEMs that can be combined under a global framework. The results of this review suggest the creation of a hybrid PEM could bring about a significant enhancement to the PEM methodology and thus terrestrial carbon flux modeling. Key items topping the PEM research agenda identified in this review include the following: LUE should not be assumed constant, but should vary by plant functional type (PFT) or photosynthetic pathway; evidence is mounting that PEMs should consider incorporating diffuse radiation; continue to pursue relationships between satellite-derived variables and LUE, GPP and autotrophic respiration (Ra); there is an urgent need for satellite-based biomass measurements to improve Ra estimation; and satellite-based soil moisture data could improve determination of soil water stress. PMID:19765285
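
    The shared core of the PEMs reviewed is light-use-efficiency bookkeeping. The sketch below shows a generic MOD17-style form with placeholder coefficients; it is not the calibration of any of the six models, and the crude Ra = 0.5 x GPP step is only an assumption for illustration.

      def gpp(par, fapar, lue_max=1.8, t_scalar=1.0, w_scalar=1.0):
          """Gross primary productivity, gC m-2 d-1.
          par: incident photosynthetically active radiation, MJ m-2 d-1
          fapar: fraction of PAR absorbed by the canopy (0-1)
          lue_max: maximum light-use efficiency, gC per MJ APAR
          t_scalar, w_scalar: 0-1 down-regulation for temperature and moisture stress."""
          return lue_max * t_scalar * w_scalar * fapar * par

      # Crude autotrophic-respiration assumption purely for the example.
      npp = 0.5 * gpp(par=9.0, fapar=0.7, t_scalar=0.9, w_scalar=0.8)
      print(npp)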

  16. Semi-Supervised Learning of Lift Optimization of Multi-Element Three-Segment Variable Camber Airfoil

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Nguyen, Nhan T.

    2017-01-01

    This chapter describes a new intelligent platform for learning optimal designs of morphing wings based on Variable Camber Continuous Trailing Edge Flaps (VCCTEF) in conjunction with a leading edge flap called the Variable Camber Krueger (VCK). The new platform consists of a Computational Fluid Dynamics (CFD) methodology coupled with a semi-supervised learning methodology. The CFD component of the intelligent platform comprises a full Navier-Stokes solution capability (NASA OVERFLOW solver with Spalart-Allmaras turbulence model) that computes flow over a tri-element inboard NASA Generic Transport Model (GTM) wing section. Various VCCTEF/VCK settings and configurations were considered to explore optimal design for high-lift flight during take-off and landing. To determine the globally optimal design of such a system, an extremely large set of CFD simulations is needed. This is not feasible to achieve in practice. To alleviate this problem, recourse was taken to a semi-supervised learning (SSL) methodology, which is based on manifold regularization techniques. A reasonable space of CFD solutions was populated and then the SSL methodology was used to fit this manifold in its entirety, including the gaps in the manifold where there were no CFD solutions available. The SSL methodology in conjunction with an elastodynamic solver (FiDDLE) was demonstrated in an earlier study involving structural health monitoring. These CFD-SSL methodologies define the new intelligent platform that forms the basis for our search for optimal design of wings. Although the present platform can be used in various other design and operational problems in engineering, this chapter focuses on the high-lift study of the VCK-VCCTEF system. The top few candidate design configurations were identified by solving the CFD problem in a small subset of the design space. The SSL component was trained on the design space, and was then used in a predictive mode to populate a selected set of test points outside of the given design space. The new design test space thus populated was evaluated by using the CFD component by determining the error between the SSL predictions and the true (CFD) solutions, which was found to be small. This demonstrates the proposed CFD-SSL methodologies for isolating the best design of the VCK-VCCTEF system, and it holds promise for quantitatively identifying best designs of flight systems, in general.

  17. Roughness Based Crossflow Transition Control for a Swept Airfoil Design Relevant to Subsonic Transports

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Carpenter, Mark H.; Malik, Mujeeb R.; Eppink, Jenna; Chang, Chau-Lyan; Streett, Craig L.

    2010-01-01

    A high fidelity transition prediction methodology has been applied to a swept airfoil design at a Mach number of 0.75 and chord Reynolds number of approximately 17 million, with the dual goal of an assessment of the design for the implementation and testing of roughness based crossflow transition control and continued maturation of such methodology in the context of realistic aerodynamic configurations. Roughness based transition control involves controlled seeding of suitable, subdominant crossflow modes in order to weaken the growth of naturally occurring, linearly more unstable instability modes via a nonlinear modification of the mean boundary layer profiles. Therefore, a synthesis of receptivity, linear and nonlinear growth of crossflow disturbances, and high-frequency secondary instabilities becomes desirable to model this form of control. Because experimental data is currently unavailable for passive crossflow transition control for such high Reynolds number configurations, a holistic computational approach is used to assess the feasibility of roughness based control methodology. Potential challenges inherent to this control application as well as associated difficulties in modeling this form of control in a computational setting are highlighted. At high Reynolds numbers, a broad spectrum of stationary crossflow disturbances amplify and, while it may be possible to control a specific target mode using Discrete Roughness Elements (DREs), nonlinear interaction between the control and target modes may yield strong amplification of the difference mode that could have an adverse impact on the transition delay using spanwise periodic roughness elements.

  18. Implementing business continuity management systems and sharing best practices at a European bank.

    PubMed

    Aronis, Stelios; Stratopoulos, Georgios

    2016-01-01

    This paper provides an overview of the methodology applied by the Alpha Bank Group in order to implement a business continuity management (BCM) programme to its parent company (Alpha Bank SA), as well as to its subsidiaries in Albania, Bulgaria, Cyprus, Former Yugoslav Republic of Macedonia, Greece, Romania, Serbia, UK and Ukraine. It also reviews the problems faced, how they were overcome and the lessons learned. When implementing a BCM programme in a large organisation, it is very important to follow the methodology described by BCM standard ISO 22301, otherwise the business continuity plan is unlikely to work efficiently or comply with the business recovery requirements, as well as with the requirements of other interested parties, such as customers, regulatory authorities, vendors, service providers, critical associates, etc.

  19. Optimization of pyDock for the new CAPRI challenges: Docking of homology-based models, domain-domain assembly and protein-RNA binding.

    PubMed

    Pons, Carles; Solernou, Albert; Perez-Cano, Laura; Grosdidier, Solène; Fernandez-Recio, Juan

    2010-11-15

    We describe here our results in the last CAPRI edition. We have participated in all targets, both as predictors and as scorers, using our pyDock docking methodology. The new challenges (homology-based modeling of the interacting subunits, domain-domain assembling, and protein-RNA interactions) have pushed our computer tools to the limits and have encouraged us to devise new docking approaches. Overall, the results have been quite successful, in line with previous editions, especially considering the high difficulty of some of the targets. Our docking approaches succeeded in five targets as predictors or as scorers (T29, T34, T35, T41, and T42). Moreover, with the inclusion of available information on the residues expected to be involved in the interaction, our protocol would have also succeeded in two additional cases (T32 and T40). In the remaining targets (except T37), results were equally poor for most of the groups. We submitted the best model (in ligand RMSD) among scorers for the unbound-bound target T29, the second best model among scorers for the protein-RNA target T34, and the only correct model among predictors for the domain assembly target T35. In summary, our excellent results for the new proposed challenges in this CAPRI edition showed the limitations and applicability of our approaches and encouraged us to continue developing methodologies for automated biomolecular docking. © 2010 Wiley-Liss, Inc.

  20. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    NASA Astrophysics Data System (ADS)

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata

    2016-09-01

    Gaining an understanding of degradation mechanisms and their characterization are critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on design of accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix a modeling scheme is developed to predict field performance from results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data becomes available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
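
    Acceleration-factor models of the general kind referred to above are often Arrhenius-based. The sketch below is a generic temperature-only example with an assumed activation energy, not the validated SMART models of the paper, and it simply maps damp-heat test hours to notional field hours.

      import math

      K_BOLTZ = 8.617e-5   # Boltzmann constant, eV/K

      def arrhenius_af(ea_ev, t_use_c, t_test_c):
          """Acceleration factor AF = exp(Ea/k * (1/T_use - 1/T_test)), temperatures in C."""
          t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
          return math.exp(ea_ev / K_BOLTZ * (1 / t_use - 1 / t_test))

      af = arrhenius_af(ea_ev=0.7, t_use_c=45.0, t_test_c=85.0)   # assumed activation energy
      print(f"acceleration factor ~{af:.0f}; 1000 test hours ~ {1000 * af:.0f} field hours")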

  1. Rational analyses of information foraging on the web.

    PubMed

    Pirolli, Peter

    2005-05-06

    This article describes rational analyses and cognitive models of Web users developed within information foraging theory. This is done by following the rational analysis methodology of (a) characterizing the problems posed by the environment, (b) developing rational analyses of behavioral solutions to those problems, and (c) developing cognitive models that approach the realization of those solutions. Navigation choice is modeled as a random utility model that uses spreading activation mechanisms that link proximal cues (information scent) that occur in Web browsers to internal user goals. Web-site leaving is modeled as an ongoing assessment by the Web user of the expected benefits of continuing at a Web site as opposed to going elsewhere. These cost-benefit assessments are also based on spreading activation models of information scent. Evaluations include a computational model of Web user behavior called Scent-Based Navigation and Information Foraging in the ACT Architecture, and the Law of Surfing, which characterizes the empirical distribution of the length of paths of visitors at a Web site. 2005 Lawrence Erlbaum Associates, Inc.
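
    A random utility choice over information scent can be sketched as a softmax across link cues. The scent scores and link labels below are invented; the snippet only illustrates the functional form, not the spreading-activation machinery of the SNIF-ACT model itself.

      import math

      def choice_probabilities(scent, temperature=1.0):
          """Softmax (logit) choice rule over information-scent utilities."""
          exps = [math.exp(s / temperature) for s in scent]
          z = sum(exps)
          return [e / z for e in exps]

      links = ["pricing", "docs", "careers", "blog"]
      scent = [2.3, 1.1, 0.2, 0.7]          # assumed activation from the user's goal to each link cue
      for link, p in zip(links, choice_probabilities(scent)):
          print(f"{link:8s} P(click) = {p:.2f}")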

  2. Change Detection Analysis of Water Pollution in Coimbatore Region using Different Color Models

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Devi, R. Naveena

    2017-12-01

    The data acquired through remote sensing satellites furnish facts about land and water at varying resolutions and have been widely used for change detection studies. Although many change detection methodologies and techniques already exist, new ones continue to emerge. Existing change detection techniques exploit images that are either in gray scale or in the RGB color model. In this paper we introduce color models for performing change detection for water pollution. Polluted lakes are classified, post-classification change detection techniques are applied to the RGB images, and the results are analysed to determine whether changes have occurred. Furthermore, the classified RGB images, when converted to either of the two color models YCbCr and YIQ, are found to produce the same results as the RGB images. Thus it can be concluded that other color models such as YCbCr and YIQ can be used as substitutes for the RGB color model when analysing change detection with regard to water pollution.
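
    The RGB-to-YCbCr step is a fixed linear transform. The sketch below uses the standard full-range BT.601 coefficients on a single made-up pixel; the study's classification and change-detection steps are not reproduced.

      def rgb_to_ycbcr(r, g, b):
          """Full-range 8-bit BT.601 RGB -> YCbCr conversion."""
          y  =  0.299    * r + 0.587    * g + 0.114    * b
          cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
          cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
          return y, cb, cr

      print(rgb_to_ycbcr(30, 120, 200))   # e.g. a bluish "water" pixel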

  3. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    NASA Astrophysics Data System (ADS)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.

  4. Continuing Vocational Training in Belgian Companies: An Upward Tendency

    ERIC Educational Resources Information Center

    Buyens, Dirk; Wouters, Karen

    2005-01-01

    Purpose: As part of the European continuing vocational training survey, this paper aims to give an overview of the evolutions in continuing vocational training (CVT) in Belgian companies, by comparing both the results of the survey of 1994 and those of 2000/2001. Design/methodology/approach: In Belgium 1,129 companies took part in the survey of…

  5. Assessment of continuous oil and gas resources of the Cooper Basin, Australia, 2016

    USGS Publications Warehouse

    Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Klett, Timothy R.; Finn, Thomas M.; Le, Phuong A.; Brownfield, Michael E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.

    2016-07-15

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean continuous resources of 482 million barrels of oil and 29.8 trillion cubic feet of gas in the Cooper Basin of Australia.

  6. Assessment of continuous gas resources in the Khorat Plateau Province, Thailand and Laos, 2016

    USGS Publications Warehouse

    Schenk, Christopher J.; Klett, Timothy R.; Mercier, Tracey J.; Finn, Thomas M.; Tennyson, Marilyn E.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Drake, Ronald M.

    2017-05-25

    Using a geology-based assessment methodology, the U.S. Geological Survey assessed mean undiscovered, technically recoverable resources of 2.3 trillion cubic feet of continuous gas in the Khorat Plateau Province of Thailand and Laos.

  7. 77 FR 1434 - Proposed Confidentiality Determinations for Data Elements Under the Mandatory Reporting of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-10

    ... EPA-HQ-OAR-2011-0028, by one of the following methods: Federal eRulemaking Portal: http://www... document. BAMM, Best Available Monitoring Methods; CAA, Clean Air Act; CEMS, continuous emission monitoring... Methodology and Methodological Tier... Data Elements Reported for Periods of Missing Data that are Not...

  8. Methodological Choices in Peer Nomination Research

    ERIC Educational Resources Information Center

    Cillessen, Antonius H. N.; Marks, Peter E. L.

    2017-01-01

    Although peer nomination measures have been used by researchers for nearly a century, common methodological practices and rules of thumb (e.g., which variables to measure; use of limited vs. unlimited nomination methods) have continued to develop in recent decades. At the same time, other key aspects of the basic nomination procedure (e.g.,…

  9. ALLOCATING ENVIRONMENTAL BURDENS ACROSS CO-PRODUCTS TO CREATE A LIFE CYCLE INVENTORY: IS THERE A BEST WAY?

    EPA Science Inventory

    Allocation methodology for creating life cycle inventories is frequently addressed, discussed and debated, yet the methodology continues to be in a state of flux. ISO 14041 puts perspective on the issues but its one-size-fits-all framework is being challenged. It is clear that ...

  10. Design Research with a Focus on Learning Processes: An Overview on Achievements and Challenges

    ERIC Educational Resources Information Center

    Prediger, Susanne; Gravemeijer, Koeno; Confrey, Jere

    2015-01-01

    Design research continues to gain prominence as a significant methodology in the mathematics education research community. This overview summarizes the origins and the current state of design research practices focusing on methodological requirements and processes of theorizing. While recognizing the rich variations in the foci and scale of design…

  11. Using a Principle-Based Method to Support a Disability Aesthetic

    ERIC Educational Resources Information Center

    Anderson, Bailey

    2015-01-01

    This article calls choreographers and educators alike to continue building an awareness of methodologies that support a disability aesthetic. A disability aesthetic supports the embodiment of dancers with disabilities by allowing for their bodies to set guidelines of beauty and value. Principle-based work is a methodology that supports a…

  12. 40 CFR 132.4 - State adoption and application of methodologies, policies and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (2) The chronic water quality criteria and values for the protection of aquatic life, or site... AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY GUIDANCE FOR THE GREAT LAKES SYSTEM § 132.4 State...) The Methodologies for Development of Aquatic Life Criteria and Values in appendix A of this part; (3...

  13. 40 CFR 132.4 - State adoption and application of methodologies, policies and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (2) The chronic water quality criteria and values for the protection of aquatic life, or site... AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY GUIDANCE FOR THE GREAT LAKES SYSTEM § 132.4 State...) The Methodologies for Development of Aquatic Life Criteria and Values in appendix A of this part; (3...

  14. 40 CFR 132.4 - State adoption and application of methodologies, policies and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (2) The chronic water quality criteria and values for the protection of aquatic life, or site... AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY GUIDANCE FOR THE GREAT LAKES SYSTEM § 132.4 State...) The Methodologies for Development of Aquatic Life Criteria and Values in appendix A of this part; (3...

  15. 40 CFR 132.4 - State adoption and application of methodologies, policies and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (2) The chronic water quality criteria and values for the protection of aquatic life, or site... AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY GUIDANCE FOR THE GREAT LAKES SYSTEM § 132.4 State...) The Methodologies for Development of Aquatic Life Criteria and Values in appendix A of this part; (3...

  16. 40 CFR 132.4 - State adoption and application of methodologies, policies and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (2) The chronic water quality criteria and values for the protection of aquatic life, or site... AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY GUIDANCE FOR THE GREAT LAKES SYSTEM § 132.4 State...) The Methodologies for Development of Aquatic Life Criteria and Values in appendix A of this part; (3...

  17. Machine learning in sentiment reconstruction of the simulated stock market

    NASA Astrophysics Data System (ADS)

    Goykhman, Mikhail; Teimouri, Ali

    2018-02-01

    In this paper we continue the study of the simulated stock market framework defined by the driving sentiment processes. We focus on the market environment driven by the buy/sell trading sentiment process of the Markov chain type. We apply the methodology of the Hidden Markov Models and the Recurrent Neural Networks to reconstruct the transition probabilities matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. We demonstrate that the Hidden Markov Model can successfully recover the transition probabilities matrix for the hidden sentiment process of the Markov Chain type. We also demonstrate that the Recurrent Neural Network can successfully recover the hidden sentiment states from the observed simulated stock price time series.
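
    To show the flavour of the hidden-state recovery, the sketch below runs a hand-written Viterbi decoder over a toy two-state sentiment chain with assumed transition and emission matrices and a made-up sequence of discretized price moves; it is not the paper's HMM fit or its recurrent network.

      import numpy as np

      A = np.array([[0.9, 0.1],          # sentiment transition matrix (bull, bear)
                    [0.2, 0.8]])
      B = np.array([[0.6, 0.3, 0.1],     # P(observation | state) for moves: up, flat, down
                    [0.1, 0.3, 0.6]])
      pi = np.array([0.5, 0.5])
      obs = [0, 0, 1, 2, 2, 2, 1, 0]     # observed price-move symbols

      logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
      delta = logpi + logB[:, obs[0]]
      psi = []
      for o in obs[1:]:
          trans = delta[:, None] + logA          # score of every previous-state -> state step
          psi.append(trans.argmax(axis=0))
          delta = trans.max(axis=0) + logB[:, o]

      path = [int(delta.argmax())]
      for back in reversed(psi):                  # backtrack the most likely state sequence
          path.append(int(back[path[-1]]))
      print("decoded sentiment states:", list(reversed(path)))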

  18. Proceedings of the Seventh International Symposium on Methodologies for Intelligent Systems (Poster Session)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.

    1993-05-01

    This report contains the following papers: Implications in vivid logic; a self-learning Bayesian expert system; a natural language generation system for a heterogeneous distributed database system; 'competence-switching' managed by intelligent systems; strategy acquisition by an artificial neural network: Experiments in learning to play a stochastic game; viewpoints and selective inheritance in object-oriented modeling; multivariate discretization of continuous attributes for machine learning; utilization of the case-based reasoning method to resolve dynamic problems; formalization of an ontology of ceramic science in CLASSIC; linguistic tools for intelligent systems; an application of rough sets in knowledge synthesis; and a relational model for imprecise queries. These papers have been indexed separately.

  19. Proceedings of the Seventh International Symposium on Methodologies for Intelligent Systems (Poster Session)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harber, K.S.

    1993-05-01

    This report contains the following papers: Implications in vivid logic; a self-learning Bayesian expert system; a natural language generation system for a heterogeneous distributed database system; 'competence-switching' managed by intelligent systems; strategy acquisition by an artificial neural network: Experiments in learning to play a stochastic game; viewpoints and selective inheritance in object-oriented modeling; multivariate discretization of continuous attributes for machine learning; utilization of the case-based reasoning method to resolve dynamic problems; formalization of an ontology of ceramic science in CLASSIC; linguistic tools for intelligent systems; an application of rough sets in knowledge synthesis; and a relational model for imprecise queries. These papers have been indexed separately.

  20. Simplifying the complexity of resistance heterogeneity in metastasis

    PubMed Central

    Lavi, Orit; Greene, James M.; Levy, Doron; Gottesman, Michael M.

    2014-01-01

    The main goal of treatment regimens for metastasis is to control growth rates, not eradicate all cancer cells. Mathematical models offer methodologies that incorporate high-throughput data with dynamic effects on net growth. The ideal approach would simplify, but not over-simplify, a complex problem into meaningful and manageable estimators that predict a patient’s response to specific treatments. Here, we explore three fundamental approaches with different assumptions concerning resistance mechanisms, in which the cells are categorized into either discrete compartments or described by a continuous range of resistance levels. We argue in favor of modeling resistance as a continuum and demonstrate how integrating cellular growth rates, density-dependent versus exponential growth, and intratumoral heterogeneity improves predictions concerning the resistance heterogeneity of metastases. PMID:24491979
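
    A minimal way to treat resistance as a continuum is to let cells carry a trait x in [0, 1] and integrate a density-dependent growth/death balance over that trait. All rates in the sketch below are invented, and no mutation or treatment scheduling is included; it only illustrates the modeling choice argued for above.

      import numpy as np

      x = np.linspace(0.0, 1.0, 101)             # resistance trait, 0 = fully sensitive
      n = np.ones_like(x)
      n /= n.sum()                               # initial population spread across traits
      dt, K, drug = 0.01, 10.0, 1.0

      for _ in range(2000):
          total = n.sum()
          growth = 0.5 * (1 - 0.3 * x) * n * (1 - total / K)   # small fitness cost of resistance
          death = drug * (1 - x) * n                           # sensitive cells die fastest
          n = np.maximum(n + dt * (growth - death), 0.0)

      print("total burden:", round(float(n.sum()), 3),
            " mean resistance:", round(float((x * n).sum() / n.sum()), 3))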

  1. Continuing harmonization of terminology and innovations for methodologies in developmental toxicology: Report of the 8th Berlin Workshop on Developmental Toxicity, 14-16 May 2014.

    PubMed

    Solecki, Roland; Rauch, Martina; Gall, Andrea; Buschmann, Jochen; Clark, Ruth; Fuchs, Antje; Kan, Haidong; Heinrich, Verena; Kellner, Rupert; Knudsen, Thomas B; Li, Weihua; Makris, Susan L; Ooshima, Yojiro; Paumgartten, Francisco; Piersma, Aldert H; Schönfelder, Gilbert; Oelgeschläger, Michael; Schaefer, Christof; Shiota, Kohei; Ulbrich, Beate; Ding, Xuncheng; Chahoud, Ibrahim

    2015-11-01

    This article is a report of the 8th Berlin Workshop on Developmental Toxicity held in May 2014. The main aim of the workshop was the continuing harmonization of terminology and innovations for methodologies used in the assessment of embryo- and fetotoxic findings. The following main topics were discussed: harmonized categorization of external, skeletal, visceral and materno-fetal findings into malformations, variations and grey zone anomalies, aspects of developmental anomalies in humans and laboratory animals, and innovations for new methodologies in developmental toxicology. The application of Version 2 terminology in the DevTox database was considered as a useful improvement in the categorization of developmental anomalies. Participants concluded that initiation of a project for comparative assessments of developmental anomalies in humans and laboratory animals could support regulatory risk assessment and university-based training. Improvement of new methodological approaches for alternatives to animal testing should be triggered for a better understanding of developmental outcomes. Copyright © 2015. Published by Elsevier Inc.

  2. Stability of Retained Austenite in High-Al, Low-Si TRIP-Assisted Steels Processed via Continuous Galvanizing Heat Treatments

    NASA Astrophysics Data System (ADS)

    McDermid, J. R.; Zurob, H. S.; Bian, Y.

    2011-12-01

    Two galvanizable high-Al, low-Si transformation-induced plasticity (TRIP)-assisted steels were subjected to isothermal bainitic transformation (IBT) temperatures compatible with the continuous galvanizing (CGL) process, and the kinetics of the retained austenite (RA) to martensite transformation during room temperature deformation were studied as a function of heat treatment parameters. It was determined that there was a direct relationship between the rate of strain-induced transformation and optimal mechanical properties, with more gradual transformation rates being favored. The RA to martensite transformation kinetics were successfully modeled using two methodologies: (1) the strain-based model of Olson and Cohen and (2) a simple relationship with the normalized flow stress, (σ_flow - σ_YS)/σ_YS. For the strain-based model, it was determined that the model parameters were a strong function of strain and alloy thermal processing history and a weak function of alloy chemistry. It was verified that the strain-based model in the present work agrees well with those derived by previous workers using TRIP-assisted steels of similar composition. It was further determined that the RA to martensite transformation kinetics for all alloys and heat treatments could be described using a simple model vs the normalized flow stress, indicating that the RA to martensite transformation is stress-induced rather than strain-induced for temperatures above M_s^σ.
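
    For reference, the Olson-Cohen form used in the first methodology is compact enough to evaluate directly. The constants in the sketch below are generic illustrative values, not the parameters regressed in this study.

      import numpy as np

      def olson_cohen(strain, alpha=6.0, beta=2.5, n=2.0):
          """Strain-induced martensite fraction: f = 1 - exp(-beta*(1 - exp(-alpha*eps))**n)."""
          return 1.0 - np.exp(-beta * (1.0 - np.exp(-alpha * strain)) ** n)

      for eps in (0.02, 0.05, 0.10, 0.20):
          print(f"true strain {eps:.2f}:  transformed fraction {olson_cohen(eps):.2f}")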

  3. Determining the ventilation and aerosol deposition rates from routine indoor-air measurements.

    PubMed

    Halios, Christos H; Helmis, Costas G; Deligianni, Katerina; Vratolis, Sterios; Eleftheriadis, Konstantinos

    2014-01-01

    Measurement of the air exchange rate provides critical information in energy and indoor-air quality studies. Continuous measurement of ventilation rates is a rather costly exercise and requires specific instrumentation. In this work, an alternative methodology is proposed and tested, where the air exchange rate is calculated by utilizing routine indoor and outdoor measurements of a common pollutant such as SO2, while the uncertainties induced in the calculations are analytically determined. The application of this methodology is demonstrated for three residential microenvironments in Athens, Greece, and the results are also compared against ventilation rates calculated from differential pressure measurements. The calculated time-resolved ventilation rates were applied to the mass balance equation to estimate the particle loss rate, which was found to agree with literature values at an average of 0.50 h(-1). The proposed method was further evaluated by applying a mass balance numerical model for the calculation of the indoor aerosol number concentrations, using the previously calculated ventilation rate, the outdoor measured number concentrations and the particle loss rates as input values. The modelled indoor concentrations were found to compare well with the experimentally measured values.
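
    The single-zone mass balance behind the final validation step can be sketched as dC_in/dt = lam*P*C_out - (lam + k)*C_in. The air exchange rate lam, loss rate k and penetration factor P below are assumptions, and the outdoor series is synthetic.

      import numpy as np

      lam, k, P = 0.8, 0.5, 0.9          # h^-1, h^-1, dimensionless penetration (assumed)
      dt, hours = 0.01, 24
      t = np.arange(0, hours, dt)
      c_out = 8000 + 3000 * np.sin(2 * np.pi * t / 24)   # particles/cm3, synthetic outdoor series

      c_in = np.zeros_like(t)
      c_in[0] = c_out[0] * lam * P / (lam + k)           # start at steady state
      for i in range(1, t.size):                          # forward Euler integration
          c_in[i] = c_in[i - 1] + dt * (lam * P * c_out[i - 1] - (lam + k) * c_in[i - 1])

      print("mean indoor/outdoor ratio:", round(float((c_in / c_out).mean()), 3))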

  4. Low energy electron-molecule scattering using the R-matrix method

    NASA Astrophysics Data System (ADS)

    Gorfinkiel, Jimena

    2014-10-01

    The study of electron-molecule collisions continues to attract significant interest stimulated, in no small part, by the need for collisional data to model a number of physical environments and applied processes (e.g. the modelling of focused electron beam induced deposition and the description of the interaction of radiation with biological matter). This need for electron scattering data (cross sections but also information on the temporary negative ions, TNI, that can be formed) has motivated the renewed development of theoretical methodologies and their computational implementation. I will present the latest developments in the study of low energy electron scattering from molecules and molecular clusters using the R-matrix method. Recent calculations on electron collisions with biologically relevant molecules have shed light on the formation of core-excited TNI in these larger targets. The picture that emerges is much more complex than previously thought. I will discuss some examples as well as current and future developments of the methodology and software in order to provide more accurate collisional data (in particular cross sections) for bigger targets. In collaboration with Zdenek Masin, The Open University. This work was partially supported by EPSRC.

  5. Computational Electrocardiography: Revisiting Holter ECG Monitoring.

    PubMed

    Deserno, Thomas M; Marx, Nikolaus

    2016-08-05

    Since 1942, when Goldberger introduced 12-lead electrocardiography (ECG), this diagnostic method has not changed. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally and instead require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transfer of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and alarming (understanding). To make this happen, the following fields need more research: i) signal processing, ii) cycle decomposition, iii) cycle normalization, iv) cycle modeling, v) clinical parameter computation, vi) physiological modeling, and vii) event prediction. We shall start immediately developing methodology for CECG analysis and understanding.
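
    Of the research fields listed, cycle decomposition is the easiest to make concrete. The sketch below detects R-peak-like maxima in a synthetic single-lead trace and derives beat-to-beat heart rate; a real CECG pipeline would add the filtering, normalization, modelling and prediction stages named above, and the signal here is only a crude stand-in for an ECG.

      import numpy as np
      from scipy.signal import find_peaks

      fs = 250                                   # sampling rate, Hz
      t = np.arange(0, 10, 1 / fs)
      rr = 0.8                                   # simulated beat interval, s
      ecg = sum(np.exp(-((t - k * rr) ** 2) / (2 * 0.01 ** 2)) for k in range(1, 13))
      ecg += 0.05 * np.random.default_rng(3).normal(size=t.size)

      # Cycle decomposition: one peak per beat, at least 0.4 s apart.
      peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
      hr = 60 / np.diff(peaks / fs)
      print("beats found:", len(peaks), " mean heart rate:", round(float(hr.mean()), 1), "bpm")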

  6. [Education at a distance as a methodological option for the development of continuing education processes for human resources in health].

    PubMed

    Müller, A K

    1987-01-01

    It is becoming increasingly essential to put into effect the processes of continuing education that will facilitate continuing access to education, adapt to political changes, technological advances and current situations, and reach all the population to be trained. In so doing, institutions must implement new educational methodologies that reduce costs and extend coverage. One of these methodologies is education at a distance, which is examined in this article. This type of approach uses methods and techniques for individual and team work (studies based on written and audiovisual materials which, along with back-up tutoring and practical equipment and instruments, make up an instructional package) and in-person activities under direction and supervision; at the same time, it offers the possibility of a study schedule that complements the work day. It also facilitates the continuing education of in-service personnel and encourages them to assume greater responsibility for their own instruction with a view to their own overall development and to the attainment of excellence in the performance of services. Education at a distance is an option for the training of manpower committed to the performance of health services, and it must be introduced into current educational programs slowly and by degrees. The Health Training Program for Central America and Panama (PASCAP) is preparing a methodological guide for the design of systems of education at a distance as a frame of reference that must be adapted to the specific characteristics and needs of each country and institution. Its stages are, broadly, a conceptual framework and academic planning, academic production, teaching-learning, and evaluation.

  7. A structure-preserving method for a class of nonlinear dissipative wave equations with Riesz space-fractional derivatives

    NASA Astrophysics Data System (ADS)

    Macías-Díaz, J. E.

    2017-12-01

    In this manuscript, we consider an initial-boundary-value problem governed by a (1 + 1)-dimensional hyperbolic partial differential equation with constant damping that generalizes many nonlinear wave equations from mathematical physics. The model considers the presence of a spatial Laplacian of fractional order which is defined in terms of Riesz fractional derivatives, as well as the inclusion of a generic continuously differentiable potential. It is known that the undamped regime has an associated positive energy functional, and we show here that it is preserved throughout time under suitable boundary conditions. To approximate the solutions of this model, we propose a finite-difference discretization based on fractional centered differences. Some discrete quantities are proposed in this work to estimate the energy functional, and we show that the numerical method is capable of conserving the discrete energy under the same boundary conditions for which the continuous model is conservative. Moreover, we establish suitable computational constraints under which the discrete energy of the system is positive. The method is consistent of second order, and is both stable and convergent. The numerical simulations shown here illustrate the most important features of our numerical methodology.
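
    For readers unfamiliar with fractional centered differences, the following minimal sketch (not the authors' scheme as published) approximates the Riesz space-fractional Laplacian of a sampled function on a uniform grid, assuming a non-integer order 1 < alpha < 2 and zero values outside the grid; the coefficients are the standard ones built from Gamma functions.

```python
import numpy as np
from scipy.special import gamma

def riesz_fractional_laplacian(u, alpha, h):
    """Fractional centered-difference approximation of the Riesz operator
    -(-Delta)^(alpha/2) acting on samples u on a uniform grid with spacing h.
    A minimal sketch assuming 1 < alpha < 2 and u ~ 0 outside the grid
    (homogeneous boundary conditions)."""
    n = len(u)
    k = np.arange(-(n - 1), n)                       # offsets -(n-1)..(n-1)
    g = ((-1.0) ** k) * gamma(alpha + 1.0) / (
        gamma(alpha / 2.0 - k + 1.0) * gamma(alpha / 2.0 + k + 1.0))
    # S[j] = sum_k g_k * u[j-k], evaluated as a discrete convolution.
    s = np.convolve(u, g, mode="full")[n - 1:2 * n - 1]
    return -s / h ** alpha
```

    For alpha = 2 the coefficients reduce to the usual three-point Laplacian stencil, which is a convenient sanity check of the sign convention used here.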

  8. Objective assessment of MPEG-2 video quality

    NASA Astrophysics Data System (ADS)

    Gastaldo, Paolo; Zunino, Rodolfo; Rovetta, Stefano

    2002-07-01

    The increasing use of video compression standards in broadcasting television systems has required, in recent years, the development of video quality measurements that take into account artifacts specifically caused by digital compression techniques. In this paper we present a methodology for the objective quality assessment of MPEG video streams by using circular back-propagation feedforward neural networks. Mapping neural networks can render nonlinear relationships between objective features and subjective judgments, thus avoiding any simplifying assumption on the complexity of the model. The neural network processes an instantaneous set of input values and yields an associated estimate of perceived quality. Therefore, the neural-network approach turns objective quality assessment into adaptive modeling of subjective perception. The objective features used for the estimate are chosen according to their assessed relevance to perceived quality and are continuously extracted in real time from compressed video streams. The overall system mimics perception but does not require any analytical model of the underlying physical phenomenon. The capability to process compressed video streams represents an important advantage over existing approaches, since avoiding the stream-decoding process greatly enhances real-time performance. Experimental results confirm that the system provides satisfactory, continuous-time approximations for actual scoring curves concerning real test videos.

  9. Management of the aging of critical safety-related concrete structures in light-water reactor plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naus, D.J.; Oland, C.B.; Arndt, E.G.

    1990-01-01

    The Structural Aging Program has the overall objective of providing the USNRC with an improved basis for evaluating nuclear power plant safety-related structures for continued service. The program consists of a management task and three technical tasks: materials property data base, structural component assessment/repair technology, and quantitative methodology for continued-service determinations. Objectives, accomplishments, and planned activities under each of these tasks are presented. Major program accomplishments include development of a materials property data base for structural materials as well as an aging assessment methodology for concrete structures in nuclear power plants. Furthermore, a review and assessment of inservice inspection techniques for concrete materials and structures has been completed, and work on development of a methodology which can be used for performing current as well as reliability-based future condition assessment of concrete structures is well under way. 43 refs., 3 tabs.

  10. Telephone-quality pathological speech classification using empirical mode decomposition.

    PubMed

    Kaleem, M F; Ghoraani, B; Guergachi, A; Krishnan, S

    2011-01-01

    This paper presents a computationally simple and effective methodology based on empirical mode decomposition (EMD) for classification of telephone quality normal and pathological speech signals. EMD is used to decompose continuous normal and pathological speech signals into intrinsic mode functions, which are analyzed to extract physically meaningful and unique temporal and spectral features. Using continuous speech samples from a database of 51 normal and 161 pathological speakers, which has been modified to simulate telephone quality speech under different levels of noise, a linear classifier is used with the feature vector thus obtained to obtain a high classification accuracy, thereby demonstrating the effectiveness of the methodology. The classification accuracy reported in this paper (89.7% for signal-to-noise ratio 30 dB) is a significant improvement over previously reported results for the same task, and demonstrates the utility of our methodology for cost-effective remote voice pathology assessment over telephone channels.
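
    A hedged sketch of the kind of pipeline described, assuming the third-party PyEMD (EMD-signal) package is available; the specific per-IMF descriptors below are illustrative stand-ins, not the exact feature set used in the paper.

```python
import numpy as np
from PyEMD import EMD   # third-party "EMD-signal" package (assumed available)

def emd_features(speech, fs):
    """Decompose a (telephone-band) speech segment into intrinsic mode
    functions and compute simple temporal/spectral descriptors per IMF;
    an illustrative stand-in for the paper's feature extraction."""
    imfs = EMD().emd(np.asarray(speech, float))
    feats = []
    for imf in imfs:
        energy = float(np.sum(imf ** 2))
        zero_cross = float(np.mean(np.abs(np.diff(np.sign(imf)))) / 2)  # crossings/sample
        spectrum = np.abs(np.fft.rfft(imf)) ** 2
        freqs = np.fft.rfftfreq(len(imf), d=1.0 / fs)
        centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
        feats.extend([energy, zero_cross, centroid])
    return np.array(feats)
```

    The resulting feature vectors can then be passed to any linear classifier, for example linear discriminant analysis.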

  11. Using Lean Process Improvement to Enhance Safety and Value in Orthopaedic Surgery: The Case of Spine Surgery.

    PubMed

    Sethi, Rajiv; Yanamadala, Vijay; Burton, Douglas C; Bess, Robert Shay

    2017-11-01

    Lean methodology was developed in the manufacturing industry to increase output and decrease costs. These labor organization methods have become the mainstay of major manufacturing companies worldwide. Lean methods involve continuous process improvement through the systematic elimination of waste, prevention of mistakes, and empowerment of workers to make changes. Because of the profit and productivity gains made in the manufacturing arena using lean methods, several healthcare organizations have adopted lean methodologies for patient care. Lean methods have now been implemented in many areas of health care. In orthopaedic surgery, lean methods have been applied to reduce complication rates and create a culture of continuous improvement. A step-by-step guide based on our experience can help surgeons use lean methods in practice. Surgeons and hospital centers well versed in lean methodology will be poised to reduce complications, improve patient outcomes, and optimize cost/benefit ratios for patient care.

  12. Proceedings of the 19th Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of applications software. The goals of the SEL are: (1) to understand the software development process in the GSFC environment; (2) to measure the effects of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that include this document.

  13. Newberry EGS Seismic Velocity Model

    DOE Data Explorer

    Templeton, Dennise

    2013-10-01

    We use ambient noise correlation (ANC) to create a detailed image of the subsurface seismic velocity at the Newberry EGS site down to 5 km. We collected continuous data for the 22 stations in the Newberry network, together with 12 additional stations from the nearby CC, UO and UW networks. The data were instrument corrected, whitened and converted to single bit traces before cross correlation according to the methodology in Benson (2007). There are 231 unique paths connecting the 22 stations of the Newberry network. The additional networks extended that to 402 unique paths crossing beneath the Newberry site.
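
    The pre-processing chain described (instrument correction aside) can be sketched for a single station pair as follows; this is a generic illustration of whitening, one-bit normalization and cross-correlation, not the production workflow used for the Newberry network.

```python
import numpy as np

def one_bit_cross_correlation(tr1, tr2, max_lag):
    """Ambient-noise style cross-correlation of two station records after
    spectral whitening and one-bit (sign) normalization; a minimal sketch of
    the cited processing workflow."""
    def whiten(x):
        spec = np.fft.rfft(x - np.mean(x))
        spec /= np.abs(spec) + 1e-12          # flatten the amplitude spectrum
        return np.fft.irfft(spec, n=len(x))
    a = np.sign(whiten(np.asarray(tr1, float)))   # one-bit traces
    b = np.sign(whiten(np.asarray(tr2, float)))
    full = np.correlate(a, b, mode="full")
    mid = len(full) // 2                           # zero-lag index
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, full[mid - max_lag: mid + max_lag + 1]
```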

  14. JEDI Methodology | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    The intent of the Jobs and Economic Development Impact (JEDI) models is to use project data (including costs) to demonstrate the employment and economic impacts that will likely result from specific scenarios, providing an estimate of overall economic impacts. Please see Limitations of JEDI Models for further details.

  15. Le management des projets scientifiques

    NASA Astrophysics Data System (ADS)

    Perrier, Françoise

    2000-12-01

    We describe in this paper a new approach for the management of scientific projects. This approach is the result of a long reflection carried out within the MQDP (Methodology and Quality in the Project Development) group of INSU-CNRS, and continued with Guy Serra. Our reflection was initiated with the study of the so-called `North-American Paradigm', which was initially considered as the only relevant management model. Through our active participation in several astrophysical projects we realized that this model could not be applied to our laboratories without major modifications. Therefore, step by step, we have constructed our own methodology, making the fullest use of the human potential existing in our research field, with its habits and skills. We have also participated in various working groups in industrial and scientific organisms for the benefit of CNRS. The management model presented here is based on a systemic and complex approach. This approach lets us describe the multiple aspects of a scientific project, especially taking into account the human dimension. The project system model includes three major interconnected systems, immersed within an influencing and influenced environment: the `System to be Realized', which defines the scientific and technical tasks leading to the scientific goals; the `Realizing System', which describes procedures, processes and organization; and the `Actors' System', which implements and boosts all the processes. Each one exists only through a series of successive models, elaborated at predefined dates of the project called `key-points'. These systems evolve with time and under often-unpredictable circumstances, and the models have to take this into account. At these key-points, each model is compared to reality and the difference between the predicted and realized tasks is evaluated in order to define the data for the next model. This model can be applied to any kind of project.

  16. A Methodology for the Parametric Reconstruction of Non-Steady and Noisy Meteorological Time Series

    NASA Astrophysics Data System (ADS)

    Rovira, F.; Palau, J. L.; Millán, M.

    2009-09-01

    Climatic and meteorological time series often show some persistence (in time) in the variability of certain features. One could regard annual, seasonal and diurnal time variability as trivial persistence in the variability of some meteorological magnitudes (e.g. global radiation, air temperature above the surface, etc.). In these cases, the traditional Fourier transform into frequency space will show the principal harmonics as the components with the largest amplitude. Nevertheless, meteorological measurements often show other, non-steady (in time) variability. Some fluctuations in measurements (at different time scales) are driven by processes that prevail on some days (or months) of the year but disappear on others. By decomposing a time series into time-frequency space through the continuous wavelet transformation, one is able to determine both the dominant modes of variability and how those modes vary in time. This study is based on a numerical methodology to analyse non-steady principal harmonics in noisy meteorological time series. This methodology combines the continuous wavelet transform with the development of a parametric model that includes the time evolution of the principal and most statistically significant harmonics of the original time series. The parameterisation scheme proposed in this study consists of reproducing the original time series by means of a statistically significant finite sum of sinusoidal signals (waves), each defined by the three usual parameters: amplitude, frequency and phase. To ensure the statistical significance of the parametric reconstruction of the original signal, we propose a standard Student's t analysis of the confidence level of the amplitude in the parametric spectrum for the different wave components. Once we have ensured the level of significance of the different waves composing the parametric model, we can obtain the statistically significant principal harmonics (in time) of the original time series by using the Fourier transform of the modelled signal. Acknowledgements: The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (València, Spain). This study has been partially funded by the European Commission (FP VI, Integrated Project CIRCE - No. 036961) and by the Ministerio de Ciencia e Innovación, research projects "TRANSREG" (CGL2007-65359/CLI) and "GRACCIE" (CSD2007-00067, Program CONSOLIDER-INGENIO 2010).
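
    As a hedged illustration of the parametric reconstruction step, the sketch below fits a finite sum of sinusoids with prescribed candidate frequencies (e.g. the dominant wavelet or Fourier modes) to a series by linear least squares, returning an amplitude and phase per wave; it is not the authors' exact scheme and all names are hypothetical.

```python
import numpy as np

def fit_harmonics(t, x, freqs):
    """Least-squares fit of x(t) ~ a0 + sum_i A_i * cos(2*pi*f_i*t + phi_i)
    for a prescribed set of candidate frequencies; returns the offset plus
    amplitude and phase per frequency."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    design = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design, x, rcond=None)
    amp, phase = [], []
    for i, _ in enumerate(freqs):
        c, s = coef[1 + 2 * i], coef[2 + 2 * i]   # c = A*cos(phi), s = -A*sin(phi)
        amp.append(np.hypot(c, s))
        phase.append(np.arctan2(-s, c))
    return coef[0], np.array(amp), np.array(phase)
```

    The statistical significance of each fitted amplitude can then be assessed against its standard error, mirroring the t-based screening described above.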

  17. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    PubMed

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) All-encompassing conceptual model of decision making, and (5) computational (in this case agent-based) Model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weakness-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an Agent Based Model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  19. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  20. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  1. Assessment of undiscovered continuous oil and gas resources in the Hanoi Trough, Vietnam, 2017

    USGS Publications Warehouse

    Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Le, Phuong A.; Klett, Timothy R.; Finn, Thomas M.; Leathers-Miller, Heidi M.; Gaswirth, Stephanie B.; Marra, Kristen R.

    2018-02-13

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 52 million barrels of oil and 591 billion cubic feet of gas in the Hanoi Trough of Vietnam.

  2. Profilometric characterization of DOEs with continuous microrelief

    NASA Astrophysics Data System (ADS)

    Korolkov, V. P.; Ostapenko, S. V.; Shimansky, R. V.

    2008-09-01

    The methodology for local characterization of continuous-relief diffractive optical elements is discussed. The local profile depth can be evaluated using an "approximated depth" defined without taking the profile near diffractive zone boundaries into account. Several methods for estimating the approximated depth are offered.

  3. 47 CFR 65.800 - Rate base.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Rate base. 65.800 Section 65.800 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Rate Base § 65.800 Rate base. The rate base shall...

  4. Robust inference in discrete hazard models for randomized clinical trials.

    PubMed

    Nguyen, Vinh Q; Gillen, Daniel L

    2012-10-01

    Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
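
    For orientation, a standard (non-robust) discrete-time hazard fit can be sketched as follows: expand the data into person-period form and fit a logistic regression with interval-specific intercepts. This baseline is what the proposed estimator builds on and corrects for censoring dependence; the column names and helper functions are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

def person_period(df):
    """Expand (id, time, event, trt) survival data observed on a discrete
    assessment grid into one row per subject per interval at risk, with a
    binary indicator of failure in that interval."""
    rows = []
    for _, r in df.iterrows():
        for k in range(1, int(r["time"]) + 1):
            rows.append({"id": r["id"], "interval": k, "trt": r["trt"],
                         "fail": int(k == r["time"] and r["event"] == 1)})
    return pd.DataFrame(rows)

def fit_discrete_hazard(pp):
    """Discrete-time hazard model via logistic regression on person-period
    data; interval dummies play the role of the baseline hazard."""
    X = pd.get_dummies(pp["interval"].astype("category"), prefix="t", dtype=float)
    X["trt"] = pp["trt"].astype(float)
    return sm.Logit(pp["fail"], X).fit(disp=0)
```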

  5. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    DTIC Science & Technology

    2016-06-01

    characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. To ensure consistency... methodology. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Figure 27 segments the MBSE MEASA... rounding has the potential to increase the correlation between columns of the experimental design matrix. The design methodology presented in Vieira

  6. Measuring and statistically testing the size of the effect of a chemical compound on a continuous in-vitro pharmacological response through a new statistical model of response detection limit

    PubMed Central

    Diaz, Francisco J.; McDonald, Peter R.; Pinter, Abraham; Chaguturu, Rathnam

    2018-01-01

    Biomolecular screening research frequently searches for the chemical compounds that are most likely to make a biochemical or cell-based assay system produce a strong continuous response. Several doses are tested with each compound, and it is assumed that, if there is a dose-response relationship, the relationship follows a monotonic curve, usually a version of the median-effect equation. However, the null hypothesis of no relationship cannot be statistically tested using this equation. We used a linearized version of this equation to define a measure of pharmacological effect size, and used this measure to rank the investigated compounds in order of their overall capability to produce strong responses. The null hypothesis that none of the examined doses of a particular compound produced a strong response can be tested with this approach. The proposed approach is based on a new statistical model of the important concept of response detection limit, a concept that is usually neglected in the analysis of dose-response data with continuous responses. The methodology is illustrated with data from a study searching for compounds that neutralize the infection by a human immunodeficiency virus of brain glioblastoma cells. PMID:24905187
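
    As background, the linearized median-effect relationship referred to above can be fitted in a few lines; the sketch below is a generic illustration of that linearization (not the proposed detection-limit model), with hypothetical function names.

```python
import numpy as np

def median_effect_fit(dose, fraction_affected):
    """Fit the linearized median-effect equation
        log(fa / (1 - fa)) = m*log(D) - m*log(Dm)
    by ordinary least squares, returning the slope m and the median-effect
    dose Dm."""
    d = np.asarray(dose, float)
    fa = np.clip(np.asarray(fraction_affected, float), 1e-6, 1 - 1e-6)
    y = np.log(fa / (1.0 - fa))
    x = np.log(d)
    m, b = np.polyfit(x, y, 1)      # y = m*x + b, with b = -m*log(Dm)
    dm = np.exp(-b / m)
    return m, dm
```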

  7. Design and implementation of a system for laser assisted milling of advanced materials

    NASA Astrophysics Data System (ADS)

    Wu, Xuefeng; Feng, Gaocheng; Liu, Xianli

    2016-09-01

    Laser assisted machining is an effective method to machine advanced materials with the added benefits of longer tool life and increased material removal rates. While extensive studies have investigated the machining properties for laser assisted milling (LAML), few attempts have been made to extend LAML to machining parts with complex geometric features. A methodology for continuous path machining for LAML is developed by integration of a rotary and movable table into an ordinary milling machine with a laser beam system. The machining strategy and processing path are investigated to determine alignment of the machining path with the laser spot. In order to keep the material removal temperatures above the softening temperature of silicon nitride, the transformation is coordinated and the temperature interpolated, establishing a transient thermal model. The temperatures of the laser center and cutting zone are also carefully controlled to achieve optimal machining results and avoid thermal damage. These experiments indicate that the system results in no surface damage as well as good surface roughness, validating the application of this machining strategy and thermal model in the development of a new LAML system for continuous path processing of silicon nitride. The proposed approach can be easily applied in an LAML system to achieve continuous processing and improve efficiency in laser assisted machining.

  8. A general description of detachment for multidimensional modelling of biofilms.

    PubMed

    Xavier, Joao de Bivar; Picioreanu, Cristian; van Loosdrecht, Mark C M

    2005-09-20

    A general method for describing biomass detachment in multidimensional biofilm modelling is introduced. Biomass losses from processes acting on the entire surface of the biofilm, such as erosion, are modelled using a continuous detachment speed function F(det). Discrete detachment events, i.e. sloughing, are implicitly derived from simulations. The method is flexible to allow F(det) to take several forms, including expressions dependent on any state variables such as the local biofilm density. This methodology for biomass detachment was integrated with multidimensional (2D and 3D) particle-based multispecies biofilm models by using a novel application of the level set method. Application of the method is illustrated by trends in the dynamics of biofilm structure and activity derived from simulations performed on a simple model considering uniform biomass (case study I) and a model discriminating biomass composition into heterotrophic active mass, extracellular polymeric substances (EPS) and inert mass (case study II). Results from case study I demonstrate the effect of applied detachment forces as a fundamental factor influencing steady-state biofilm activity and structure. Trends from experimental observations reported in literature were correctly described. For example, simulation results indicated that biomass sloughing is reduced when erosion forces are increased. Case study II illustrates the application of the detachment methodology to systems with non-uniform biomass composition. Simulations carried out at different bulk concentrations of substrate show changes in biofilm structure (in terms of shape, density and spatial distribution of biomass components) and activity (in terms of oxygen and substrate consumption) as a consequence of either oxygen-limited or substrate-limited growth. (c) 2005 Wiley Periodicals, Inc.
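
    The level set ingredient can be sketched as follows: a minimal 2D upwind update that erodes the region marked as biofilm with a non-negative detachment speed F(det). This is an illustrative implementation under simplifying assumptions (uniform grid, periodic boundaries, explicit time stepping), not the published code.

```python
import numpy as np

def detach_step(phi, f_det, dx, dt):
    """One explicit upwind (Godunov-type) update of a level-set function phi
    (phi < 0 inside the biofilm, phi > 0 in the bulk liquid) eroded in the
    normal direction with non-negative detachment speed f_det, i.e.
        d(phi)/dt = f_det * |grad(phi)|.
    f_det may be a scalar or an array (e.g. a function of local density)."""
    # One-sided differences; np.roll implies periodic boundaries (sketch only).
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward difference, x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward difference,  x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference, y
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference,  y
    grad = np.sqrt(np.minimum(dmx, 0.0) ** 2 + np.maximum(dpx, 0.0) ** 2 +
                   np.minimum(dmy, 0.0) ** 2 + np.maximum(dpy, 0.0) ** 2)
    return phi + dt * f_det * grad   # stable for dt <= dx / max(f_det)
```

    In the full method, biomass regions left disconnected from the substratum after such an update are removed as discrete sloughing events, which is how sloughing emerges implicitly from the simulations.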

  9. A Process Evaluation of Project Developmental Continuity. Interim Report IV, Volume I: Pilot Year Impact Study--Instrument Characteristics and Attrition Trends.

    ERIC Educational Resources Information Center

    Granville, Arthur C.; And Others

    This interim report of a pilot year impact study on evaluation methodology is part of a series of documents on the evaluation of Project Developmental Continuity, a Head Start demonstration program aimed at promoting educational and developmental continuity between children's Head Start and primary school experiences. This report deals with…

  10. Horizontal violence in nursing: the continuing silence.

    PubMed

    McCall, E

    1996-04-01

    Horizontal violence continues to affect the lives of many nurses today. This paper will present the findings of research which utilised a feminist methodology to document the stories of nurses illustrating their experiences of horizontal violence in their workplaces, and their perceptions of the reasons for the continued oppression of nurses, despite almost fifteen years of academic writing on the subject.

  11. Resiliency scoring for business continuity plans.

    PubMed

    Olson, Anna; Anderson, Jamie

    Through this paper readers will learn of a scoring methodology, referred to as resiliency scoring, which enables the evaluation of business continuity plans based upon analysis of their alignment with a predefined set of criteria that can be customised and are adaptable to the needs of any organisation. This patent pending tool has been successful in driving engagement and is a powerful resource to improve reporting capabilities, identify risks and gauge organisational resilience. The role of business continuity professionals is to aid their organisations in planning and preparedness activities aimed at mitigating the impacts of potential disruptions and ensuring critical business functions can continue in the event of unforeseen circumstances. This may seem like a daunting task for what can typically be a small team of individuals. For this reason, it is important to be able to leverage industry standards, documented best practices and effective tools to streamline and support your continuity programme. The resiliency scoring methodology developed and implemented at Target has proven to be a valuable tool in taking the organisation's continuity programme to the next level. This paper will detail how the tool was developed and provide guidance on how it can be customised to fit your organisation's unique needs.

  12. Teaching Research through Field Studies: A Cumulative Opportunity for Teaching Methodology to Human Geography Undergraduates

    ERIC Educational Resources Information Center

    Panelli, Ruth; Welch, Richard V.

    2005-01-01

    Notwithstanding its iconic status within geography, the debate continues about how fieldwork should be taught to undergraduate students. The authors engage with this debate and argue that field studies should follow the teaching of research methodology. In this paper they review relevant literature on the place of fieldwork in geography training,…

  13. Using Indigenous Educational Research to Transform Mainstream Education: A Guide for P-12 School Leaders

    ERIC Educational Resources Information Center

    Harrington, Billie Graham; CHiXapkaid (Pavel, D. Michael)

    2013-01-01

    The principal assertion of this article is that Indigenous research methodologies should be used to develop educational policies and practices for Native students. The history of American educational research is marred by a near complete dismissal of Indigenous knowledge, as Western research methodologies continue to define the landscape of P-12…

  14. Students' Involvement in Continuous Assessment Methodologies: A Case Study for a Distributed Information Systems Course

    ERIC Educational Resources Information Center

    Cano, M.-D.

    2011-01-01

    The creation of the new European Higher Education Area (EHEA), with the corresponding changes in the structure and content of university degrees, offers a great opportunity to review learning methodologies. This paper investigates the effect on students of moving from a traditional learning process, based on lectures and laboratory work, to an…

  15. Big Data Analytics Methodology in the Financial Industry

    ERIC Educational Resources Information Center

    Lawler, James; Joseph, Anthony

    2017-01-01

    Firms in industry continue to be attracted by the benefits of Big Data Analytics. The benefits of Big Data Analytics projects may not be as evident as frequently indicated in the literature. The authors of the study evaluate factors in a customized methodology that may increase the benefits of Big Data Analytics projects. Evaluating firms in the…

  16. An Examination of the State of Imitation Research in Children with Autism: Issues of Definition and Methodology

    ERIC Educational Resources Information Center

    Sevlever, Melina; Gillis, Jennifer M.

    2010-01-01

    Several authors have suggested that children with autism are impaired in their ability to imitate others. However, diverse methodologies, contradictory findings, and varying theoretical explanations continue to exist in the literature despite decades of research. A comprehensive account of imitation in children with autism is hampered by the lack…

  17. Force 2025 and Beyond Strategic Force Design Analytic Model

    DTIC Science & Technology

    2017-01-12

    depiction of the core ideas of our force design model. Figure 1: Description of Force Design Model. Figure 2 shows an overview of our methodology... the F2025B Force Design Analytic Model research conducted by TRAC-MTRY and the Naval Postgraduate School. Our research develops a methodology for... designs. We describe a data development methodology that characterizes the data required to construct a force design model using our approach. We

  18. ORACLS: A system for linear-quadratic-Gaussian control law design

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
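
    For comparison with the Fortran subroutines described, the continuous-time regulator computation can be sketched today in a few lines of Python/SciPy; this is an illustrative modern equivalent, not part of ORACLS.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(a, b, q, r):
    """Continuous-time linear-quadratic regulator gain: minimise the integral
    of x'Qx + u'Ru subject to dx/dt = Ax + Bu, with feedback u = -Kx."""
    p = solve_continuous_are(a, b, q, r)   # algebraic Riccati equation
    return np.linalg.solve(r, b.T @ p)     # K = R^{-1} B' P

# Example: double integrator
a = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
k = lqr(a, b, np.eye(2), np.array([[1.0]]))
print("closed-loop poles:", np.linalg.eigvals(a - b @ k))
```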

  19. Graceful Failure and Societal Resilience Analysis Via Agent-Based Modeling and Simulation

    NASA Astrophysics Data System (ADS)

    Schopf, P. S.; Cioffi-Revilla, C.; Rogers, J. D.; Bassett, J.; Hailegiorgis, A. B.

    2014-12-01

    Agent-based social modeling is opening up new methodologies for the study of societal response to weather and climate hazards, and providing measures of resiliency that can be studied in many contexts, particularly in coupled human and natural-technological systems (CHANTS). Since CHANTS are complex adaptive systems, societal resiliency may or may not occur, depending on dynamics that lack closed form solutions. Agent-based modeling has been shown to provide a viable theoretical and methodological approach for analyzing and understanding disasters and societal resiliency in CHANTS. Our approach advances the science of societal resilience through computational modeling and simulation methods that complement earlier statistical and mathematical approaches. We present three case studies of social dynamics modeling that demonstrate the use of these agent-based models. In Central Asia, we examine multiple ensemble simulations with varying climate statistics to see how droughts and zuds affect populations, transmission of wealth across generations, and the overall structure of the social system. In Eastern Africa, we explore how successive episodes of drought events affect the adaptive capacity of rural households. Human displacement, mainly rural to urban migration, and livelihood transition, particularly from pastoral to farming, are observed as rural households interact dynamically with the biophysical environment and continually adjust their behavior to accommodate changes in climate. In the far north case we demonstrate one of the first successful attempts to model the complete climate-permafrost-infrastructure-societal interaction network as a complex adaptive system/CHANTS implemented as a "federated" agent-based model using evolutionary computation. Analysis of population changes resulting from extreme weather across these and other cases provides evidence for the emergence of new steady states and shifting patterns of resilience.

  20. PyMCT: A Very High Level Language Coupling Tool For Climate System Models

    NASA Astrophysics Data System (ADS)

    Tobis, M.; Pierrehumbert, R. T.; Steder, M.; Jacob, R. L.

    2006-12-01

    At the Climate Systems Center of the University of Chicago, we have been examining strategies for applying agile programming techniques to complex high-performance modeling experiments. While the "agile" development methodology differs from a conventional requirements process and its associated milestones, the process remains a formal one. It is distinguished by continuous improvement in functionality, large numbers of small releases, extensive and ongoing testing strategies, and a strong reliance on very high level languages (VHLL). Here we report on PyMCT, which we intend as a core element in a model ensemble control superstructure. PyMCT is a set of Python bindings for MCT, the Fortran-90 based Model Coupling Toolkit, which forms the infrastructure for inter-component communication in the Community Climate System Model (CCSM). MCT provides a scalable model communication infrastructure. In order to take maximum advantage of agile software development methodologies, we exposed MCT functionality to Python, a prominent VHLL. We describe how the scalable architecture of MCT allows us to overcome the relatively weak runtime performance of Python, so that the performance of the combined system is not severely impacted. To demonstrate these advantages, we reimplemented the CCSM coupler in Python. While this alone offers no new functionality, it does provide a rigorous test of PyMCT functionality and performance. We reimplemented the CPL6 library, presenting an interesting case study of the comparison between conventional Fortran-90 programming and the higher abstraction level provided by a VHLL. The powerful abstractions provided by Python will allow much more complex experimental paradigms. In particular, we hope to build on the scriptability of our coupling strategy to enable systematic sensitivity tests. Our most ambitious objective is to combine our efforts with Bayesian inverse modeling techniques toward objective tuning at the highest level, across model architectures.

  1. Dose Transition Pathways: The Missing Link Between Complex Dose-Finding Designs and Simple Decision-Making.

    PubMed

    Yap, Christina; Billingham, Lucinda J; Cheung, Ying Kuen; Craddock, Charlie; O'Quigley, John

    2017-12-15

    The ever-increasing pace of development of novel therapies mandates efficient methodologies for assessment of their tolerability and activity. Evidence increasingly supports the merits of model-based dose-finding designs in identifying the recommended phase II dose compared with conventional rule-based designs such as the 3 + 3, but despite this, their use remains limited. Here, we propose a useful tool, dose transition pathways (DTP), which helps overcome several commonly faced practical and methodologic challenges in the implementation of model-based designs. DTP projects in advance the doses recommended by a model-based design for subsequent patients (stay, escalate, de-escalate, or stop early), using all the accumulated information. After specifying a model with favorable statistical properties, we utilize the DTP to fine-tune the model to tailor it to the trial's specific requirements that reflect important clinical judgments. In particular, it can help to determine how stringent the stopping rules should be if the investigated therapy is too toxic. Its use to design and implement a modified continual reassessment method is illustrated in an acute myeloid leukemia trial. DTP removes the fears of model-based designs as unknown, complex systems and can serve as a handbook, guiding decision-making for each dose update. In the illustrated trial, the seamless, clear transition for each dose recommendation aided the investigators' understanding of the design and facilitated decision-making to enable finer calibration of a tailored model. We advocate the use of the DTP as an integral procedure in the co-development and successful implementation of practical model-based designs by statisticians and investigators. Clin Cancer Res; 23(24); 7440-7. ©2017 American Association for Cancer Research.
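
    To indicate how the underlying dose-recommendation engine works, the sketch below implements a generic one-parameter power-model continual reassessment method update with a normal prior and grid-based posterior integration; the skeleton, prior standard deviation and target toxicity rate are illustrative assumptions, not the calibrated values of the cited trial.

```python
import numpy as np

def crm_recommend(skeleton, n_tox, n_pat, target=0.25, sigma=1.34):
    """One dose-recommendation update for a one-parameter power-model CRM:
    prior toxicity skeleton p_i, working model p_i**exp(a), normal prior
    a ~ N(0, sigma^2), posterior computed by numerical integration on a grid.
    Returns the recommended dose index and posterior mean toxicity per dose."""
    skeleton = np.asarray(skeleton, float)
    n_tox = np.asarray(n_tox, float)
    n_pat = np.asarray(n_pat, float)
    a = np.linspace(-4.0, 4.0, 2001)                     # grid for parameter a
    prob = skeleton[None, :] ** np.exp(a)[:, None]       # p_i(a), shape (grid, doses)
    loglik = (n_tox * np.log(prob) + (n_pat - n_tox) * np.log(1.0 - prob)).sum(axis=1)
    post = np.exp(loglik - loglik.max()) * np.exp(-0.5 * (a / sigma) ** 2)
    post /= np.trapz(post, a)
    post_tox = np.trapz(post[:, None] * prob, a, axis=0)  # posterior mean toxicity
    return int(np.argmin(np.abs(post_tox - target))), post_tox

# Example: 4 doses, 2/3 toxicities observed at dose level 3 so far
level, tox = crm_recommend([0.05, 0.12, 0.25, 0.40],
                           n_tox=[0, 0, 2, 0], n_pat=[3, 3, 3, 0])
```

    Tabulating the recommendation returned by such an update for every possible toxicity outcome of the next cohort is exactly what a dose transition pathway lays out in advance.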

  2. Assessment of undiscovered continuous gas resources in Upper Devonian Shales of the Appalachian Basin Province, 2017

    USGS Publications Warehouse

    Enomoto, Catherine B.; Trippi, Michael H.; Higley, Debra K.; Rouse, William A.; Dulong, Frank T.; Klett, Timothy R.; Mercier, Tracey J.; Brownfield, Michael E.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Marra, Kristen R.; Le, Phuong A.; Woodall, Cheryl A.; Schenk, Christopher J.

    2018-04-19

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 10.7 trillion cubic feet of natural gas in Upper Devonian shales of the Appalachian Basin Province.

  3. Continuity and Variation in Chinese Patterns of Socialization.

    ERIC Educational Resources Information Center

    Ho, David Y. F.

    1989-01-01

    Reviews literature on Chinese patterns of socialization. Discusses methodological issues with respect to continuity versus change through time, and variation across geographical locations, systematically considering variables of gender, age, and social class. Concludes that departures from traditional pattern in different locations are evident,…

  4. 40 CFR Table Nn-2 to Subpart Hh of... - Lookup Default Values for Calculation Methodology 2 of This Subpart

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Municipal Solid Waste Landfills Pt. 98, Subpt. NN, Table NN-2 Table NN-2 to Subpart HH of Part 98—Lookup Default Values...

  5. Challenging accepted wisdom: looking at the gender and science education question through a different lens

    NASA Astrophysics Data System (ADS)

    Gilbert, Jane; Calvert, Sarah

    2003-07-01

    This article reports on a research project designed to explore a group of women scientists' understandings of themselves and science. The project uses an unconventional methodology: a mixture of conventional qualitative research methods and techniques developed for use in psychotherapy. Its preliminary results appear to contradict some of the assumptions on which much past work on girls and science education is based. For example, we found that, for the women involved in this project, factors such as the presence in their lives of strong female role models and/or the use of 'girl-friendly' curriculum materials were not important in their decision to continue the study of science to university level. Other factors, some of which were quite unexpected, had a much greater effect. The article outlines the methodology of this project and some of its findings, and explores the implications of these findings for future work on the gender and science education question.

  6. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    NASA Astrophysics Data System (ADS)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which permits accelerating the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
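
    The hashing idea can be sketched with the canonical LSH family for Hamming distance, coordinate (bit) sampling: patterns that agree on randomly sampled coordinates land in the same bucket with high probability when they are similar. The class below is a minimal illustration with hypothetical names, not the LSHSIM implementation.

```python
import numpy as np
from collections import defaultdict

class HammingLSH:
    """Locality Sensitive Hashing for fixed-size (flattened) categorical
    patterns under Hamming distance, via coordinate sampling."""
    def __init__(self, pattern_len, n_tables=8, n_coords=10, seed=0):
        rng = np.random.default_rng(seed)
        self.samples = [rng.choice(pattern_len, n_coords, replace=False)
                        for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]

    def index(self, patterns):
        """patterns: 2D array, one flattened training-image pattern per row."""
        self.patterns = np.asarray(patterns)
        for s, table in zip(self.samples, self.tables):
            for i, p in enumerate(self.patterns):
                table[tuple(p[s])].append(i)

    def query(self, target):
        """Return candidate pattern indices likely to be similar to target."""
        target = np.asarray(target)
        cand = set()
        for s, table in zip(self.samples, self.tables):
            cand.update(table.get(tuple(target[s]), []))
        return sorted(cand)
```

    Candidates returned by query() would then be ranked by an exact Hamming similarity, which is where the RLE acceleration described above applies.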

  7. An overview of sensor calibration inter-comparison and applications

    USGS Publications Warehouse

    Xiong, Xiaoxiong; Cao, Changyong; Chander, Gyanesh

    2010-01-01

    Long-term climate data records (CDR) are often constructed using observations made by multiple Earth observing sensors over a broad range of spectra and a large scale in both time and space. These sensors can be of the same or different types operated on the same or different platforms. They can be developed and built with different technologies and are likely operated over different time spans. It has been known that the uncertainty of climate models and data records depends not only on the calibration quality (accuracy and stability) of individual sensors, but also on their calibration consistency across instruments and platforms. Therefore, sensor calibration inter-comparison and validation have become increasingly demanding and will continue to play an important role for a better understanding of the science product quality. This paper provides an overview of different methodologies, which have been successfully applied for sensor calibration inter-comparison. Specific examples using different sensors, including MODIS, AVHRR, and ETM+, are presented to illustrate the implementation of these methodologies.

  8. Using forbidden ordinal patterns to detect determinism in irregularly sampled time series.

    PubMed

    Kulp, C W; Chobot, J M; Niskala, B J; Needhammer, C J

    2016-02-01

    It is known that when symbolizing a time series into ordinal patterns using the Bandt-Pompe (BP) methodology, there will be ordinal patterns called forbidden patterns that do not occur in a deterministic series. The existence of forbidden patterns can be used to identify deterministic dynamics. In this paper, the ability to use forbidden patterns to detect determinism in irregularly sampled time series is tested on data generated from a continuous model system. The study is done in three parts. First, the effects of sampling time on the number of forbidden patterns are studied on regularly sampled time series. The next two parts focus on two types of irregular-sampling, missing data and timing jitter. It is shown that forbidden patterns can be used to detect determinism in irregularly sampled time series for low degrees of sampling irregularity (as defined in the paper). In addition, comments are made about the appropriateness of using the BP methodology to symbolize irregularly sampled time series.
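
    A minimal sketch of the forbidden-pattern count for a regularly sampled series is given below (the pattern order, series length and logistic-map example are illustrative choices); irregular sampling would enter through how the embedding windows are formed.

```python
import numpy as np
from math import factorial

def forbidden_pattern_fraction(x, order=4):
    """Symbolize a series with Bandt-Pompe ordinal patterns of the given order
    and return the fraction of the order! possible patterns that never occur
    ('forbidden' patterns). Deterministic series retain forbidden patterns;
    sufficiently long i.i.d. noise does not."""
    x = np.asarray(x, float)
    seen = set()
    for i in range(len(x) - order + 1):
        seen.add(tuple(np.argsort(x[i:i + order])))   # rank-order symbol
    total = factorial(order)
    return (total - len(seen)) / total

# Example: logistic map (deterministic) vs. white noise
rng = np.random.default_rng(1)
z = np.empty(5000)
z[0] = 0.4
for i in range(1, len(z)):
    z[i] = 4.0 * z[i - 1] * (1.0 - z[i - 1])
print(forbidden_pattern_fraction(z), forbidden_pattern_fraction(rng.random(5000)))
```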

  9. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  10. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design of failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  11. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multi-resolution presentation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since the "state of the art" multiresolution approach usually uses the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been considered as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across different temporal lines and local time stepping control. A critical aspect of time integration accuracy is the construction of the spatial stencil used for accurate calculation of spatial derivatives. Since the common approach applied for wavelets and splines uses a finite difference operator, we developed here a collocation operator that includes solution values and the differential operator. In this way, the new improved algorithm is adaptive in space and time, enabling accurate solutions of groundwater flow problems, especially in highly heterogeneous porous media with large lnK variances and different correlation length scales. In addition, differences between collocation and finite volume approaches are discussed. Finally, results show the application of the methodology to groundwater flow problems in highly heterogeneous confined and unconfined aquifers.

  12. Realistic computer network simulation for network intrusion detection dataset generation

    NASA Astrophysics Data System (ADS)

    Payer, Garrett

    2015-05-01

    The KDD-99 Cup dataset is dead. While it can continue to be used as a toy example, the age of this dataset makes it all but useless for intrusion detection research and data mining. Many of the attacks used within the dataset are obsolete and do not reflect the features important for intrusion detection in today's networks. Creating a new dataset encompassing a large cross section of the attacks found on the Internet today could be useful, but would eventually fall to the same problem as the KDD-99 Cup; its usefulness would diminish after a period of time. To continue research into intrusion detection, the generation of new datasets needs to be as dynamic and as quick as the attacker. Simply examining existing network traffic and using domain experts such as intrusion analysts to label traffic is inefficient, expensive, and not scalable. The only viable methodology is simulation using technologies including virtualization, attack-toolsets such as Metasploit and Armitage, and sophisticated emulation of threat and user behavior. Simulating actual user behavior and network intrusion events dynamically not only allows researchers to vary scenarios quickly, but enables online testing of intrusion detection mechanisms by interacting with data as it is generated. As new threat behaviors are identified, they can be added to the simulation to make quicker determinations as to the effectiveness of existing and ongoing network intrusion technology, methodology and models.

  13. Continuous-flow laboratory simulation of stream water quality changes downstream of an untreated wastewater discharge.

    PubMed

    Finnegan, C J; van Egmond, R A; Price, O R; Whelan, M J

    2009-04-01

    In regions of the world with poor provision of wastewater treatment, raw sewage is often discharged directly into surface waters. This paper describes an experimental evaluation of the fate of two organic chemicals under these conditions using an artificial channel cascade fed with a mix of settled sewage and river water at its upstream end and operated under continuous steady-state conditions. The experiments underpin an environmental risk assessment methodology based on the idea of an "impact zone" (IZ) - the zone downstream of wastewater emission in which water quality is severely impaired by high concentrations of unionised ammonia, nitrite and biochemical oxygen demand (BOD). Radiolabelled dodecane-6-benzene sulphonate (DOBS) and aniline hydrochloride were used as the model chemical and reference compound respectively. Rapid changes in (14)C counts were observed with flow-time for both these materials. These changes were most likely to be due to complete mineralisation. A dissipation half-life of approximately 7.1 h was observed for the (14)C label with DOBS. The end of the IZ was defined as the point at which the concentration of both unionised ammonia and nitrite fell below their respective predicted no-effect concentrations for salmonids. At these points in the cascade, approximately 83 and 90% of the initial concentration of (14)C had been removed from the water column, respectively. A simple model of mineral nitrogen transformations based on Michaelis-Menten kinetics was fitted to observed concentrations of NH(4), NO(2) and NO(3). The cascade is intended to provide a confirmatory methodology for assessing the ecological risks of chemicals under direct discharge conditions.
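
    As an illustration of how a dissipation half-life such as the ~7.1 h quoted above can be obtained, the sketch below fits a first-order decay to concentration versus flow time; it is a generic calculation with hypothetical names, not the study's own analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def dissipation_half_life(flow_time_h, conc):
    """Fit a first-order dissipation model C(t) = C0 * exp(-k*t) to
    concentrations observed along the cascade as a function of flow time
    (hours) and return the half-life ln(2)/k in hours."""
    def model(t, c0, k):
        return c0 * np.exp(-k * t)
    t = np.asarray(flow_time_h, float)
    c = np.asarray(conc, float)
    (c0, k), _ = curve_fit(model, t, c, p0=(c.max(), 0.1))
    return np.log(2.0) / k
```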

  14. On representing the prognostic value of continuous gene expression biomarkers with the restricted mean survival curve.

    PubMed

    Eng, Kevin H; Schiller, Emily; Morrell, Kayla

    2015-11-03

    Researchers developing biomarkers for cancer prognosis from quantitative gene expression data are often faced with an odd methodological discrepancy: while Cox's proportional hazards model, the appropriate and popular technique, produces a continuous and relative risk score, it is hard to cast the estimate in clear clinical terms like median months of survival and percent of patients affected. To produce a familiar Kaplan-Meier plot, researchers commonly make the decision to dichotomize a continuous (often unimodal and symmetric) score. It is well known in the statistical literature that this procedure induces significant bias. We illustrate the liabilities of common techniques for categorizing a risk score and discuss alternative approaches. We promote the use of the restricted mean survival (RMS) and the corresponding RMS curve that may be thought of as an analog to the best fit line from simple linear regression. Continuous biomarker workflows should be modified to include the more rigorous statistical techniques and descriptive plots described in this article. All statistics discussed can be computed via standard functions in the Survival package of the R statistical programming language. Example R language code for the RMS curve is presented in the appendix.
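
    The article computes its statistics with R's survival package; the sketch below is a rough Python analogue of the proposed RMS curve, integrating Cox-model survival predictions up to a restriction time across a grid of biomarker values. The simulated data, the 60-month restriction time, the lifelines-based workflow and the column names ("expr", "time", "event") are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
n = 300
expr = rng.normal(size=n)                                # continuous biomarker
time = rng.exponential(scale=40 * np.exp(-0.5 * expr))   # survival times (months)
event = (rng.uniform(size=n) < 0.7).astype(int)          # ~30% censoring
df = pd.DataFrame({"expr": expr, "time": time, "event": event})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

tau = 60.0                                               # restriction time (months)
grid = pd.DataFrame({"expr": np.linspace(expr.min(), expr.max(), 50)})
surv = cph.predict_survival_function(grid)               # rows: times, cols: grid points
times = surv.index.values
mask = times <= tau
rms = trapezoid(surv.values[mask, :], times[mask], axis=0)  # RMS curve values

for x, m in zip(grid["expr"].round(2), rms.round(1)):
    print(f"biomarker={x:+.2f}  restricted mean survival ~ {m} months")
```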

  15. Implementation of Strategies in Continuing Education

    ERIC Educational Resources Information Center

    Kettunen, Juha

    2005-01-01

    Purpose--The purpose of this paper is to provide higher education institutions with strategies of continuing education and methods to communicate and implement these strategies. Design/methodology/approach--The balanced scorecard approach is used to implement the strategy. It translates the strategy into tangible objectives, measures and targets…

  16. Assessment of continuous oil and gas resources in the Pannonian Basin Province, Hungary, 2016

    USGS Publications Warehouse

    Schenk, Christopher J.; Klett, Timothy R.; Le, Phuong A.; Brownfield, Michael E.; Leathers-Miller, Heidi M.

    2017-06-29

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 119 million barrels of oil and 944 billion cubic feet of gas in the Hungarian part of the Pannonian Basin Province.

  17. Assessment of undiscovered continuous oil and gas resources in the Bohaiwan Basin Province, China, 2017

    USGS Publications Warehouse

    Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Woodall, Cheryl A.; Finn, Thomas M.; Brownfield, Michael E.; Le, Phuong A.; Klett, Timothy R.; Gaswirth, Stephanie B.; Marra, Kristen R.; Leathers-Miller, Heidi M.; Potter, Christopher J.

    2018-02-07

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 2.0 billion barrels of oil and 20.3 trillion cubic feet of gas in the Bohaiwan Basin Province, China.

  18. Enceladus Plume Density Modeling and Reconstruction for Cassini Attitude Control System

    NASA Technical Reports Server (NTRS)

    Sarani, Siamak

    2010-01-01

    In 2005, Cassini detected jets composed mostly of water, spouting from a set of nearly parallel rifts in the crust of Enceladus, an icy moon of Saturn. During an Enceladus flyby, either reaction wheels or attitude control thrusters on the Cassini spacecraft are used to overcome the external torque imparted on Cassini by the Enceladus plume or jets, as well as to slew the spacecraft in order to meet the pointing needs of the on-board science instruments. If the estimated imparted torque is larger than the reaction wheel control system can counteract, thrusters are used to control the spacecraft. Having an engineering model that can predict and simulate the external torque imparted on the Cassini spacecraft by the plume during all projected low-altitude Enceladus flybys is therefore important. Equally important is being able to reconstruct the plume density after each flyby in order to calibrate the model. This paper describes an engineering model of the Enceladus plume density, as a function of the flyby altitude, developed for the Cassini Attitude and Articulation Control Subsystem, and novel methodologies that use guidance, navigation, and control data to estimate the external torque imparted on the spacecraft by the Enceladus plume and jets. The plume density is determined accordingly. The methodologies described have already been used to reconstruct the plume density for three low-altitude Enceladus flybys of Cassini in 2008 and will continue to be used on all remaining low-altitude Enceladus flybys in Cassini's extended missions.
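
    A minimal sketch of the kind of altitude-dependent density model and drag-torque estimate described above. The exponential form, scale height, reference density, drag coefficient, projected area, relative speed and moment arm are illustrative assumptions, not Cassini or Enceladus values.

```python
import numpy as np

def plume_density(h_km, rho0=1e-9, scale_height_km=80.0):
    """Assumed plume mass density (kg/m^3) at altitude h_km above the surface."""
    return rho0 * np.exp(-h_km / scale_height_km)

def plume_torque(h_km, v_rel=14e3, cd=2.2, area_m2=20.0, arm_m=1.5):
    """Drag torque (N*m) on the spacecraft; every parameter here is hypothetical."""
    q = 0.5 * plume_density(h_km) * v_rel**2   # dynamic pressure
    return q * cd * area_m2 * arm_m

for h in (25, 50, 100, 200):
    print(f"h = {h:3d} km  ->  torque ~ {plume_torque(h):.3e} N*m")
```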

  19. Multi-scale Material Parameter Identification Using LS-DYNA® and LS-OPT®

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stander, Nielen; Basudhar, Anirban; Basu, Ushnish

    2015-06-15

    Ever-tightening regulations on fuel economy and carbon emissions demand continual innovation in finding ways for reducing vehicle mass. Classical methods for computational mass reduction include sizing, shape and topology optimization. One of the few remaining options for weight reduction can be found in materials engineering and material design optimization. Apart from considering different types of materials by adding material diversity, an appealing option in automotive design is to engineer steel alloys for the purpose of reducing thickness while retaining sufficient strength and ductility required for durability and safety. Such a project was proposed and is currently being executed under the auspices of the United States Automotive Materials Partnership (USAMP) funded by the Department of Energy. Under this program, new steel alloys (Third Generation Advanced High Strength Steel or 3GAHSS) are being designed, tested and integrated with the remaining design variables of a benchmark vehicle Finite Element model. In this project the principal phases identified are (i) material identification, (ii) formability optimization and (iii) multi-disciplinary vehicle optimization. This paper serves as an introduction to the LS-OPT methodology and therefore mainly focuses on the first phase, namely an approach to integrate material identification using material models of different length scales. For this purpose, a multi-scale material identification strategy, consisting of a Crystal Plasticity (CP) material model and a Homogenized State Variable (SV) model, is discussed and demonstrated. The paper concludes with proposals for integrating the multi-scale methodology into the overall vehicle design.

  20. Generation of segmental chips in metal cutting modeled with the PFEM

    NASA Astrophysics Data System (ADS)

    Rodriguez Prieto, J. M.; Carbonell, J. M.; Cante, J. C.; Oliver, J.; Jonsén, P.

    2018-06-01

    The Particle Finite Element Method (PFEM), a Lagrangian finite element method based on a continuous Delaunay re-triangulation of the domain, is used to study machining of Ti6Al4V. In this work the method is revised and applied to study the influence of the cutting speed on the cutting force and the chip formation process. A parametric methodology for the detection and treatment of the rigid tool contact is presented. The adaptive insertion and removal of particles are developed and employed in order to sidestep the difficulties associated with mesh distortion and shear localization, as well as to resolve the fine-scale features of the solution. The performance of PFEM is studied with a set of different two-dimensional orthogonal cutting tests. It is shown that, despite its Lagrangian nature, the proposed combined finite element-particle method is well suited for large deformation metal cutting problems with continuous chip and serrated chip formation.
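
    A minimal sketch of the PFEM bookkeeping the abstract describes: after each Lagrangian update the particle cloud is re-triangulated with a Delaunay algorithm, particles are inserted where triangles become too large and removed where particles crowd together. The spacing thresholds, the toy advection field and the use of scipy.spatial are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def remesh(points, h_min=0.02, h_max=0.12):
    """Re-triangulate, then insert/remove particles to keep spacing in [h_min, h_max]."""
    tri = Delaunay(points)
    new_pts = []
    for simplex in tri.simplices:
        p = points[simplex]
        edges = np.linalg.norm(p - np.roll(p, 1, axis=0), axis=1)
        if edges.max() > h_max:                 # overly large triangle -> add centroid
            new_pts.append(p.mean(axis=0))
    if new_pts:
        points = np.vstack([points, new_pts])
    tree = cKDTree(points)
    drop = {j for i, j in tree.query_pairs(h_min)}   # one of each too-close pair
    keep = [i for i in range(len(points)) if i not in drop]
    points = points[keep]
    return points, Delaunay(points)

pts = np.random.default_rng(1).random((200, 2))
for step in range(5):
    pts = pts + 0.01 * np.c_[np.ones(len(pts)), 0.2 * pts[:, 0]]  # toy advection step
    pts, tri = remesh(pts)
print(len(pts), "particles,", len(tri.simplices), "triangles")
```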

  1. Monitoring of freeze-thaw cycles in concrete using embedded sensors and ultrasonic imaging.

    PubMed

    Ranz, Javier; Aparicio, Sofía; Romero, Héctor; Casati, María Jesús; Molero, Miguel; González, Margarita

    2014-01-29

    This paper deals with the study of damage produced during freeze-thaw (F-T) cycles using two non-destructive measurement approaches-the first approach devoted to continuous monitoring using embedded sensors during the cycles, and the second one, performing ultrasonic imaging before and after the cycles. Both methodologies have been tested in two different types of concrete specimens, with and without air-entraining agents. Using the first measurement approach, the size and distribution of pores were estimated using a thermoporometrical model and continuous measurements of temperature and ultrasonic velocity along cycles. These estimates have been compared with the results obtained using mercury porosimetry testing. In the second approach, the damage due to F-T cycles has been evaluated by automated ultrasonic transmission and pulse-echo inspections made before and after the cycles. With these inspections the variations in the dimensions, velocity and attenuation caused by the accelerated F-T cycles were determined.
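
    A minimal sketch of the thermoporometric conversion implied above: a measured freezing-point depression is mapped to an apparent pore radius with a Gibbs-Thomson (Brun-type) relation. The coefficients are commonly quoted values for the water/ice system and the depression values are hypothetical; neither should be read as the paper's calibration.

```python
import numpy as np

def pore_radius_nm(delta_T, A=64.67, B=0.57):
    """Apparent pore radius (nm) for a freezing-point depression delta_T (K),
    using an assumed Brun-type relation r = A/delta_T + B."""
    delta_T = np.asarray(delta_T, dtype=float)
    return A / delta_T + B

# Hypothetical depressions inferred from the embedded temperature sensors
depressions = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
for dT, r in zip(depressions, pore_radius_nm(depressions)):
    print(f"dT = {dT:4.1f} K  ->  pore radius ~ {r:6.1f} nm")
```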

  2. Generation of segmental chips in metal cutting modeled with the PFEM

    NASA Astrophysics Data System (ADS)

    Rodriguez Prieto, J. M.; Carbonell, J. M.; Cante, J. C.; Oliver, J.; Jonsén, P.

    2017-09-01

    The Particle Finite Element Method, a lagrangian finite element method based on a continuous Delaunay re-triangulation of the domain, is used to study machining of Ti6Al4V. In this work the method is revised and applied to study the influence of the cutting speed on the cutting force and the chip formation process. A parametric methodology for the detection and treatment of the rigid tool contact is presented. The adaptive insertion and removal of particles are developed and employed in order to sidestep the difficulties associated with mesh distortion, shear localization as well as for resolving the fine-scale features of the solution. The performance of PFEM is studied with a set of different two-dimensional orthogonal cutting tests. It is shown that, despite its Lagrangian nature, the proposed combined finite element-particle method is well suited for large deformation metal cutting problems with continuous chip and serrated chip formation.

  3. IT Operational Risk Measurement Model Based on Internal Loss Data of Banks

    NASA Astrophysics Data System (ADS)

    Hao, Xiaoling

    The business operations of banks rely increasingly on information technology (IT), and the most important role of IT is to guarantee the operational continuity of business processes. Therefore, IT risk management efforts need to be seen from the perspective of operational continuity. Traditional IT risk studies have focused on IT asset-based risk analysis and risk-matrix-based qualitative risk evaluation. In practice, IT risk management in the banking industry is still limited to the IT department and is not integrated into business risk management, which causes the two departments to work in isolation. This paper presents an improved methodology for dealing with IT operational risk. It adopts a quantitative measurement method, based on internal business loss data about IT events, and uses Monte Carlo simulation to predict potential losses. We establish the correlation between IT resources and business processes so that IT and business risk management can work synergistically.
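
    A minimal sketch of the quantitative step described above, in the spirit of a loss-distribution approach: a Poisson frequency and lognormal severity are fitted to internal IT-event loss records and Monte Carlo simulation aggregates them into an annual loss distribution. The distributional choices and the sample data are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical internal loss records: IT events per year and per-event losses
events_per_year = np.array([8, 11, 9, 13, 10])
loss_amounts = rng.lognormal(mean=10.0, sigma=1.2, size=51)

lam = events_per_year.mean()                               # Poisson frequency
mu, sigma = np.log(loss_amounts).mean(), np.log(loss_amounts).std()

n_sim = 100_000
annual_losses = np.array([
    rng.lognormal(mu, sigma, size=rng.poisson(lam)).sum()  # sum of losses in one year
    for _ in range(n_sim)
])

print(f"expected annual IT operational loss: {annual_losses.mean():,.0f}")
print(f"99.9% simulated quantile (VaR):      {np.quantile(annual_losses, 0.999):,.0f}")
```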

  4. Pandemic influenza and critical infrastructure dependencies: possible impact on hospitals.

    PubMed

    Itzwerth, Ralf L; Macintyre, C Raina; Shah, Smita; Plant, Aileen J

    2006-11-20

    Hospitals will be particularly challenged when pandemic influenza spreads. Within the health sector in general, existing pandemic plans focus on health interventions to control outbreaks. The critical relationship between the health sector and other sectors is not well understood and addressed. Hospitals depend on critical infrastructure external to the organisation itself. Existing plans do not adequately consider the complexity and interdependency of systems upon which hospitals rely. The failure of one such system can trigger a failure of another, causing cascading breakdowns. Health is only one of the many systems that struggle at maximum capacity during "normal" times, as current business models operate with no or minimal "excess" staff and have become irreducible operations. This makes interconnected systems highly vulnerable to acute disruptions, such as a pandemic. Companies use continuity plans and highly regulated business continuity management to overcome process interruptions. This methodology can be applied to hospitals to minimise the impact of a pandemic.

  5. Monitoring of Freeze-Thaw Cycles in Concrete Using Embedded Sensors and Ultrasonic Imaging

    PubMed Central

    Ranz, Javier; Aparicio, Sofía; Romero, Héctor; Casati, María Jesús; Molero, Miguel; González, Margarita

    2014-01-01

    This paper deals with the study of damage produced during freeze-thaw (F-T) cycles using two non-destructive measurement approaches—the first approach devoted to continuous monitoring using embedded sensors during the cycles, and the second one, performing ultrasonic imaging before and after the cycles. Both methodologies have been tested in two different types of concrete specimens, with and without air-entraining agents. Using the first measurement approach, the size and distribution of pores were estimated using a thermoporometrical model and continuous measurements of temperature and ultrasonic velocity along cycles. These estimates have been compared with the results obtained using mercury porosimetry testing. In the second approach, the damage due to F-T cycles has been evaluated by automated ultrasonic transmission and pulse-echo inspections made before and after the cycles. With these inspections the variations in the dimensions, velocity and attenuation caused by the accelerated F-T cycles were determined. PMID:24481231

  6. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ

    PubMed Central

    2012-01-01

    Background The syntheses of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. This study aims to develop a framework for reporting the synthesis of qualitative health research. Methods We conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesizing qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of year of publication, methods and methodologies, and health topics. We removed items that were duplicated, impractical to assess, and rephrased items for clarity. Results The Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings. Conclusions The ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement. PMID:23185978

  7. Methodology to predict a maximum follow-up period for breast cancer patients without significantly reducing the chance of detecting a local recurrence

    NASA Astrophysics Data System (ADS)

    Mould, Richard F.; Asselain, Bernard; DeRycke, Yann

    2004-03-01

    For breast cancer, where the prognosis of early-stage disease is very good and local recurrences, when they do occur, can present several years after treatment, the hospital resources required for annual follow-up examinations of what can be several hundred patients are financially significant. If, therefore, there is some method to estimate a maximum length of follow-up Tmax necessary, then cost savings of physicians' time as well as outpatient workload reductions can be achieved. In modern oncology, where expenses continue to increase exponentially due to staff salaries and the cost of chemotherapy drugs and of new treatment and imaging technology, the economic situation can no longer be ignored. The methodology of parametric modelling based on the lognormal distribution is described, showing that useful estimates for Tmax can be made by making a trade-off between Tmax and the fraction of patients who will experience a delay in detection of their local recurrence. This trade-off depends on the chosen tail of the lognormal. The methodology is described for stage T1 and T2 breast cancer, and it is found that Tmax = 4 years, which is a significant reduction from the usual maximum of 10 years of follow-up employed by many hospitals for breast cancer patients. The methodology is equally applicable for cancers at other sites where the prognosis is good and some local recurrences may not occur until several years post-treatment.
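
    A minimal sketch of the trade-off described above: if recurrence times are modelled as lognormal, Tmax is the quantile beyond which an accepted fraction of local recurrences would be detected late. The median and shape parameter below are hypothetical, not the fitted values for T1/T2 breast cancer.

```python
from scipy.stats import lognorm

median_years, sigma = 1.5, 0.8                 # hypothetical lognormal parameters
dist = lognorm(s=sigma, scale=median_years)    # scale = exp(mu) = median

for accepted_delay_fraction in (0.10, 0.05, 0.02, 0.01):
    t_max = dist.ppf(1.0 - accepted_delay_fraction)
    print(f"accept {accepted_delay_fraction:4.0%} late detections "
          f"-> Tmax ~ {t_max:4.1f} years of follow-up")
```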

  8. Studying the effect of cracks on the ultrasonic wave propagation in a two dimensional gearbox finite element model

    NASA Astrophysics Data System (ADS)

    Ozevin, Didem; Fazel, Hossein; Cox, Justin; Hardman, William; Kessler, Seth S.; Timmons, Alan

    2014-04-01

    Gearbox components of aerospace structures are typically made of brittle materials with high fracture toughness, but are susceptible to fatigue failure under continuous cyclic loading. Structural Health Monitoring (SHM) methods are used to monitor the crack growth in gearbox components. Damage detection methodologies developed in laboratory-scale experiments may not represent the actual gearbox structural configuration, and are usually not applicable to real applications, as the vibration and wave properties depend on the material, structural layers and thicknesses. Also, the sensor types and locations are key factors for the frequency content of ultrasonic waves, which are essential features for pattern recognition algorithm development in noisy environments. Therefore, a deterministic damage detection methodology that considers all the variables influencing the waveform signature should be applied in the preliminary computation before any experimental test matrix is defined. In order to achieve this goal, we developed two-dimensional finite element models of a gearbox cross section (front view) and shaft section. The cross section model consists of steel revolving teeth, a thin layer of oil, and a retention plate. An ultrasonic wave up to 1 MHz frequency is generated, and waveform histories along the gearbox are recorded. The received waveforms under pristine and cracked conditions are compared in order to analyze the crack influence on wave propagation in the gearbox, which can be utilized by both active and passive SHM methods.

  9. Interconnection Assessment Methodology and Cost Benefit Analysis for High-Penetration PV Deployment in the Arizona Public Service System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baggu, Murali; Giraldez, Julieta; Harris, Tom

    In an effort to better understand the impacts of high penetrations of photovoltaic (PV) generators on distribution systems, Arizona Public Service and its partners completed a multi-year project to develop the tools and knowledge base needed to safely and reliably integrate high penetrations of utility- and residential-scale PV. Building upon the APS Community Power Project-Flagstaff Pilot, this project investigates the impact of PV on a representative feeder in northeast Flagstaff. To quantify and catalog the effects of the estimated 1.3 MW of PV that will be installed on the feeder (both smaller units at homes and large, centrally located systems), high-speed weather and electrical data acquisition systems and digital 'smart' meters were designed and installed to facilitate monitoring and to build and validate comprehensive, high-resolution models of the distribution system. These models are being developed to analyze the impacts of PV on distribution circuit protection systems (including coordination and anti-islanding), predict voltage regulation and phase balance issues, and develop volt/VAr control schemes. This paper continues from a paper presented at the 2014 IEEE PVSC conference that described feeder model evaluation and high penetration advanced scenario analysis, specifically feeder reconfiguration. This paper presents results from Phase 5 of the project. Specifically, the paper discusses tool automation, interconnection assessment methodology, and cost benefit analysis.

  10. Extensions to regret-based decision curve analysis: an application to hospice referral for terminal patients.

    PubMed

    Tsalatsanis, Athanasios; Barnes, Laura E; Hozo, Iztok; Djulbegovic, Benjamin

    2011-12-23

    Despite the well documented advantages of hospice care, most terminally ill patients do not reap the maximum benefit from hospice services, with the majority of them receiving hospice care either prematurely or with delay. Decision systems to improve the hospice referral process are sorely needed. We present a novel theoretical framework that is based on well-established methodologies of prognostication and decision analysis to assist with the hospice referral process for terminally ill patients. We linked the SUPPORT statistical model, widely regarded as one of the most accurate models for prognostication of terminally ill patients, with the recently developed regret-based decision curve analysis (regret DCA). We extend the regret DCA methodology to consider harms associated with the prognostication test as well as harms and effects of the management strategies. In order to enable patients and physicians to make these complex decisions in real-time, we developed an easily accessible web-based decision support system available at the point of care. The web-based decision support system facilitates the hospice referral process in three steps. First, the patient or surrogate is interviewed to elicit his or her personal preferences regarding the continuation of life-sustaining treatment vs. palliative care. Then, regret DCA is employed to identify the best strategy for the particular patient in terms of the threshold probability at which he or she is indifferent between continuation of treatment and hospice referral. Finally, if necessary, the probabilities of survival and death for the particular patient are computed based on the SUPPORT prognostication model and contrasted with the patient's threshold probability. The web-based design of the CDSS enables patients, physicians, and family members to participate in the decision process from anywhere internet access is available. We present a theoretical framework to facilitate the hospice referral process. Further rigorous clinical evaluation, including testing in a prospective randomized controlled trial, is required and planned.
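
    A minimal sketch of the three-step decision logic described above, using the generic threshold-probability relation (threshold odds equal to harm over benefit) expressed in terms of regret. The elicitation wording, the regret ratio and the hypothetical SUPPORT-style probability are illustrative assumptions, not the authors' web-based tool.

```python
def threshold_probability(regret_ratio: float) -> float:
    """regret_ratio: how many times greater the patient's regret of failing to
    refer a dying patient is, relative to the regret of referring a patient who
    would have survived. Returns the death probability p_t at which the patient
    is indifferent between the two strategies."""
    return 1.0 / (1.0 + regret_ratio)

def recommend(p_death_6mo: float, regret_ratio: float) -> str:
    p_t = threshold_probability(regret_ratio)
    if p_death_6mo >= p_t:
        return f"hospice referral (p = {p_death_6mo:.2f} >= p_t = {p_t:.2f})"
    return f"continue treatment (p = {p_death_6mo:.2f} < p_t = {p_t:.2f})"

# Example: regret of a missed referral judged 3x worse than an unnecessary one,
# with a hypothetical predicted 6-month death probability of 0.55.
print(recommend(p_death_6mo=0.55, regret_ratio=3.0))
```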

  11. Extensions to Regret-based Decision Curve Analysis: An application to hospice referral for terminal patients

    PubMed Central

    2011-01-01

    Background Despite the well documented advantages of hospice care, most terminally ill patients do not reap the maximum benefit from hospice services, with the majority of them receiving hospice care either prematurely or with delay. Decision systems to improve the hospice referral process are sorely needed. Methods We present a novel theoretical framework that is based on well-established methodologies of prognostication and decision analysis to assist with the hospice referral process for terminally ill patients. We linked the SUPPORT statistical model, widely regarded as one of the most accurate models for prognostication of terminally ill patients, with the recently developed regret-based decision curve analysis (regret DCA). We extend the regret DCA methodology to consider harms associated with the prognostication test as well as harms and effects of the management strategies. In order to enable patients and physicians to make these complex decisions in real-time, we developed an easily accessible web-based decision support system available at the point of care. Results The web-based decision support system facilitates the hospice referral process in three steps. First, the patient or surrogate is interviewed to elicit his or her personal preferences regarding the continuation of life-sustaining treatment vs. palliative care. Then, regret DCA is employed to identify the best strategy for the particular patient in terms of the threshold probability at which he or she is indifferent between continuation of treatment and hospice referral. Finally, if necessary, the probabilities of survival and death for the particular patient are computed based on the SUPPORT prognostication model and contrasted with the patient's threshold probability. The web-based design of the CDSS enables patients, physicians, and family members to participate in the decision process from anywhere internet access is available. Conclusions We present a theoretical framework to facilitate the hospice referral process. Further rigorous clinical evaluation, including testing in a prospective randomized controlled trial, is required and planned. PMID:22196308

  12. "If You Have to Ask, You'll Never Know": Effects of Specialised Stylistic Expertise on Predictive Processing of Music

    PubMed Central

    Vuust, Peter; Pearce, Marcus

    2016-01-01

    Musical expertise entails meticulous stylistic specialisation and enculturation. Even so, research on musical training effects has focused on generalised comparisons between musicians and non-musicians, and cross-cultural work addressing specialised expertise has traded cultural specificity and sensitivity for other methodological limitations. This study aimed to experimentally dissociate the effects of specialised stylistic training and general musical expertise on the perception of melodies. Non-musicians and professional musicians specialising in classical music or jazz listened to sampled renditions of saxophone solos improvised by Charlie Parker in the bebop style. Ratings of explicit uncertainty and expectedness for different continuations of each melodic excerpt were collected. An information-theoretic model of expectation enabled selection of stimuli affording highly certain continuations in the bebop style, but highly uncertain continuations in the context of general tonal expectations, and vice versa. The results showed that expert musicians have acquired probabilistic characteristics of music influencing their experience of expectedness and predictive uncertainty. While classical musicians had internalised key aspects of the bebop style implicitly, only jazz musicians’ explicit uncertainty ratings reflected the computational estimates, and jazz-specific expertise modulated the relationship between explicit and inferred uncertainty data. In spite of this, there was no evidence that non-musicians and classical musicians used a stylistically irrelevant cognitive model of general tonal music providing support for the theory of cognitive firewalls between stylistic models in predictive processing of music. PMID:27732612

  13. "If You Have to Ask, You'll Never Know": Effects of Specialised Stylistic Expertise on Predictive Processing of Music.

    PubMed

    Hansen, Niels Chr; Vuust, Peter; Pearce, Marcus

    2016-01-01

    Musical expertise entails meticulous stylistic specialisation and enculturation. Even so, research on musical training effects has focused on generalised comparisons between musicians and non-musicians, and cross-cultural work addressing specialised expertise has traded cultural specificity and sensitivity for other methodological limitations. This study aimed to experimentally dissociate the effects of specialised stylistic training and general musical expertise on the perception of melodies. Non-musicians and professional musicians specialising in classical music or jazz listened to sampled renditions of saxophone solos improvised by Charlie Parker in the bebop style. Ratings of explicit uncertainty and expectedness for different continuations of each melodic excerpt were collected. An information-theoretic model of expectation enabled selection of stimuli affording highly certain continuations in the bebop style, but highly uncertain continuations in the context of general tonal expectations, and vice versa. The results showed that expert musicians have acquired probabilistic characteristics of music influencing their experience of expectedness and predictive uncertainty. While classical musicians had internalised key aspects of the bebop style implicitly, only jazz musicians' explicit uncertainty ratings reflected the computational estimates, and jazz-specific expertise modulated the relationship between explicit and inferred uncertainty data. In spite of this, there was no evidence that non-musicians and classical musicians used a stylistically irrelevant cognitive model of general tonal music providing support for the theory of cognitive firewalls between stylistic models in predictive processing of music.

  14. Continuous Tidal Streamflow and Gage-Height Data for Bass and Cinder Creeks on Kiawah Island, South Carolina, September 2007

    USGS Publications Warehouse

    Conrads, Paul; Erbland, John W.

    2009-01-01

    A three-dimensional model of Bass and Cinder Creeks on Kiawah Island, South Carolina, was developed to evaluate methodologies for determining fecal coliform total maximum daily loads for shellfish waters. To calibrate the model, two index-velocity sites on the creeks were instrumented with continuous acoustic velocity meters and water-level sensors to compute a 21-day continuous record of tidal streamflows. In addition to monitoring tidal cycles, streamflow measurements were made at the index-velocity sites, and tidal-cycle streamflow measurements were made at the mouth of Bass Creek and on the Stono River to characterize the streamflow dynamics near the ocean boundary of the three-dimensional model at the beginning, September 6, 2007, and end, September 26, 2007, of the index-velocity meter deployment. The maximum floodtide and ebbtide measured on the Stono River by the mouth of Bass Creek for the two measurements were -155,000 and 170,000 cubic feet per second (ft3/s). At the mouth of Bass Creek, the maximum floodtide and ebbtide measurements during the 2 measurement days were +/-10,200 ft3/s. Tidal streamflows for the 21-day deployment on Bass Creek ranged from -2,510 ft3/s for an incoming tide to 4,360 ft3/s for an outgoing tide. On Cinder Creek, the incoming and outgoing tide varied from -2,180 to 2,400 ft3/s during the same period.

  15. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models in order to maintain computational tractability. Comparative case studies against two deterministic approaches (one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts) are conducted in low and high wind penetration scenarios to highlight the advantages of the proposed methodology. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

  16. Modelling unsupervised online-learning of artificial grammars: linking implicit and statistical learning.

    PubMed

    Rohrmeier, Martin A; Cross, Ian

    2014-07-01

    Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.
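
    A minimal sketch of the evaluation loop described above: a bigram model scores each test string (a forced grammaticality judgment) and then learns from that same string, so that exposure during testing itself shifts later judgments. The toy grammar, add-one smoothing with an assumed alphabet size, and the decision threshold are illustrative assumptions, not the authors' models.

```python
from collections import defaultdict
import math, random

class Bigram:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, s):
        for a, b in zip("#" + s, s):          # '#' marks the sequence start
            self.counts[a][b] += 1

    def logprob(self, s):
        lp = 0.0
        for a, b in zip("#" + s, s):
            total = sum(self.counts[a].values())
            lp += math.log((self.counts[a][b] + 1) / (total + 26))  # add-1 smoothing
        return lp / len(s)                    # mean per-symbol log probability

random.seed(0)
grammatical = ["MTV", "MTTV", "VXV", "VXXV", "MVT"]   # toy "grammar"
ungrammatical = ["TVM", "XXM", "VTX", "TMX", "XVT"]
test = grammatical * 3 + ungrammatical * 3
random.shuffle(test)

model = Bigram()
threshold = math.log(1 / 26)                  # arbitrary cut-off for "grammatical"
for item in test:
    judged = model.logprob(item) > threshold  # forced binary judgment
    model.learn(item)                         # unsupervised online update afterwards
    print(f"{item:5s} judged {'grammatical' if judged else 'ungrammatical'}")
```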

  17. Removal of singularity in radial Langmuir probe models for non-zero ion temperature

    NASA Astrophysics Data System (ADS)

    Regodón, Guillermo Fernando; Fernández Palop, José Ignacio; Tejero-del-Caz, Antonio; Díaz-Cabrera, Juan Manuel; Carmona-Cabezas, Rafael; Ballesteros, Jerónimo

    2017-10-01

    We solve a radial theoretical model that describes the ion sheath around a cylindrical Langmuir probe with finite, non-zero ion temperature, in which a singularity at an a priori unknown point prevents direct integration. The singularity appears naturally in fluid models when the velocity of the ions reaches the local ion speed of sound. The solutions are smooth and continuous and are valid from the plasma to the probe with no need for asymptotic matching. The solutions that we present are valid for any value of the positive ion to electron temperature ratio and for any constant polytropic coefficient. The model is numerically solved to obtain the electric potential and the ion population density profiles for any given positive ion current collected by the probe. The ion-current to probe-voltage characteristic curves and the Sonin plot are calculated in order to use the results of the model in plasma diagnosis. The proposed methodology is adaptable to other geometries and to the presence of other presheath mechanisms.

  18. The rise of machine consciousness: studying consciousness with computational models.

    PubMed

    Reggia, James A

    2013-08-01

    Efforts to create computational models of consciousness have accelerated over the last two decades, creating a field that has become known as artificial consciousness. There have been two main motivations for this controversial work: to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness. This review begins by briefly explaining some of the concepts and terminology used by investigators working on machine consciousness, and summarizes key neurobiological correlates of human consciousness that are particularly relevant to past computational studies. Models of consciousness developed over the last twenty years are then surveyed. These models are largely found to fall into five categories based on the fundamental issue that their developers have selected as being most central to consciousness: a global workspace, information integration, an internal self-model, higher-level representations, or attention mechanisms. For each of these five categories, an overview of past work is given, a representative example is presented in some detail to illustrate the approach, and comments are provided on the contributions and limitations of the methodology. Three conclusions are offered about the state of the field based on this review: (1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness, (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations, and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible. The paper concludes by discussing the importance of continuing work in this area, considering the ethical issues it raises, and making predictions concerning future developments. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. The Wide-Field Imaging Interferometry Testbed: Enabling Techniques for High Angular Resolution Astronomy

    NASA Technical Reports Server (NTRS)

    Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.

    2007-01-01

    The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide-field of view imaging interferometry, using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have provided simple demonstrations of the methodology already, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.

  20. The family living the child recovery process after hospital discharge.

    PubMed

    Pinto, Júlia Peres; Mandetta, Myriam Aparecida; Ribeiro, Circéa Amalia

    2015-01-01

    To understand the meaning attributed by the family to its experience in the recovery process of a child affected by an acute disease after discharge, and to develop a theoretical model of this experience. Symbolic interactionism was adopted as a theoretical reference, and grounded theory was adopted as a methodological reference. Data were collected through interviews and participant observation with 11 families, totaling 15 interviews. A theoretical model consisting of two interactive phenomena was formulated from the analysis: Mobilizing to restore functional balance and Suffering from the possibility of a child's readmission. The family remains alert to identify early changes in the child's health, in an attempt to avoid rehospitalization. The effects of the disease and hospitalization continue to manifest in family functioning, causing suffering even after the child's discharge and recovery.

  1. The Continued Salience of Methodological Issues for Measuring Psychiatric Disorders in International Surveys

    ERIC Educational Resources Information Center

    Tausig, Mark; Subedi, Janardan; Broughton, Christopher; Pokimica, Jelena; Huang, Yinmei; Santangelo, Susan L.

    2011-01-01

    We investigated the extent to which methodological concerns explicitly addressed by the designers of the World Mental Health Surveys persist in the results that were obtained using the WMH-CIDI instrument. We compared rates of endorsement of mental illness symptoms in the United States (very high) and Nepal (very low) as they were affected by…

  2. Biochemical Assays of Cultured Cells

    NASA Technical Reports Server (NTRS)

    Barlow, G. H.

    1985-01-01

    Subpopulations of human embryonic kidney cells isolated from continuous flow electrophoresis experiments performed at McDonnell Douglas and on STS-8 have been analyzed. These analyses have included plasminogen activator assays using an indirect methodology on fibrin plates and a direct methodology using chromogenic substrates. Immunological studies were performed, and the conditioned media were analyzed for erythropoietin activity and human granulocyte colony stimulating factor (HGCSF) activity.

  3. Utilization of Lean Methodology to Refine Hiring Practices in a Clinical Research Center Setting

    ERIC Educational Resources Information Center

    Johnson, Marcus R.; Bullard, A. Jasmine; Whitley, R. Lawrence

    2018-01-01

    Background & Aims: Lean methodology is a continuous process improvement approach that is used to identify and eliminate unnecessary steps (or waste) in a process. It increases the likelihood that the highest level of value possible is provided to the end-user, or customer, in the form of the product delivered through that process. Lean…

  4. An Effective Methodology for Processing and Analyzing Large, Complex Spacecraft Data Streams

    ERIC Educational Resources Information Center

    Teymourlouei, Haydar

    2013-01-01

    The emerging large datasets have made efficient data processing a much more difficult task for the traditional methodologies. Invariably, datasets continue to increase rapidly in size with time. The purpose of this research is to give an overview of some of the tools and techniques that can be utilized to manage and analyze large datasets. We…

  5. Journal Benchmarking for Strategic Publication Management and for Improving Journal Positioning in the World Ranking Systems

    ERIC Educational Resources Information Center

    Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.

    2014-01-01

    Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/ Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…

  6. Comparison of 250 MHz electron spin echo and continuous wave oxygen EPR imaging methods for in vivo applications

    PubMed Central

    Epel, Boris; Sundramoorthy, Subramanian V.; Barth, Eugene D.; Mailer, Colin; Halpern, Howard J.

    2011-01-01

    Purpose: The authors compare two electron paramagnetic resonance imaging modalities at 250 MHz to determine advantages and disadvantages of those modalities for in vivo oxygen imaging. Methods: Electron spin echo (ESE) and continuous wave (CW) methodologies were used to obtain three-dimensional images of a narrow linewidth, water soluble, nontoxic oxygen-sensitive trityl molecule OX063 in vitro and in vivo. The authors also examined sequential images obtained from the same animal injected intravenously with the trityl spin probe to determine the temporal stability of the methodologies. Results: A study of phantoms with different oxygen concentrations revealed a threefold advantage of the ESE methodology in terms of reduced imaging time and more precise oxygen resolution for samples with less than 70 torr oxygen partial pressure. Above approximately 100 torr, CW performed better. The images produced by both methodologies showed pO2 distributions with similar mean values. However, ESE images demonstrated superior performance in low pO2 regions while missing voxels in high pO2 regions. Conclusions: ESE and CW have different areas of applicability. ESE is superior for hypoxia studies in tumors. PMID:21626937

  7. System Dynamics Modeling for Proactive Intelligence

    DTIC Science & Technology

    2010-01-01

    [Abstract not captured; only table-of-contents fragments survived extraction, including the section titles "Modeling Resources as Part of an Integrated Multi-Methodology System" and "Formalizing Pro-Active ... Observable Data With and Without Simulation Analysis," and the figure titles "Summary of Probe Methodology and Results ... Strategy" and "Overview of Methodology."]

  8. A systems modeling methodology for evaluation of vehicle aggressivity in the automotive accident environment

    DOT National Transportation Integrated Search

    2001-03-05

    A systems modeling approach is presented for assessment of harm in the automotive accident environment. The methodology is presented in general form and then applied to evaluate vehicle aggressivity in frontal crashes. The methodology consists of par...

  9. SNPs selection using support vector regression and genetic algorithms in GWAS

    PubMed Central

    2014-01-01

    Introduction This paper proposes a new methodology to simultaneously select the most relevant SNP markers for the characterization of any measurable phenotype described by a continuous variable, using Support Vector Regression with the Pearson Universal kernel (PUK) as the fitness function of a binary genetic algorithm. The proposed methodology is multi-attribute, considering several markers simultaneously to explain the phenotype, and is based jointly on statistical tools, machine learning and computational intelligence. Results The suggested method showed potential in simulated database 1, with additive effects only, and in the real database. In this simulated database, with a total of 1,000 markers, 7 of which have a major effect on the phenotype while the other 993 SNPs represent noise, the method identified 21 markers. Of this total, 5 are among the 7 relevant SNPs, but 16 are false positives. In the real database, initially with 50,752 SNPs, the method reduced the set to 3,073 markers, increasing the accuracy of the model. In simulated database 2, with additive effects and interactions (epistasis), the proposed method matched the methodology most commonly used in GWAS. Conclusions The method suggested in this paper demonstrates its effectiveness in explaining the real phenotype (PTA for milk): with the application of the wrapper based on a genetic algorithm and Support Vector Regression with the Pearson Universal kernel, many redundant markers were eliminated, increasing the prediction ability and accuracy of the model on the real database without quality control filters. The PUK demonstrated that it can replicate the performance of linear and RBF kernels. PMID:25573332
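
    A minimal sketch of the wrapper described above: a binary genetic algorithm whose fitness is cross-validated Support Vector Regression performance on the selected SNP subset. scikit-learn does not ship a Pearson Universal kernel, so an RBF kernel stands in for PUK here; the synthetic genotype data, GA settings and fitness definition are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_animals, n_snps = 120, 200
X = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 genotypes
beta = np.zeros(n_snps); beta[:7] = rng.normal(1.0, 0.2, 7)     # 7 causal SNPs
y = X @ beta + rng.normal(0, 1.0, n_animals)                    # continuous phenotype

def fitness(mask):
    """Cross-validated R2 of an SVR trained on the SNPs selected by the mask."""
    if mask.sum() == 0:
        return -np.inf
    model = SVR(kernel="rbf", C=10.0)
    return cross_val_score(model, X[:, mask.astype(bool)], y, cv=3, scoring="r2").mean()

pop = rng.random((20, n_snps)) < 0.05            # sparse initial chromosomes
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_snps)
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(n_snps) < 0.01                  # bit-flip mutation
        children.append(np.logical_xor(child, flip))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected SNP indices:", np.flatnonzero(best))
```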

  10. The ISO 50001 Impact Estimator Tool (IET 50001 V1.1.4) - User Guide and Introduction to the ISO 50001 Impacts Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Therkelsen, Peter L.; Rao, Prakash; Aghajanzadeh, Arian

    ISO 50001-Energy management systems – Requirements with guidance for use, is an internationally developed standard that provides organizations with a flexible framework for implementing an energy management system (EnMS) with the goal of continual energy performance improvement. The ISO 50001 standard was first published in 2011 and has since seen growth in the number of certificates issued around the world, primarily in the industrial (agriculture, manufacturing, and mining) and service (commercial) sectors. Policy makers in many regions and countries are looking to or are already using ISO 50001 as a basis for energy efficiency, carbon reduction, and other energy performance improvement schemes. The Impact Estimator Tool 50001 (IET 50001 Tool) is a computational model developed to assist researchers and policy makers determine the potential impact of ISO 50001 implementation in the industrial and service (commercial) sectors for a given region or country. The IET 50001 Tool is based upon a methodology initially developed by the Lawrence Berkeley National Laboratory that has been improved upon and vetted by a group of international researchers. By using a commonly accepted and transparent methodology, users of the IET 50001 Tool can easily and clearly communicate the potential impact of ISO 50001 for a region or country.

  11. An Overview of Prognosis Health Management Research at Glenn Research Center for Gas Turbine Engine Structures With Special Emphasis on Deformation and Damage Modeling

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.

    2009-01-01

    Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include (1) diagnostic/detection methodology, (2) prognosis/lifing methodology, (3) diagnostic/prognosis linkage, (4) experimental validation, and (5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multimechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.

  12. An Overview of Prognosis Health Management Research at GRC for Gas Turbine Engine Structures With Special Emphasis on Deformation and Damage Modeling

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Goldberg, Robert K.; Lerch, Bradley A.; Saleeb, Atef F.

    2009-01-01

    Herein a general, multimechanism, physics-based viscoelastoplastic model is presented in the context of an integrated diagnosis and prognosis methodology which is proposed for structural health monitoring, with particular applicability to gas turbine engine structures. In this methodology, diagnostics and prognostics will be linked through state awareness variable(s). Key technologies which comprise the proposed integrated approach include 1) diagnostic/detection methodology, 2) prognosis/lifing methodology, 3) diagnostic/prognosis linkage, 4) experimental validation and 5) material data information management system. A specific prognosis lifing methodology, experimental characterization and validation and data information management are the focal point of current activities being pursued within this integrated approach. The prognostic lifing methodology is based on an advanced multi-mechanism viscoelastoplastic model which accounts for both stiffness and/or strength reduction damage variables. Methods to characterize both the reversible and irreversible portions of the model are discussed. Once the multiscale model is validated the intent is to link it to appropriate diagnostic methods to provide a full-featured structural health monitoring system.

  13. A Methodological Framework to Estimate the Site Fidelity of Tagged Animals Using Passive Acoustic Telemetry

    PubMed Central

    Capello, Manuela; Robert, Marianne; Soria, Marc; Potin, Gael; Itano, David; Holland, Kim; Deneubourg, Jean-Louis; Dagorn, Laurent

    2015-01-01

    The rapid expansion of the use of passive acoustic telemetry technologies has facilitated unprecedented opportunities for studying the behavior of marine organisms in their natural environment. This technological advance would greatly benefit from the parallel development of dedicated methodologies accounting for the variety of timescales involved in the remote detection of tagged animals related to instrumental, environmental and behavioral events. In this paper we propose a methodological framework for estimating the site fidelity (“residence times”) of acoustic tagged animals at different timescales, based on the survival analysis of continuous residence times recorded at multiple receivers. Our approach is validated through modeling and applied on two distinct datasets obtained from a small coastal pelagic species (bigeye scad, Selar crumenophthalmus) and a large, offshore pelagic species (yellowfin tuna, Thunnus albacares), which show very distinct spatial scales of behavior. The methodological framework proposed herein allows estimating the most appropriate temporal scale for processing passive acoustic telemetry data depending on the scientific question of interest. Our method provides residence times free of the bias inherent to environmental and instrumental noise that can be used to study the small scale behavior of acoustic tagged animals. At larger timescales, it can effectively identify residence times that encompass the diel behavioral excursions of fish out of the acoustic detection range. This study provides a systematic framework for the analysis of passive acoustic telemetry data that can be employed for the comparative study of different species and study sites. The same methodology can be used each time discrete records of animal detections of any nature are employed for estimating the site fidelity of an animal at different timescales. PMID:26261985

  14. A Methodological Framework to Estimate the Site Fidelity of Tagged Animals Using Passive Acoustic Telemetry.

    PubMed

    Capello, Manuela; Robert, Marianne; Soria, Marc; Potin, Gael; Itano, David; Holland, Kim; Deneubourg, Jean-Louis; Dagorn, Laurent

    2015-01-01

    The rapid expansion of the use of passive acoustic telemetry technologies has facilitated unprecedented opportunities for studying the behavior of marine organisms in their natural environment. This technological advance would greatly benefit from the parallel development of dedicated methodologies accounting for the variety of timescales involved in the remote detection of tagged animals related to instrumental, environmental and behavioral events. In this paper we propose a methodological framework for estimating the site fidelity ("residence times") of acoustic tagged animals at different timescales, based on the survival analysis of continuous residence times recorded at multiple receivers. Our approach is validated through modeling and applied on two distinct datasets obtained from a small coastal pelagic species (bigeye scad, Selar crumenophthalmus) and a large, offshore pelagic species (yellowfin tuna, Thunnus albacares), which show very distinct spatial scales of behavior. The methodological framework proposed herein allows estimating the most appropriate temporal scale for processing passive acoustic telemetry data depending on the scientific question of interest. Our method provides residence times free of the bias inherent to environmental and instrumental noise that can be used to study the small scale behavior of acoustic tagged animals. At larger timescales, it can effectively identify residence times that encompass the diel behavioral excursions of fish out of the acoustic detection range. This study provides a systematic framework for the analysis of passive acoustic telemetry data that can be employed for the comparative study of different species and study sites. The same methodology can be used each time discrete records of animal detections of any nature are employed for estimating the site fidelity of an animal at different timescales.
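
    A minimal sketch of the residence-time step described above: detections at a receiver are merged into continuous residence times whenever the gap between consecutive detections falls below a chosen timescale, and the resulting durations are summarised with a Kaplan-Meier estimator. The detection times, the candidate gap thresholds and the absence of censoring are all assumptions; this is not the authors' survival-analysis pipeline.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
# Hypothetical detection times (minutes) of one tagged fish at one receiver
detections = np.sort(rng.choice(np.arange(0, 5000), size=400, replace=False)).astype(float)

def residence_times(times, max_gap):
    """Split a detection series into continuous residence times (same units)."""
    gaps = np.diff(times)
    breaks = np.where(gaps > max_gap)[0]
    starts = np.r_[0, breaks + 1].astype(int)
    ends = np.r_[breaks, len(times) - 1].astype(int)
    return times[ends] - times[starts]

for max_gap in (5, 30, 120):                        # candidate timescales (minutes)
    durations = residence_times(detections, max_gap)
    km = KaplanMeierFitter().fit(durations + 1e-9)  # all residence times observed
    print(f"max gap {max_gap:4d} min: {len(durations):3d} residence times, "
          f"median = {km.median_survival_time_:.1f} min")
```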

  15. The methodology for modeling queuing systems using Petri nets

    NASA Astrophysics Data System (ADS)

    Kotyrba, Martin; Gaj, Jakub; Tvarůžka, Matouš

    2017-07-01

    This paper deals with the use of Petri nets in the modeling and simulation of queuing systems. The first part explains the basic concepts and properties of Petri nets and queuing systems. The practical part describes the proposed methodology for modeling queuing systems using Petri nets and tests it on specific cases.
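
    For illustration only, a minimal token-game sketch of a Petri net for a single-server queue follows; the place and transition names are assumptions, not the structure used in the paper, and timing is omitted.

      # Minimal token-game sketch (assumed structure, not the paper's model): a Petri
      # net for a single-server queue. Place/transition names are illustrative.
      import random

      places = {"queue": 0, "idle": 1, "busy": 0, "served": 0}
      # transition: (tokens consumed, tokens produced)
      transitions = {
          "arrive": ({},                      {"queue": 1}),
          "start":  ({"queue": 1, "idle": 1}, {"busy": 1}),
          "finish": ({"busy": 1},             {"idle": 1, "served": 1}),
      }

      def enabled(name):
          pre, _ = transitions[name]
          return all(places[p] >= n for p, n in pre.items())

      def fire(name):
          pre, post = transitions[name]
          for p, n in pre.items():
              places[p] -= n
          for p, n in post.items():
              places[p] += n

      random.seed(1)
      for _ in range(50):                     # untimed simulation: fire a random enabled transition
          choices = [t for t in transitions if enabled(t)]
          fire(random.choice(choices))
      print(places)                           # marking after 50 firings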

  16. An Agent-Based Data Mining System for Ontology Evolution

    NASA Astrophysics Data System (ADS)

    Hadzic, Maja; Dillon, Darshan

    We have developed an evidence-based mental health ontological model that represents mental health in multiple dimensions. The ongoing addition of new mental health knowledge requires a continual update of the Mental Health Ontology. In this paper, we describe how the ontology evolution can be realized using a multi-agent system in combination with data mining algorithms. We use the TICSA methodology to design this multi-agent system, which is composed of four types of agents: an Information agent, a Data Warehouse agent, Data Mining agents and an Ontology agent. We use UML 2.1 sequence diagrams to model the collaborative nature of the agents and a UML 2.1 composite structure diagram to model the structure of individual agents. The Mental Health Ontology has the potential to underpin various mental health research experiments of a collaborative nature, which are greatly needed in times of increasing mental distress and illness.
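
    The sketch below renders the four agent roles named above as plain Python classes passing data to one another. It is an illustration of the division of labor only; the class and method names, the toy records, and the acceptance threshold are all assumptions, not the TICSA design.

      # Illustrative sketch only: the four agent roles described above as plain Python
      # classes exchanging dictionaries. Class and method names are assumptions.
      class InformationAgent:
          def collect(self):
              return [{"symptom": "insomnia", "diagnosis": "depression"}]  # stand-in records

      class DataWarehouseAgent:
          def __init__(self):
              self.store = []
          def load(self, records):
              self.store.extend(records)

      class DataMiningAgent:
          def mine(self, records):
              # Trivial "pattern": count co-occurring symptom/diagnosis pairs.
              counts = {}
              for r in records:
                  key = (r["symptom"], r["diagnosis"])
                  counts[key] = counts.get(key, 0) + 1
              return counts

      class OntologyAgent:
          def __init__(self):
              self.ontology = set()
          def evolve(self, patterns):
              for (symptom, diagnosis), n in patterns.items():
                  if n >= 1:                   # acceptance threshold is arbitrary here
                      self.ontology.add((symptom, "associated_with", diagnosis))

      info, dw, dm, onto = InformationAgent(), DataWarehouseAgent(), DataMiningAgent(), OntologyAgent()
      dw.load(info.collect())
      onto.evolve(dm.mine(dw.store))
      print(onto.ontology)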

  17. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    NASA Astrophysics Data System (ADS)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, comparatively little effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules that contribute to increasing training accuracy, thus balancing classification accuracy against the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lends itself to better interpretability when compared with other rule-based systems.
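
    A greedy hill-climbing selection of rules is sketched below on toy data to make the idea concrete: keep adding the candidate rule that most improves training accuracy and stop when no rule helps. The fuzzification step of the paper is omitted, and the rules, data and first-match prediction scheme are assumptions.

      # Greedy hill-climbing sketch of rule selection (fuzzification omitted): keep
      # adding the candidate rule that most improves training accuracy. Toy data/rules.
      def matches(rule, x):
          return all(x.get(k) == v for k, v in rule["if"].items())

      def predict(rules, x, default="no"):
          for r in rules:                      # first matching rule wins
              if matches(r, x):
                  return r["then"]
          return default

      def accuracy(rules, data):
          return sum(predict(rules, x) == y for x, y in data) / len(data)

      data = [({"age": "young", "smoker": "yes"}, "yes"),
              ({"age": "old", "smoker": "no"},   "no"),
              ({"age": "young", "smoker": "no"}, "no"),
              ({"age": "old", "smoker": "yes"},  "yes")]
      candidates = [{"if": {"smoker": "yes"}, "then": "yes"},
                    {"if": {"age": "old"},    "then": "yes"},
                    {"if": {"smoker": "no"},  "then": "no"}]

      selected = []
      while True:
          best, best_gain = None, 0.0
          base = accuracy(selected, data)
          for r in candidates:
              if r in selected:
                  continue
              gain = accuracy(selected + [r], data) - base
              if gain > best_gain:
                  best, best_gain = r, gain
          if best is None:                     # no rule improves accuracy: stop (compactness)
              break
          selected.append(best)
      print(selected, accuracy(selected, data))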

  18. Developing a job-exposure matrix with exposure uncertainty from expert elicitation and data modeling.

    PubMed

    Fischer, Heidi J; Vergara, Ximena P; Yost, Michael; Silva, Michael; Lombardi, David A; Kheifets, Leeka

    2017-01-01

    Job exposure matrices (JEMs) are tools used to classify exposures for job titles based on general job tasks in the absence of individual level data. However, exposure uncertainty due to variations in worker practices, job conditions, and the quality of data has never been quantified systematically in a JEM. We describe a methodology for creating a JEM which defines occupational exposures on a continuous scale and utilizes elicitation methods to quantify exposure uncertainty by assigning exposures probability distributions with parameters determined through expert involvement. Experts use their knowledge to develop mathematical models using related exposure surrogate data in the absence of available occupational level data and to adjust model output against other similar occupations. Formal expert elicitation methods provided a consistent, efficient process to incorporate expert judgment into a large, consensus-based JEM. A population-based electric shock JEM was created using these methods, allowing for transparent estimates of exposure.
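
    To illustrate the idea of a JEM that stores exposure uncertainty rather than point values, the sketch below represents each job title's exposure as a lognormal distribution and samples from it. The job titles, medians and geometric standard deviations are hypothetical placeholders for expert-elicited parameters, not values from the study.

      # Sketch under stated assumptions: each job title's exposure is stored not as a
      # point value but as a lognormal distribution whose parameters stand in for
      # expert-elicited judgments. Job titles and numbers are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical JEM: median exposure (arbitrary units) and a geometric standard
      # deviation expressing the experts' uncertainty for each job title.
      jem = {
          "electrician":  {"median": 2.0, "gsd": 2.5},
          "office_clerk": {"median": 0.1, "gsd": 1.5},
          "line_worker":  {"median": 4.0, "gsd": 3.0},
      }

      def sample_exposure(job, n=10000):
          """Draw n plausible exposure values for a job title from its lognormal."""
          p = jem[job]
          mu, sigma = np.log(p["median"]), np.log(p["gsd"])
          return rng.lognormal(mean=mu, sigma=sigma, size=n)

      for job in jem:
          draws = sample_exposure(job)
          lo, hi = np.percentile(draws, [2.5, 97.5])
          print(f"{job}: median {np.median(draws):.2f}, 95% interval ({lo:.2f}, {hi:.2f})")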

  19. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    NASA Technical Reports Server (NTRS)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominantly within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that the use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations; care is therefore needed to develop models that yield accurate yet tractable results.
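
    One ingredient above, sampling fiber strengths from a length-adjusted two-parameter Weibull distribution, is sketched below via inverse-transform sampling. The Weibull modulus, reference strength and lengths are illustrative values, not the parameters used in the study.

      # Hedged sketch of one ingredient above: sampling fiber strengths from a
      # two-parameter Weibull whose scale is adjusted for fiber length,
      # P_f(s; L) = 1 - exp(-(L/L0) * (s/s0)**m). Parameter values are illustrative.
      import numpy as np

      rng = np.random.default_rng(42)

      def sample_fiber_strengths(n, m=10.0, s0=3500.0, L=25.0, L0=12.5):
          """Inverse-transform sampling: s = s0 * ((L0/L) * (-ln(1-U)))**(1/m) (MPa)."""
          u = rng.uniform(size=n)
          return s0 * ((L0 / L) * (-np.log1p(-u))) ** (1.0 / m)

      strengths = sample_fiber_strengths(10000)
      print(f"mean {strengths.mean():.0f} MPa, 5th pct {np.percentile(strengths, 5):.0f} MPa")
      # Longer fibers contain more flaws, so their sampled strengths are lower on average.
      print(sample_fiber_strengths(10000, L=100.0).mean())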

  20. The use of hierarchical clustering for the design of optimized monitoring networks

    NASA Astrophysics Data System (ADS)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. We have also shown that our approach is capable of identifying different sampling methodologies as well as outliers (stations' time series which are markedly different from all others in a given dataset).
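
    The dissimilarity-plus-clustering pipeline described above is sketched below on synthetic station time series (not the Alberta data): a 1 - R dissimilarity matrix and a Euclidean-distance alternative are each fed to average-linkage hierarchical clustering.

      # Sketch (synthetic data, not the Alberta network): dissimilarity between station
      # time series using 1 - R and Euclidean distance, then hierarchical clustering.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(0)
      hours = np.arange(24 * 30)
      source = np.sin(2 * np.pi * hours / 24)                # shared diurnal signal
      stations = np.vstack([
          source + 0.1 * rng.normal(size=hours.size),        # two stations near one source
          source + 0.1 * rng.normal(size=hours.size),
          0.3 * source + 0.5 * rng.normal(size=hours.size),  # one weakly related station
      ])

      corr = np.corrcoef(stations)
      diss_corr = 1.0 - corr                                 # 1 - R dissimilarity
      np.fill_diagonal(diss_corr, 0.0)

      Z = linkage(squareform(diss_corr, checks=False), method="average")
      print(fcluster(Z, t=0.5, criterion="distance"))        # cluster labels at 1 - R = 0.5

      # The same pipeline applies with Euclidean distance between the series:
      Z_euc = linkage(stations, method="average", metric="euclidean")
      print(fcluster(Z_euc, t=2, criterion="maxclust"))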

  1. An Initial Multi-Domain Modeling of an Actively Cooled Structure

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur

    1997-01-01

    A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
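
    To illustrate the multigrid convergence acceleration mentioned above, the sketch below applies a simple two-grid cycle (smoothing, coarse-grid correction, smoothing) to a 1-D Poisson problem. It is a generic textbook-style example, not the turbine-cooling solver; grid sizes and sweep counts are arbitrary.

      # Illustrative two-grid sketch of multigrid convergence acceleration on a 1-D
      # Poisson problem (not the turbine-cooling solver itself); sizes are arbitrary.
      import numpy as np

      def jacobi(u, f, h, sweeps, w=2/3):
          """Weighted-Jacobi smoothing for -u'' = f with zero Dirichlet boundaries."""
          for _ in range(sweeps):
              u_new = u.copy()
              u_new[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
              u = u_new
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
          return r

      def two_grid(u, f, h):
          u = jacobi(u, f, h, sweeps=3)                      # pre-smooth
          r = residual(u, f, h)
          rc = r[::2].copy()                                 # restrict residual to coarse grid
          n_c, hc = rc.size, 2 * h
          A_c = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
                 - np.diag(np.ones(n_c - 3), -1)) / (hc * hc)
          e_c = np.zeros(n_c)
          e_c[1:-1] = np.linalg.solve(A_c, rc[1:-1])         # exact coarse-grid correction
          e_f = np.interp(np.arange(u.size), np.arange(0, u.size, 2), e_c)  # prolongate
          u = u + e_f
          return jacobi(u, f, h, sweeps=3)                   # post-smooth

      n = 65                                                 # fine-grid points (odd)
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.pi ** 2 * np.sin(np.pi * x)                     # exact solution sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = two_grid(u, f, h)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())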

  2. Continuous Improvement Framework: Implications for Academia

    ERIC Educational Resources Information Center

    Temponi, Cecilia

    2005-01-01

    Purpose: To analyze the main elements of continuous improvement (CI) in higher education and the concerns of academia's stakeholders in the implementation of such an approach. Suggests guidelines for the development of a culture more receptive to the implementation and maintenance of a CI approach in higher education. Design/methodology/approach:…

  3. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification

    DOT National Transportation Integrated Search

    2012-03-31

    This report evaluates the performance of Continuous Risk Profile (CRP) compared with the : Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network : screening methods all require the same inputs: traffic collision data and Sa...

  4. Experimental evaluation of the Continuous Risk Profile (CRP) approach to the current Caltrans methodology for high collision concentration location identification.

    DOT National Transportation Integrated Search

    2012-03-01

    This report evaluates the performance of Continuous Risk Profile (CRP) compared with the : Sliding Window Method (SWM) and Peak Searching (PS) methods. These three network : screening methods all require the same inputs: traffic collision data and Sa...

  5. Assessment of continuous oil and gas resources of the Maracaibo Basin Province of Venezuela and Colombia, 2016

    USGS Publications Warehouse

    Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phoung A.; Pitman, Janet K.; Brownfield, Michael E.; Hawkins, Sarah J.; Leathers-Miller, Heidi M.; Finn, Thomas M.; Klett, Timothy R.

    2017-03-27

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean continuous resources of 656 million barrels of oil and 5.7 trillion cubic feet of gas in the Maracaibo Basin Province, Venezuela and Colombia.

  6. Factors Influencing Continuing Professional Development: A Delphi Study among Nursing Experts

    ERIC Educational Resources Information Center

    Brekelmans, Gerard; Poell, Rob F.; van Wijk, Kees

    2013-01-01

    Purpose: The aim of this paper is to present an inventory of expert opinions on the factors that influence the participation of registered nurses in continuing professional development (CPD) activities. Design/methodology/approach: A Delphi study was conducted among 38 Dutch experts (nursing employers, managers, education institutions, and…

  7. Assessment of continuous oil and gas resources in the Middle and Upper Magdalena Basins, Colombia, 2017

    USGS Publications Warehouse

    Schenk, Christopher J.; Brownfield, Michael E.; Tennyson, Marilyn E.; Le, Phuong A.; Mercier, Tracey J.; Finn, Thomas M.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Leathers-Miller, Heidi M.; Woodall, Cheryl A.

    2017-09-22

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 0.45 billion barrels of oil and 1.0 trillion cubic feet of gas in the Middle and Upper Magdalena Basins, Colombia.

  8. Assessment of undiscovered continuous gas resources in the Amu Darya Basin Province of Turkmenistan, Uzbekistan, Iran, and Afghanistan, 2017

    USGS Publications Warehouse

    Schenk, Christopher J.; Tennyson, Marilyn E.; Mercier, Tracey J.; Hawkins, Sarah J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Klett, Timothy R.; Le, Phuong A.; Brownfield, Michael E.; Woodall, Cheryl A.

    2017-08-17

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated mean undiscovered, technically recoverable continuous resources of 35.1 trillion cubic feet of gas in the Amu Darya Basin Province of Turkmenistan, Uzbekistan, Iran, and Afghanistan.

  9. Assessment of continuous oil and gas resources in the Neuquén Basin Province, Argentina, 2016

    USGS Publications Warehouse

    Schenk, Christopher J.; Klett, Timothy R.; Tennyson, Marilyn E.; Mercier, Tracey J.; Pitman, Janet K.; Gaswirth, Stephanie B.; Finn, Thomas M.; Brownfield, Michael E.; Le, Phuong A.; Leathers-Miller, Heidi M.; Marra, Kristen R.

    2017-05-23

    Using a geology-based assessment methodology, the U.S. Geological Survey assessed undiscovered, technically recoverable mean continuous resources of 14.4 billion barrels of oil and 38 trillion cubic feet of gas in the Neuquén Basin Province, Argentina.

  10. Assessment of continuous oil and gas resources of the South Sumatra Basin Province, Indonesia, 2016

    USGS Publications Warehouse

    Schenk, Christopher J.; Tennyson, Marilyn E.; Klett, Timothy R.; Finn, Thomas M.; Mercier, Tracey J.; Gaswirth, Stephanie B.; Marra, Kristen R.; Le, Phuong A.; Hawkins, Sarah J.

    2016-12-09

    Using a geology-based assessment methodology, the U.S. Geological Survey estimated undiscovered, technically recoverable mean resources of 689 million barrels of continuous shale oil and 3.9 trillion cubic feet of shale gas in the South Sumatra Basin Province in Indonesia.

  11. Infrared measurement and composite tracking algorithm for air-breathing hypersonic vehicles

    NASA Astrophysics Data System (ADS)

    Zhang, Zhao; Gao, Changsheng; Jing, Wuxing

    2018-03-01

    Air-breathing hypersonic vehicles combine hypersonic speed with strong maneuverability and thus pose a significant challenge to conventional tracking methodologies. To achieve desirable tracking performance for hypersonic targets, this paper investigates the problems related to measurement model design and tracking model mismatch. First, owing to the severe aerothermal effect of hypersonic motion, an infrared measurement model in near space is designed and analyzed based on target infrared radiation and an atmospheric model. Second, using information from infrared sensors, a composite tracking algorithm is proposed via a combination of the interacting multiple model (IMM) algorithm, a fitting dynamics model, and a strong tracking filter. During the procedure, the IMM algorithm generates tracking data to establish a fitting dynamics model of the target. Then, the strong tracking unscented Kalman filter is employed to estimate the target states and suppress the impact of target maneuvers. Simulations are performed to verify the feasibility of the presented composite tracking algorithm. The results demonstrate that the designed infrared measurement model effectively and continuously observes hypersonic vehicles, and the proposed composite tracking algorithm accurately and stably tracks these targets.
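
    A minimal sketch of the model-probability bookkeeping at the core of an IMM filter follows: Markov mixing of the mode probabilities and a Bayesian update with per-model measurement likelihoods. The per-model filters (unscented Kalman filters in the paper) are abstracted away, and the transition matrix and likelihood values are illustrative.

      # Sketch of IMM model-probability bookkeeping only (the per-model filters, UKFs
      # in the paper, are abstracted into given likelihoods). Numbers are illustrative.
      import numpy as np

      # Markov transition matrix between two motion models (e.g. cruise vs. maneuver).
      PI = np.array([[0.95, 0.05],
                     [0.10, 0.90]])
      mu = np.array([0.5, 0.5])               # current model probabilities

      def imm_update(mu, likelihoods):
          """One IMM cycle of the mode probabilities.
          mu: prior model probabilities; likelihoods: p(z_k | model j)."""
          c = PI.T @ mu                        # predicted (mixed) model probabilities
          post = likelihoods * c               # Bayes update with measurement likelihoods
          return post / post.sum()

      # Simulated likelihood sequence: the measurements fit model 1 ("maneuver")
      # better over the last steps, so its probability should rise.
      for like in ([0.8, 0.2], [0.7, 0.3], [0.2, 0.9], [0.1, 0.95]):
          mu = imm_update(mu, np.array(like))
          print(np.round(mu, 3))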

  12. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    PubMed

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future researchers.
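
    As an example of the whole-system continuous (primary) models discussed above, the sketch below integrates a simple logistic growth equation for a microbial population. It is a generic illustration; the parameter values are arbitrary and not taken from the paper.

      # Example of a whole-system continuous primary model (simple logistic growth),
      # given only to illustrate the population-level approach discussed above;
      # parameter values are arbitrary, not taken from the paper.
      import numpy as np

      def logistic_growth(N0=1e3, Nmax=1e9, mu_max=0.8, t_end=24.0, dt=0.01):
          """dN/dt = mu_max * N * (1 - N/Nmax), integrated with forward Euler.
          N in CFU/ml, mu_max in 1/h, time in hours."""
          times = np.arange(0.0, t_end + dt, dt)
          N = np.empty_like(times)
          N[0] = N0
          for i in range(1, times.size):
              N[i] = N[i - 1] + dt * mu_max * N[i - 1] * (1.0 - N[i - 1] / Nmax)
          return times, N

      t, N = logistic_growth()
      print(f"log10 count after 12 h: {np.log10(N[t <= 12.0][-1]):.2f}")
      print(f"log10 count after 24 h: {np.log10(N[-1]):.2f}")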

  13. 78 FR 13874 - Watershed Modeling To Assess the Sensitivity of Streamflow, Nutrient, and Sediment Loads to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-01

    ... an improved understanding of methodological challenges associated with integrating existing tools (e.g., climate models, downscaling...) ... sensitivity to methodological choices such as different approaches for downscaling global climate change...

  14. Continuity of monolayer-bilayer junctions for localization of lipid raft microdomains in model membranes

    DOE PAGES

    Ryu, Yong -Sang; Wittenberg, Nathan J.; Suh, Jeng -Hun; ...

    2016-05-27

    We show that the selective localization of cholesterol-rich domains and associated ganglioside receptors preferentially occurs in the monolayer across continuous monolayer-bilayer junctions (MBJs) in supported lipid membranes. For the MBJs, glass substrates were patterned with poly(dimethylsiloxane) (PDMS) oligomers by thermally-assisted contact printing, leaving behind 3 nm-thick PDMS patterns. The hydrophobicity of the transferred PDMS patterns was precisely tuned by the stamping temperature. Lipid monolayers were formed on the PDMS patterned surface while lipid bilayers were formed on the bare glass surface. Due to the continuity of the lipid membranes over the MBJs, essentially free diffusion of lipids was allowed between the monolayer on the PDMS surface and the upper leaflet of the bilayer on the glass substrate. The preferential localization of sphingomyelin, ganglioside GM1 and cholesterol in the monolayer region enabled the development of raft microdomains through coarsening of nanorafts. Furthermore, our methodology provides a simple and effective scheme for non-disruptive manipulation of the chemical landscape associated with lipid phase separations, which leads to more sophisticated applications in biosensors and as cell culture substrates.

  15. Continuity of Monolayer-Bilayer Junctions for Localization of Lipid Raft Microdomains in Model Membranes

    PubMed Central

    Ryu, Yong-Sang; Wittenberg, Nathan J.; Suh, Jeng-Hun; Lee, Sang-Wook; Sohn, Youngjoo; Oh, Sang-Hyun; Parikh, Atul N.; Lee, Sin-Doo

    2016-01-01

    We show that the selective localization of cholesterol-rich domains and associated ganglioside receptors preferentially occurs in the monolayer across continuous monolayer-bilayer junctions (MBJs) in supported lipid membranes. For the MBJs, glass substrates were patterned with poly(dimethylsiloxane) (PDMS) oligomers by thermally-assisted contact printing, leaving behind 3 nm-thick PDMS patterns. The hydrophobicity of the transferred PDMS patterns was precisely tuned by the stamping temperature. Lipid monolayers were formed on the PDMS patterned surface while lipid bilayers were formed on the bare glass surface. Due to the continuity of the lipid membranes over the MBJs, essentially free diffusion of lipids was allowed between the monolayer on the PDMS surface and the upper leaflet of the bilayer on the glass substrate. The preferential localization of sphingomyelin, ganglioside GM1 and cholesterol in the monolayer region enabled the development of raft microdomains through coarsening of nanorafts. Our methodology provides a simple and effective scheme for non-disruptive manipulation of the chemical landscape associated with lipid phase separations, which leads to more sophisticated applications in biosensors and as cell culture substrates. PMID:27230411

  16. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
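
    The general idea of propagating parameter and modeling-accuracy uncertainty through a failure model is sketched below with a simple Monte Carlo stress-strength example. This is not the PFA statistical structure itself; the limit state and all distributions are invented for illustration.

      # Hedged sketch of the general idea only (propagating parameter uncertainty
      # through a failure model by Monte Carlo), not the PFA methodology itself.
      # A simple stress-strength limit state with assumed distributions.
      import numpy as np

      rng = np.random.default_rng(7)
      n = 200_000

      # Uncertain analysis parameters (illustrative distributions, not real data).
      stress = rng.normal(loc=300.0, scale=30.0, size=n)                  # applied stress, MPa
      strength = rng.lognormal(mean=np.log(420.0), sigma=0.08, size=n)    # material strength, MPa
      model_error = rng.normal(loc=1.0, scale=0.05, size=n)               # modeling-accuracy factor

      failures = stress * model_error > strength          # limit state: demand exceeds capacity
      p_fail = failures.mean()
      ci = 1.96 * np.sqrt(p_fail * (1 - p_fail) / n)      # normal-approx. Monte Carlo error
      print(f"estimated failure probability: {p_fail:.2e} +/- {ci:.1e}")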

  17. Object-oriented analysis and design: a methodology for modeling the computer-based patient record.

    PubMed

    Egyhazy, C J; Eyestone, S M; Martino, J; Hodgson, C L

    1998-08-01

    The article highlights the importance of an object-oriented analysis and design (OOAD) methodology for the computer-based patient record (CPR) in the military environment. Many OOAD methodologies do not adequately scale up, allow for efficient reuse of their products, or accommodate legacy systems. A methodology that addresses these issues is formulated and used to demonstrate its applicability in a large-scale health care service system. During a period of 6 months, a team of object modelers and domain experts formulated an OOAD methodology tailored to the Department of Defense Military Health System and used it to produce components of an object model for simple order processing. This methodology and the lessons learned during its implementation are described. This approach is necessary to achieve broad interoperability among heterogeneous automated information systems.
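
    A purely illustrative object-model fragment for "simple order processing" follows; the class names, attributes and statuses are hypothetical and are not drawn from the Department of Defense model described above.

      # Purely illustrative object-model sketch of "simple order processing" in the
      # CPR sense; class names and attributes are hypothetical, not the DoD model.
      from dataclasses import dataclass, field
      from datetime import datetime
      from typing import List

      @dataclass
      class Order:
          order_id: str
          description: str
          status: str = "requested"
          placed_at: datetime = field(default_factory=datetime.now)

          def complete(self):
              self.status = "completed"

      @dataclass
      class Patient:
          patient_id: str
          name: str
          orders: List[Order] = field(default_factory=list)

      p = Patient("P001", "J. Doe")
      o = Order("O42", "Complete blood count")
      p.orders.append(o)
      o.complete()
      print(p.patient_id, [(x.order_id, x.status) for x in p.orders])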

  18. A modular inverse elastostatics approach to resolve the pressure-induced stress state for in vivo imaging based cardiovascular modeling.

    PubMed

    Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno

    2018-05-28

    Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and -strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress), which in turn are used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, both having their inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot methodology to compute the in vivo stress state. However, cumbersome and problem-specific derivations of the formulations and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, hinder broad implementation of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated in an arterial tube and porcine biventricular myocardium model. The computational power and efficiency of the methodology was shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model. Copyright © 2018 Elsevier Ltd. All rights reserved.
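
    To make the inverse problem concrete, the sketch below applies a fixed-point update from the iterative family that the abstract contrasts with the direct IE approach, on a toy one-dimensional nonlinear bar: given the "imaged" deformed length and the load, it recovers the unloaded length. The forward model and all numbers are invented for illustration only.

      # Sketch of the *iterative* family contrasted with the direct IE approach, on a
      # toy 1-D nonlinear bar. Forward model and numbers are invented for illustration.
      def forward_deformed_length(L0, pressure, k=5.0):
          """Toy hyperelastic-like forward model: deformed length of a bar of unloaded
          length L0 under a pressure load (stiffening response)."""
          stretch = 1.0 + pressure / (k * (1.0 + pressure))
          return L0 * stretch

      # "Imaged" (deformed, loaded) length and the known physiological load.
      L_imaged, p = 1.20, 2.0

      # Fixed-point iteration: update the unloaded guess by the geometric mismatch.
      L0 = L_imaged                            # start from the imaged geometry
      for it in range(50):
          mismatch = forward_deformed_length(L0, p) - L_imaged
          L0 -= mismatch                       # backward-displacement style update
          if abs(mismatch) < 1e-10:
              break
      print(f"unloaded length ~ {L0:.6f} after {it + 1} iterations")
      print("check (should equal L_imaged):", forward_deformed_length(L0, p))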

  19. Investigation of Nonlinear Pressurization and Model Restart in MSC/NASTRAN for Modeling Thin Film Inflatable Structures

    NASA Technical Reports Server (NTRS)

    Smalley, Kurt B.; Tinker, Michael L.; Fischer, Richard T.

    2001-01-01

    This paper is written for the purpose of providing an introduction and set of guidelines for the use of a methodology for NASTRAN eigenvalue modeling of thin film inflatable structures. It is hoped that this paper will spare the reader from the problems and headaches the authors were confronted with during their investigation by presenting here not only an introduction and verification of the methodology, but also a discussion of the problems that this methodology can entail. Our goal in this investigation was to verify the basic methodology through the creation and correlation of a simple model. An overview of thin film structures, their history, and their applications is given. Previous modeling work is then briefly discussed. An introduction is then given for the method of modeling. The specific mechanics of the method are then discussed in parallel with a basic discussion of NASTRAN's implementation of these mechanics. The problems encountered with the method are then given along with suggestions for their workarounds. The methodology is verified through the correlation between an analytical model and modal test results of a thin film strut. Recommendations are given for the needed advancement of our understanding of this method and of our ability to accurately model thin film structures. Finally, conclusions are drawn regarding the usefulness of the methodology.

  20. Use of software engineering techniques in the design of the ALEPH data acquisition system

    NASA Astrophysics Data System (ADS)

    Charity, T.; McClatchey, R.; Harvey, J.

    1987-08-01

    The SASD methodology is being used to provide a rigorous design framework for various components of the ALEPH data acquisition system. The Entity-Relationship data model is used to describe the layout and configuration of the control and acquisition systems and detector components. State Transition Diagrams are used to specify control applications such as run control and resource management, and Data Flow Diagrams assist in decomposing software tasks and defining interfaces between processes. These techniques encourage rigorous software design leading to enhanced functionality and reliability. Improved documentation and communication ensures continuity over the system life-cycle and simplifies project management.
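
    As a small illustration of expressing a State Transition Diagram for run control in code, the sketch below encodes a transition table and steps through a sequence of events. The state and event names are hypothetical and are not taken from the ALEPH design.

      # Illustrative sketch only: a run-control State Transition Diagram expressed as a
      # transition table; state and event names are hypothetical, not the ALEPH design.
      TRANSITIONS = {
          ("IDLE",       "configure"): "CONFIGURED",
          ("CONFIGURED", "start"):     "RUNNING",
          ("RUNNING",    "pause"):     "PAUSED",
          ("PAUSED",     "resume"):    "RUNNING",
          ("RUNNING",    "stop"):      "CONFIGURED",
          ("CONFIGURED", "reset"):     "IDLE",
      }

      def step(state, event):
          """Apply an event; illegal events leave the state unchanged and are reported."""
          nxt = TRANSITIONS.get((state, event))
          if nxt is None:
              print(f"event '{event}' not allowed in state {state}")
              return state
          return nxt

      state = "IDLE"
      for event in ["configure", "start", "pause", "resume", "stop", "start"]:
          state = step(state, event)
          print(event, "->", state)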
